Commit graph

4420 commits

Author SHA1 Message Date
Sultan Alsawaf
2f43de3476 dma-buf/sync_file: Speed up ioctl by omitting debug names
A lot of CPU time is wasted allocating, populating, and copying
debug names back and forth with userspace when they're not actually
needed. We can't simply remove the name buffers from the various sync
data structures, because we must preserve ABI compatibility with
userspace; instead, we can pretend the name fields of the user-shared
structs aren't there. This massively reduces the size of the memory
allocated for these data structures and the amount of data copied to
and from userspace, and eliminates a kzalloc() entirely from
sync_file_ioctl_fence_info(), thus improving graphics performance.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-19 17:53:23 +01:00
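
A minimal sketch of the idea above, assuming the standard uapi struct sync_fence_info layout; the helper name is hypothetical and only illustrates filling the per-fence info without touching the name strings:

#include <linux/dma-fence.h>
#include <linux/ktime.h>
#include <uapi/linux/sync_file.h>

/* Hypothetical helper: fill the user-visible fence info without formatting
 * or copying the timeline/driver name strings (they stay zeroed). */
static void sync_fill_fence_info_noname(struct dma_fence *fence,
                                        struct sync_fence_info *info)
{
        /* info->obj_name and info->driver_name are intentionally left empty */
        info->status = dma_fence_get_status(fence);
        info->timestamp_ns = dma_fence_is_signaled(fence) ?
                             ktime_to_ns(fence->timestamp) : 0;
}
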
Sultan Alsawaf
07a5ef1eeb qos: Don't disable interrupts while holding pm_qos_lock
None of the pm_qos functions actually run in interrupt context; if some
driver calls pm_qos_update_target in interrupt context then it's already
broken. There's no need to disable interrupts while holding pm_qos_lock,
so don't do it.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-19 17:53:07 +01:00
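
A hedged sketch of the locking change (the plist manipulation stands in for what pm_qos_update_target() does under the lock; names simplified):

#include <linux/plist.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(pm_qos_lock);

/* Take the lock without masking interrupts, since no caller runs in IRQ
 * context. */
static void pm_qos_reinsert_node(struct plist_head *head,
                                 struct plist_node *node, int value)
{
        /* before: unsigned long flags; spin_lock_irqsave(&pm_qos_lock, flags); */
        spin_lock(&pm_qos_lock);
        plist_del(node, head);
        plist_node_init(node, value);
        plist_add(node, head);
        spin_unlock(&pm_qos_lock);
        /* before: spin_unlock_irqrestore(&pm_qos_lock, flags); */
}
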
Nahuel Gómez
27fe6f89a2 kernel: sched: ems: drop usage of SCHED_FEAT
We removed this.

../kernel/sched/ems/core.c:1370:23: error: use of undeclared identifier 'sched_feat_names'
 1370 |         index = match_string(sched_feat_names, __SCHED_FEAT_NR, "TTWU_QUEUE");
      |                              ^
../kernel/sched/ems/core.c:1370:41: error: use of undeclared identifier '__SCHED_FEAT_NR'
 1370 |         index = match_string(sched_feat_names, __SCHED_FEAT_NR, "TTWU_QUEUE");
      |                                                ^
../kernel/sched/ems/core.c:1372:23: error: use of undeclared identifier 'sched_feat_keys'
 1372 |                 static_key_disable(&sched_feat_keys[index]);
      |                                     ^
../kernel/sched/ems/core.c:1373:3: error: use of undeclared identifier 'sysctl_sched_features'; did you mean 'sysctl_sched_latency'?
 1373 |                 sysctl_sched_features &= ~(1UL << index);
      |                 ^~~~~~~~~~~~~~~~~~~~~
      |                 sysctl_sched_latency
../include/linux/sched/sysctl.h:29:21: note: 'sysctl_sched_latency' declared here
   29 | extern unsigned int sysctl_sched_latency;
      |                     ^
4 errors generated.

Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-19 17:52:14 +01:00
Ksawlii
89efaaeccf ARM64: configs: disable ZRAM_LRU_WRITEBACK 2024-11-19 17:51:54 +01:00
Ruchit
c94f14266e zram: Protect handle_decomp_fail behind a check
The previous definitions, as well as the creation of this handler, are already guarded by CONFIG_ZRAM_LRU_WRITEBACK.

Change-Id: I869b5595f69cc481e93ca6862b460594762d9b25
Signed-off-by: Ruchit <risenid@duck.com>
2024-11-19 17:50:10 +01:00
Nahuel Gómez
2cb2ac56fc drivers: zram: also guard lzo_marker
../drivers/block/zram/zram_drv.c:62:22: error: unused variable 'lzo_marker' [-Werror,-Wunused-variable]
   62 | static unsigned char lzo_marker[4] = {0x11, 0x00, 0x00};
      |                      ^~~~~~~~~~
1 error generated.

Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-19 17:49:47 +01:00
flar2
d1c1915a6c mmc: Disable crc check
Signed-off-by: flar2 <asegaert@gmail.com>
2024-11-19 17:47:04 +01:00
Pzqqt
8f30152c01 drivers: scsi: Reduce logspam 2024-11-19 17:47:00 +01:00
Pzqqt
1c7f2b3800 drivers: staging: Import Xiaomi's binder prio driver
- From branch: `liuqin-t-oss`

Signed-off-by: Pzqqt <821026875@qq.com>
2024-11-19 17:46:55 +01:00
jonascardoso
a37de4bafa slub: Optimized SLUB Memory Allocator
(cherry picked from commit 110e6c989068385cc84f71bb02bfda2b58e56a0f)
Signed-off-by: rk134 <rahul-k@bigdi.cc>
Signed-off-by: priiii1808 <priyanshusinghal0818@gmail.com>
2024-11-19 17:44:40 +01:00
Sultan Alsawaf
b8eba3b6e6 mm: kmemleak: Don't die when memory allocation fails
When memory is leaking, it's going to be harder to allocate more memory,
making it more likely for this failure condition inside of kmemleak to
manifest itself. This is extremely frustrating since kmemleak kills
itself upon the first instance of memory allocation failure.

Bypass that and make kmemleak more resilient when memory is running low.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: priiii1808 <priyanshusinghal0818@gmail.com>
2024-11-19 17:44:35 +01:00
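
A hedged sketch of the behavioural change, assuming the usual create_object() flow in mm/kmemleak.c (object_cache and gfp_kmemleak_mask() are the internal helpers used there; the warning text is illustrative):

        /* inside create_object(), sketch only */
        object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
        if (!object) {
                /* before: pr_warn(...); kmemleak_disable();  -- kills kmemleak */
                pr_warn_once("kmemleak: cannot allocate metadata, skipping object\n");
                return NULL;    /* skip tracking this one allocation instead */
        }
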
Diab Neiroukh
1c19be24ee mm: oom_kill: Reduce some verbose logging
Signed-off-by: engstk <eng.stk@sapo.pt>
2024-11-19 17:44:31 +01:00
UtsavBalar1231
8ed372cd67 mm: page_alloc: Hardcode min_free_kbytes to 32768 kb
Change-Id: I08355acd995e956c63cc0d3f1587604e39f91269
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2024-11-19 17:44:24 +01:00
Sultan Alsawaf
1dca369959 mm: Don't hog the CPU and zone lock in rmqueue_bulk()
There is noticeable scheduling latency and heavy zone lock contention
stemming from rmqueue_bulk's single hold of the zone lock while doing
its work, as seen with the preemptoff tracer. There's no actual need for
rmqueue_bulk() to hold the zone lock the entire time; it only does so
for supposed efficiency. As such, we can relax the zone lock and even
reschedule when IRQs are enabled in order to keep the scheduling delays
and zone lock contention at bay. Forward progress is still guaranteed,
as the zone lock can only be relaxed after page removal.

With this change, rmqueue_bulk() no longer appears as a serious offender
in the preemptoff tracer, and system latency is noticeably improved.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-19 17:44:18 +01:00
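
A hedged sketch of the pattern described above (loop body condensed; __rmqueue() arguments follow the 5.x signature):

        /* inside rmqueue_bulk(), sketch: relax the zone lock between removals */
        spin_lock(&zone->lock);
        for (i = 0; i < count; ++i) {
                struct page *page = __rmqueue(zone, order, migratetype,
                                              alloc_flags);
                if (!page)
                        break;
                list_add_tail(&page->lru, list);
                allocated++;

                /* the page is already off the free list, so forward progress
                 * is guaranteed; let IRQs and other lock waiters in */
                spin_unlock(&zone->lock);
                if (!irqs_disabled())
                        cond_resched();
                spin_lock(&zone->lock);
        }
        spin_unlock(&zone->lock);
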
Juhyung Park
cfd1b6ca17 zsmalloc: backport from 5994eabf3bbb
Backport zsmalloc from commit 5994eabf3bbb ("merge mm-hotfixes-stable into
mm-stable to pick up depended-upon changes").

Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
2024-11-19 17:44:14 +01:00
Ben Gardon
e9933557cb locking/rwlocks: Add contention detection for rwlocks
rwlocks do not currently have any facility to detect contention
like spinlocks do. In order to allow users of rwlocks to better manage
latency, add contention detection for queued rwlocks.

CC: Ingo Molnar <mingo@redhat.com>
CC: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: Waiman Long <longman@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-7-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-11-19 17:44:08 +01:00
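
A hedged sketch of the new primitive and a typical consumer (the queued-rwlock detail follows the upstream patch; the scan function is hypothetical):

/* A queued rwlock is contended when a waiter is parked on its internal
 * wait_lock. */
static inline int queued_rwlock_is_contended_sketch(struct qrwlock *lock)
{
        return arch_spin_is_locked(&lock->wait_lock);
}

/* Typical use: voluntarily break up a long read-side critical section. */
static void scan_with_breaks(rwlock_t *lock)
{
        read_lock(lock);
        /* ... partial work ... */
        if (rwlock_is_contended(lock)) {
                read_unlock(lock);
                cond_resched();
                read_lock(lock);
        }
        /* ... remaining work ... */
        read_unlock(lock);
}
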
Minchan Kim
f19a9560cc locking/rwlocks: introduce write_lock_nested
In preparation for converting bit_spin_lock to rwlock in zsmalloc, so
that multiple writers of zspages can run at the same time. Those
zspages are supposed to be different zspage instances, so this is not a
deadlock.  This patch adds write_lock_nested to support the case for
LOCKDEP.

[minchan@kernel.org: fix write_lock_nested for RT]
  Link: https://lkml.kernel.org/r/YZfrMTAXV56HFWJY@google.com
[bigeasy@linutronix.de: fixup write_lock_nested() implementation]
  Link: https://lkml.kernel.org/r/20211123170134.y6xb7pmpgdn4m3bn@linutronix.de

Link: https://lkml.kernel.org/r/20211115185909.3949505-8-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-11-19 17:44:05 +01:00
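
A hedged usage sketch of write_lock_nested() (struct and function names hypothetical): all instances are initialised from one call site and therefore share a lockdep class, so nesting two of them needs an explicit subclass.

#include <linux/lockdep.h>
#include <linux/spinlock.h>

struct zspage_like {
        rwlock_t lock;
};

static void instance_init(struct zspage_like *z)
{
        rwlock_init(&z->lock);          /* one lockdep class for all instances */
}

static void migrate(struct zspage_like *src, struct zspage_like *dst)
{
        /* callers must still impose a global order (e.g. by address) so the
         * nesting cannot deadlock for real */
        write_lock(&src->lock);
        write_lock_nested(&dst->lock, SINGLE_DEPTH_NESTING);
        /* ... move objects from src to dst ... */
        write_unlock(&dst->lock);
        write_unlock(&src->lock);
}
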
Sultan Alsawaf
d4bbaf5715 sched/core: Forbid Unity-based games from changing their CPU affinity
Unity-based games (such as Wild Rift) like to shoot themselves in the foot
by setting a nonsense CPU affinity, restricting the game to a narrow set of
CPU cores that it thinks are the "big" cores in a heterogeneous CPU. It
assumes that CPUs only have two performance domains (clusters), and
therefore royally mucks up games' CPU affinities on CPUs which have more
than two performance domains.

Check if a setaffinity target task is part of a Unity-based game and
silently ignore the setaffinity request so that it can't sabotage itself.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-19 17:43:59 +01:00
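
A hedged sketch of the check; the "UnityMain" comm match and the helper names are assumptions for illustration, not necessarily how this fork identifies the threads:

#include <linux/sched.h>
#include <linux/string.h>

/* Assumption: Unity games can be recognised by their main thread name. */
static bool is_unity_game(struct task_struct *p)
{
        return strstr(p->group_leader->comm, "UnityMain") != NULL;
}

/* Sketch of the setaffinity path: report success without touching the mask. */
static long setaffinity_filtered(struct task_struct *p,
                                 const struct cpumask *mask)
{
        if (is_unity_game(p))
                return 0;       /* silently ignore the request */
        return sched_setaffinity(p, mask);
}
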
ztc1997
136bbfd757 block: Add default I/O scheduler option 2024-11-19 17:43:55 +01:00
Paolo Valente
fbbabdb3bc block, bfq: use half slice_idle as a threshold to check short ttime
The value of the I/O plugging (idling) timeout is used also as the
think-time threshold to decide whether a process has a short think
time.  In this respect, a good value of this timeout for rotational
drives is on the order of several ms. Yet, this is often too long a
time interval to be effective as a think-time threshold. This commit
mitigates this problem (by a lot, according to tests), by halving the
threshold.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit b5f74ecacc3139ef873e69acc3aba28083ecc416)
(cherry picked from commit b1511c438e8a5668e6be04ad9107d6695332756c)
(cherry picked from commit 389992d9dc78340676248d0f01c7569b3db950ed)
(cherry picked from commit 49919eface6f4391cda0e77bcaad3e2786cbbab3)
(cherry picked from commit 87b015de51122ea9b5d9e56b846ae945db8444f0)
(cherry picked from commit 6ada34cdc94c89e97926a2d001412ecc027e1392)
(cherry picked from commit 2782bcc2919dd2a0a1d461d36c22338e67bc6327)
2024-11-19 17:43:46 +01:00
Paolo Valente
fe945719eb block, bfq: increase time window for waker detection
Tests on slower machines showed the current window to be way too
small. This commit increases it.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit ab1fb47e33dc7754a7593181ffe0742c7105ea9a)
(cherry picked from commit 0d1663f1922c5f6fb3a4b3cc5a3a861c765a3704)
(cherry picked from commit 85d9e1637a38d0cfdeba4e3847f1797dcd18da5d)
(cherry picked from commit 6bd707bb9a60e2bf0e680a271208f6c82a331571)
(cherry picked from commit 43755e08d048ccd6f3b2a3bbd34bea4a71c5bc12)
(cherry picked from commit b1a8cce9e99277ce53da20ab603473ad6c3e95d1)
(cherry picked from commit 74d27133a3261a296ddd98e9ff09d89bfab797bb)
2024-11-19 17:43:43 +01:00
Paolo Valente
7034a03ec0 block, bfq: do not raise non-default weights
BFQ heuristics try to detect interactive I/O, and raise the weight of
the queues containing such I/O. Yet, if the user also changes the
weight of a queue (i.e., the user changes the ioprio of the process
associated with that queue), then it is most likely better to prevent
the BFQ heuristics from silently changing that same weight.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 91b896f65d32610d6d58af02170b15f8d37a7702)
(cherry picked from commit cbbd2f045e60073978fe1b721c0953cd8762ecbb)
(cherry picked from commit 88b650c71f7d0d30ac2fa215a139d7a48d069cd9)
(cherry picked from commit 9a4725f0341c71a9b4f50f2d203f9740029e42e5)
(cherry picked from commit a2c57345ffa5404cefd3d43e2fd4e4492ac7c6e0)
(cherry picked from commit df56458ca85c681d163d879b832f868ed5044c8e)
(cherry picked from commit dfc085aad98db2bcabd2c438fcd722a90303e6cb)
2024-11-19 17:43:40 +01:00
Paolo Valente
4b23f1e69b block, bfq: do not expire a queue when it is the only busy one
This commit preserves I/O-dispatch plugging for a special symmetric
case that may suddenly turn into asymmetric: the case where only one
bfq_queue, say bfqq, is busy. In this case, not expiring bfqq does not
cause any harm to any other queues in terms of service guarantees. In
contrast, it avoids the following unlucky sequence of events: (1) bfqq
is expired, (2) a new queue with a lower weight than bfqq becomes busy
(or more queues), (3) the new queue is served until a new request
arrives for bfqq, (4) when bfqq is finally served, there are so many
requests of the new queue in the drive that the pending requests for
bfqq take a lot of time to be served. In particular, event (2) may
cause even already-dispatched requests of bfqq to be delayed, inside
the drive. So, to avoid this series of events, the scenario is
preventively declared as asymmetric also if bfqq is the only busy
queue. By doing so, I/O-dispatch plugging is performed for bfqq.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 2391d13ed484df1515f0025458e1f82317823fab)
(cherry picked from commit 79827eb41d8fb0f838a2c592775a8e63caeb7c57)
(cherry picked from commit 41720669259995fb7f064fc0f988c9d228750b37)
(cherry picked from commit 07d273c955ea2c34a42f6de0f1e3f1bfb00c6ce1)
(cherry picked from commit 8034c856b8fcafbef405eedddc12bb0625e52a42)
(cherry picked from commit f49083d304bda30647196b550a109f528c8266dc)
(cherry picked from commit 8a597f0ab5e7e83bfa426d071185c3d3ce5fa535)
2024-11-19 17:43:34 +01:00
Paolo Valente
5238084cd8 block, bfq: save also injection state on queue merging
To prevent injection information from being lost on bfq_queue merging,
the amount of service that a bfq_queue receives must also be saved and
restored when the bfq_queue is merged and split, respectively.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 5a5436b98d5cd2714feaaa579cec49dd7f7057bb)
(cherry picked from commit 9372e98dc77c7f2ebbb808a60abb01f30d70d0bc)
(cherry picked from commit e6a5b66cfe56495f26182cfd2340e3336bb4b2b4)
(cherry picked from commit c579a3634d163ed05cc4ac258411f03db969926e)
(cherry picked from commit 359f87d07390f687634185b0dd9d6f106fb5afdd)
(cherry picked from commit d1d1f1336ed77b83e98d26175e196b45a28958f4)
(cherry picked from commit 0ff8068594640924e0cffe27d8b0273bb80d74ca)
2024-11-19 17:43:15 +01:00
Paolo Valente
0769622634 block, bfq: save also weight-raised service on queue merging
To prevent weight-raising information from being lost on bfq_queue merging,
the amount of service that a bfq_queue receives must also be saved and
restored when the bfq_queue is merged and split, respectively.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit e673914d52f913584cc4c454dfcff2e8eb04533f)
(cherry picked from commit 48f3cf9bb6ae73de3e8e6cad2e50c6e70a6cd33f)
(cherry picked from commit d947cf3f8bcbcbe2dd8f5eec82e83a35198f874b)
(cherry picked from commit 39b91f1f22265c70cdc48916ac694dad6c21c191)
(cherry picked from commit 421c82648e46467d29dc0b5cd5522f00a026083d)
(cherry picked from commit e9eecde7c67303c1dc87864c10c372019d609b0b)
(cherry picked from commit 41d4c63679c36dc63b4cc9be301ec8d8d518d33f)
2024-11-19 17:43:10 +01:00
Paolo Valente
5267faf794 block, bfq: fix switch back from soft-rt weight-raising
A bfq_queue may happen to be deemed as soft real-time while it is
still enjoying interactive weight-raising. If this happens because of
a false positive, then the bfq_queue is likely to lose its soft
real-time status soon. Upon losing such a status, the bfq_queue must
get back its interactive weight-raising, if its interactive period is
not over yet. But this case is not handled. This commit corrects this
error.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit d1f600fa4732dac36c71a03b790f0c829a076475)
(cherry picked from commit db891a7d6aed6cc37d681d2bbf6c9bd697059281)
(cherry picked from commit 647b877a9a8493df84a1d4abd94be089c8fed49b)
(cherry picked from commit 7eda6de0bbbfa1d05b8888b697d9b7aeffe4d64e)
(cherry picked from commit c1e076d9f4688c77dfa0f859060ae1f27a8d889e)
(cherry picked from commit db0058abb7534aeb0abebe01c65659aa3886de78)
(cherry picked from commit 40bc06529a2053ca0caf2053dd6f2a27bf7af916)
2024-11-19 17:42:58 +01:00
Paolo Valente
8b47ef547b block, bfq: re-evaluate convenience of I/O plugging on rq arrivals
Upon an I/O-dispatch attempt, BFQ may detect that it was better to
plug I/O dispatch, and to wait for a new request to arrive for the
currently in-service queue. But the arrival of a new request for an
empty bfq_queue, and thus the switch from idle to busy of the
bfq_queue, may cause the scenario to change, and make plugging no
longer needed for service guarantees, or more convenient for
throughput. In this case, keeping I/O-dispatch plugged would certainly
lower throughput.

To address this issue, this commit makes such a check, and stops
plugging I/O when plugging is no longer beneficial.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 7f1995c27b19060dbdff23442f375e3097c90707)
(cherry picked from commit 12ec5a8ca2486d06f880d41751383c0d9549ba49)
(cherry picked from commit 64c6efc5ccb01edf553487aff312c0b7110cb30f)
(cherry picked from commit 3e04c1949f447a8166fa6d6343bd5332d8c12a4b)
(cherry picked from commit 40a263c36cf2094311e8189b6e9173360a808b12)
(cherry picked from commit 61a02ce46503671c747e550a13972ca8abaf5030)
(cherry picked from commit 3707ff2d32dccd807b8e5e6885f07f3874c71180)
2024-11-19 17:42:55 +01:00
Pavel Begunkov
f029d24207 splice: don't generate zero-len segment bvecs
iter_file_splice_write() may spawn bvec segments with zero-length. In
preparation for prohibiting them, filter out by hand at splice level.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 0f1d344feb534555a0dcd0beafb7211a37c5355e)
(cherry picked from commit 4c72fdc13bd20d10f59b8145627312814583a945)
(cherry picked from commit cba6a18da1cc8144a07ba6a4b03e8e8dc8d24428)
(cherry picked from commit 54a17499483118cd3c92feb747c88207ce30e9ce)
(cherry picked from commit 4dec661d05c16a8e62dd833262ff68ce3e466770)
(cherry picked from commit fe99d86b681099f662b2b01155b02b8476ff428d)
(cherry picked from commit aa033460cd26157fe81e829e4744b3396a09860b)
2024-11-19 17:42:24 +01:00
Pavel Begunkov
3c61c6aa45 bvec/iter: disallow zero-length segment bvecs
Zero-length bvec segments are allowed in general, but they are not
handled by bio and the lower block layer, so they are filtered out.
This inconsistency can be confusing and prevents optimisations. Since
zero-length segments are useless and the places that were generating
them have been patched, declare them not allowed.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 9b2e0016d04c6542ace0128eb82ecb3b10c97e43)
(cherry picked from commit 87afbd40acbb99860f846ad6f199e62e93be96c2)
(cherry picked from commit f0677085687d50b5ecd6e7a2e19e4aff23251cb6)
(cherry picked from commit affb154c088db678d4a541f8a4080fa5088cb10b)
(cherry picked from commit 9b383b80e8432af1d0421acf9287076db26996d7)
(cherry picked from commit f643066fcac50220888ecfe9b86c5d895d621648)
(cherry picked from commit d2f588cf9664d76f78287142f505e4f375503ae6)
2024-11-19 17:42:21 +01:00
Christoph Hellwig
8ae63d0654 target/file: allocate the bvec array as part of struct target_core_file_cmd
This saves one memory allocation, and ensures the bvecs aren't freed
before the AIO completion.  This will allow the lower level code to be
optimized so that it can avoid allocating another bvec array.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit ecd7fba0ade1d6d8d49d320df9caf96922a376b2)
(cherry picked from commit 272d2ea22b0e3da786a506896e36d3a586e6c252)
(cherry picked from commit 83ff0aa1cc08c329feb0748c575810b3ce8c0077)
(cherry picked from commit d0dc27fcc3f57d556ce4468a060e54f25c7b91b0)
(cherry picked from commit 847a30a99fc4b11c9e6cf2ec049ca20a6da9c769)
(cherry picked from commit 3799ad215edeb9276c4d16150a33de916cfa4ea1)
(cherry picked from commit ee8f417b3276049e4f0bbadf4c4524f071de2361)
2024-11-19 17:42:15 +01:00
Pavel Begunkov
f6172ea41b iov_iter: optimise bvec iov_iter_advance()
iov_iter_advance() is heavily used, but implemented through generic
means. For bvecs there is a specifically crafted function for that, so
use bvec_iter_advance() instead; it's faster and slimmer.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 54c8195b4ebe10af66b49ab9c809bc16939555fc)
(cherry picked from commit 8cac76228025fb022b1bb15e100efae8acde0425)
(cherry picked from commit c8b0dff6b5ac38ff23605bdae1c5bf62766d0fa3)
(cherry picked from commit 5bbff4ddbd3f87ddb409753269fa933109a99a7f)
(cherry picked from commit 689d9157a0b58f95cb2641a17226b023a1fb226a)
(cherry picked from commit 0df724cafe05ae311556249c7df0c2cd00e05007)
(cherry picked from commit ba5d942df07c03782ab2aa2b2dd1f7b96b3b5c52)
2024-11-19 17:42:10 +01:00
Jan Kara
b93af2c415 bfq: Use 'ttime' local variable
Use local variable 'ttime' instead of dereferencing bfqq.

Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 28c6def009192b673f92ea357dfb535ba15e00a4)
(cherry picked from commit bb2a213aa0a2b717c3a6e7848c6f82656d80897f)
(cherry picked from commit 2e0cfffb9a6da88cb1a786fb95618bfa714fea32)
(cherry picked from commit caff780963fdfda0ab456c24027298482d745b2f)
(cherry picked from commit b893b660ea8e998b760d48faeed2834e483158ad)
(cherry picked from commit 7e3d952af5fdcf6b02d01d55dbf658fbc2d67f41)
(cherry picked from commit 033b49f66e3808fead9e65e7c9417f26d423374f)
2024-11-19 17:42:05 +01:00
Joseph Qi
fdcb87e105 block/bfq: update comments and default value in docs for fifo_expire
Correct the comments since bfq_fifo_expire[0] is for async request,
while bfq_fifo_expire[1] is for sync request.
Also update the docs: according to the source code, the default
fifo_expire_async is 250ms, and fifo_expire_sync is 125ms.

Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 4168a8d27ed3a00f160e7f885c956f060d2a0741)
(cherry picked from commit a31ff2eb7d7cfa8331e513bb282f304117f18a77)
(cherry picked from commit a78637befaa4106f9858b3ad8e3273960d3de82b)
(cherry picked from commit bd8e7d3845c7a3b602aee361c7e3d0b5764ce060)
(cherry picked from commit a8543954accfadbb9a1cf1f64c6b3749ee3a629b)
(cherry picked from commit 960981f44b77dcd0d4e786aaef72d39057ccfc03)
(cherry picked from commit 50cfb4b6c1c2e4a3778f66510fee7a2e86e053f2)
2024-11-19 17:41:49 +01:00
Paolo Valente
8eb5a42575 block, bfq: always inject I/O of queues blocked by wakers
Suppose that I/O dispatch is plugged, to wait for new I/O for the
in-service bfq-queue, say bfqq.  Suppose then that there is a further
bfq_queue woken by bfqq, and that this woken queue has pending I/O. A
woken queue does not steal bandwidth from bfqq, because it soon runs
out of I/O if bfqq is not served. So there is virtually no risk
of loss of bandwidth for bfqq if this woken queue has I/O dispatched
while bfqq is waiting for new I/O. In contrast, this extra I/O
injection boosts throughput. This commit performs this extra
injection.

Tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Link: https://lore.kernel.org/r/20210304174627.161-2-paolo.valente@linaro.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 2ec5a5c48373d4bc2f0699f86507a65bf0b9df35)
(cherry picked from commit 0750db9767232fc2e4850868e526f4b02ecfb247)
(cherry picked from commit 8676f43249bbb0478a8b18bd87703da59902dbfd)
(cherry picked from commit df655d250f253a2f8a6792569108f30a04b7b894)
(cherry picked from commit d76168c1c3805a2c948e7ff60c8eb341e2ff0013)
(cherry picked from commit f213ae4e575f8ed67ae065fe80d06dc957f0b068)
(cherry picked from commit eb1ff3ab6d66081fbaf007c6cfc1a5e841719c0c)
2024-11-19 17:41:42 +01:00
Jan Kara
9abe5bf065 bfq: Provide helper to generate bfqq name
Instead of having a helper format the bfqq pid, provide a helper that
generates the full bfqq name as used in the traces. It saves some code
duplication and will save more in the coming tracepoints.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211125133645.27483-6-jack@suse.cz
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 582f04e19ad7b41df993c669805e48a01bcd9c5b)
(cherry picked from commit e030e88a4c2e220366f3db1af33d72d9638f93b5)
(cherry picked from commit e925a5fdce15f914ec2386b03bf64242792acce0)
(cherry picked from commit 9265a0e6952305932aa2b5caf2183387859dcfce)
(cherry picked from commit 41794de36673c11faca8c57625dfa50b76edde20)
(cherry picked from commit 5e830976b50a9f0a2c927b02f921f0d6ae796183)
(cherry picked from commit b5344876556e4a62cac7905bf11ca7ccf8d16d6d)
2024-11-19 17:41:18 +01:00
Yahu Gao
1297c45dcc block/bfq_wf2q: correct weight to ioprio
The return value is ioprio * BFQ_WEIGHT_CONVERSION_COEFF or 0.
What we want is ioprio or 0.
Correct this by changing the calculation.

Signed-off-by: Yahu Gao <gaoyahu19@gmail.com>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Link: https://lore.kernel.org/r/20220107065859.25689-1-gaoyahu19@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit bcd2be763252f3a4d5fc4d6008d4d96c601ee74b)
(cherry picked from commit 81806db867a17e49d37b1d556dd39f4da5227f56)
(cherry picked from commit aed9dbfda208b30130c64bac55570e2f89084d2b)
(cherry picked from commit 7158b54afec4b986d52cc646a5dffc30eac6dc19)
(cherry picked from commit fb4f80f773e0fc89f372c7afda9c8e9794849f67)
(cherry picked from commit 5ad409c78ed2bfca202490fa13f0a93c49f21382)
2024-11-19 17:40:48 +01:00
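
A hedged sketch of the corrected conversion, assuming the usual BFQ constants (IOPRIO_BE_NR, BFQ_WEIGHT_CONVERSION_COEFF):

static unsigned short bfq_weight_to_ioprio(int weight)
{
        /* before: IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - weight,
         * i.e. an ioprio still scaled by the conversion coefficient */
        return max_t(int, 0,
                     IOPRIO_BE_NR - weight / BFQ_WEIGHT_CONVERSION_COEFF);
}
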
Jan Kara
c72c9473f5 blk: Fix lock inversion between ioc lock and bfqd lock
Lockdep complains about lock inversion between ioc->lock and bfqd->lock:

bfqd -> ioc:
 put_io_context+0x33/0x90 -> ioc->lock grabbed
 blk_mq_free_request+0x51/0x140
 blk_put_request+0xe/0x10
 blk_attempt_req_merge+0x1d/0x30
 elv_attempt_insert_merge+0x56/0xa0
 blk_mq_sched_try_insert_merge+0x4b/0x60
 bfq_insert_requests+0x9e/0x18c0 -> bfqd->lock grabbed
 blk_mq_sched_insert_requests+0xd6/0x2b0
 blk_mq_flush_plug_list+0x154/0x280
 blk_finish_plug+0x40/0x60
 ext4_writepages+0x696/0x1320
 do_writepages+0x1c/0x80
 __filemap_fdatawrite_range+0xd7/0x120
 sync_file_range+0xac/0xf0

ioc->bfqd:
 bfq_exit_icq+0xa3/0xe0 -> bfqd->lock grabbed
 put_io_context_active+0x78/0xb0 -> ioc->lock grabbed
 exit_io_context+0x48/0x50
 do_exit+0x7e9/0xdd0
 do_group_exit+0x54/0xc0

To avoid this inversion we change blk_mq_sched_try_insert_merge() to not
free the merged request but rather leave that up to the caller, similarly
to blk_mq_sched_try_merge(). And in bfq_insert_requests() we make sure
to free all the merged requests after dropping bfqd->lock.

Fixes: aee69d78dec0 ("block, bfq: introduce the BFQ-v0 I/O scheduler as an extra scheduler")
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20210623093634.27879-3-jack@suse.cz
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit fd2ef39cc9a6b9c4c41864ac506906c52f94b06a)
(cherry picked from commit 786e392c4a7bd2559bdc1a1c6ac28d8b612a0735)
(cherry picked from commit aa8e3e1451bde73dff60f1e5110b6a3cb810e35b)
(cherry picked from commit 4deef6abb13a82b148c583d9ab37374c876fe4c2)
(cherry picked from commit 1988f864ec1c494bb54e5b9df1611195f6d923f2)
(cherry picked from commit 9dc0074b0dd8960f9e06dc1494855493ff53eb68)
(cherry picked from commit c937983724111bb4526e34da0d5c6c8aea1902af)
2024-11-19 17:40:26 +01:00
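
A hedged sketch of the caller-side pattern after this change, condensed from what bfq_insert_request() has to do (surrounding logic elided):

        struct bfq_data *bfqd = q->elevator->elevator_data;
        LIST_HEAD(free);

        spin_lock_irq(&bfqd->lock);
        if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
                spin_unlock_irq(&bfqd->lock);
                /* the merged requests are freed only after bfqd->lock is
                 * dropped, so the ioc->lock taken on the free path can no
                 * longer nest inside bfqd->lock */
                blk_mq_free_requests(&free);
                return;
        }
        /* ... normal insertion under bfqd->lock ... */
        spin_unlock_irq(&bfqd->lock);
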
Johannes Weiner
90c0c9aa4a cgroup: rstat: punt root-level optimization to individual controllers
Current users of the rstat code can source root-level statistics from
the native counters of their respective subsystem, allowing them to
forego aggregation at the root level.  This optimization is currently
implemented inside the generic rstat code, which doesn't track the root
cgroup and doesn't invoke the subsystem flush callbacks on it.

However, the memory controller cannot do this optimization, because
cgroup1 breaks out memory specifically for the local level, including at
the root level.  In preparation for the memory controller switching to
rstat, move the optimization from rstat core to the controllers.

Afterwards, rstat will always track the root cgroup for changes and
invoke the subsystem callbacks on it; and it's up to the subsystem to
special-case and skip aggregation of the root cgroup if it can source
this information through other, cheaper means.

This is the case for the io controller and the cgroup base stats.  In
their respective flush callbacks, check whether the parent is the root
cgroup, and if so, skip the unnecessary upward propagation.

The extra cost of tracking the root cgroup is negligible: on stat
changes, we actually remove a branch that checks for the root.  The
queueing for a flush touches only per-cpu data, and only the first stat
change since a flush requires a (per-cpu) lock.

Link: https://lkml.kernel.org/r/20210209163304.77088-6-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michal Koutný <mkoutny@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit dc26532aed0ab25c0801a34640d1f3b9b9098a48)
(cherry picked from commit 69da183fcd0112af130879a1c93113a941e2241b)
(cherry picked from commit ddf1013871482b246147e71a04c865c1be5cf74d)
(cherry picked from commit 30fcd52e18dd1d508b1b22f7c660ac22de734f67)
(cherry picked from commit 19c9a1b9d9ae9a4f359deaf89101f9013254f43d)
(cherry picked from commit 0b4286aea9bb0a6ea6acb723f8396e476044190b)
2024-11-19 17:40:21 +01:00
Nahuel Gómez
a4d33f6631 block: ssg-iosched: adapt to new patches
../block/ssg-iosched.c:684:41: error: too few arguments to function call, expected 3, have 2
  684 |         if (blk_mq_sched_try_insert_merge(q, rq))
      |             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~      ^
../block/blk-mq-sched.h:15:6: note: 'blk_mq_sched_try_insert_merge' declared here
   15 | bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
      |      ^                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   16 |                                    struct list_head *free);
      |                                    ~~~~~~~~~~~~~~~~~~~~~~
1 error generated.

Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-19 17:40:09 +01:00
Nahuel Gómez
82eba12440 exynos-pm: fix build without CONFIG_SEC_PM_DEBUG
We remove the checks to allow the function to be used anyway.

../drivers/soc/samsung/exynos-pm/exynos-pm.c:107:10: error: declaration of 'struct wakeup_stat_name' will not be visible outside of this function [-Werror,-Wvisibility]
  107 |                 struct wakeup_stat_name *ws_names)
      |                        ^
../drivers/soc/samsung/exynos-pm/exynos-pm.c:114:18: error: incomplete definition of type 'struct wakeup_stat_name'
  114 |                 name = ws_names->name[bit];
      |                        ~~~~~~~~^
../drivers/soc/samsung/exynos-pm/exynos-pm.c:107:10: note: forward declaration of 'struct wakeup_stat_name'
  107 |                 struct wakeup_stat_name *ws_names)
      |                        ^
../drivers/soc/samsung/exynos-pm/exynos-pm.c:131:25: error: no member named 'ws_names' in 'struct exynos_pm_info'
  131 |         if (unlikely(!pm_info->ws_names))
      |                       ~~~~~~~  ^
../include/linux/compiler.h:78:42: note: expanded from macro 'unlikely'
   78 | # define unlikely(x)    __builtin_expect(!!(x), 0)
      |                                             ^
../drivers/soc/samsung/exynos-pm/exynos-pm.c:143:51: error: no member named 'ws_names' in 'struct exynos_pm_info'
  143 |                 exynos_show_wakeup_reason_sysint(wss, &pm_info->ws_names[i]);
      |                                                        ~~~~~~~  ^
../drivers/soc/samsung/exynos-pm/exynos-pm.c:465:11: error: no member named 'ws_names' in 'struct exynos_pm_info'
  465 |         pm_info->ws_names = kzalloc(sizeof(*pm_info->ws_names) * n, GFP_KERNEL);
      |         ~~~~~~~  ^
../drivers/soc/samsung/exynos-pm/exynos-pm.c:465:47: error: no member named 'ws_names' in 'struct exynos_pm_info'
  465 |         pm_info->ws_names = kzalloc(sizeof(*pm_info->ws_names) * n, GFP_KERNEL);
      |                                             ~~~~~~~  ^
../drivers/soc/samsung/exynos-pm/exynos-pm.c:466:16: error: no member named 'ws_names' in 'struct exynos_pm_info'
  466 |         if (!pm_info->ws_names)
      |              ~~~~~~~  ^
../drivers/soc/samsung/exynos-pm/exynos-pm.c:478:14: error: no member named 'ws_names' in 'struct exynos_pm_info'
  478 |                                 pm_info->ws_names[idx].name, size);
      |                                 ~~~~~~~  ^
8 errors generated.

Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-19 17:39:21 +01:00
Nahuel Gómez
7059d8baa3 kernel: sysctl: add init protection to common mm-related nodes
The protected nodes are:
* dirty_ratio
* dirty_background_ratio
* dirty_bytes
* dirty_background_bytes
* dirty_expire_centisecs
* dirty_writeback_centisecs
* swappiness

This approach is inspired by [1] and makes use of the node tampering blacklist.

[1]: 239efdc263

Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-19 17:39:17 +01:00
Nahuel Gómez
bfb3710a7c mm: new writeback and swappiness values from Ktweak
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-19 17:39:12 +01:00
Adam W. Willis
bcec04dde1 mm: apply init protection
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Change-Id: I1a1928fec9efeb29203a94644388c3ca48e7d96e
[TogoFire]: adapt to k5.4.
Signed-off-by: TogoFire <togofire@mailfence.com>
2024-11-19 17:39:06 +01:00
Uladzislau Rezki
d0dc26b405 workqueue: Make queue_rcu_work() use call_rcu_flush()
Earlier commits in this series allow battery-powered systems to build
their kernels with the default-disabled CONFIG_RCU_LAZY=y Kconfig option.
This Kconfig option causes call_rcu() to delay its callbacks in order
to batch them.  This means that a given RCU grace period covers more
callbacks, thus reducing the number of grace periods, in turn reducing
the amount of energy consumed, which increases battery lifetime which
can be a very good thing.  This is not a subtle effect: In some important
use cases, the battery lifetime is increased by more than 10%.

This CONFIG_RCU_LAZY=y option is available only for CPUs that offload
callbacks, for example, CPUs mentioned in the rcu_nocbs kernel boot
parameter passed to kernels built with CONFIG_RCU_NOCB_CPU=y.

Delaying callbacks is normally not a problem because most callbacks do
nothing but free memory.  If the system is short on memory, a shrinker
will kick all currently queued lazy callbacks out of their laziness,
thus freeing their memory in short order.  Similarly, the rcu_barrier()
function, which blocks until all currently queued callbacks are invoked,
will also kick lazy callbacks, thus enabling rcu_barrier() to complete
in a timely manner.

However, there are some cases where laziness is not a good option.
For example, synchronize_rcu() invokes call_rcu(), and blocks until
the newly queued callback is invoked.  It would not be a good for
synchronize_rcu() to block for ten seconds, even on an idle system.
Therefore, synchronize_rcu() invokes call_rcu_flush() instead of
call_rcu().  The arrival of a non-lazy call_rcu_flush() callback on a
given CPU kicks any lazy callbacks that might be already queued on that
CPU.  After all, if there is going to be a grace period, all callbacks
might as well get full benefit from it.

Yes, this could be done the other way around by creating a
call_rcu_lazy(), but earlier experience with this approach and
feedback at the 2022 Linux Plumbers Conference shifted the approach
to call_rcu() being lazy with call_rcu_flush() for the few places
where laziness is inappropriate.

And another call_rcu() instance that cannot be lazy is the one
in queue_rcu_work(), given that callers to queue_rcu_work() are
not necessarily OK with long delays.

Therefore, make queue_rcu_work() use call_rcu_flush() in order to revert
to the old behavior.

Signed-off-by: Uladzislau Rezki <urezki@gmail.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2024-11-19 17:37:56 +01:00
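
The resulting function is tiny; a simplified sketch of queue_rcu_work() after the change (from kernel/workqueue.c):

bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)
{
        struct work_struct *work = &rwork->work;

        if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
                rwork->wq = wq;
                /* was call_rcu(): must not be lazy, callers of
                 * queue_rcu_work() expect timely execution */
                call_rcu_flush(&rwork->rcu, rcu_work_rcufn);
                return true;
        }

        return false;
}
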
Sultan Alsawaf
fa6b06bf46 sched/fair: Always update CPU capacity when load balancing
Limiting CPU capacity updates, which are quite cheap, results in worse
balancing decisions during opportunistic balancing (e.g., SD_BALANCE_WAKE).
This causes opportunistic placement decisions to be skewed using stale CPU
capacity data, and when a CPU isn't idling much, its capacity suffers from
even more staleness since the only exception to the 100 ms capacity update
ratelimit is a CPU exiting idle.

Since the capacity updates are cheap, always do it when load balancing in
order to improve opportunistic task placement decisions.

Change-Id: If1d451ce742fd093010057e31e71012d47fad70a
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-19 17:34:49 +01:00
Joel Fernandes (Google)
323a4009a4 rcu: Avoid unnecessary softirq when system is idle
When there are no callbacks pending on an idle system, I noticed that
the RCU softirq fires continuously. During this, cpu_no_qs is set to
false and core_needs_qs is set to true indefinitely. This causes
rcu_process_callbacks to be repeatedly called, even though the node
corresponding to the CPU has that CPU's mask bit cleared and the system
is idle. I believe the race is when such mask clearing is done during
idle CPU scan of the quiescent state forcing stage in the kthread
instead of the softirq. Since the rnp mask is cleared, but the flags on
the CPU's rdp are not cleared, the CPU thinks it still needs to report
to core RCU.

Cure this by clearing the core_needs_qs flag when the CPU detects that
its node is already updated which will avoid the unwanted softirq raises
to the benefit of real-time systems.

Test: Ran rcutorture for various tree RCU configs.

Change-Id: Iee374d1dcdc74ecc5e6816a99be51feddd876931
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: mydongistiny <jaysonedson@gmail.com>
2024-11-19 17:34:20 +01:00
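
A hedged sketch of the cure described above (illustrative only; the field names follow the tree-RCU rcu_data/rcu_node structures):

        /* in the per-CPU quiescent-state report path, sketch: */
        if (!(rnp->qsmask & rdp->grpmask)) {
                /* our bit in the leaf node is already clear: the grace period
                 * needs nothing more from this CPU, so also drop the local
                 * hint that keeps re-raising the softirq */
                rdp->core_needs_qs = false;
                raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
                return;
        }
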
Tyler Nijmeh
35df832f54 mm: oom_kill: Do not dump tasks by default
This takes RCU and tasks locks to simply print debugging information,
which with certain log levels will not even display. Disable this by
default.

Change-Id: I952dba176f955239061acc7b178d88fceff8ecdf

Signed-off-by: RyuujiX <saputradenny712@gmail.com>
Signed-off-by: onettboots <blackcocopet@gmail.com>
2024-11-19 17:33:46 +01:00
Panchajanya1999
0ab2f838a5 tcp: Force the TCP no-delay option for everything
Forcing TCP no-delay disables Nagle's algorithm, which collects small outgoing packets and sends them all at once. With Nagle disabled, packets are sent as soon as they are written, which improves latency.

Read https://brooker.co.za/blog/2024/05/09/nagle.html for details.

Signed-off-by: prathamdubey2005 <134331217+prathamdubey2005@users.noreply.github.com>
2024-11-19 17:33:40 +01:00
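
For context, a user-space snippet showing the per-socket equivalent of what this commit forces globally (plain setsockopt, nothing fork-specific):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm on one socket: small writes go out immediately
 * instead of being coalesced, trading a little bandwidth for lower latency. */
int set_tcp_nodelay(int fd)
{
        int one = 1;

        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
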
gustavoss
9a9f44e174 Optimized Console FrameBuffer for up to 70% increase in performance
Signed-off-by: Joe Maples <joe@frap129.org>
Signed-off-by: John Vincent <git@tenseventyseven.cf>
2024-11-19 17:30:21 +01:00
Ksawlii
9b077df9ac Revert "net: mac802154: Fix racy device stats updates by DEV_STATS_INC() and DEV_STATS_ADD()"
This reverts commit 97f5298d5c.
2024-11-19 14:52:14 +01:00