Commit graph

3616 commits

Ksawlii
3d69c25605 Revert "xfrm6: fix inet6_dev refcount underflow problem"
This reverts commit 00fddf40c2.
2024-11-17 19:38:30 +01:00
Sultan Alsawaf
0a960ba529 kernel: Introduce SBalance IRQ balancer
This is a simple IRQ balancer that polls every X number of milliseconds and
moves IRQs from the most interrupt-heavy CPU to the least interrupt-heavy
CPUs until the heaviest CPU is no longer the heaviest. IRQs are only moved
from one source CPU to any number of destination CPUs per balance run.
Balancing is skipped if the gap between the most interrupt-heavy CPU and
the least interrupt-heavy CPU is below the configured threshold of
interrupts.

The heaviest IRQs are targeted for migration in order to reduce the number
of IRQs to migrate. If moving an IRQ would reduce overall balance, then it
won't be migrated.

The most interrupt-heavy CPU is calculated by scaling the number of new
interrupts on that CPU to the CPU's current capacity. This way, interrupt
heaviness takes into account factors such as thermal pressure and time
spent processing interrupts rather than just the sheer number of them. This
also makes SBalance aware of CPU asymmetry, where different CPUs can have
different performance capacities and be proportionally balanced.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:55:12 +01:00
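
Since the commit message fully describes the algorithm, here is a minimal, self-contained sketch of one balance pass in plain C (illustrative only; the names, threshold, and data layout are assumptions, not the actual SBalance code):

    #include <stddef.h>

    #define NR_CPUS       8
    #define IRQ_GAP_MIN  64        /* assumed balance threshold, in interrupts */

    /* New interrupts per CPU since the last poll, scaled to each CPU's capacity. */
    static unsigned int cpu_irq_load[NR_CPUS];

    static int heaviest_cpu(void)
    {
            int cpu, max = 0;

            for (cpu = 1; cpu < NR_CPUS; cpu++)
                    if (cpu_irq_load[cpu] > cpu_irq_load[max])
                            max = cpu;
            return max;
    }

    static int lightest_cpu(void)
    {
            int cpu, min = 0;

            for (cpu = 1; cpu < NR_CPUS; cpu++)
                    if (cpu_irq_load[cpu] < cpu_irq_load[min])
                            min = cpu;
            return min;
    }

    /*
     * One balance run: move the heaviest IRQs off the single heaviest CPU onto
     * the lightest CPUs until that CPU is no longer the heaviest, skipping any
     * move that would not improve the overall balance.
     */
    static void balance_pass(const unsigned int *irq_load, int *irq_cpu,
                             size_t nr_irqs)
    {
            int src = heaviest_cpu();
            size_t i;

            if (cpu_irq_load[src] - cpu_irq_load[lightest_cpu()] < IRQ_GAP_MIN)
                    return;         /* gap below threshold: skip balancing */

            /* irq_load[] is assumed sorted heaviest-first to minimize moves. */
            for (i = 0; i < nr_irqs && src == heaviest_cpu(); i++) {
                    int dst = lightest_cpu();

                    if (irq_cpu[i] != src)
                            continue;
                    /* Skip moves that would merely invert the imbalance. */
                    if (cpu_irq_load[dst] + irq_load[i] >= cpu_irq_load[src])
                            continue;
                    irq_cpu[i] = dst;
                    cpu_irq_load[src] -= irq_load[i];
                    cpu_irq_load[dst] += irq_load[i];
            }
    }
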
Divyanshu-Modi
6d78e5b6ae irq: Don't allow IRQ affinities to be set from userspace
msm_irqbalance balances IRQs across CPUs, which collides with SBalance.

Signed-off-by: Divyanshu-Modi <divyan.m05@gmail.com>
2024-11-17 17:45:42 +01:00
Sultan Alsawaf
d174ab61d6 i2c: exynos5: Silence noisy error and info logs
I2C errors are quite common with these controllers and are non-fatal since
the transactions are retried. Silence the noisy logs so that dmesg isn't
destroyed by I2C log spam.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:45:37 +01:00
Sultan Alsawaf
60e07cae48 i2c: exynos5: Set IRQF_NOBALANCING
IRQ balancing is already performed naturally by moving the i2c IRQ to the
CPU that kicks off an i2c transaction. Therefore, opt out from IRQ
balancing operations by setting IRQF_NOBALANCING.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:45:35 +01:00
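
For reference, opting an IRQ out of balancing comes down to passing IRQF_NOBALANCING when requesting the handler; a hedged sketch (the handler and device names follow the i2c-exynos5 driver but are illustrative here):

    #include <linux/interrupt.h>

    /* Sketch: request the HS-I2C IRQ with IRQ balancing disabled. */
    ret = devm_request_irq(&pdev->dev, i2c->irq, exynos5_i2c_irq,
                           IRQF_NOBALANCING, dev_name(&pdev->dev), i2c);
    if (ret)
            dev_err(&pdev->dev, "cannot request HS-I2C IRQ %d\n", i2c->irq);
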
Sultan Alsawaf
60bb8b3c0a drm: Stub out debug prints
Within the display server process, __drm_dbg consumes significant CPU time:
    2.40%  [kernel]       [k] __drm_dbg

Instead of compiling in all DRM debug print statements, stub them out to
reduce runtime overhead and size.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:45:26 +01:00
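
One way to stub these out while keeping format-string checking is to route the debug macros through no_printk(); a hedged sketch of the idea (not necessarily the exact set of macros patched here):

    /* Sketch (drm_print.h): compile DRM debug prints down to nothing. */
    #define DRM_DEBUG(fmt, ...)             no_printk(fmt, ##__VA_ARGS__)
    #define DRM_DEV_DEBUG(dev, fmt, ...)    no_printk(fmt, ##__VA_ARGS__)
    #define drm_dbg(category, fmt, ...)     no_printk(fmt, ##__VA_ARGS__)
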
Viresh Kumar
0b239017a9 arch_topology: Rename freq_scale as arch_freq_scale
Rename freq_scale to a less generic name, as it will get exported soon
for modules. Since x86 already names its own implementation of this as
arch_freq_scale, let's stick to that.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2024-11-17 17:45:22 +01:00
Sultan Alsawaf
669f8aa664 sched/completion: Expose wait_for_common*() to drivers
Allow drivers to wait with a custom task state specified by exposing the
raw wait_for_common*() functions. This allows code to wait for completions
that are invariant with respect to CPU performance *without* contributing
to load avg, without requiring the wait to be interruptible.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:45:08 +01:00
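
A hedged usage sketch: with wait_for_common() exposed, a driver can wait in TASK_IDLE, which is uninterruptible but does not count toward load average (the completion and timeout here are illustrative):

    #include <linux/completion.h>
    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <linux/sched.h>

    /* Sketch: wait for a hardware event without contributing to load avg. */
    static int wait_for_hw_done(struct completion *hw_done)
    {
            long left;

            left = wait_for_common(hw_done, msecs_to_jiffies(100), TASK_IDLE);
            return left ? 0 : -ETIMEDOUT;
    }
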
Sultan Alsawaf
15898055b7 arm64: lse: Always use LSE atomic instructions
Since we are compiling for a single chipset that is known to support LSE,
the system_uses_lse_atomics() static branch can be eliminated entirely.

Therefore, make system_uses_lse_atomics() always true to always use LSE
atomics, and update ARM64_LSE_ATOMIC_INSN() users to get rid of the extra
nops used for alternatives patching at runtime.

This reduces generated code size by removing LL/SC atomics, which improves
instruction cache footprint.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:45:05 +01:00
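
The effect on arch/arm64/include/asm/lse.h is roughly the following (sketch):

    /* Sketch: LSE is guaranteed by the target SoC, so drop the static branch. */
    static inline bool system_uses_lse_atomics(void)
    {
            return true;
    }

    /* With LSE always used, no runtime alternatives patching is needed. */
    #define ARM64_LSE_ATOMIC_INSN(llsc, lse)        lse
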
Sultan Alsawaf
6007c2066e soc/samsung/cpif: Silence PCI doorbell interrupt log spam
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:45:02 +01:00
Sultan Alsawaf
ee1e7f7173 soc/samsung/cpif: Don't affine IRQs when SBalance is enabled
Let SBalance handle IRQ affinities when it's enabled for better efficiency
and performance.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:44:58 +01:00
Sultan Alsawaf
74fef07067 qcacld-3.0: Disable auto IRQ affinity feature
This is at odds with sbalance, which balances this IRQ automatically.
Disable the IRQ affinity feature and leave this up to sbalance.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:44:53 +01:00
Sultan Alsawaf
095d158f6b scsi: ufs: Implement IRQ-affined PM QoS for reduced latency
This implements a simple IRQ-affined PM QoS mechanism for each UFS adapter
which uses atomics to elide locking, and enqueues a worker to apply PM QoS
to the target CPU as soon as a command request is issued.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:44:48 +01:00
Sultan Alsawaf
7780f16398 dma: pl330: Make IRQ handler non-threaded on RT
-EIO errors with SPI transfers over DMA are observed on RT sometimes.
Looking at the pl330 IRQ handler, it appears that it just masks interrupts
and dispatches tasklets to do further processing.

Since the hard IRQ handler just masks interrupts and dispatches work, make
it non-threaded on RT and introduce a threaded handler to offload some
burden from hard IRQ context.

This appears to resolve the sporadic -EIO errors observed on RT.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:44:35 +01:00
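
A hedged sketch of the pattern (not the actual patch): keep a minimal primary handler in hard IRQ context, mark it IRQF_NO_THREAD so RT's forced threading leaves it alone, and push the heavier work to an explicit threaded handler:

    #include <linux/interrupt.h>

    static irqreturn_t pl330_irq(int irq, void *data)
    {
            /* Hard IRQ context: mask/ack the interrupt, then defer the rest. */
            return IRQ_WAKE_THREAD;
    }

    static irqreturn_t pl330_irq_thread(int irq, void *data)
    {
            /* Threaded context: the heavier completion/descriptor processing. */
            return IRQ_HANDLED;
    }

    /* Sketch: IRQF_NO_THREAD keeps the primary handler un-threaded on RT. */
    ret = devm_request_threaded_irq(dev, irq, pl330_irq, pl330_irq_thread,
                                    IRQF_NO_THREAD, "dma-pl330", pl330);
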
Sultan Alsawaf
8e8c1728e7 iommu/samsung: Disable fault reporting by default
Lots of subsystems, such as the TPU, occasionally spam hundreds of
thousands of IOMMU faults which are not only resource heavy due to the IRQ
overhead, but also destroy dmesg/ramoops with tons of spam. These errors
appear to be nonfatal and don't seem actionable for anyone outside of
Samsung or Google, so turn them off by default.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:44:13 +01:00
Sultan Alsawaf
5d8a1bc838 exynos_pm_qos: Remove exynos_pm_qos_update_request_timeout()
The possibility of a timeout being used with a PM QoS request incurs
overhead for *all* PM QoS requests due to the necessary calls to
cancel_delayed_work_sync().

Furthermore, using a timeout for a PM QoS request can lead to disastrous
results on power consumption. It's always possible to find a fixed scope in
which a PM QoS request should be applied, so timeouts aren't ever strictly
needed; they're usually just a lazy way of using PM QoS.

Remove the timeout API to eliminate the added overhead for non-timeout PM
QoS requests, and so that timeouts cannot be misused.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:44:09 +01:00
ThunderStorms21th
008b431fd1 mfc: Reduce QoS boosting from Samsung hacks
By default, everything is boosted as if playing back 240fps content, for
optimal playback performance. However, that assumption does not always hold:
the boost is also applied when the video bitrate isn't particularly high,
causing high power consumption.

Reduce and limit the boosting to what is needed. For the decoder, only apply
it for UHD video resolutions.

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
[TenSeventy7: Negative unsigned integer fixes already present on 9610]
Signed-off-by: John Vincent <git@tensevntysevn.cf>

Signed-off-by: ThunderStorms21th <pinakastorm@gmail.com>
2024-11-17 17:43:58 +01:00
LibXZR
8ab3a6194e binder_alloc: Disable debug logging by default
* This is never useful to us

Signed-off-by: LibXZR <xzr467706992@163.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
Signed-off-by: forenche <prahul2003@gmail.com>
2024-11-17 17:43:54 +01:00
ThunderStorms21th
81a8e0f970 blk-throttle: Target 1ms latencies for throttling
Flash storage has extremely low latency, so we can set the maximum
target latency to 1ms. Anything exceeding 1ms of latency will cause
blk-throttle to trigger.

Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
Signed-off-by: John Vincent <git@tensevntysevn.cf>

Signed-off-by: ThunderStorms21th <pinakastorm@gmail.com>
2024-11-17 17:43:50 +01:00
ThunderStorms21th
46df2bf46c thermal: Increase thermal trip points to 16
Some DT devices, mainly smartphones, need more trip points to allow
finer-grained thermal mitigation, and hence a better user experience (and
overall performance), for example by lowering the CPU clocks just a little
for each temperature step.

taken from : f40204b196

Signed-off-by: Henrique Pereira <hlcpereira@pixelexperience.org>

Signed-off-by: ThunderStorms21th <pinakastorm@gmail.com>
2024-11-17 17:43:45 +01:00
ThunderStorms21th
e0c461eb4f usb: correct function name
Other drivers, like the MTP driver, use a proper 'function.name' to make configfs work, so let's correct the mass storage name as well, which allows DriveDroid to work.

from : 132f7d90fd

Signed-off-by: ThunderStorms21th <pinakastorm@gmail.com>
2024-11-17 17:43:41 +01:00
LuK1337
0de37a8853 ext4: Add no_sehash_xattr mount option
* Useful for devices where /persist may have unexpected SELinux
  contexts but xattr of root directory is valid, leading to
  restorecon exiting early without traversing the partition.

Change-Id: I5089ff90f76aa9f3db7da26f73548cf62fe67bd0
(cherry picked from commit 8c6c40aa2ce13b83bcc137424e99be5e39d5245d)
2024-11-17 17:43:35 +01:00
engstk
f8f036ffe5 Optimized integer SQRT for up to 3x faster operation 2024-11-17 17:43:31 +01:00
Danny Lin
e0f839a996 f2fs: Add support for reporting a fake kernel version to fsck
fsck.f2fs forces a filesystem fix on boot if it detects that the current
kernel version differs from the one saved in the superblock, which results in
fsck blocking boot for a long time (~35 seconds). This commit provides a
way to report a constant fake kernel version to fsck to avoid triggering
the version check, which is useful if you boot new kernel builds
frequently.

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2024-11-17 17:43:26 +01:00
Tim Zimmermann
c0f37aaa23 drivers: net: wireless: broadcom: enable p2p mac randomization support
Change-Id: Ie7103ad2da7c080a4dff7c9d040b7c7fe6d08509
2024-11-17 17:43:19 +01:00
Jesse Chan
b422cd8b8a drivers: battery_v2: sec_battery: export {CURRENT/VOLTAGE}_MAX to sysfs
Inform the system whether we are charging normally, fast, or rapidly. This
will be displayed on the lockscreen.

Change-Id: Id0de196e02bd5393cb4fb90835f18caa1d2fe20d
2024-11-17 17:43:14 +01:00
Diep Quynh
e9ed6faf2c acpm: Disable logging by default
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-11-17 17:43:06 +01:00
idkwhoiam322
f434223a87 printk: Don't allow userspace to write to /dev/kmsg
There is extensive spam in dmesg because of userspace.
This kills all "init", "healthd", "logd" messages in kernel logs and makes them
more readable.

Extracted from 59f163ac76.

Signed-off-by: idkwhoiam322 <idkwhoiam322@raphielgang.org>
Signed-off-by: prorooter007 <shreyashwasnik112@gmail.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
2024-11-17 17:43:00 +01:00
Park Ju Hyung
311d21b734 blk: disable IO_STAT completely
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
(cherry picked from commit 4d3c7baa4acb5fa3b238dde267826254788e86e5)
(cherry picked from commit 6e658026fe20dc1d651c5f4a56afd363a3195f42)
(cherry picked from commit 2ce27817f2fa8e4bbd3420b2f6c050d404703efc)
(cherry picked from commit 85c200268a4bd8b9ad639991bdaf233ba14f6ade)
(cherry picked from commit 35ab17dbef37e36630d542dce420dc3ac6467d74)
(cherry picked from commit 77662095632a51547ca5f921ec453802788d58ee)
(cherry picked from commit 46e2b47e9560de0877079c8a5db0f5ae742133c4)
(cherry picked from commit d12b3702c6e03ac84d399d41e2859b24e8630dea)
(cherry picked from commit 79be04236891dcd6e5e87a25626a64d6d0d0a42f)
(cherry picked from commit c85b5a7d9c215ca4dc35e894149523b33409fd40)
2024-11-17 17:42:57 +01:00
haridhayal11
58d55d8296 block: disable I/O stats accounting by default
While Android userspace (e.g. storaged) does use iostats via
/proc/diskstats, init will explicitly enable iostats for the devices on
which it is primarily used - sda and sdf. Avoid the 0.5-1% overhead for
block devices that do not need it.

Co-Authored-By: kdrag0n <dragon@khronodragon.com>
2024-11-17 17:42:54 +01:00
Danny Lin
84b47ecf71 tcp: Enable ECN negotiation by default
This is now the default for all connections in iOS 11+, and we have
RFC 3168 ECN fallback to detect and disable ECN for broken flows.

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2024-11-17 17:42:51 +01:00
Zhangyao,Ye
67763ebe89 Disable vmscan warning print
When free memory drops below a threshold, the system prints call-stack
messages at high frequency; too much log output disturbs normal system operation.

Change-Id: I9b1cb84537486e2979cb93ac9a248bec85453d9c
Signed-off-by: wya <wya@codeaurora.org>
2024-11-17 17:42:47 +01:00
Jens Axboe
81024ea319 blk-mq: fix corruption with direct issue
If we attempt a direct issue to a SCSI device, and it returns BUSY, then
we queue the request up normally. However, the SCSI layer may have
already setup SG tables etc for this particular command. If we later
merge with this request, then the old tables are no longer valid. Once
we issue the IO, we only read/write the original part of the request,
not the new state of it.

This causes data corruption, and is most often noticed with the file
system complaining about the just read data being invalid:

[  235.934465] EXT4-fs error (device sda1): ext4_iget:4831: inode #7142: comm dpkg-query: bad extra_isize 24937 (inode size 256)

because most of it is garbage...

This doesn't happen from the normal issue path, as we will simply defer
the request to the hardware queue dispatch list if we fail. Once it's on
the dispatch list, we never merge with it.

Fix this from the direct issue path by flagging the request as
REQ_NOMERGE so we don't change the size of it before issue.

See also:
  https://bugzilla.kernel.org/show_bug.cgi?id=201685

Tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes: 6ce3dd6eec1 ("blk-mq: issue directly if hw queue isn't busy in case of 'none'")
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
(cherry picked from commit 9a897ce1d5b6611daa27bf00fcfb5c97a3d826b4)
(cherry picked from commit 66af19f52cf6d2a9deef8de2f451604d49ef42f1)
2024-11-17 17:42:37 +01:00
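
The core of the fix is one flag on the failed direct-issue path; a hedged, simplified sketch of the idea:

    /* Sketch: direct issue returned BUSY; the driver (e.g. SCSI) may already
     * have set up SG tables for this request, so forbid later merging before
     * it is queued for normal dispatch. */
    if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
            rq->cmd_flags |= REQ_NOMERGE;
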
Panchajanya1999
f2d8b4a3b3 binder_alloc: Avoid page memory allocation in high memory
In binder, using GFP_HIGHMEM results in the allocated memory not being
mapped into the kernel's virtual address space, which prevents the kernel
from referring to it directly.

Change-Id: I952dbc8ae205e47fa00ddf186ef306903f623367
Signed-off-by: Panchajanya1999 <panchajanya@azure-dev.live>
Signed-off-by: Jebaitedneko <Jebaitedneko@gmail.com>
2024-11-17 17:42:33 +01:00
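
The change amounts to dropping __GFP_HIGHMEM from binder's page allocation so the pages stay directly addressable by the kernel; a hedged before/after sketch:

    /* Before: pages may come from highmem, outside the kernel's direct mapping. */
    page->page_ptr = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);

    /* After (sketch): keep the allocation out of highmem so the kernel can
     * reference the page directly. */
    page->page_ptr = alloc_page(GFP_KERNEL | __GFP_ZERO);
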
TheCrazyLex
99c5ba5745 binder: Disable debug mask
According to Google, we should set this to 0, as there is excessive
logging in specific use cases which has a negative impact on latency.

Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
Change-Id: Id619335848802e9d9a9bc13100d09a2cadbab07a
2024-11-17 17:42:26 +01:00
Edmond Chung
1fd74bc9af i2c: exynos: Clear pending interrupt for all operation modes
Hybrid mode can switch between polling and interrupt mode, in which case
we should always clear the pending IRQs to avoid spurious interrupts.

Bug: 288490582
Test: Device boots, GCA, CTS
Signed-off-by: Edmond Chung <edmondchung@google.com>
Change-Id: Id33160b4c724cf800430c0833ce6703a5c2946ef
2024-11-17 17:42:21 +01:00
Tyler Nijmeh
425dc84103 genirq: Use interruptible wait
Allow this task to be preempted in order to reduce latency.

Signed-off-by: kyvangka1610 <kyvangka2002@gmail.com>
2024-11-17 17:42:12 +01:00
Sultan Alsawaf
5d1ef2f0ad kernel: ems/ego: Allow CPU frequency changes to be amended before they're set
If the last CPU frequency selected isn't set before a new CPU frequency
selection arrives, then use the new selection immediately to avoid using a
stale frequency choice. This improves both performance and energy by more
closely tracking the scheduler's latest decisions.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
[Flopster101: Adapted to Exynos energy_aware governor]
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:42:09 +01:00
Rafael J. Wysocki
51d3ee0bf3 kernel: ems/ego: Reduce frequencies slower
The schedutil governor reduces frequencies too fast in some situations,
which causes undesirable performance drops.

To address that issue, make schedutil reduce the frequency slower by
setting it to the average of the value chosen during the previous
iteration of governor computations and the new one coming from its
frequency selection formula.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=194963
Reported-by: John <john.ettedgui@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Cykeek <Cykeek@proton.me>
Signed-off-by: negrroo <mohammedaelnaggar1@gmail.com>
Signed-off-by: priiii1808 <priyanshusinghal0818@gmail.com>
[Flopster101: Adapted to Exynos energy_aware governor]
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:42:02 +01:00
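
The change itself is a one-line smoothing step in the frequency-selection path; a hedged sketch (variable names follow upstream schedutil, the adapted Exynos governor may differ):

    /* Sketch: when lowering the frequency, step down to the average of the
     * previous choice and the newly computed one instead of jumping. */
    if (next_freq < sg_policy->next_freq)
            next_freq = (sg_policy->next_freq + next_freq) / 2;
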
Sultan Alsawaf
cdf47a7386 kernel: ems/ego: Set default up/down rate limits to 500/1000 us
This is empirically observed to yield good performance with reduced power
consumption when the down rate limit is configured to be 2x longer than the
up rate limit. This reduces bouncing between CPU frequencies by stalling
down-clocking, which not only improves performance but also,
counter-intuitively, improves power consumption.

The short up/down rate limits also provide improved interactivity and
real-time response.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
[Flopster101: Adapted to Exynos energy_aware governor]
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:41:59 +01:00
Sugar Zhang
7f76519186 dmaengine: pl330: Use tasklet_hi_schedule
Use tasklet_hi_schedule for better audio performance,
especially for LLA (Low Latency Audio) situation.

Signed-off-by: Sugar Zhang <sugar.zhang@rock-chips.com>
Change-Id: Ic5a215a269e718b0e5613132cb9fe9b58940d0e1
2024-11-17 17:41:55 +01:00
Park Ju Hyung
77fa911b76 ssg: Set max available ratio to 25
Testing:
[ElectroPerf & resist15]
In testing, we found significant improvements in sequential read and write
speeds. Some screenshots of the tests are below:

Before: https://i.imgur.com/UBL74X2.jpg
After: https://i.imgur.com/CrkD5iE.jpg

Change-Id: Idd7f5c7df0a7fc1535555927923491ecb39bc6a9
[Tashar02: Apply patch on kernel]
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2024-11-17 17:41:50 +01:00
Sultan Alsawaf
09f69d7d5f cpuidle: Reject idle entry if need_resched() is true
There's no reason to enter idle at this point in __CPU_PM_CPU_IDLE_ENTER()
if the CPU needs to reschedule. Instead of fruitlessly entering the
architecture's idle routine, reject the idle entry attempt with an error as
an optimization.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:41:46 +01:00
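
A hedged sketch of the idea, simplified from the __CPU_PM_CPU_IDLE_ENTER() helper in include/linux/cpuidle.h:

    /* Sketch: don't bother entering the architecture's idle routine when a
     * reschedule is already pending; report -EBUSY instead. */
    if (need_resched())
            ret = -EBUSY;
    else
            ret = low_level_idle_enter(idx);
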
ztc1997
1b23bb6575 f2fs: use copy_page for full page copy 2024-11-17 17:41:42 +01:00
Mark-PK Tsai
8ddfc9be05 zram: use copy_page for full page copy
Some architectures, such as arm, have implemented optimized copy_page for
full page copying.

Replace the full page memcpy with copy_page to take advantage of the
optimization.

Link: https://lkml.kernel.org/r/20231007070554.8657-1-mark-pk.tsai@mediatek.com
Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: YJ Chiang <yj.chiang@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-11-17 17:41:38 +01:00
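
The substitution itself is mechanical; a simplified sketch (both pointers must be page-aligned):

    /* Before: generic memcpy of one full page. */
    memcpy(dst, src, PAGE_SIZE);

    /* After: copy_page() lets architectures such as arm use their optimized
     * full-page copy routine. */
    copy_page(dst, src);
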
Mark-PK Tsai
33247c3931 zsmalloc: use copy_page for full page copy
Some architectures have implemented optimized copy_page for full page
copying, such as arm.

On my arm platform, using the copy_page helper for single page copying is
about 10 percent faster than memcpy.

Link: https://lkml.kernel.org/r/20231006060245.7411-1-mark-pk.tsai@mediatek.com
Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: YJ Chiang <yj.chiang@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-11-17 17:41:35 +01:00
Gao Xiang
fa819c7433 BACKPORT: erofs: fix lz4 inplace decompression
commit 3c12466b6b7bf1e56f9b32c366a3d83d87afb4de upstream.

Currently EROFS can map another compressed buffer for inplace
decompression, that was used to handle the cases that some pages of
compressed data are actually not in-place I/O.

However, like most simple LZ77 algorithms, LZ4 expects the compressed
data is arranged at the end of the decompressed buffer and it
explicitly uses memmove() to handle overlapping:
  __________________________________________________________
 |_ direction of decompression --> ____ |_ compressed data _|

Although EROFS arranges compressed data like this, it typically maps two
individual virtual buffers so the relative order is uncertain.
Previously, it was hardly observed since LZ4 only uses memmove() for
short overlapped literals and x86/arm64 memmove implementations seem to
completely cover it up and they don't have this issue.  Juhyung reported
that EROFS data corruption can be found on a new Intel x86 processor.
After some analysis, it seems that recent x86 processors with the new
FSRM feature expose this issue with "rep movsb".

Let's strictly use the decompressed buffer for lz4 inplace
decompression for now.  Later, as a useful improvement, we could try
to tie up these two buffers together in the correct order.

Reported-and-tested-by: Juhyung Park <qkrwngud825@gmail.com>
Closes: https://lore.kernel.org/r/CAD14+f2AVKf8Fa2OO1aAUdDNTDsVzzR6ctU_oJSmTyd6zSYR2Q@mail.gmail.com
Fixes: 0ffd71bcc3a0 ("staging: erofs: introduce LZ4 decompression inplace")
Fixes: 598162d05080 ("erofs: support decompress big pcluster for lz4 backend")
Cc: stable <stable@vger.kernel.org> # 5.4+
Tested-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/20231206045534.3920847-1-hsiangkao@linux.alibaba.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-17 17:41:30 +01:00
Sultan Alsawaf
18de03d1a6 kernel: Don't allow userspace to alter IRQ affinities
The msm_irqbalance service in userspace constantly migrates IRQs between
CPUs according to its whims, which is not desired. All of the IRQs have
a sane affinity (CPU0 if unimportant, CPU4-7 otherwise), so prevent
userspace from tampering with that.

Signed-off-by: Sultan Alsawaf <sultanxda@gmail.com>
(cherry picked from commit 6cedf3c9b1f8c962d19ce4151ca5caaff69e3c6a)
(cherry picked from commit 8fc0013ba4094fd8fe95fb0d23af0936347060f4)
2024-11-17 17:41:27 +01:00
Pzqqt
3de61e729d kernel: sched: Provide more PELT half-life options
- Regenerate `kernel/sched/sched-pelt.h` using `Documentation/scheduler/sched-pelt`.
- Now we can choose from 32ms (default), 16ms, 12ms, 8ms.
2024-11-17 17:41:17 +01:00
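
For context, the constants in sched-pelt.h are derived from a decay factor y chosen so that load halves after the configured number of 1 ms periods; a small self-contained sketch of that math (illustrative, mirroring what the sched-pelt generator computes):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            const int halflife[] = { 32, 16, 12, 8 };   /* selectable options, ms */

            for (int i = 0; i < 4; i++) {
                    /* y^halflife = 0.5  =>  y = 0.5^(1/halflife) */
                    double y = pow(0.5, 1.0 / halflife[i]);
                    /* Approximate maximum of the series 1024 * (1 + y + y^2 + ...) */
                    double max_avg = 1024.0 / (1.0 - y);

                    printf("half-life %2d ms: y = %.6f, max avg ~= %.0f\n",
                           halflife[i], y, max_avg);
            }
            return 0;
    }
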
Pzqqt
648fb626ad kernel: sched: Configuring PELT half-life via Kconfig
Note that adjusting PELT half-life via kernel parameters is only allowed when CONFIG_PELT_UTIL_HALFLIFE_DEFAULT is selected.
2024-11-17 17:41:11 +01:00