Commit graph

293 commits

Author SHA1 Message Date
Danny Lin
e0f839a996 f2fs: Add support for reporting a fake kernel version to fsck
fsck.f2fs forces a filesystem fix on boot if it detects that the current
kernel version differs from the one saved in the superblock, which results in
fsck blocking boot for a long time (~35 seconds). This commit provides a
way to report a constant fake kernel version to fsck to avoid triggering
the version check, which is useful if you boot new kernel builds
frequently.
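
One plausible shape for this, assuming hypothetical option names
(CONFIG_F2FS_REPORT_FAKE_KERNEL_VERSION and its string value); the real
patch may differ:

    /* fs/f2fs/super.c (sketch): fill the superblock version field from a
     * constant instead of the running kernel's release string */
    #ifdef CONFIG_F2FS_REPORT_FAKE_KERNEL_VERSION
    	strscpy(version, CONFIG_F2FS_FAKE_KERNEL_VERSION, VERSION_LEN);
    #else
    	strscpy(version, init_utsname()->release, VERSION_LEN);
    #endif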

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2024-11-17 17:43:26 +01:00
Tim Zimmermann
c0f37aaa23 drivers: net: wireless: broadcom: enable p2p mac randomization support
Change-Id: Ie7103ad2da7c080a4dff7c9d040b7c7fe6d08509
2024-11-17 17:43:19 +01:00
Jesse Chan
b422cd8b8a drivers: battery_v2: sec_battery: export {CURRENT/VOLTAGE}_MAX to sysfs
Inform the system whether we are charging normally, fast, or rapidly. This
will be displayed on the lockscreen.
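
A hedged sketch of how such read-only nodes are usually exported (the
struct field and attribute names here are illustrative, not the driver's
actual ones; voltage_max would be analogous):

    static ssize_t current_max_show(struct device *dev,
    				struct device_attribute *attr, char *buf)
    {
    	struct sec_battery_info *battery = dev_get_drvdata(dev);

    	/* hypothetical field; the real driver reads its charging tables */
    	return sprintf(buf, "%d\n", battery->current_max);
    }
    static DEVICE_ATTR_RO(current_max);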

Change-Id: Id0de196e02bd5393cb4fb90835f18caa1d2fe20d
2024-11-17 17:43:14 +01:00
Diep Quynh
e9ed6faf2c acpm: Disable logging by default
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-11-17 17:43:06 +01:00
idkwhoiam322
f434223a87 printk: Don't allow userspace to write to /dev/kmsg
There is extensive spam in dmesg caused by userspace.
This kills all "init", "healthd" and "logd" messages in the kernel log,
making it more readable.
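
The usual shape of this change is to short-circuit the write path
(a sketch against the 5.10 printk code; exact placement varies):

    /* kernel/printk/printk.c (sketch) */
    static ssize_t devkmsg_write(struct kiocb *iocb, struct iov_iter *from)
    {
    	/* refuse all userspace writes to /dev/kmsg */
    	return -EPERM;
    }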

Extracted from 59f163ac76.

Signed-off-by: idkwhoiam322 <idkwhoiam322@raphielgang.org>
Signed-off-by: prorooter007 <shreyashwasnik112@gmail.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
2024-11-17 17:43:00 +01:00
Park Ju Hyung
311d21b734 blk: disable IO_STAT completely
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
(cherry picked from commit 4d3c7baa4acb5fa3b238dde267826254788e86e5)
(cherry picked from commit 6e658026fe20dc1d651c5f4a56afd363a3195f42)
(cherry picked from commit 2ce27817f2fa8e4bbd3420b2f6c050d404703efc)
(cherry picked from commit 85c200268a4bd8b9ad639991bdaf233ba14f6ade)
(cherry picked from commit 35ab17dbef37e36630d542dce420dc3ac6467d74)
(cherry picked from commit 77662095632a51547ca5f921ec453802788d58ee)
(cherry picked from commit 46e2b47e9560de0877079c8a5db0f5ae742133c4)
(cherry picked from commit d12b3702c6e03ac84d399d41e2859b24e8630dea)
(cherry picked from commit 79be04236891dcd6e5e87a25626a64d6d0d0a42f)
(cherry picked from commit c85b5a7d9c215ca4dc35e894149523b33409fd40)
2024-11-17 17:42:57 +01:00
haridhayal11
58d55d8296 block: disable I/O stats accounting by default
While Android userspace (e.g. storaged) does use iostats via
/proc/diskstats, init will explicitly enable iostats for the devices on
which it is primarily used - sda and sdf. Avoid the 0.5-1% overhead for
block devices that do not need it.
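
A plausible sketch: drop IO_STAT from the blk-mq default queue flags and
let init opt individual disks back in through
/sys/block/<dev>/queue/iostats:

    /* include/linux/blkdev.h (sketch): QUEUE_FLAG_IO_STAT removed */
    #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_SAME_COMP) |	\
    				 (1 << QUEUE_FLAG_NOWAIT))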

Co-Authored-By: kdrag0n <dragon@khronodragon.com>
2024-11-17 17:42:54 +01:00
Danny Lin
84b47ecf71 tcp: Enable ECN negotiation by default
This is now the default for all connections in iOS 11+, and we have
RFC 3168 ECN fallback to detect and disable ECN for broken flows.
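
In 5.10 this is a one-line default change in tcp_sk_init() (sketch;
2 means reply-only, 1 also requests ECN on outgoing connections):

    /* net/ipv4/tcp_ipv4.c, tcp_sk_init() (sketch) */
    net->ipv4.sysctl_tcp_ecn = 1;		/* was 2 */
    net->ipv4.sysctl_tcp_ecn_fallback = 1;	/* RFC 3168 fallback stays on */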

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2024-11-17 17:42:51 +01:00
Zhangyao,Ye
67763ebe89 Disable vmscan warning print
When free memory drops below a threshold, the system prints call-stack
messages at high frequency; the excessive log output disturbs normal
system operation.

Change-Id: I9b1cb84537486e2979cb93ac9a248bec85453d9c
Signed-off-by: wya <wya@codeaurora.org>
2024-11-17 17:42:47 +01:00
Jens Axboe
81024ea319 blk-mq: fix corruption with direct issue
If we attempt a direct issue to a SCSI device, and it returns BUSY, then
we queue the request up normally. However, the SCSI layer may have
already setup SG tables etc for this particular command. If we later
merge with this request, then the old tables are no longer valid. Once
we issue the IO, we only read/write the original part of the request,
not the new state of it.

This causes data corruption, and is most often noticed with the file
system complaining about the just read data being invalid:

[  235.934465] EXT4-fs error (device sda1): ext4_iget:4831: inode #7142: comm dpkg-query: bad extra_isize 24937 (inode size 256)

because most of it is garbage...

This doesn't happen from the normal issue path, as we will simply defer
the request to the hardware queue dispatch list if we fail. Once it's on
the dispatch list, we never merge with it.

Fix this from the direct issue path by flagging the request as
REQ_NOMERGE so we don't change the size of it before issue.

See also:
  https://bugzilla.kernel.org/show_bug.cgi?id=201685
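
The upstream fix is a one-line flag set on the requeue path, roughly
(placement varies across stable trees):

    case BLK_STS_RESOURCE:
    case BLK_STS_DEV_RESOURCE:
    	/*
    	 * Direct dispatch failed and the driver may already hold
    	 * state (e.g. SCSI SG tables) for this request, so forbid
    	 * any later merge into it before requeueing.
    	 */
    	rq->cmd_flags |= REQ_NOMERGE;
    	blk_mq_update_dispatch_busy(hctx, true);
    	__blk_mq_requeue_request(rq);
    	break;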

Tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes: 6ce3dd6eec1 ("blk-mq: issue directly if hw queue isn't busy in case of 'none'")
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
(cherry picked from commit 9a897ce1d5b6611daa27bf00fcfb5c97a3d826b4)
(cherry picked from commit 66af19f52cf6d2a9deef8de2f451604d49ef42f1)
2024-11-17 17:42:37 +01:00
Panchajanya1999
f2d8b4a3b3 binder_alloc: Avoid page memory allocation in high memory
In binder, using GFP_HIGHMEM means the allocated memory is not mapped
into the kernel's virtual address space, which prevents the kernel
from referencing it directly.
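
The change amounts to dropping __GFP_HIGHMEM from the per-page
allocation in binder_alloc (sketch):

    /* drivers/android/binder_alloc.c (sketch) */
    page->page_ptr = alloc_page(GFP_KERNEL | __GFP_ZERO);
    			/* was GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO */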

Change-Id: I952dbc8ae205e47fa00ddf186ef306903f623367
Signed-off-by: Panchajanya1999 <panchajanya@azure-dev.live>
Signed-off-by: Jebaitedneko <Jebaitedneko@gmail.com>
2024-11-17 17:42:33 +01:00
TheCrazyLex
99c5ba5745 binder: Disable debug mask
According to Google, this should be set to 0: there is excessive
logging in specific use cases, which has a negative impact on latency.
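
Most likely a default-initializer change (sketch):

    /* drivers/android/binder.c (sketch) */
    static uint32_t binder_debug_mask;
    /* was BINDER_DEBUG_USER_ERROR | BINDER_DEBUG_FAILED_TRANSACTION |
     *     BINDER_DEBUG_DEAD_TRANSACTION */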

Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
Change-Id: Id619335848802e9d9a9bc13100d09a2cadbab07a
2024-11-17 17:42:26 +01:00
Edmond Chung
1fd74bc9af i2c: exynos: Clear pending interrupt for all operation modes
Hybrid mode may switch between polling and interrupt mode, in which
case we should always clear the pending IRQs to avoid spurious
interrupts.

Bug: 288490582
Test: Device boots, GCA, CTS
Signed-off-by: Edmond Chung <edmondchung@google.com>
Change-Id: Id33160b4c724cf800430c0833ce6703a5c2946ef
2024-11-17 17:42:21 +01:00
Tyler Nijmeh
425dc84103 genirq: Use interruptible wait
Allow this task to be preempted in order to reduce latency.
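
Presumably the threaded-handler wait in synchronize_irq() switches to
its interruptible variant (sketch):

    /* kernel/irq/manage.c, synchronize_irq() (sketch) */
    wait_event_interruptible(desc->wait_for_threads,
    			 !atomic_read(&desc->threads_active));
    /* was wait_event(), an uninterruptible sleep */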

Signed-off-by: kyvangka1610 <kyvangka2002@gmail.com>
2024-11-17 17:42:12 +01:00
Sultan Alsawaf
5d1ef2f0ad kernel: ems/ego: Allow CPU frequency changes to be amended before they're set
If the last CPU frequency selected isn't set before a new CPU frequency
selection arrives, then use the new selection immediately to avoid using a
stale frequency choice. This improves both performance and energy by more
closely tracking the scheduler's latest decisions.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
[Flopster101: Adapted to Exynos energy_aware governor]
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:42:09 +01:00
Rafael J. Wysocki
51d3ee0bf3 kernel: ems/ego: Reduce frequencies slower
The schedutil governor reduces frequencies too fast in some
situations, which causes undesirable performance drops to
appear.

To address that issue, make schedutil reduce the frequency slower by
setting it to the average of the value chosen during the previous
iteration of governor computations and the new one coming from its
frequency selection formula.
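
The upstream patch implements this as a simple average on the
ramp-down path; the ego port presumably mirrors it:

    /* kernel/sched/cpufreq_schedutil.c (upstream sketch) */
    if (sg_policy->next_freq > next_freq)
    	next_freq = (sg_policy->next_freq + next_freq) >> 1;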

Link: https://bugzilla.kernel.org/show_bug.cgi?id=194963
Reported-by: John <john.ettedgui@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Cykeek <Cykeek@proton.me>
Signed-off-by: negrroo <mohammedaelnaggar1@gmail.com>
Signed-off-by: priiii1808 <priyanshusinghal0818@gmail.com>
[Flopster101: Adapted to Exynos energy_aware governor]
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:42:02 +01:00
Sultan Alsawaf
cdf47a7386 kernel: ems/ego: Set default up/down rate limits to 500/1000 us
This is empirically observed to yield good performance with reduced power
consumption when the down rate limit is configured to be 2x longer than
the up rate limit. This reduces bouncing between CPU frequencies by
stalling down-clocking, which not only improves performance, but also
counter-intuitively improves power consumption.

The short up/down rate limits also provide improved interactivity and
real-time response.
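
Assuming schedutil-style tunables in the ego governor (the field names
are an assumption), the new defaults would look like:

    /* sketch: default rate limits in microseconds */
    tunables->up_rate_limit_us = 500;
    tunables->down_rate_limit_us = 1000;	/* 2x the up limit */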

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
[Flopster101: Adapted to Exynos energy_aware governor]
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:41:59 +01:00
Sugar Zhang
7f76519186 dmaengine: pl330: Use tasklet_hi_schedule
Use tasklet_hi_schedule for better audio performance,
especially for LLA (Low Latency Audio) situation.
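
A one-call swap in the pl330 channel tasklet scheduling (sketch):

    /* drivers/dma/pl330.c (sketch) */
    tasklet_hi_schedule(&pch->task);	/* was tasklet_schedule() */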

Signed-off-by: Sugar Zhang <sugar.zhang@rock-chips.com>
Change-Id: Ic5a215a269e718b0e5613132cb9fe9b58940d0e1
2024-11-17 17:41:55 +01:00
Park Ju Hyung
77fa911b76 ssg: Set max available ratio to 25
Testing:
[ElectroPerf & resist15]
In testing, we found significant improvements in sequential read and
write speeds. Some screenshots of the tests are below:

Before: https://i.imgur.com/UBL74X2.jpg
After: https://i.imgur.com/CrkD5iE.jpg

Change-Id: Idd7f5c7df0a7fc1535555927923491ecb39bc6a9
[Tashar02: Apply patch on kernel]
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2024-11-17 17:41:50 +01:00
Sultan Alsawaf
09f69d7d5f cpuidle: Reject idle entry if need_resched() is true
There's no reason to enter idle at this point in __CPU_PM_CPU_IDLE_ENTER()
if the CPU needs to reschedule. Instead of fruitlessly entering the
architecture's idle routine, reject the idle entry attempt with an error as
an optimization.
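
Inside the __CPU_PM_CPU_IDLE_ENTER() macro this is an early bail-out,
roughly (sketch):

    int __ret = 0;

    if (need_resched())
    	__ret = -EBUSY;		/* skip the arch idle routine */
    else
    	__ret = low_level_idle_enter(idx);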

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:41:46 +01:00
ztc1997
1b23bb6575 f2fs: use copy_page for full page copy 2024-11-17 17:41:42 +01:00
Mark-PK Tsai
8ddfc9be05 zram: use copy_page for full page copy
Some architectures, such as arm, have implemented optimized copy_page for
full page copying.

Replace the full page memcpy with copy_page to take advantage of the
optimization.
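
The substitution itself is mechanical, and the same one-liner applies
to the f2fs and zsmalloc commits nearby (both pointers must be
page-aligned, which holds for full-page copies):

    copy_page(dst, src);	/* was memcpy(dst, src, PAGE_SIZE) */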

Link: https://lkml.kernel.org/r/20231007070554.8657-1-mark-pk.tsai@mediatek.com
Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: YJ Chiang <yj.chiang@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-11-17 17:41:38 +01:00
Mark-PK Tsai
33247c3931 zsmalloc: use copy_page for full page copy
Some architectures have implemented optimized copy_page for full page
copying, such as arm.

On my arm platform, using the copy_page helper for single page copying
is about 10 percent faster than memcpy.

Link: https://lkml.kernel.org/r/20231006060245.7411-1-mark-pk.tsai@mediatek.com
Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: YJ Chiang <yj.chiang@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-11-17 17:41:35 +01:00
Gao Xiang
fa819c7433 BACKPORT: erofs: fix lz4 inplace decompression
commit 3c12466b6b7bf1e56f9b32c366a3d83d87afb4de upstream.

Currently EROFS can map another compressed buffer for inplace
decompression, that was used to handle the cases that some pages of
compressed data are actually not in-place I/O.

However, like most simple LZ77 algorithms, LZ4 expects the compressed
data is arranged at the end of the decompressed buffer and it
explicitly uses memmove() to handle overlapping:
  __________________________________________________________
 |_ direction of decompression --> ____ |_ compressed data _|

Although EROFS arranges compressed data like this, it typically maps two
individual virtual buffers so the relative order is uncertain.
Previously, it was hardly observed since LZ4 only uses memmove() for
short overlapped literals and x86/arm64 memmove implementations seem to
completely cover it up and they don't have this issue.  Juhyung reported
that EROFS data corruption can be found on a new Intel x86 processor.
After some analysis, it seems that recent x86 processors with the new
FSRM feature expose this issue with "rep movsb".

Let's strictly use the decompressed buffer for lz4 inplace
decompression for now.  Later, as a useful improvement, we could try
to tie these two buffers together in the correct order.

Reported-and-tested-by: Juhyung Park <qkrwngud825@gmail.com>
Closes: https://lore.kernel.org/r/CAD14+f2AVKf8Fa2OO1aAUdDNTDsVzzR6ctU_oJSmTyd6zSYR2Q@mail.gmail.com
Fixes: 0ffd71bcc3a0 ("staging: erofs: introduce LZ4 decompression inplace")
Fixes: 598162d05080 ("erofs: support decompress big pcluster for lz4 backend")
Cc: stable <stable@vger.kernel.org> # 5.4+
Tested-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/20231206045534.3920847-1-hsiangkao@linux.alibaba.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-17 17:41:30 +01:00
Sultan Alsawaf
18de03d1a6 kernel: Don't allow userspace to alter IRQ affinities
The msm_irqbalance service in userspace constantly migrates IRQs between
CPUs according to its whims, which is not desired. All of the IRQs have
a sane affinity (CPU0 if unimportant, CPU4-7 otherwise), so prevent
userspace from tampering with that.
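
The straightforward way to do this is to reject the procfs writes
(sketch; which entry points get stubbed may vary):

    /* kernel/irq/proc.c (sketch) */
    static ssize_t write_irq_affinity(int type, struct file *file,
    		const char __user *buffer, size_t count, loff_t *pos)
    {
    	/* keep the kernel's chosen affinities */
    	return -EPERM;
    }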

Signed-off-by: Sultan Alsawaf <sultanxda@gmail.com>
(cherry picked from commit 6cedf3c9b1f8c962d19ce4151ca5caaff69e3c6a)
(cherry picked from commit 8fc0013ba4094fd8fe95fb0d23af0936347060f4)
2024-11-17 17:41:27 +01:00
Pzqqt
3de61e729d kernel: sched: Provide more PELT half-life options
- Regenerate `kernel/sched/sched-pelt.h` using `Documentation/scheduler/sched-pelt`.
- Now we can choose from 32ms (default), 16ms, 12ms, 8ms; the decay math is sketched below.
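
For reference, the decay constant each option implies (sketch):

    /*
     * PELT's per-period (~1 ms) decay factor y follows from the
     * half-life HL: y^HL = 1/2  =>  y = 2^(-1/HL)
     *   HL = 32 ms: y ~= 0.9786 (default, smoothest)
     *   HL = 16 ms: y ~= 0.9576
     *   HL =  8 ms: y ~= 0.9170 (fastest ramp, most reactive)
     */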
2024-11-17 17:41:17 +01:00
Pzqqt
648fb626ad kernel: sched: Configuring PELT half-life via Kconfig
Note that adjusting PELT half-life via kernel parameters is only allowed when CONFIG_PELT_UTIL_HALFLIFE_DEFAULT is selected.
2024-11-17 17:41:11 +01:00
ztc1997
c05672273a block: Do not allow boosters to adjust scheduler 2024-11-17 17:41:04 +01:00
Nahuel Gómez
f351f687ab block: elevator: fix missing header
We need this to access task_is_booster().

ld.lld: error: undefined symbol: task_is_booster
>>> referenced by elevator.c:774 (../block/elevator.c:774)
>>>               vmlinux.o:(elv_iosched_store)
>>> did you mean: task_is_booster
>>> defined in: vmlinux.o

Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:41:00 +01:00
Nahuel Gómez
58a2720a2c mm: default overcommit_ratio to 100
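
Sketch of the default change (upstream the knob lives in mm/util.c):

    /* mm/util.c (sketch): was 50 */
    int sysctl_overcommit_ratio __read_mostly = 100;
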
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:40:33 +01:00
Nahuel Gómez
fcc88303d8 fs: set VFS cache pressure to 20
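
Sketch of the default change; lower values make the kernel keep
dentry/inode caches around longer (upstream default is 100):

    /* fs/dcache.c (sketch) */
    int sysctl_vfs_cache_pressure __read_mostly = 20;
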
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:40:30 +01:00
Nahuel Gómez
50b38430ba sound: abox: Bump buffer sizes up
I'm not sure if this will help, but the idea is to give the codec more room for error, since currently there is audio crackling under moderate CPU load.

Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:40:16 +01:00
TEACAET
ee9eb5ecb8 config: Enable Cpusets Assist 2024-11-17 17:38:55 +01:00
darkhz
bf2ac59ec9 sched/uclamp: Fix incorrect uclamp.latency_sensitive setting
This patch fixes the latency_sensitive flag for all cpuset cgroups, and
the value present in the uclamp.latency_sensitive node directly
corresponds to the task_group's latency_sensitive value.

Prior to this patch, this was not the case. The
uclamp_latency_sensitive() function applied values only to the cpu
cgroup subsys instead of the required cpuset cgroup subsys, as a
result of which the latency_sensitive value remained zero for all
taskgroups irrespective of its setting.

Also, fix a situation where latency_sensitive is enabled for the
cpuset's root cgroup, in which case all tasks will have their value
as 1, which in turn will enable prefer_idle for all tasks. This is
undesired and may cause high battery drain.
2024-11-17 17:38:14 +01:00
Uladzislau Rezki
de53544dd2 workqueue: Make queue_rcu_work() use call_rcu_flush()
Earlier commits in this series allow battery-powered systems to build
their kernels with the default-disabled CONFIG_RCU_LAZY=y Kconfig option.
This Kconfig option causes call_rcu() to delay its callbacks in order
to batch them.  This means that a given RCU grace period covers more
callbacks, thus reducing the number of grace periods, in turn reducing
the amount of energy consumed, which increases battery lifetime which
can be a very good thing.  This is not a subtle effect: In some important
use cases, the battery lifetime is increased by more than 10%.

This CONFIG_RCU_LAZY=y option is available only for CPUs that offload
callbacks, for example, CPUs mentioned in the rcu_nocbs kernel boot
parameter passed to kernels built with CONFIG_RCU_NOCB_CPU=y.

Delaying callbacks is normally not a problem because most callbacks do
nothing but free memory.  If the system is short on memory, a shrinker
will kick all currently queued lazy callbacks out of their laziness,
thus freeing their memory in short order.  Similarly, the rcu_barrier()
function, which blocks until all currently queued callbacks are invoked,
will also kick lazy callbacks, thus enabling rcu_barrier() to complete
in a timely manner.

However, there are some cases where laziness is not a good option.
For example, synchronize_rcu() invokes call_rcu(), and blocks until
the newly queued callback is invoked.  It would not be good for
synchronize_rcu() to block for ten seconds, even on an idle system.
Therefore, synchronize_rcu() invokes call_rcu_flush() instead of
call_rcu().  The arrival of a non-lazy call_rcu_flush() callback on a
given CPU kicks any lazy callbacks that might be already queued on that
CPU.  After all, if there is going to be a grace period, all callbacks
might as well get full benefit from it.

Yes, this could be done the other way around by creating a
call_rcu_lazy(), but earlier experience with this approach and
feedback at the 2022 Linux Plumbers Conference shifted the approach
to call_rcu() being lazy with call_rcu_flush() for the few places
where laziness is inappropriate.

And another call_rcu() instance that cannot be lazy is the one
in queue_rcu_work(), given that callers to queue_rcu_work() are
not necessarily OK with long delays.

Therefore, make queue_rcu_work() use call_rcu_flush() in order to revert
to the old behavior.
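
The patch itself is a one-call substitution in queue_rcu_work() (sketch):

    /* kernel/workqueue.c (sketch) */
    call_rcu_flush(&rwork->rcu, rcu_work_rcufn);	/* was call_rcu() */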

Signed-off-by: Uladzislau Rezki <urezki@gmail.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2024-11-17 17:38:08 +01:00
John Galt
bfd895a6cc rcu_boost: always without delay
Simultaneously improves interactivity and power efficiency

[Flopster101: This also invalidates any value set by RCU_BOOST_DELAY.]
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:37:56 +01:00
Qais Yousef
17cc903017 kernel: ems/ego: cap iowait boost by uclamp_max
Which is a backport of upstream fix:

d37aee9018e6 ("sched/uclamp: Fix iowait boost escaping uclamp restriction")

Bug: 261695814
Signed-off-by: Qais Yousef <qyousef@google.com>
Change-Id: Ibe8175edb9dea35e325f1a6f4306885ab8b6b28a
[Flopster101: Adapted to Exynos energy_aware governor]
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:37:53 +01:00
ThunderStorms21th
484f198a6b mm: swap - set page_cluster at 0
Signed-off-by: ThunderStorms21th <pinakastorm@gmail.com>
2024-11-17 17:37:44 +01:00
Nahuel Gómez
4cf82f496d drivers: tty: fix build without SEC_MM
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:37:39 +01:00
Nahuel Gómez
185d81abe4 drivers: zram: set default comp, algorithm to lzo-rle
Now that we have dropped Samsung's mm hacks, lzo-rle performs much better. Weird, right?

Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:36:58 +01:00
Joel Gómez
689e517a93 zram_drv: Allow overriding disk size from kernel, but in bytes
* Based on 0418f87051

Same concept, uses bytes instead of GBs.

Signed-off-by: Joel Gómez <nahuelgomez329@gmail.com>
2024-11-17 17:36:52 +01:00
Haky86
3ca15848a1 drivers: gpu: arm: bv_r38p1: Get rid of MALI_ARBITRATION configs
* Fixes:
drivers/gpu/arm/bv_r38p1/Kconfig:389:warning: ignoring type redefinition of 'MALI_ARBITRATION' from 'bool' to 'tristate'

Change-Id: Ia7a7d1a4fd68344abd1c07f7bb5e9ef214bdc51c
2024-11-17 17:08:28 +01:00
fluffball3
da38a69671 security: selinux: Disable Samsung SELinux
Change-Id: I85f450d1a4b93e150e7df90a7471b38fa027d673
2024-11-17 17:07:10 +01:00
Ksawlii
d2437dcc2f Added build dir 2024-11-17 16:33:43 +01:00
Ksawlii
2c1547731e Rebrand to FireAsf 2024-11-17 16:33:16 +01:00
Ksawlii
bc2b96f62c gpu: exynos: Underclock to 2093MHz memory frequency 2024-11-17 16:26:31 +01:00
Ksawlii
097a76dfd7 build_kernel.sh: Made my life easier 2024-11-08 12:01:32 +01:00
Greg Kroah-Hartman
34ff7ac6f6 Linux 5.10.199
Link: https://lore.kernel.org/r/20231023104817.691299567@linuxfoundation.org
Tested-by: Florian Fainelli <florian.fainelli@broadcom.com>
Tested-by: Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
Link: https://lore.kernel.org/r/20231024083306.700855687@linuxfoundation.org
Tested-by: Florian Fainelli <florian.fainelli@broadcom.com>
Tested-by: Slade Watkins <srw@sladewatkins.net>
Tested-by: Linux Kernel Functional Testing <lkft@linaro.org>
Tested-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-08 11:26:21 +01:00
Zhang Changzhong
00fddf40c2 xfrm6: fix inet6_dev refcount underflow problem
[ Upstream commit cc9b364bb1d58d3dae270c7a931a8cc717dc2b3b ]

There are race conditions that may lead to inet6_dev refcount underflow
in xfrm6_dst_destroy() and rt6_uncached_list_flush_dev().

One of the refcount underflow bugs is shown below:
	(cpu 1)                	|	(cpu 2)
xfrm6_dst_destroy()             |
  ...                           |
  in6_dev_put()                 |
				|  rt6_uncached_list_flush_dev()
  ...				|    ...
				|    in6_dev_put()
  rt6_uncached_list_del()       |    ...
  ...                           |

xfrm6_dst_destroy() calls rt6_uncached_list_del() after in6_dev_put(),
so rt6_uncached_list_flush_dev() has a chance to call in6_dev_put()
again for the same inet6_dev.

Fix it by moving in6_dev_put() after rt6_uncached_list_del() in
xfrm6_dst_destroy().

Fixes: 510c321b5571 ("xfrm: reuse uncached_list to track xdsts")
Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-08 11:26:20 +01:00
Kees Cook
9a0c387013 Bluetooth: hci_sock: Correctly bounds check and pad HCI_MON_NEW_INDEX name
commit cb3871b1cd135a6662b732fbc6b3db4afcdb4a64 upstream.

The code pattern of memcpy(dst, src, strlen(src)) is almost always
wrong. In this case it is wrong because it leaves memory uninitialized
if it is less than sizeof(ni->name), and overflows ni->name when longer.

Normally strtomem_pad() could be used here, but since ni->name is a
trailing array in struct hci_mon_new_index, compilers that don't support
-fstrict-flex-arrays=3 can't tell how large this array is via
__builtin_object_size(). Instead, open-code the helper and use sizeof()
since it will work correctly.

Additionally mark ni->name as __nonstring since it appears to not be a
%NUL terminated C string.

Cc: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Cc: Edward AD <twuufnxlz@gmail.com>
Cc: Marcel Holtmann <marcel@holtmann.org>
Cc: Johan Hedberg <johan.hedberg@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: linux-bluetooth@vger.kernel.org
Cc: netdev@vger.kernel.org
Fixes: 18f547f3fc07 ("Bluetooth: hci_sock: fix slab oob read in create_monitor_event")
Link: https://lore.kernel.org/lkml/202310110908.F2639D3276@keescook/
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-08 11:26:20 +01:00