Defining an overlapped region is not common, but it is possible:
memblock_add_range() allows a new range to overlap existing ones.
memsize currently does not handle overlapped regions at all; this patch
handles one specific overlapped case.
Here is the case: there is an unknown memsize region, meaning the
region was removed and not passed at the bootloader stage, and there is
a reserved memory region defined in the device tree that overlaps the
unknown region.
The information in the device tree is expected to make the unknown
region clear.
This patch handles the overlapped region under the following conditions:
1) The already existing overlapped region must be unknown and no-map.
2) The newly added region must have a name, and its range must be the
same as, or a part of, the existing one.
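As a minimal sketch of that check (the struct layout and helper name
below are illustrative assumptions, not the actual patch):

struct memsize_rgn {			/* assumed layout, for illustration */
	phys_addr_t base;
	phys_addr_t size;
	bool nomap;
	char name[16];			/* "unknown" when not passed by bootloader */
};

/* Return true if the newly added named region may resolve part of @old. */
static bool memsize_can_resolve_overlap(const struct memsize_rgn *old,
					phys_addr_t base, phys_addr_t size,
					const char *name)
{
	/* 1) the existing region must be unknown and no-map */
	if (!old->nomap || strcmp(old->name, "unknown"))
		return false;
	/* 2) the new region must be named and lie within the existing one */
	return name && *name && base >= old->base &&
	       base + size <= old->base + old->size;
}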
Here is an example.
Before this patch, memsize shows both overlapped regions:
0x0ea000000-0x0ed900000 0x03900000 ( 58368 KB ) nomap unusable overlapped
0x0ea000000-0x0f1400000 0x07400000 ( 118784 KB ) nomap unusable unknown
After this patch, the overlapped region is named.
0x0ea000000-0x0ed900000 0x03900000 ( 58368 KB ) nomap unusable overlapped
0x0e9b00000-0x0ea000000 0x00500000 ( 5120 KB ) nomap unusable unknown
Bug: 340432773
Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
Link: https://lore.kernel.org/linux-mm/20240521023957.2587005-4-jaewon31.kim@samsung.com/
Change-Id: I7091eedcebd1f2f771124c157e5e00600b9a36fd
These are disabled in the kernel already, so it's pointless to have them here.
This time we keep dss on, because otherwise last_kmsg stops working.
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
These labels are based on observations from a running system as well as
from inspecting the code:
!delayed_work_pending:
true = 3509732
false = 7495535
!need_update:
true = 6656251
false = 840000
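As a hedged sketch (the call site below is hypothetical; it only shows
how the measured biases map onto likely()/unlikely() annotations):

/* Illustrative only: annotate each condition with its dominant outcome. */
static void shepherd_tick(int cpu, struct delayed_work *dw)
{
	if (unlikely(!delayed_work_pending(dw))) {	/* false ~68% of the time */
		if (likely(!need_update(cpu)))		/* true ~89% of the time */
			return;
		queue_delayed_work_on(cpu, system_wq, dw, 0);
	}
}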
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
Signed-off-by: Samuel Pascua <pascua.samuel.14@gmail.com>
(cherry picked from commit 52e60a930f0480f2082bc65836c7edb4861de7aa)
(cherry picked from commit bb05861a4f3bb99b76e7e43ba17f3659fcb12443)
(cherry picked from commit d92be4aa5222a52104bed81fe60c5c10a091b4c1)
Signed-off-by: Samuel Pascua <sgpascua@ngcp.ph>
Signed-off-by: Samuel Pascua <pascua.samuel.14@gmail.com>
We don't want sched_nr_migrate to be too high, as that would hurt
real-time latencies: the SCHED_OTHER load-balancing pass runs with IRQs
disabled.
Instead of making this a constant, let's use a value that scales with
the number of CPUs in the device.
Consider a device with a very weak 4-core processor that needs to load
balance 32 tasks from the most busy-but-idle CPU. That balancing would
take significantly longer than on a device with 8 cores.
Conversely, that same 8-core device balancing 32 tasks would have no
issues with SCHED_OTHER performance; however, its real-time tasks would
suffer more jitter than necessary.
Let's only balance as many tasks as there are CPUs in the device, for
optimal SCHED_OTHER performance and SCHED_FIFO / SCHED_RR latency.
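As a rough sketch, the tunable could be scaled once at boot (the hook
point and helper name below are illustrative, not the actual patch):

/* Illustrative only: cap the balancing batch at one task per CPU. */
static int __init scale_sched_nr_migrate(void)
{
	sysctl_sched_nr_migrate = num_possible_cpus();
	return 0;
}
early_initcall(scale_sched_nr_migrate);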
Freezing processes on Android usually takes less than 100 ms, and when
it takes so long that the 20-second freeze timeout is reached, it's
because the remaining processes to be frozen are deadlocked waiting for
something from a process that is already frozen. There's no point in
burning power trying to freeze for that long, so reduce the freeze
timeout to a very generous 1 second for Android and don't let anything
mess with it.
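In kernel/power/process.c terms, the change would look roughly like
this (a sketch; making the value tamper-proof, e.g. by dropping its
sysctl handler, is part of the description above but not shown here):

/* Was 20 * MSEC_PER_SEC; a deadlocked freeze won't resolve itself, so
 * give up after a still-generous 1 second. */
unsigned int __read_mostly freeze_timeout_msecs = 1 * MSEC_PER_SEC;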
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Panchajanya1999 <kernel@panchajanya.dev>
(cherry picked from commit 72fc44bd8bcfb6d1cbaafb2053173cd985d81d0c)
(cherry picked from commit 7097f6cabd228cf426da77ee71088e56beba3cd0)
(cherry picked from commit 4f1c2d26818d846c0aea6713652e2a0eaa6a23c4)
(cherry picked from commit d416b35f8187c0605ac8816e4d771288514876f1)
When some CPUs cycle out of s2idle due to non-wakeup IRQs, it's possible
for them to run while the CPU responsible for jiffies updates remains idle.
This can delay the execution of timers indefinitely until the CPU managing
the jiffies updates finally wakes up, by which point everything could be
dead if enough time passes.
Fix it by handing off timekeeping duties when the timekeeping CPU enters
s2idle and freezes its tick. When all CPUs are in s2idle, the first one to
wake up for any reason (either from a wakeup IRQ or non-wakeup IRQ) will
assume responsibility for the timekeeping tick.
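A hedged sketch of the handoff, reusing the kernel's existing
tick_do_timer_cpu / TICK_DO_TIMER_NONE bookkeeping (the exact hook in
the s2idle path is an assumption):

/* Illustrative only: before freezing the tick for s2idle, give up
 * timekeeping duty so the first CPU to wake can claim it. */
static void s2idle_relinquish_timekeeping(void)
{
	if (tick_do_timer_cpu == smp_processor_id())
		tick_do_timer_cpu = TICK_DO_TIMER_NONE;
}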
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Panchajanya1999 <kernel@panchajanya.dev>
(cherry picked from commit 717100653a78c63fe56b95721fffee5fad96ee91)
(cherry picked from commit 83196f829f9e2d4aef6a4d30ac449a7bfc985208)
(cherry picked from commit ab251b18b95c2fe80f457dabdf2e7132a6b0ea27)
(cherry picked from commit e58369f386e19f02bc5db1a155040e54dd201a60)
Asynchronous IPI users must already handle csd object lifetimes on their
own, so there's no need to prevent re-entrancy on a single CPU inside
smp_call_function_single_async(). As such, smp_call_function_single_async()
can be made more RT-friendly by just using migrate enable/disable instead
of disabling preemption, since preventing migration is all that's needed.
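The described change amounts to something like the following sketch of
kernel/smp.c (heavily abridged; the csd-lock handling and the exact
generic_exec_single() signature vary by kernel version):

int smp_call_function_single_async(int cpu, call_single_data_t *csd)
{
	int err;

	/* Preventing migration, not preemption, is all that's needed. */
	migrate_disable();	/* was: preempt_disable() */
	err = generic_exec_single(cpu, csd);
	migrate_enable();	/* was: preempt_enable() */

	return err;
}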
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Panchajanya1999 <kernel@panchajanya.dev>
(cherry picked from commit 9983455a67226511e036f19f8725cd54bac10aa1)
(cherry picked from commit a792f43200afd6594a1e07b3cb11d048a9ec1218)
(cherry picked from commit f2d0b87c64e5a1e7b73430950f92d7a64d39c964)
(cherry picked from commit 0076a73f19418f3f7867b6ce2fa93f46105458c5)
After a dedicated grace-period workqueue was added to RCU in order to
benefit from rescuer threads, the relevant workers were moved to the new
workqueue away from system_power_efficient_wq. The old workqueue was
unbound, which is desirable for performance reasons. Making the workers
bound measurably regressed performance, so make them unbound again.
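Concretely, the change is along these lines in kernel/rcu/tree.c (a
sketch; the actual flags may differ by kernel version):

/* Keep the rescuer for forward progress, but run the workers unbound. */
rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);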
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Panchajanya1999 <kernel@panchajanya.dev>
(cherry picked from commit 139be6da6ee92c8b0aa2d19081fa230435082054)
(cherry picked from commit 3428cd70087fe5fd538d96e89f8f12266aacc2d1)
(cherry picked from commit b4e1beb3e73ccd6f341f95020becfff0b7139745)
(cherry picked from commit 428f05ee3318ed4314317bb697abec26a4bcc930)
There is an issue in clang's ThinLTO caching (enabled for the kernel via
'--thinlto-cache-dir') with .incbin, which the kernel occasionally uses
to include data within the kernel, such as the .config file for
/proc/config.gz. For example, when changing the .config and rebuilding
vmlinux, the copy of .config in vmlinux does not match the copy of
.config in the build folder:
$ echo 'CONFIG_LTO_NONE=n
CONFIG_LTO_CLANG_THIN=y
CONFIG_IKCONFIG=y
CONFIG_HEADERS_INSTALL=y' >kernel/configs/repro.config
$ make -skj"$(nproc)" ARCH=x86_64 LLVM=1 clean defconfig repro.config vmlinux
...
$ grep CONFIG_HEADERS_INSTALL .config
CONFIG_HEADERS_INSTALL=y
$ scripts/extract-ikconfig vmlinux | grep CONFIG_HEADERS_INSTALL
CONFIG_HEADERS_INSTALL=y
$ scripts/config -d HEADERS_INSTALL
$ make -kj"$(nproc)" ARCH=x86_64 LLVM=1 vmlinux
...
UPD kernel/config_data
GZIP kernel/config_data.gz
CC kernel/configs.o
...
LD vmlinux
...
$ grep CONFIG_HEADERS_INSTALL .config
# CONFIG_HEADERS_INSTALL is not set
$ scripts/extract-ikconfig vmlinux | grep CONFIG_HEADERS_INSTALL
CONFIG_HEADERS_INSTALL=y
Without '--thinlto-cache-dir' or when using full LTO, this issue does
not occur.
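For reference, the .incbin usage at the heart of the issue looks
roughly like this, modeled on kernel/configs.c (abridged; directives
may differ by version):

/* Embed the gzipped .config between the markers extract-ikconfig looks for. */
asm (
"	.pushsection .rodata, \"a\"		\n"
"	.ascii \"IKCFG_ST\"			\n"
"	.incbin \"kernel/config_data.gz\"	\n"
"	.ascii \"IKCFG_ED\"			\n"
"	.popsection				\n"
);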
Benchmarking incremental builds on a few different machines with and
without the cache shows a 20% increase in incremental build time without
the cache when measured by touching init/main.c and running 'make all'.
ARCH=arm64 defconfig + CONFIG_LTO_CLANG_THIN=y on an arm64 host:
Benchmark 1: With ThinLTO cache
Time (mean ± σ): 56.347 s ± 0.163 s [User: 83.768 s, System: 24.661 s]
Range (min … max): 56.109 s … 56.594 s 10 runs
Benchmark 2: Without ThinLTO cache
Time (mean ± σ): 67.740 s ± 0.479 s [User: 718.458 s, System: 31.797 s]
Range (min … max): 67.059 s … 68.556 s 10 runs
Summary
With ThinLTO cache ran
1.20 ± 0.01 times faster than Without ThinLTO cache
ARCH=x86_64 defconfig + CONFIG_LTO_CLANG_THIN=y on an x86_64 host:
Benchmark 1: With ThinLTO cache
Time (mean ± σ): 85.772 s ± 0.252 s [User: 91.505 s, System: 8.408 s]
Range (min … max): 85.447 s … 86.244 s 10 runs
Benchmark 2: Without ThinLTO cache
Time (mean ± σ): 103.833 s ± 0.288 s [User: 232.058 s, System: 8.569 s]
Range (min … max): 103.286 s … 104.124 s 10 runs
Summary
With ThinLTO cache ran
1.21 ± 0.00 times faster than Without ThinLTO cache
While it is unfortunate to take this performance improvement off the
table, correctness is more important. If/when this is fixed in LLVM, it
can potentially be brought back in a conditional manner. Alternatively,
a developer can just disable LTO if doing incremental compiles quickly
is important, as a full compile cycle can still take over a minute even
with the cache and it is unlikely that LTO will result in functional
differences for a kernel change.
Cc: stable@vger.kernel.org
Fixes: dc5723b02e52 ("kbuild: add support for Clang LTO")
Reported-by: Yifan Hong <elsk@google.com>
Closes: https://github.com/ClangBuiltLinux/linux/issues/2021
Reported-by: Masami Hiramatsu <mhiramat@kernel.org>
Closes: https://lore.kernel.org/r/20220327115526.cc4b0ff55fc53c97683c3e4d@kernel.org/
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Bug: 312268956
Bug: 335301039
Change-Id: Iace492db67f28e172427669b1b7eb6a8c44dd3aa
(cherry picked from commit aba091547ef6159d52471f42a3ef531b7b660ed8
https://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git kbuild)
[elsk: Resolved minor conflict in Makefile]
Signed-off-by: Yifan Hong <elsk@google.com>
Signed-off-by: mrsrimar22 <mrsrimar22@gmail.com>