kernel_samsung_a53x/kernel/sched
K Prateek Nayak d4d43810f2 sched/fair: Check idle_cpu() before need_resched() to detect ilb CPU turning busy
[ Upstream commit ff47a0acfcce309cf9e175149c75614491953c8f ]

Commit b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
optimizes IPIs to idle CPUs in TIF_POLLING_NRFLAG mode by setting the
TIF_NEED_RESCHED flag in the idle task's thread info and relying on
flush_smp_call_function_queue() in the idle exit path to run the
call-function. A softirq raised by the call-function is handled shortly
after in do_softirq_post_smp_call_flush(), but the TIF_NEED_RESCHED flag
remains set and is only cleared later when schedule_idle() calls
__schedule().
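
In essence (a simplified sketch following the shape of the helper that
b2a02fc43a1f added in kernel/sched/core.c; not the exact upstream code):

    /* Sender side: elide the IPI when the target's idle task is polling. */
    void send_call_function_single_ipi(int cpu)
    {
            struct rq *rq = cpu_rq(cpu);

            /*
             * set_nr_if_polling() sets TIF_NEED_RESCHED on the idle
             * task iff it is in TIF_POLLING_NRFLAG mode; the polling
             * idle loop then breaks out and the idle exit path runs
             * flush_smp_call_function_queue(), so no IPI is needed.
             * Note: TIF_NEED_RESCHED stays set until schedule_idle()
             * eventually calls __schedule().
             */
            if (!set_nr_if_polling(rq->idle))
                    arch_send_call_function_single_ipi(cpu);
            else
                    trace_sched_wake_idle_without_ipi(cpu);
    }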

The need_resched() check in _nohz_idle_balance() exists to bail out of
load balancing when another task has woken up on the CPU currently in
charge of idle load balancing, which runs in SCHED_SOFTIRQ context.
Since the optimization mentioned above overloads the interpretation of
TIF_NEED_RESCHED, check idle_cpu() before the existing need_resched()
check. The combined check catches a genuine task wakeup on an idle CPU
processing SCHED_SOFTIRQ from do_softirq_post_smp_call_flush(), as well
as the case where ksoftirqd needs to be preempted as a result of a new
task wakeup or slice expiry.
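
The fix itself is small; a rendering of the changed check in
_nohz_idle_balance() (kernel/sched/fair.c), with the surrounding loop
heavily condensed here for illustration:

    /* _nohz_idle_balance() runs in SCHED_SOFTIRQ on the ilb CPU. */
    for_each_cpu_wrap(balance_cpu, nohz.idle_cpus_mask, this_cpu + 1) {
            if (!idle_cpu(balance_cpu))
                    continue;

            /*
             * Previously: if (need_resched()) goto abort;
             * TIF_NEED_RESCHED alone may be a leftover from the
             * polling-idle IPI optimization, so only treat it as
             * "a task woke up here" once this CPU is not idle.
             */
            if (!idle_cpu(this_cpu) && need_resched())
                    goto abort;

            rebalance_domains(cpu_rq(balance_cpu), CPU_IDLE);
    }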

In case of PREEMPT_RT or threadirqs, idle load balancing may be
inhibited in some cases on the ilb CPU; however, when ksoftirqd, the
only fair task there, goes back to sleep, it triggers a newidle balance
on that CPU, which can alleviate any remaining imbalance that the idle
balance failed to address.

Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20241119054432.6405-4-kprateek.nayak@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-17 13:24:33 +01:00
ems kernel: sched: ems: drop usage of SCHED_FEAT 2024-11-19 17:52:14 +01:00
autogroup.c
autogroup.h
clock.c
completion.c
core.c sched/fair: Trigger the update of blocked load on newly idle cpu 2024-12-17 13:24:33 +01:00
cpuacct.c
cpudeadline.c
cpudeadline.h
cpufreq.c
cpufreq_schedutil.c schedutil: Allow CPU frequency changes to be amended before they're set 2024-11-19 18:06:02 +01:00
cpupri.c
cpupri.h
cputime.c sched/cputime: Fix mul_u64_u64_div_u64() precision for cputime 2024-11-23 23:20:24 +01:00
deadline.c
debug.c
fair.c sched/fair: Check idle_cpu() before need_resched() to detect ilb CPU turning busy 2024-12-17 13:24:33 +01:00
features.h
idle.c sched/fair: Trigger the update of blocked load on newly idle cpu 2024-12-17 13:24:33 +01:00
isolation.c
loadavg.c
Makefile
membarrier.c sched/membarrier: reduce the ability to hammer on sys_membarrier 2024-11-18 12:13:39 +01:00
pelt.c
pelt.h
psi.c
rt.c sched/rt: Disallow writing invalid values to sched_rt_period_us 2024-11-18 22:25:32 +01:00
sched-pelt.h
sched.h sched/fair: Add NOHZ balancer flag for nohz.next_balance updates 2024-12-17 13:24:33 +01:00
sec_mpam.c
sec_mpam_cpbm.h
sec_mpam_sysfs.c
sec_mpam_sysfs.h
smp.h
stats.c
stats.h
stop_task.c
swait.c
topology.c sched/fair: Allow disabling sched_balance_newidle with sched_relax_domain_level 2024-11-19 12:27:00 +01:00
wait.c
wait_bit.c