fe0cdb4e3f
sched/fair: Remove update of blocked load from newidle_balance

[ Upstream commit 0826530de3cbdc89e60a89e86def94a5f0fc81ca ]

newidle_balance() runs with both preemption and irqs disabled, which prevents
local irqs from being serviced during that period. The time spent updating the
blocked load of CPUs varies with the number of CPU cgroups carrying non-decayed
load, which stretches this critical section to an uncontrolled length. Remove
the update from newidle_balance() and instead trigger a normal ILB that will
take care of the update. This reduces the IRQ latency from
O(nr_cgroups * nr_nohz_cpus) to O(nr_cgroups).

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20210224133007.28644-2-vincent.guittot@linaro.org
Stable-dep-of: ff47a0acfcce ("sched/fair: Check idle_cpu() before need_resched() to detect ilb CPU turning busy")
Signed-off-by: Sasha Levin <sashal@kernel.org>
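For context, the post-patch flow can be sketched as follows. This is a paraphrased sketch of what nohz_newidle_balance() in kernel/sched/fair.c does after this change, not the verbatim diff; it assumes the nohz bookkeeping fields (nohz.has_blocked, nohz.next_blocked) and the NOHZ_STATS_KICK ILB flag that fair.c already uses in this era:

```c
/*
 * Sketch of nohz_newidle_balance() after the change: instead of updating
 * the blocked load of the nohz CPUs itself while irqs are still disabled,
 * the newly idle CPU only kicks an ILB and lets it do the update.
 */
static void nohz_newidle_balance(struct rq *this_rq)
{
	int this_cpu = this_rq->cpu;

	/* This CPU doesn't want to be disturbed by scheduler housekeeping */
	if (!housekeeping_cpu(this_cpu, HK_FLAG_SCHED))
		return;

	/* Will wake up very soon; no time for doing anything else */
	if (this_rq->avg_idle < sysctl_sched_migration_cost)
		return;

	/* No blocked load of idle CPUs needs updating yet */
	if (!READ_ONCE(nohz.has_blocked) ||
	    time_before(jiffies, READ_ONCE(nohz.next_blocked)))
		return;

	/*
	 * Blocked load of idle CPUs needs to be updated:
	 * kick an ILB to do it outside this irq-disabled section.
	 */
	kick_ilb(NOHZ_STATS_KICK);
}
```

The in-place walk over the blocked load of all nohz CPUs is dropped from the newly idle path, so the irq-disabled window no longer scales with nr_nohz_cpus; the kicked ILB performs the update later, outside this critical section.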