Asynchronous IPI users must already handle csd object lifetimes on their
own, so there's no need to prevent re-entrancy on a single CPU inside
smp_call_function_single_async(). As such, smp_call_function_single_async()
can be made more RT-friendly by using migrate_disable()/migrate_enable()
instead of disabling preemption, since preventing migration is all that's
needed.
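
A minimal sketch of the resulting pattern, for illustration only (the real
smp_call_function_single_async() in kernel/smp.c carries more bookkeeping;
example_call_single_async() and example_exec_single() below are hypothetical
names standing in for the kernel-internal path):

    #include <linux/preempt.h>
    #include <linux/smp.h>

    /* Hypothetical stand-in for the kernel-internal csd handoff path. */
    static int example_exec_single(int cpu, call_single_data_t *csd)
    {
            return 0;       /* the real handoff logic is not reproduced here */
    }

    static int example_call_single_async(int cpu, call_single_data_t *csd)
    {
            int err;

            /*
             * Pinning the task to the current CPU is all the csd handoff
             * needs; unlike preempt_disable(), migrate_disable() leaves
             * the section preemptible, which is what makes it RT-friendly.
             */
            migrate_disable();              /* was: preempt_disable() */
            err = example_exec_single(cpu, csd);
            migrate_enable();               /* was: preempt_enable() */

            return err;
    }
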
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Panchajanya1999 <kernel@panchajanya.dev>
(cherry picked from commit 9983455a67226511e036f19f8725cd54bac10aa1)
(cherry picked from commit a792f43200afd6594a1e07b3cb11d048a9ec1218)
(cherry picked from commit f2d0b87c64e5a1e7b73430950f92d7a64d39c964)
(cherry picked from commit 0076a73f19418f3f7867b6ce2fa93f46105458c5)

[ Upstream commit 77aeb1b685f9db73d276bad4bb30d48505a6fd23 ]
For CONFIG_DEBUG_OBJECTS_WORK=y kernels, sscs.work defined by
INIT_WORK_ONSTACK() is initialized by debug_object_init_on_stack() so that
the debug check in __init_work() works correctly.
But this lacks the counterpart to remove the tracked object from debug
objects again, which will cause a debug object warning once the stack is
freed.
Add the missing destroy_work_on_stack() invocation to cure that.
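
A self-contained sketch of the on-stack work pattern in question, for
illustration only (not the patched smp_call_on_cpu() itself; struct
onstack_ctx and the function names below are hypothetical):

    #include <linux/completion.h>
    #include <linux/workqueue.h>

    struct onstack_ctx {                    /* hypothetical context type */
            struct work_struct work;
            struct completion done;
    };

    static void onstack_fn(struct work_struct *work)
    {
            struct onstack_ctx *ctx = container_of(work, struct onstack_ctx, work);

            complete(&ctx->done);
    }

    static void run_onstack_work(void)
    {
            struct onstack_ctx ctx;

            init_completion(&ctx.done);
            /* Registers ctx.work with debug objects under CONFIG_DEBUG_OBJECTS_WORK=y. */
            INIT_WORK_ONSTACK(&ctx.work, onstack_fn);

            schedule_work(&ctx.work);
            wait_for_completion(&ctx.done);

            /*
             * The counterpart the patch adds: untrack the on-stack object
             * before the stack frame goes away, otherwise debug objects
             * later warns about a stale tracked object.
             */
            destroy_work_on_stack(&ctx.work);
    }
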
[ tglx: Massaged changelog ]
Signed-off-by: Zqiang <qiang.zhang1211@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/20240704065213.13559-1-qiang.zhang1211@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>