Commit graph

13 commits

Author SHA1 Message Date
zhujingpeng
dacbe677af ANDROID: GKI: Add initialization for mutex oem_data.
Although __mutex_init() already contains a hook, this function may be
called before the mutex_init hook is registered, leaving the mutex's
oem_data uninitialized and causing unpredictable errors.
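
A minimal sketch of the direct initialization, assuming the GKI
oem_data field expands to android_oem_data1 (the field name and exact
placement are assumptions, not quoted from this change):

void
__mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
{
	atomic_long_set(&lock->owner, 0);
	raw_spin_lock_init(&lock->wait_lock);
	INIT_LIST_HEAD(&lock->wait_list);
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
	osq_lock_init(&lock->osq);
#endif
	/*
	 * Zero oem_data unconditionally so it is sane even when
	 * __mutex_init() runs before the vendor mutex_init hook has been
	 * registered.  (The existing vendor init hook call is omitted
	 * from this sketch.)
	 */
	lock->android_oem_data1 = 0;

	debug_mutex_init(lock, name, key);
}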

Bug: 352181884

Change-Id: I04378d6668fb4e7b93c11d930ac46aae484fc835
Signed-off-by: zhujingpeng <zhujingpeng@vivo.com>
(cherry picked from commit 96d66062d0767aeafb690ce014ec91785820d62b)
(cherry picked from commit 4c213b2ea1f58bc640894684f65c92b6cdee522d)
2025-01-19 14:56:36 +01:00
zhujingpeng
b406bff5bd ANDROID: GKI: Add initialization for rwsem's oem_data and vendor_data.
Add initialization for the rwsem's oem_data and vendor_data.
__init_rwsem() already contains a hook, but this function may be
called before the rwsem_init hook is registered, leaving some rwsems'
oem_data uninitialized and causing unpredictable errors.
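
The analogous sketch for __init_rwsem(), again with assumed GKI field
names and the unrelated initialization elided:

void __init_rwsem(struct rw_semaphore *sem, const char *name,
		  struct lock_class_key *key)
{
	/* ... existing init of count, wait_lock, wait_list, owner ... */

	/*
	 * Zero the vendor/oem fields directly, so they are sane even when
	 * __init_rwsem() runs before the rwsem_init hook is registered
	 * (field names assumed).
	 */
	sem->android_vendor_data1 = 0;
	sem->android_oem_data1 = 0;
}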

Bug: 351133539

Change-Id: I7bbb83894d200102bc7d84e91678f164529097a0
Signed-off-by: zhujingpeng <zhujingpeng@vivo.com>
(cherry picked from commit aaca6b10f1a352dec4596548396f590500f2001b)
(cherry picked from commit 1a5cffcbb543099802723c420f2f538b12619a72)
2025-01-19 14:56:25 +01:00
zhujingpeng
5fd39502da ANDROID: GKI: add percpu_rwsem vendor hooks
When a writer has set sem->block and is waiting for active readers
to complete, we still allow some specific new readers to enter
the critical section, which can help prevent priority inversion
from impacting system responsiveness and performance.
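
A sketch of how a vendor module might consume such a hook; the hook
name, its prototype, and the current_is_ux_task() policy helper are
assumptions for illustration, not names taken from this patch:

/*
 * Handler for a hypothetical reader-bypass hook fired from the
 * percpu_rwsem reader slow path while a writer holds sem->block.
 */
static void vh_percpu_rwsem_allow_reader(void *unused,
					 struct percpu_rw_semaphore *sem,
					 bool *bypass)
{
	/* Vendor policy: let latency-sensitive (UX) readers in early. */
	if (current_is_ux_task())
		*bypass = true;
}

static int __init vendor_percpu_rwsem_init(void)
{
	/*
	 * register_trace_android_vh_<hook>() is the usual vendor-hook
	 * registration pattern; this particular hook name is assumed.
	 */
	return register_trace_android_vh_percpu_rwsem_allow_reader(
			vh_percpu_rwsem_allow_reader, NULL);
}
module_init(vendor_percpu_rwsem_init);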

Bug: 334851707

Change-Id: I9e2a7df1efb326763487423d64bcf74d8dec23f8
Signed-off-by: zhujingpeng <zhujingpeng@vivo.com>
(cherry picked from commit 458cdb59f7719f81504d392daa95a8c99c30c276)
(cherry picked from commit b74701f8daa1a415c3ea2e819237d1bff59736eb)
2025-01-19 14:55:30 +01:00
zhujingpeng
f0f9800398 ANDROID: vendor_hooks: add hooks in rwsem
These hooks are required by the following features:
1. For rwsem readers, currently only the latest reader is recorded
in sem->owner. We add hooks to record all readers that have acquired
the lock; once there are UX threads blocked in the rwsem waiting
list, these read owners are given high priority in scheduling.
2. For the rwsem writer, when a writer acquires the lock, we check
whether there are UX threads blocked in the rwsem wait list. If so,
we give this writer high priority in scheduling so that it can
release the lock as soon as possible.

Both of these features can optimize the priority inversion
problem caused by rwsem and improve system responsiveness and
performance.

There is already a hook, android_vh_rwsem_set_owner, in
rwsem_set_owner() to record the writer as owner, so the hook
android_vh_record_rwsem_writer_owned from the original cherry-pick
is removed.
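
A rough placement sketch; only android_vh_rwsem_set_owner is named by
this commit message, and the reader-side hook name below is assumed:

/*
 * Writer side: rwsem_set_owner() already carries a vendor hook, so the
 * writer-owned case reuses it (per the note above).
 */
static inline void rwsem_set_owner(struct rw_semaphore *sem)
{
	atomic_long_set(&sem->owner, (long)current);
	trace_android_vh_rwsem_set_owner(sem);
}

/*
 * Reader side: record every reader that acquires the lock, not just the
 * latest one, so UX-aware boosting can later find all read owners.
 */
static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
{
	__rwsem_set_reader_owned(sem, current);
	trace_android_vh_record_rwsem_reader_owned(sem, current); /* assumed name */
}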

Bug: 335408185
Change-Id: I82a6fbb6acd2ce05d049e686b61e34e4d3b39a5e
Signed-off-by: zhujingpeng <zhujingpeng@vivo.com>
[jstultz: Rebased and resolved minor conflict]
Signed-off-by: John Stultz <jstultz@google.com>
(cherry picked from commit 188c41744ddcccef6daf6dcd4d6a444dabdc2f94)
(cherry picked from commit 869fc79d3abf1a1bcd97925fa3e0ee7188bb737c)
2025-01-19 14:55:21 +01:00
Roland Xu
6fd51c9abe rtmutex: Drop rt_mutex::wait_lock before scheduling
commit d33d26036a0274b472299d7dcdaa5fb34329f91b upstream.

rt_mutex_handle_deadlock() is called with rt_mutex::wait_lock held.  In the
good case it returns with the lock held and in the deadlock case it emits a
warning and goes into an endless scheduling loop with the lock held, which
triggers the 'scheduling in atomic' warning.

Unlock rt_mutex::wait_lock in the deadlock case before issuing the warning
and dropping into the schedule-forever loop.
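
A simplified sketch of the fixed flow (the exact upstream code differs
slightly, e.g. around ww_mutex handling and the scheduling primitive):

static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
					     struct rt_mutex *lock,
					     struct rt_mutex_waiter *w)
{
	if (res != -EDEADLOCK || detect_deadlock)
		return;

	/*
	 * Drop wait_lock before the warning and the endless schedule
	 * loop, so we no longer schedule while atomic.
	 */
	raw_spin_unlock_irq(&lock->wait_lock);

	WARN(1, "rtmutex deadlock detected\n");

	while (1) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
	}
}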

[ tglx: Moved unlock before the WARN(), removed the pointless comment,
  	massaged changelog, added Fixes tag ]

Fixes: 3d5c9340d194 ("rtmutex: Handle deadlock detection smarter")
Signed-off-by: Roland Xu <mu001999@outlook.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/ME0P300MB063599BEF0743B8FA339C2CECC802@ME0P300MB0635.AUSP300.PROD.OUTLOOK.COM
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-19 00:09:58 +01:00
Ksawlii
1dfae2e328 Revert "rtmutex: Drop rt_mutex::wait_lock before scheduling"
This reverts commit 07dcd58fea.
2024-11-24 00:23:36 +01:00
Ksawlii
c848ca572d Revert "lockdep: fix deadlock issue between lockdep and rcu"
This reverts commit 6624949eca.
2024-11-24 00:23:13 +01:00
Zhiguo Niu
6624949eca lockdep: fix deadlock issue between lockdep and rcu
commit a6f88ac32c6e63e69c595bfae220d8641704c9b7 upstream.

There is a deadlock scenario between lockdep and rcu when the
rcu nocb feature is enabled, as shown in the following call stacks:

     rcuop/x
-000|queued_spin_lock_slowpath(lock = 0xFFFFFF817F2A8A80, val = ?)
-001|queued_spin_lock(inline) // try to hold nocb_gp_lock
-001|do_raw_spin_lock(lock = 0xFFFFFF817F2A8A80)
-002|__raw_spin_lock_irqsave(inline)
-002|_raw_spin_lock_irqsave(lock = 0xFFFFFF817F2A8A80)
-003|wake_nocb_gp_defer(inline)
-003|__call_rcu_nocb_wake(rdp = 0xFFFFFF817F30B680)
-004|__call_rcu_common(inline)
-004|call_rcu(head = 0xFFFFFFC082EECC28, func = ?)
-005|call_rcu_zapped(inline)
-005|free_zapped_rcu(ch = ?)// hold graph lock
-006|rcu_do_batch(rdp = 0xFFFFFF817F245680)
-007|nocb_cb_wait(inline)
-007|rcu_nocb_cb_kthread(arg = 0xFFFFFF817F245680)
-008|kthread(_create = 0xFFFFFF80803122C0)
-009|ret_from_fork(asm)

     rcuop/y
-000|queued_spin_lock_slowpath(lock = 0xFFFFFFC08291BBC8, val = 0)
-001|queued_spin_lock()
-001|lockdep_lock()
-001|graph_lock() // try to hold graph lock
-002|lookup_chain_cache_add()
-002|validate_chain()
-003|lock_acquire
-004|_raw_spin_lock_irqsave(lock = 0xFFFFFF817F211D80)
-005|lock_timer_base(inline)
-006|mod_timer(inline)
-006|wake_nocb_gp_defer(inline)// hold nocb_gp_lock
-006|__call_rcu_nocb_wake(rdp = 0xFFFFFF817F2A8680)
-007|__call_rcu_common(inline)
-007|call_rcu(head = 0xFFFFFFC0822E0B58, func = ?)
-008|call_rcu_hurry(inline)
-008|rcu_sync_call(inline)
-008|rcu_sync_func(rhp = 0xFFFFFFC0822E0B58)
-009|rcu_do_batch(rdp = 0xFFFFFF817F266680)
-010|nocb_cb_wait(inline)
-010|rcu_nocb_cb_kthread(arg = 0xFFFFFF817F266680)
-011|kthread(_create = 0xFFFFFF8080363740)
-012|ret_from_fork(asm)

rcuop/x and rcuop/y are rcu nocb threads with the same nocb gp thread.
This patch releases the graph lock before lockdep calls call_rcu().
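
The shape of the fix, sketched with assumed helper names
(prepare_zapped_classes() stands in for the real "prepare" step;
delayed_free and free_zapped_rcu are the lockdep.c names):

static void zap_classes_and_free(void)		/* illustrative wrapper */
{
	bool need_callback;

	lockdep_lock();					/* graph lock */
	need_callback = prepare_zapped_classes();	/* assumed helper */
	lockdep_unlock();

	/*
	 * call_rcu() can take nocb locks via wake_nocb_gp_defer(); issue
	 * it only after the graph lock has been dropped.
	 */
	if (need_callback)
		call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
}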

Fixes: a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use")
Cc: stable@vger.kernel.org
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Waiman Long <longman@redhat.com>
Cc: Carlos Llamas <cmllamas@google.com>
Cc: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Zhiguo Niu <zhiguo.niu@unisoc.com>
Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
Reviewed-by: Waiman Long <longman@redhat.com>
Reviewed-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20240620225436.3127927-1-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-23 23:21:34 +01:00
Roland Xu
07dcd58fea rtmutex: Drop rt_mutex::wait_lock before scheduling
commit d33d26036a0274b472299d7dcdaa5fb34329f91b upstream.

rt_mutex_handle_deadlock() is called with rt_mutex::wait_lock held.  In the
good case it returns with the lock held and in the deadlock case it emits a
warning and goes into an endless scheduling loop with the lock held, which
triggers the 'scheduling in atomic' warning.

Unlock rt_mutex::wait_lock in the deadlock case before issuing the warning
and dropping into the schedule-forever loop.

[ tglx: Moved unlock before the WARN(), removed the pointless comment,
  	massaged changelog, added Fixes tag ]

Fixes: 3d5c9340d194 ("rtmutex: Handle deadlock detection smarter")
Signed-off-by: Roland Xu <mu001999@outlook.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/ME0P300MB063599BEF0743B8FA339C2CECC802@ME0P300MB0635.AUSP300.PROD.OUTLOOK.COM
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-23 23:21:09 +01:00
Minchan Kim
f19a9560cc locking/rwlocks: introduce write_lock_nested
In preparation for converting bit_spin_lock to rwlock in zsmalloc, so
that multiple writers of zspages can run at the same time. Those
zspages are supposed to be different zspage instances, so this is not
a deadlock. This patch adds write_lock_nested to support this case
for LOCKDEP.
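
A usage sketch of the new API: taking a second write lock of the same
lock class with an explicit subclass so LOCKDEP does not flag it as a
self-deadlock (the lock names here are illustrative, not zsmalloc's):

#include <linux/spinlock.h>
#include <linux/lockdep.h>

static DEFINE_RWLOCK(zspage_lock_a);
static DEFINE_RWLOCK(zspage_lock_b);	/* same lock class as zspage_lock_a */

static void migrate_between_zspages(void)
{
	write_lock(&zspage_lock_a);
	/* Second instance of the same class: annotate the nesting level. */
	write_lock_nested(&zspage_lock_b, SINGLE_DEPTH_NESTING);

	/* ... move objects from one zspage to the other ... */

	write_unlock(&zspage_lock_b);
	write_unlock(&zspage_lock_a);
}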

[minchan@kernel.org: fix write_lock_nested for RT]
  Link: https://lkml.kernel.org/r/YZfrMTAXV56HFWJY@google.com
[bigeasy@linutronix.de: fixup write_lock_nested() implementation]
  Link: https://lkml.kernel.org/r/20211123170134.y6xb7pmpgdn4m3bn@linutronix.de

Link: https://lkml.kernel.org/r/20211115185909.3949505-8-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-11-19 17:44:05 +01:00
Peter Zijlstra
289a89bfdc lockdep: Fix block chain corruption
[ Upstream commit bca4104b00fec60be330cd32818dd5c70db3d469 ]

Kent reported an occasional KASAN splat in lockdep. Mark then noted:

> I suspect the dodgy access is to chain_block_buckets[-1], which hits the last 4
> bytes of the redzone and gets (incorrectly/misleadingly) attributed to
> nr_large_chain_blocks.

That would mean @size == 0, at which point size_to_bucket() returns -1
and the above happens.

alloc_chain_hlocks() has 'size - req', for the first bucket with the
precondition 'size >= req', which allows the 0.

This code is trying to split a block: del_chain_block() takes what we
need and add_chain_block() puts back the remainder, except in the
above case the remainder is 0-sized and things go sideways.
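
A sketch of the guard (not necessarily the literal upstream diff):
only hand a remainder back to add_chain_block() when it is non-empty,
so size_to_bucket() never sees a zero size:

static int alloc_chain_hlocks(int req)
{
	/* ... bucket lookup elided ... */
	size = chain_block_size(curr);
	if (likely(size >= req)) {
		del_chain_block(0, size, chain_block_next(curr));
		/*
		 * Don't put back a 0-sized remainder: size_to_bucket(0)
		 * returns -1 and would corrupt chain_block_buckets[-1].
		 */
		if (size > req)
			add_chain_block(curr + req, size - req);
		return curr;
	}
	/* ... */
}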

Fixes: 810507fe6fd5 ("locking/lockdep: Reuse freed chain_hlocks entries")
Reported-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Kent Overstreet <kent.overstreet@linux.dev>
Link: https://lkml.kernel.org/r/20231121114126.GH8262@noisy.programming.kicks-ass.net
Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-11-18 12:10:56 +01:00
John Stultz
a76d8bf7e4 locking/ww_mutex/test: Fix potential workqueue corruption
[ Upstream commit bccdd808902f8c677317cec47c306e42b93b849e ]

In some cases running with the test-ww_mutex code, I was seeing
odd behavior where sometimes it seemed flush_workqueue was
returning before all the work threads were finished.

Often this would cause strange crashes as the mutexes would be
freed while they were being used.

Looking at the code, there is a lifetime problem as the
controlling thread that spawns the work allocates the
"struct stress" structures that are passed to the workqueue
threads. Then when the workqueue threads are finished,
they free the stress struct that was passed to them.

Unfortunately the workqueue work_struct node is in the stress
struct, which means the work_struct is freed before the work
thread returns and while flush_workqueue is waiting.

It seems like a better idea to have the controlling thread
both allocate and free the stress structures, so that we can
be sure we don't corrupt the workqueue by freeing the structure
prematurely.

So this patch reworks the test to do so, and with this change
I no longer see the early flush_workqueue returns.
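
A sketch of the resulting ownership pattern (simplified from
kernel/locking/test-ww_mutex.c; the struct contents and function names
here are illustrative): the controller owns the array of work items,
so nothing is freed while flush_workqueue() is still waiting on it.

#include <linux/workqueue.h>
#include <linux/slab.h>

struct stress {
	struct work_struct work;
	/* ... test state ... */
};

static void stress_work(struct work_struct *work)
{
	struct stress *s = container_of(work, struct stress, work);

	/* Run the lock torture; no longer kfree(s) from the worker. */
	(void)s;
}

static int run_stress(struct workqueue_struct *wq, int nthreads)
{
	struct stress *array;
	int i;

	array = kmalloc_array(nthreads, sizeof(*array), GFP_KERNEL);
	if (!array)
		return -ENOMEM;

	for (i = 0; i < nthreads; i++) {
		INIT_WORK(&array[i].work, stress_work);
		queue_work(wq, &array[i].work);
	}

	flush_workqueue(wq);	/* every work_struct is still valid here */
	kfree(array);		/* controller frees only after the flush */
	return 0;
}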

Signed-off-by: John Stultz <jstultz@google.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20230922043616.19282-3-jstultz@google.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-11-18 11:43:11 +01:00
Gabriel2392
7ed7ee9edf Import A536BXXU9EXDC 2024-06-15 16:02:09 -03:00