Commit graph

4 commits

Oliver Upton
22e90035c9 KVM: arm64: vgic-its: Avoid potential UAF in LPI translation cache
commit ad362fe07fecf0aba839ff2cc59a3617bd42c33f upstream.

There is a potential UAF scenario in the case of an LPI translation
cache hit racing with an operation that invalidates the cache, such
as a DISCARD ITS command. The root of the problem is that
vgic_its_check_cache() does not elevate the refcount on the vgic_irq
before dropping the lock that serializes refcount changes.

Have vgic_its_check_cache() raise the refcount on the returned vgic_irq
and add the corresponding decrement after queueing the interrupt.

Cc: stable@vger.kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240104183233.3560639-1-oliver.upton@linux.dev
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-18 12:12:48 +01:00
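
Not the upstream diff itself, only a minimal standalone sketch of the pattern the fix applies: the cache lookup takes its reference while still holding the lock that serializes refcount changes, so a racing invalidation (a DISCARD, as above) cannot free the vgic_irq while the caller is still queueing it. All names below (cached_irq, cache_lookup_get, irq_put) are illustrative stand-ins, not the kernel's; in the real code the reference is taken with vgic_get_irq_kref() and dropped with vgic_put_irq() after the interrupt is queued.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for struct vgic_irq: refcounted, lock-protected. */
struct cached_irq {
    atomic_int refcount;
    unsigned int intid;
};

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static struct cached_irq *irq_cache;    /* single-entry "translation cache" */

static void irq_put(struct cached_irq *irq)
{
    /* Last reference dropped: only now is it safe to free the object. */
    if (atomic_fetch_sub(&irq->refcount, 1) == 1)
        free(irq);
}

/*
 * The fixed lookup: take the extra reference *before* dropping the lock,
 * mirroring what vgic_its_check_cache() now does.
 */
static struct cached_irq *cache_lookup_get(unsigned int intid)
{
    struct cached_irq *irq = NULL;

    pthread_mutex_lock(&cache_lock);
    if (irq_cache && irq_cache->intid == intid) {
        irq = irq_cache;
        atomic_fetch_add(&irq->refcount, 1);    /* the crucial step */
    }
    pthread_mutex_unlock(&cache_lock);
    return irq;
}

/* A racing invalidation (DISCARD-like): the cache drops its reference. */
static void cache_invalidate(void)
{
    pthread_mutex_lock(&cache_lock);
    if (irq_cache) {
        irq_put(irq_cache);
        irq_cache = NULL;
    }
    pthread_mutex_unlock(&cache_lock);
}

int main(void)
{
    irq_cache = calloc(1, sizeof(*irq_cache));
    irq_cache->intid = 8192;
    atomic_init(&irq_cache->refcount, 1);       /* reference held by the cache */

    struct cached_irq *irq = cache_lookup_get(8192);
    cache_invalidate();                         /* the racing DISCARD */

    if (irq) {
        /* Object is still valid despite the invalidation above. */
        printf("queueing LPI %u\n", irq->intid);
        irq_put(irq);                           /* decrement after queueing */
    }
    return 0;
}

The important ordering is that the increment happens before cache_lock is released; taking the reference after the unlock reopens the window in which the DISCARD path can drop the last reference and free the object.
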
Marc Zyngier
ac208c1a5e KVM: arm64: vgic-v4: Restore pending state on host userspace write
commit 7b95382f965133ef61ce44aaabc518c16eb46909 upstream.

When the VMM writes to ISPENDR0 to set the pending state of
an SGI, we fail to convey this to the HW if this SGI is already
backed by a GICv4.1 vSGI.

This is a bit of a corner case, as this would only occur if the
vgic state is changed on an already running VM, but this can
apparently happen across a guest reset driven by the VMM.

Fix this by always writing out the pending_latch value to the
HW, and resetting it to false.

Reported-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Cc: stable@vger.kernel.org # 5.10+
Link: https://lore.kernel.org/r/7e7f2c0c-448b-10a9-8929-4b8f4f6e2a32@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-18 12:12:48 +01:00
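
Again not the actual patch, only a small self-contained model of the behaviour being restored: when userspace sets the pending bit of an interrupt whose pending state lives in hardware (a GICv4.1 vSGI), the software pending_latch must be pushed out to the HW and then cleared, instead of being left in the emulated latch where the HW never sees it. hw_set_pending() is a hypothetical stub standing in for the irqchip call the kernel uses (irq_set_irqchip_state()).

#include <stdbool.h>
#include <stdio.h>

/* Minimal stand-in for the relevant fields of struct vgic_irq. */
struct vsgi {
    unsigned int intid;
    bool hw;                /* backed by a GICv4.1 vSGI in the GIC itself */
    bool pending_latch;     /* software copy of the pending state */
};

/*
 * Hypothetical stub for programming the pending state into the GIC;
 * it only reports what would be written.
 */
static void hw_set_pending(struct vsgi *irq, bool pending)
{
    printf("HW: SGI %u pending <- %s\n", irq->intid,
           pending ? "true" : "false");
}

/*
 * Userspace write to ISPENDR0 marking this SGI pending.  For a HW-backed
 * vSGI the latch is written out to the HW and then reset to false, so the
 * hardware holds the authoritative state.
 */
static void uaccess_set_pending(struct vsgi *irq)
{
    irq->pending_latch = true;

    if (irq->hw) {
        hw_set_pending(irq, irq->pending_latch);
        irq->pending_latch = false;
    }
}

int main(void)
{
    struct vsgi sgi = { .intid = 1, .hw = true, .pending_latch = false };

    uaccess_set_pending(&sgi);  /* e.g. the VMM restoring state across reset */
    printf("latch after write: %s\n", sgi.pending_latch ? "true" : "false");
    return 0;
}

Clearing the latch after the write keeps a single authoritative copy of the pending state: the one the HW will actually deliver.
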
Sultan Alsawaf
15898055b7 arm64: lse: Always use LSE atomic instructions
Since we are compiling for a single chipset that is known to support LSE,
the system_uses_lse_atomics() static branch can be eliminated entirely.

Therefore, make system_uses_lse_atomics() always return true so that LSE
atomics are always used, and update ARM64_LSE_ATOMIC_INSN() users to drop
the extra NOPs emitted for alternatives patching at runtime.

This reduces generated code size by removing LL/SC atomics, which improves
instruction cache footprint.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-17 17:45:05 +01:00
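
A rough sketch of the header side of such a change, assuming the usual arch/arm64/include/asm/lse.h shape; the real patch also touches the ARM64_LSE_ATOMIC_INSN() call sites, so treat this as illustrative rather than the verbatim diff.

/*
 * Illustrative only -- roughly the shape the relevant definitions take,
 * not the verbatim patch.
 *
 * Before: a capability check behind a static branch, with
 * ARM64_LSE_ATOMIC_INSN() expanding to an alternatives-patched choice
 * between the LL/SC and LSE encodings:
 *
 *    #define ARM64_LSE_ATOMIC_INSN(llsc, lse) \
 *        ALTERNATIVE(llsc, lse, ARM64_HAS_LSE_ATOMICS)
 *
 * After: LSE is guaranteed by the target SoC, so the check is a constant
 * and only the LSE encoding is ever emitted.
 */
static __always_inline bool system_uses_lse_atomics(void)
{
    return true;
}

#define ARM64_LSE_ATOMIC_INSN(llsc, lse)    lse

With the check constant-folded to true, the compiler drops every LL/SC fallback path at build time, which is where the code-size and instruction-cache savings come from.
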
Gabriel2392
7ed7ee9edf Import A536BXXU9EXDC
2024-06-15 16:02:09 -03:00