[ Upstream commit 3d887d512494d678b17c57b835c32f4e48d34f26 ]
As get_mst_branch_device_by_guid_helper() is called recursively with
port->mstb, which may be NULL, the mstb parameter has to be checked,
otherwise a NULL dereference may occur in the call to memcmp() and cause
the following:
[12579.365869] BUG: kernel NULL pointer dereference, address: 0000000000000049
[12579.365878] #PF: supervisor read access in kernel mode
[12579.365880] #PF: error_code(0x0000) - not-present page
[12579.365882] PGD 0 P4D 0
[12579.365887] Oops: 0000 [#1] PREEMPT SMP NOPTI
...
[12579.365895] Workqueue: events_long drm_dp_mst_up_req_work
[12579.365899] RIP: 0010:memcmp+0xb/0x29
[12579.365921] Call Trace:
[12579.365927] get_mst_branch_device_by_guid_helper+0x22/0x64
[12579.365930] drm_dp_mst_up_req_work+0x137/0x416
[12579.365933] process_one_work+0x1d0/0x419
[12579.365935] worker_thread+0x11a/0x289
[12579.365938] kthread+0x13e/0x14f
[12579.365941] ? process_one_work+0x419/0x419
[12579.365943] ? kthread_blkcg+0x31/0x31
[12579.365946] ret_from_fork+0x1f/0x30
As get_mst_branch_device_by_guid_helper() is recursive, moving the
condition to the first line allows removing a similar check that stepped
over NULL elements inside the loop.
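A sketch of the reworked helper in drm_dp_mst_topology.c (per the upstream
fix; see the commit diff for the authoritative version):

  static struct drm_dp_mst_branch *
  get_mst_branch_device_by_guid_helper(struct drm_dp_mst_branch *mstb,
                                       const u8 *guid)
  {
          struct drm_dp_mst_branch *found_mstb;
          struct drm_dp_mst_port *port;

          /* Moved to the top: also covers the recursive call below */
          if (!mstb)
                  return NULL;

          if (memcmp(mstb->guid, guid, 16) == 0)
                  return mstb;

          list_for_each_entry(port, &mstb->ports, next) {
                  found_mstb = get_mst_branch_device_by_guid_helper(port->mstb, guid);
                  if (found_mstb)
                          return found_mstb;
          }

          return NULL;
  }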
Fixes: 5e93b8208d3c ("drm/dp/mst: move GUID storage from mgr, port to only mst branch")
Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Lukasz Majczak <lma@semihalf.com>
Reviewed-by: Radoslaw Biernacki <rad@chromium.org>
Signed-off-by: Manasi Navare <navaremanasi@chromium.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20230922063410.23626-1-lma@semihalf.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
commit 9f12cac1bb88e3296990e760d867a98308d6b0ac upstream.
Populate the new member for custom mask values to make sure this value
is applied whenever needed. Also, rename the define holding the value
because this is not only about initialization anymore.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Tested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Link: https://lore.kernel.org/r/20210304092903.8534-1-wsa+renesas@sang-engineering.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[geert: Backport to v5.10.199]
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 61e21cf2d2c3cc5e60e8d0a62a77e250fccda62c upstream.
When guard page debug is enabled and set_page_guard() returns success, we
fail to forward page to point at the start of the next split range, so we
end up splitting unexpectedly in a page range that does not contain the
target page. Move the start page update before set_page_guard() to fix
this.
As we split the wrong page range, the split pages are not able to merge
back to the original order when the target page is put back, and all split
pages except the target page are unusable. To be specific:
Consider target page is the third page in buddy page with order 2.
| buddy-2 | Page | Target | Page |
After breaking down to the target page, we only set the first page to
Guard because of the bug:
| Guard | Page | Target | Page |
When we try put_page_back_buddy() with the target page, the buddy of the
target page is neither a guard nor a buddy page, so the original order-2
page cannot be reconstructed:
| Guard | Page | buddy-0 | Page |
All pages except the target page are not in the free list and are
unusable.
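With the fix, the split loop in break_down_buddy_pages() looks roughly
like this (sketch; helper names as in mm/page_alloc.c):

  while (high > low) {
          high--;
          size >>= 1;

          if (target >= &page[size]) {
                  next_page = page + size;
                  current_buddy = page;
          } else {
                  next_page = page;
                  current_buddy = page + size;
          }
          page = next_page;  /* moved before the set_page_guard() bail-out */

          if (set_page_guard(zone, current_buddy, high, migratetype))
                  continue;

          if (current_buddy != target) {
                  add_to_free_list(current_buddy, zone, high, migratetype);
                  set_buddy_order(current_buddy, high);
          }
  }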
Link: https://lkml.kernel.org/r/20230927094401.68205-1-shikemeng@huaweicloud.com
Fixes: 06be6ff3d2ec ("mm,hwpoison: rework soft offline for free pages")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fab7f259227b8f70aa6d54e1de1a1f5f4729041c upstream.
With the recent removal of vm_dev from devres, its memory is only freed
via the callback virtio_mmio_release_dev(). However, this only takes
effect after device_add() is called by register_virtio_device(). Until
then it's an unmanaged resource and must be explicitly freed on the error
exit path.
This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.
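The fix is a put_device() on the error path of virtio_mmio_probe(),
roughly:

  rc = register_virtio_device(&vm_dev->vdev);
  if (rc)
          /* device_add() didn't run, so drop the reference here to free
           * vm_dev through virtio_mmio_release_dev() */
          put_device(&vm_dev->vdev.dev);

  return rc;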
Cc: stable@vger.kernel.org
Fixes: 55c91fedd03d ("virtio-mmio: don't break lifecycle of vm_dev")
Signed-off-by: Maximilian Heyne <mheyne@amazon.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Message-Id: <20230911090328.40538-1-mheyne@amazon.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
[ Upstream commit 2025b2ca8004c04861903d076c67a73a0ec6dfca ]
mcb-lpc requests a fixed-size memory region to parse the chameleon table;
however, if the chameleon table is smaller than the allocated region, it
could overlap with the IP Cores' memory regions.
After parsing the chameleon table, drop/reallocate the memory region
with the actual chameleon table size.
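A sketch of the idea in mcb_lpc_probe(), assuming chameleon_parse_cells()
returns the table size in bytes (see the next change below); the exact
devm helpers may differ from the real driver:

  ret = chameleon_parse_cells(priv->bus, priv->mem->start, priv->base);
  if (ret < 0)
          goto out_mcb_bus;
  table_size = ret;

  if (table_size < CHAM_HEADER_SIZE) {
          /* Drop the fixed-size mapping and region, then re-request
           * only what the chameleon table actually occupies. */
          devm_iounmap(&pdev->dev, priv->base);
          devm_release_mem_region(&pdev->dev, priv->mem->start,
                                  CHAM_HEADER_SIZE);
          res = devm_request_mem_region(&pdev->dev, priv->mem->start,
                                        table_size, KBUILD_MODNAME);
          if (!res)
                  goto out_mcb_bus;
          priv->base = devm_ioremap(&pdev->dev, priv->mem->start,
                                    table_size);
  }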
Co-developed-by: Jorge Sanjuan Garcia <jorge.sanjuangarcia@duagon.com>
Signed-off-by: Jorge Sanjuan Garcia <jorge.sanjuangarcia@duagon.com>
Signed-off-by: Javier Rodriguez <josejavier.rodriguez@duagon.com>
Signed-off-by: Johannes Thumshirn <jth@kernel.org>
Link: https://lore.kernel.org/r/20230411083329.4506-4-jth@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit a889c276d33d333ae96697510f33533f6e9d9591 ]
The function chameleon_parse_cells() returns the number of cells parsed,
which says nothing about the size of the parsed table. This return value
is only used for error checking; the number of cells itself is never used.
Change the return value to the number of bytes parsed to allow for memory
management improvements.
Co-developed-by: Jorge Sanjuan Garcia <jorge.sanjuangarcia@duagon.com>
Signed-off-by: Jorge Sanjuan Garcia <jorge.sanjuangarcia@duagon.com>
Signed-off-by: Javier Rodriguez <josejavier.rodriguez@duagon.com>
Signed-off-by: Johannes Thumshirn <jth@kernel.org>
Link: https://lore.kernel.org/r/20230411083329.4506-2-jth@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit 03b80ff8023adae6780e491f66e932df8165e3a0 ]
If name_show() is non-unique, this test will try to install a kprobe on
that function, which should fail with EADDRNOTAVAIL. On kernels where
name_show() is unique, this test is skipped.
Link: https://lore.kernel.org/all/20231020104250.9537-3-flaniel@linux.microsoft.com/
Cc: stable@vger.kernel.org
Signed-off-by: Francis Laniel <flaniel@linux.microsoft.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
This is a simple IRQ balancer that polls every X milliseconds and moves
IRQs from the most interrupt-heavy CPU to the least interrupt-heavy CPUs
until the heaviest CPU is no longer the heaviest. IRQs are only moved from
one source CPU to any number of destination CPUs per balance run.
Balancing is skipped if the gap between the most interrupt-heavy CPU and
the least interrupt-heavy CPU is below the configured threshold of
interrupts.
The heaviest IRQs are targeted for migration in order to reduce the number
of IRQs to migrate. If moving an IRQ would reduce overall balance, then it
won't be migrated.
The most interrupt-heavy CPU is calculated by scaling the number of new
interrupts on that CPU to the CPU's current capacity. This way, interrupt
heaviness takes into account factors such as thermal pressure and time
spent processing interrupts rather than just the sheer number of them. This
also makes SBalance aware of CPU asymmetry, where different CPUs can have
different performance capacities and be proportionally balanced.
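As an illustration of that scaling (hypothetical helper, not SBalance's
actual code):

  /* Scale the new-interrupt count by the CPU's current capacity so a
   * throttled or lower-capacity CPU counts as heavier for the same
   * number of interrupts. */
  static unsigned long scaled_irq_load(unsigned int cpu,
                                       unsigned long nr_new_irqs)
  {
          unsigned long cap = arch_scale_cpu_capacity(cpu);

          return nr_new_irqs * SCHED_CAPACITY_SCALE / max(cap, 1UL);
  }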
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
I2C errors are quite common with these controllers and are non-fatal since
the transactions are retried. Silence the noisy logs so that dmesg isn't
destroyed by I2C log spam.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
IRQ balancing is already performed naturally by moving the i2c IRQ to the
CPU that kicks off an i2c transaction. Therefore, opt out from IRQ
balancing operations by setting IRQF_NOBALANCING.
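For example, when requesting the interrupt (sketch; handler and data names
illustrative, not the actual patched driver):

  ret = devm_request_irq(dev, irq, i2c_isr, IRQF_NOBALANCING,
                         dev_name(dev), i2c_dev);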
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Within the display server process, __drm_dbg consumes significant CPU time:
2.40% [kernel] [k] __drm_dbg
Instead of compiling in all DRM debug print statements, stub them out to
reduce runtime overhead and size.
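A minimal sketch of the approach, assuming the drm_print.h entry points
(the real patch may stub different symbols):

  /* Compile DRM debug prints down to nothing while keeping printf-style
   * argument checking via no_printk(). */
  #define __drm_dbg(cat, fmt, ...)        no_printk(fmt, ##__VA_ARGS__)
  #define drm_dev_dbg(dev, cat, fmt, ...) no_printk(fmt, ##__VA_ARGS__)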
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Rename freq_scale to a less generic name, as it will soon be exported for
modules. Since x86 already names its own implementation arch_freq_scale,
let's stick to that.
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Allow drivers to wait with a custom task state specified by exposing the
raw wait_for_common*() functions. This allows code to wait for completions
that are invariant with respect to CPU performance *without* contributing
to load avg, without requiring the wait to be interruptible.
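A driver can then wait like this, for example (assuming wait_for_common()
keeps its kernel/sched/completion.c signature once exported):

  /* Uninterruptible wait that doesn't count toward load average */
  wait_for_common(&done, MAX_SCHEDULE_TIMEOUT, TASK_IDLE);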
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Since we are compiling for a single chipset that is known to support LSE,
the system_uses_lse_atomics() static branch can be eliminated entirely.
Therefore, make system_uses_lse_atomics() always true to always use LSE
atomics, and update ARM64_LSE_ATOMIC_INSN() users to get rid of the extra
nops used for alternatives patching at runtime.
This reduces generated code size by removing LL/SC atomics, which improves
instruction cache footprint.
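Concretely, the helper becomes (sketch of asm/lse.h):

  static __always_inline bool system_uses_lse_atomics(void)
  {
          /* Single known-LSE target: no cpucap static branch needed */
          return true;
  }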
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Let SBalance handle IRQ affinities when it's enabled for better efficiency
and performance.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
This is at odds with sbalance, which balances this IRQ automatically.
Disable the IRQ affinity feature and leave this up to sbalance.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
This implements a simple IRQ-affined PM QoS mechanism for each UFS adapter
which uses atomics to elide locking, and enqueues a worker to apply PM QoS
to the target CPU as soon as a command request is issued.
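A hypothetical sketch of the scheme (all names invented for illustration):

  struct ufs_pm_qos {
          atomic_t                  active;
          int                       irq_cpu;
          struct work_struct        vote_work;
          struct dev_pm_qos_request req;
  };

  /* Called on command issue; the cmpxchg keeps the hot path lock-free
   * and enqueues the vote worker only once per active window. */
  static void ufs_pm_qos_kick(struct ufs_pm_qos *qos)
  {
          if (!atomic_cmpxchg(&qos->active, 0, 1))
                  queue_work_on(qos->irq_cpu, system_highpri_wq,
                                &qos->vote_work);
  }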
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
-EIO errors are sometimes observed with SPI transfers over DMA on RT.
Looking at the pl330 IRQ handler, it appears that it just masks interrupts
and dispatches tasklets to do further processing.
Since the hard IRQ handler just masks interrupts and dispatches work, make
it non-threaded on RT and introduce a threaded handler to offload some
burden from hard IRQ context.
This appears to resolve the sporadic -EIO errors observed on RT.
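The shape of the change, using the standard threaded-IRQ API (handler
names illustrative):

  /* IRQF_NO_THREAD keeps the tiny hard half in hard IRQ context on RT;
   * the heavy lifting runs in the IRQ thread instead of a tasklet. */
  ret = devm_request_threaded_irq(dev, irq, pl330_hard_isr,
                                  pl330_irq_thread_fn, IRQF_NO_THREAD,
                                  dev_name(dev), pl330);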
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Lots of subsystems, such as the TPU, occasionally spam hundreds of
thousands of IOMMU faults which are not only resource heavy due to the IRQ
overhead, but also destroy dmesg/ramoops with tons of spam. These errors
appear to be nonfatal and don't seem actionable for anyone outside of
Samsung or Google, so turn them off by default.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
The possibility of a timeout being used with a PM QoS request incurs
overhead for *all* PM QoS requests due to the necessary calls to
cancel_delayed_work_sync().
Furthermore, using a timeout for a PM QoS request can lead to disastrous
results on power consumption. It's always possible to find a fixed scope in
which a PM QoS request should be applied, so timeouts aren't ever strictly
needed; they're usually just a lazy way of using PM QoS.
Remove the timeout API to eliminate the added overhead for non-timeout PM
QoS requests, and so that timeouts cannot be misused.
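Instead of a timeout, callers bracket the exact scope with plain updates,
e.g. (sketch; on newer kernels the cpu_latency_qos_*() helpers play this
role):

  pm_qos_update_request(&req, 100);  /* low-latency vote for this scope */
  do_transaction();
  pm_qos_update_request(&req, PM_QOS_DEFAULT_VALUE);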
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Nahuel Gómez <nahuelgomez329@gmail.com>
By default, everything is set to 240fps for optimal playback performance.
However, this is not always warranted: when the video bitrate isn't
particularly high, the blanket boost only causes high power consumption.
Reduce and limit the boosting to what is needed. For the decoder, only
apply the boost for UHD video resolutions.
[TenSeventy7: Negative unsigned integer fixes already present on 9610]
Signed-off-by: John Vincent <git@tensevntysevn.cf>
Signed-off-by: ThunderStorms21th <pinakastorm@gmail.com>
* This is never useful to us
Signed-off-by: LibXZR <xzr467706992@163.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
Signed-off-by: forenche <prahul2003@gmail.com>
Flash memory has extremely low latency, so we can set the maximum target
latency to 1ms. Anything exceeding 1ms of latency will cause blk-throttle
to trigger.
Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
Signed-off-by: John Vincent <git@tensevntysevn.cf>
Signed-off-by: ThunderStorms21th <pinakastorm@gmail.com>
Some DT devices, mainly smartphones, need more trip points to allow
finer-grained thermal mitigation, hence allowing a better user experience
(and overall performance), for example by lowering the CPU clocks just a
little at each temperature step.
Taken from: f40204b196
Signed-off-by: Henrique Pereira <hlcpereira@pixelexperience.org>
Signed-off-by: ThunderStorms21th <pinakastorm@gmail.com>