kernel_samsung_a53x/kernel/irq/Kconfig
Sultan Alsawaf d9e7f45cc4 arm64: Disable GENERIC_IRQ_EFFECTIVE_AFF_MASK
The effective affinity mask causes a lot of bugs by virtue of many
set_irq_affinity handlers only setting an effective affinity mask for an
IRQ's parent but not the IRQ itself. Since this is a widespread issue that
would require manual fixing on every different SoC, just disable the
effective affinity mask altogether and use the first CPU in the
configured affinity mask instead.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-11-19 17:54:22 +01:00
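For context, here is a minimal sketch of the fallback this change relies on,
condensed from include/linux/irq.h (exact guards and field names may vary by
kernel version): with GENERIC_IRQ_EFFECTIVE_AFF_MASK disabled,
irq_data_get_effective_affinity_mask() simply hands back the configured
affinity mask, and callers that need a single target CPU typically reduce it
with cpumask_first(). The irq_pick_target_cpu() helper at the end is
hypothetical, added only to illustrate that last step.

#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
static inline
struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
{
	/*
	 * Whatever the irqchip's .irq_set_affinity callback last wrote;
	 * on many SoCs this is only kept up to date for the parent IRQ,
	 * which is the bug class described in the commit message above.
	 */
	return d->common->effective_affinity;
}
#else
static inline
struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
{
	/* Fall back to the affinity mask that was actually requested. */
	return d->common->affinity;
}
#endif

/* Hypothetical helper, for illustration only: pick one target CPU. */
static inline unsigned int irq_pick_target_cpu(struct irq_data *d)
{
	return cpumask_first(irq_data_get_effective_affinity_mask(d));
}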

# SPDX-License-Identifier: GPL-2.0-only
menu "IRQ subsystem"
# Options selectable by the architecture code
# Make sparse irq Kconfig switch below available
config MAY_HAVE_SPARSE_IRQ
bool
# Legacy support, required for itanic
config GENERIC_IRQ_LEGACY
bool
# Enable the generic irq autoprobe mechanism
config GENERIC_IRQ_PROBE
bool
# Use the generic /proc/interrupts implementation
config GENERIC_IRQ_SHOW
bool
# Print level/edge extra information
config GENERIC_IRQ_SHOW_LEVEL
bool
# Supports effective affinity mask
config GENERIC_IRQ_EFFECTIVE_AFF_MASK
bool
depends on !ARM64
# Facility to allocate a hardware interrupt. This is legacy support
# and should not be used in new code. Use irq domains instead.
config GENERIC_IRQ_LEGACY_ALLOC_HWIRQ
bool
# Support for delayed migration from interrupt context
config GENERIC_PENDING_IRQ
bool
# Support for generic irq migrating off cpu before the cpu is offline.
config GENERIC_IRQ_MIGRATION
bool
# Alpha specific irq affinity mechanism
config AUTO_IRQ_AFFINITY
bool
# Interrupt injection mechanism
config GENERIC_IRQ_INJECTION
bool
# Tasklet based software resend for pending interrupts on enable_irq()
config HARDIRQS_SW_RESEND
bool
# Edge style eoi based handler (cell)
config IRQ_EDGE_EOI_HANDLER
bool
# Generic configurable interrupt chip implementation
config GENERIC_IRQ_CHIP
bool
select IRQ_DOMAIN
# Generic irq_domain hw <--> linux irq number translation
config IRQ_DOMAIN
bool
# Support for simulated interrupts
config IRQ_SIM
bool
select IRQ_WORK
select IRQ_DOMAIN
# Support for hierarchical irq domains
config IRQ_DOMAIN_HIERARCHY
bool
select IRQ_DOMAIN
# Support for hierarchical fasteoi+edge and fasteoi+level handlers
config IRQ_FASTEOI_HIERARCHY_HANDLERS
bool
# Generic IRQ IPI support
config GENERIC_IRQ_IPI
bool
depends on SMP
select IRQ_DOMAIN_HIERARCHY
# Generic MSI interrupt support
config GENERIC_MSI_IRQ
bool
# Generic MSI hierarchical interrupt domain support
config GENERIC_MSI_IRQ_DOMAIN
bool
select IRQ_DOMAIN_HIERARCHY
select GENERIC_MSI_IRQ
config IRQ_MSI_IOMMU
bool
config HANDLE_DOMAIN_IRQ
bool
config IRQ_TIMINGS
bool
config GENERIC_IRQ_MATRIX_ALLOCATOR
bool
config GENERIC_IRQ_RESERVATION_MODE
bool
config ARCH_WANTS_IRQ_RAW
bool
# Support forced irq threading
config IRQ_FORCED_THREADING
bool
config SPARSE_IRQ
bool "Support sparse irq numbering" if MAY_HAVE_SPARSE_IRQ
help
Sparse irq numbering is useful for distro kernels that want
to define a high CONFIG_NR_CPUS value but still want to have
low kernel memory footprint on smaller machines.
( Sparse irqs can also be beneficial on NUMA boxes, as they spread
out the interrupt descriptors in a more NUMA-friendly way. )
If you don't know what to do here, say N.
config GENERIC_IRQ_DEBUGFS
bool "Expose irq internals in debugfs"
depends on DEBUG_FS
select GENERIC_IRQ_INJECTION
default n
help
Exposes internal state information through debugfs. Mostly for
developers and debugging of hard to diagnose interrupt problems.
If you don't know what to do here, say N.
config IRQ_SBALANCE
bool "SBalance IRQ balancer"
depends on SMP
default n
help
This is a simple IRQ balancer that polls every X number of
milliseconds and moves IRQs from the most interrupt-heavy CPU to the
least interrupt-heavy CPUs until the heaviest CPU is no longer the
heaviest. IRQs are only moved from one source CPU to any number of
destination CPUs per balance run. Balancing is skipped if the gap
between the most interrupt-heavy CPU and the least interrupt-heavy CPU
is below the configured threshold of interrupts.
The heaviest IRQs are targeted for migration in order to reduce the
number of IRQs to migrate. If moving an IRQ would reduce overall
balance, then it won't be migrated.
The most interrupt-heavy CPU is calculated by scaling the number of
new interrupts on that CPU to the CPU's current capacity. This way,
interrupt heaviness takes into account factors such as thermal
pressure and time spent processing interrupts rather than just the
sheer number of them. This also makes SBalance aware of CPU asymmetry,
where different CPUs can have different performance capacities, so that
each is balanced in proportion to its capacity.
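The policy above can be summarized with a rough, self-contained C sketch of
one polling pass. This is not the actual kernel/irq/sbalance.c code: the
structs and the sbalance_pass() function are invented here purely to
illustrate the threshold check and the migration loop described in the help
text, while the real implementation works on struct irq_desc, per-CPU
interrupt counters and CPU capacity scaling.

struct cpu_stat {
	unsigned int scaled_intrs;	/* new interrupts, scaled to capacity */
};

struct irq_stat {
	int cpu;			/* CPU currently servicing this IRQ */
	unsigned int scaled_intrs;	/* this IRQ's share of the load */
};

/* One balance pass; returns how many IRQs were migrated. */
static int sbalance_pass(struct cpu_stat *cpus, int nr_cpus,
			 struct irq_stat *irqs, int nr_irqs,
			 unsigned int thresh)
{
	int moved = 0, heavy = 0, light = 0, i;

	/* Find the most and least interrupt-heavy CPUs (capacity-scaled). */
	for (i = 1; i < nr_cpus; i++) {
		if (cpus[i].scaled_intrs > cpus[heavy].scaled_intrs)
			heavy = i;
		if (cpus[i].scaled_intrs < cpus[light].scaled_intrs)
			light = i;
	}

	/* Skip balancing entirely when the system is quiet. */
	if (cpus[heavy].scaled_intrs - cpus[light].scaled_intrs < thresh)
		return 0;

	/*
	 * Move the heaviest IRQs off the source CPU onto the currently
	 * lightest CPUs, until the source is no longer the heaviest.
	 */
	for (;;) {
		int best = -1;

		/* Stop once some other CPU has become the heaviest. */
		for (i = 0; i < nr_cpus; i++)
			if (cpus[i].scaled_intrs > cpus[heavy].scaled_intrs)
				return moved;

		/* Target the heaviest IRQ still homed on the source CPU. */
		for (i = 0; i < nr_irqs; i++) {
			if (irqs[i].cpu != heavy)
				continue;
			if (best < 0 ||
			    irqs[i].scaled_intrs > irqs[best].scaled_intrs)
				best = i;
		}
		if (best < 0)
			break;

		/* Skip the move if it would merely invert the imbalance. */
		if (cpus[light].scaled_intrs + 2 * irqs[best].scaled_intrs >
		    cpus[heavy].scaled_intrs)
			break;

		irqs[best].cpu = light;
		cpus[heavy].scaled_intrs -= irqs[best].scaled_intrs;
		cpus[light].scaled_intrs += irqs[best].scaled_intrs;
		moved++;

		/* The lightest CPU may have changed; recompute it. */
		for (i = 0, light = 0; i < nr_cpus; i++)
			if (cpus[i].scaled_intrs < cpus[light].scaled_intrs)
				light = i;
	}

	return moved;
}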
if IRQ_SBALANCE
config IRQ_SBALANCE_POLL_MSEC
int "Polling interval in milliseconds"
default 3000
help
Perform IRQ balancing every X milliseconds.
config IRQ_SBALANCE_THRESH
int "Balance threshold in number of interrupts"
default 1024
help
There needs to be a difference of at least this many new interrupts
between the heaviest and least-heavy CPUs during the last polling
window in order for balancing to occur. This is to avoid balancing
when the system is quiet.
This threshold is compared to the _scaled_ interrupt counts per CPU;
i.e., the number of interrupts scaled to the CPU's capacity.
config SBALANCE_EXCLUDE_CPUS
string "CPUs to exclude from balancing"
help
Comma-separated list of CPUs to exclude from IRQ balancing.
For example, to ignore CPU0, CPU1, and CPU2, it is valid to provide
"0,1-2" or "0-2" or "0,1,2".
endif
endmenu
config GENERIC_IRQ_MULTI_HANDLER
bool
help
Allow the low level IRQ handler to be specified at run time.
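In practice this is the hook behind set_handle_irq(): on architectures that
select GENERIC_IRQ_MULTI_HANDLER, the root irqchip driver installs its
top-level entry point at boot instead of relying on a compile-time arch hook.
A minimal hedged sketch follows; my_root_handle_irq() and my_irqchip_init()
are invented stand-ins for a real driver's functions (compare
gic_handle_irq() in drivers/irqchip/irq-gic.c).

#include <linux/init.h>
#include <linux/irq.h>

/* Invented top-level handler: a real driver would acknowledge the
 * controller here and dispatch into the generic IRQ core. */
static void my_root_handle_irq(struct pt_regs *regs)
{
}

static int __init my_irqchip_init(void)
{
	/* Register the low level IRQ handler at run time; the generic
	 * implementation returns -EBUSY if a handler is already set. */
	return set_handle_irq(my_root_handle_irq);
}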