Bug#972709: Wishlist/RFC: Change to CONFIG_PREEMPT_NONE in linux-image-cloud-*
Package: linux-image-cloud-amd64
Version: 4.19+105+deb10u7
Severity: wishlist
Since cloud images are mostly run for server workloads in headless
environments accessed via network only, it would be better if
"linux-image-cloud-*" kernels were compiled with CONFIG_PREEMPT_NONE=y
("No Forced Preemption (Server)").
Currently those packages use CONFIG_PREEMPT_VOLUNTARY=y ("Voluntary
Kernel Preemption (Desktop)").
CONFIG_PREEMPT_NONE description from kernel help:
"This is the traditional Linux preemption model, geared towards
throughput. It will still provide good latencies most of the time,
but there are no guarantees and occasional longer delays are
possible.
Select this option if you are building a kernel for a server
or scientific/computation system, or if you want to maximize the
raw processing power of the kernel, irrespective of scheduling
latencies."
Help on CONFIG_PREEMPT_VOLUNTARY:
"This option reduces the latency of the kernel by adding more
"explicit preemption points" to the kernel code. These new
preemption points have been selected to reduce the maximum latency
of rescheduling, providing faster application reactions, at the cost
of slightly lower throughput.
This allows reaction to interactive events by allowing a low
priority process to voluntarily preempt itself even if it is in
kernel mode executing a system call. This allows applications to run
more 'smoothly' even when the system is under load.
Select this if you are building a kernel for a desktop system."
In other words, choosing CONFIG_PREEMPT_NONE favours throughput over
latency; low latency matters more in GUI environments (where an
unresponsive mouse is bad, for example) than on servers.
A second benefit of CONFIG_PREEMPT_NONE is that it reduces context
switch overhead, which means more CPU cycles are available for doing
useful computing. This is especially important in virtualized
environments and/or where guest scheduling is based on CPU "credits"
(for example, AWS).
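As a rough way to observe that overhead, one can sample the kernel's
global context-switch counter from /proc/stat (a sketch; `vmstat 1`
reports the same figure in its "cs" column):

```shell
# Approximate system-wide context-switch rate: read the cumulative
# "ctxt" counter from /proc/stat twice, one second apart.
c1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
c2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/s: $((c2 - c1))"
```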
--
FV