You should keep the default `CPUSchedulingPolicy=other` and only use `Nice` with caution. Real-time processes can block the whole system until they yield, so RT scheduling should be reserved for threads running real-time code, engineered to yield as soon as their work is done. A Python interpreter is the opposite of that.
Personally I do set a `Nice=` level below zero. However, one issue is that it sets the priority of all klippy threads, some of which handle background tasks (see here).
Ideally klippy should use `setpriority(2)` on the background threads, something like: nice = -20 < C serial thread < Python serial thread < main thread < background logger < 0 (lower nice values mean higher priority). The locations starting the threads are highlighted in this patch that sets the thread names for profiling purposes.
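As a rough sketch of that idea (the function and the usage site are hypothetical; it assumes Python 3.8+ for `threading.get_native_id()` and Linux semantics, where `PRIO_PROCESS` with a thread id affects only that thread):

```python
import os
import threading

def set_thread_nice(nice_level):
    # On Linux, PRIO_PROCESS with a thread id (TID) changes the nice
    # value of that thread only. Raising the nice value needs no
    # privileges; lowering it below zero typically requires
    # CAP_SYS_NICE or a suitable RLIMIT_NICE.
    tid = threading.get_native_id()  # Python 3.8+
    os.setpriority(os.PRIO_PROCESS, tid, nice_level)

# Hypothetical usage at the top of a background thread's entry point:
def background_logger_main():
    set_thread_nice(0)  # run the logger at default priority
    # ... logging loop ...
```

This plays well with a negative service-wide `Nice=`: the main and serial threads inherit it, while background threads raise their own value back toward zero.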
Another trick is to restrict all the system processes to one or two cores by putting this in `/etc/systemd/system.conf`:
```ini
[Manager]
CPUAffinity=0,1
```
And then in `klipper.service`, whitelist the other cores:
```ini
[Service]
CPUAffinity=1,2,3
```
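As a quick sanity check (a sketch; the PID below is a placeholder you would look up, e.g. with `systemctl status klipper`), `os.sched_getaffinity()` reports which cores a running process is actually allowed to use:

```python
import os

klippy_pid = 1234  # placeholder: the real PID of the klippy process

# With the unit file above this should print {1, 2, 3}.
print(os.sched_getaffinity(klippy_pid))
```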
That way cores 2 and 3 are reserved for the klippy process. I share core 1 with the other processes as I think it might be beneficial for communication with moonraker, but I haven't conducted enough measurements (with `perf sched`) to be confident about it. The `smp_affinity` of the USB's IRQ can also be set to the klippy-only cores.
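For the IRQ part, here is a sketch of the idea, assuming root and an IRQ number first looked up in `/proc/interrupts` (the number below is a placeholder):

```python
usb_irq = 62  # placeholder: the USB controller's IRQ from /proc/interrupts

# smp_affinity_list takes a CPU list ("2-3"); the smp_affinity file
# takes a hex bitmask instead. Requires root, and the setting does
# not persist across reboots.
with open(f"/proc/irq/{usb_irq}/smp_affinity_list", "w") as f:
    f.write("2-3")
```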
Just to clarify: at this point these are not recommendations but merely experimental tweaks I've tested in my own environment. I haven't even formally demonstrated that they do not have any negative effects.
I run tests with the CPU down-clocked to 480 MHz (`cpupower frequency-set -g powersave`), loading the cores with `openssl speed`, while I test the max speed on two steppers at 128 microsteps. I have not yet found a way to measure the tightness of klippy's deadlines without running it to its limits.
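For reference, one way to generate that kind of load (a sketch of the setup, not a turnkey benchmark; the core numbers and the `sha256` argument to `openssl speed` are just examples):

```python
import subprocess

# Start one CPU-bound `openssl speed` worker per core, pinned with
# taskset, to create contention while the steppers run at max speed.
for cpu in (0, 1, 2, 3):
    subprocess.Popen(["taskset", "-c", str(cpu),
                      "openssl", "speed", "sha256"])
```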