The problem:
Some places in the kernel implement a parallel programming strategy
consisting of local_locks() for most of the work, with the few rare remote
operations scheduled on the target CPU. This keeps cache bouncing low, since
the cacheline tends to stay local, and avoids the cost of locks in non-RT
kernels, even though the few remote operations are expensive due to
scheduling overhead.
On the other hand, for RT workloads this can represent a problem: getting
an important workload scheduled out to deal with remote requests is
sure to introduce unexpected deadline misses.
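For reference, the existing pattern looks roughly like the sketch below.
This is a simplified illustration modeled on the per-CPU batching code in
mm/swap.c; the names my_pcp, do_local_op(), my_work_fn() and drain_cpu()
are made up for the example.

#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

struct my_pcp {
        local_lock_t lock;
        struct work_struct work;       /* assumed: INIT_WORK(&work, my_work_fn) at init */
        /* per-CPU data protected by 'lock' */
};

static DEFINE_PER_CPU(struct my_pcp, my_pcp) = {
        .lock = INIT_LOCAL_LOCK(lock),
};

/* Fast path: runs on the owning CPU and only touches local data. */
static void do_local_op(void)
{
        local_lock(&my_pcp.lock);
        /* operate on this_cpu_ptr(&my_pcp) */
        local_unlock(&my_pcp.lock);
}

/* Work function: also runs on the owning CPU, via the workqueue. */
static void my_work_fn(struct work_struct *work)
{
        local_lock(&my_pcp.lock);
        /* drain this_cpu_ptr(&my_pcp) */
        local_unlock(&my_pcp.lock);
}

/* Rare remote operation: preempts whatever is running on 'cpu'. */
static void drain_cpu(int cpu)
{
        struct my_pcp *pcp = per_cpu_ptr(&my_pcp, cpu);

        queue_work_on(cpu, system_wq, &pcp->work);
}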
The idea:
Currently, with PREEMPT_RT=y, local_locks() become per-CPU spinlocks.
In this case, instead of scheduling work on a remote CPU, it should
be safe to grab that remote CPU's per-CPU spinlock and run the required
work locally. The major cost, taking and releasing the lock in every
local operation, is already being paid on PREEMPT_RT.
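(For context, with CONFIG_PREEMPT_RT=y the local_lock primitives map
roughly to the following; this is a simplified approximation, see
include/linux/local_lock_internal.h for the real definitions:)

typedef spinlock_t local_lock_t;

#define __local_lock(lock)                              \
        do {                                            \
                migrate_disable();                      \
                spin_lock(this_cpu_ptr((lock)));        \
        } while (0)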
Also, there is no need to worry about extra cache bouncing:
The cacheline invalidation already happens due to schedule_work_on().
This will avoid schedule_work_on(), and thus avoid scheduling out an
RT workload.
Proposed solution:
Introduce a new interface, Queue PerCPU Work (QPW), to replace the
workqueue in the use case described above.
If PREEMPT_RT=n, this interface just wraps the current
local_locks + workqueue behavior, so no change in runtime is expected.
If PREEMPT_RT=y or CONFIG_QPW=y, queue_percpu_work_on(cpu, ...) will
lock that CPU's per-cpu structure and perform the work on it locally.
This is possible because, in the functions that may operate on remote
per-cpu structures, the local_lock (which on PREEMPT_RT is already a
per-CPU spinlock) is replaced by a qpw_spinlock(), which can take the
per-CPU spinlock of the CPU passed as a parameter.
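As a rough illustration, converting the earlier sketch could look like the
following. Only queue_percpu_work_on() and qpw_spinlock() are named in this
cover letter; the unlock counterpart, the qpw member and the exact
signatures below are guesses made up for the example, not the series'
actual API.

/*
 * Work function: takes the *target* CPU's spinlock, so with QPW enabled
 * it is safe to run it from any CPU.
 */
static void my_work_fn(int cpu)
{
        struct my_pcp *pcp = per_cpu_ptr(&my_pcp, cpu);

        qpw_spinlock(&my_pcp.lock, cpu);        /* lock cpu's instance of the per-CPU lock */
        /* drain pcp */
        qpw_spinunlock(&my_pcp.lock, cpu);      /* unlock name assumed */
}

static void drain_cpu(int cpu)
{
        /*
         * With QPW disabled this wraps queue_work_on() as before; with QPW
         * enabled it takes cpu's spinlock and runs the work right here,
         * without scheduling anything on the (possibly isolated) target CPU.
         */
        queue_percpu_work_on(cpu, &per_cpu_ptr(&my_pcp, cpu)->qpw);     /* member/signature assumed */
}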
RFC->v1:
- Introduce CONFIG_QPW and qpw= kernel boot option to enable
remote spinlocking and execution even on !CONFIG_PREEMPT_RT
kernels (Leonardo Bras).
- Move buffer_head draining to separate workqueue (Marcelo Tosatti).
- Convert mlock per-CPU page lists to QPW (Marcelo Tosatti).
- Drop the memcontrol conversion (as isolated CPUs are not targets
of queue_work_on anymore).
- Rebase SLUB against Vlastimil's slab/next.
- Add basic document for QPW (Waiman Long).
The following testcase triggers lru_add_drain_all on an isolated CPU
(that does sys_write to a file before entering its realtime
loop).
/*
* Simulates a low latency loop program that is interrupted
* due to lru_add_drain_all. To trigger lru_add_drain_all, run:
*
* blockdev --flushbufs /dev/sdX
*
*/
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <stdarg.h>
#include <pthread.h>
#include <sched.h>
#include <unistd.h>
int cpu;
static void *run(void *arg)
{
        pthread_t current_thread;
        cpu_set_t cpuset;
        int ret;
        unsigned long nrloops = 0;
        struct sched_param sched_p;
        pid_t pid;
        int fd;
        char buf[] = "xxxxxxxxxxx";

        CPU_ZERO(&cpuset);
        CPU_SET(cpu, &cpuset);

        current_thread = pthread_self();
        ret = pthread_setaffinity_np(current_thread, sizeof(cpu_set_t), &cpuset);
        if (ret) {
                fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(ret));
                exit(1);
        }

        memset(&sched_p, 0, sizeof(struct sched_param));
        sched_p.sched_priority = 1;
        pid = gettid();
        ret = sched_setscheduler(pid, SCHED_FIFO, &sched_p);
        if (ret) {
                perror("sched_setscheduler");
                exit(1);
        }

        /* O_CREAT requires a mode argument. */
        fd = open("/tmp/tmpfile", O_RDWR|O_CREAT|O_TRUNC, 0644);
        if (fd == -1) {
                perror("open");
                exit(1);
        }

        ret = write(fd, buf, sizeof(buf));
        if (ret == -1) {
                perror("write");
                exit(1);
        }

        /* Realtime busy loop: any interruption here shows up as latency. */
        do {
                nrloops = nrloops + 2;
                nrloops--;
        } while (1);
}
int main(int argc, char *argv[])
{
        int ret;
        pthread_t thread;
        char *endptr, *str;
        struct sched_param sched_p;
        pid_t pid;

        if (argc != 2) {
                printf("usage: %s cpu-nr\n", argv[0]);
                printf("where cpu-nr is the CPU to pin the thread to\n");
                exit(1);
        }
        str = argv[1];
        cpu = strtol(str, &endptr, 10);
        if (cpu < 0) {
                printf("strtol returns %d\n", cpu);
                exit(1);
        }
        printf("cpunr=%d\n", cpu);

        memset(&sched_p, 0, sizeof(struct sched_param));
        sched_p.sched_priority = 1;
        pid = getpid();
        ret = sched_setscheduler(pid, SCHED_FIFO, &sched_p);
        if (ret) {
                perror("sched_setscheduler");
                exit(1);
        }

        ret = pthread_create(&thread, NULL, run, NULL);
        if (ret) {
                fprintf(stderr, "pthread_create: %s\n", strerror(ret));
                exit(1);
        }

        sleep(5000);

        pthread_join(thread, NULL);

        return 0;
}
On Fri, Feb 06, 2026 at 11:34:30AM -0300, Marcelo Tosatti wrote:
> The problem:
> Some places in the kernel implement a parallel programming strategy
> consisting of local_locks() for most of the work, and some rare remote
> operations are scheduled on target cpu. This keeps cache bouncing low since
> cacheline tends to be mostly local, and avoids the cost of locks in non-RT
> kernels, even though the very few remote operations will be expensive due
> to scheduling overhead.
>
> On the other hand, for RT workloads this can represent a problem: getting
> an important workload scheduled out to deal with remote requests is
> sure to introduce unexpected deadline misses.
>
> The idea:
> Currently with PREEMPT_RT=y, local_locks() become per-cpu spinlocks.
> In this case, instead of scheduling work on a remote cpu, it should
> be safe to grab that remote cpu's per-cpu spinlock and run the required
> work locally. That major cost, which is un/locking in every local function,
> already happens in PREEMPT_RT.
>
> Also, there is no need to worry about extra cache bouncing:
> The cacheline invalidation already happens due to schedule_work_on().
>
> This will avoid schedule_work_on(), and thus avoid scheduling-out an
> RT workload.
>
Marcelo, thanks for finishing this series!
> Proposed solution:
> A new interface called Queue PerCPU Work (QPW), which should replace
> Work Queue in the above mentioned use case.
>
> If PREEMPT_RT=n, this interface just wraps the current
Are we enabling it by default in PREEMPT_RT=y? If not,
If CONFIG_QPW=n or qpw=0, this interface just wraps the current
> local_locks + WorkQueue behavior, so no expected change in runtime.
>
> If PREEMPT_RT=y, or CONFIG_QPW=y, queue_percpu_work_on(cpu,...) will
Same here
If CONFIG_QPW=y and qpw=1, queue_percpu_work_on(cpu,...) will
> lock that cpu's per-cpu structure and perform work on it locally.
> This is possible because on functions that can be used for performing
> remote work on remote per-cpu structures, the local_lock (which is already
> a this_cpu spinlock()), will be replaced by a qpw_spinlock(), which
> is able to get the per_cpu spinlock() for the cpu passed as parameter.
>
> RFC->v1:
>
> - Introduce CONFIG_QPW and qpw= kernel boot option to enable
> remote spinlocking and execution even on !CONFIG_PREEMPT_RT
> kernels (Leonardo Bras).
> - Move buffer_head draining to separate workqueue (Marcelo Tosatti).
> - Convert mlock per-CPU page lists to QPW (Marcelo Tosatti).
> - Drop the memcontrol conversion (as isolated CPUs are not targets
> of queue_work_on anymore).
> - Rebase SLUB against Vlastimil's slab/next.
> - Add basic document for QPW (Waiman Long).
A document was a nice touch :)
>
>
> The following testcase triggers lru_add_drain_all on an isolated CPU
> (that does sys_write to a file before entering its realtime
> loop).
Also, having the reproducer in the cover letter was a great idea!
Thanks!
Leo