[RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by André Almeida 1 week, 4 days ago
Hi,

In the last LPC, Mathieu Desnoyers and I presented[0] a proposal to extend the
rseq interface to be able to implement spin locks in userspace correctly. Thomas
Gleixner agreed that this is something that Linux could improve, but asked for
an alternative proposal first: a futex operation that allows spinning on a user
lock inside the kernel. This patchset implements a prototype of this idea for
further discussion.

With the FUTEX2_SPIN flag set during a futex_wait(), the futex value is expected
to be the PID of the lock owner. The kernel then gets the task_struct of the
corresponding PID and checks whether it's running. It spins until the futex is
woken, the owner task is scheduled out, or a timeout expires. If the lock owner
is scheduled out at any time, the syscall falls back to the normal path of
sleeping as usual.

If the futex is woken while we are spinning, we can return to userspace quickly,
avoiding being scheduled out and back in again to wake from a futex_wait(), thus
speeding up the wait operation.

I didn't manage to find a good mechanism to prevent race conditions between
setting *futex = PID in userspace and doing find_get_task_by_vpid(PID) in kernel
space, given that there's enough room for the original PID owner to exit and for
that PID to be recycled for another, unrelated task in the system. I haven't run
benchmarks so far, as I hope to clarify whether this interface makes sense
before doing measurements on it.

This implementation has some debug prints to make it easy to inspect what the
kernel is doing, so you can check whether the futex woke up during spinning or
just slept as in the normal path:

[ 6331] futex_spin: spinned 64738 times, sleeping
[ 6331] futex_spin: woke after 1864606 spins
[ 6332] futex_spin: woke after 1820906 spins
[ 6351] futex_spin: spinned 1603293 times, sleeping
[ 6352] futex_spin: woke after 1848199 spins

[0] https://lpc.events/event/17/contributions/1481/

You can find a small snippet to play with this interface here:

---

/*
 * futex2_spin example, by André Almeida <andrealmeid@igalia.com>
 *
 * gcc spin.c -o spin
 */

#define _GNU_SOURCE
#include <err.h>
#include <errno.h>
#include <linux/futex.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define __NR_futex_wake 454
#define __NR_futex_wait 455

#define WAKE_WAIT_US	10000
#define FUTEX2_SPIN	0x08
#define STACK_SIZE	(1024 * 1024)

#define FUTEX2_SIZE_U32	0x02
#define FUTEX2_PRIVATE	FUTEX_PRIVATE_FLAG

#define timeout_ns  30000000

void *futex;

static inline int futex2_wake(volatile void *uaddr, unsigned long mask, int nr, unsigned int flags)
{
	return syscall(__NR_futex_wake, uaddr, mask, nr, flags);
}

static inline int futex2_wait(volatile void *uaddr, unsigned long val, unsigned long mask,
			      unsigned int flags, struct timespec *timo, clockid_t clockid)
{
	return syscall(__NR_futex_wait, uaddr, val, mask, flags, timo, clockid);
}

void waiter_fn()
{
	struct timespec to;
	unsigned int flags = FUTEX2_PRIVATE | FUTEX2_SIZE_U32 | FUTEX2_SPIN;

	uint32_t child_pid = *(uint32_t *) futex;

	clock_gettime(CLOCK_MONOTONIC, &to);
	to.tv_nsec += timeout_ns;
	if (to.tv_nsec >= 1000000000) {
		to.tv_sec++;
		to.tv_nsec -= 1000000000;
	}

	printf("waiting on PID %d...\n", child_pid);
	if (futex2_wait(futex, child_pid, ~0U, flags, &to, CLOCK_MONOTONIC))
		printf("waiter failed errno %d\n", errno);

	puts("waiting done");
}

int function(int n)
{
	return n + n;
}

#define CHILD_LOOPS 500000

static int child_fn(void *arg)
{
	int i, n = 2;

	for (i = 0; i < CHILD_LOOPS; i++)
		n = function(n);

	futex2_wake(futex, ~0U, 1, FUTEX2_SIZE_U32 | FUTEX_PRIVATE_FLAG);

	puts("child thread is done");

	return 0;
}

int main() {
	uint32_t child_pid = 0;
	char *stack;

	futex = &child_pid;

	stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);

	if (stack == MAP_FAILED)
		err(EXIT_FAILURE, "mmap");

	child_pid = clone(child_fn, stack + STACK_SIZE, CLONE_VM, NULL);

	waiter_fn();

	usleep(WAKE_WAIT_US * 10);

	return 0;
}

---

André Almeida (1):
  futex: Add FUTEX_SPIN operation

 include/uapi/linux/futex.h |  2 +-
 kernel/futex/futex.h       |  6 ++-
 kernel/futex/waitwake.c    | 79 +++++++++++++++++++++++++++++++++++++-
 3 files changed, 83 insertions(+), 4 deletions(-)

-- 
2.44.0

Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by Christian Brauner 1 week, 3 days ago
On Thu, Apr 25, 2024 at 05:43:31PM -0300, André Almeida wrote:
> Hi,
> 
> In the last LPC, Mathieu Desnoyers and I presented[0] a proposal to extend the
> rseq interface to be able to implement spin locks in userspace correctly. Thomas
> Gleixner agreed that this is something that Linux could improve, but asked for
> an alternative proposal first: a futex operation that allows to spin a user
> lock inside the kernel. This patchset implements a prototype of this idea for
> further discussion.
> 
> With FUTEX2_SPIN flag set during a futex_wait(), the futex value is expected to
> be the PID of the lock owner. Then, the kernel gets the task_struct of the
> corresponding PID, and checks if it's running. It spins until the futex
> is awaken, the task is scheduled out or if a timeout happens.  If the lock owner
> is scheduled out at any time, then the syscall follows the normal path of
> sleeping as usual.
> 
> If the futex is awaken and we are spinning, we can return to userspace quickly,
> avoid the scheduling out and in again to wake from a futex_wait(), thus
> speeding up the wait operation.
> 
> I didn't manage to find a good mechanism to prevent race conditions between
> setting *futex = PID in userspace and doing find_get_task_by_vpid(PID) in kernel
> space, giving that there's enough room for the original PID owner exit and such
> PID to be relocated to another unrelated task in the system. I didn't performed

One option would be to also allow pidfds. Starting with v6.9 they can be
used to reference individual threads.

So for the really fast case where you have multiple threads and you
somehow really do care about the impact of the atomic_long_inc() on
pidfd_file->f_count during fdget() (for the single-threaded case the
increment is elided), callers can pass the TID. But in cases where the
inc and put aren't performance sensitive, you can use pidfds.

So something like the _completely untested_ below:

diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 94feac92cf4f..b842680aa7e0 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -4,6 +4,9 @@
 #include <linux/sched/task.h>
 #include <linux/sched/signal.h>
 #include <linux/freezer.h>
+#include <linux/cleanup.h>
+#include <linux/file.h>
+#include <uapi/linux/pidfd.h>
 
 #include "futex.h"
 
@@ -385,19 +388,29 @@ static int futex_spin(struct futex_hash_bucket *hb, struct futex_q *q,
 		       struct hrtimer_sleeper *timeout, void __user *uaddr, u32 val)
 {
 	struct task_struct *p;
-	u32 pid, uval;
+	struct pid *pid;
+	u32 pidfd, uval;
 	unsigned int i = 0;
 
 	if (futex_get_value_locked(&uval, uaddr))
 		return -EFAULT;
 
-	pid = uval;
+	pidfd = uval;
+	CLASS(fd, f)(pidfd);
 
-	p = find_get_task_by_vpid(pid);
-	if (!p) {
-		printk("%s: no task found with PID %d\n", __func__, pid);
-		return -EAGAIN;
-	}
+	if (!f.file)
+		return -EBADF;
+
+	pid = pidfd_pid(f.file);
+	if (IS_ERR(pid))
+		return PTR_ERR(pid);
+
+	if (f.file->f_flags & PIDFD_THREAD)
+		p = get_pid_task(pid, PIDTYPE_PID); /* individual thread */
+	else
+		p = get_pid_task(pid, PIDTYPE_TGID); /* thread-group leader */
+	if (!p)
+		return -ESRCH;
 
 	if (unlikely(p->flags & PF_KTHREAD)) {
 		put_task_struct(p);


> [...]
Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by André Almeida 5 days, 7 hours ago
Hi Christian,

Em 26/04/2024 07:26, Christian Brauner escreveu:
> On Thu, Apr 25, 2024 at 05:43:31PM -0300, André Almeida wrote:
>> [...]
> 
> One option would be to also allow pidfds. Starting with v6.9 they can be
> used to reference individual threads.
> 
> So for the really fast case where you have multiple threads and you
> somehow may really do care about the impact of the atomic_long_inc() on
> pidfd_file->f_count during fdget() (for the single-threaded case the
> increment is elided), callers can pass the TID. But in cases where the
> inc and put aren't a performance sensitive, you can use pidfds.
> 

Thank you very much for making the effort here, much appreciated :)

While I agree that pidfds would fix the PID race conditions, I will move 
this interface to support TIDs instead, as noted by Florian and Peter. 
With TIDs the race conditions are diminished, I reckon?
Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by Christian Brauner 4 days, 22 hours ago
On Wed, May 01, 2024 at 08:44:36PM -0300, André Almeida wrote:
> Hi Christian,
> 
> Em 26/04/2024 07:26, Christian Brauner escreveu:
> > On Thu, Apr 25, 2024 at 05:43:31PM -0300, André Almeida wrote:
> > > [...]
> > 
> > One option would be to also allow pidfds. Starting with v6.9 they can be
> > used to reference individual threads.
> > 
> > So for the really fast case where you have multiple threads and you
> > somehow may really do care about the impact of the atomic_long_inc() on
> > pidfd_file->f_count during fdget() (for the single-threaded case the
> > increment is elided), callers can pass the TID. But in cases where the
> > inc and put aren't a performance sensitive, you can use pidfds.
> > 
> 
> Thank you very much for making the effort here, much appreciated :)
> 
> While I agree that pidfds would fix the PID race conditions, I will move
> this interface to support TIDs instead, as noted by Florian and Peter. With
> TID the race conditions are diminished I reckon?

Unless I'm missing something, the question here is PID (as in TGID, aka
the thread-group leader ID gotten via getpid()) vs TID (the thread-specific
ID gotten via gettid()). You want the thread-specific ID, as you want to
interact with the futex state of a specific thread, not the thread-group
leader.

Aside from that TIDs are subject to the same race conditions that PIDs
are. They are allocated from the same pool (see alloc_pid()).
Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by Florian Weimer 4 days, 21 hours ago
* Christian Brauner:

> Unless I'm missing something the question here is PID (as in TGID aka
> thread-group leader id gotten via getpid()) vs TID (thread specific id
> gotten via gettid()). You want the thread-specific id as you want to
> interact with the futex state of a specific thread not the thread-group
> leader.
>
> Aside from that TIDs are subject to the same race conditions that PIDs
> are. They are allocated from the same pool (see alloc_pid()).

For most mutex types (but not robust mutexes), it is undefined in
userspace if a thread exits while it has locked a mutex.  Such a usage
condition would ensure that the race doesn't happen, I believe.

From a glibc perspective, we typically cannot use long-term file
descriptors (that are kept open across function calls) because some
applications do not expect them, or even close them behind our back.

Thanks,
Florian
Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by Christian Brauner 4 days, 21 hours ago
On Thu, May 02, 2024 at 11:51:56AM +0200, Florian Weimer wrote:
> * Christian Brauner:
> 
> > Unless I'm missing something the question here is PID (as in TGID aka
> > thread-group leader id gotten via getpid()) vs TID (thread specific id
> > gotten via gettid()). You want the thread-specific id as you want to
> > interact with the futex state of a specific thread not the thread-group
> > leader.
> >
> > Aside from that TIDs are subject to the same race conditions that PIDs
> > are. They are allocated from the same pool (see alloc_pid()).
> 
> For most mutex types (but not robust mutexes), it is undefined in
> userspace if a thread exits while it has locked a mutex.  Such a usage
> condition would ensure that the race doesn't happen, I believe.

The argument is a bit shaky imho because the race not being able to
happen is predicated on no one being careless enough to exit with a
mutex held. That doesn't do anything against someone doing it on
purpose.

> 
> From a glibc perspective, we typically cannot use long-term file
> descriptors (that are kept open across function calls) because some
> applications do not expect them, or even close them behind our back.

Yeah, good point. Note, I suggested it as an extension not as a
replacement for the TID. I still think it would be a useful extension in
general.
Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by Florian Weimer 4 days, 20 hours ago
* Christian Brauner:

>> From a glibc perspective, we typically cannot use long-term file
>> descriptors (that are kept open across function calls) because some
>> applications do not expect them, or even close them behind our back.
>
> Yeah, good point. Note, I suggested it as an extension not as a
> replacement for the TID. I still think it would be a useful extension in
> general.

Applications will need a way to determine when it is safe to close the
pidfd, though.  If we automate this in glibc (in the same way we handle
thread stack deallocation for example), I think we are essentially back
to square one, except that pidfd collisions are much more likely than
TID collisions, especially on systems that have adjusted kernel.pid_max.
(File descriptor allocation is designed to maximize collisions, after
all.)

Thanks,
Florian
Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by Christian Brauner 4 days, 18 hours ago
On Thu, May 02, 2024 at 12:39:34PM +0200, Florian Weimer wrote:
> * Christian Brauner:
> 
> >> From a glibc perspective, we typically cannot use long-term file
> >> descriptors (that are kept open across function calls) because some
> >> applications do not expect them, or even close them behind our back.
> >
> > Yeah, good point. Note, I suggested it as an extension not as a
> > replacement for the TID. I still think it would be a useful extension in
> > general.
> 
> Applications will need a way to determine when it is safe to close the
> pidfd, though.  If we automate this in glibc (in the same way we handle
> thread stack deallocation for example), I think we are essentially back
> to square one, except that pidfd collisions are much more likely than
> TID collisions, especially on systems that have adjusted kernel.pid_max.
> (File descriptor allocation is designed to maximize collisions, after
> all.)

(Note that with pidfs (current mainline), pidfds have 64bit unique inode
numbers that are unique for the lifetime of the system. So they can
reliably be compared via statx() and so on.)
Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by Florian Weimer 1 week, 3 days ago
* André Almeida:

> With FUTEX2_SPIN flag set during a futex_wait(), the futex value is
> expected to be the PID of the lock owner. Then, the kernel gets the
> task_struct of the corresponding PID, and checks if it's running. It
> spins until the futex is awaken, the task is scheduled out or if a
> timeout happens.  If the lock owner is scheduled out at any time, then
> the syscall follows the normal path of sleeping as usual.

PID or TID?

I think we'd like to have at least one, possibly more, bits for free
use, so the kernel ID comparison should at least mask off the MSB,
possibly more.

I haven't really thought about the proposed locking protocol, sorry.

Thanks,
Florian
Re: [RFC PATCH 0/1] Add FUTEX_SPIN operation
Posted by Peter Zijlstra 1 week, 3 days ago
On Fri, Apr 26, 2024 at 11:43:51AM +0200, Florian Weimer wrote:
> * André Almeida:
> 
> > With FUTEX2_SPIN flag set during a futex_wait(), the futex value is
> > expected to be the PID of the lock owner. Then, the kernel gets the
> > task_struct of the corresponding PID, and checks if it's running. It
> > spins until the futex is awaken, the task is scheduled out or if a
> > timeout happens.  If the lock owner is scheduled out at any time, then
> > the syscall follows the normal path of sleeping as usual.
> 
> PID or TID?

TID, just like PI_LOCK I would presume.

> I think we'd like to have at least one, possibly more, bits for free
> use, so the kernel ID comparison should at least mask off the MSB,
> possibly more.

Yeah, it should be using FUTEX_TID_MASK -- just like PI_LOCK :-)

I suppose the question is if this thing should then also imply
FUTEX_WAITERS or not.