During LPC 2025, I presented a session about creating a new syscall for
robust_list[0][1]. However, most of the session discussion wasn't much related
to the new syscall itself, but much more related to an old bug that exists in
the current robust_list mechanism.

Since at least 2012, there's an open bug reporting a race condition, as
Carlos O'Donell pointed out:

"File corruption race condition in robust mutex unlocking"
https://sourceware.org/bugzilla/show_bug.cgi?id=14485

To help understand the bug, I've created a reproducer (patch 1/2) and a
companion kernel hack (patch 2/2) that helps to make the race condition
more likely. When the bug happens, the reproducer shows a message
comparing the original memory with the corrupted one:

"Memory was corrupted by the kernel: 8001fe8d8001fe8d vs 8001fe8dc0000000"

I'm not sure yet what would be the appropriate approach to fix it, so I
decided to reach the community before moving forward in some direction.
One suggestion from Peter[2] revolves around serializing the mmap() and the
robust list exit path, which might cause overheads for the common case,
where list_op_pending is empty.

However, given that there's a new interface being prepared, this could
also give the opportunity to rethink how list_op_pending works, and get
rid of the race condition by design.

Feedback is very much welcome.

Thanks!
	André

[0] https://lore.kernel.org/lkml/20251122-tonyk-robust_futex-v6-0-05fea005a0fd@igalia.com/
[1] https://lpc.events/event/19/contributions/2108/
[2] https://lore.kernel.org/lkml/20241219171344.GA26279@noisy.programming.kicks-ass.net/

André Almeida (2):
  futex: Create reproducer for robust_list race condition
  futex: Add debug delays

 kernel/futex/core.c |  10 +++
 robust_bug.c        | 178 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 188 insertions(+)
 create mode 100644 robust_bug.c

--
2.53.0
+CC libc-alpha.
On 2026-02-20 15:26, André Almeida wrote:
> During LPC 2025, I presented a session about creating a new syscall for
> robust_list[0][1]. However, most of the session discussion wasn't much related
> to the new syscall itself, but much more related to an old bug that exists in
> the current robust_list mechanism.
>
> Since at least 2012, there's an open bug reporting a race condition, as
> Carlos O'Donell pointed out:
>
> "File corruption race condition in robust mutex unlocking"
> https://sourceware.org/bugzilla/show_bug.cgi?id=14485
>
> To help understand the bug, I've created a reproducer (patch 1/2) and a
> companion kernel hack (patch 2/2) that helps to make the race condition
> more likely. When the bug happens, the reproducer shows a message
> comparing the original memory with the corrupted one:
>
> "Memory was corrupted by the kernel: 8001fe8d8001fe8d vs 8001fe8dc0000000"
>
> I'm not sure yet what would be the appropriated approach to fix it, so I
> decided to reach the community before moving forward in some direction.
> One suggestion from Peter[2] resolves around serializing the mmap() and the
> robust list exit path, which might cause overheads for the common case,
> where list_op_pending is empty.
>
> However, giving that there's a new interface being prepared, this could
> also give the opportunity to rethink how list_op_pending works, and get
> rid of the race condition by design.
>
> Feedback is very much welcome.
Looking at this bug, one thing I'm starting to consider is that it
appears to be an issue inherent to the lack of synchronization between
pthread_mutex_destroy(3) and the per-thread list_op_pending fields,
and not so much a kernel issue.
Here is why I think the issue is purely userspace:
Let's suppose we have a shared memory area across Process 1 and Process 2,
which internally has its own custom memory allocator in userspace to
allocate/free space within that shared memory.
Process 1, Thread A stumbles through the scenario highlighted by this bug, and
basically gets preempted at this FIXME in libc __pthread_mutex_unlock_full():
  if (__glibc_unlikely ((atomic_exchange_release (&mutex->__data.__lock, 0)
                         & FUTEX_WAITERS) != 0))
    futex_wake ((unsigned int *) &mutex->__data.__lock, 1, private);

  /* We must clear op_pending after we release the mutex.
     FIXME However, this violates the mutex destruction requirements
     because another thread could acquire the mutex, destroy it, and
     reuse the memory for something else; then, if this thread crashes,
     and the memory happens to have a value equal to the TID, the kernel
     will believe it is still related to the mutex (which has been
     destroyed already) and will modify some other random object.  */
  __asm ("" ::: "memory");
  THREAD_SETMEM (THREAD_SELF, robust_head.list_op_pending, NULL);
Then Process 1, Thread B runs, grabs the lock, releases it, and based on
program state it knows it can pthread_mutex_destroy() this lock, free its
associated memory through the custom shared memory allocator, and allocate
it for other purposes. Then we get to the point where Process 1 is
killed, and where the robust futex kernel code corrupts data in shared
memory because of the dangling list_op_pending pointer.
That shared memory data is still observable by Process 2, which will see
a corrupted state.
Notice how this all happens without any munmap(2)/mmap(2) in the sequence?
This is why I think this is purely a userspace issue rather than an issue
we can solve by adding extra synchronization in the kernel.
The one point we have in that sequence where I think we can add synchronization
is pthread_mutex_destroy(3) in libc. One possible "big hammer" solution would be
to make pthread_mutex_destroy iterate on all other threads' list_op_pending
fields and busy-wait if it finds that the mutex address is in use. It would of
course only have to do that for robust futexes.
If that big hammer solution is not fast enough for many-threaded use-cases,
then we can think of other approaches such as adding a reference counter
in the mutex structure, or introducing hazard pointers in userspace to reduce
synchronization iteration from nr_threads to nr_cpus (or even down to max
rseq mm_cid).
Thoughts ?
Thanks,
Mathieu
>
> Thanks!
> André
>
> [0] https://lore.kernel.org/lkml/20251122-tonyk-robust_futex-v6-0-05fea005a0fd@igalia.com/
> [1] https://lpc.events/event/19/contributions/2108/
> [2] https://lore.kernel.org/lkml/20241219171344.GA26279@noisy.programming.kicks-ass.net/
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On 2026-02-20 16:42, Mathieu Desnoyers wrote:
> +CC libc-alpha.
>
> On 2026-02-20 15:26, André Almeida wrote:
>> During LPC 2025, I presented a session about creating a new syscall for
>> robust_list[0][1]. However, most of the session discussion wasn't much
>> related
>> to the new syscall itself, but much more related to an old bug that
>> exists in
>> the current robust_list mechanism.
>>
>> Since at least 2012, there's an open bug reporting a race condition, as
>> Carlos O'Donell pointed out:
>>
>> "File corruption race condition in robust mutex unlocking"
>> https://sourceware.org/bugzilla/show_bug.cgi?id=14485
>>
>> To help understand the bug, I've created a reproducer (patch 1/2) and a
>> companion kernel hack (patch 2/2) that helps to make the race condition
>> more likely. When the bug happens, the reproducer shows a message
>> comparing the original memory with the corrupted one:
>>
>> "Memory was corrupted by the kernel: 8001fe8d8001fe8d vs
>> 8001fe8dc0000000"
>>
>> I'm not sure yet what would be the appropriated approach to fix it, so I
>> decided to reach the community before moving forward in some direction.
>> One suggestion from Peter[2] resolves around serializing the mmap()
>> and the
>> robust list exit path, which might cause overheads for the common case,
>> where list_op_pending is empty.
>>
>> However, giving that there's a new interface being prepared, this could
>> also give the opportunity to rethink how list_op_pending works, and get
>> rid of the race condition by design.
>>
>> Feedback is very much welcome.
>
> Looking at this bug, one thing I'm starting to consider is that it
> appears to be an issue inherent to lack of synchronization between
> pthread_mutex_destroy(3) and the per-thread list_op_pending fields
> and not so much a kernel issue.
>
> Here is why I think the issue is purely userspace:
>
> Let's suppose we have a shared memory area across Processes 1 and
> Process 2,
> which internally have its own custom memory allocator in userspace to
> allocate/free space within that shared memory.
>
> Process 1, Thread A stumbles through the scenario highlighted by this
> bug, and
> basically gets preempted at this FIXME in libc
> __pthread_mutex_unlock_full():
>
> if (__glibc_unlikely ((atomic_exchange_release (&mutex-
> >__data.__lock, 0)
> & FUTEX_WAITERS) != 0))
> futex_wake ((unsigned int *) &mutex->__data.__lock, 1, private);
>
> /* We must clear op_pending after we release the mutex.
> FIXME However, this violates the mutex destruction requirements
> because another thread could acquire the mutex, destroy it, and
> reuse the memory for something else; then, if this thread
> crashes,
> and the memory happens to have a value equal to the TID, the
> kernel
> will believe it is still related to the mutex (which has been
> destroyed already) and will modify some other random object. */
> __asm ("" ::: "memory");
> THREAD_SETMEM (THREAD_SELF, robust_head.list_op_pending, NULL);
>
> Then Process 1, Thread B runs, grabs the lock, releases it, and based on
> program state it knows it can pthread_mutex_destroy() this lock, free its
> associated memory through the custom shared memory allocator, and allocate
> it for other purposes. Then we get to the point where Process 1 is
> killed, and where the robust futex kernel code corrupts data in shared
> memory because of the dangling list_op_pending pointer.
>
> That shared memory data is still observable by Process B, which will get a
> corrupted state.
>
> Notice how this all happens without any munmap(2)/mmap(2) in the sequence ?
> This is why I think this is purely a userspace issue rather than an issue
> we can solve by adding extra synchronization in the kernel.
>
> The one point we have in that sequence where I think we can add
> synchronization
> is pthread_mutex_destroy(3) in libc. One possible "big hammer" solution
> would be
> to make pthread_mutex_destroy iterate on all other threads list_op_pending
> and busy-wait if it finds that the mutex address is in use. It would of
> course
> only have to do that for robust futexes.
>
> If that big hammer solution is not fast enough for many-threaded use-cases,
> then we can think of other approaches such as adding a reference counter
> in the mutex structure, or introducing hazard pointers in userspace to
> reduce
> synchronization iteration from nr_threads to nr_cpus (or even down to max
> rseq mm_cid).
To make matters even worse, the pthread_mutex_destroy(3) and reallocation
could happen from Process 2 rather than Process 1. So iterating on the
threads of Process 1 is not sufficient. We'd need to synchronize
pthread_mutex_destroy on something within the mutex structure which is
observable from all processes using the lock, for instance a reference count.
Thanks,
Mathieu
>
> Thoughts ?
>
> Thanks,
>
> Mathieu
>
>>
>> Thanks!
>> André
>>
>> [0] https://lore.kernel.org/lkml/20251122-tonyk-robust_futex-
>> v6-0-05fea005a0fd@igalia.com/
>> [1] https://lpc.events/event/19/contributions/2108/
>> [2] https://lore.kernel.org/
>> lkml/20241219171344.GA26279@noisy.programming.kicks-ass.net/
>
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On 2026-02-20 17:41, Mathieu Desnoyers wrote:
> On 2026-02-20 16:42, Mathieu Desnoyers wrote:
>> +CC libc-alpha.
>>
>> On 2026-02-20 15:26, André Almeida wrote:
>>> During LPC 2025, I presented a session about creating a new syscall for
>>> robust_list[0][1]. However, most of the session discussion wasn't
>>> much related
>>> to the new syscall itself, but much more related to an old bug that
>>> exists in
>>> the current robust_list mechanism.
>>>
>>> Since at least 2012, there's an open bug reporting a race condition, as
>>> Carlos O'Donell pointed out:
>>>
>>> "File corruption race condition in robust mutex unlocking"
>>> https://sourceware.org/bugzilla/show_bug.cgi?id=14485
>>>
>>> To help understand the bug, I've created a reproducer (patch 1/2) and a
>>> companion kernel hack (patch 2/2) that helps to make the race condition
>>> more likely. When the bug happens, the reproducer shows a message
>>> comparing the original memory with the corrupted one:
>>>
>>> "Memory was corrupted by the kernel: 8001fe8d8001fe8d vs
>>> 8001fe8dc0000000"
>>>
>>> I'm not sure yet what would be the appropriated approach to fix it, so I
>>> decided to reach the community before moving forward in some direction.
>>> One suggestion from Peter[2] resolves around serializing the mmap()
>>> and the
>>> robust list exit path, which might cause overheads for the common case,
>>> where list_op_pending is empty.
>>>
>>> However, giving that there's a new interface being prepared, this could
>>> also give the opportunity to rethink how list_op_pending works, and get
>>> rid of the race condition by design.
>>>
>>> Feedback is very much welcome.
>>
>> Looking at this bug, one thing I'm starting to consider is that it
>> appears to be an issue inherent to lack of synchronization between
>> pthread_mutex_destroy(3) and the per-thread list_op_pending fields
>> and not so much a kernel issue.
>>
>> Here is why I think the issue is purely userspace:
>>
>> Let's suppose we have a shared memory area across Processes 1 and
>> Process 2,
>> which internally have its own custom memory allocator in userspace to
>> allocate/free space within that shared memory.
>>
>> Process 1, Thread A stumbles through the scenario highlighted by this
>> bug, and
>> basically gets preempted at this FIXME in libc
>> __pthread_mutex_unlock_full():
>>
>> if (__glibc_unlikely ((atomic_exchange_release (&mutex-
>> >__data.__lock, 0)
>> & FUTEX_WAITERS) != 0))
>> futex_wake ((unsigned int *) &mutex->__data.__lock, 1, private);
>>
>> /* We must clear op_pending after we release the mutex.
>> FIXME However, this violates the mutex destruction requirements
>> because another thread could acquire the mutex, destroy it, and
>> reuse the memory for something else; then, if this thread
>> crashes,
>> and the memory happens to have a value equal to the TID, the
>> kernel
>> will believe it is still related to the mutex (which has been
>> destroyed already) and will modify some other random
>> object. */
>> __asm ("" ::: "memory");
>> THREAD_SETMEM (THREAD_SELF, robust_head.list_op_pending, NULL);
>>
>> Then Process 1, Thread B runs, grabs the lock, releases it, and based on
>> program state it knows it can pthread_mutex_destroy() this lock, free its
>> associated memory through the custom shared memory allocator, and
>> allocate
>> it for other purposes. Then we get to the point where Process 1 is
>> killed, and where the robust futex kernel code corrupts data in shared
>> memory because of the dangling list_op_pending pointer.
>>
>> That shared memory data is still observable by Process B, which will
>> get a
>> corrupted state.
>>
>> Notice how this all happens without any munmap(2)/mmap(2) in the
>> sequence ?
>> This is why I think this is purely a userspace issue rather than an issue
>> we can solve by adding extra synchronization in the kernel.
>>
>> The one point we have in that sequence where I think we can add
>> synchronization
>> is pthread_mutex_destroy(3) in libc. One possible "big hammer"
>> solution would be
>> to make pthread_mutex_destroy iterate on all other threads
>> list_op_pending
>> and busy-wait if it finds that the mutex address is in use. It would
>> of course
>> only have to do that for robust futexes.
>>
>> If that big hammer solution is not fast enough for many-threaded use-
>> cases,
>> then we can think of other approaches such as adding a reference counter
>> in the mutex structure, or introducing hazard pointers in userspace to
>> reduce
>> synchronization iteration from nr_threads to nr_cpus (or even down to max
>> rseq mm_cid).
>
> To make matters even worse, the pthread_mutex_destroy(3) and reallocation
> could happen from Process 2 rather than Process 1. So iterating on a
> threads from Process 1 is not sufficient. We'd need to synchronize
> pthread_mutex_destroy on something within the mutex structure which is
> observable from all processes using the lock, for instance a reference
> count.
Trying to find a backward compatible way to solve this may be tricky.
Here is one possible approach I have in mind: Introduce a new syscall,
e.g. sys_cleanup_robust_list(void *addr)
This system call would be invoked on pthread_mutex_destroy(3) of
robust mutexes, and do the following:
- Calculate the offset of @addr within its mapping,
- Iterate on all processes which map the backing store containing
  the lock address @addr,
- Iterate on each thread sibling within each of those processes,
- If the thread has a robust list, and its list_op_pending points
  to the same offset within the backing store mapping, clear the
  list_op_pending pointer.
The overhead would be added specifically to pthread_mutex_destroy(3),
and only for robust mutexes.
Thoughts ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
Hi Mathieu,
Em 20/02/2026 20:17, Mathieu Desnoyers escreveu:
> On 2026-02-20 17:41, Mathieu Desnoyers wrote:
>> On 2026-02-20 16:42, Mathieu Desnoyers wrote:
>>> +CC libc-alpha.
>>>
>>> On 2026-02-20 15:26, André Almeida wrote:
>>>> During LPC 2025, I presented a session about creating a new syscall for
>>>> robust_list[0][1]. However, most of the session discussion wasn't
>>>> much related
>>>> to the new syscall itself, but much more related to an old bug that
>>>> exists in
>>>> the current robust_list mechanism.
>>>>
>>>> Since at least 2012, there's an open bug reporting a race condition, as
>>>> Carlos O'Donell pointed out:
>>>>
>>>> "File corruption race condition in robust mutex unlocking"
>>>> https://sourceware.org/bugzilla/show_bug.cgi?id=14485
>>>>
>>>> To help understand the bug, I've created a reproducer (patch 1/2) and a
>>>> companion kernel hack (patch 2/2) that helps to make the race condition
>>>> more likely. When the bug happens, the reproducer shows a message
>>>> comparing the original memory with the corrupted one:
>>>>
>>>> "Memory was corrupted by the kernel: 8001fe8d8001fe8d vs
>>>> 8001fe8dc0000000"
>>>>
>>>> I'm not sure yet what would be the appropriated approach to fix it,
>>>> so I
>>>> decided to reach the community before moving forward in some direction.
>>>> One suggestion from Peter[2] resolves around serializing the mmap()
>>>> and the
>>>> robust list exit path, which might cause overheads for the common case,
>>>> where list_op_pending is empty.
>>>>
>>>> However, giving that there's a new interface being prepared, this could
>>>> also give the opportunity to rethink how list_op_pending works, and get
>>>> rid of the race condition by design.
>>>>
>>>> Feedback is very much welcome.
>>>
>>> Looking at this bug, one thing I'm starting to consider is that it
>>> appears to be an issue inherent to lack of synchronization between
>>> pthread_mutex_destroy(3) and the per-thread list_op_pending fields
>>> and not so much a kernel issue.
>>>
>>> Here is why I think the issue is purely userspace:
>>>
>>> Let's suppose we have a shared memory area across Processes 1 and
>>> Process 2,
>>> which internally have its own custom memory allocator in userspace to
>>> allocate/free space within that shared memory.
>>>
>>> Process 1, Thread A stumbles through the scenario highlighted by this
>>> bug, and
>>> basically gets preempted at this FIXME in libc
>>> __pthread_mutex_unlock_full():
>>>
>>> if (__glibc_unlikely ((atomic_exchange_release (&mutex-
>>> >__data.__lock, 0)
>>> & FUTEX_WAITERS) != 0))
>>> futex_wake ((unsigned int *) &mutex->__data.__lock, 1,
>>> private);
>>>
>>> /* We must clear op_pending after we release the mutex.
>>> FIXME However, this violates the mutex destruction
>>> requirements
>>> because another thread could acquire the mutex, destroy it,
>>> and
>>> reuse the memory for something else; then, if this thread
>>> crashes,
>>> and the memory happens to have a value equal to the TID,
>>> the kernel
>>> will believe it is still related to the mutex (which has been
>>> destroyed already) and will modify some other random
>>> object. */
>>> __asm ("" ::: "memory");
>>> THREAD_SETMEM (THREAD_SELF, robust_head.list_op_pending, NULL);
>>>
>>> Then Process 1, Thread B runs, grabs the lock, releases it, and based on
>>> program state it knows it can pthread_mutex_destroy() this lock, free
>>> its
>>> associated memory through the custom shared memory allocator, and
>>> allocate
>>> it for other purposes. Then we get to the point where Process 1 is
>>> killed, and where the robust futex kernel code corrupts data in shared
>>> memory because of the dangling list_op_pending pointer.
>>>
>>> That shared memory data is still observable by Process B, which will
>>> get a
>>> corrupted state.
>>>
>>> Notice how this all happens without any munmap(2)/mmap(2) in the
>>> sequence ?
>>> This is why I think this is purely a userspace issue rather than an
>>> issue
>>> we can solve by adding extra synchronization in the kernel.
>>>
>>> The one point we have in that sequence where I think we can add
>>> synchronization
>>> is pthread_mutex_destroy(3) in libc. One possible "big hammer"
>>> solution would be
>>> to make pthread_mutex_destroy iterate on all other threads
>>> list_op_pending
>>> and busy-wait if it finds that the mutex address is in use. It would
>>> of course
>>> only have to do that for robust futexes.
>>>
>>> If that big hammer solution is not fast enough for many-threaded use-
>>> cases,
>>> then we can think of other approaches such as adding a reference counter
>>> in the mutex structure, or introducing hazard pointers in userspace
>>> to reduce
>>> synchronization iteration from nr_threads to nr_cpus (or even down to
>>> max
>>> rseq mm_cid).
>>
>> To make matters even worse, the pthread_mutex_destroy(3) and reallocation
>> could happen from Process 2 rather than Process 1. So iterating on a
>> threads from Process 1 is not sufficient. We'd need to synchronize
>> pthread_mutex_destroy on something within the mutex structure which is
>> observable from all processes using the lock, for instance a reference
>> count.
> Trying to find a backward compatible way to solve this may be tricky.
> Here is one possible approach I have in mind: Introduce a new syscall,
> e.g. sys_cleanup_robust_list(void *addr)
>
> This system call would be invoked on pthread_mutex_destroy(3) of
> robust mutexes, and do the following:
>
> - Calculate the offset of @addr within its mapping,
> - Iterate on all processes which map the backing store which contain
> the lock address @addr.
> - Iterate on each thread sibling within each of those processes,
> - If the thread has a robust list, and its list_op_pending points
> to the same offset within the backing store mapping, clear the
> list_op_pending pointer.
>
> The overhead would be added specifically to pthread_mutex_destroy(3),
> and only for robust mutexes.
>
> Thoughts ?
>
Right, your explanation makes sense to me. I think the only difference
between alloc/free and map/munmap is that "freeing memory does not
actually return it to the operating system for other applications to
use"[1], so I don't know if this custom allocator is violating some
memory rules.

About the system call, we would call sys_cleanup_robust_list() before
freeing/unmapping the robust mutex. To guarantee that we check every
process that shares the memory region, would we need to check *every*
single process? I don't think there's a way to find such maps without
checking them all.
I'm trying to explore the idea of the reference counter. Would the
munmap() be blocked until the refcount goes to zero, or something like
that? I've also tried to find more examples of a memory region that's
shared between one or more processes and the kernel at the same time to
get some inspiration, but it seems robust_list is a quite unique design
regarding this memory sharing problem.
[1] https://sourceware.org/glibc/wiki/MallocInternals
On 2026-02-27 14:16, André Almeida wrote:
[...]
>> Trying to find a backward compatible way to solve this may be tricky.
>> Here is one possible approach I have in mind: Introduce a new syscall,
>> e.g. sys_cleanup_robust_list(void *addr)
>>
>> This system call would be invoked on pthread_mutex_destroy(3) of
>> robust mutexes, and do the following:
>>
>> - Calculate the offset of @addr within its mapping,
>> - Iterate on all processes which map the backing store which contain
>>   the lock address @addr.
>> - Iterate on each thread sibling within each of those processes,
>> - If the thread has a robust list, and its list_op_pending points
>>   to the same offset within the backing store mapping, clear the
>>   list_op_pending pointer.
>>
>> The overhead would be added specifically to pthread_mutex_destroy(3),
>> and only for robust mutexes.
>>
>> Thoughts ?
>>
[...]
>
> About the system call, we would call sys_cleanup_robust_list() before
> freeing/unmapping the robust mutex. To guarantee that we check every
> process that shares the memory region, would we need to check *every*
> single process? I don't think there's a way find a way to find such maps
> without checking them all.

We should be able to do it with just an iteration on the struct
address_space reverse mapping (list of vmas which map the shared mapping).

AFAIU we'd want to get the struct address_space associated with the
__user pointer, then, while holding i_mmap_lock_read(mapping), iterate
on its reverse mapping (i_mmap field) with vma_interval_tree_foreach. We
can get each mm_struct through vma->vm_mm.

We'd want to do most of this in a kthread and use other mm_structs
through use_mm().

For each mm_struct, we go through the owner field to get the thread
group leader, and iterate on all thread siblings (for_each_thread).

For each of those threads, we'd want to clear the list_op_pending
if it matches the offset of @addr within the mapping.

I suspect we'd want to clear that userspace pointer with a
futex_atomic_cmpxchg_inatomic which only clears the pointer if the old
value matches the one we expect.

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
Hi André,
So it looks like I got a simpler idea on how to solve this at some
point between going to bed and waking up.
Let's extend the rseq system call. Here is how:
diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
index 863c4a00a66b..0592be0c3b32 100644
--- a/include/uapi/linux/rseq.h
+++ b/include/uapi/linux/rseq.h
@@ -86,6 +86,59 @@ struct rseq_slice_ctrl {
};
};
+/**
+ * rseq_rl_cs - Robust list unlock transaction descriptor
+ *
+ * rseq_rl_cs describes a transaction which begins with a successful
+ * robust mutex unlock followed by clearing a robust list pending ops.
+ *
+ * Userspace prepares for a robust_list unlock transaction by storing
+ * the address of a struct rseq_rl_cs descriptor into its per-thread
+ * rseq area rseq_rl_cs field. After the transaction is over, userspace
+ * clears the rseq_rl_cs pointer.
+ *
+ * A thread is considered to be within a rseq_rl_cs transaction if
+ * either of those conditions are true:
+ *
+ * - ip >= post_cond_store_ip && ip < post_success_ip && ll_sc_success(pt_regs)
+ * - ip >= post_success_ip && ip < post_clear_op_pending_ip
+ *
+ * If the kernel terminates a process within an active robust list
+ * unlock transaction, it should consider the robust list op pending
+ * as empty even if it contains an op pending address.
+ */
+struct rseq_rl_cs {
+	/* Version of this structure. */
+	__u32 version;
+	/* Reserved flags. */
+	__u32 flags;
+	/*
+	 * Address immediately after the store which unlocks the robust
+	 * mutex. This store is usually implemented with an atomic
+	 * exchange, or linked-load/store-conditional. In case it is
+	 * implemented with ll/sc, the kernel needs to check whether the
+	 * conditional store has succeeded with the appropriate registers
+	 * or flags, as defined by the architecture ABI.
+	 */
+	__u64 post_cond_store_ip;
+	/*
+	 * For architectures implementing atomic exchange as ll/sc,
+	 * a conditional branch is needed to handle failure.
+	 * The unlock success IP is the address immediately after
+	 * the conditional branch instruction after which the kernel
+	 * can assume that the ll/sc has succeeded without checking
+	 * registers or flags. For architectures where the mutex
+	 * unlock store instruction cannot fail, this address is equal
+	 * to post_cond_store_ip.
+	 */
+	__u64 post_success_ip;
+	/*
+	 * Address after the instruction which clears the op pending
+	 * list. This store is the last instruction of this sequence.
+	 */
+	__u64 post_clear_op_pending_ip;
+} __attribute__((aligned(4 * sizeof(__u64))));
+
/*
* struct rseq is aligned on 4 * 8 bytes to ensure it is always
* contained within a single cache-line.
@@ -180,6 +233,28 @@ struct rseq {
*/
struct rseq_slice_ctrl slice_ctrl;
+	/*
+	 * Restartable sequences rseq_rl_cs field.
+	 *
+	 * Contains NULL when no robust list unlock transaction is
+	 * active for the current thread, or holds a pointer to the
+	 * currently active struct rseq_rl_cs.
+	 *
+	 * Updated by user-space, which sets the address of the currently
+	 * active rseq_rl_cs at some point before the beginning of the
+	 * transaction, and set to NULL by user-space at some point
+	 * after the transaction has completed.
+	 *
+	 * Read by the kernel. Set by user-space with single-copy
+	 * atomicity semantics. This field should only be updated by the
+	 * thread which registered this data structure. Aligned on
+	 * 64-bit.
+	 *
+	 * 32-bit architectures should update the low order bits of the
+	 * rseq_rl_cs field, leaving the high order bits initialized to 0.
+	 */
+	__u64 rseq_rl_cs;
+
+
/*
* Flexible array member at end of structure, after last feature field.
*/
Of course, we'd have to implement the whole transaction in assembler for each
architecture.
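The kernel-side membership test spelled out in the descriptor's comment boils down to two range checks on the stopped thread's instruction pointer. A standalone model (taking the ll/sc success predicate as a boolean argument, since checking it is architecture-specific):

```c
#include <stdbool.h>
#include <stdint.h>

/* Userspace model of the proposed descriptor's three addresses. */
struct rseq_rl_cs_model {
	uint64_t post_cond_store_ip;
	uint64_t post_success_ip;
	uint64_t post_clear_op_pending_ip;
};

/* True if a thread stopped at @ip is inside the unlock transaction,
   per the two conditions in the rseq_rl_cs comment:
   - between the conditional store and the success point, the store
     must actually have succeeded (@ll_sc_success);
   - between the success point and the op_pending clear, the thread
     is unconditionally inside the transaction. */
static bool in_rl_transaction(const struct rseq_rl_cs_model *cs,
			      uint64_t ip, bool ll_sc_success)
{
	if (ip >= cs->post_cond_store_ip && ip < cs->post_success_ip)
		return ll_sc_success;
	return ip >= cs->post_success_ip &&
	       ip < cs->post_clear_op_pending_ip;
}
```

If this returns true when the process is terminated, the kernel would treat the robust list op pending as empty, as described above.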
Feedback is welcome!
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Fri, Feb 27, 2026 at 8:00 PM Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote: > > On 2026-02-27 14:16, André Almeida wrote: > [...] > >> Trying to find a backward compatible way to solve this may be tricky. > >> Here is one possible approach I have in mind: Introduce a new syscall, > >> e.g. sys_cleanup_robust_list(void *addr) > >> > >> This system call would be invoked on pthread_mutex_destroy(3) of > >> robust mutexes, and do the following: > >> > >> - Calculate the offset of @addr within its mapping, > >> - Iterate on all processes which map the backing store which contain > >> the lock address @addr. > >> - Iterate on each thread sibling within each of those processes, > >> - If the thread has a robust list, and its list_op_pending points > >> to the same offset within the backing store mapping, clear the > >> list_op_pending pointer. > >> > >> The overhead would be added specifically to pthread_mutex_destroy(3), > >> and only for robust mutexes. > >> > >> Thoughts ? > >> > [...] > > > > About the system call, we would call sys_cleanup_robust_list() before > > freeing/unmapping the robust mutex. To guarantee that we check every > > process that shares the memory region, would we need to check *every* > > single process? I don't think there's a way find a way to find such maps > > without checking them all. > > We should be able to do it with just an iteration on the struct address_space > reverse mapping (list of vma which map the shared mapping). > > AFAIU we'd want to get the struct address_space associated with the > __user pointer, then, while holding i_mmap_lock_read(mapping), iterate > on its reverse mapping (i_mmap field) with vma_interval_tree_foreach. We > can get each mm_struct through vma->vm_mm. > > We'd want to do most of this in a kthread and use other mm_struct through > use_mm(). > > For each mm_struct, we go through the owner field to get the thread > group leader, and iterate on all thread siblings (for_each_thread). 
>
> For each of those threads, we'd want to clear the list_op_pending
> if it matches the offset of @addr within the mapping. I suspect we'd
> want to clear that userspace pointer with a futex_atomic_cmpxchg_inatomic
> which only clears the pointer if the old value matches the one we expect.

I've been looking into this problem this week and IIUC Nico Pache pursued
this direction at some point (see [1]). I'm CC'ing him to share his
experience.

FYI, the link also contains an interesting discussion between Thomas and
Michal about the difficulty of identifying all the VMAs possibly involved
in the lock chain, and some technical challenges.

[1] https://lore.kernel.org/all/bd61369c-ef50-2eb4-2cca-91422fbfa328@redhat.com/

Thanks,
Suren.

>
> Thanks,
>
> Mathieu
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
>
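For concreteness, the walk Mathieu describes would look roughly like the
sketch below. This is an untested, hypothetical outline against current
kernel internals, not a working implementation: the function name, the
locking, the kthread/use_mm() offload, and the CONFIG_MEMCG dependency of
mm->owner are all glossed over or assumed.

```c
/* Hypothetical sketch for a sys_cleanup_robust_list(addr) backend:
 * for every mm that maps the same backing store, clear any
 * list_op_pending that refers to the same file offset as @addr. */
static void cleanup_robust_list_sketch(struct address_space *mapping,
				       pgoff_t pgoff)
{
	struct vm_area_struct *vma;

	i_mmap_lock_read(mapping);
	/* Reverse mapping: all VMAs of the shared mapping covering pgoff. */
	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
		struct mm_struct *mm = vma->vm_mm;
		/* Thread group leader via mm->owner (CONFIG_MEMCG only). */
		struct task_struct *leader = READ_ONCE(mm->owner);
		struct task_struct *t;

		if (!leader)
			continue;
		for_each_thread(leader, t) {
			/*
			 * Read t's registered robust_list head, fetch its
			 * list_op_pending user pointer, and compare it with
			 * the address @addr maps to inside this mm.  If it
			 * matches, clear it with a compare-and-exchange
			 * (futex_atomic_cmpxchg_inatomic) so we only
			 * overwrite the exact value we expect.
			 */
		}
	}
	i_mmap_unlock_read(mapping);
}
```

The cmpxchg on the final store is what makes the clearing safe against the
exiting thread concurrently updating its own list_op_pending.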
* Mathieu Desnoyers:

> Trying to find a backward compatible way to solve this may be tricky.
> Here is one possible approach I have in mind: Introduce a new syscall,
> e.g. sys_cleanup_robust_list(void *addr)
>
> This system call would be invoked on pthread_mutex_destroy(3) of
> robust mutexes, and do the following:
>
> - Calculate the offset of @addr within its mapping,
> - Iterate on all processes which map the backing store which contain
>   the lock address @addr.
> - Iterate on each thread sibling within each of those processes,
> - If the thread has a robust list, and its list_op_pending points
>   to the same offset within the backing store mapping, clear the
>   list_op_pending pointer.
>
> The overhead would be added specifically to pthread_mutex_destroy(3),
> and only for robust mutexes.

Would we have to do this for pthread_mutex_destroy only, or also for
pthread_join? It is defined to exit a thread with mutexes still locked,
and the pthread_join call could mean that the application can determine
by its own logic that the backing store can be deallocated.

Thanks,
Florian
On 2026-02-23 06:13, Florian Weimer wrote:
> * Mathieu Desnoyers:
>
>> Trying to find a backward compatible way to solve this may be tricky.
>> Here is one possible approach I have in mind: Introduce a new syscall,
>> e.g. sys_cleanup_robust_list(void *addr)
>>
>> This system call would be invoked on pthread_mutex_destroy(3) of
>> robust mutexes, and do the following:
>>
>> - Calculate the offset of @addr within its mapping,
>> - Iterate on all processes which map the backing store which contain
>> the lock address @addr.
>> - Iterate on each thread sibling within each of those processes,
>> - If the thread has a robust list, and its list_op_pending points
>> to the same offset within the backing store mapping, clear the
>> list_op_pending pointer.
>>
>> The overhead would be added specifically to pthread_mutex_destroy(3),
>> and only for robust mutexes.
>
> Would we have to do this for pthread_mutex_destroy only, or also for
> pthread_join? It is defined to exit a thread with mutexes still locked,
> and the pthread_join call could mean that the application can determine
> by its own logic that the backing store can be deallocated.
Let me try to wrap my head around this scenario.
AFAIU, the https://man7.org/linux/man-pages/man3/pthread_join.3.html
NOTES section states the following for pthread_join(3):
    After a successful call to pthread_join(), the caller is
    guaranteed that the target thread has terminated. The caller may
    then choose to do any clean-up that is required after termination
    of the thread (e.g., freeing memory or other resources that were
    allocated to the target thread).
What is the behavior when a thread exits with a mutex locked ? I would
expect that this mutex stays locked and the pthread_join(3) caller gets
to release that mutex and eventually calls pthread_mutex_destroy(3) if
the application logic allows it.
But it looks like you are implying that the pthread_mutex_destroy(3) is
somehow implicit to pthread_join, and I really don't understand that
part. Am I missing something ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Mon, Feb 23, 2026 at 08:37:13AM -0500, Mathieu Desnoyers wrote:
> On 2026-02-23 06:13, Florian Weimer wrote:
> > * Mathieu Desnoyers:
> >
> > > Trying to find a backward compatible way to solve this may be tricky.
> > > Here is one possible approach I have in mind: Introduce a new syscall,
> > > e.g. sys_cleanup_robust_list(void *addr)
> > >
> > > This system call would be invoked on pthread_mutex_destroy(3) of
> > > robust mutexes, and do the following:
> > >
> > > - Calculate the offset of @addr within its mapping,
> > > - Iterate on all processes which map the backing store which contain
> > >   the lock address @addr.
> > > - Iterate on each thread sibling within each of those processes,
> > > - If the thread has a robust list, and its list_op_pending points
> > >   to the same offset within the backing store mapping, clear the
> > >   list_op_pending pointer.
> > >
> > > The overhead would be added specifically to pthread_mutex_destroy(3),
> > > and only for robust mutexes.
> >
> > Would we have to do this for pthread_mutex_destroy only, or also for
> > pthread_join? It is defined to exit a thread with mutexes still locked,
> > and the pthread_join call could mean that the application can determine
> > by its own logic that the backing store can be deallocated.
>
> Let me try to wrap my head around this scenario.
>
> AFAIU, the https://man7.org/linux/man-pages/man3/pthread_join.3.html
> NOTES section states the following for pthread_join(3):
>
>     After a successful call to pthread_join(), the caller is
>     guaranteed that the target thread has terminated. The caller may
>     then choose to do any clean-up that is required after termination
>     of the thread (e.g., freeing memory or other resources that were
>     allocated to the target thread).
>
> What is the behavior when a thread exits with a mutex locked ? I would
> expect that this mutex stays locked

For a robust mutex, if the owning thread exits, the mutex enters
EOWNERDEAD state. Otherwise, per POSIX, the mutex just remains
permanently locked and undestroyable. glibc does not actually implement
this for recursive or error-checking mutexes, as the tid might get
reused, and then the new thread that got the same tid will behave as if
it were the owner (e.g. it's allowed to take further recursive locks or
observe itself as the owner via EDEADLK). In musl we implement this by
putting all recursive and error-checking mutexes on a robust list, so
that an unmatchable tid can be reassigned to them at pthread_exit time.

> and the pthread_join(3) caller gets
> to release that mutex and eventually calls pthread_mutex_destroy(3) if
> the application logic allows it.

No other thread can release a mutex that was left locked unless it is
robust and it goes via the EOWNERDEAD/recovery process. Nor can you
legally call pthread_mutex_destroy on a mutex that's still owned.

Rich
+Cc Suren, Lorenzo, and Michal
* André Almeida <andrealmeid@igalia.com> [260220 15:27]:
> During LPC 2025, I presented a session about creating a new syscall for
> robust_list[0][1]. However, most of the session discussion wasn't much related
> to the new syscall itself, but much more related to an old bug that exists in
> the current robust_list mechanism.
Ah, sorry for hijacking the session, that was not my intention, but this
needs to be addressed before we propagate the issue into the next
iteration.
>
> Since at least 2012, there's an open bug reporting a race condition, as
> Carlos O'Donell pointed out:
>
> "File corruption race condition in robust mutex unlocking"
> https://sourceware.org/bugzilla/show_bug.cgi?id=14485
>
> To help understand the bug, I've created a reproducer (patch 1/2) and a
> companion kernel hack (patch 2/2) that helps to make the race condition
> more likely. When the bug happens, the reproducer shows a message
> comparing the original memory with the corrupted one:
>
> "Memory was corrupted by the kernel: 8001fe8d8001fe8d vs 8001fe8dc0000000"
>
> I'm not sure yet what would be the appropriated approach to fix it, so I
> decided to reach the community before moving forward in some direction.
> One suggestion from Peter[2] resolves around serializing the mmap() and the
> robust list exit path, which might cause overheads for the common case,
> where list_op_pending is empty.
>
> However, giving that there's a new interface being prepared, this could
> also give the opportunity to rethink how list_op_pending works, and get
> rid of the race condition by design.
>
> Feedback is very much welcome.
There was a delay added to the oom reaper for these tasks [1] by commit
e4a38402c36e ("oom_kill.c: futex: delay the OOM reaper to allow time for
proper futex cleanup")
We did discuss marking the vmas as needing to be skipped by the oom
reaper, but no clear path forward emerged. It's also not clear if
that's the only area where such a problem exists.
[1]. https://lore.kernel.org/all/20220414144042.677008-1-npache@redhat.com/T/#u
>
> Thanks!
> André
>
> [0] https://lore.kernel.org/lkml/20251122-tonyk-robust_futex-v6-0-05fea005a0fd@igalia.com/
> [1] https://lpc.events/event/19/contributions/2108/
> [2] https://lore.kernel.org/lkml/20241219171344.GA26279@noisy.programming.kicks-ass.net/
>
> André Almeida (2):
> futex: Create reproducer for robust_list race condition
> futex: Add debug delays
>
> kernel/futex/core.c | 10 +++
> robust_bug.c | 178 ++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 188 insertions(+)
> create mode 100644 robust_bug.c
>
> --
> 2.53.0
>
Hi Liam,
Em 20/02/2026 17:51, Liam R. Howlett escreveu:
> +Cc Suren, Lorenzo, and Michal
>
> * André Almeida <andrealmeid@igalia.com> [260220 15:27]:
>> During LPC 2025, I presented a session about creating a new syscall for
>> robust_list[0][1]. However, most of the session discussion wasn't much related
>> to the new syscall itself, but much more related to an old bug that exists in
>> the current robust_list mechanism.
>
> Ah, sorry for hijacking the session, that was not my intention, but this
> needs to be addressed before we propagate the issue into the next
> iteration.
>
No problem! I believe that this reflects the fact that the race
condition is the main concern about this new interface, and that we
should focus our discussion around this.
>>
>> Since at least 2012, there's an open bug reporting a race condition, as
>> Carlos O'Donell pointed out:
>>
>> "File corruption race condition in robust mutex unlocking"
>> https://sourceware.org/bugzilla/show_bug.cgi?id=14485
>>
[...]
>
> There was a delay added to the oom reaper for these tasks [1] by commit
> e4a38402c36e ("oom_kill.c: futex: delay the OOM reaper to allow time for
> proper futex cleanup")
>
> We did discuss marking the vmas as needing to be skipped by the oom
> manager, but no clear path forward was clear. It's also not clear if
> that's the only area where such a problem exists.
>
> [1]. https://lore.kernel.org/all/20220414144042.677008-1-npache@redhat.com/T/#u
>
So how would you detect which vmas should be skipped? And this wouldn't
fix the issue when the memory is unmapped, right? It only helps for the
OOM case.