Since clearing a bit in thread_info is an atomic operation, the spinlock
is redundant and can be removed; reducing lock contention is good for
performance.
Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Liao Chang <liaochang1@huawei.com>
---
kernel/events/uprobes.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 73cc47708679..76a51a1f51e2 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1979,9 +1979,7 @@ bool uprobe_deny_signal(void)
 	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
 
 	if (task_sigpending(t)) {
-		spin_lock_irq(&t->sighand->siglock);
 		clear_tsk_thread_flag(t, TIF_SIGPENDING);
-		spin_unlock_irq(&t->sighand->siglock);
 
 		if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
 			utask->state = UTASK_SSTEP_TRAPPED;
--
2.34.1
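
For readers less familiar with the helpers involved: clear_tsk_thread_flag()
boils down to an atomic bit operation on the task's thread_info flags, so the
bit update itself never needed the lock; what the siglock provided is the
stability rule discussed in the follow-up below. As a rough userspace sketch
of the same idea, using C11 atomics rather than the kernel's bitops (the flags
word, bit number and helper names here are invented for illustration, not
kernel API):

    #include <stdatomic.h>
    #include <stdio.h>

    #define DEMO_SIGPENDING_BIT 2              /* arbitrary bit index, demo only */

    static _Atomic unsigned long demo_flags;   /* stands in for thread_info->flags */

    /* Atomically clear one flag bit; the update itself needs no lock. */
    static void demo_clear_flag(int bit)
    {
            atomic_fetch_and_explicit(&demo_flags, ~(1UL << bit),
                                      memory_order_relaxed);
    }

    int main(void)
    {
            atomic_fetch_or_explicit(&demo_flags, 1UL << DEMO_SIGPENDING_BIT,
                                     memory_order_relaxed);
            demo_clear_flag(DEMO_SIGPENDING_BIT);
            printf("bit still set? %s\n",
                   (atomic_load(&demo_flags) >> DEMO_SIGPENDING_BIT) & 1 ? "yes" : "no");
            return 0;
    }

The point of the sketch is only that an atomic read-modify-write on a flags
word is self-consistent without any external lock; whether dropping the lock
is semantically safe is the separate question Oleg addresses next.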
On 08/09, Liao Chang wrote:
>
> Since clearing a bit in thread_info is an atomic operation, the spinlock
> is redundant and can be removed; reducing lock contention is good for
> performance.
My ack still stays, but let me add some notes...
sighand->siglock doesn't protect clear_bit() per se. It was used to not
break the "the state of TIF_SIGPENDING of every thread is stable with
sighand->siglock held" rule.
But we already have the lockless users of clear_thread_flag(TIF_SIGPENDING)
(some if not most of them look buggy), and afaics in this (very special)
case it should be fine.
Oleg.
> Acked-by: Oleg Nesterov <oleg@redhat.com>
> Signed-off-by: Liao Chang <liaochang1@huawei.com>
> ---
> kernel/events/uprobes.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 73cc47708679..76a51a1f51e2 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -1979,9 +1979,7 @@ bool uprobe_deny_signal(void)
> WARN_ON_ONCE(utask->state != UTASK_SSTEP);
>
> if (task_sigpending(t)) {
> - spin_lock_irq(&t->sighand->siglock);
> clear_tsk_thread_flag(t, TIF_SIGPENDING);
> - spin_unlock_irq(&t->sighand->siglock);
>
> if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
> utask->state = UTASK_SSTEP_TRAPPED;
> --
> 2.34.1
>
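
To make the rule Oleg refers to more concrete: code that takes
sighand->siglock has traditionally been able to assume that TIF_SIGPENDING of
every thread sharing that sighand stays put while the lock is held, and a
lockless clear gives up that guarantee. A small pthread sketch of the same
pattern in userspace (observer/clearer, demo_siglock and demo_flag are
invented names; this is an analogy, not kernel code, and the outcome is
timing-dependent):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t demo_siglock = PTHREAD_MUTEX_INITIALIZER;
    static _Atomic int demo_flag = 1;   /* stands in for one thread's TIF_SIGPENDING */

    /* Takes the "siglock" and expects demo_flag to stay stable while it is held. */
    static void *observer(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&demo_siglock);
            int before = atomic_load(&demo_flag);
            usleep(100 * 1000);          /* window the rule says is race-free */
            int after = atomic_load(&demo_flag);
            pthread_mutex_unlock(&demo_siglock);
            printf("stable under lock? %s\n", before == after ? "yes" : "no");
            return NULL;
    }

    /* Clears the flag atomically but without the lock, i.e. breaks the rule. */
    static void *lockless_clearer(void *arg)
    {
            (void)arg;
            usleep(50 * 1000);
            atomic_store(&demo_flag, 0);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;
            pthread_create(&a, NULL, observer, NULL);
            pthread_create(&b, NULL, lockless_clearer, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
    }

Built with cc -pthread, the observer will usually report "no": the flag
changed while the lock was held. That is the invariant the original
spin_lock_irq()/spin_unlock_irq() pair preserved, and what the patch knowingly
relaxes for this single-step path where, as Oleg notes, it happens to be fine.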
On 2024/8/12 20:07, Oleg Nesterov wrote:
> On 08/09, Liao Chang wrote:
>>
>> Since clearing a bit in thread_info is an atomic operation, the spinlock
>> is redundant and can be removed; reducing lock contention is good for
>> performance.
>
> My ack still stays, but let me add some notes...
>
> sighand->siglock doesn't protect clear_bit() per se. It was used to not
> break the "the state of TIF_SIGPENDING of every thread is stable with
> sighand->siglock held" rule.
>
> But we already have the lockless users of clear_thread_flag(TIF_SIGPENDING)
> (some if not most of them look buggy), and afaics in this (very special)
> case it should be fine.
Oleg, your explanation is more accurate. So I will reword the commit log and
quote some of your note like this:
Since we already have the lockless user of clear_thread_flag(TIF_SIGPENDING).
And for uprobe singlestep case, it doesn't break the rule of "the state of
TIF_SIGPENDING of every thread is stable with sighand->siglock held". So
removing sighand->siglock to reduce contention for better performance.
>
> Oleg.
>
>> Acked-by: Oleg Nesterov <oleg@redhat.com>
>> Signed-off-by: Liao Chang <liaochang1@huawei.com>
>> ---
>> kernel/events/uprobes.c | 2 --
>> 1 file changed, 2 deletions(-)
>>
>> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
>> index 73cc47708679..76a51a1f51e2 100644
>> --- a/kernel/events/uprobes.c
>> +++ b/kernel/events/uprobes.c
>> @@ -1979,9 +1979,7 @@ bool uprobe_deny_signal(void)
>> WARN_ON_ONCE(utask->state != UTASK_SSTEP);
>>
>> if (task_sigpending(t)) {
>> - spin_lock_irq(&t->sighand->siglock);
>> clear_tsk_thread_flag(t, TIF_SIGPENDING);
>> - spin_unlock_irq(&t->sighand->siglock);
>>
>> if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
>> utask->state = UTASK_SSTEP_TRAPPED;
>> --
>> 2.34.1
>>
>
>
--
BR
Liao, Chang
On 08/13, Liao, Chang wrote:
>
> Oleg, your explanation is more accurate. So I will reword the commit log and
> quote some of your note like this:

Oh, please don't. I just tried to explain the history of this spin_lock(siglock).

> Since we already have the lockless user of clear_thread_flag(TIF_SIGPENDING).
> And for uprobe singlestep case, it doesn't break the rule of "the state of
> TIF_SIGPENDING of every thread is stable with sighand->siglock held".

It obviously does break the rule above. Please keep your changelog as is.

Oleg.
On Tue, Aug 13, 2024 at 5:47 AM Oleg Nesterov <oleg@redhat.com> wrote:
>
> On 08/13, Liao, Chang wrote:
> >
> > Oleg, your explanation is more accurate. So I will reword the commit log and
> > quote some of your note like this:
>
> Oh, please don't. I just tried to explain the history of this spin_lock(siglock).
>
> > Since we already have the lockless user of clear_thread_flag(TIF_SIGPENDING).
> > And for uprobe singlestep case, it doesn't break the rule of "the state of
> > TIF_SIGPENDING of every thread is stable with sighand->siglock held".
>
> It obviously does break the rule above. Please keep your changelog as is.
>
> Oleg.
>

Liao,

Can you please rebase and resend your patches now that the first part of
my uprobe patches landed in perf/core? Seems like there is some tiny merge
conflict or something.

Thanks!