Since clearing a bit in thread_info is an atomic operation, the spinlock
is redundant and can be removed; reducing lock contention is good for
performance.
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Liao Chang <liaochang1@huawei.com>
---
kernel/events/uprobes.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index e421a5f2ec7d..7a3348dfedeb 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2298,9 +2298,7 @@ bool uprobe_deny_signal(void)
 	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
 
 	if (task_sigpending(t)) {
-		spin_lock_irq(&t->sighand->siglock);
 		clear_tsk_thread_flag(t, TIF_SIGPENDING);
-		spin_unlock_irq(&t->sighand->siglock);
 
 		if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
 			utask->state = UTASK_SSTEP_TRAPPED;
--
2.34.1
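For readers outside the kernel: the changelog's premise is that
clear_tsk_thread_flag() reduces to an atomic clear_bit() on
task_thread_info(t)->flags, a single lock-free read-modify-write. A
minimal userspace sketch of that idea, using C11 atomics in place of the
kernel's bitops (the names here are illustrative, not kernel API):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define TIF_SIGPENDING 0	/* illustrative bit index */

/* Stand-in for thread_info.flags; the kernel operates on this word
 * with clear_bit()/test_bit(), which are atomic bitops. */
static _Atomic unsigned long ti_flags;

static void clear_flag(int bit)
{
	/* One atomic read-modify-write: the clear itself needs no lock. */
	atomic_fetch_and(&ti_flags, ~(1UL << bit));
}

static bool test_flag(int bit)
{
	return atomic_load(&ti_flags) & (1UL << bit);
}

int main(void)
{
	atomic_fetch_or(&ti_flags, 1UL << TIF_SIGPENDING);	/* set the bit */
	clear_flag(TIF_SIGPENDING);				/* atomic clear */
	printf("pending after clear: %d\n", test_flag(TIF_SIGPENDING));
	return 0;
}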
On Fri, 24 Jan 2025 09:38:25 +0000
Liao Chang <liaochang1@huawei.com> wrote:

> Since clearing a bit in thread_info is an atomic operation, the spinlock
> is redundant and can be removed; reducing lock contention is good for
> performance.

Although this patch is probably fine, the change log suggests a dangerous
precedent. Just because clearing a flag is atomic, that alone does not
guarantee that it doesn't need spin locks around it.

There may be another path that tests the flag within a spin lock, and then
does a bunch of work assuming that the flag does not change while it is
doing that work. That other path would require a spin lock around the
clearing of the flag elsewhere.

I don't know this code well enough to know if this has that scenario, and
seeing the Acked-by from Oleg, I'm assuming it does not. But in any case,
the change log needs to give a better rationale for removing a spin lock
than just "clearing a flag atomically doesn't need a spin lock"!

-- Steve

> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Acked-by: Oleg Nesterov <oleg@redhat.com>
> Signed-off-by: Liao Chang <liaochang1@huawei.com>
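The hazard described above can be made concrete. In the hypothetical
sketch below (not the actual signal code; every name is made up), path A
tests the flag under a lock and assumes it stays set for the whole
critical section, while path B clears it atomically but locklessly. Each
operation is fine in isolation; together they break path A's invariant,
which is why atomicity alone never justifies dropping a lock:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool flag;

/* Path A: tests the flag under the lock, then does work that is only
 * correct if the flag cannot change until the unlock. */
static void path_a(void)
{
	pthread_mutex_lock(&lock);
	if (atomic_load(&flag)) {
		/* ... work assuming the flag stays set ... */
	}
	pthread_mutex_unlock(&lock);
}

/* Path B, lockless: the store is atomic, yet it can land in the middle
 * of path A's critical section and invalidate its assumption. */
static void path_b_lockless(void)
{
	atomic_store(&flag, false);
}

/* Path B, safe variant: taking the lock restores mutual exclusion,
 * which is what the removed siglock used to provide "just in case". */
static void path_b_locked(void)
{
	pthread_mutex_lock(&lock);
	atomic_store(&flag, false);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	atomic_store(&flag, true);
	path_a();
	path_b_lockless();
	path_b_locked();
	return 0;
}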
On 01/24, Steven Rostedt wrote:
>
> On Fri, 24 Jan 2025 09:38:25 +0000
> Liao Chang <liaochang1@huawei.com> wrote:
>
> > Since clearing a bit in thread_info is an atomic operation, the spinlock
> > is redundant and can be removed; reducing lock contention is good for
> > performance.
>
> Although this patch is probably fine, the change log suggests a dangerous
> precedent. Just because clearing a flag is atomic, that alone does not
> guarantee that it doesn't need spin locks around it.

Yes. And iirc we already have the lockless users of clear(TIF_SIGPENDING)
(some if not most of them look buggy). But afaics in this (very special)
case it should be fine. See also
https://lore.kernel.org/all/20240812120738.GC11656@redhat.com/

> There may be another path that tests the flag within a spin lock,

Yes, retarget_shared_pending() or the complete_signal/wants_signal loop.
That is why it was decided to take siglock in uprobe_deny_signal(), just
to be "safe".

But I still think this patch is fine. The current task is going to execute
a single insn which can't enter the kernel and/or return to the userspace
before it calls handle_singlestep() and restores TIF_SIGPENDING.

We do not care if it races with another source of TIF_SIGPENDING. The only
problem is that task_sigpending() from another task can "wrongly" return
false in this window, but I don't see any problem.

Oleg.
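Restating that argument in runnable form: TIF_SIGPENDING is a per-task
hint which the kernel can re-derive from the authoritative pending state,
and handle_singlestep() restores it (upstream recomputes it with
recalc_sigpending() under siglock) as soon as the single-stepped
instruction completes. The userspace model below captures that "hide the
hint, then recompute it under the lock" shape; all names and locking are
purely illustrative:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static pthread_mutex_t siglock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long pending_set;	/* authoritative; guarded by siglock */
static atomic_bool tif_sigpending;	/* per-task cached hint */

/* Model of uprobe_deny_signal(): hide the hint, locklessly. Tolerable
 * only because the task executes exactly one instruction (no syscall,
 * no return to userspace) before the restore below runs. */
static void deny_signal(void)
{
	atomic_store(&tif_sigpending, false);
}

/* Model of the restore in handle_singlestep(): recompute the hint from
 * the authoritative state under the lock. A signal queued during the
 * window is recovered here rather than lost; the worst case is that
 * another task briefly saw the hint as false. */
static void restore_after_singlestep(void)
{
	pthread_mutex_lock(&siglock);
	atomic_store(&tif_sigpending, pending_set != 0);
	pthread_mutex_unlock(&siglock);
}

int main(void)
{
	pending_set = 1;			/* a signal is pending */
	atomic_store(&tif_sigpending, true);
	deny_signal();				/* window opens */
	/* ... single-step one out-of-line instruction ... */
	restore_after_singlestep();		/* window closes; hint recovered */
	return atomic_load(&tif_sigpending) ? 0 : 1;
}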
On 01/24, Oleg Nesterov wrote:
>
> But I still think this patch is fine. The current task is going to execute
> a single insn which can't enter the kernel and/or return to the userspace
                      ^^^^^^^^^^^^^^^^^^^^^^
I meant, it can't do a syscall; sorry for the possible confusion.
Oleg.
The following commit has been merged into the perf/core branch of tip:
Commit-ID: eae8a56ae0c74c1cf2f92a6709d215a9f329f60c
Gitweb: https://git.kernel.org/tip/eae8a56ae0c74c1cf2f92a6709d215a9f329f60c
Author: Liao Chang <liaochang1@huawei.com>
AuthorDate: Fri, 24 Jan 2025 09:38:25
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Mon, 03 Feb 2025 11:46:06 +01:00
uprobes: Remove redundant spinlock in uprobe_deny_signal()
Since clearing a bit in thread_info is an atomic operation, the spinlock
is redundant and can be removed; reducing lock contention is good for
performance.
Signed-off-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: "Masami Hiramatsu (Google)" <mhiramat@kernel.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Link: https://lore.kernel.org/r/20250124093826.2123675-2-liaochang1@huawei.com
---
kernel/events/uprobes.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 2ca797c..33bd608 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2302,9 +2302,7 @@ bool uprobe_deny_signal(void)
 	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
 
 	if (task_sigpending(t)) {
-		spin_lock_irq(&t->sighand->siglock);
 		clear_tsk_thread_flag(t, TIF_SIGPENDING);
-		spin_unlock_irq(&t->sighand->siglock);
 
 		if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
 			utask->state = UTASK_SSTEP_TRAPPED;