Message-ID: <20250701005452.075382262@goodmis.org>
User-Agent: quilt/0.68
Date: Mon, 30 Jun 2025 20:53:30 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 bpf@vger.kernel.org, x86@kernel.org
Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf, Peter Zijlstra,
 Ingo Molnar, Jiri Olsa, Namhyung Kim, Thomas Gleixner, Andrii Nakryiko,
 Indu Bhagat, "Jose E. Marchesi", Beau Belgrave, Jens Remus,
 Linus Torvalds, Andrew Morton, Jens Axboe, Florian Weimer
Subject: [PATCH v12 09/14] unwind deferred: Use SRCU unwind_deferred_task_work()
References: <20250701005321.942306427@goodmis.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Steven Rostedt

Instead of using the callback_mutex to protect the linked list of callbacks
in unwind_deferred_task_work(), use SRCU. This function is called every time
a task that has to record a requested stack trace exits. That can happen for
many tasks on several CPUs at the same time. A mutex is a bottleneck and can
cause contention that slows down performance.

As the callbacks themselves are allowed to sleep, regular RCU cannot be used
to protect the list. Use SRCU instead, as it still allows the callbacks to
sleep while the list can be read without holding the callback_mutex.
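
For readers unfamiliar with the pattern, here is a minimal, illustrative
sketch (not part of this patch) of an SRCU-protected list: readers may sleep
inside the read-side critical section, and the updater unlinks an entry under
a mutex and then waits for any in-flight readers with synchronize_srcu().
The demo_* names are hypothetical and exist only for this example.

  #include <linux/mutex.h>
  #include <linux/rculist.h>
  #include <linux/srcu.h>

  struct demo_work {			/* hypothetical element type */
  	struct list_head	list;
  	void			(*func)(struct demo_work *work);
  };

  static DEFINE_MUTEX(demo_mutex);	/* serializes list updates only */
  static LIST_HEAD(demo_list);
  DEFINE_STATIC_SRCU(demo_srcu);

  /* Reader: runs concurrently on many CPUs; the callbacks may sleep. */
  static void demo_run_all(void)
  {
  	struct demo_work *work;
  	int idx;

  	idx = srcu_read_lock(&demo_srcu);
  	list_for_each_entry_srcu(work, &demo_list, list,
  				 srcu_read_lock_held(&demo_srcu))
  		work->func(work);
  	srcu_read_unlock(&demo_srcu, idx);
  }

  /* Updater: unlink under the mutex, then wait out in-flight readers. */
  static void demo_remove(struct demo_work *work)
  {
  	mutex_lock(&demo_mutex);
  	list_del_rcu(&work->list);
  	mutex_unlock(&demo_mutex);

  	synchronize_srcu(&demo_srcu);	/* no reader can still see @work */
  }

The same ordering shows up in unwind_deferred_cancel() below: list_del_rcu()
followed by synchronize_srcu() guarantees that no unwind_deferred_task_work()
instance can still be iterating over the removed entry once cancel returns.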

Link: https://lore.kernel.org/all/ca9bd83a-6c80-4ee0-a83c-224b9d60b755@efficios.com/

Suggested-by: Mathieu Desnoyers
Signed-off-by: Steven Rostedt (Google)
---
 kernel/unwind/deferred.c | 35 ++++++++++++++++++++++++++---------
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index 6c558d00ff41..7309c9e0e57a 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -45,10 +45,11 @@ static inline u64 assign_timestamp(struct unwind_task_info *info,
 #define UNWIND_MAX_ENTRIES			\
 	((SZ_4K - sizeof(struct unwind_cache)) / sizeof(long))
 
-/* Guards adding to and reading the list of callbacks */
+/* Guards adding to or removing from the list of callbacks */
 static DEFINE_MUTEX(callback_mutex);
 static LIST_HEAD(callbacks);
 static unsigned long unwind_mask;
+DEFINE_STATIC_SRCU(unwind_srcu);
 
 /*
  * Read the task context timestamp, if this is the first caller then
@@ -134,6 +135,7 @@ static void unwind_deferred_task_work(struct callback_head *head)
 	struct unwind_stacktrace trace;
 	struct unwind_work *work;
 	u64 timestamp;
+	int idx;
 
 	if (WARN_ON_ONCE(!local_read(&info->pending)))
 		return;
@@ -152,13 +154,15 @@ static void unwind_deferred_task_work(struct callback_head *head)
 
 	timestamp = local64_read(&info->timestamp);
 
-	guard(mutex)(&callback_mutex);
-	list_for_each_entry(work, &callbacks, list) {
+	idx = srcu_read_lock(&unwind_srcu);
+	list_for_each_entry_srcu(work, &callbacks, list,
+				 srcu_read_lock_held(&unwind_srcu)) {
 		if (test_bit(work->bit, &info->unwind_mask)) {
 			work->func(work, &trace, timestamp);
 			clear_bit(work->bit, &info->unwind_mask);
 		}
 	}
+	srcu_read_unlock(&unwind_srcu, idx);
 }
 
 /**
@@ -193,6 +197,7 @@ int unwind_deferred_request(struct unwind_work *work, u64 *timestamp)
 {
 	struct unwind_task_info *info = &current->unwind_info;
 	long pending;
+	int bit;
 	int ret;
 
 	*timestamp = 0;
@@ -205,12 +210,17 @@ int unwind_deferred_request(struct unwind_work *work, u64 *timestamp)
 	if (!CAN_USE_IN_NMI && in_nmi())
 		return -EINVAL;
 
+	/* Do not allow cancelled works to request again */
+	bit = READ_ONCE(work->bit);
+	if (WARN_ON_ONCE(bit < 0))
+		return -EINVAL;
+
 	guard(irqsave)();
 
 	*timestamp = get_timestamp(info);
 
 	/* This is already queued */
-	if (test_bit(work->bit, &info->unwind_mask))
+	if (test_bit(bit, &info->unwind_mask))
 		return 1;
 
 	/* callback already pending? */
@@ -234,25 +244,32 @@ int unwind_deferred_request(struct unwind_work *work, u64 *timestamp)
 	}
 
 out:
-	return test_and_set_bit(work->bit, &info->unwind_mask);
+	return test_and_set_bit(bit, &info->unwind_mask);
 }
 
 void unwind_deferred_cancel(struct unwind_work *work)
 {
 	struct task_struct *g, *t;
+	int bit;
 
 	if (!work)
 		return;
 
 	guard(mutex)(&callback_mutex);
-	list_del(&work->list);
+	list_del_rcu(&work->list);
+	bit = work->bit;
+
+	/* Do not allow any more requests and prevent callbacks */
+	work->bit = -1;
+
+	__clear_bit(bit, &unwind_mask);
 
-	__clear_bit(work->bit, &unwind_mask);
+	synchronize_srcu(&unwind_srcu);
 
 	guard(rcu)();
 	/* Clear this bit from all threads */
 	for_each_process_thread(g, t) {
-		clear_bit(work->bit, &t->unwind_info.unwind_mask);
+		clear_bit(bit, &t->unwind_info.unwind_mask);
 	}
 }
 
@@ -269,7 +286,7 @@ int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
 	work->bit = ffz(unwind_mask);
 	__set_bit(work->bit, &unwind_mask);
 
-	list_add(&work->list, &callbacks);
+	list_add_rcu(&work->list, &callbacks);
 	work->func = func;
 	return 0;
 }
-- 
2.47.2