Message-ID: <20231005015350.799850910@goodmis.org>
User-Agent: quilt/0.66
Date: Wed, 04 Oct 2023 21:53:12 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Uros Bizjak
Subject: [for-next][PATCH 2/7] ring_buffer: Use try_cmpxchg instead of cmpxchg in rb_insert_pages
References: <20231005015310.859143353@goodmis.org>

From: Uros Bizjak

Use try_cmpxchg() instead of the cmpxchg(*ptr, old, new) == old pattern
in rb_insert_pages(). The x86 CMPXCHG instruction returns success in the
ZF flag, so this change saves a compare after the cmpxchg (and the
related move instruction in front of it).

No functional change intended.

Link: https://lore.kernel.org/linux-trace-kernel/20230914163420.12923-1-ubizjak@gmail.com
Cc: Steven Rostedt
Cc: Masami Hiramatsu
Signed-off-by: Uros Bizjak
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 515cafdb18d9..43cc47d7faaf 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2056,7 +2056,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 		retries = 10;
 		success = false;
 		while (retries--) {
-			struct list_head *head_page, *prev_page, *r;
+			struct list_head *head_page, *prev_page;
 			struct list_head *last_page, *first_page;
 			struct list_head *head_page_with_bit;
 			struct buffer_page *hpage = rb_set_head_page(cpu_buffer);
@@ -2075,9 +2075,9 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 			last_page->next = head_page_with_bit;
 			first_page->prev = prev_page;
 
-			r = cmpxchg(&prev_page->next, head_page_with_bit, first_page);
-
-			if (r == head_page_with_bit) {
+			/* caution: head_page_with_bit gets updated on cmpxchg failure */
+			if (try_cmpxchg(&prev_page->next,
+					&head_page_with_bit, first_page)) {
 				/*
 				 * yay, we replaced the page pointer to our new list,
 				 * now, we just have to update to head page's prev
-- 
2.40.1
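
For readers less familiar with the two interfaces, the pattern can be
sketched in plain userspace C. This is only an illustration, not kernel
code: the kernel's cmpxchg()/try_cmpxchg() are modelled here with the
GCC/Clang __atomic_compare_exchange_n() builtin, which likewise reports
success as a bool and, on failure, writes the current value back into
the "expected" slot -- the same caveat the new comment in the patch
points out for head_page_with_bit.

  #include <stdbool.h>
  #include <stdio.h>

  static long slot = 42;

  /* Old style: fetch the previous value, then compare it ourselves. */
  static bool update_old_style(long expected, long new_val)
  {
          long r = __sync_val_compare_and_swap(&slot, expected, new_val);
          return r == expected;   /* the extra compare the patch removes */
  }

  /* New style: the builtin reports success directly; on failure it
   * writes the value currently in 'slot' back into *expected. */
  static bool update_new_style(long *expected, long new_val)
  {
          return __atomic_compare_exchange_n(&slot, expected, new_val,
                                             false, __ATOMIC_SEQ_CST,
                                             __ATOMIC_SEQ_CST);
  }

  int main(void)
  {
          long expected = 42;

          /* slot is 42, so this succeeds and slot becomes 43 */
          printf("old style: %d\n", update_old_style(42, 43));

          /* slot is now 43 but we still expect 42: the exchange fails
           * and 'expected' is rewritten to 43, just as
           * head_page_with_bit is rewritten on try_cmpxchg() failure */
          printf("new style: %d\n", update_new_style(&expected, 44));
          printf("slot = %ld, expected = %ld\n", slot, expected);
          return 0;
  }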