From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ingo Molnar, Andrew Morton, "Steven Rostedt (Google)"
Subject: [PATCH 6.0 155/862] ring-buffer: Add ring_buffer_wake_waiters()
Date: Wed, 19 Oct 2022 10:24:02 +0200
Message-Id: <20221019083256.828194839@linuxfoundation.org>
In-Reply-To: <20221019083249.951566199@linuxfoundation.org>
References: <20221019083249.951566199@linuxfoundation.org>

From: Steven Rostedt (Google)

commit 7e9fbbb1b776d8d7969551565bc246f74ec53b27 upstream.

On closing of a file that represents a ring buffer, or on flushing the
file, there may be waiters on the ring buffer that need to be woken up
so that they can exit the ring_buffer_wait() function.

Add ring_buffer_wake_waiters() to wake up the waiters on the ring buffer
and allow them to exit the wait loop.
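Not part of this patch: a minimal sketch of how a release path could use the
new helper once it exists. The example_release() function, the assumption
that the trace_buffer pointer lives in filp->private_data, and the include
choices are hypothetical stand-ins; the real callers are added by follow-up
tracing changes.

/* assumes <linux/fs.h> and <linux/ring_buffer.h> */
static int example_release(struct inode *inode, struct file *filp)
{
	struct trace_buffer *buffer = filp->private_data;	/* hypothetical */

	/*
	 * Bump wait_index and wake everyone blocked in ring_buffer_wait(),
	 * on the buffer-wide queue as well as on every per-CPU queue.
	 */
	ring_buffer_wake_waiters(buffer, RING_BUFFER_ALL_CPUS);

	return 0;
}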
Link: https://lkml.kernel.org/r/20220928133938.28dc2c27@gandalf.local.home
Cc: stable@vger.kernel.org
Cc: Ingo Molnar
Cc: Andrew Morton
Fixes: 15693458c4bc0 ("tracing/ring-buffer: Move poll wake ups into ring buffer code")
Signed-off-by: Steven Rostedt (Google)
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/ring_buffer.h |    2 +-
 kernel/trace/ring_buffer.c  |   39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+), 1 deletion(-)

--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -101,7 +101,7 @@ __ring_buffer_alloc(unsigned long size,
 int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
			  struct file *filp, poll_table *poll_table);
-
+void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
 
 #define RING_BUFFER_ALL_CPUS -1
 
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -413,6 +413,7 @@ struct rb_irq_work {
 	struct irq_work			work;
 	wait_queue_head_t		waiters;
 	wait_queue_head_t		full_waiters;
+	long				wait_index;
 	bool				waiters_pending;
 	bool				full_waiters_pending;
 	bool				wakeup_full;
@@ -925,6 +926,37 @@ static void rb_wake_up_waiters(struct ir
 }
 
 /**
+ * ring_buffer_wake_waiters - wake up any waiters on this ring buffer
+ * @buffer: The ring buffer to wake waiters on
+ *
+ * In the case of a file that represents a ring buffer is closing,
+ * it is prudent to wake up any waiters that are on this.
+ */
+void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	struct rb_irq_work *rbwork;
+
+	if (cpu == RING_BUFFER_ALL_CPUS) {
+
+		/* Wake up individual ones too. One level recursion */
+		for_each_buffer_cpu(buffer, cpu)
+			ring_buffer_wake_waiters(buffer, cpu);
+
+		rbwork = &buffer->irq_work;
+	} else {
+		cpu_buffer = buffer->buffers[cpu];
+		rbwork = &cpu_buffer->irq_work;
+	}
+
+	rbwork->wait_index++;
+	/* make sure the waiters see the new index */
+	smp_wmb();
+
+	rb_wake_up_waiters(&rbwork->work);
+}
+
+/**
  * ring_buffer_wait - wait for input to the ring buffer
  * @buffer: buffer to wait on
  * @cpu: the cpu buffer to wait on
@@ -939,6 +971,7 @@ int ring_buffer_wait(struct trace_buffer
 	struct ring_buffer_per_cpu *cpu_buffer;
 	DEFINE_WAIT(wait);
 	struct rb_irq_work *work;
+	long wait_index;
 	int ret = 0;
 
 	/*
@@ -957,6 +990,7 @@ int ring_buffer_wait(struct trace_buffer
 		work = &cpu_buffer->irq_work;
 	}
 
+	wait_index = READ_ONCE(work->wait_index);
 
 	while (true) {
 		if (full)
@@ -1021,6 +1055,11 @@ int ring_buffer_wait(struct trace_buffer
 		}
 
 		schedule();
+
+		/* Make sure to see the new wait index */
+		smp_rmb();
+		if (wait_index != work->wait_index)
+			break;
 	}
 
 	if (full)
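Not part of the patch itself: stripped of the wait-queue plumbing, the
ordering the hunks above rely on reduces to the sketch below. waker_side()
and waiter_should_exit() are illustrative names only, not kernel functions.

/* Waker: publish the new index before issuing the wake-up. */
static void waker_side(struct rb_irq_work *rbwork)
{
	rbwork->wait_index++;
	smp_wmb();	/* make sure the waiters see the new index */
	rb_wake_up_waiters(&rbwork->work);
}

/* Waiter: after being woken, check whether the index moved. */
static bool waiter_should_exit(struct rb_irq_work *work, long snapshot)
{
	smp_rmb();	/* pairs with the smp_wmb() in the waker */
	return snapshot != READ_ONCE(work->wait_index);
}

ring_buffer_wait() takes the snapshot with READ_ONCE(work->wait_index) before
it first sleeps, and breaks out of its loop after schedule() as soon as the
index no longer matches, which is what lets closing or flushing the file
unblock every waiter.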