From: Sasha Levin
To: stable@vger.kernel.org, rostedt@goodmis.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
    Linus Torvalds, linke li, Rabin Vincent, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org
Subject: FAILED: Patch "tracing/ring-buffer: Fix wait_on_pipe() race" failed to apply to 5.15-stable tree
Date: Wed, 27 Mar 2024 08:17:44 -0400
Message-ID: <20240327121745.2833766-1-sashal@kernel.org>

The patch below does not apply to the 5.15-stable tree.

If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

------------------ original commit in Linus's tree ------------------

From 2aa043a55b9a764c9cbde5a8c654eeaaffe224cf Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (Google)"
Date: Tue, 12 Mar 2024 08:15:08 -0400
Subject: [PATCH] tracing/ring-buffer: Fix wait_on_pipe() race

When the trace_pipe_raw file is closed, there should be no new readers
on the file descriptor. This is mostly handled with the waking and
wait_index fields of the iterator. But there's still a slight race.

          CPU 0                              CPU 1
          -----                              -----
                                        wait_index++;
   index = wait_index;
                                        ring_buffer_wake_waiters();
   wait_on_pipe()
      ring_buffer_wait();

The ring_buffer_wait() will miss the wakeup from CPU 1. The problem is
that the ring_buffer_wait() needs the logic of:

	prepare_to_wait();
	if (!condition)
		schedule();

Where the missing condition check is the iter->wait_index update.
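To make the missing check concrete, here is a minimal sketch of that
lost-wakeup-safe pattern. It is not part of this patch, and the names
my_wait, my_waitq and my_cond are hypothetical: the waiter queues itself
first and only then tests the condition, so a concurrent waker either
finds the waiter already on the queue or the waiter sees the update
before it sleeps.

	/*
	 * Sketch only -- not from the patch.  "my_*" names are hypothetical.
	 */
	static int my_wait(struct wait_queue_head *my_waitq,
			   bool (*my_cond)(void *data), void *data)
	{
		DEFINE_WAIT(wait);
		int ret = 0;

		for (;;) {
			/* Queue ourselves before testing the condition... */
			prepare_to_wait(my_waitq, &wait, TASK_INTERRUPTIBLE);

			/* ...so a concurrent waker cannot slip in unnoticed. */
			if (my_cond(data))
				break;

			if (signal_pending(current)) {
				ret = -EINTR;
				break;
			}

			schedule();
		}
		finish_wait(my_waitq, &wait);
		return ret;
	}

wait_event_interruptible(), which ring_buffer_wait() uses internally,
expands to essentially this loop with the condition supplied by the
caller.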
Have the ring_buffer_wait() take a conditional callback function and a
data parameter that can be used within the wait_event_interruptible() of
the ring_buffer_wait() function.

In wait_on_pipe(), pass a condition function that will check if the
wait_index has been updated; if it has, it will return true to break out
of the wait_event_interruptible() loop.

Create a new field "closed" in the trace_iterator and set it in the
.flush() callback before calling ring_buffer_wake_waiters(). This will
keep any new readers from waiting on a closed file descriptor.

Have the wait_on_pipe() condition callback also check the closed field.

Change the wait_index field of the trace_iterator to atomic_t. There's
no reason it needs to be 'long' and making it atomic and using
atomic_read_acquire() and atomic_fetch_inc_release() will provide the
necessary memory barriers.

Add a "woken" flag to tracing_buffers_splice_read() to exit the loop
after one more try to fetch data. That is, if it waited for data and
something woke it up, it should try to collect any new data and then
exit back to user space.

Link: https://lore.kernel.org/linux-trace-kernel/CAHk-=wgsNgewHFxZAJiAQznwPMqEtQmi1waeS2O1v6L4c_Um5A@mail.gmail.com/
Link: https://lore.kernel.org/linux-trace-kernel/20240312121703.557950713@goodmis.org

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu Desnoyers
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: linke li
Cc: Rabin Vincent
Fixes: f3ddb74ad0790 ("tracing: Wake up ring buffer waiters on closing of the file")
Signed-off-by: Steven Rostedt (Google)
---
 include/linux/ring_buffer.h  |  3 ++-
 include/linux/trace_events.h |  5 ++++-
 kernel/trace/ring_buffer.c   | 13 ++++++-----
 kernel/trace/trace.c         | 43 ++++++++++++++++++++++++++----------
 4 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 338a33db1577e..dc5ae4e96aee0 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -99,7 +99,8 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
 })
 
 typedef bool (*ring_buffer_cond_fn)(void *data);
-int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
+		     ring_buffer_cond_fn cond, void *data);
 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
 			       struct file *filp, poll_table *poll_table, int full);
 void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d68ff9b1247f9..fc6d0af56bb17 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -103,13 +103,16 @@ struct trace_iterator {
 	unsigned int		temp_size;
 	char			*fmt;	/* modified format holder */
 	unsigned int		fmt_size;
-	long			wait_index;
+	atomic_t		wait_index;
 
 	/* trace_seq for __print_flags() and __print_symbolic() etc. */
 	struct trace_seq	tmp_seq;
 
 	cpumask_var_t		started;
 
+	/* Set when the file is closed to prevent new waiters */
+	bool			closed;
+
 	/* it's true when current open file is snapshot */
 	bool			snapshot;
 
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f4c34b7c7e1e7..350607cce8694 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -902,23 +902,26 @@ static bool rb_wait_once(void *data)
  * @buffer: buffer to wait on
  * @cpu: the cpu buffer to wait on
  * @full: wait until the percentage of pages are available, if @cpu != RING_BUFFER_ALL_CPUS
+ * @cond: condition function to break out of wait (NULL to run once)
+ * @data: the data to pass to @cond.
  *
  * If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon
  * as data is added to any of the @buffer's cpu buffers. Otherwise
  * it will wait for data to be added to a specific cpu buffer.
  */
-int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
+		     ring_buffer_cond_fn cond, void *data)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
 	struct wait_queue_head *waitq;
-	ring_buffer_cond_fn cond;
 	struct rb_irq_work *rbwork;
-	void *data;
 	long once = 0;
 	int ret = 0;
 
-	cond = rb_wait_once;
-	data = &once;
+	if (!cond) {
+		cond = rb_wait_once;
+		data = &once;
+	}
 
 	/*
 	 * Depending on what the caller is waiting for, either any
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index c9c8983073485..d390fea3a6a52 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1955,15 +1955,36 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
 
 #endif /* CONFIG_TRACER_MAX_TRACE */
 
+struct pipe_wait {
+	struct trace_iterator	*iter;
+	int			wait_index;
+};
+
+static bool wait_pipe_cond(void *data)
+{
+	struct pipe_wait *pwait = data;
+	struct trace_iterator *iter = pwait->iter;
+
+	if (atomic_read_acquire(&iter->wait_index) != pwait->wait_index)
+		return true;
+
+	return iter->closed;
+}
+
 static int wait_on_pipe(struct trace_iterator *iter, int full)
 {
+	struct pipe_wait pwait;
 	int ret;
 
 	/* Iterators are static, they should be filled or empty */
 	if (trace_buffer_iter(iter, iter->cpu_file))
 		return 0;
 
-	ret = ring_buffer_wait(iter->array_buffer->buffer, iter->cpu_file, full);
+	pwait.wait_index = atomic_read_acquire(&iter->wait_index);
+	pwait.iter = iter;
+
+	ret = ring_buffer_wait(iter->array_buffer->buffer, iter->cpu_file, full,
+			       wait_pipe_cond, &pwait);
 
 #ifdef CONFIG_TRACER_MAX_TRACE
 	/*
@@ -8398,9 +8419,9 @@ static int tracing_buffers_flush(struct file *file, fl_owner_t id)
 	struct ftrace_buffer_info *info = file->private_data;
 	struct trace_iterator *iter = &info->iter;
 
-	iter->wait_index++;
+	iter->closed = true;
 	/* Make sure the waiters see the new wait_index */
-	smp_wmb();
+	(void)atomic_fetch_inc_release(&iter->wait_index);
 
 	ring_buffer_wake_waiters(iter->array_buffer->buffer, iter->cpu_file);
 
@@ -8500,6 +8521,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 		.spd_release	= buffer_spd_release,
 	};
 	struct buffer_ref *ref;
+	bool woken = false;
 	int page_size;
 	int entries, i;
 	ssize_t ret = 0;
@@ -8573,17 +8595,17 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 
 	/* did we read anything? */
 	if (!spd.nr_pages) {
-		long wait_index;
 
 		if (ret)
 			goto out;
 
+		if (woken)
+			goto out;
+
 		ret = -EAGAIN;
 		if ((file->f_flags & O_NONBLOCK) || (flags & SPLICE_F_NONBLOCK))
 			goto out;
 
-		wait_index = READ_ONCE(iter->wait_index);
-
 		ret = wait_on_pipe(iter, iter->snapshot ? 0 : iter->tr->buffer_percent);
 		if (ret)
 			goto out;
@@ -8592,10 +8614,8 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 		if (!tracer_tracing_is_on(iter->tr))
 			goto out;
 
-		/* Make sure we see the new wait_index */
-		smp_rmb();
-		if (wait_index != iter->wait_index)
-			goto out;
+		/* Iterate one more time to collect any new data then exit */
+		woken = true;
 
 		goto again;
 	}
@@ -8618,9 +8638,8 @@ static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned
 
 	mutex_lock(&trace_types_lock);
 
-	iter->wait_index++;
 	/* Make sure the waiters see the new wait_index */
-	smp_wmb();
+	(void)atomic_fetch_inc_release(&iter->wait_index);
 
 	ring_buffer_wake_waiters(iter->array_buffer->buffer, iter->cpu_file);
 
-- 
2.43.0
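
For reference, here is a sketch of how a caller might use the reworked
ring_buffer_wait() shown above. It simply mirrors the wait_on_pipe()
change from the patch; the "my_*" names are hypothetical and not taken
from the tree.

	/*
	 * Caller-side sketch of the new ring_buffer_wait() interface.
	 * Mirrors wait_on_pipe(); "my_*" names are hypothetical.
	 */
	struct my_wait_data {
		struct trace_iterator	*iter;
		int			wait_index;
	};

	/* Return true to break out of the wait loop inside ring_buffer_wait(). */
	static bool my_cond(void *data)
	{
		struct my_wait_data *w = data;

		/* Pairs with atomic_fetch_inc_release() on the flush/ioctl side. */
		if (atomic_read_acquire(&w->iter->wait_index) != w->wait_index)
			return true;

		/* A closed file descriptor must not gain new waiters. */
		return w->iter->closed;
	}

	static int my_wait_for_data(struct trace_iterator *iter, int full)
	{
		struct my_wait_data w = {
			.iter		= iter,
			.wait_index	= atomic_read_acquire(&iter->wait_index),
		};

		/* A NULL condition falls back to the internal "wait once" behavior. */
		return ring_buffer_wait(iter->array_buffer->buffer, iter->cpu_file,
					full, my_cond, &w);
	}

The wait_index snapshot is taken with an acquire before the call,
pairing with the atomic_fetch_inc_release() in the flush and ioctl
paths, so an increment that races with the call is caught by the
condition callback instead of being lost.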