Date: Sat, 28 Mar 2026 08:39:50 -0400
From: Steven Rostedt
To: LKML
Cc: Masami Hiramatsu,
 Mathieu Desnoyers,
 Wesley Atwell
Subject: [for-linus][PATCH] tracing: Drain deferred trigger frees
 if kthread creation fails
Message-ID: <20260328083950.32770bbf@robin>

tracing fix for 7.0:

- Fix freeing of event triggers in early boot up

  If the same trigger is added on the kernel command line, the second one
  will fail to be applied and the trigger created will be freed. This calls
  into the deferred logic and creates a kernel thread to do the freeing.
  But the command line logic is called before kernel threads can be
  created, and this leads to a NULL pointer dereference. Delay freeing
  event triggers until late init.
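The boot-time ordering problem described above can be modeled in a few lines of plain C. This is a hypothetical userspace sketch, not the kernel code: it uses ordinary pointers instead of the kernel's lock-less llist, and plain flags in place of `system_state` and the kthread, to show why frees must be queued while "booting" and drained later rather than handed to a worker that cannot exist yet.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the kernel objects. */
struct trigger_data {
	struct trigger_data *next;
};

static struct trigger_data *free_list;	/* deferred frees */
static bool booting = true;		/* SYSTEM_BOOTING stand-in */
static bool worker_available;		/* "kthread exists" stand-in */

/*
 * During boot, defer the free instead of touching the (nonexistent)
 * worker thread; once a worker is expected, the caller drains the list.
 */
static void trigger_data_free_model(struct trigger_data *data)
{
	if (booting && !worker_available) {
		data->next = free_list;
		free_list = data;
		return;
	}
	free(data);
}

/* Late-init stand-in: reclaim everything queued during boot. */
static int drain_deferred(void)
{
	int n = 0;

	while (free_list) {
		struct trigger_data *data = free_list;

		free_list = data->next;
		free(data);
		n++;
	}
	return n;
}

static int demo(void)
{
	/* Two triggers freed at "boot": both must be deferred, not freed. */
	trigger_data_free_model(calloc(1, sizeof(struct trigger_data)));
	trigger_data_free_model(calloc(1, sizeof(struct trigger_data)));

	booting = false;
	return drain_deferred();
}
```

In this model, both boot-time frees land on `free_list`, and the later drain reclaims both nodes (`demo()` returns 2), mirroring the patch's delay-until-late-init behavior.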
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
trace/fixes

Head SHA1: 250ab25391edeeab8462b68be42e4904506c409c


Wesley Atwell (1):
      tracing: Drain deferred trigger frees if kthread creation fails

----
 kernel/trace/trace_events_trigger.c | 79 +++++++++++++++++++++++++++++++------
 1 file changed, 66 insertions(+), 13 deletions(-)
---------------------------
commit 250ab25391edeeab8462b68be42e4904506c409c
Author: Wesley Atwell
Date:   Tue Mar 24 16:13:26 2026 -0600

    tracing: Drain deferred trigger frees if kthread creation fails

    Boot-time trigger registration can fail before the trigger-data cleanup
    kthread exists. Deferring those frees until late init is fine, but the
    post-boot fallback must still drain the deferred list if kthread
    creation never succeeds.

    Otherwise, boot-deferred nodes can accumulate on trigger_data_free_list,
    later frees fall back to synchronously freeing only the current object,
    and the older queued entries are leaked forever.

    To trigger this, add the following to the kernel command line:

      trace_event=sched_switch trace_trigger=sched_switch.traceon,sched_switch.traceon

    The second traceon trigger will fail and be freed. This triggers a NULL
    pointer dereference and crashes the kernel.

    Keep the deferred boot-time behavior, but when kthread creation fails,
    drain the whole queued list synchronously. Do the same in the late-init
    drain path so queued entries are not stranded there either.
    Cc: stable@vger.kernel.org
    Link: https://patch.msgid.link/20260324221326.1395799-3-atwellwea@gmail.com
    Fixes: 61d445af0a7c ("tracing: Add bulk garbage collection of freeing event_trigger_data")
    Signed-off-by: Wesley Atwell
    Signed-off-by: Steven Rostedt (Google)

diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index d5230b759a2d..655db2e82513 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -22,6 +22,39 @@ static struct task_struct *trigger_kthread;
 static struct llist_head trigger_data_free_list;
 static DEFINE_MUTEX(trigger_data_kthread_mutex);
 
+static int trigger_kthread_fn(void *ignore);
+
+static void trigger_create_kthread_locked(void)
+{
+	lockdep_assert_held(&trigger_data_kthread_mutex);
+
+	if (!trigger_kthread) {
+		struct task_struct *kthread;
+
+		kthread = kthread_create(trigger_kthread_fn, NULL,
+					 "trigger_data_free");
+		if (!IS_ERR(kthread))
+			WRITE_ONCE(trigger_kthread, kthread);
+	}
+}
+
+static void trigger_data_free_queued_locked(void)
+{
+	struct event_trigger_data *data, *tmp;
+	struct llist_node *llnodes;
+
+	lockdep_assert_held(&trigger_data_kthread_mutex);
+
+	llnodes = llist_del_all(&trigger_data_free_list);
+	if (!llnodes)
+		return;
+
+	tracepoint_synchronize_unregister();
+
+	llist_for_each_entry_safe(data, tmp, llnodes, llist)
+		kfree(data);
+}
+
 /* Bulk garbage collection of event_trigger_data elements */
 static int trigger_kthread_fn(void *ignore)
 {
@@ -56,30 +89,50 @@ void trigger_data_free(struct event_trigger_data *data)
 	if (data->cmd_ops->set_filter)
 		data->cmd_ops->set_filter(NULL, data, NULL);
 
+	/*
+	 * Boot-time trigger registration can fail before kthread creation
+	 * works. Keep the deferred-free semantics during boot and let late
+	 * init start the kthread to drain the list.
+	 */
+	if (system_state == SYSTEM_BOOTING && !trigger_kthread) {
+		llist_add(&data->llist, &trigger_data_free_list);
+		return;
+	}
+
 	if (unlikely(!trigger_kthread)) {
 		guard(mutex)(&trigger_data_kthread_mutex);
+
+		trigger_create_kthread_locked();
 		/* Check again after taking mutex */
 		if (!trigger_kthread) {
-			struct task_struct *kthread;
-
-			kthread = kthread_create(trigger_kthread_fn, NULL,
-						 "trigger_data_free");
-			if (!IS_ERR(kthread))
-				WRITE_ONCE(trigger_kthread, kthread);
+			llist_add(&data->llist, &trigger_data_free_list);
+			/* Drain the queued frees synchronously if creation failed. */
+			trigger_data_free_queued_locked();
+			return;
 		}
 	}
 
-	if (!trigger_kthread) {
-		/* Do it the slow way */
-		tracepoint_synchronize_unregister();
-		kfree(data);
-		return;
-	}
-
 	llist_add(&data->llist, &trigger_data_free_list);
 	wake_up_process(trigger_kthread);
 }
 
+static int __init trigger_data_free_init(void)
+{
+	guard(mutex)(&trigger_data_kthread_mutex);
+
+	if (llist_empty(&trigger_data_free_list))
+		return 0;
+
+	trigger_create_kthread_locked();
+	if (trigger_kthread)
+		wake_up_process(trigger_kthread);
+	else
+		trigger_data_free_queued_locked();
+
+	return 0;
+}
+late_initcall(trigger_data_free_init);
+
 static inline void data_ops_trigger(struct event_trigger_data *data,
 				    struct trace_buffer *buffer, void *rec,
 				    struct ring_buffer_event *event)
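The fallback behavior the patch fixes can also be sketched in userspace C. This is a hypothetical model with invented names (no locking, no llist, no kthread): it shows why, when worker creation fails, the free path must drain the whole deferred list rather than freeing only the current object, since nodes queued earlier during boot would otherwise leak.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical node standing in for event_trigger_data. */
struct node {
	struct node *next;
};

static struct node *deferred;	/* nodes queued while no worker existed */
static int worker_created;	/* did "kthread_create()" succeed? */

static void queue_free(struct node *n)
{
	n->next = deferred;
	deferred = n;
}

/* Drain everything on the deferred list; returns how many were freed. */
static int drain_queued(void)
{
	int count = 0;

	while (deferred) {
		struct node *cur = deferred;

		deferred = cur->next;
		free(cur);
		count++;
	}
	return count;
}

/*
 * Post-boot free path when worker creation fails: queue the current
 * node, then drain everything synchronously, so older boot-deferred
 * nodes are reclaimed along with it (the buggy fallback freed only
 * the current object and leaked the rest).
 */
static int free_with_failed_worker(struct node *n, int boot_deferred)
{
	int i;

	/* Simulate leftovers queued during boot. */
	for (i = 0; i < boot_deferred; i++)
		queue_free(calloc(1, sizeof(struct node)));

	queue_free(n);
	if (!worker_created)
		return drain_queued();
	return 0;
}
```

With two boot-deferred leftovers plus the current node, the drain reclaims all three in this model, which is the invariant the patch restores in both `trigger_data_free()` and the late-init path.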