From nobody Thu Apr 2 09:30:14 2026
From: yuhaocheng035@gmail.com
To: Peter Zijlstra, Qing Wang
Cc: acme@kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com,
    irogers@google.com, james.clark@linaro.org, jolsa@kernel.org,
    linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    mark.rutland@arm.com, mingo@redhat.com, namhyung@kernel.org,
    syzbot+196a82fd904572696b3c@syzkaller.appspotmail.com
Subject: [PATCH v4] perf/core: Fix refcount bug and potential UAF in perf_mmap
Date: Fri, 27 Mar 2026 20:29:52 +0800
Message-ID: <20260327122953.64466-1-yuhaocheng035@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260326112821.GK3738786@noisy.programming.kicks-ass.net>
References: <20260326112821.GK3738786@noisy.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Haocheng Yu

Syzkaller reported a "refcount_t: addition on 0; use-after-free" warning in
perf_mmap(). The issue is caused by a race between a failing mmap() setup
and a concurrent mmap() on a dependent event (e.g., one using output
redirection).

In perf_mmap(), the ring buffer (rb) is allocated and assigned to event->rb
with mmap_mutex held. The mutex is then released to perform map_range().
If map_range() fails, perf_mmap_close() is called to clean up.
However, since the mutex was dropped, another thread attaching to this
event (via inherited events or output redirection) can acquire the mutex,
observe the valid event->rb pointer, and attempt to increment its reference
count. If the cleanup path has already dropped the reference count to zero,
this results in a use-after-free or a refcount saturation warning.

Fix this by extending the scope of mmap_mutex to cover the map_range()
call. This ensures that ring buffer initialization and mapping (or cleanup
on failure) happen atomically with respect to the mutex, preventing other
threads from accessing a half-initialized or dying ring buffer.

v2: Extending the guarded region would make perf_mmap_close() re-acquire
    event->mmap_mutex, potentially leading to a self-deadlock. The original
    perf_mmap_close() logic was therefore retained, and the lock-holding
    path was split out into a perf_mmap_close_locked() helper.
v3: The fix was made smaller by passing a "holds_event_mmap_mutex"
    parameter to perf_mmap_close().
v4: Reworked: when the first mmap() fails in map_range(), undo the
    accounting and detach the ring buffer inline, while still holding
    event->mmap_mutex, instead of calling perf_mmap_close().
Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Reviewed-by: Ian Rogers
Reviewed-by: Peter Zijlstra
Signed-off-by: Haocheng Yu
---
 kernel/events/core.c        | 59 +++++++++++++++++++++++++++++--------
 kernel/events/internal.h    |  1 +
 kernel/events/ring_buffer.c |  2 ++
 3 files changed, 49 insertions(+), 13 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 22a0f405585b..d3f978402b1e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7010,7 +7010,7 @@ static void perf_mmap_open(struct vm_area_struct *vma)
 }
 
 static void perf_pmu_output_stop(struct perf_event *event);
-
+static void perf_mmap_unaccount(struct vm_area_struct *vma, struct perf_buffer *rb);
 /*
  * A buffer can be mmap()ed multiple times; either directly through the same
  * event, or through other events by use of perf_event_set_output().
@@ -7025,8 +7025,6 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 	mapped_f unmapped = get_mapped(event, event_unmapped);
 	struct perf_buffer *rb = ring_buffer_get(event);
 	struct user_struct *mmap_user = rb->mmap_user;
-	int mmap_locked = rb->mmap_locked;
-	unsigned long size = perf_data_size(rb);
 	bool detach_rest = false;
 
 	/* FIXIES vs perf_pmu_unregister() */
@@ -7121,11 +7119,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 	 * Aside from that, this buffer is 'fully' detached and unmapped,
 	 * undo the VM accounting.
 	 */
-
-	atomic_long_sub((size >> PAGE_SHIFT) + 1 - mmap_locked,
-			&mmap_user->locked_vm);
-	atomic64_sub(mmap_locked, &vma->vm_mm->pinned_vm);
-	free_uid(mmap_user);
+	perf_mmap_unaccount(vma, rb);
 
 out_put:
 	ring_buffer_put(rb); /* could be last */
@@ -7265,6 +7259,15 @@ static void perf_mmap_account(struct vm_area_struct *vma, long user_extra, long
 	atomic64_add(extra, &vma->vm_mm->pinned_vm);
 }
 
+static void perf_mmap_unaccount(struct vm_area_struct *vma, struct perf_buffer *rb)
+{
+	struct user_struct *user = rb->mmap_user;
+
+	atomic_long_sub((perf_data_size(rb) >> PAGE_SHIFT) + 1 - rb->mmap_locked,
+			&user->locked_vm);
+	atomic64_sub(rb->mmap_locked, &vma->vm_mm->pinned_vm);
+}
+
 static int perf_mmap_rb(struct vm_area_struct *vma, struct perf_event *event,
 			unsigned long nr_pages)
 {
@@ -7327,8 +7330,6 @@ static int perf_mmap_rb(struct vm_area_struct *vma, struct perf_event *event,
 	if (!rb)
 		return -ENOMEM;
 
-	refcount_set(&rb->mmap_count, 1);
-	rb->mmap_user = get_current_user();
 	rb->mmap_locked = extra;
 
 	ring_buffer_attach(event, rb);
@@ -7484,10 +7485,42 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		 * vmops::close().
 		 */
 		ret = map_range(event->rb, vma);
-		if (ret)
-			perf_mmap_close(vma);
-	}
+		if (likely(!ret))
+			return 0;
+
+		/* Error path */
 
+		/*
+		 * If this is the first mmap(), then event->mmap_count should
+		 * be stable at 1. It is only modified by:
+		 * perf_mmap_{open,close}() and perf_mmap().
+		 *
+		 * The former are not possible because this mmap() hasn't been
+		 * successful yet, and the latter is serialized by
+		 * event->mmap_mutex which we still hold (note that mmap_lock
+		 * is not strictly sufficient here, because the event fd can
+		 * be passed to another process through trivial means like
+		 * fork(), leading to concurrent mmap() from different mm).
+		 *
+		 * Make sure to remove event->rb before releasing
+		 * event->mmap_mutex, such that any concurrent mmap() will not
+		 * attempt use this failed buffer.
+		 */
+		if (refcount_read(&event->mmap_count) == 1) {
+			/*
+			 * Minimal perf_mmap_close(); there can't be AUX or
+			 * other events on account of this being the first.
+			 */
+			mapped = get_mapped(event, event_unmapped);
+			if (mapped)
+				mapped(event, vma->vm_mm);
+			perf_mmap_unaccount(vma, event->rb);
+			ring_buffer_attach(event, NULL); /* drops last rb->refcount */
+			refcount_set(&event->mmap_count, 0);
+			return ret;
+		}
+	}
+	perf_mmap_close(vma);
 	return ret;
 }
 
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index d9cc57083091..c03c4f2eea57 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -67,6 +67,7 @@ static inline void rb_free_rcu(struct rcu_head *rcu_head)
 	struct perf_buffer *rb;
 
 	rb = container_of(rcu_head, struct perf_buffer, rcu_head);
+	free_uid(rb->mmap_user);
 	rb_free(rb);
 }
 
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 3e7de2661417..9fe92161715e 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -340,6 +340,8 @@ ring_buffer_init(struct perf_buffer *rb, long watermark, int flags)
 	rb->paused = 1;
 
 	mutex_init(&rb->aux_mutex);
+	rb->mmap_user = get_current_user();
+	refcount_set(&rb->mmap_count, 1);
 }
 
 void perf_aux_output_flag(struct perf_output_handle *handle, u64 flags)

base-commit: 77de62ad3de3967818c3dbe656b7336ebee461d2
-- 
2.51.0