[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by Haocheng Yu 6 days, 3 hours ago
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.

Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.

Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 			ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
 
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+	/*
+	 * Since pinned accounting is per vm we cannot allow fork() to copy our
+	 * vma.
+	 */
+	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+	vma->vm_ops = &perf_mmap_vmops;
+
+	mapped = get_mapped(event, event_mapped);
+	if (mapped)
+		mapped(event, vma->vm_mm);
+
+	/*
+	 * Try to map it into the page table. On fail, invoke
+	 * perf_mmap_close() to undo the above, as the callsite expects
+	 * full cleanup in this case and therefore does not invoke
+	 * vmops::close().
+	 */
+	ret = map_range(event->rb, vma);
+	if (ret)
+		perf_mmap_close(vma);
+	}
 
 	return ret;
 }
-- 
2.51.0
Re: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by kernel test robot 5 days, 20 hours ago
Hi Haocheng,

kernel test robot noticed the following build warnings:

[auto build test WARNING on perf-tools-next/perf-tools-next]
[also build test WARNING on tip/perf/core perf-tools/perf-tools linus/master v6.19-rc7 next-20260130]
[cannot apply to acme/perf/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Haocheng-Yu/perf-core-Fix-refcount-bug-and-potential-UAF-in-perf_mmap/20260201-193746
base:   https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git perf-tools-next
patch link:    https://lore.kernel.org/r/20260201113446.4328-1-yuhaocheng035%40gmail.com
patch subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
config: mips-randconfig-r072-20260201 (https://download.01.org/0day-ci/archive/20260202/202602020208.m7KIjdzW-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 9b8addffa70cee5b2acc5454712d9cf78ce45710)
smatch version: v0.5.0-8994-gd50c5a4c

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/

smatch warnings:
kernel/events/core.c:7183 perf_mmap() warn: inconsistent indenting

vim +7183 kernel/events/core.c

7b732a75047738 kernel/perf_counter.c Peter Zijlstra          2009-03-23  7131  
37d81828385f8f kernel/perf_counter.c Paul Mackerras          2009-03-23  7132  static int perf_mmap(struct file *file, struct vm_area_struct *vma)
37d81828385f8f kernel/perf_counter.c Paul Mackerras          2009-03-23  7133  {
cdd6c482c9ff9c kernel/perf_event.c   Ingo Molnar             2009-09-21  7134  	struct perf_event *event = file->private_data;
81e026ca47b386 kernel/events/core.c  Thomas Gleixner         2025-08-12  7135  	unsigned long vma_size, nr_pages;
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7136  	mapped_f mapped;
5d299897f1e360 kernel/events/core.c  Peter Zijlstra          2025-08-12  7137  	int ret;
d57e34fdd60be7 kernel/perf_event.c   Peter Zijlstra          2010-05-28  7138  
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra          2010-05-18  7139  	/*
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra          2010-05-18  7140  	 * Don't allow mmap() of inherited per-task counters. This would
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra          2010-05-18  7141  	 * create a performance issue due to all children writing to the
76369139ceb955 kernel/events/core.c  Frederic Weisbecker     2011-05-19  7142  	 * same rb.
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra          2010-05-18  7143  	 */
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra          2010-05-18  7144  	if (event->cpu == -1 && event->attr.inherit)
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra          2010-05-18  7145  		return -EINVAL;
4ec8363dfc1451 kernel/events/core.c  Vince Weaver            2011-06-01  7146  
43a21ea81a2400 kernel/perf_counter.c Peter Zijlstra          2009-03-25  7147  	if (!(vma->vm_flags & VM_SHARED))
37d81828385f8f kernel/perf_counter.c Paul Mackerras          2009-03-23  7148  		return -EINVAL;
26cb63ad11e040 kernel/events/core.c  Peter Zijlstra          2013-05-28  7149  
da97e18458fb42 kernel/events/core.c  Joel Fernandes (Google  2019-10-14  7150) 	ret = security_perf_event_read(event);
da97e18458fb42 kernel/events/core.c  Joel Fernandes (Google  2019-10-14  7151) 	if (ret)
da97e18458fb42 kernel/events/core.c  Joel Fernandes (Google  2019-10-14  7152) 		return ret;
26cb63ad11e040 kernel/events/core.c  Peter Zijlstra          2013-05-28  7153  
7b732a75047738 kernel/perf_counter.c Peter Zijlstra          2009-03-23  7154  	vma_size = vma->vm_end - vma->vm_start;
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra          2024-11-04  7155  	nr_pages = vma_size / PAGE_SIZE;
ac9721f3f54b27 kernel/perf_event.c   Peter Zijlstra          2010-05-27  7156  
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra          2024-11-04  7157  	if (nr_pages > INT_MAX)
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra          2024-11-04  7158  		return -ENOMEM;
9a0f05cb368885 kernel/events/core.c  Peter Zijlstra          2011-11-21  7159  
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra          2024-11-04  7160  	if (vma_size != PAGE_SIZE * nr_pages)
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra          2024-11-04  7161  		return -EINVAL;
45bfb2e50471ab kernel/events/core.c  Peter Zijlstra          2015-01-14  7162  
d23a6dbc0a7174 kernel/events/core.c  Peter Zijlstra          2025-08-12  7163  	scoped_guard (mutex, &event->mmap_mutex) {
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7164  		/*
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7165  		 * This relies on __pmu_detach_event() taking mmap_mutex after marking
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7166  		 * the event REVOKED. Either we observe the state, or __pmu_detach_event()
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7167  		 * will detach the rb created here.
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7168  		 */
d23a6dbc0a7174 kernel/events/core.c  Peter Zijlstra          2025-08-12  7169  		if (event->state <= PERF_EVENT_STATE_REVOKED)
d23a6dbc0a7174 kernel/events/core.c  Peter Zijlstra          2025-08-12  7170  			return -ENODEV;
37d81828385f8f kernel/perf_counter.c Paul Mackerras          2009-03-23  7171  
5d299897f1e360 kernel/events/core.c  Peter Zijlstra          2025-08-12  7172  		if (vma->vm_pgoff == 0)
5d299897f1e360 kernel/events/core.c  Peter Zijlstra          2025-08-12  7173  			ret = perf_mmap_rb(vma, event, nr_pages);
5d299897f1e360 kernel/events/core.c  Peter Zijlstra          2025-08-12  7174  		else
2aee3768239133 kernel/events/core.c  Peter Zijlstra          2025-08-12  7175  			ret = perf_mmap_aux(vma, event, nr_pages);
07091aade394f6 kernel/events/core.c  Thomas Gleixner         2025-08-02  7176  		if (ret)
07091aade394f6 kernel/events/core.c  Thomas Gleixner         2025-08-02  7177  			return ret;
07091aade394f6 kernel/events/core.c  Thomas Gleixner         2025-08-02  7178  
9bb5d40cd93c9d kernel/events/core.c  Peter Zijlstra          2013-06-04  7179  	/*
9bb5d40cd93c9d kernel/events/core.c  Peter Zijlstra          2013-06-04  7180  	 * Since pinned accounting is per vm we cannot allow fork() to copy our
9bb5d40cd93c9d kernel/events/core.c  Peter Zijlstra          2013-06-04  7181  	 * vma.
9bb5d40cd93c9d kernel/events/core.c  Peter Zijlstra          2013-06-04  7182  	 */
1c71222e5f2393 kernel/events/core.c  Suren Baghdasaryan      2023-01-26 @7183  	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
37d81828385f8f kernel/perf_counter.c Paul Mackerras          2009-03-23  7184  	vma->vm_ops = &perf_mmap_vmops;
7b732a75047738 kernel/perf_counter.c Peter Zijlstra          2009-03-23  7185  
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7186  	mapped = get_mapped(event, event_mapped);
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7187  	if (mapped)
da916e96e2dedc kernel/events/core.c  Peter Zijlstra          2024-10-25  7188  		mapped(event, vma->vm_mm);
1e0fb9ec679c92 kernel/events/core.c  Andy Lutomirski         2014-10-24  7189  
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7190  	/*
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7191  	 * Try to map it into the page table. On fail, invoke
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7192  	 * perf_mmap_close() to undo the above, as the callsite expects
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7193  	 * full cleanup in this case and therefore does not invoke
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7194  	 * vmops::close().
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7195  	 */
191759e5ea9f69 kernel/events/core.c  Peter Zijlstra          2025-08-12  7196  	ret = map_range(event->rb, vma);
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7197  	if (ret)
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7198  		perf_mmap_close(vma);
8f75f689bf8133 kernel/events/core.c  Haocheng Yu             2026-02-01  7199  	}
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner         2025-08-02  7200  
7b732a75047738 kernel/perf_counter.c Peter Zijlstra          2009-03-23  7201  	return ret;
37d81828385f8f kernel/perf_counter.c Paul Mackerras          2009-03-23  7202  }
37d81828385f8f kernel/perf_counter.c Paul Mackerras          2009-03-23  7203  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Re: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by Greg KH 6 days, 3 hours ago
On Sun, Feb 01, 2026 at 07:34:36PM +0800, Haocheng Yu wrote:
> The issue is caused by a race condition between mmap() and event
> teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
> map_range() after the mmap_mutex is released. If another thread
> closes the event or detaches the buffer during this window, the
> reference count of rb can drop to zero, leading to a UAF or
> refcount saturation when map_range() or subsequent logic attempts
> to use it.
> 
> Fix this by extending the scope of mmap_mutex to cover the entire
> setup process, including map_range(), ensuring the buffer remains
> valid until the mapping is complete.
> 
> Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
> ---
>  kernel/events/core.c | 42 +++++++++++++++++++++---------------------
>  1 file changed, 21 insertions(+), 21 deletions(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 2c35acc2722b..7c93f7d057cb 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>  			ret = perf_mmap_aux(vma, event, nr_pages);
>  		if (ret)
>  			return ret;
> -	}
> -
> -	/*
> -	 * Since pinned accounting is per vm we cannot allow fork() to copy our
> -	 * vma.
> -	 */
> -	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> -	vma->vm_ops = &perf_mmap_vmops;
>  
> -	mapped = get_mapped(event, event_mapped);
> -	if (mapped)
> -		mapped(event, vma->vm_mm);
> -
> -	/*
> -	 * Try to map it into the page table. On fail, invoke
> -	 * perf_mmap_close() to undo the above, as the callsite expects
> -	 * full cleanup in this case and therefore does not invoke
> -	 * vmops::close().
> -	 */
> -	ret = map_range(event->rb, vma);
> -	if (ret)
> -		perf_mmap_close(vma);
> +	/*
> +	 * Since pinned accounting is per vm we cannot allow fork() to copy our
> +	 * vma.
> +	 */
> +	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> +	vma->vm_ops = &perf_mmap_vmops;
> +
> +	mapped = get_mapped(event, event_mapped);
> +	if (mapped)
> +		mapped(event, vma->vm_mm);
> +
> +	/*
> +	 * Try to map it into the page table. On fail, invoke
> +	 * perf_mmap_close() to undo the above, as the callsite expects
> +	 * full cleanup in this case and therefore does not invoke
> +	 * vmops::close().
> +	 */
> +	ret = map_range(event->rb, vma);
> +	if (ret)
> +		perf_mmap_close(vma);
> +	}


This indentation looks very odd, are you sure it is correct?

thanks,

greg k-h
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by Haocheng Yu 5 days, 7 hours ago
Syzkaller reported a refcount_t: addition on 0; use-after-free warning
in perf_mmap.

The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.

Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..abefd1213582 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 			ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
 
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
 
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
 
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}
 
 	return ret;
 }

base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
-- 
2.51.0
Re: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by Peter Zijlstra 5 days, 1 hour ago
On Mon, Feb 02, 2026 at 03:44:35PM +0800, Haocheng Yu wrote:
> Syzkaller reported a refcount_t: addition on 0; use-after-free warning
> in perf_mmap.
> 
> The issue is caused by a race condition between mmap() and event
> teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
> map_range() after the mmap_mutex is released. If another thread
> closes the event or detaches the buffer during this window, the
> reference count of rb can drop to zero, leading to a UAF or
> refcount saturation when map_range() or subsequent logic attempts
> to use it.

So you're saying this is something like:

	Thread-1		Thread-2

	mmap(fd)
				close(fd) / ioctl(fd, IOC_SET_OUTPUT)


I don't think close() is possible, because mmap() should have a
reference on the struct file from fget(), no?
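
(For reference, the mmap() syscall path pins the file before it ever
calls ->mmap(); a condensed sketch of ksys_mmap_pgoff() in mm/util.c,
from memory, not the exact code:

	unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
				      unsigned long prot, unsigned long flags,
				      unsigned long fd, unsigned long pgoff)
	{
		struct file *file = NULL;
		unsigned long retval;

		if (!(flags & MAP_ANONYMOUS)) {
			file = fget(fd);	/* pins the perf event file */
			if (!file)
				return -EBADF;
		}

		retval = vm_mmap_pgoff(file, addr, len, prot, flags, pgoff);
		/* ... which reaches perf_mmap() via ->mmap() ... */

		if (file)
			fput(file);	/* dropped only after ->mmap() returns */
		return retval;
	}

so a concurrent close() cannot release the event while perf_mmap()
runs.)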

That leaves the ioctl(), let me go have a peek.
Re: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by Peter Zijlstra 5 days ago
On Mon, Feb 02, 2026 at 02:58:59PM +0100, Peter Zijlstra wrote:
> On Mon, Feb 02, 2026 at 03:44:35PM +0800, Haocheng Yu wrote:
> > Syzkaller reported a refcount_t: addition on 0; use-after-free warning
> > in perf_mmap.
> > 
> > The issue is caused by a race condition between mmap() and event
> > teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
> > map_range() after the mmap_mutex is released. If another thread
> > closes the event or detaches the buffer during this window, the
> > reference count of rb can drop to zero, leading to a UAF or
> > refcount saturation when map_range() or subsequent logic attempts
> > to use it.
> 
> So you're saying this is something like:
> 
> 	Thread-1		Thread-2
> 
> 	mmap(fd)
> 				close(fd) / ioctl(fd, IOC_SET_OUTPUT)
> 
> 
> I don't think close() is possible, because mmap() should have a
> reference on the struct file from fget(), no?
> 
> That leaves the ioctl(), let me go have a peek.

I'm not seeing it; once perf_mmap_rb() completes, we should have
event->mmap_count != 0, and thus the IOC_SET_OUTPUT will fail.
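
FWIW, that check sits at the top of perf_event_set_output(), under
mmap_mutex; a condensed sketch from kernel/events/core.c, from memory:

	static int perf_event_set_output(struct perf_event *event,
					 struct perf_event *output_event)
	{
		int ret = -EINVAL;
		...
		mutex_lock_double(&event->mmap_mutex, &output_event->mmap_mutex);
	set:
		/* Can't redirect output if we've got an active mmap() */
		if (atomic_read(&event->mmap_count))
			goto unlock;
		...
	unlock:
		mutex_unlock(&event->mmap_mutex);
		...
		return ret;
	}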

Please provide a better explanation.
Re: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by 余昊铖 (Haocheng Yu) 4 days, 23 hours ago
Hi Peter,

Thanks for the review. You are right, my previous explanation was
inaccurate. The actual race condition occurs between a failing
mmap() on one event and a concurrent mmap() on a second event
that shares the ring buffer (e.g., via output redirection).

The detailed scenario is as follows:
1. Thread A calls mmap(event_A). It allocates the ring buffer, sets
event_A->rb, and initializes refcount to 1. It then drops mmap_mutex.
2. Thread A calls map_range(). Suppose this fails. Thread A then
proceeds to the error path and calls perf_mmap_close().
3. Thread B concurrently calls mmap(event_B), where event_B is
configured to share event_A's buffer. Thread B acquires
event_A->mmap_mutex and sees the valid event_A->rb pointer.
4. The race triggers here: if Thread A's perf_mmap_close() logic
decrements the ring buffer's refcount to 0 (releasing it) but the pointer
event_A->rb is still visible to Thread B (or was read by Thread B before
it was cleared), Thread B triggers the "refcount_t: addition on 0" warning
when it attempts to increment the refcount in perf_mmap_rb().

The fix extends the scope of mmap_mutex to cover map_range() and its
error handling path. This ensures that event->rb is only exposed to
other threads once it is fully mapped, or is cleaned up inside the
lock if mapping fails.
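
In timeline form (restating the steps above):

	Thread A: mmap(event_A)                 Thread B: mmap(event_B)

	mutex_lock(&event_A->mmap_mutex)
	perf_mmap_rb(): event_A->rb = rb,
	                rb refcount = 1
	mutex_unlock(&event_A->mmap_mutex)
	                                        mutex_lock(&event_A->mmap_mutex)
	                                        reads event_A->rb
	map_range() fails
	perf_mmap_close(): rb refcount 1 -> 0,
	                   rb freed
	                                        refcount_inc(&rb->refcount)
	                                        -> "refcount_t: addition on 0"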

I have updated the commit message accordingly.

Thanks,
Haocheng
[PATCH v2] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by yuhaocheng035@gmail.com 4 days, 22 hours ago
From: Haocheng Yu <yuhaocheng035@gmail.com>

Syzkaller reported a refcount_t: addition on 0; use-after-free warning
in perf_mmap.

The issue is caused by a race condition between a failing mmap() setup
and a concurrent mmap() on a dependent event (e.g., using output
redirection).

In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
event->rb with the mmap_mutex held. The mutex is then released to
perform map_range().

If map_range() fails, perf_mmap_close() is called to clean up.
However, since the mutex was dropped, another thread attaching to
this event (via inherited events or output redirection) can acquire
the mutex, observe the valid event->rb pointer, and attempt to
increment its reference count. If the cleanup path has already
dropped the reference count to zero, this results in a
use-after-free or refcount saturation warning.

Fix this by extending the scope of mmap_mutex to cover the
map_range() call. This ensures that the ring buffer initialization
and mapping (or cleanup on failure) happen atomically,
preventing other threads from accessing a half-initialized or
dying ring buffer.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..abefd1213582 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 			ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
 
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
 
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
 
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}
 
 	return ret;
 }

base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
-- 
2.51.0
Re: [PATCH v2] perf/core: Fix refcount bug and potential UAF in perf_mmap
Posted by Peter Zijlstra 1 day, 6 hours ago
On Tue, Feb 03, 2026 at 12:20:56AM +0800, yuhaocheng035@gmail.com wrote:
> From: Haocheng Yu <yuhaocheng035@gmail.com>
> 
> Syzkaller reported a refcount_t: addition on 0; use-after-free warning
> in perf_mmap.
> 
> The issue is caused by a race condition between a failing mmap() setup
> and a concurrent mmap() on a dependent event (e.g., using output
> redirection).
> 
> In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
> event->rb with the mmap_mutex held. The mutex is then released to
> perform map_range().
> 
> If map_range() fails, perf_mmap_close() is called to clean up.
> However, since the mutex was dropped, another thread attaching to
> this event (via inherited events or output redirection) can acquire
> the mutex, observe the valid event->rb pointer, and attempt to
> increment its reference count. If the cleanup path has already
> dropped the reference count to zero, this results in a
> use-after-free or refcount saturation warning.
> 
> Fix this by extending the scope of mmap_mutex to cover the
> map_range() call. This ensures that the ring buffer initialization
> and mapping (or cleanup on failure) happen atomically,
> preventing other threads from accessing a half-initialized or
> dying ring buffer.

And you're sure this time? To me it feels a bit like talking to an LLM.

I suppose there is nothing wrong with having an LLM process syzkaller
output and even having it propose patches, but before you send it out,
an actual human should get involved and apply critical thinking skills.

Just throwing stuff at a maintainer and hoping he does the thinking for
you is not appreciated.

> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
> Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
> ---
>  kernel/events/core.c | 38 +++++++++++++++++++-------------------
>  1 file changed, 19 insertions(+), 19 deletions(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 2c35acc2722b..abefd1213582 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>  			ret = perf_mmap_aux(vma, event, nr_pages);
>  		if (ret)
>  			return ret;
> -	}
>  
> -	/*
> -	 * Since pinned accounting is per vm we cannot allow fork() to copy our
> -	 * vma.
> -	 */
> -	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> -	vma->vm_ops = &perf_mmap_vmops;
> +		/*
> +		 * Since pinned accounting is per vm we cannot allow fork() to copy our
> +		 * vma.
> +		 */
> +		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> +		vma->vm_ops = &perf_mmap_vmops;
>  
> -	mapped = get_mapped(event, event_mapped);
> -	if (mapped)
> -		mapped(event, vma->vm_mm);
> +		mapped = get_mapped(event, event_mapped);
> +		if (mapped)
> +			mapped(event, vma->vm_mm);
>  
> -	/*
> -	 * Try to map it into the page table. On fail, invoke
> -	 * perf_mmap_close() to undo the above, as the callsite expects
> -	 * full cleanup in this case and therefore does not invoke
> -	 * vmops::close().
> -	 */
> -	ret = map_range(event->rb, vma);
> -	if (ret)
> -		perf_mmap_close(vma);
> +		/*
> +		 * Try to map it into the page table. On fail, invoke
> +		 * perf_mmap_close() to undo the above, as the callsite expects
> +		 * full cleanup in this case and therefore does not invoke
> +		 * vmops::close().
> +		 */
> +		ret = map_range(event->rb, vma);
> +		if (ret)
> +			perf_mmap_close(vma);
> +	}
>  
>  	return ret;
>  }
> 
> base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
> -- 
> 2.51.0
>