For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.
***
Subject: [PATCH] Fix a lock imbalance bug in hugetlb_vmdelete_list()
Author: kartikey406@gmail.com
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
Fix a lock imbalance bug in hugetlb_vmdelete_list() that causes:
WARNING: bad unlock balance detected!
hugetlb_vmdelete_list+0x179/0x1c0 is trying to release lock
(&vma_lock->rw_sema) but there are no more locks to release!
The issue is a race condition between multiple threads operating on the
same VMA (see the simplified sketch of the helpers after this list):
1. Thread 1 calls hugetlb_vma_trylock_write() while vma->vm_private_data == NULL
2. The trylock reports success without acquiring anything (no lock exists for this VMA)
3. Thread 2 installs a lock structure: vma->vm_private_data = &new_lock
4. Thread 1 calls hugetlb_vma_unlock_write() and sees the now non-NULL vm_private_data
5. Thread 1 releases a lock it never acquired, triggering the unlock balance warning
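For reference, here is a condensed sketch of the two mm/hugetlb.c helpers
involved (simplified from the kernel sources, not verbatim; the real
helpers also handle private-mapping locks) showing the asymmetry the race
exploits:

	int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
	{
		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

		/* No lock structure yet: report success without acquiring. */
		if (!vma_lock)
			return 1;

		return down_write_trylock(&vma_lock->rw_sema);
	}

	void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
	{
		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

		/*
		 * Fresh re-read of vm_private_data: if another thread
		 * installed a lock after the trylock above, this releases
		 * a semaphore that was never taken.
		 */
		if (vma_lock)
			up_write(&vma_lock->rw_sema);
	}

The success-without-acquire path in the trylock, combined with the fresh
re-read in the unlock, is exactly the window the steps above describe.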
The fix is to record the VMA lock state at the moment the locking
decision is made, rather than re-reading it at unlock time. This closes
the time-of-check to time-of-use (TOCTOU) window.
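For illustration, the same snapshot idiom in self-contained user-space C
(hypothetical names, not kernel API; a real concurrent version would also
need an atomic read of the pointer):

	#include <pthread.h>
	#include <stdio.h>

	struct obj {
		pthread_mutex_t *lock;	/* NULL until a lock is installed */
	};

	static void process(struct obj *o)
	{
		pthread_mutex_t *snapshot = o->lock;	/* read exactly once */

		if (snapshot)
			pthread_mutex_lock(snapshot);

		/* ... work on o, possibly under the lock ... */

		if (snapshot)	/* same value as above: always balanced */
			pthread_mutex_unlock(snapshot);
	}

	int main(void)
	{
		struct obj o = { .lock = NULL };

		process(&o);	/* no lock installed: lock/unlock both skipped */
		printf("balanced\n");
		return 0;
	}

Because the unlock decision reuses the value read for the lock decision,
the pair cannot go out of balance no matter what other threads install in
the meantime.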
Reported-by: syzbot+62edf7e27b2e8f754525@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=62edf7e27b2e8f754525
Fixes: 8d9bfb2608cf ("hugetlb: add vma based lock for pmd sharing")
Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>
---
fs/hugetlbfs/inode.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 9e0625167517..ae3e07eacd37 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -475,15 +475,16 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		      zap_flags_t zap_flags)
 {
 	struct vm_area_struct *vma;
-
 	/*
 	 * end == 0 indicates that the entire range after start should be
 	 * unmapped. Note, end is exclusive, whereas the interval tree takes
 	 * an inclusive "last".
 	 */
 	vma_interval_tree_foreach(vma, root, start, end ? end - 1 : ULONG_MAX) {
+		struct hugetlb_vma_lock *vma_lock;
 		unsigned long v_start;
 		unsigned long v_end;
 
+		vma_lock = vma->vm_private_data;
 		if (!hugetlb_vma_trylock_write(vma))
 			continue;
@@ -498,7 +499,8 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		 * vmas. Therefore, lock is not held when calling
 		 * unmap_hugepage_range for private vmas.
 		 */
-		hugetlb_vma_unlock_write(vma);
+		if (vma_lock)
+			hugetlb_vma_unlock_write(vma);
 	}
 }
--
2.43.0