For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.
***
Subject: [PATCH] mm: Fix KASAN-induced recursive page faults in ptlock_ptr
Author: kartikey406@gmail.com
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
When pte_offset_map_rw_nolock() is called during signal frame setup
in the ret_from_fork path, KASAN instrumentation of ptlock_ptr() can
trigger recursive page faults: the shadow-memory access inserted by
the instrumentation can itself fault while the kernel is already
handling a page fault.
Under RT priority (SCHED_FIFO), this recursive faulting prevents the
task from yielding, so the RCU grace-period kthread cannot get CPU
time and RCU stalls are reported.
Disable KASAN instrumentation for both variants of ptlock_ptr() to
prevent this recursion.
Reported-by: syzbot+42836f91edd58eb82c6a@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=42836f91edd58eb82c6a
Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>
---
include/linux/mm.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f0d5be9dc736..e4c33731bdd6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3235,7 +3235,7 @@ void __init ptlock_cache_init(void);
bool ptlock_alloc(struct ptdesc *ptdesc);
void ptlock_free(struct ptdesc *ptdesc);
-static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
+static inline __no_sanitize_address spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
{
return ptdesc->ptl;
}
@@ -3253,7 +3253,7 @@ static inline void ptlock_free(struct ptdesc *ptdesc)
{
}
-static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
+static inline __no_sanitize_address spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
{
return &ptdesc->ptl;
}
--
2.43.0