From nobody Sun Feb 8 07:07:24 2026 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 108F42E5B0E; Mon, 27 Oct 2025 08:43:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761554627; cv=none; b=LWPFtl9q3cK01miy3Jh35B91wtC8GjgscOoj6uHx/wlLCmJDzsluF2Br2WEz9HTzjK0KpGSsVWhGTzI+CmVWWRGvbp0OAnvBuqehJK6Bb5uH8MfQY8UxMRoGSb8wq/8WNijlsD2CNmq0/qmdWswOg1hXTfMOAY7tgCJKMyYkbc8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761554627; c=relaxed/simple; bh=B6FY70g1YU4tpzw5guto+VK5McncnWIMenNCFYwqfSs=; h=Message-ID:From:To:Cc:Subject:References:MIME-Version: Content-Type:Date; b=K4NXFjbU48t0Zc3W2zJtxGKoqfgoFHS0PgZSY735Aslaq6edA7pVFxyKwP6oVtiCclDQ/z+JeLcOlQ21WLxdAJGXYUbqpJBgP18PpjsWYwv7v4Af7ZgoehMCMvc+iJxsainyFT1DUS+2EzdDhfpuCvkGXq4QGZI9veM7Q1isjng= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=iFqtRUO/; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=5XK95FhE; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="iFqtRUO/"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="5XK95FhE" Message-ID: <20251027083745.168468637@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=linutronix.de; s=2020; t=1761554623; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=fZFzuFe1kZwWOtUDwXLfQwu5MJAw6qMRRDC36ZwVfYw=; b=iFqtRUO/UF2NLRN/kVvPD3nQK4TtwRE+uDnO/hAO9U0W3NxlSPxD/v9M3GkAgfLhzRq8vt CCj5rekchm56nj5Mkf0unik0Wc1yln3p29i6fLF4pDN2PJ4nydOyxBIJ7aCWP4ANtZSG2m EjPshhXBnv1o0WyIzzi7xVc6doO/sprXROIpHSbWiTpe3hvvz1UIyNOTUbrLwKuI0+Coh9 kRGd2RAdEdrM7wGIoDnrf7H33CaumUKoK1SI1UJVHhvofEUCgYuVGouzrKyUl6KLHmEkfh zO6cBR1DBJ/kbO/dGTiHnJ+qTvaCmUiblx012RiwgwaWPfEfF3sWe7Txt2k2tg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1761554623; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=fZFzuFe1kZwWOtUDwXLfQwu5MJAw6qMRRDC36ZwVfYw=; b=5XK95FhES5auvsDsX3IerBIddRz/phA3Vz8BHEXdqt/pEUw83Sq0/r1f7Qo4op0y1URgn/ liQFWLDPUinvqECg== From: Thomas Gleixner To: LKML Cc: kernel test robot , Russell King , linux-arm-kernel@lists.infradead.org, Linus Torvalds , x86@kernel.org, Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , linuxppc-dev@lists.ozlabs.org, Paul Walmsley , Palmer Dabbelt , linux-riscv@lists.infradead.org, Heiko Carstens , Christian Borntraeger , Sven Schnelle , linux-s390@vger.kernel.org, Mathieu Desnoyers , Andrew Cooper , David Laight , Julia Lawall , Nicolas Palix , Peter Zijlstra , Darren Hart , Davidlohr Bueso , =?UTF-8?q?Andr=C3=A9=20Almeida?= , Alexander Viro , Christian Brauner , Jan Kara , linux-fsdevel@vger.kernel.org Subject: [patch V5 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() References: <20251027083700.573016505@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Date: Mon, 27 Oct 2025 09:43:42 +0100 (CET) Content-Transfer-Encoding: quoted-printable Content-Type: 
text/plain; charset="utf-8" When CONFIG_CPU_SPECTRE=3Dn then get_user() is missing the 8-byte ASM variant for no good reason. This prevents using get_user(u64) in generic code. Implement it as a sequence of two 4-byte reads with LE/BE awareness, and make the type of the intermediate variable read into (unsigned long or unsigned long long) dependent on the target type. The __long_type() macro and idea were lifted from PowerPC. Thanks to Christophe for pointing it out. Reported-by: kernel test robot Signed-off-by: Thomas Gleixner Cc: Russell King Cc: linux-arm-kernel@lists.infradead.org Closes: https://lore.kernel.org/oe-kbuild-all/202509120155.pFgwfeUD-lkp@intel.com/ Reviewed-by: Mathieu Desnoyers --- V2a: Solve the *ptr issue vs. unsigned long long - Russell/Christophe V2: New patch to fix the 0-day fallout --- arch/arm/include/asm/uaccess.h | 26 +++++++++++++++++++++++++- 1 file changed, 25 insertions(+), 1 deletion(-) --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -283,10 +283,17 @@ extern int __put_user_8(void *, unsigned __gu_err; \ }) =20 +/* + * This is a type: either unsigned long, if the argument fits into + * that type, or otherwise unsigned long long. 
+ */ +#define __long_type(x) \ + __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL)) + #define __get_user_err(x, ptr, err, __t) \ do { \ unsigned long __gu_addr =3D (unsigned long)(ptr); \ - unsigned long __gu_val; \ + __long_type(x) __gu_val; \ unsigned int __ua_flags; \ __chk_user_ptr(ptr); \ might_fault(); \ @@ -295,6 +302,7 @@ do { \ case 1: __get_user_asm_byte(__gu_val, __gu_addr, err, __t); break; \ case 2: __get_user_asm_half(__gu_val, __gu_addr, err, __t); break; \ case 4: __get_user_asm_word(__gu_val, __gu_addr, err, __t); break; \ + case 8: __get_user_asm_dword(__gu_val, __gu_addr, err, __t); break; \ default: (__gu_val) =3D __get_user_bad(); \ } \ uaccess_restore(__ua_flags); \ @@ -353,6 +361,22 @@ do { \ #define __get_user_asm_word(x, addr, err, __t) \ __get_user_asm(x, addr, err, "ldr" __t) =20 +#ifdef __ARMEB__ +#define __WORD0_OFFS 4 +#define __WORD1_OFFS 0 +#else +#define __WORD0_OFFS 0 +#define __WORD1_OFFS 4 +#endif + +#define __get_user_asm_dword(x, addr, err, __t) \ + ({ \ + unsigned long __w0, __w1; \ + __get_user_asm(__w0, addr + __WORD0_OFFS, err, "ldr" __t); \ + __get_user_asm(__w1, addr + __WORD1_OFFS, err, "ldr" __t); \ + (x) =3D ((u64)__w1 << 32) | (u64) __w0; \ +}) + #define __put_user_switch(x, ptr, __err, __fn) \ do { \ const __typeof__(*(ptr)) __user *__pu_ptr =3D (ptr); \ From nobody Sun Feb 8 07:07:24 2026 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D458B2F2619; Mon, 27 Oct 2025 08:43:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761554630; cv=none; 
b=NNwttL79/6yqNbf5NGGOQYr+k0BI4vqktxCscHutjwigEqGZHX7bYyruP+qeCkhl21+b3vXqddPF2FXe9AAbqiSMrVXLV4Eu50xWO/V11NDNSB4J2dYYgVMMHRosbwf4RNod1uhDQa2uRQwDvJRhlWpqCBEXMdJ7gx4SXX40W8M= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761554630; c=relaxed/simple; bh=87O4I+Gs+q+53VRSgJDF74yqnaDGggOQykXNLW0MPKg=; h=Message-ID:From:To:Cc:Subject:References:MIME-Version: Content-Type:Date; b=Th5d2DU61GL0rtwtVBGaxVCLvc5liW5k/VO0SVC85McjwBKQVHtPl/t03MPQ58nbaniBb3TJp7LMddLAfDmG0WwpjzbdR3ZSD3cooO998WAaUZ9BIogN2iHQR6jeOOt4KrsvW3Vy6k1EHfpbKnTplEMaFhMraw2VNcBYus8a7v4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=wrLk8rRx; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=omM8YhRj; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="wrLk8rRx"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="omM8YhRj" Message-ID: <20251027083745.231716098@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1761554625; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=riOmJf9FiDkmu6m5dlKP9M/8U5xAklSgvZrcN3Urpuk=; b=wrLk8rRx2bNeKAaJl6WBftKZnkNPcrFYqCMAl/zJy/hCrLw0x4DaPvmCayDcz0lHmx2HzE vzo0Zk3NxXNDZ6q2pC7iPNy2ntJvVw9OShkKgFAMiaNFfiCpU2DI8tiJAxdVklCw/H28hY IRDQziyxREpuwwjEJt9/NN2o3FuavrClAWQ8fitEzUtuARFM4JA7jaV0zYBxtI4BkNeuv/ 
hntSLtuLS8qszH8MmXznbM8THCI0y/kh9d1/xBWPLPQ5czT8xNYyc1imvf9Ien7pWVfG5I o9zMNLE/7GnlxahxsbHdlOZBYn4UkgRuQFELWLGxhsYKL9xg2HCU0f0wPWuRiQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1761554625; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=riOmJf9FiDkmu6m5dlKP9M/8U5xAklSgvZrcN3Urpuk=; b=omM8YhRjGxfqHtY5RmR87E1nFdkMmoyTAWgP/CpURp6JaA/IJWtXzYJdgCjHkZcEYt0qvv qNJRCwuFncCHLMDA== From: Thomas Gleixner To: LKML Cc: Linus Torvalds , kernel test robot , Russell King , linux-arm-kernel@lists.infradead.org, x86@kernel.org, Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , linuxppc-dev@lists.ozlabs.org, Paul Walmsley , Palmer Dabbelt , linux-riscv@lists.infradead.org, Heiko Carstens , Christian Borntraeger , Sven Schnelle , linux-s390@vger.kernel.org, Mathieu Desnoyers , Andrew Cooper , David Laight , Julia Lawall , Nicolas Palix , Peter Zijlstra , Darren Hart , Davidlohr Bueso , =?UTF-8?q?Andr=C3=A9=20Almeida?= , Alexander Viro , Christian Brauner , Jan Kara , linux-fsdevel@vger.kernel.org Subject: [patch V5 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user() References: <20251027083700.573016505@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Date: Mon, 27 Oct 2025 09:43:44 +0100 (CET) Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" ASM GOTO is miscompiled by GCC when it is used inside a auto cleanup scope: bool foo(u32 __user *p, u32 val) { scoped_guard(pagefault) unsafe_put_user(val, p, efault); return true; efault: return false; } e80: e8 00 00 00 00 call e85 e85: 65 48 8b 05 00 00 00 00 mov %gs:0x0(%rip),%rax e8d: 83 80 04 14 00 00 01 addl $0x1,0x1404(%rax) // pf_disable++ e94: 89 37 mov %esi,(%rdi) e96: 83 a8 04 14 00 00 01 subl $0x1,0x1404(%rax) // 
pf_disable-- e9d: b8 01 00 00 00 mov $0x1,%eax // success ea2: e9 00 00 00 00 jmp ea7 // ret ea7: 31 c0 xor %eax,%eax // fail ea9: e9 00 00 00 00 jmp eae // ret which is broken as it leaks the pagefault disable counter on failure. Clang at least fails the build. Linus suggested to add a local label into the macro scope and let that jump to the actual caller supplied error label. __label__ local_label; \ arch_unsafe_get_user(x, ptr, local_label); \ if (0) { \ local_label: \ goto label; \ That works for both GCC and clang. clang: c80: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)=09 c85: 65 48 8b 0c 25 00 00 00 00 mov %gs:0x0,%rcx c8e: ff 81 04 14 00 00 incl 0x1404(%rcx) // pf_disable++ c94: 31 c0 xor %eax,%eax // set retval to fal= se c96: 89 37 mov %esi,(%rdi) // write c98: b0 01 mov $0x1,%al // set retval to true c9a: ff 89 04 14 00 00 decl 0x1404(%rcx) // pf_disable-- ca0: 2e e9 00 00 00 00 cs jmp ca6 // ret The exception table entry points correctly to c9a GCC: f70: e8 00 00 00 00 call f75 f75: 65 48 8b 05 00 00 00 00 mov %gs:0x0(%rip),%rax f7d: 83 80 04 14 00 00 01 addl $0x1,0x1404(%rax) // pf_disable++ f84: 8b 17 mov (%rdi),%edx f86: 89 16 mov %edx,(%rsi) f88: 83 a8 04 14 00 00 01 subl $0x1,0x1404(%rax) // pf_disable-- f8f: b8 01 00 00 00 mov $0x1,%eax // success f94: e9 00 00 00 00 jmp f99 // ret f99: 83 a8 04 14 00 00 01 subl $0x1,0x1404(%rax) // pf_disable-- fa0: 31 c0 xor %eax,%eax // fail fa2: e9 00 00 00 00 jmp fa7 // ret The exception table entry points correctly to f99 So both compilers optimize out the extra goto and emit correct and efficient code. Provide a generic wrapper to do that to avoid modifying all the affected architecture specific implementation with that workaround. The only change required for architectures is to rename unsafe_*_user() to arch_unsafe_*_user(). That's done in subsequent changes. 
Suggested-by: Linus Torvalds Signed-off-by: Thomas Gleixner Reviewed-by: Christophe Leroy Reviewed-by: Mathieu Desnoyers --- include/linux/uaccess.h | 72 +++++++++++++++++++++++++++++++++++++++++++= ++--- 1 file changed, 68 insertions(+), 4 deletions(-) --- a/include/linux/uaccess.h +++ b/include/linux/uaccess.h @@ -518,7 +518,34 @@ long strncpy_from_user_nofault(char *dst long count); long strnlen_user_nofault(const void __user *unsafe_addr, long count); =20 -#ifndef __get_kernel_nofault +#ifdef arch_get_kernel_nofault +/* + * Wrap the architecture implementation so that @label can be outside of a + * cleanup() scope. A regular C goto works correctly, but ASM goto does + * not. Clang rejects such an attempt, but GCC silently emits buggy code. + */ +#define __get_kernel_nofault(dst, src, type, label) \ +do { \ + __label__ local_label; \ + arch_get_kernel_nofault(dst, src, type, local_label); \ + if (0) { \ + local_label: \ + goto label; \ + } \ +} while (0) + +#define __put_kernel_nofault(dst, src, type, label) \ +do { \ + __label__ local_label; \ + arch_put_kernel_nofault(dst, src, type, local_label); \ + if (0) { \ + local_label: \ + goto label; \ + } \ +} while (0) + +#elif !defined(__get_kernel_nofault) /* arch_get_kernel_nofault */ + #define __get_kernel_nofault(dst, src, type, label) \ do { \ type __user *p =3D (type __force __user *)(src); \ @@ -535,7 +562,8 @@ do { \ if (__put_user(data, p)) \ goto label; \ } while (0) -#endif + +#endif /* !__get_kernel_nofault */ =20 /** * get_kernel_nofault(): safely attempt to read from a location @@ -549,7 +577,42 @@ do { \ copy_from_kernel_nofault(&(val), __gk_ptr, sizeof(val));\ }) =20 -#ifndef user_access_begin +#ifdef user_access_begin + +#ifdef arch_unsafe_get_user +/* + * Wrap the architecture implementation so that @label can be outside of a + * cleanup() scope. A regular C goto works correctly, but ASM goto does + * not. Clang rejects such an attempt, but GCC silently emits buggy code. 
+ * + * Some architectures use internal local labels already, but this extra + * indirection here is harmless because the compiler optimizes it out + * completely in any case. This construct just ensures that the ASM GOTO + * target is always in the local scope. The C goto 'label' works correctly + * when leaving a cleanup() scope. + */ +#define unsafe_get_user(x, ptr, label) \ +do { \ + __label__ local_label; \ + arch_unsafe_get_user(x, ptr, local_label); \ + if (0) { \ + local_label: \ + goto label; \ + } \ +} while (0) + +#define unsafe_put_user(x, ptr, label) \ +do { \ + __label__ local_label; \ + arch_unsafe_put_user(x, ptr, local_label); \ + if (0) { \ + local_label: \ + goto label; \ + } \ +} while (0) +#endif /* arch_unsafe_get_user */ + +#else /* user_access_begin */ #define user_access_begin(ptr,len) access_ok(ptr, len) #define user_access_end() do { } while (0) #define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (= 0) @@ -559,7 +622,8 @@ do { \ #define unsafe_copy_from_user(d,s,l,e) unsafe_op_wrap(__copy_from_user(d,s= ,l),e) static inline unsigned long user_access_save(void) { return 0UL; } static inline void user_access_restore(unsigned long flags) { } -#endif +#endif /* !user_access_begin */ + #ifndef user_write_access_begin #define user_write_access_begin user_access_begin #define user_write_access_end user_access_end From nobody Sun Feb 8 07:07:24 2026 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7D54C258CF0; Mon, 27 Oct 2025 08:43:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761554631; cv=none; 
b=pXvrZWEoFlJen6IAaG+e2ahW9Bc4vJHwr+XeNUhTWDw4nhGRyOi1w/q3iTQWN6PZxrWOnzbki1FMGP76Oiq/SNZACYG92F3FcddK3WwbIuMua+VvaA4idqL/FXz3j7IsB4if7x/i8oe9sIyfJdCg65FaLszbeeE773tgMWnkVoU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761554631; c=relaxed/simple; bh=1JN45mC64c47dz9TU2Ot3j2U1wNiqKWBKgn8OQGE8ow=; h=Message-ID:From:To:Cc:Subject:References:MIME-Version: Content-Type:Date; b=QK1ZQoVL3Arr0SRnthICbhoa0krQaM0o7LEQhNzmEg1nuZDLqq4PRpJMxWfPxaF7lXrbJ1ZpTuJMul2Z/1arkF/LRt8YujdU0FiUh1hm4M1MFbBpjWF9Q6Aw7ujlioJZ52UdHtaB6cZhuCQ0TORpd/HwqkABNgzonBbSj6CYOJE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=f7URCY0T; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=6ORfe7vG; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="f7URCY0T"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="6ORfe7vG" Message-ID: <20251027083745.294359925@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1761554627; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=EKzyB0lrI1vv7+y80GeaggsaapcXMx0ewLMg7+vmnx4=; b=f7URCY0TR9ngNx+ZE1kYF399YPHlCW2R464N5D1nINUxLVUiedaieiKbYdrtnyP5kNjN0S 5KssZ03JP1lyw2KPiALYSsZa/q6vKSn+CzHBkjnk8xqYfH4muQNFqw+T+qRjnc/0hhq9Bo wRHycNe6JmU69bMgYxKgtgYHCEMMDBGovCmj7rIKGpZHQyN3jBty20W+u10BfNO7bpjM43 
xrTn92A+2N2yyo9uiLanEt0BYKc2uAacE6bah1ytK26+xwgY33tfuL/iEWw3gufhZYW/UN O1ArHPyolf7kgV5C7h1MKCVG4+OuP21b9CnAxz7kJXdHu5zqs+gl9xYoYdanFg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1761554627; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=EKzyB0lrI1vv7+y80GeaggsaapcXMx0ewLMg7+vmnx4=; b=6ORfe7vGObPfrhxyRUXWtMNjB8eOm7ffGS2wmPG+hUOUXR0gNG822D5Mj2JN/tv/It+Bhy 5UL5oCN2ZCcSYcDA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, kernel test robot , Russell King , linux-arm-kernel@lists.infradead.org, Linus Torvalds , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , linuxppc-dev@lists.ozlabs.org, Paul Walmsley , Palmer Dabbelt , linux-riscv@lists.infradead.org, Heiko Carstens , Christian Borntraeger , Sven Schnelle , linux-s390@vger.kernel.org, Mathieu Desnoyers , Andrew Cooper , David Laight , Julia Lawall , Nicolas Palix , Peter Zijlstra , Darren Hart , Davidlohr Bueso , =?UTF-8?q?Andr=C3=A9=20Almeida?= , Alexander Viro , Christian Brauner , Jan Kara , linux-fsdevel@vger.kernel.org Subject: [patch V5 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO References: <20251027083700.573016505@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Date: Mon, 27 Oct 2025 09:43:46 +0100 (CET) Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" ASM GOTO is miscompiled by GCC when it is used inside a auto cleanup scope: bool foo(u32 __user *p, u32 val) { scoped_guard(pagefault) unsafe_put_user(val, p, efault); return true; efault: return false; } It ends up leaking the pagefault disable counter in the fault path. clang at least fails the build. 
Rename unsafe_*_user() to arch_unsafe_*_user() which makes the generic uaccess header wrap it with a local label that makes both compilers emit correct code. Same for the kernel_nofault() variants. Signed-off-by: Thomas Gleixner Cc: x86@kernel.org Reviewed-by: Mathieu Desnoyers --- arch/x86/include/asm/uaccess.h | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) --- a/arch/x86/include/asm/uaccess.h +++ b/arch/x86/include/asm/uaccess.h @@ -528,18 +528,18 @@ static __must_check __always_inline bool #define user_access_save() smap_save() #define user_access_restore(x) smap_restore(x) =20 -#define unsafe_put_user(x, ptr, label) \ +#define arch_unsafe_put_user(x, ptr, label) \ __put_user_size((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), label) =20 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT -#define unsafe_get_user(x, ptr, err_label) \ +#define arch_unsafe_get_user(x, ptr, err_label) \ do { \ __inttype(*(ptr)) __gu_val; \ __get_user_size(__gu_val, (ptr), sizeof(*(ptr)), err_label); \ (x) =3D (__force __typeof__(*(ptr)))__gu_val; \ } while (0) #else // !CONFIG_CC_HAS_ASM_GOTO_OUTPUT -#define unsafe_get_user(x, ptr, err_label) \ +#define arch_unsafe_get_user(x, ptr, err_label) \ do { \ int __gu_err; \ __inttype(*(ptr)) __gu_val; \ @@ -618,11 +618,11 @@ do { \ } while (0) =20 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT -#define __get_kernel_nofault(dst, src, type, err_label) \ +#define arch_get_kernel_nofault(dst, src, type, err_label) \ __get_user_size(*((type *)(dst)), (__force type __user *)(src), \ sizeof(type), err_label) #else // !CONFIG_CC_HAS_ASM_GOTO_OUTPUT -#define __get_kernel_nofault(dst, src, type, err_label) \ +#define arch_get_kernel_nofault(dst, src, type, err_label) \ do { \ int __kr_err; \ \ @@ -633,7 +633,7 @@ do { \ } while (0) #endif // CONFIG_CC_HAS_ASM_GOTO_OUTPUT =20 -#define __put_kernel_nofault(dst, src, type, err_label) \ +#define arch_put_kernel_nofault(dst, src, type, err_label) \ __put_user_size(*((type *)(src)), (__force type __user 
*)(dst), \ sizeof(type), err_label) From nobody Sun Feb 8 07:07:24 2026 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2CD5D2F1FE6; Mon, 27 Oct 2025 08:43:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761554632; cv=none; b=CB9iCDNLmDCOWdEo4VENf1Fe3uHAwktJGAWuttUku49ibm8pSeK/loNpxXKHgsseRMyMhqSS5xxs+Nu9Tgq7SWYu9sQJcvwvPX4q330Nqsa5v0XE/gkqszHCVTRF29G/eS/PAGv1rRD3LqY6PRRGFXcdFBtS+W9AOtaLcuPkubQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761554632; c=relaxed/simple; bh=kv2agbdsqIbzMLDcTPxxSBLHwg3H9gA4/g6IZ/c2ULk=; h=Message-ID:From:To:Cc:Subject:References:MIME-Version: Content-Type:Date; b=CWDBlntoyAJ8M0snCfKdCbnzxlRxQ48WM+CvnF/73OgRNo5pJUbWgqfIDwztp9n0qCwFTacA5ak1zc4/B4rbAqL4DoPPyhRhxhJzusFbWQRoHr1ESXfFG7Xnrh0k1LkS5Nun3gOvswu84+9WJ9n/6qH3MJFUHxnY1bcLBL6Fzts= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=Shun7IjV; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=CjAAJqTB; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="Shun7IjV"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="CjAAJqTB" Message-ID: <20251027083745.356628509@linutronix.de> DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1761554629; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=DNQHcterXWXQuFWR5KPGbkzD3Sh/vJk9lLVEwO1xPSA=; b=Shun7IjVfYUPlEkByDviiP8FOOaPtcom7r3qT0QBgoYlkb5vVwr+bQ4W4TGvFBSI3ZaZZR +20dugYVw1XjPXkDP5YySKtN01v9L3UhhgPMAozbd9iTr5u/pfXPwlvPDEoj2eN9Rj3tdS FgNcCCZMo7m/ESx0d2/P7CcfOtUZByEi1Q5vdVk6Rm9cKUF0/gPodfvQYfARbcL06zSFj8 EJcAJO5BlCn5G3xT2nW85AxFwE6WyRsbFUVB1x4KZYyo7Y3JTTFgskCCG72B3F2UERG1en UpXuHrFLcxh5qeAKwe+AtutqsB/hJ93fVNcggB2CsGQvpnMSloAE9dGSvojtdw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1761554629; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=DNQHcterXWXQuFWR5KPGbkzD3Sh/vJk9lLVEwO1xPSA=; b=CjAAJqTBR4AiPBtCOu7ZrMjj2ByjpD9kOdxzU0+8Jcl/6N+Mh+z+AJN5HxvT1ZXUhXLEd0 lCdJ913xubPYmqBA== From: Thomas Gleixner To: LKML Cc: Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , linuxppc-dev@lists.ozlabs.org, kernel test robot , Russell King , linux-arm-kernel@lists.infradead.org, Linus Torvalds , x86@kernel.org, Paul Walmsley , Palmer Dabbelt , linux-riscv@lists.infradead.org, Heiko Carstens , Christian Borntraeger , Sven Schnelle , linux-s390@vger.kernel.org, Mathieu Desnoyers , Andrew Cooper , David Laight , Julia Lawall , Nicolas Palix , Peter Zijlstra , Darren Hart , Davidlohr Bueso , =?UTF-8?q?Andr=C3=A9=20Almeida?= , Alexander Viro , Christian Brauner , Jan Kara , linux-fsdevel@vger.kernel.org Subject: [patch V5 04/12] powerpc/uaccess: Use unsafe wrappers for ASM GOTO References: <20251027083700.573016505@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Date: Mon, 27 Oct 2025 09:43:48 +0100 (CET) Content-Transfer-Encoding: 
quoted-printable Content-Type: text/plain; charset="utf-8" ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope: bool foo(u32 __user *p, u32 val) { scoped_guard(pagefault) unsafe_put_user(val, p, efault); return true; efault: return false; } It ends up leaking the pagefault disable counter in the fault path. clang at least fails the build. Rename unsafe_*_user() to arch_unsafe_*_user() which makes the generic uaccess header wrap it with a local label that makes both compilers emit correct code. Same for the kernel_nofault() variants. Signed-off-by: Thomas Gleixner Cc: Madhavan Srinivasan Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: linuxppc-dev@lists.ozlabs.org Reviewed-by: Christophe Leroy Reviewed-by: Mathieu Desnoyers --- arch/powerpc/include/asm/uaccess.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) --- a/arch/powerpc/include/asm/uaccess.h +++ b/arch/powerpc/include/asm/uaccess.h @@ -451,7 +451,7 @@ user_write_access_begin(const void __use #define user_write_access_begin user_write_access_begin #define user_write_access_end prevent_current_write_to_user =20 -#define unsafe_get_user(x, p, e) do { \ +#define arch_unsafe_get_user(x, p, e) do { \ __long_type(*(p)) __gu_val; \ __typeof__(*(p)) __user *__gu_addr =3D (p); \ \ @@ -459,7 +459,7 @@ user_write_access_begin(const void __use (x) =3D (__typeof__(*(p)))__gu_val; \ } while (0) =20 -#define unsafe_put_user(x, p, e) \ +#define arch_unsafe_put_user(x, p, e) \ __put_user_size_goto((__typeof__(*(p)))(x), (p), sizeof(*(p)), e) =20 #define unsafe_copy_from_user(d, s, l, e) \ @@ -504,11 +504,11 @@ do { \ unsafe_put_user(*(u8*)(_src + _i), (u8 __user *)(_dst + _i), e); \ } while (0) =20 -#define __get_kernel_nofault(dst, src, type, err_label) \ +#define arch_get_kernel_nofault(dst, src, type, err_label) \ __get_user_size_goto(*((type *)(dst)), \ (__force type __user *)(src), sizeof(type), err_label) =20 -#define __put_kernel_nofault(dst, src, type, err_label) \ 
+#define arch_put_kernel_nofault(dst, src, type, err_label)		\
+	__put_user_size_goto(*((type *)(src)),				\
+		(__force type __user *)(dst), sizeof(type), err_label)

From nobody Sun Feb 8 07:07:24 2026
Message-ID: <20251027083745.419351819@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Paul Walmsley, Palmer Dabbelt, linux-riscv@lists.infradead.org,
    kernel test robot, Russell King, linux-arm-kernel@lists.infradead.org,
    Linus Torvalds, x86@kernel.org, Madhavan Srinivasan, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, linuxppc-dev@lists.ozlabs.org,
    Heiko Carstens, Christian Borntraeger, Sven Schnelle,
    linux-s390@vger.kernel.org, Mathieu Desnoyers, Andrew Cooper,
    David Laight, Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
    Davidlohr Bueso, André Almeida, Alexander Viro, Christian Brauner,
    Jan Kara, linux-fsdevel@vger.kernel.org
Subject: [patch V5 05/12] riscv/uaccess: Use unsafe wrappers for ASM GOTO
References: <20251027083700.573016505@linutronix.de>
Date: Mon, 27 Oct 2025 09:43:50 +0100 (CET)

ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup
scope:

  bool foo(u32 __user *p, u32 val)
  {
	scoped_guard(pagefault)
		unsafe_put_user(val, p, efault);
	return true;
  efault:
	return false;
  }

It ends up leaking the pagefault disable counter in the fault path. clang
at least fails the build.

Rename unsafe_*_user() to arch_unsafe_*_user(), which makes the generic
uaccess header wrap it with a local label so that both compilers emit
correct code. Same for the kernel_nofault() variants.

Signed-off-by: Thomas Gleixner
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: linux-riscv@lists.infradead.org
Reviewed-by: Mathieu Desnoyers
---
 arch/riscv/include/asm/uaccess.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -437,10 +437,10 @@ unsigned long __must_check clear_user(vo
 		__clear_user(untagged_addr(to), n) : n;
 }
 
-#define __get_kernel_nofault(dst, src, type, err_label)			\
+#define arch_get_kernel_nofault(dst, src, type, err_label)		\
 	__get_user_nocheck(*((type *)(dst)), (__force __user type *)(src), err_label)
 
-#define __put_kernel_nofault(dst, src, type, err_label)			\
+#define arch_put_kernel_nofault(dst, src, type, err_label)		\
 	__put_user_nocheck(*((type *)(src)), (__force __user type *)(dst), err_label)
 
 static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
@@ -460,10 +460,10 @@ static inline void user_access_restore(u
  * We want the unsafe accessors to always be inlined and use
  * the error labels - thus the macro games.
 */
-#define unsafe_put_user(x, ptr, label)					\
+#define arch_unsafe_put_user(x, ptr, label)				\
 	__put_user_nocheck(x, (ptr), label)
 
-#define unsafe_get_user(x, ptr, label)	do {				\
+#define arch_unsafe_get_user(x, ptr, label) do {			\
 	__inttype(*(ptr)) __gu_val;					\
 	__get_user_nocheck(__gu_val, (ptr), label);			\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\

From nobody Sun Feb 8 07:07:24 2026
Message-ID: <20251027083745.483079889@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Heiko Carstens, Christian Borntraeger, Sven Schnelle,
    linux-s390@vger.kernel.org, kernel test robot, Russell King,
    linux-arm-kernel@lists.infradead.org, Linus Torvalds, x86@kernel.org,
    Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
    Christophe Leroy, linuxppc-dev@lists.ozlabs.org, Paul Walmsley,
    Palmer Dabbelt, linux-riscv@lists.infradead.org, Mathieu Desnoyers,
    Andrew Cooper, David Laight, Julia Lawall, Nicolas Palix,
    Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
    Alexander Viro, Christian Brauner, Jan Kara,
    linux-fsdevel@vger.kernel.org
Subject: [patch V5 06/12] s390/uaccess: Use unsafe wrappers for ASM GOTO
References: <20251027083700.573016505@linutronix.de>
Date: Mon, 27 Oct 2025 09:43:52 +0100 (CET)

ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup
scope:

  bool foo(u32 __user *p, u32 val)
  {
	scoped_guard(pagefault)
		unsafe_put_user(val, p, efault);
	return true;
  efault:
	return false;
  }

It ends up leaking the pagefault disable counter in the fault path. clang
at least fails the build.

s390 is not affected for unsafe_*_user() as it already uses its own local
label, but __get/put_kernel_nofault() lack that. Rename them to
arch_*_kernel_nofault(), which makes the generic uaccess header wrap them
with a local label so that both compilers emit correct code.

Signed-off-by: Thomas Gleixner
Acked-by: Heiko Carstens
Cc: Christian Borntraeger
Cc: Sven Schnelle
Cc: linux-s390@vger.kernel.org
Reviewed-by: Mathieu Desnoyers
---
 arch/s390/include/asm/uaccess.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -468,8 +468,8 @@ do {									\
 
 #endif /* CONFIG_CC_HAS_ASM_GOTO_OUTPUT && CONFIG_CC_HAS_ASM_AOR_FORMAT_FLAGS */
 
-#define __get_kernel_nofault __mvc_kernel_nofault
-#define __put_kernel_nofault __mvc_kernel_nofault
+#define arch_get_kernel_nofault __mvc_kernel_nofault
+#define arch_put_kernel_nofault __mvc_kernel_nofault
 
 void __cmpxchg_user_key_called_with_bad_pointer(void);

From nobody Sun Feb 8 07:07:24 2026
Message-ID: <20251027083745.546420421@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Christophe Leroy, Mathieu Desnoyers, Andrew Cooper, Linus Torvalds,
    David Laight, kernel test robot, Russell King,
    linux-arm-kernel@lists.infradead.org, x86@kernel.org,
    Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
    linuxppc-dev@lists.ozlabs.org, Paul Walmsley, Palmer Dabbelt,
    linux-riscv@lists.infradead.org, Heiko Carstens,
    Christian Borntraeger, Sven Schnelle, linux-s390@vger.kernel.org,
    Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
    Davidlohr Bueso, André Almeida, Alexander Viro, Christian Brauner,
    Jan Kara, linux-fsdevel@vger.kernel.org
Subject: [patch V5 07/12] uaccess: Provide scoped user access regions
References: <20251027083700.573016505@linutronix.de>
Date: Mon, 27 Oct 2025 09:43:55 +0100 (CET)

User space access regions are tedious and require similar code patterns
all over the place:

	if (!user_read_access_begin(from, sizeof(*from)))
		return -EFAULT;
	unsafe_get_user(val, from, Efault);
	user_read_access_end();
	return 0;
Efault:
	user_read_access_end();
	return -EFAULT;

This got worse with the recent addition of masked user access, which
optimizes the speculation prevention:

	if (can_do_masked_user_access())
		from = masked_user_read_access_begin((from));
	else if (!user_read_access_begin(from, sizeof(*from)))
		return -EFAULT;
	unsafe_get_user(val, from, Efault);
	user_read_access_end();
	return 0;
Efault:
	user_read_access_end();
	return -EFAULT;

There have been issues with using the wrong user_*_access_end() variant
in the error path and other typical copy & paste problems, e.g. using
the wrong fault label in the user accessor, which ends up invoking the
wrong access end variant.

These patterns beg for scopes with automatic cleanup. The resulting
outcome is:

	scoped_user_read_access(from, Efault)
		unsafe_get_user(val, from, Efault);
	return 0;
Efault:
	return -EFAULT;

The scope guarantees that the proper cleanup for the access mode is
invoked both in the success and the failure (fault) path.

The scoped_user_$MODE_access() macros are implemented as self-terminating
nested for() loops. Thanks to Andrew Cooper for pointing me at them. The
scope can therefore be left with 'break', 'goto' and 'return'. Even
'continue' "works" due to the self-termination mechanism.

Both GCC and clang optimize the convoluted macro maze out, and with clang
the above results in:

 b80:	f3 0f 1e fa			endbr64
 b84:	48 b8 ef cd ab 89 67 45 23 01	movabs $0x123456789abcdef,%rax
 b8e:	48 39 c7			cmp    %rax,%rdi
 b91:	48 0f 47 f8			cmova  %rax,%rdi
 b95:	90				nop
 b96:	90				nop
 b97:	90				nop
 b98:	31 c9				xor    %ecx,%ecx
 b9a:	8b 07				mov    (%rdi),%eax
 b9c:	89 06				mov    %eax,(%rsi)
 b9e:	85 c9				test   %ecx,%ecx
 ba0:	0f 94 c0			sete   %al
 ba3:	90				nop
 ba4:	90				nop
 ba5:	90				nop
 ba6:	c3				ret

which is as compact as it gets. The NOPs are placeholders for STAC/CLAC.

GCC emits the fault path separately:

 bf0:	f3 0f 1e fa			endbr64
 bf4:	48 b8 ef cd ab 89 67 45 23 01	movabs $0x123456789abcdef,%rax
 bfe:	48 39 c7			cmp    %rax,%rdi
 c01:	48 0f 47 f8			cmova  %rax,%rdi
 c05:	90				nop
 c06:	90				nop
 c07:	90				nop
 c08:	31 d2				xor    %edx,%edx
 c0a:	8b 07				mov    (%rdi),%eax
 c0c:	89 06				mov    %eax,(%rsi)
 c0e:	85 d2				test   %edx,%edx
 c10:	75 09				jne    c1b
 c12:	90				nop
 c13:	90				nop
 c14:	90				nop
 c15:	b8 01 00 00 00			mov    $0x1,%eax
 c1a:	c3				ret
 c1b:	90				nop
 c1c:	90				nop
 c1d:	90				nop
 c1e:	31 c0				xor    %eax,%eax
 c20:	c3				ret

The fault labels for the scoped*() macros and the fault labels for the
actual user space accessors can be shared and must be placed outside of
the scope.

If masked user access is enabled on an architecture, then the pointer
handed in to scoped_user_$MODE_access() can be modified to point to a
guaranteed faulting user address. This modification is only scope local,
as the pointer is aliased inside the scope. When the scope is left, the
alias is no longer in effect. IOW, the original pointer value is
preserved so it can be used e.g. for fixup or diagnostic purposes in the
fault path.
Signed-off-by: Thomas Gleixner
Cc: Christophe Leroy
Cc: Mathieu Desnoyers
Cc: Andrew Cooper
Cc: Linus Torvalds
Cc: David Laight
Reviewed-by: Mathieu Desnoyers
---
V4: Remove the _masked_ naming as it's actually confusing - David
    Remove underscores and make _tmpptr void - David
    Add comment about access size and range - David
    Shorten local variables and remove a few unneeded brackets - Mathieu
V3: Make it a nested for() loop
    Get rid of the code in macro parameters - Linus
    Provide sized variants - Mathieu
V2: Remove the shady wrappers around the opening and use scopes with
    automatic cleanup
---
 include/linux/uaccess.h | 192 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 192 insertions(+)

--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_UACCESS_H__
 #define __LINUX_UACCESS_H__
 
+#include <linux/cleanup.h>
 #include
 #include
 #include
@@ -35,9 +36,17 @@
 
 #ifdef masked_user_access_begin
  #define can_do_masked_user_access()	1
+# ifndef masked_user_write_access_begin
+#  define masked_user_write_access_begin masked_user_access_begin
+# endif
+# ifndef masked_user_read_access_begin
+#  define masked_user_read_access_begin	masked_user_access_begin
+# endif
 #else
  #define can_do_masked_user_access()	0
  #define masked_user_access_begin(src)		NULL
+ #define masked_user_read_access_begin(src)	NULL
+ #define masked_user_write_access_begin(src)	NULL
  #define mask_user_address(src)		(src)
 #endif
 
@@ -633,6 +642,189 @@ static inline void user_access_restore(u
 #define user_read_access_end	user_access_end
 #endif
 
+/* Define RW variant so the below _mode macro expansion works */
+#define masked_user_rw_access_begin(u)	masked_user_access_begin(u)
+#define user_rw_access_begin(u, s)	user_access_begin(u, s)
+#define user_rw_access_end()		user_access_end()
+
+/* Scoped user access */
+#define USER_ACCESS_GUARD(_mode)					\
+static __always_inline void __user *					\
+class_user_##_mode##_begin(void __user *ptr)				\
+{									\
+	return ptr;							\
+}									\
+									\
+static __always_inline void						\
+class_user_##_mode##_end(void __user *ptr)				\
+{									\
+	user_##_mode##_access_end();					\
+}									\
+									\
+DEFINE_CLASS(user_ ##_mode## _access, void __user *,			\
+	     class_user_##_mode##_end(_T),				\
+	     class_user_##_mode##_begin(ptr), void __user *ptr)		\
+									\
+static __always_inline class_user_##_mode##_access_t			\
+class_user_##_mode##_access_ptr(void __user *scope)			\
+{									\
+	return scope;							\
+}
+
+USER_ACCESS_GUARD(read)
+USER_ACCESS_GUARD(write)
+USER_ACCESS_GUARD(rw)
+#undef USER_ACCESS_GUARD
+
+/**
+ * __scoped_user_access_begin - Start a scoped user access
+ * @mode:	The mode of the access class (read, write, rw)
+ * @uptr:	The pointer to access user space memory
+ * @size:	Size of the access
+ * @elbl:	Error label to goto when the access region is rejected
+ *
+ * Internal helper for __scoped_user_access(). Don't use directly.
+ */
+#define __scoped_user_access_begin(mode, uptr, size, elbl)		\
+({									\
+	typeof(uptr) __retptr;						\
+									\
+	if (can_do_masked_user_access()) {				\
+		__retptr = masked_user_##mode##_access_begin(uptr);	\
+	} else {							\
+		__retptr = uptr;					\
+		if (!user_##mode##_access_begin(uptr, size))		\
+			goto elbl;					\
+	}								\
+	__retptr;							\
+})
+
+/**
+ * __scoped_user_access - Open a scope for user access
+ * @mode:	The mode of the access class (read, write, rw)
+ * @uptr:	The pointer to access user space memory
+ * @size:	Size of the access
+ * @elbl:	Error label to goto when the access region is rejected. It
+ *		must be placed outside the scope
+ *
+ * If the user access function inside the scope requires a fault label, it
+ * can use @elbl or a different label outside the scope, which requires
+ * that user access which is implemented with ASM GOTO has been properly
+ * wrapped. See unsafe_get_user() for reference.
+ *
+ *	scoped_user_rw_access(ptr, efault) {
+ *		unsafe_get_user(rval, &ptr->rval, efault);
+ *		unsafe_put_user(wval, &ptr->wval, efault);
+ *	}
+ *	return 0;
+ * efault:
+ *	return -EFAULT;
+ *
+ * The scope is internally implemented as an auto-terminating nested for()
+ * loop, which can be left with 'return', 'break' and 'goto' at any
+ * point.
+ *
+ * When the scope is left, user_##@_mode##_access_end() is automatically
+ * invoked.
+ *
+ * When the architecture supports masked user access and the access region
+ * which is determined by @uptr and @size is not a valid user space
+ * address, i.e. < TASK_SIZE, the scope sets the pointer to a faulting user
+ * space address and does not terminate early. This optimizes for the good
+ * case and lets the performance uncritical bad case go through the fault.
+ *
+ * The eventual modification of the pointer is limited to the scope.
+ * Outside of the scope the original pointer value is unmodified, so that
+ * the original pointer value is available for diagnostic purposes in an
+ * out of scope fault path.
+ *
+ * Nesting scoped user access into a user access scope is invalid and fails
+ * the build. Nesting into other guards, e.g. pagefault, is safe.
+ *
+ * The masked variant does not check the size of the access and relies on a
+ * mapping hole (e.g. guard page) to catch an out of range pointer. The
+ * first access to user memory inside the scope has to be within
+ * @uptr ... @uptr + PAGE_SIZE - 1
+ *
+ * Don't use directly. Use scoped_user_$MODE_access() instead.
+ */
+#define __scoped_user_access(mode, uptr, size, elbl)			\
+for (bool done = false; !done; done = true)				\
+	for (void __user *_tmpptr = __scoped_user_access_begin(mode, uptr, size, elbl); \
+	     !done; done = true)					\
+		for (CLASS(user_##mode##_access, scope)(_tmpptr); !done; done = true) \
+			/* Force modified pointer usage within the scope */ \
+			for (const typeof(uptr) uptr = _tmpptr; !done; done = true)
+
+/**
+ * scoped_user_read_access_size - Start a scoped user read access with given size
+ * @usrc:	Pointer to the user space address to read from
+ * @size:	Size of the access starting from @usrc
+ * @elbl:	Error label to goto when the access region is rejected
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_read_access_size(usrc, size, elbl)			\
+	__scoped_user_access(read, usrc, size, elbl)
+
+/**
+ * scoped_user_read_access - Start a scoped user read access
+ * @usrc:	Pointer to the user space address to read from
+ * @elbl:	Error label to goto when the access region is rejected
+ *
+ * The size of the access starting from @usrc is determined via sizeof(*@usrc).
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_read_access(usrc, elbl)				\
+	scoped_user_read_access_size(usrc, sizeof(*(usrc)), elbl)
+
+/**
+ * scoped_user_write_access_size - Start a scoped user write access with given size
+ * @udst:	Pointer to the user space address to write to
+ * @size:	Size of the access starting from @udst
+ * @elbl:	Error label to goto when the access region is rejected
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_write_access_size(udst, size, elbl)			\
+	__scoped_user_access(write, udst, size, elbl)
+
+/**
+ * scoped_user_write_access - Start a scoped user write access
+ * @udst:	Pointer to the user space address to write to
+ * @elbl:	Error label to goto when the access region is rejected
+ *
+ * The size of the access starting from @udst is determined via sizeof(*@udst).
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_write_access(udst, elbl)				\
+	scoped_user_write_access_size(udst, sizeof(*(udst)), elbl)
+
+/**
+ * scoped_user_rw_access_size - Start a scoped user read/write access with given size
+ * @uptr:	Pointer to the user space address to read from and write to
+ * @size:	Size of the access starting from @uptr
+ * @elbl:	Error label to goto when the access region is rejected
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_rw_access_size(uptr, size, elbl)			\
+	__scoped_user_access(rw, uptr, size, elbl)
+
+/**
+ * scoped_user_rw_access - Start a scoped user read/write access
+ * @uptr:	Pointer to the user space address to read from and write to
+ * @elbl:	Error label to goto when the access region is rejected
+ *
+ * The size of the access starting from @uptr is determined via sizeof(*@uptr).
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_rw_access(uptr, elbl)				\
+	scoped_user_rw_access_size(uptr, sizeof(*(uptr)), elbl)
+
 #ifdef CONFIG_HARDENED_USERCOPY
 void __noreturn usercopy_abort(const char *name, const char *detail,
			       bool to_user, unsigned long offset,

From nobody Sun Feb 8 07:07:24 2026
Message-ID: <20251027083745.609031602@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: kernel test robot, Russell King, linux-arm-kernel@lists.infradead.org,
    Linus Torvalds, x86@kernel.org, Madhavan Srinivasan, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, linuxppc-dev@lists.ozlabs.org,
    Paul Walmsley, Palmer Dabbelt, linux-riscv@lists.infradead.org,
    Heiko Carstens, Christian Borntraeger, Sven Schnelle,
    linux-s390@vger.kernel.org, Mathieu Desnoyers, Andrew Cooper,
    David Laight, Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
    Davidlohr Bueso, André Almeida, Alexander Viro, Christian Brauner,
    Jan Kara, linux-fsdevel@vger.kernel.org
Subject: [patch V5 08/12] uaccess: Provide put/get_user_inline()
References: <20251027083700.573016505@linutronix.de>
Date: Mon, 27 Oct 2025 09:43:56 +0100 (CET)

Provide convenience wrappers around scoped user access, similar to
put/get_user(), which reduce the usage sites to:

	if (get_user_inline(val, ptr))
		return -EFAULT;

Should only be used if there is a demonstrable performance benefit.

Signed-off-by: Thomas Gleixner
Reviewed-by: Christophe Leroy
Reviewed-by: Mathieu Desnoyers
---
V5: Rename to inline
V4: Rename to scoped
---
 include/linux/uaccess.h | 50 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -825,6 +825,56 @@ for (bool done = false; !done; done = tr
 #define scoped_user_rw_access(uptr, elbl)				\
 	scoped_user_rw_access_size(uptr, sizeof(*(uptr)), elbl)
 
+/**
+ * get_user_inline - Read user data inlined
+ * @val:	The variable to store the value read from user memory
+ * @usrc:	Pointer to the user space memory to read from
+ *
+ * Return: 0 if successful, -EFAULT when faulted
+ *
+ * Inlined variant of get_user(). Only use when there is a demonstrable
+ * performance reason.
+ */
+#define get_user_inline(val, usrc)					\
+({									\
+	__label__ efault;						\
+	typeof(usrc) _tmpsrc = usrc;					\
+	int _ret = 0;							\
+									\
+	scoped_user_read_access(_tmpsrc, efault)			\
+		unsafe_get_user(val, _tmpsrc, efault);			\
+	if (0) {							\
+efault:									\
+		_ret = -EFAULT;						\
+	}								\
+	_ret;								\
+})
+
+/**
+ * put_user_inline - Write to user memory inlined
+ * @val:	The value to write
+ * @udst:	Pointer to the user space memory to write to
+ *
+ * Return: 0 if successful, -EFAULT when faulted
+ *
+ * Inlined variant of put_user(). Only use when there is a demonstrable
+ * performance reason.
+ */
+#define put_user_inline(val, udst)					\
+({									\
+	__label__ efault;						\
+	typeof(udst) _tmpdst = udst;					\
+	int _ret = 0;							\
+									\
+	scoped_user_write_access(_tmpdst, efault)			\
+		unsafe_put_user(val, _tmpdst, efault);			\
+	if (0) {							\
+efault:									\
+		_ret = -EFAULT;						\
+	}								\
+	_ret;								\
+})
+
 #ifdef CONFIG_HARDENED_USERCOPY
 void __noreturn usercopy_abort(const char *name, const char *detail,
			       bool to_user, unsigned long offset,

From nobody Sun Feb 8 07:07:24 2026
Message-ID: <20251027083745.673465359@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Julia Lawall, Nicolas Palix, kernel test robot, Russell King,
    linux-arm-kernel@lists.infradead.org, Linus Torvalds, x86@kernel.org,
    Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
    Christophe Leroy, linuxppc-dev@lists.ozlabs.org, Paul Walmsley,
    Palmer Dabbelt, linux-riscv@lists.infradead.org, Heiko Carstens,
    Christian Borntraeger, Sven Schnelle, linux-s390@vger.kernel.org,
    Mathieu Desnoyers, Andrew Cooper, David Laight, Peter Zijlstra,
    Darren Hart, Davidlohr Bueso, André Almeida, Alexander Viro,
    Christian Brauner, Jan Kara, linux-fsdevel@vger.kernel.org
Subject: [patch V5 09/12] [RFC] coccinelle: misc: Add scoped_masked_$MODE_access() checker script
References: <20251027083700.573016505@linutronix.de>
Date: Mon, 27 Oct 2025 09:43:58 +0100 (CET)

A common mistake in user access code is that the wrong access mode is
selected for starting the user access section. As most architectures map
the Read and Write modes to ReadWrite, this often goes unnoticed for
quite some time.

Aside from that, the scoped user access mechanism requires that the same
pointer which was handed in to start the scope is used for the actual
accessor macros, because the pointer can be modified by the scope begin
mechanism if the architecture supports masking.

Add a basic (and incomplete) coccinelle script to check for the common
issues.
The error output is:

kernel/futex/futex.h:303:2-17: ERROR: Invalid pointer for unsafe_put_user(p) in scoped_masked_user_write_access(to)
kernel/futex/futex.h:292:2-17: ERROR: Invalid access mode unsafe_get_user() in scoped_masked_user_write_access()

Not-Yet-Signed-off-by: Thomas Gleixner
Cc: Julia Lawall
Cc: Nicolas Palix
---
 scripts/coccinelle/misc/scoped_uaccess.cocci | 108 +++++++++++++++++++++++++++
 1 file changed, 108 insertions(+)

--- /dev/null
+++ b/scripts/coccinelle/misc/scoped_uaccess.cocci
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/// Validate scoped_masked_user*access() scopes
+///
+// Confidence: Zero
+// Options: --no-includes --include-headers
+
+virtual context
+virtual report
+virtual org
+
+@initialize:python@
+@@
+
+scopemap = {
+    'scoped_user_read_access_size'  : 'scoped_user_read_access',
+    'scoped_user_write_access_size' : 'scoped_user_write_access',
+    'scoped_user_rw_access_size'    : 'scoped_user_rw_access',
+}
+
+# Most common accessors. Incomplete list
+noaccessmap = {
+    'scoped_user_read_access'  : ('unsafe_put_user', 'unsafe_copy_to_user'),
+    'scoped_user_write_access' : ('unsafe_get_user', 'unsafe_copy_from_user'),
+}
+
+# Most common accessors. Incomplete list
+ptrmap = {
+    'unsafe_put_user'       : 1,
+    'unsafe_get_user'       : 1,
+    'unsafe_copy_to_user'   : 0,
+    'unsafe_copy_from_user' : 0,
+}
+
+print_mode = None
+
+def pr_err(pos, msg):
+    if print_mode == 'R':
+        coccilib.report.print_report(pos[0], msg)
+    elif print_mode == 'O':
+        cocci.print_main(msg, pos)
+
+@r0 depends on report || org@
+iterator name scoped_user_read_access,
+	 scoped_user_read_access_size,
+	 scoped_user_write_access,
+	 scoped_user_write_access_size,
+	 scoped_user_rw_access,
+	 scoped_user_rw_access_size;
+iterator scope;
+statement S;
+@@
+
+(
+(
+scoped_user_read_access(...) S
+|
+scoped_user_read_access_size(...) S
+|
+scoped_user_write_access(...) S
+|
+scoped_user_write_access_size(...) S
+|
+scoped_user_rw_access(...) S
+|
+scoped_user_rw_access_size(...) S
+)
+&
+scope(...) S
+)
+
+@script:python depends on r0 && report@
+@@
+print_mode = 'R'
+
+@script:python depends on r0 && org@
+@@
+print_mode = 'O'
+
+@r1@
+expression sp, a0, a1;
+iterator r0.scope;
+identifier ac;
+position p;
+@@
+
+	scope(sp,...) {
+	<...
+	ac@p(a0, a1, ...);
+	...>
+	}
+
+@script:python@
+pos << r1.p;
+scope << r0.scope;
+ac << r1.ac;
+sp << r1.sp;
+a0 << r1.a0;
+a1 << r1.a1;
+@@
+
+scope = scopemap.get(scope, scope)
+if ac in noaccessmap.get(scope, []):
+    pr_err(pos, 'ERROR: Invalid access mode %s() in %s()' % (ac, scope))
+
+if ac in ptrmap:
+    ap = (a0, a1)[ptrmap[ac]]
+    if sp != ap.lstrip('&').split('->')[0].strip():
+        pr_err(pos, 'ERROR: Invalid pointer for %s(%s) in %s(%s)' % (ac, ap, scope, sp))
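The final python rule in the script compares the scope pointer with a normalized form of the accessor's pointer argument. That normalization (strip a leading `&`, drop any `->member` suffix) is plain string handling and can be checked standalone; the inputs below are example expressions, not taken from the series:

```python
def base_ptr(expr: str) -> str:
    """Reduce an accessor pointer argument to its base variable,
    mirroring the script's ap.lstrip('&').split('->')[0].strip() step."""
    return expr.lstrip('&').split('->')[0].strip()

# &from->p and &from->size both reduce to the scope pointer "from",
# so they pass the check when the scope was opened with "from"
print(base_ptr('&from->p'))            # from
print(base_ptr('&from->size'))         # from
# a plain pointer is left untouched
print(base_ptr('to'))                  # to
# a mismatching base pointer is what triggers the ERROR report
print(base_ptr('&q->key') == 'from')   # False
```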
From: Thomas Gleixner
Subject: [patch V5 10/12] futex: Convert to get/put_user_inline()
Message-ID: <20251027083745.736737934@linutronix.de>
References: <20251027083700.573016505@linutronix.de>
Date: Mon, 27 Oct 2025 09:44:00 +0100 (CET)

Replace the open coded implementation with the new get/put_user_inline()
helpers.

This might be replaced by a regular get/put_user(), but that needs a proper
performance evaluation.
No functional change intended.

Signed-off-by: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Darren Hart
Cc: Davidlohr Bueso
Cc: "André Almeida"
Reviewed-by: Christophe Leroy
---
V5: Rename again and remove the helpers
V4: Rename once moar
V3: Adapt to scope changes
V2: Convert to scoped variant
---
 kernel/futex/core.c  |  4 +--
 kernel/futex/futex.h | 58 ++-------------------------------------------------
 2 files changed, 5 insertions(+), 57 deletions(-)

--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -581,7 +581,7 @@ int get_futex_key(u32 __user *uaddr, uns
 	if (flags & FLAGS_NUMA) {
 		u32 __user *naddr = (void *)uaddr + size / 2;
 
-		if (futex_get_value(&node, naddr))
+		if (get_user_inline(node, naddr))
 			return -EFAULT;
 
 		if ((node != FUTEX_NO_NODE) &&
@@ -601,7 +601,7 @@ int get_futex_key(u32 __user *uaddr, uns
 			node = numa_node_id();
 			node_updated = true;
 		}
-		if (node_updated && futex_put_value(node, naddr))
+		if (node_updated && put_user_inline(node, naddr))
 			return -EFAULT;
 	}
 
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -281,63 +281,11 @@ static inline int futex_cmpxchg_value_lo
 	return ret;
 }
 
-/*
- * This does a plain atomic user space read, and the user pointer has
- * already been verified earlier by get_futex_key() to be both aligned
- * and actually in user space, just like futex_atomic_cmpxchg_inatomic().
- *
- * We still want to avoid any speculation, and while __get_user() is
- * the traditional model for this, it's actually slower than doing
- * this manually these days.
- *
- * We could just have a per-architecture special function for it,
- * the same way we do futex_atomic_cmpxchg_inatomic(), but rather
- * than force everybody to do that, write it out long-hand using
- * the low-level user-access infrastructure.
- *
- * This looks a bit overkill, but generally just results in a couple
- * of instructions.
- */
-static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
-{
-	u32 val;
-
-	if (can_do_masked_user_access())
-		from = masked_user_access_begin(from);
-	else if (!user_read_access_begin(from, sizeof(*from)))
-		return -EFAULT;
-	unsafe_get_user(val, from, Efault);
-	user_read_access_end();
-	*dest = val;
-	return 0;
-Efault:
-	user_read_access_end();
-	return -EFAULT;
-}
-
-static __always_inline int futex_put_value(u32 val, u32 __user *to)
-{
-	if (can_do_masked_user_access())
-		to = masked_user_access_begin(to);
-	else if (!user_write_access_begin(to, sizeof(*to)))
-		return -EFAULT;
-	unsafe_put_user(val, to, Efault);
-	user_write_access_end();
-	return 0;
-Efault:
-	user_write_access_end();
-	return -EFAULT;
-}
-
+/* Read from user memory with pagefaults disabled */
 static inline int futex_get_value_locked(u32 *dest, u32 __user *from)
 {
-	int ret;
-
-	pagefault_disable();
-	ret = futex_get_value(dest, from);
-	pagefault_enable();
-
-	return ret;
+	guard(pagefault)();
+	return get_user_inline(*dest, from);
 }
 
 extern void __futex_unqueue(struct futex_q *q);
From: Thomas Gleixner
Subject: [patch V5 11/12] x86/futex: Convert to scoped user access
Message-ID: <20251027083745.799714344@linutronix.de>
References: <20251027083700.573016505@linutronix.de>
Date: Mon, 27 Oct 2025 09:44:02 +0100 (CET)

Replace the open coded implementation with the scoped user access guards.

No functional change intended.
Signed-off-by: Thomas Gleixner
Cc: x86@kernel.org
---
V4: Rename once more
    Use asm_inline - Andrew
V3: Adapt to scope changes
V2: Convert to scoped masked access
    Use RW access functions - Christophe
---
 arch/x86/include/asm/futex.h | 75 ++++++++++++++++++----------------------
 1 file changed, 33 insertions(+), 42 deletions(-)

--- a/arch/x86/include/asm/futex.h
+++ b/arch/x86/include/asm/futex.h
@@ -46,38 +46,31 @@ do { \
 } while(0)
 
 static __always_inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
-						u32 __user *uaddr)
+						       u32 __user *uaddr)
 {
-	if (can_do_masked_user_access())
-		uaddr = masked_user_access_begin(uaddr);
-	else if (!user_access_begin(uaddr, sizeof(u32)))
-		return -EFAULT;
-
-	switch (op) {
-	case FUTEX_OP_SET:
-		unsafe_atomic_op1("xchgl %0, %2", oval, uaddr, oparg, Efault);
-		break;
-	case FUTEX_OP_ADD:
-		unsafe_atomic_op1(LOCK_PREFIX "xaddl %0, %2", oval,
-				  uaddr, oparg, Efault);
-		break;
-	case FUTEX_OP_OR:
-		unsafe_atomic_op2("orl %4, %3", oval, uaddr, oparg, Efault);
-		break;
-	case FUTEX_OP_ANDN:
-		unsafe_atomic_op2("andl %4, %3", oval, uaddr, ~oparg, Efault);
-		break;
-	case FUTEX_OP_XOR:
-		unsafe_atomic_op2("xorl %4, %3", oval, uaddr, oparg, Efault);
-		break;
-	default:
-		user_access_end();
-		return -ENOSYS;
+	scoped_user_rw_access(uaddr, Efault) {
+		switch (op) {
+		case FUTEX_OP_SET:
+			unsafe_atomic_op1("xchgl %0, %2", oval, uaddr, oparg, Efault);
+			break;
+		case FUTEX_OP_ADD:
+			unsafe_atomic_op1(LOCK_PREFIX "xaddl %0, %2", oval, uaddr, oparg, Efault);
+			break;
+		case FUTEX_OP_OR:
+			unsafe_atomic_op2("orl %4, %3", oval, uaddr, oparg, Efault);
+			break;
+		case FUTEX_OP_ANDN:
+			unsafe_atomic_op2("andl %4, %3", oval, uaddr, ~oparg, Efault);
+			break;
+		case FUTEX_OP_XOR:
+			unsafe_atomic_op2("xorl %4, %3", oval, uaddr, oparg, Efault);
+			break;
+		default:
+			return -ENOSYS;
+		}
 	}
-	user_access_end();
 	return 0;
 Efault:
-	user_access_end();
 	return -EFAULT;
 }
 
@@ -86,21 +79,19 @@ static inline int futex_atomic_cmpxchg_i
 {
 	int ret = 0;
 
-	if (can_do_masked_user_access())
-		uaddr = masked_user_access_begin(uaddr);
-	else if (!user_access_begin(uaddr, sizeof(u32)))
-		return -EFAULT;
-	asm volatile("\n"
-		"1:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"
-		"2:\n"
-		_ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %0) \
-		: "+r" (ret), "=a" (oldval), "+m" (*uaddr)
-		: "r" (newval), "1" (oldval)
-		: "memory"
-	);
-	user_access_end();
-	*uval = oldval;
+	scoped_user_rw_access(uaddr, Efault) {
+		asm_inline volatile("\n"
+			"1:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"
+			"2:\n"
+			_ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %0)
+			: "+r" (ret), "=a" (oldval), "+m" (*uaddr)
+			: "r" (newval), "1" (oldval)
+			: "memory");
+		*uval = oldval;
+	}
 	return ret;
+Efault:
+	return -EFAULT;
 }
 
 #endif
From: Thomas Gleixner
Subject: [patch V5 12/12] select: Convert to scoped user access
Message-ID: <20251027083745.862419776@linutronix.de>
References: <20251027083700.573016505@linutronix.de>
Date: Mon, 27 Oct 2025 09:44:04 +0100 (CET)

Replace the open coded implementation with the scoped user access guard.

No functional change intended.

Signed-off-by: Thomas Gleixner
Cc: Alexander Viro
Cc: Christian Brauner
Cc: Jan Kara
Cc: linux-fsdevel@vger.kernel.org
Reviewed-by: Christophe Leroy
Reviewed-by: Mathieu Desnoyers
---
V4: Use read guard - PeterZ
    Rename once more
V3: Adapt to scope changes
---
 fs/select.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

--- a/fs/select.c
+++ b/fs/select.c
@@ -776,17 +776,13 @@ static inline int get_sigset_argpack(str
 {
 	// the path is hot enough for overhead of copy_from_user() to matter
 	if (from) {
-		if (can_do_masked_user_access())
-			from = masked_user_access_begin(from);
-		else if (!user_read_access_begin(from, sizeof(*from)))
-			return -EFAULT;
-		unsafe_get_user(to->p, &from->p, Efault);
-		unsafe_get_user(to->size, &from->size, Efault);
-		user_read_access_end();
+		scoped_user_read_access(from, Efault) {
+			unsafe_get_user(to->p, &from->p, Efault);
+			unsafe_get_user(to->size, &from->size, Efault);
+		}
	}
	return 0;
 Efault:
-	user_read_access_end();
	return -EFAULT;
 }