From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta, Waiman Long, Dave Hansen, Ingo Molnar, Linus Torvalds, Michael Ellerman, linuxppc-dev@lists.ozlabs.org, Andrew Cooper, Mark Rutland, "Kirill A. Shutemov"
Subject: [PATCH v2 1/6] x86/uaccess: Avoid barrier_nospec() in copy_from_user()
Date: Thu, 17 Oct 2024 14:55:20 -0700

For x86-64, the barrier_nospec() in copy_from_user() is overkill and
painfully slow.  Instead, use pointer masking to force the user pointer
to a non-kernel value in speculative paths.

To avoid regressing powerpc, move the barrier_nospec() to the powerpc
raw_copy_from_user() implementation so there's no functional change.

Signed-off-by: Josh Poimboeuf
---
 arch/powerpc/include/asm/uaccess.h | 2 ++
 arch/x86/include/asm/uaccess_64.h  | 7 ++++---
 arch/x86/lib/getuser.S             | 2 +-
 arch/x86/lib/putuser.S             | 2 +-
 include/linux/uaccess.h            | 6 ------
 5 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 4f5a46a77fa2..12abb8bf5eda 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef __powerpc64__
 /* We use TASK_SIZE_USER64 as TASK_SIZE is not constant */
@@ -341,6 +342,7 @@ static inline unsigned long raw_copy_from_user(void *to,
 {
 	unsigned long ret;
 
+	barrier_nospec();
 	allow_read_from_user(from, n);
 	ret = __copy_tofrom_user((__force void __user *)to, from, n);
 	prevent_read_from_user(from, n);
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index afce8ee5d7b7..61693028ea2b 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,11 +54,11 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm,
 #define valid_user_address(x) ((__force long)(x) >= 0)
 
 /*
- * Masking the user address is an alternative to a conditional
- * user_access_begin that can avoid the fencing.  This only works
- * for dense accesses starting at the address.
+ * If it's a kernel address, force it to all 1's.  This prevents a mispredicted
+ * access_ok() from speculatively accessing kernel space.
  */
 #define mask_user_address(x)	((typeof(x))((long)(x)|((long)(x)>>63)))
+
 #define masked_user_access_begin(x) ({				\
 	__auto_type __masked_ptr = (x);				\
 	__masked_ptr = mask_user_address(__masked_ptr);		\
@@ -133,6 +133,7 @@ copy_user_generic(void *to, const void *from, unsigned long len)
 static __always_inline __must_check unsigned long
 raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
 {
+	src = mask_user_address(src);
 	return copy_user_generic(dst, (__force void *)src, size);
 }
 
diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index d066aecf8aeb..094224ec9dca 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -39,7 +39,7 @@
 
 .macro check_range size:req
 .if IS_ENABLED(CONFIG_X86_64)
-	mov %rax, %rdx
+	mov %rax, %rdx		/* mask_user_address() */
 	sar $63, %rdx
 	or %rdx, %rax
 .else
diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index 975c9c18263d..09b7e37934ab 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -34,7 +34,7 @@
 
 .macro check_range size:req
 .if IS_ENABLED(CONFIG_X86_64)
-	mov %rcx, %rbx
+	mov %rcx, %rbx		/* mask_user_address() */
 	sar $63, %rbx
 	or %rbx, %rcx
 .else
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 39c7cf82b0c2..dda9725a9559 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -160,12 +160,6 @@ _inline_copy_from_user(void *to, const void __user *from, unsigned long n)
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		/*
-		 * Ensure that bad access_ok() speculation will not
-		 * lead to nasty side effects *after* the copy is
-		 * finished:
-		 */
-		barrier_nospec();
 		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
 		instrument_copy_from_user_after(to, from, n, res);
-- 
2.47.0
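
For readers following along, here is a minimal user-space sketch (not part of the series) of the arithmetic behind mask_user_address() and the equivalent "sar $63; or" sequence in getuser.S/putuser.S above: arithmetic-shifting bit 63 across the whole word leaves user pointers (top bit clear) unchanged and folds any kernel pointer (top bit set) to all 1's, so a mispredicted access_ok() can at worst reference a non-canonical address. The example addresses below are made up for illustration.

#include <stdint.h>
#include <stdio.h>

/* Same expression as the kernel's mask_user_address() macro. */
static uint64_t mask_user_address(uint64_t ptr)
{
	/* (int64_t)ptr >> 63 is 0 for user pointers, all 1's for kernel pointers. */
	return ptr | (uint64_t)((int64_t)ptr >> 63);
}

int main(void)
{
	uint64_t uptr = 0x00007fffdeadbeefULL;	/* user address: left unchanged */
	uint64_t kptr = 0xffff888012345678ULL;	/* kernel address: forced to ~0 */

	printf("%016llx -> %016llx\n", (unsigned long long)uptr,
	       (unsigned long long)mask_user_address(uptr));
	printf("%016llx -> %016llx\n", (unsigned long long)kptr,
	       (unsigned long long)mask_user_address(kptr));
	return 0;
}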
From: Josh Poimboeuf
Subject: [PATCH v2 2/6] x86/uaccess: Avoid barrier_nospec() in __get_user()
Date: Thu, 17 Oct 2024 14:55:21 -0700
Message-ID: <0777ac8e8c8d669fa56971dcba68b6f1c1980d39.1729201904.git.jpoimboe@kernel.org>

On 64-bit, the barrier_nospec() in __get_user() is overkill and
painfully slow.  Instead, use pointer masking to force the user pointer
to a non-kernel value in speculative paths.

Doing so makes get_user() and __get_user() identical in behavior, so
converge their implementations.

Signed-off-by: Josh Poimboeuf
---
 arch/x86/lib/getuser.S | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index 094224ec9dca..7c9bf8f0b3ac 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -105,6 +105,26 @@ SYM_FUNC_START(__get_user_8)
 SYM_FUNC_END(__get_user_8)
 EXPORT_SYMBOL(__get_user_8)
 
+#ifdef CONFIG_X86_64
+
+/*
+ * On x86-64, get_user() does address masking rather than a conditional
+ * bounds check so there's no functional difference with __get_user().
+ */
+SYM_FUNC_ALIAS(__get_user_nocheck_1, __get_user_1);
+EXPORT_SYMBOL(__get_user_nocheck_1);
+
+SYM_FUNC_ALIAS(__get_user_nocheck_2, __get_user_2);
+EXPORT_SYMBOL(__get_user_nocheck_2);
+
+SYM_FUNC_ALIAS(__get_user_nocheck_4, __get_user_4);
+EXPORT_SYMBOL(__get_user_nocheck_4);
+
+SYM_FUNC_ALIAS(__get_user_nocheck_8, __get_user_8);
+EXPORT_SYMBOL(__get_user_nocheck_8);
+
+#else /* CONFIG_X86_32 */
+
 /* .. and the same for __get_user, just without the range checks */
 SYM_FUNC_START(__get_user_nocheck_1)
 	ASM_STAC
@@ -139,19 +159,16 @@ EXPORT_SYMBOL(__get_user_nocheck_4)
 SYM_FUNC_START(__get_user_nocheck_8)
 	ASM_STAC
 	ASM_BARRIER_NOSPEC
-#ifdef CONFIG_X86_64
-	UACCESS movq (%_ASM_AX),%rdx
-#else
 	xor %ecx,%ecx
 	UACCESS movl (%_ASM_AX),%edx
 	UACCESS movl 4(%_ASM_AX),%ecx
-#endif
 	xor %eax,%eax
 	ASM_CLAC
 	RET
 SYM_FUNC_END(__get_user_nocheck_8)
 EXPORT_SYMBOL(__get_user_nocheck_8)
 
+#endif /* CONFIG_X86_32 */
 
 SYM_CODE_START_LOCAL(__get_user_handle_exception)
 	ASM_CLAC
-- 
2.47.0
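
For context, a hypothetical call site (function and variable names invented for illustration) showing the pattern __get_user() exists for: one access_ok() check covering several unchecked reads. After this patch the unchecked variants are plain aliases of the masking get_user() fast path on x86-64, so the speculation window between the check and the reads is closed without a fence.

static int read_pair(const unsigned int __user *uptr,
		     unsigned int *a, unsigned int *b)
{
	/* One bounds check up front... */
	if (!access_ok(uptr, 2 * sizeof(*uptr)))
		return -EFAULT;

	/* ...then unchecked reads, now routed through the masking helpers. */
	if (__get_user(*a, uptr) || __get_user(*b, uptr + 1))
		return -EFAULT;

	return 0;
}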
From: Josh Poimboeuf
Subject: [PATCH v2 3/6] x86/uaccess: Rearrange putuser.S
Date: Thu, 17 Oct 2024 14:55:22 -0700
Message-ID: <7818233ecd726628a3eb9cbb5ed0ba831e69af4b.1729201904.git.jpoimboe@kernel.org>

Separate __put_user_*() from __put_user_nocheck_*() to make the layout
similar to getuser.S.  This will also make it easier to do a subsequent
change.

No functional changes.

Signed-off-by: Josh Poimboeuf
---
 arch/x86/lib/putuser.S | 67 ++++++++++++++++++++++--------------------
 1 file changed, 35 insertions(+), 32 deletions(-)

diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index 09b7e37934ab..cb137e0286be 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -54,59 +54,32 @@ SYM_FUNC_START(__put_user_1)
 SYM_FUNC_END(__put_user_1)
 EXPORT_SYMBOL(__put_user_1)
 
-SYM_FUNC_START(__put_user_nocheck_1)
-	ASM_STAC
-2:	movb %al,(%_ASM_CX)
-	xor %ecx,%ecx
-	ASM_CLAC
-	RET
-SYM_FUNC_END(__put_user_nocheck_1)
-EXPORT_SYMBOL(__put_user_nocheck_1)
-
 SYM_FUNC_START(__put_user_2)
 	check_range size=2
 	ASM_STAC
-3:	movw %ax,(%_ASM_CX)
+2:	movw %ax,(%_ASM_CX)
 	xor %ecx,%ecx
 	ASM_CLAC
 	RET
 SYM_FUNC_END(__put_user_2)
 EXPORT_SYMBOL(__put_user_2)
 
-SYM_FUNC_START(__put_user_nocheck_2)
-	ASM_STAC
-4:	movw %ax,(%_ASM_CX)
-	xor %ecx,%ecx
-	ASM_CLAC
-	RET
-SYM_FUNC_END(__put_user_nocheck_2)
-EXPORT_SYMBOL(__put_user_nocheck_2)
-
 SYM_FUNC_START(__put_user_4)
 	check_range size=4
 	ASM_STAC
-5:	movl %eax,(%_ASM_CX)
+3:	movl %eax,(%_ASM_CX)
 	xor %ecx,%ecx
 	ASM_CLAC
 	RET
 SYM_FUNC_END(__put_user_4)
 EXPORT_SYMBOL(__put_user_4)
 
-SYM_FUNC_START(__put_user_nocheck_4)
-	ASM_STAC
-6:	movl %eax,(%_ASM_CX)
-	xor %ecx,%ecx
-	ASM_CLAC
-	RET
-SYM_FUNC_END(__put_user_nocheck_4)
-EXPORT_SYMBOL(__put_user_nocheck_4)
-
 SYM_FUNC_START(__put_user_8)
 	check_range size=8
 	ASM_STAC
-7:	mov %_ASM_AX,(%_ASM_CX)
+4:	mov %_ASM_AX,(%_ASM_CX)
 #ifdef CONFIG_X86_32
-8:	movl %edx,4(%_ASM_CX)
+5:	movl %edx,4(%_ASM_CX)
 #endif
 	xor %ecx,%ecx
 	ASM_CLAC
@@ -114,6 +87,34 @@ SYM_FUNC_START(__put_user_8)
 SYM_FUNC_END(__put_user_8)
 EXPORT_SYMBOL(__put_user_8)
 
+/* .. and the same for __put_user, just without the range checks */
+SYM_FUNC_START(__put_user_nocheck_1)
+	ASM_STAC
+6:	movb %al,(%_ASM_CX)
+	xor %ecx,%ecx
+	ASM_CLAC
+	RET
+SYM_FUNC_END(__put_user_nocheck_1)
+EXPORT_SYMBOL(__put_user_nocheck_1)
+
+SYM_FUNC_START(__put_user_nocheck_2)
+	ASM_STAC
+7:	movw %ax,(%_ASM_CX)
+	xor %ecx,%ecx
+	ASM_CLAC
+	RET
+SYM_FUNC_END(__put_user_nocheck_2)
+EXPORT_SYMBOL(__put_user_nocheck_2)
+
+SYM_FUNC_START(__put_user_nocheck_4)
+	ASM_STAC
+8:	movl %eax,(%_ASM_CX)
+	xor %ecx,%ecx
+	ASM_CLAC
+	RET
+SYM_FUNC_END(__put_user_nocheck_4)
+EXPORT_SYMBOL(__put_user_nocheck_4)
+
 SYM_FUNC_START(__put_user_nocheck_8)
 	ASM_STAC
 9:	mov %_ASM_AX,(%_ASM_CX)
@@ -137,11 +138,13 @@ SYM_CODE_END(__put_user_handle_exception)
 _ASM_EXTABLE_UA(2b, __put_user_handle_exception)
 _ASM_EXTABLE_UA(3b, __put_user_handle_exception)
 _ASM_EXTABLE_UA(4b, __put_user_handle_exception)
+#ifdef CONFIG_X86_32
 _ASM_EXTABLE_UA(5b, __put_user_handle_exception)
+#endif
 _ASM_EXTABLE_UA(6b, __put_user_handle_exception)
 _ASM_EXTABLE_UA(7b, __put_user_handle_exception)
+_ASM_EXTABLE_UA(8b, __put_user_handle_exception)
 _ASM_EXTABLE_UA(9b, __put_user_handle_exception)
 #ifdef CONFIG_X86_32
-_ASM_EXTABLE_UA(8b, __put_user_handle_exception)
 _ASM_EXTABLE_UA(10b, __put_user_handle_exception)
 #endif
-- 
2.47.0
From: Josh Poimboeuf
Subject: [PATCH v2 4/6] x86/uaccess: Add user pointer masking to __put_user()
Date: Thu, 17 Oct 2024 14:55:23 -0700

Add user pointer masking to __put_user() to mitigate Spectre v1.  A
write in a mispredicted access_ok() branch to a user-controlled kernel
address can populate the rest of the affected cache line with kernel
data.

This makes its behavior identical to put_user(), so converge their
implementations.

Signed-off-by: Josh Poimboeuf
---
 arch/x86/lib/putuser.S | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index cb137e0286be..1b122261b7aa 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -87,7 +87,26 @@ SYM_FUNC_START(__put_user_8)
 SYM_FUNC_END(__put_user_8)
 EXPORT_SYMBOL(__put_user_8)
 
-/* .. and the same for __put_user, just without the range checks */
+#ifdef CONFIG_X86_64
+
+/*
+ * On x86-64, put_user() does address masking rather than a conditional
+ * bounds check so there's no functional difference with __put_user().
+ */
+SYM_FUNC_ALIAS(__put_user_nocheck_1, __put_user_1);
+EXPORT_SYMBOL(__put_user_nocheck_1);
+
+SYM_FUNC_ALIAS(__put_user_nocheck_2, __put_user_2);
+EXPORT_SYMBOL(__put_user_nocheck_2);
+
+SYM_FUNC_ALIAS(__put_user_nocheck_4, __put_user_4);
+EXPORT_SYMBOL(__put_user_nocheck_4);
+
+SYM_FUNC_ALIAS(__put_user_nocheck_8, __put_user_8);
+EXPORT_SYMBOL(__put_user_nocheck_8);
+
+#else /* CONFIG_X86_32 */
+
 SYM_FUNC_START(__put_user_nocheck_1)
 	ASM_STAC
 6:	movb %al,(%_ASM_CX)
@@ -118,15 +137,15 @@ EXPORT_SYMBOL(__put_user_nocheck_4)
 SYM_FUNC_START(__put_user_nocheck_8)
 	ASM_STAC
 9:	mov %_ASM_AX,(%_ASM_CX)
-#ifdef CONFIG_X86_32
 10:	movl %edx,4(%_ASM_CX)
-#endif
 	xor %ecx,%ecx
 	ASM_CLAC
 	RET
 SYM_FUNC_END(__put_user_nocheck_8)
 EXPORT_SYMBOL(__put_user_nocheck_8)
 
+#endif /* CONFIG_X86_32 */
+
 SYM_CODE_START_LOCAL(__put_user_handle_exception)
 	ASM_CLAC
 .Lbad_put_user:
@@ -140,11 +159,9 @@ SYM_CODE_END(__put_user_handle_exception)
 _ASM_EXTABLE_UA(4b, __put_user_handle_exception)
 #ifdef CONFIG_X86_32
 _ASM_EXTABLE_UA(5b, __put_user_handle_exception)
-#endif
 _ASM_EXTABLE_UA(6b, __put_user_handle_exception)
 _ASM_EXTABLE_UA(7b, __put_user_handle_exception)
 _ASM_EXTABLE_UA(8b, __put_user_handle_exception)
 _ASM_EXTABLE_UA(9b, __put_user_handle_exception)
-#ifdef CONFIG_X86_32
 _ASM_EXTABLE_UA(10b, __put_user_handle_exception)
 #endif
-- 
2.47.0
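
As a rough illustration (hypothetical helper, not from this series), the write-side pattern this protects mirrors the read side: access_ok() up front followed by unchecked __put_user() stores, which on x86-64 now go through the same masking code as put_user(), so a mispredicted access_ok() no longer lets the speculative store target a kernel address.

static int write_pair(unsigned int __user *uptr,
		      unsigned int a, unsigned int b)
{
	/* One bounds check covering both stores. */
	if (!access_ok(uptr, 2 * sizeof(*uptr)))
		return -EFAULT;

	/* Unchecked stores; the pointer is masked in the helpers. */
	if (__put_user(a, uptr) || __put_user(b, uptr + 1))
		return -EFAULT;

	return 0;
}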
From: Josh Poimboeuf
Subject: [PATCH v2 5/6] x86/uaccess: Add user pointer masking to copy_to_user()
Date: Thu, 17 Oct 2024 14:55:24 -0700
Message-ID: <6500dcd8e7700b4dfe5de4f82ed2da19edc23c58.1729201904.git.jpoimboe@kernel.org>

Add user pointer masking to copy_to_user() to mitigate Spectre v1.  A
write in a mispredicted access_ok() branch to a user-controlled kernel
address can populate the rest of the affected cache line with kernel
data.

Signed-off-by: Josh Poimboeuf
---
 arch/x86/include/asm/uaccess_64.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 61693028ea2b..0587830a47e1 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -140,6 +140,7 @@ raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
 static __always_inline __must_check unsigned long
 raw_copy_to_user(void __user *dst, const void *src, unsigned long size)
 {
+	dst = mask_user_address(dst);
 	return copy_user_generic((__force void *)dst, src, size);
 }
 
-- 
2.47.0
From: Josh Poimboeuf
Subject: [PATCH v2 6/6] x86/uaccess: Add user pointer masking to clear_user()
Date: Thu, 17 Oct 2024 14:55:25 -0700
Message-ID: <7db4ec5c9444e4b76d45a189fdd37f6483c06bef.1729201904.git.jpoimboe@kernel.org>

Add user pointer masking to clear_user() to mitigate Spectre v1.  A
write in a mispredicted access_ok() branch to a user-controlled kernel
address can populate the rest of the affected cache line with kernel
data.

Signed-off-by: Josh Poimboeuf
---
 arch/x86/include/asm/uaccess_64.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 0587830a47e1..8027db7f68c2 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -199,7 +199,7 @@ static __always_inline __must_check unsigned long __clear_user(void __user *addr
 static __always_inline unsigned long clear_user(void __user *to, unsigned long n)
 {
 	if (__access_ok(to, n))
-		return __clear_user(to, n);
+		return __clear_user(mask_user_address(to), n);
 	return n;
 }
 #endif /* _ASM_X86_UACCESS_64_H */
-- 
2.47.0