From: Brian Gerst <brgerst@gmail.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Thomas Gleixner, Borislav Petkov, H. Peter Anvin, Andy Lutomirski, Brian Gerst
Subject: [PATCH 2/6] x86/entry/64: Convert SYSRET validation tests to C
Date: Tue, 18 Jul 2023 09:44:42 -0400
Message-ID: <20230718134446.168654-3-brgerst@gmail.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230718134446.168654-1-brgerst@gmail.com>
References: <20230718134446.168654-1-brgerst@gmail.com>

Signed-off-by: Brian Gerst <brgerst@gmail.com>
---
 arch/x86/entry/common.c        | 50 ++++++++++++++++++++++++++++++-
 arch/x86/entry/entry_64.S      | 55 ++--------------------------------
 arch/x86/include/asm/syscall.h |  2 +-
 3 files changed, 52 insertions(+), 55 deletions(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 6c2826417b33..afe79c3f1c5b 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -70,8 +70,12 @@ static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr)
 	return false;
 }
 
-__visible noinstr void do_syscall_64(struct pt_regs *regs, int nr)
+/* Returns true to return using SYSRET, or false to use IRET */
+__visible noinstr bool do_syscall_64(struct pt_regs *regs, int nr)
 {
+	long rip;
+	unsigned int shift_rip;
+
 	add_random_kstack_offset();
 	nr = syscall_enter_from_user_mode(regs, nr);
 
@@ -84,6 +88,50 @@ __visible noinstr void do_syscall_64(struct pt_regs *regs, int nr)
 
 	instrumentation_end();
 	syscall_exit_to_user_mode(regs);
+
+	/*
+	 * Check that the register state is valid for using SYSRET to exit
+	 * to userspace. Otherwise use the slower but fully capable IRET
+	 * exit path.
+	 */
+
+	/* XEN PV guests always use IRET path */
+	if (cpu_feature_enabled(X86_FEATURE_XENPV))
+		return false;
+
+	/* SYSRET requires RCX == RIP and R11 == EFLAGS */
+	if (unlikely(regs->cx != regs->ip || regs->r11 != regs->flags))
+		return false;
+
+	/* CS and SS must match the values set in MSR_STAR */
+	if (unlikely(regs->cs != __USER_CS || regs->ss != __USER_DS))
+		return false;
+
+	/*
+	 * On Intel CPUs, SYSRET with non-canonical RCX/RIP will #GP
+	 * in kernel space. This essentially lets the user take over
+	 * the kernel, since userspace controls RSP.
+	 *
+	 * Change top bits to match most significant bit (47th or 56th bit
+	 * depending on paging mode) in the address.
+	 */
+	shift_rip = 64 - (__VIRTUAL_MASK_SHIFT + 1);
+	rip = (long) regs->ip;
+	rip <<= shift_rip;
+	rip >>= shift_rip;
+	if (unlikely((unsigned long) rip != regs->ip))
+		return false;
+
+	/*
+	 * SYSRET cannot restore RF. It can restore TF, but unlike IRET,
+	 * restoring TF results in a trap from userspace immediately after
+	 * SYSRET.
+	 */
+	if (unlikely(regs->flags & (X86_EFLAGS_RF | X86_EFLAGS_TF)))
+		return false;
+
+	/* Use SYSRET to exit to userspace */
+	return true;
 }
 #endif
 
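As an illustrative aside rather than part of the patch: the canonical-address
test added above can be exercised on its own. The sketch below is a userspace
approximation, not kernel code; vaddr_bits stands in for
__VIRTUAL_MASK_SHIFT + 1 (48 for 4-level paging, 57 with LA57), and the test
addresses are assumed values, not taken from kernel headers.

/*
 * Standalone sketch of the sign-extension canonicality check used in
 * do_syscall_64() above. Illustrative userspace code only.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool is_canonical(uint64_t addr, unsigned int vaddr_bits)
{
	unsigned int shift = 64 - vaddr_bits;
	/* Shift the top bits out, then shift back as a signed value so the
	 * most significant implemented bit is replicated upward. */
	int64_t sext = (int64_t)(addr << shift) >> shift;

	return (uint64_t)sext == addr;
}

int main(void)
{
	/* 4-level paging: bits 63:48 must equal bit 47. */
	assert(is_canonical(0x00007fffffffffffULL, 48));
	assert(is_canonical(0xffff800000000000ULL, 48));
	assert(!is_canonical(0x0000800000000000ULL, 48));

	/* 5-level paging (LA57): bits 63:57 must equal bit 56. */
	assert(is_canonical(0x00ffffffffffffffULL, 57));
	assert(!is_canonical(0x0100000000000000ULL, 57));

	return 0;
}

The kernel code performs the same operation with regs->ip in place of addr
and shift_rip computed from __VIRTUAL_MASK_SHIFT.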
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index c01776a51545..b1288e22cae8 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -123,60 +123,9 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
 	 * Try to use SYSRET instead of IRET if we're returning to
 	 * a completely clean 64-bit userspace context. If we're not,
 	 * go to the slow exit path.
-	 * In the Xen PV case we must use iret anyway.
 	 */
-
-	ALTERNATIVE "", "jmp	swapgs_restore_regs_and_return_to_usermode", \
-		X86_FEATURE_XENPV
-
-	movq	RCX(%rsp), %rcx
-	movq	RIP(%rsp), %r11
-
-	cmpq	%rcx, %r11	/* SYSRET requires RCX == RIP */
-	jne	swapgs_restore_regs_and_return_to_usermode
-
-	/*
-	 * On Intel CPUs, SYSRET with non-canonical RCX/RIP will #GP
-	 * in kernel space. This essentially lets the user take over
-	 * the kernel, since userspace controls RSP.
-	 *
-	 * If width of "canonical tail" ever becomes variable, this will need
-	 * to be updated to remain correct on both old and new CPUs.
-	 *
-	 * Change top bits to match most significant bit (47th or 56th bit
-	 * depending on paging mode) in the address.
-	 */
-#ifdef CONFIG_X86_5LEVEL
-	ALTERNATIVE "shl $(64 - 48), %rcx; sar $(64 - 48), %rcx", \
-		"shl $(64 - 57), %rcx; sar $(64 - 57), %rcx", X86_FEATURE_LA57
-#else
-	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
-	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
-#endif
-
-	/* If this changed %rcx, it was not canonical */
-	cmpq	%rcx, %r11
-	jne	swapgs_restore_regs_and_return_to_usermode
-
-	cmpq	$__USER_CS, CS(%rsp)		/* CS must match SYSRET */
-	jne	swapgs_restore_regs_and_return_to_usermode
-
-	movq	R11(%rsp), %r11
-	cmpq	%r11, EFLAGS(%rsp)		/* R11 == RFLAGS */
-	jne	swapgs_restore_regs_and_return_to_usermode
-
-	/*
-	 * SYSRET cannot restore RF. It can restore TF, but unlike IRET,
-	 * restoring TF results in a trap from userspace immediately after
-	 * SYSRET.
-	 */
-	testq	$(X86_EFLAGS_RF|X86_EFLAGS_TF), %r11
-	jnz	swapgs_restore_regs_and_return_to_usermode
-
-	/* nothing to check for RSP */
-
-	cmpq	$__USER_DS, SS(%rsp)		/* SS must match SYSRET */
-	jne	swapgs_restore_regs_and_return_to_usermode
+	testb	%al, %al
+	jz	swapgs_restore_regs_and_return_to_usermode
 
 	/*
 	 * We win! This label is here just for ease of understanding
diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
index 4fb36fba4b5a..be6c5515e0b9 100644
--- a/arch/x86/include/asm/syscall.h
+++ b/arch/x86/include/asm/syscall.h
@@ -126,7 +126,7 @@ static inline int syscall_get_arch(struct task_struct *task)
 		? AUDIT_ARCH_I386 : AUDIT_ARCH_X86_64;
 }
 
-void do_syscall_64(struct pt_regs *regs, int nr);
+bool do_syscall_64(struct pt_regs *regs, int nr);
 
 #endif	/* CONFIG_X86_32 */
 
-- 
2.41.0
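A note on how the two halves now meet: do_syscall_64() returns bool, which the
calling convention leaves in %al, so the remaining assembly stub only needs
testb %al, %al / jz to choose between SYSRET and the IRET slow path. The
predicate itself can be mocked in userspace. The sketch below is an
illustration only, not kernel code: the struct, selector values, and flag
masks are stand-ins rather than the real pt_regs, __USER_CS/__USER_DS, or
X86_EFLAGS_* definitions, and the Xen PV and canonical-RIP checks (shown
earlier) are omitted.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in selector values and EFLAGS bits, for illustration only. */
#define MOCK_USER_CS	0x33ULL
#define MOCK_USER_SS	0x2bULL
#define FLAG_TF		(1ULL << 8)	/* trap flag */
#define FLAG_RF		(1ULL << 16)	/* resume flag */

struct mock_regs {
	uint64_t ip, cx, r11, flags, cs, ss;
};

/* Mirrors the order of the SYSRET checks added to do_syscall_64(). */
static bool sysret_allowed(const struct mock_regs *r)
{
	/* SYSRET reloads RIP from RCX and RFLAGS from R11. */
	if (r->cx != r->ip || r->r11 != r->flags)
		return false;
	/* SYSRET forces fixed CS/SS selectors. */
	if (r->cs != MOCK_USER_CS || r->ss != MOCK_USER_SS)
		return false;
	/* RF cannot be restored; TF would trap right after SYSRET. */
	if (r->flags & (FLAG_RF | FLAG_TF))
		return false;
	return true;
}

int main(void)
{
	struct mock_regs clean = {
		.ip = 0x401000, .cx = 0x401000,
		.flags = 0x202, .r11 = 0x202,
		.cs = MOCK_USER_CS, .ss = MOCK_USER_SS,
	};
	struct mock_regs traced = clean;

	traced.flags |= FLAG_TF;
	traced.r11 = traced.flags;

	printf("clean context: %s\n", sysret_allowed(&clean) ? "SYSRET" : "IRET");
	printf("TF set:        %s\n", sysret_allowed(&traced) ? "SYSRET" : "IRET");
	return 0;
}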