From nobody Mon May 11 07:46:26 2026
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
    Thomas Gleixner, x86@kernel.org, Lai Jiangshan, Ingo Molnar,
    Dave Hansen, "H. Peter Anvin", "Kirill A. Shutemov", Fenghua Yu,
    "Chang S. Bae"
Subject: [PATCH V5 1/7] x86/traps: Move pt_regs only in fixup_bad_iret()
Date: Tue, 12 Apr 2022 20:15:35 +0800
Message-Id: <20220412121541.4595-2-jiangshanlai@gmail.com>
In-Reply-To: <20220412121541.4595-1-jiangshanlai@gmail.com>
References: <20220412121541.4595-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Always stash the address error_entry() is going to return to in %r12,
and get rid of the "void *error_entry_ret;" slot in struct
bad_iret_stack, which was there to account for that return address on
top of the pt_regs pushed on the stack.

After this, both fixup_bad_iret() and sync_regs() can work on a
struct pt_regs pointer directly.
Signed-off-by: Lai Jiangshan
---
 arch/x86/entry/entry_64.S    |  5 ++++-
 arch/x86/include/asm/traps.h |  2 +-
 arch/x86/kernel/traps.c      | 18 ++++++------------
 3 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 4faac48ebec5..e9d896717ab4 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1058,9 +1058,12 @@ SYM_CODE_START_LOCAL(error_entry)
	 * Pretend that the exception came from user mode: set up pt_regs
	 * as if we faulted immediately after IRET.
	 */
-	mov	%rsp, %rdi
+	popq	%r12				/* save return addr in %r12 */
+	movq	%rsp, %rdi			/* arg0 = pt_regs pointer */
	call	fixup_bad_iret
	mov	%rax, %rsp
+	ENCODE_FRAME_POINTER
+	pushq	%r12
	jmp	.Lerror_entry_from_usermode_after_swapgs
 SYM_CODE_END(error_entry)

diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h
index 35317c5c551d..47ecfff2c83d 100644
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -13,7 +13,7 @@
 #ifdef CONFIG_X86_64
 asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs);
 asmlinkage __visible notrace
-struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s);
+struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs);
 void __init trap_init(void);
 asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *eregs);
 #endif

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index a4e2efde5d1f..111b18d57a54 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -898,13 +898,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r)
 }
 #endif

-struct bad_iret_stack {
-	void *error_entry_ret;
-	struct pt_regs regs;
-};
-
-asmlinkage __visible noinstr
-struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
+asmlinkage __visible noinstr struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs)
 {
	/*
	 * This is called from entry_64.S early in handling a fault
@@ -914,19 +908,19 @@ struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
	 * just below the IRET frame) and we want to pretend that the
	 * exception came from the IRET target.
	 */
-	struct bad_iret_stack tmp, *new_stack =
-		(struct bad_iret_stack *)__this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1;
+	struct pt_regs tmp, *new_stack =
+		(struct pt_regs *)__this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1;

	/* Copy the IRET target to the temporary storage. */
-	__memcpy(&tmp.regs.ip, (void *)s->regs.sp, 5*8);
+	__memcpy(&tmp.ip, (void *)bad_regs->sp, 5*8);

	/* Copy the remainder of the stack from the current stack. */
-	__memcpy(&tmp, s, offsetof(struct bad_iret_stack, regs.ip));
+	__memcpy(&tmp, bad_regs, offsetof(struct pt_regs, ip));

	/* Update the entry stack */
	__memcpy(new_stack, &tmp, sizeof(tmp));

-	BUG_ON(!user_mode(&new_stack->regs));
+	BUG_ON(!user_mode(new_stack));
	return new_stack;
 }
 #endif
-- 
2.19.1.6.gb485710b

From nobody Mon May 11 07:46:26 2026
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
    Thomas Gleixner, x86@kernel.org, Lai Jiangshan, Ingo Molnar,
    Dave Hansen, "H. Peter Anvin"
Subject: [PATCH V5 2/7] x86/entry: Switch the stack after error_entry() returns
Date: Tue, 12 Apr 2022 20:15:36 +0800
Message-Id: <20220412121541.4595-3-jiangshanlai@gmail.com>
In-Reply-To: <20220412121541.4595-1-jiangshanlai@gmail.com>
References: <20220412121541.4595-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

error_entry() calls sync_regs() (and, for a fault from a bad IRET,
fixup_bad_iret() before sync_regs()) to copy the pt_regs to the kernel
stack, and it switches to the kernel stack directly after sync_regs().

But error_entry() itself is also a function call, so the code has to
stash the address error_entry() is going to return to in %r12, which
complicates the work.

Move the stack-switching code to after error_entry() returns and get
rid of the need to handle the return address.

Signed-off-by: Lai Jiangshan
---
 arch/x86/entry/entry_64.S | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index e9d896717ab4..e1efc56fbcd4 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -326,6 +326,8 @@ SYM_CODE_END(ret_from_fork)
 .macro idtentry_body cfunc has_error_code:req

	call	error_entry
+	movq	%rax, %rsp			/* switch to the task stack if from userspace */
+	ENCODE_FRAME_POINTER
	UNWIND_HINT_REGS

	movq	%rsp, %rdi			/* pt_regs pointer into 1st argument*/
@@ -999,14 +1001,10 @@ SYM_CODE_START_LOCAL(error_entry)
	/* We have user CR3.  Change to kernel CR3. */
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax

+	leaq	8(%rsp), %rdi			/* arg0 = pt_regs pointer */
 .Lerror_entry_from_usermode_after_swapgs:
	/* Put us onto the real thread stack. */
-	popq	%r12				/* save return addr in %r12 */
-	movq	%rsp, %rdi			/* arg0 = pt_regs pointer */
	call	sync_regs
-	movq	%rax, %rsp			/* switch stack */
-	ENCODE_FRAME_POINTER
-	pushq	%r12
	RET

	/*
@@ -1038,6 +1036,7 @@ SYM_CODE_START_LOCAL(error_entry)
	 */
 .Lerror_entry_done_lfence:
	FENCE_SWAPGS_KERNEL_ENTRY
+	leaq	8(%rsp), %rax			/* return pt_regs pointer */
	RET

 .Lbstep_iret:
@@ -1058,12 +1057,9 @@ SYM_CODE_START_LOCAL(error_entry)
	 * Pretend that the exception came from user mode: set up pt_regs
	 * as if we faulted immediately after IRET.
	 */
-	popq	%r12				/* save return addr in %r12 */
-	movq	%rsp, %rdi			/* arg0 = pt_regs pointer */
+	leaq	8(%rsp), %rdi			/* arg0 = pt_regs pointer */
	call	fixup_bad_iret
-	mov	%rax, %rsp
-	ENCODE_FRAME_POINTER
-	pushq	%r12
+	mov	%rax, %rdi
	jmp	.Lerror_entry_from_usermode_after_swapgs
 SYM_CODE_END(error_entry)
-- 
2.19.1.6.gb485710b

From nobody Mon May 11 07:46:26 2026
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
    Thomas Gleixner, x86@kernel.org, Lai Jiangshan, Ingo Molnar,
    Dave Hansen, "H. Peter Anvin"
Subject: [PATCH V5 3/7] x86/entry: Move PUSH_AND_CLEAR_REGS out of error_entry()
Date: Tue, 12 Apr 2022 20:15:37 +0800
Message-Id: <20220412121541.4595-4-jiangshanlai@gmail.com>
In-Reply-To: <20220412121541.4595-1-jiangshanlai@gmail.com>
References: <20220412121541.4595-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

error_entry() is not stack-balanced: it includes PUSH_AND_CLEAR_REGS,
which is needed by all IDT entries, and it can't pop the regs before
it returns.

Move PUSH_AND_CLEAR_REGS out of error_entry() so that error_entry()
operates on the stack normally.  After this, XENPV doesn't need
error_entry() since PUSH_AND_CLEAR_REGS has been moved out, and
error_entry() can be converted to C code in the future since it no
longer fiddles with the stack.

The text size will be enlarged:

	size arch/x86/entry/entry_64.o.before:
	   text	   data	    bss	    dec	    hex	filename
	  17916	    384	      0	  18300	   477c	arch/x86/entry/entry_64.o

	size --format=SysV arch/x86/entry/entry_64.o.before:
	.entry.text		5528	0
	.orc_unwind		6456	0
	.orc_unwind_ip		4304	0

	size arch/x86/entry/entry_64.o.after:
	   text	   data	    bss	    dec	    hex	filename
	  26868	    384	      0	  27252	   6a74	arch/x86/entry/entry_64.o

	size --format=SysV arch/x86/entry/entry_64.o.after:
	.entry.text		8200	0
	.orc_unwind		10224	0
	.orc_unwind_ip		6816	0

The .orc_unwind[_ip] tables are enlarged because of the many added
pushes.  But .entry.text on x86_64 is 2M aligned, so enlarging it to
8.2k doesn't enlarge the final text size.  And it only increases the
cache footprint in the unlikely case that many different interrupts
and exceptions happen heavily at the same time.

Signed-off-by: Lai Jiangshan
---
 arch/x86/entry/entry_64.S | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index e1efc56fbcd4..835b798556fb 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -325,6 +325,9 @@ SYM_CODE_END(ret_from_fork)
  */
 .macro idtentry_body cfunc has_error_code:req

+	PUSH_AND_CLEAR_REGS
+	ENCODE_FRAME_POINTER
+
	call	error_entry
	movq	%rax, %rsp			/* switch to the task stack if from userspace */
	ENCODE_FRAME_POINTER
@@ -987,8 +990,6 @@ SYM_CODE_END(paranoid_exit)
 SYM_CODE_START_LOCAL(error_entry)
	UNWIND_HINT_FUNC
	cld
-	PUSH_AND_CLEAR_REGS save_ret=1
-	ENCODE_FRAME_POINTER 8
	testb	$3, CS+8(%rsp)
	jz	.Lerror_kernelspace
-- 
2.19.1.6.gb485710b

From nobody Mon May 11 07:46:26 2026
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
    Thomas Gleixner, x86@kernel.org, Lai Jiangshan, Ingo Molnar,
    Dave Hansen, "H. Peter Anvin"
Subject: [PATCH V5 4/7] x86/entry: Move cld to the start of idtentry macro
Date: Tue, 12 Apr 2022 20:15:38 +0800
Message-Id: <20220412121541.4595-5-jiangshanlai@gmail.com>
In-Reply-To: <20220412121541.4595-1-jiangshanlai@gmail.com>
References: <20220412121541.4595-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Move it next to CLAC.

Suggested-by: Peter Zijlstra
Signed-off-by: Lai Jiangshan
---
 arch/x86/entry/entry_64.S | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 835b798556fb..7b6a0f15bb20 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -360,6 +360,7 @@ SYM_CODE_START(\asmsym)
	UNWIND_HINT_IRET_REGS offset=\has_error_code*8
	ENDBR
	ASM_CLAC
+	cld

	.if \has_error_code == 0
		pushq	$-1			/* ORIG_RAX: no syscall to restart */
@@ -428,6 +429,7 @@ SYM_CODE_START(\asmsym)
	UNWIND_HINT_IRET_REGS
	ENDBR
	ASM_CLAC
+	cld

	pushq	$-1				/* ORIG_RAX: no syscall to restart */

@@ -484,6 +486,7 @@ SYM_CODE_START(\asmsym)
	UNWIND_HINT_IRET_REGS
	ENDBR
	ASM_CLAC
+	cld

	/*
	 * If the entry is from userspace, switch stacks and treat it as
@@ -546,6 +549,7 @@ SYM_CODE_START(\asmsym)
	UNWIND_HINT_IRET_REGS offset=8
	ENDBR
	ASM_CLAC
+	cld

	/* paranoid_entry returns GS information for paranoid_exit in EBX. */
	call	paranoid_entry
@@ -871,7 +875,6 @@ SYM_CODE_END(xen_failsafe_callback)
  */
 SYM_CODE_START_LOCAL(paranoid_entry)
	UNWIND_HINT_FUNC
-	cld
	PUSH_AND_CLEAR_REGS save_ret=1
	ENCODE_FRAME_POINTER 8

@@ -989,7 +992,6 @@ SYM_CODE_END(paranoid_exit)
  */
 SYM_CODE_START_LOCAL(error_entry)
	UNWIND_HINT_FUNC
-	cld
	testb	$3, CS+8(%rsp)
	jz	.Lerror_kernelspace

@@ -1123,6 +1125,7 @@ SYM_CODE_START(asm_exc_nmi)
	 */

	ASM_CLAC
+	cld

	/* Use %rdx as our temp variable throughout */
	pushq	%rdx
@@ -1142,7 +1145,6 @@ SYM_CODE_START(asm_exc_nmi)
	 */

	swapgs
-	cld
	FENCE_SWAPGS_USER_ENTRY
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
	movq	%rsp, %rdx
-- 
2.19.1.6.gb485710b

From nobody Mon May 11 07:46:26 2026
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
    Thomas Gleixner, x86@kernel.org, Lai Jiangshan, Ingo Molnar,
    Dave Hansen, "H. Peter Anvin"
Subject: [PATCH V5 5/7] x86/entry: Don't call error_entry() for XENPV
Date: Tue, 12 Apr 2022 20:15:39 +0800
Message-Id: <20220412121541.4595-6-jiangshanlai@gmail.com>
In-Reply-To: <20220412121541.4595-1-jiangshanlai@gmail.com>
References: <20220412121541.4595-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

When in XENPV, the kernel is already on the task stack, and it can't
fault on native_iret() nor native_load_gs_index() since XENPV uses its
own pvops for IRET and load_gs_index().  It doesn't need to switch CR3
either.  So there is no reason to call error_entry() in XENPV.

Signed-off-by: Lai Jiangshan
---
 arch/x86/entry/entry_64.S | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 7b6a0f15bb20..3aca7815fe79 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -328,8 +328,17 @@ SYM_CODE_END(ret_from_fork)
	PUSH_AND_CLEAR_REGS
	ENCODE_FRAME_POINTER

-	call	error_entry
-	movq	%rax, %rsp			/* switch to the task stack if from userspace */
+	/*
+	 * Call error_entry() and switch to the task stack if from userspace.
+	 *
+	 * When in XENPV, it is already in the task stack, and it can't fault
+	 * for native_iret() nor native_load_gs_index() since XENPV uses its
+	 * own pvops for IRET and load_gs_index(). And it doesn't need to
+	 * switch the CR3. So it can skip invoking error_entry().
+	 */
+	ALTERNATIVE "call error_entry; movq %rax, %rsp", \
+		"", X86_FEATURE_XENPV
+
	ENCODE_FRAME_POINTER
	UNWIND_HINT_REGS
-- 
2.19.1.6.gb485710b

From nobody Mon May 11 07:46:26 2026
b=QfrgkJk0rmQsV1C0AgqRoNP968hrODxn24IUmkcrxWm6oWDhpnCdfdj56JP7pqP+td XJLG5LdbLFaQIT00ppuS1v6ZVzBnIhJ9pVvVaEdqQGLe1F3mWxllQowzCjoqlVMdRFYN RjHPCt8kJl2r3AwnBcoWGNfY5JSM8RS1RMtTQzXgVB0Dm8PeRwBkyVSEDrh6V/64/Ox1 af+5kUJrKMTM1vGoANwYfCpQg2PSWVwRL6yzQvmhDAGERVEUKyMPDQi0c2b8RrwZfehP lSk4CTeYnS5jULKrgigVkDV77S1l38jClmK/SBuxuZoFh4hMttR1jjkjFNc78nokShd7 R/Ew== X-Gm-Message-State: AOAM5323pV70Pb+RfsiqincjoNykIeTN772jGx4kYwaNsbUqs4hi713i JAufZYgSJXh0TTYZ3CGr+ez7jfUix50= X-Google-Smtp-Source: ABdhPJwUwSEjOQ6f0fkqphrVXFPI2eKvjFOkDqm44pnqo65ZzNs8bqOQfpk2u4QAWn9N7S5xk/kh6Q== X-Received: by 2002:a05:6a00:1ac8:b0:4fa:917f:c1aa with SMTP id f8-20020a056a001ac800b004fa917fc1aamr4312157pfv.2.1649765735470; Tue, 12 Apr 2022 05:15:35 -0700 (PDT) Received: from localhost ([47.251.4.198]) by smtp.gmail.com with ESMTPSA id ng17-20020a17090b1a9100b001c9f79927bfsm3060878pjb.25.2022.04.12.05.15.34 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 12 Apr 2022 05:15:35 -0700 (PDT) From: Lai Jiangshan To: linux-kernel@vger.kernel.org Cc: Borislav Petkov , Peter Zijlstra , Josh Poimboeuf , Andy Lutomirski , Thomas Gleixner , x86@kernel.org, Lai Jiangshan , Ingo Molnar , Dave Hansen , "H. Peter Anvin" , Juergen Gross , "Kirill A. Shutemov" Subject: [PATCH V5 6/7] x86/entry: Convert SWAPGS to swapgs and remove the definition of SWAPGS Date: Tue, 12 Apr 2022 20:15:40 +0800 Message-Id: <20220412121541.4595-7-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20220412121541.4595-1-jiangshanlai@gmail.com> References: <20220412121541.4595-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Lai Jiangshan XENPV doesn't use swapgs_restore_regs_and_return_to_usermode(), error_entry() and entry_SYSENTER_compat(). Change the PV-compatible SWAPGS to the ASM instruction swapgs in these functions. 
Also remove the definition of SWAPGS, since there are no users of
SWAPGS anymore.

Reviewed-by: Juergen Gross
Signed-off-by: Lai Jiangshan
---
 arch/x86/entry/entry_64.S        | 6 +++---
 arch/x86/entry/entry_64_compat.S | 2 +-
 arch/x86/include/asm/irqflags.h  | 8 --------
 3 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 3aca7815fe79..6611032979d9 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1008,7 +1008,7 @@ SYM_CODE_START_LOCAL(error_entry)
	 * We entered from user mode or we're pretending to have entered
	 * from user mode due to an IRET fault.
	 */
-	SWAPGS
+	swapgs
	FENCE_SWAPGS_USER_ENTRY
	/* We have user CR3.  Change to kernel CR3. */
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
@@ -1040,7 +1040,7 @@ SYM_CODE_START_LOCAL(error_entry)
	 * gsbase and proceed.  We'll fix up the exception and land in
	 * .Lgs_change's error handler with kernel gsbase.
	 */
-	SWAPGS
+	swapgs

	/*
	 * Issue an LFENCE to prevent GS speculation, regardless of whether it is a
@@ -1061,7 +1061,7 @@ SYM_CODE_START_LOCAL(error_entry)
	 * We came from an IRET to user mode, so we have user
	 * gsbase and CR3.  Switch to kernel gsbase and CR3:
	 */
-	SWAPGS
+	swapgs
	FENCE_SWAPGS_USER_ENTRY
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax

diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 4fdb007cddbd..c5aeb0819707 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -50,7 +50,7 @@ SYM_CODE_START(entry_SYSENTER_compat)
	UNWIND_HINT_EMPTY
	ENDBR
	/* Interrupts are off on entry. */
-	SWAPGS
+	swapgs

	pushq	%rax
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 111104d1c2cd..7793e52d6237 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -137,14 +137,6 @@ static __always_inline void arch_local_irq_restore(unsigned long flags)
	if (!arch_irqs_disabled_flags(flags))
		arch_local_irq_enable();
 }
-#else
-#ifdef CONFIG_X86_64
-#ifdef CONFIG_XEN_PV
-#define SWAPGS	ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
-#else
-#define SWAPGS	swapgs
-#endif
-#endif
 #endif /* !__ASSEMBLY__ */

 #endif
-- 
2.19.1.6.gb485710b

From nobody Mon May 11 07:46:26 2026
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86@kernel.org, Lai Jiangshan, Ingo Molnar,
	Dave Hansen, "H. Peter Anvin", Joerg Roedel, "Kirill A. Shutemov",
	"Chang S. Bae", Kees Cook
Subject: [PATCH V5 7/7] x86/entry: Use idtentry macro for entry_INT80_compat
Date: Tue, 12 Apr 2022 20:15:41 +0800
Message-Id: <20220412121541.4595-8-jiangshanlai@gmail.com>
In-Reply-To: <20220412121541.4595-1-jiangshanlai@gmail.com>
References: <20220412121541.4595-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

entry_INT80_compat is identical to the idtentry macro except for some
special handling of %rax in the prolog.

Add that prolog to idtentry and use idtentry for entry_INT80_compat.

Signed-off-by: Lai Jiangshan
---
 arch/x86/entry/entry_64.S        |  18 ++++
 arch/x86/entry/entry_64_compat.S | 103 -------------------------------
 arch/x86/include/asm/idtentry.h  |  47 ++++++++++++++
 arch/x86/include/asm/proto.h     |   4 ----
 4 files changed, 65 insertions(+), 107 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 6611032979d9..d49b93f494df 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -375,6 +375,24 @@ SYM_CODE_START(\asmsym)
	pushq	$-1			/* ORIG_RAX: no syscall to restart */
	.endif

+	.if \vector == IA32_SYSCALL_VECTOR
+	/*
+	 * User tracing code (ptrace or signal handlers) might assume
+	 * that the saved RAX contains a 32-bit number when we're
+	 * invoking a 32-bit syscall.  Just in case the high bits are
+	 * nonzero, zero-extend the syscall number.  (This could almost
+	 * certainly be deleted with no ill effects.)
+	 */
+	movl	%eax, %eax
+
+	/*
+	 * do_int80_syscall_32() expects regs->orig_ax to be user ax,
+	 * and regs->ax to be $-ENOSYS.
+	 */
+	movq	%rax, (%rsp)
+	movq	$-ENOSYS, %rax
+	.endif
+
	.if \vector == X86_TRAP_BP
	/*
	 * If coming from kernel space, create a 6-word gap to allow the
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index c5aeb0819707..6866151bbef3 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -315,106 +315,3 @@ sysret32_from_system_call:
	swapgs
	sysretl
 SYM_CODE_END(entry_SYSCALL_compat)
-
-/*
- * 32-bit legacy system call entry.
- *
- * 32-bit x86 Linux system calls traditionally used the INT $0x80
- * instruction.  INT $0x80 lands here.
- *
- * This entry point can be used by 32-bit and 64-bit programs to perform
- * 32-bit system calls.  Instances of INT $0x80 can be found inline in
- * various programs and libraries.  It is also used by the vDSO's
- * __kernel_vsyscall fallback for hardware that doesn't support a faster
- * entry method.  Restarted 32-bit system calls also fall back to INT
- * $0x80 regardless of what instruction was originally used to do the
- * system call.
- *
- * This is considered a slow path.  It is not used by most libc
- * implementations on modern hardware except during process startup.
- *
- * Arguments:
- * eax  system call number
- * ebx  arg1
- * ecx  arg2
- * edx  arg3
- * esi  arg4
- * edi  arg5
- * ebp  arg6
- */
-SYM_CODE_START(entry_INT80_compat)
-	UNWIND_HINT_EMPTY
-	ENDBR
-	/*
-	 * Interrupts are off on entry.
-	 */
-	ASM_CLAC			/* Do this early to minimize exposure */
-	SWAPGS
-
-	/*
-	 * User tracing code (ptrace or signal handlers) might assume that
-	 * the saved RAX contains a 32-bit number when we're invoking a 32-bit
-	 * syscall.  Just in case the high bits are nonzero, zero-extend
-	 * the syscall number.  (This could almost certainly be deleted
-	 * with no ill effects.)
-	 */
-	movl	%eax, %eax
-
-	/* switch to thread stack expects orig_ax and rdi to be pushed */
-	pushq	%rax			/* pt_regs->orig_ax */
-	pushq	%rdi			/* pt_regs->di */
-
-	/* Need to switch before accessing the thread stack. */
-	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
-
-	/* In the Xen PV case we already run on the thread stack. */
-	ALTERNATIVE "", "jmp .Lint80_keep_stack", X86_FEATURE_XENPV
-
-	movq	%rsp, %rdi
-	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
-	pushq	6*8(%rdi)		/* regs->ss */
-	pushq	5*8(%rdi)		/* regs->rsp */
-	pushq	4*8(%rdi)		/* regs->eflags */
-	pushq	3*8(%rdi)		/* regs->cs */
-	pushq	2*8(%rdi)		/* regs->ip */
-	pushq	1*8(%rdi)		/* regs->orig_ax */
-	pushq	(%rdi)			/* pt_regs->di */
-.Lint80_keep_stack:
-
-	pushq	%rsi			/* pt_regs->si */
-	xorl	%esi, %esi		/* nospec   si */
-	pushq	%rdx			/* pt_regs->dx */
-	xorl	%edx, %edx		/* nospec   dx */
-	pushq	%rcx			/* pt_regs->cx */
-	xorl	%ecx, %ecx		/* nospec   cx */
-	pushq	$-ENOSYS		/* pt_regs->ax */
-	pushq	%r8			/* pt_regs->r8 */
-	xorl	%r8d, %r8d		/* nospec   r8 */
-	pushq	%r9			/* pt_regs->r9 */
-	xorl	%r9d, %r9d		/* nospec   r9 */
-	pushq	%r10			/* pt_regs->r10 */
-	xorl	%r10d, %r10d		/* nospec   r10 */
-	pushq	%r11			/* pt_regs->r11 */
-	xorl	%r11d, %r11d		/* nospec   r11 */
-	pushq	%rbx			/* pt_regs->rbx */
-	xorl	%ebx, %ebx		/* nospec   rbx */
-	pushq	%rbp			/* pt_regs->rbp */
-	xorl	%ebp, %ebp		/* nospec   rbp */
-	pushq	%r12			/* pt_regs->r12 */
-	xorl	%r12d, %r12d		/* nospec   r12 */
-	pushq	%r13			/* pt_regs->r13 */
-	xorl	%r13d, %r13d		/* nospec   r13 */
-	pushq	%r14			/* pt_regs->r14 */
-	xorl	%r14d, %r14d		/* nospec   r14 */
-	pushq	%r15			/* pt_regs->r15 */
-	xorl	%r15d, %r15d		/* nospec   r15 */
-
-	UNWIND_HINT_REGS
-
-	cld
-
-	movq	%rsp, %rdi
-	call	do_int80_syscall_32
-	jmp	swapgs_restore_regs_and_return_to_usermode
-SYM_CODE_END(entry_INT80_compat)
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 72184b0b2219..5bf8a01d31f3 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -206,6 +206,20 @@ __visible noinstr void func(struct pt_regs *regs,	\
									\
 static noinline void __##func(struct pt_regs *regs, u32 vector)

+/**
+ * DECLARE_IDTENTRY_IA32_EMULATION - Declare functions for int80
+ * @vector:	Vector number (ignored for C)
+ * @asm_func:	Function name of the entry point
+ * @cfunc:	The C handler called from the ASM entry point (ignored for C)
+ *
+ * Declares two functions:
+ * - The ASM entry point: asm_func
+ * - The XEN PV trap entry point: xen_##asm_func (maybe unused)
+ */
+#define DECLARE_IDTENTRY_IA32_EMULATION(vector, asm_func, cfunc)	\
+	asmlinkage void asm_func(void);					\
+	asmlinkage void xen_##asm_func(void)
+
 /**
  * DECLARE_IDTENTRY_SYSVEC - Declare functions for system vector entry points
  * @vector:	Vector number (ignored for C)
@@ -432,6 +446,35 @@ __visible noinstr void func(struct pt_regs *regs,	\
 #define DECLARE_IDTENTRY_ERRORCODE(vector, func)			\
	idtentry vector asm_##func func has_error_code=1

+/*
+ * 32-bit legacy system call entry.
+ *
+ * 32-bit x86 Linux system calls traditionally used the INT $0x80
+ * instruction.  INT $0x80 lands here.
+ *
+ * This entry point can be used by 32-bit and 64-bit programs to perform
+ * 32-bit system calls.  Instances of INT $0x80 can be found inline in
+ * various programs and libraries.  It is also used by the vDSO's
+ * __kernel_vsyscall fallback for hardware that doesn't support a faster
+ * entry method.  Restarted 32-bit system calls also fall back to INT
+ * $0x80 regardless of what instruction was originally used to do the
+ * system call.
+ *
+ * This is considered a slow path.  It is not used by most libc
+ * implementations on modern hardware except during process startup.
+ *
+ * Arguments:
+ * eax  system call number
+ * ebx  arg1
+ * ecx  arg2
+ * edx  arg3
+ * esi  arg4
+ * edi  arg5
+ * ebp  arg6
+ */
+#define DECLARE_IDTENTRY_IA32_EMULATION(vector, asm_func, cfunc)	\
+	idtentry vector asm_func cfunc has_error_code=0
+
 /* Special case for 32bit IRET 'trap'.  Do not emit ASM code */
 #define DECLARE_IDTENTRY_SW(vector, func)

@@ -642,6 +685,10 @@ DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER, common_interrupt);
 DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER, spurious_interrupt);
 #endif

+#ifdef CONFIG_IA32_EMULATION
+DECLARE_IDTENTRY_IA32_EMULATION(IA32_SYSCALL_VECTOR, entry_INT80_compat, do_int80_syscall_32);
+#endif
+
 /* System vector entry points */
 #ifdef CONFIG_X86_LOCAL_APIC
 DECLARE_IDTENTRY_SYSVEC(ERROR_APIC_VECTOR, sysvec_error_interrupt);
diff --git a/arch/x86/include/asm/proto.h b/arch/x86/include/asm/proto.h
index 0f899c8d7a4e..4700d1650410 100644
--- a/arch/x86/include/asm/proto.h
+++ b/arch/x86/include/asm/proto.h
@@ -28,10 +28,6 @@ void entry_SYSENTER_compat(void);
 void __end_entry_SYSENTER_compat(void);
 void entry_SYSCALL_compat(void);
 void entry_SYSCALL_compat_safe_stack(void);
-void entry_INT80_compat(void);
-#ifdef CONFIG_XEN_PV
-void xen_entry_INT80_compat(void);
-#endif
 #endif

 void x86_configure_nx(void);
-- 
2.19.1.6.gb485710b