From: Tom Lendacky <thomas.lendacky@amd.com>
To: linux-kernel@vger.kernel.org
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
    Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v5 4/6] x86/sev: Allow for use of the early boot GHCB for PSC requests
Date: Tue, 27 Sep 2022 12:04:19 -0500
Message-ID: <1913bdc41fed623a3bea273615303a98db35cedc.1664298261.git.thomas.lendacky@amd.com>
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.37.3

Using a GHCB for a page state change (as opposed to the MSR protocol)
allows for multiple pages to be processed in a single request.

In preparation for early PSC requests in support of unaccepted memory,
update the invocation of vmgexit_psc() to be able to use the early boot
GHCB and not just the per-CPU GHCB structure.

In order to use the proper GHCB (early boot vs. per-CPU), set a flag that
indicates when the per-CPU GHCBs are available and registered. For APs,
the per-CPU GHCBs are created before they are started and registered upon
startup, so this flag can be used globally for the BSP and APs. This will
allow for a significant reduction in the number of MSR protocol page
state change requests when accepting memory.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kernel/sev.c | 61 +++++++++++++++++++++++++++----------------
 1 file changed, 38 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 664a4de91757..0b958d77abb4 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -117,7 +117,19 @@ static DEFINE_PER_CPU(struct sev_es_save_area *, sev_vmsa);
 
 struct sev_config {
 	__u64 debug		: 1,
-	      __reserved	: 63;
+
+	      /*
+	       * A flag used by __set_pages_state() that indicates when the
+	       * per-CPU GHCB has been created and registered and thus can be
+	       * used by the BSP instead of the early boot GHCB.
+	       *
+	       * For APs, the per-CPU GHCB is created before they are started
+	       * and registered upon startup, so this flag can be used globally
+	       * for the BSP and APs.
+	       */
+	      ghcbs_initialized	: 1,
+
+	      __reserved	: 62;
 };
 
 static struct sev_config sev_cfg __read_mostly;
@@ -660,7 +672,7 @@ static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool valid
 	}
 }
 
-static void __init early_set_pages_state(unsigned long paddr, unsigned int npages, enum psc_op op)
+static void early_set_pages_state(unsigned long paddr, unsigned int npages, enum psc_op op)
 {
 	unsigned long paddr_end;
 	u64 val;
@@ -742,26 +754,13 @@ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op
 		WARN(1, "invalid memory op %d\n", op);
 }
 
-static int vmgexit_psc(struct snp_psc_desc *desc)
+static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
 {
 	int cur_entry, end_entry, ret = 0;
 	struct snp_psc_desc *data;
-	struct ghcb_state state;
 	struct es_em_ctxt ctxt;
-	unsigned long flags;
-	struct ghcb *ghcb;
 
-	/*
-	 * __sev_get_ghcb() needs to run with IRQs disabled because it is using
-	 * a per-CPU GHCB.
-	 */
-	local_irq_save(flags);
-
-	ghcb = __sev_get_ghcb(&state);
-	if (!ghcb) {
-		ret = 1;
-		goto out_unlock;
-	}
+	vc_ghcb_invalidate(ghcb);
 
 	/* Copy the input desc into GHCB shared buffer */
 	data = (struct snp_psc_desc *)ghcb->shared_buffer;
@@ -818,20 +817,18 @@ static int vmgexit_psc(struct snp_psc_desc *desc)
 	}
 
 out:
-	__sev_put_ghcb(&state);
-
-out_unlock:
-	local_irq_restore(flags);
-
 	return ret;
 }
 
 static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 			      unsigned long vaddr_end, int op)
 {
+	struct ghcb_state state;
 	struct psc_hdr *hdr;
 	struct psc_entry *e;
+	unsigned long flags;
 	unsigned long pfn;
+	struct ghcb *ghcb;
 	int i;
 
 	hdr = &data->hdr;
@@ -861,8 +858,20 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 		i++;
 	}
 
-	if (vmgexit_psc(data))
+	local_irq_save(flags);
+
+	if (sev_cfg.ghcbs_initialized)
+		ghcb = __sev_get_ghcb(&state);
+	else
+		ghcb = boot_ghcb;
+
+	if (!ghcb || vmgexit_psc(ghcb, data))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
+	if (sev_cfg.ghcbs_initialized)
+		__sev_put_ghcb(&state);
+
+	local_irq_restore(flags);
 }
 
 static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
@@ -870,6 +879,10 @@ static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
 	unsigned long vaddr_end, next_vaddr;
 	struct snp_psc_desc desc;
 
+	/* Use the MSR protocol when a GHCB is not available. */
+	if (!boot_ghcb)
+		return early_set_pages_state(__pa(vaddr), npages, op);
+
 	vaddr = vaddr & PAGE_MASK;
 	vaddr_end = vaddr + ((unsigned long)npages << PAGE_SHIFT);
 
@@ -1248,6 +1261,8 @@ void setup_ghcb(void)
 	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 		snp_register_per_cpu_ghcb();
 
+	sev_cfg.ghcbs_initialized = true;
+
 	return;
 }
 
-- 
2.37.3
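For readers who want to see the resulting control flow in isolation, below is a
minimal user-space sketch of the GHCB selection that __set_pages_state() performs
after this patch: use the per-CPU GHCB once sev_cfg.ghcbs_initialized is set, and
fall back to the early boot GHCB before that. All types and helpers in the sketch
(struct ghcb, get_percpu_ghcb(), irqs_*_stub(), psc_request()) are simplified
stand-ins for illustration only, not the kernel's real SEV interfaces.

/*
 * Minimal user-space model of the GHCB selection added to __set_pages_state().
 * All types and helpers here are simplified stand-ins, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct ghcb { int id; };			/* stand-in for the kernel's struct ghcb */

static struct ghcb early_ghcb  = { .id = 0 };
static struct ghcb percpu_ghcb = { .id = 1 };

static struct ghcb *boot_ghcb = &early_ghcb;	/* early boot GHCB */
static bool ghcbs_initialized;			/* models sev_cfg.ghcbs_initialized */

/* Stand-ins for __sev_get_ghcb()/__sev_put_ghcb() and IRQ control. */
static struct ghcb *get_percpu_ghcb(void)	{ return &percpu_ghcb; }
static void put_percpu_ghcb(void)		{ }
static void irqs_disable_stub(void)		{ }
static void irqs_enable_stub(void)		{ }

/* Mirrors the selection logic this patch adds to __set_pages_state(). */
static void psc_request(void)
{
	struct ghcb *ghcb;

	irqs_disable_stub();

	if (ghcbs_initialized)
		ghcb = get_percpu_ghcb();	/* per-CPU GHCB once registered */
	else
		ghcb = boot_ghcb;		/* early boot GHCB before that */

	if (!ghcb) {
		/* The kernel would terminate the guest here (GHCB_TERM_PSC). */
		fprintf(stderr, "no usable GHCB\n");
		exit(1);
	}

	printf("PSC request via %s GHCB (id %d)\n",
	       ghcbs_initialized ? "per-CPU" : "early boot", ghcb->id);

	if (ghcbs_initialized)
		put_percpu_ghcb();

	irqs_enable_stub();
}

int main(void)
{
	psc_request();			/* before setup_ghcb(): early boot GHCB */
	ghcbs_initialized = true;	/* setup_ghcb() sets the flag after registration */
	psc_request();			/* afterwards: per-CPU GHCB */
	return 0;
}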