From nobody Thu Nov 14 04:38:02 2024
From: Tom Lendacky
CC: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
 Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v5 1/6] x86/sev: Fix calculation of end address based on
 number of pages
Date: Tue, 27 Sep 2022 12:04:16 -0500
X-Mailer: git-send-email 2.37.3
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

When calculating an end address based on an unsigned int number of pages,
the number of pages must be cast to an unsigned long so that any value
greater than or equal to 0x100000 does not result in zero after the shift.
Fixes: 5e5ccff60a29 ("x86/sev: Add helper for validating pages in early enc attribute changes")
Signed-off-by: Tom Lendacky
Tested-by: Dionna Glaze
---
 arch/x86/kernel/sev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index c05f0124c410..cac56540929d 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -649,7 +649,7 @@ static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool valid
 	int rc;
 
 	vaddr = vaddr & PAGE_MASK;
-	vaddr_end = vaddr + (npages << PAGE_SHIFT);
+	vaddr_end = vaddr + ((unsigned long)npages << PAGE_SHIFT);
 
 	while (vaddr < vaddr_end) {
 		rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
@@ -666,7 +666,7 @@ static void __init early_set_pages_state(unsigned long paddr, unsigned int npage
 	u64 val;
 
 	paddr = paddr & PAGE_MASK;
-	paddr_end = paddr + (npages << PAGE_SHIFT);
+	paddr_end = paddr + ((unsigned long)npages << PAGE_SHIFT);
 
 	while (paddr < paddr_end) {
 		/*
-- 
2.37.3

From nobody Thu Nov 14 04:38:02 2024
From: Tom Lendacky
CC: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
 Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v5 2/6] x86/sev: Fix calculation of end address based on
 number of pages
Date: Tue, 27 Sep 2022 12:04:17 -0500
Message-ID: <91dd756197cdeffa5b81d812a55fc8b74924b344.1664298261.git.thomas.lendacky@amd.com>
X-Mailer: git-send-email 2.37.3
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

When calculating an end address based on an unsigned int number of pages,
the number of pages must be cast to an unsigned long so that any value
greater than or equal to 0x100000 does not result in zero after the shift.
Fixes: dc3f3d2474b8 ("x86/mm: Validate memory when changing the C-bit")
Signed-off-by: Tom Lendacky
---
 arch/x86/kernel/sev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index cac56540929d..c90a47c39f6b 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -875,7 +875,7 @@ static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
 		panic("SNP: failed to allocate memory for PSC descriptor\n");
 
 	vaddr = vaddr & PAGE_MASK;
-	vaddr_end = vaddr + (npages << PAGE_SHIFT);
+	vaddr_end = vaddr + ((unsigned long)npages << PAGE_SHIFT);
 
 	while (vaddr < vaddr_end) {
 		/* Calculate the last vaddr that fits in one struct snp_psc_desc. */
-- 
2.37.3

From nobody Thu Nov 14 04:38:02 2024
From: Tom Lendacky
CC: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "Kirill A. Shutemov", "H. Peter Anvin", Michael Roth, Joerg Roedel,
 Andy Lutomirski, Peter Zijlstra
Subject: [PATCH v5 3/6] x86/sev: Put PSC struct on the stack in prep for
 unaccepted memory support
Date: Tue, 27 Sep 2022 12:04:18 -0500
X-Mailer: git-send-email 2.37.3
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

In advance of providing support for unaccepted memory, switch from using
kmalloc() for allocating the Page State Change (PSC) structure to using a
local variable that lives on the stack. This is needed to avoid a possible
recursive call into set_pages_state() if the kmalloc() call requires (more)
memory to be accepted, which would result in a hang.

The current size of the PSC struct is 2,032 bytes. To make the struct more
stack friendly, reduce the number of PSC entries from 253 down to 64,
resulting in a size of 520 bytes.
This is a nice compromise on struct size and total PSC requests while still
allowing parallel PSC operations across vCPUs. If the reduction in PSC
entries results in any kind of performance issue (that is not seen at the
moment), use of a larger static PSC struct, with fallback to the smaller
stack version, can be investigated. For more background info on this
decision, see the subthread in the Link: tag below.

Signed-off-by: Tom Lendacky
Link: https://lore.kernel.org/lkml/658c455c40e8950cb046dd885dd19dc1c52d060a.1659103274.git.thomas.lendacky@amd.com
---
 arch/x86/include/asm/sev-common.h |  9 +++++++--
 arch/x86/kernel/sev.c             | 10 ++--------
 2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index b8357d6ecd47..8ddfdbe521d4 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -106,8 +106,13 @@ enum psc_op {
 #define GHCB_HV_FT_SNP			BIT_ULL(0)
 #define GHCB_HV_FT_SNP_AP_CREATION	BIT_ULL(1)
 
-/* SNP Page State Change NAE event */
-#define VMGEXIT_PSC_MAX_ENTRY		253
+/*
+ * SNP Page State Change NAE event
+ * The VMGEXIT_PSC_MAX_ENTRY determines the size of the PSC structure, which
+ * is a local stack variable in set_pages_state(). Do not increase this value
+ * without evaluating the impact to stack usage.
+ */
+#define VMGEXIT_PSC_MAX_ENTRY		64
 
 struct psc_hdr {
 	u16 cur_entry;
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index c90a47c39f6b..664a4de91757 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -868,11 +868,7 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
 {
 	unsigned long vaddr_end, next_vaddr;
-	struct snp_psc_desc *desc;
-
-	desc = kmalloc(sizeof(*desc), GFP_KERNEL_ACCOUNT);
-	if (!desc)
-		panic("SNP: failed to allocate memory for PSC descriptor\n");
+	struct snp_psc_desc desc;
 
 	vaddr = vaddr & PAGE_MASK;
 	vaddr_end = vaddr + ((unsigned long)npages << PAGE_SHIFT);
@@ -882,12 +878,10 @@ static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
 		next_vaddr = min_t(unsigned long, vaddr_end,
 				   (VMGEXIT_PSC_MAX_ENTRY * PAGE_SIZE) + vaddr);
 
-		__set_pages_state(desc, vaddr, next_vaddr, op);
+		__set_pages_state(&desc, vaddr, next_vaddr, op);
 
 		vaddr = next_vaddr;
 	}
-
-	kfree(desc);
 }
 
 void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
-- 
2.37.3

From nobody Thu Nov 14 04:38:02 2024
(Postfix) with ESMTPS id DBD78501B0 for ; Tue, 27 Sep 2022 10:05:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=CUqffsHGaUe29unJg4mhVKeSpdCfS6/gwEhl6MC510CJkD0Al0nt/u+DBTqiIbdPU/5+hK12iD/T55h7NqWxQZnNUfOgt9ExJU46dP8bPAjMqjZAjYP/nNAqWlt/VNY7nndMGBtzwzIciJElITKzeSR1EiZtG8BEXo4ui2eGZnRqJuGoRkpfawn6eyiPXQzRFfNOjts6LoP6bjXPJ7UPZELVJBZzqsU0pV+lfP30UH4e9XTCUjKbMqDSrTTp+9uLGWrxRj81fLdmmI8daqEwk628AuWTEfsnIkljRJTcyqJGPYrHrIiFhD0a/KnvrQJ0fXhaZ7ejZ4+32pLWTuEf9Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=8mz0PBhkH+vuGxEvzF9VW1oCiYeGK2mgt/kFo6tkrqk=; b=fi5oUVoEWCbki6V9+djz9/Asw0EL6YTTzLBs97JRoH8g4bXWOuRTmaj8rgUAQU4bj2nhbsllll4zl41jvSDZPbIw679CbrrKBnM1BrZjQTqFtm9Zpqcmj/veU/ac7OCvIRbmEcbEgBWa2rN40Nwt/vlWPQ3vsJC2UK+zufB4iy0+HekwkwLHVGCHtM9OG5TYggsT0OCOtdPLC7sox0O0dLA8u4Lswjp8Eklms5ZZWr/+XOzaxga5EITbj4Ze+7qe5b7sC9olHyHi5MWWnV31umrcToDdeImRZ1C31lLkPjP3ZLwz5sBFWlPeVsXZJST/ud0PbWkaJrk1PiKM6NdybQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 165.204.84.17) smtp.rcpttodomain=vger.kernel.org smtp.mailfrom=amd.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=8mz0PBhkH+vuGxEvzF9VW1oCiYeGK2mgt/kFo6tkrqk=; b=K2l+SCiPYa58iHki8L+4HNt4fzM/ZHKbmlok08tfaYnNKsVSODszqfh7WRjhMTHAN2O+ZQhNUk+RiUHvd8v7Vu4qlsd7c0hg3i659WGPcRrUJMgVWaOFYmsigmOfdxK0kI/UI2B12kPxLlgZEQrZk1UwaVOhyc7mA+gE2WwykTg= Received: from BN9PR03CA0507.namprd03.prod.outlook.com (2603:10b6:408:130::32) by DM6PR12MB4927.namprd12.prod.outlook.com (2603:10b6:5:20a::9) with Microsoft SMTP Server 
(version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5654.26; Tue, 27 Sep 2022 17:05:06 +0000 Received: from BN8NAM11FT032.eop-nam11.prod.protection.outlook.com (2603:10b6:408:130:cafe::b2) by BN9PR03CA0507.outlook.office365.com (2603:10b6:408:130::32) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5676.17 via Frontend Transport; Tue, 27 Sep 2022 17:05:06 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=amd.com; Received-SPF: Pass (protection.outlook.com: domain of amd.com designates 165.204.84.17 as permitted sender) receiver=protection.outlook.com; client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C Received: from SATLEXMB04.amd.com (165.204.84.17) by BN8NAM11FT032.mail.protection.outlook.com (10.13.177.88) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.5654.14 via Frontend Transport; Tue, 27 Sep 2022 17:05:06 +0000 Received: from tlendack-t1.amd.com (10.180.168.240) by SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Tue, 27 Sep 2022 12:05:05 -0500 From: Tom Lendacky To: , CC: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "Kirill A. Shutemov" , "H. 
Peter Anvin" , Michael Roth , Joerg Roedel , Andy Lutomirski , Peter Zijlstra Subject: [PATCH v5 4/6] x86/sev: Allow for use of the early boot GHCB for PSC requests Date: Tue, 27 Sep 2022 12:04:19 -0500 Message-ID: <1913bdc41fed623a3bea273615303a98db35cedc.1664298261.git.thomas.lendacky@amd.com> X-Mailer: git-send-email 2.37.3 In-Reply-To: References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.180.168.240] X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com (10.181.40.145) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BN8NAM11FT032:EE_|DM6PR12MB4927:EE_ X-MS-Office365-Filtering-Correlation-Id: fc48cb0f-8ed3-4b5b-5cc0-08daa0aa6d64 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: r9htvoBHVTJ+/k07bBdcXRf07rdQXTqHVJTrAqW6qlGKXhAhHRHlN6bNZ/H/KAE+sWIqZGAhkiH6J77mrC/jVFzSufGM0QYRPowZr53a5B2rC4viMvEcg6EfFe4H8KQ42CgcuUpCLknDKUuaXVt/mmuFnrLikGJJBTtpGfIXPTrXEUEMiScWo786xu2Ug4lG0mw3+Nj+6UCONcEDy7e8rTiZN4QhKCpOPpk1UpCT2qhIqNeqFDm01O43eqe3ylCthYyZCKRbpvOhT3OsZYfd47bslGlsbdbO57wwQDrTUiRKIEAlJ5kK/nldjx9Iq4WJUB/+KUPc+h0vTlpbbMEiQoT9j3EN0ckwy91NENGk4ED9AqXT/QBdxjAnd+VqhTmVS0kMx3VtL/k0y5OvAWHbL24pvykNaTob9uTOqEPQOw7Lv4prb3KXTlKzXOPhraGbPvKV4icsBmfP14SasEjoOvFDwTrcdDVQMjDfMCGcjyZxRGstii+4iYFyNSVkXFoXJkzAXUodVO7VONHGpvJdA8uvurcul6uFvLjkkFAYEzgJ50CysdbUahzfoCAxX1nLSbWll5MH9EBGT2dTZMdCX5XsFpoJgAw+3dz0uBy2i+AdZPNjfsAukVLwvqvNDdOYYwIpipvbdHnE7xqSLEGwZgaACAZtDWFFxqHoin7fFjDbHo2DjbH199qg8AVxGkH0DrBX1D+98PHBsVrnc4HCKE9HciKWF+JK5gp3AtWuNYNhCpWpM2RrTMxkb4nvwH69lmwKkRT11EGwv6vXeceP8cD13rVFe/Hc3neVf5mWFW56jMr5OS1ajwTrE/R5b4J9 X-Forefront-Antispam-Report: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Using a GHCB for a page state change (as opposed to the MSR protocol)
allows multiple pages to be processed in a single request. In preparation
for early PSC requests in support of unaccepted memory, update the
invocation of vmgexit_psc() so that it can use the early boot GHCB and
not just the per-CPU GHCB structure.

In order to use the proper GHCB (early boot vs. per-CPU), set a flag that
indicates when the per-CPU GHCBs are available and registered. For APs,
the per-CPU GHCBs are created before they are started and registered upon
startup, so this flag can be used globally for the BSP and APs instead of
creating a per-CPU flag.
This will allow for a significant reduction in the number of MSR protocol
page state change requests when accepting memory.

Signed-off-by: Tom Lendacky 
---
 arch/x86/kernel/sev.c | 61 +++++++++++++++++++++++++++----------------
 1 file changed, 38 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 664a4de91757..0b958d77abb4 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -117,7 +117,19 @@ static DEFINE_PER_CPU(struct sev_es_save_area *, sev_vmsa);
 
 struct sev_config {
 	__u64 debug		: 1,
-	      __reserved	: 63;
+
+	      /*
+	       * A flag used by __set_pages_state() that indicates when the
+	       * per-CPU GHCB has been created and registered and thus can be
+	       * used by the BSP instead of the early boot GHCB.
+	       *
+	       * For APs, the per-CPU GHCB is created before they are started
+	       * and registered upon startup, so this flag can be used globally
+	       * for the BSP and APs.
+	       */
+	      ghcbs_initialized	: 1,
+
+	      __reserved	: 62;
 };
 
 static struct sev_config sev_cfg __read_mostly;
@@ -660,7 +672,7 @@ static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool valid
 	}
 }
 
-static void __init early_set_pages_state(unsigned long paddr, unsigned int npages, enum psc_op op)
+static void early_set_pages_state(unsigned long paddr, unsigned int npages, enum psc_op op)
 {
 	unsigned long paddr_end;
 	u64 val;
@@ -742,26 +754,13 @@ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op
 	WARN(1, "invalid memory op %d\n", op);
 }
 
-static int vmgexit_psc(struct snp_psc_desc *desc)
+static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
 {
 	int cur_entry, end_entry, ret = 0;
 	struct snp_psc_desc *data;
-	struct ghcb_state state;
 	struct es_em_ctxt ctxt;
-	unsigned long flags;
-	struct ghcb *ghcb;
 
-	/*
-	 * __sev_get_ghcb() needs to run with IRQs disabled because it is using
-	 * a per-CPU GHCB.
-	 */
-	local_irq_save(flags);
-
-	ghcb = __sev_get_ghcb(&state);
-	if (!ghcb) {
-		ret = 1;
-		goto out_unlock;
-	}
+	vc_ghcb_invalidate(ghcb);
 
 	/* Copy the input desc into GHCB shared buffer */
 	data = (struct snp_psc_desc *)ghcb->shared_buffer;
@@ -818,20 +817,18 @@ static int vmgexit_psc(struct snp_psc_desc *desc)
 	}
 
 out:
-	__sev_put_ghcb(&state);
-
-out_unlock:
-	local_irq_restore(flags);
-
 	return ret;
 }
 
 static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 			      unsigned long vaddr_end, int op)
 {
+	struct ghcb_state state;
 	struct psc_hdr *hdr;
 	struct psc_entry *e;
+	unsigned long flags;
 	unsigned long pfn;
+	struct ghcb *ghcb;
 	int i;
 
 	hdr = &data->hdr;
@@ -861,8 +858,20 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 		i++;
 	}
 
-	if (vmgexit_psc(data))
+	local_irq_save(flags);
+
+	if (sev_cfg.ghcbs_initialized)
+		ghcb = __sev_get_ghcb(&state);
+	else
+		ghcb = boot_ghcb;
+
+	if (!ghcb || vmgexit_psc(ghcb, data))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
+	if (sev_cfg.ghcbs_initialized)
+		__sev_put_ghcb(&state);
+
+	local_irq_restore(flags);
 }
 
 static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
@@ -870,6 +879,10 @@ static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
 	unsigned long vaddr_end, next_vaddr;
 	struct snp_psc_desc desc;
 
+	/* Use the MSR protocol when a GHCB is not available. */
+	if (!boot_ghcb)
+		return early_set_pages_state(__pa(vaddr), npages, op);
+
 	vaddr = vaddr & PAGE_MASK;
 	vaddr_end = vaddr + ((unsigned long)npages << PAGE_SHIFT);
 
@@ -1248,6 +1261,8 @@ void setup_ghcb(void)
 	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 		snp_register_per_cpu_ghcb();
 
+	sev_cfg.ghcbs_initialized = true;
+
 	return;
 }
-- 
2.37.3
From: Tom Lendacky 
To: , 
CC: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen ,
 "Kirill A. Shutemov" , "H. Peter Anvin" , Michael Roth , Joerg Roedel ,
 Andy Lutomirski , Peter Zijlstra 
Subject: [PATCH v5 5/6] x86/sev: Use large PSC requests if applicable
Date: Tue, 27 Sep 2022 12:04:20 -0500
Message-ID: <632a4d3c7fa2f30d2d0d1c442b18d556f85c3449.1664298261.git.thomas.lendacky@amd.com>
In-Reply-To: 
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>
Content-Type: text/plain; charset="utf-8"

In advance of providing support for unaccepted memory, issue 2M Page
State Change (PSC) requests when the address range allows for it. By
using a 2M page size, more PSC operations can be handled in a single
request to the hypervisor. The hypervisor will determine if it can
accommodate the larger request by checking the mapping in the nested page
table. If mapped as a large page, then the 2M page request can be
performed, otherwise the 2M page request will be broken down into 512 4K
page requests. This is still more efficient than having the guest perform
multiple PSC requests in order to process the 512 4K pages.

In conjunction with the 2M PSC requests, attempt to perform the
associated PVALIDATE instruction using the 2M page size. If PVALIDATE
fails with a size mismatch, then fall back to validating 512 4K pages.

To do this, page validation is modified to work with the PSC structure
and not just a virtual address range.
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/sev.h |   4 ++
 arch/x86/kernel/sev.c      | 125 ++++++++++++++++++++++++-------------
 2 files changed, 84 insertions(+), 45 deletions(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 19514524f0f8..0007ab04ac5f 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -79,11 +79,15 @@ extern void vc_no_ghcb(void);
 extern void vc_boot_ghcb(void);
 extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
 
+/* PVALIDATE return codes */
+#define PVALIDATE_FAIL_SIZEMISMATCH	6
+
 /* Software defined (when rFlags.CF = 1) */
 #define PVALIDATE_FAIL_NOUPDATE		255
 
 /* RMP page size */
 #define RMP_PG_SIZE_4K			0
+#define RMP_PG_SIZE_2M			1
 
 #define RMPADJUST_VMSA_PAGE_BIT		BIT(16)
 
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 0b958d77abb4..eabb8dd5be5b 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -655,32 +655,58 @@ static u64 __init get_jump_table_addr(void)
 	return ret;
 }
 
-static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
+static void pvalidate_pages(struct snp_psc_desc *desc)
 {
-	unsigned long vaddr_end;
+	struct psc_entry *e;
+	unsigned long vaddr;
+	unsigned int size;
+	unsigned int i;
+	bool validate;
 	int rc;
 
-	vaddr = vaddr & PAGE_MASK;
-	vaddr_end = vaddr + ((unsigned long)npages << PAGE_SHIFT);
+	for (i = 0; i <= desc->hdr.end_entry; i++) {
+		e = &desc->entries[i];
+
+		vaddr = (unsigned long)pfn_to_kaddr(e->gfn);
+		size = e->pagesize ? RMP_PG_SIZE_2M : RMP_PG_SIZE_4K;
+		validate = (e->operation == SNP_PAGE_STATE_PRIVATE) ? true : false;
+
+		rc = pvalidate(vaddr, size, validate);
+		if (rc == PVALIDATE_FAIL_SIZEMISMATCH && size == RMP_PG_SIZE_2M) {
+			unsigned long vaddr_end = vaddr + PMD_PAGE_SIZE;
+
+			for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) {
+				rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
+				if (rc)
+					break;
+			}
+		}
 
-	while (vaddr < vaddr_end) {
-		rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
 		if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc))
 			sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
-
-		vaddr = vaddr + PAGE_SIZE;
 	}
 }
 
-static void early_set_pages_state(unsigned long paddr, unsigned int npages, enum psc_op op)
+static void early_set_pages_state(unsigned long vaddr, unsigned long paddr,
+				  unsigned int npages, enum psc_op op)
 {
 	unsigned long paddr_end;
 	u64 val;
+	int ret;
+
+	vaddr = vaddr & PAGE_MASK;
 
 	paddr = paddr & PAGE_MASK;
 	paddr_end = paddr + ((unsigned long)npages << PAGE_SHIFT);
 
 	while (paddr < paddr_end) {
+		if (op == SNP_PAGE_STATE_SHARED) {
+			/* Page validation must be rescinded before changing to shared */
+			ret = pvalidate(vaddr, RMP_PG_SIZE_4K, false);
+			if (WARN(ret, "Failed to validate address 0x%lx ret %d", paddr, ret))
+				goto e_term;
+		}
+
 		/*
 		 * Use the MSR protocol because this function can be called before
 		 * the GHCB is established.
@@ -701,7 +727,15 @@ static void early_set_pages_state(unsigned long paddr, unsigned int npages, enum
 			     paddr, GHCB_MSR_PSC_RESP_VAL(val)))
 			goto e_term;
 
-		paddr = paddr + PAGE_SIZE;
+		if (op == SNP_PAGE_STATE_PRIVATE) {
+			/* Page validation must be performed after changing to private */
+			ret = pvalidate(vaddr, RMP_PG_SIZE_4K, true);
+			if (WARN(ret, "Failed to validate address 0x%lx ret %d", paddr, ret))
+				goto e_term;
+		}
+
+		vaddr += PAGE_SIZE;
+		paddr += PAGE_SIZE;
 	}
 
 	return;
@@ -720,10 +754,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
 	 * Ask the hypervisor to mark the memory pages as private in the RMP
 	 * table.
 	 */
-	early_set_pages_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
-
-	/* Validate the memory pages after they've been added in the RMP table. */
-	pvalidate_pages(vaddr, npages, true);
+	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_PRIVATE);
 }
 
 void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
@@ -732,11 +763,8 @@ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr
 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 		return;
 
-	/* Invalidate the memory pages before they are marked shared in the RMP table. */
-	pvalidate_pages(vaddr, npages, false);
-
 	/* Ask hypervisor to mark the memory pages shared in the RMP table. */
-	early_set_pages_state(paddr, npages, SNP_PAGE_STATE_SHARED);
+	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_SHARED);
 }
 
 void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op)
@@ -820,10 +848,11 @@ static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
 	return ret;
 }
 
-static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
-			      unsigned long vaddr_end, int op)
+static unsigned long __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
+				       unsigned long vaddr_end, int op)
 {
 	struct ghcb_state state;
+	bool use_large_entry;
 	struct psc_hdr *hdr;
 	struct psc_entry *e;
 	unsigned long flags;
@@ -837,27 +866,37 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 	memset(data, 0, sizeof(*data));
 	i = 0;
 
-	while (vaddr < vaddr_end) {
-		if (is_vmalloc_addr((void *)vaddr))
+	while (vaddr < vaddr_end && i < ARRAY_SIZE(data->entries)) {
+		hdr->end_entry = i;
+
+		if (is_vmalloc_addr((void *)vaddr)) {
 			pfn = vmalloc_to_pfn((void *)vaddr);
-		else
+			use_large_entry = false;
+		} else {
 			pfn = __pa(vaddr) >> PAGE_SHIFT;
+			use_large_entry = true;
+		}
 
 		e->gfn = pfn;
 		e->operation = op;
-		hdr->end_entry = i;
 
-		/*
-		 * Current SNP implementation doesn't keep track of the RMP page
-		 * size so use 4K for simplicity.
-		 */
-		e->pagesize = RMP_PG_SIZE_4K;
+		if (use_large_entry && IS_ALIGNED(vaddr, PMD_PAGE_SIZE) &&
+		    (vaddr_end - vaddr) >= PMD_PAGE_SIZE) {
+			e->pagesize = RMP_PG_SIZE_2M;
+			vaddr += PMD_PAGE_SIZE;
+		} else {
+			e->pagesize = RMP_PG_SIZE_4K;
+			vaddr += PAGE_SIZE;
+		}
 
-		vaddr = vaddr + PAGE_SIZE;
 		e++;
 		i++;
 	}
 
+	/* Page validation must be rescinded before changing to shared */
+	if (op == SNP_PAGE_STATE_SHARED)
+		pvalidate_pages(data);
+
 	local_irq_save(flags);
 
 	if (sev_cfg.ghcbs_initialized)
@@ -865,6 +904,7 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 	else
 		ghcb = boot_ghcb;
 
+	/* Invoke the hypervisor to perform the page state changes */
 	if (!ghcb || vmgexit_psc(ghcb, data))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
 
@@ -872,29 +912,28 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 		__sev_put_ghcb(&state);
 
 	local_irq_restore(flags);
+
+	/* Page validation must be performed after changing to private */
+	if (op == SNP_PAGE_STATE_PRIVATE)
+		pvalidate_pages(data);
+
+	return vaddr;
 }
 
 static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
 {
-	unsigned long vaddr_end, next_vaddr;
 	struct snp_psc_desc desc;
+	unsigned long vaddr_end;
 
 	/* Use the MSR protocol when a GHCB is not available. */
 	if (!boot_ghcb)
-		return early_set_pages_state(__pa(vaddr), npages, op);
+		return early_set_pages_state(vaddr, __pa(vaddr), npages, op);
 
 	vaddr = vaddr & PAGE_MASK;
 	vaddr_end = vaddr + ((unsigned long)npages << PAGE_SHIFT);
 
-	while (vaddr < vaddr_end) {
-		/* Calculate the last vaddr that fits in one struct snp_psc_desc. */
-		next_vaddr = min_t(unsigned long, vaddr_end,
-				   (VMGEXIT_PSC_MAX_ENTRY * PAGE_SIZE) + vaddr);
-
-		__set_pages_state(&desc, vaddr, next_vaddr, op);
-
-		vaddr = next_vaddr;
-	}
+	while (vaddr < vaddr_end)
+		vaddr = __set_pages_state(&desc, vaddr, vaddr_end, op);
 }
 
 void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
@@ -902,8 +941,6 @@ void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 		return;
 
-	pvalidate_pages(vaddr, npages, false);
-
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED);
 }
 
@@ -913,8 +950,6 @@ void snp_set_memory_private(unsigned long vaddr, unsigned int npages)
 		return;
 
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
-
-	pvalidate_pages(vaddr, npages, true);
 }
 
 static int snp_set_vmsa(void *va, bool vmsa)
-- 
2.37.3
From: Tom Lendacky 
To: , 
CC: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen ,
 "Kirill A. Shutemov" , "H. Peter Anvin" , Michael Roth , Joerg Roedel ,
 Andy Lutomirski , Peter Zijlstra 
Subject: [PATCH v5 6/6] x86/sev: Add SNP-specific unaccepted memory support
Date: Tue, 27 Sep 2022 12:04:21 -0500
Message-ID: 
In-Reply-To: 
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>
Content-Type: text/plain; charset="utf-8"

Add SNP-specific hooks to the unaccepted memory support in the boot
path (__accept_memory()) and the core kernel (accept_memory()) in order
to support booting SNP guests when unaccepted memory is present. Without
this support, SNP guests will fail to boot and/or panic() when unaccepted
memory is present in the EFI memory map.

The process of accepting memory under SNP involves invoking the
hypervisor to perform a page state change for the page to private memory
and then issuing a PVALIDATE instruction to accept the page.
Since the boot path and the core kernel paths perform similar operations, move the pvalidate_pages() and vmgexit_psc() functions into sev-shared.c to avoid code duplication. Create the new header file arch/x86/boot/compressed/sev.h because adding the function declaration to any of the existing SEV related header files pulls in too many other header files, causing the build to fail. Signed-off-by: Tom Lendacky --- arch/x86/Kconfig | 1 + arch/x86/boot/compressed/mem.c | 3 + arch/x86/boot/compressed/sev.c | 54 ++++++++++++++- arch/x86/boot/compressed/sev.h | 23 +++++++ arch/x86/include/asm/sev.h | 3 + arch/x86/kernel/sev-shared.c | 104 +++++++++++++++++++++++++++++ arch/x86/kernel/sev.c | 112 ++++---------------------------- arch/x86/mm/unaccepted_memory.c | 4 ++ 8 files changed, 205 insertions(+), 99 deletions(-) create mode 100644 arch/x86/boot/compressed/sev.h diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 34146ecc5bdd..0ad53c3533c2 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1553,6 +1553,7 @@ config AMD_MEM_ENCRYPT select INSTRUCTION_DECODER select ARCH_HAS_CC_PLATFORM select X86_MEM_ENCRYPT + select UNACCEPTED_MEMORY help Say yes to enable support for the encryption of system memory. 
This requires an AMD processor that supports Secure Memory diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c index 48e36e640da1..3e19dc0da0d7 100644 --- a/arch/x86/boot/compressed/mem.c +++ b/arch/x86/boot/compressed/mem.c @@ -6,6 +6,7 @@ #include "find.h" #include "math.h" #include "tdx.h" +#include "sev.h" #include =20 #define PMD_SHIFT 21 @@ -39,6 +40,8 @@ static inline void __accept_memory(phys_addr_t start, phy= s_addr_t end) /* Platform-specific memory-acceptance call goes here */ if (is_tdx_guest()) tdx_accept_memory(start, end); + else if (sev_snp_enabled()) + snp_accept_memory(start, end); else error("Cannot accept memory: unknown platform\n"); } diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c index 730c4677e9db..22da65c96b47 100644 --- a/arch/x86/boot/compressed/sev.c +++ b/arch/x86/boot/compressed/sev.c @@ -115,7 +115,7 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ct= xt, /* Include code for early handlers */ #include "../../kernel/sev-shared.c" =20 -static inline bool sev_snp_enabled(void) +bool sev_snp_enabled(void) { return sev_status & MSR_AMD64_SEV_SNP_ENABLED; } @@ -181,6 +181,58 @@ static bool early_setup_ghcb(void) return true; } =20 +static phys_addr_t __snp_accept_memory(struct snp_psc_desc *desc, + phys_addr_t pa, phys_addr_t pa_end) +{ + struct psc_hdr *hdr; + struct psc_entry *e; + unsigned int i; + + hdr =3D &desc->hdr; + memset(hdr, 0, sizeof(*hdr)); + + e =3D desc->entries; + + i =3D 0; + while (pa < pa_end && i < VMGEXIT_PSC_MAX_ENTRY) { + hdr->end_entry =3D i; + + e->gfn =3D pa >> PAGE_SHIFT; + e->operation =3D SNP_PAGE_STATE_PRIVATE; + if (IS_ALIGNED(pa, PMD_PAGE_SIZE) && (pa_end - pa) >=3D PMD_PAGE_SIZE) { + e->pagesize =3D RMP_PG_SIZE_2M; + pa +=3D PMD_PAGE_SIZE; + } else { + e->pagesize =3D RMP_PG_SIZE_4K; + pa +=3D PAGE_SIZE; + } + + e++; + i++; + } + + if (vmgexit_psc(boot_ghcb, desc)) + sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC); + + 
+	pvalidate_pages(desc);
+
+	return pa;
+}
+
+void snp_accept_memory(phys_addr_t start, phys_addr_t end)
+{
+	struct snp_psc_desc desc = {};
+	unsigned int i;
+	phys_addr_t pa;
+
+	if (!boot_ghcb && !early_setup_ghcb())
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
+	pa = start;
+	while (pa < end)
+		pa = __snp_accept_memory(&desc, pa, end);
+}
+
 void sev_es_shutdown_ghcb(void)
 {
 	if (!boot_ghcb)
diff --git a/arch/x86/boot/compressed/sev.h b/arch/x86/boot/compressed/sev.h
new file mode 100644
index 000000000000..fc725a981b09
--- /dev/null
+++ b/arch/x86/boot/compressed/sev.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AMD SEV header for early boot related functions.
+ *
+ * Author: Tom Lendacky
+ */
+
+#ifndef BOOT_COMPRESSED_SEV_H
+#define BOOT_COMPRESSED_SEV_H
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
+bool sev_snp_enabled(void);
+void snp_accept_memory(phys_addr_t start, phys_addr_t end);
+
+#else
+
+static inline bool sev_snp_enabled(void) { return false; }
+static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
+
+#endif
+
+#endif
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 0007ab04ac5f..9297aab0c79e 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -206,6 +206,7 @@ void snp_set_wakeup_secondary_cpu(void);
 bool snp_init(struct boot_params *bp);
 void snp_abort(void);
 int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, unsigned long *fw_err);
+void snp_accept_memory(phys_addr_t start, phys_addr_t end);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -230,6 +231,8 @@ static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *in
 {
 	return -ENOTTY;
 }
+
+static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
 #endif

 #endif
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index b478edf43bec..7ac7857da2b8
100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -12,6 +12,9 @@
 #ifndef __BOOT_COMPRESSED
 #define error(v)	pr_err(v)
 #define has_cpuflag(f)	boot_cpu_has(f)
+#else
+#undef WARN
+#define WARN(condition...)
 #endif

 /* I/O parameters for CPUID-related helpers */
@@ -998,3 +1001,104 @@ static void __init setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
 		cpuid_ext_range_max = fn->eax;
 	}
 }
+
+static void pvalidate_pages(struct snp_psc_desc *desc)
+{
+	struct psc_entry *e;
+	unsigned long vaddr;
+	unsigned int size;
+	unsigned int i;
+	bool validate;
+	int rc;
+
+	for (i = 0; i <= desc->hdr.end_entry; i++) {
+		e = &desc->entries[i];
+
+		vaddr = (unsigned long)pfn_to_kaddr(e->gfn);
+		size = e->pagesize ? RMP_PG_SIZE_2M : RMP_PG_SIZE_4K;
+		validate = (e->operation == SNP_PAGE_STATE_PRIVATE) ? true : false;
+
+		rc = pvalidate(vaddr, size, validate);
+		if (rc == PVALIDATE_FAIL_SIZEMISMATCH && size == RMP_PG_SIZE_2M) {
+			unsigned long vaddr_end = vaddr + PMD_PAGE_SIZE;
+
+			for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) {
+				rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
+				if (rc)
+					break;
+			}
+		}
+
+		if (rc) {
+			WARN(1, "Failed to validate address 0x%lx ret %d", vaddr, rc);
+			sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
+		}
+	}
+}
+
+static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
+{
+	int cur_entry, end_entry, ret = 0;
+	struct snp_psc_desc *data;
+	struct es_em_ctxt ctxt;
+
+	vc_ghcb_invalidate(ghcb);
+
+	/* Copy the input desc into GHCB shared buffer */
+	data = (struct snp_psc_desc *)ghcb->shared_buffer;
+	memcpy(ghcb->shared_buffer, desc, min_t(int, GHCB_SHARED_BUF_SIZE, sizeof(*desc)));
+
+	/*
+	 * As per the GHCB specification, the hypervisor can resume the guest
+	 * before processing all the entries. Check whether all the entries
+	 * are processed. If not, then keep retrying.
Note, the hypervisor
+	 * will update the data memory directly to indicate the status, so
+	 * reference the data->hdr everywhere.
+	 *
+	 * The strategy here is to wait for the hypervisor to change the page
+	 * state in the RMP table before guest accesses the memory pages. If the
+	 * page state change was not successful, then later memory access will
+	 * result in a crash.
+	 */
+	cur_entry = data->hdr.cur_entry;
+	end_entry = data->hdr.end_entry;
+
+	while (data->hdr.cur_entry <= data->hdr.end_entry) {
+		ghcb_set_sw_scratch(ghcb, (u64)__pa(data));
+
+		/* This will advance the shared buffer data points to. */
+		ret = sev_es_ghcb_hv_call(ghcb, true, &ctxt, SVM_VMGEXIT_PSC, 0, 0);
+
+		/*
+		 * Page State Change VMGEXIT can pass error code through
+		 * exit_info_2.
+		 */
+		if (ret || ghcb->save.sw_exit_info_2) {
+			WARN(1, "SNP: PSC failed ret=%d exit_info_2=%llx\n",
+			     ret, ghcb->save.sw_exit_info_2);
+			ret = 1;
+			goto out;
+		}
+
+		/* Verify that reserved bit is not set */
+		if (data->hdr.reserved) {
+			WARN(1, "Reserved bit is set in the PSC header\n");
+			ret = 1;
+			goto out;
+		}
+
+		/*
+		 * Sanity check that entry processing is not going backwards.
+		 * This will happen only if hypervisor is tricking us.
+		 */
+		if (data->hdr.end_entry > end_entry || cur_entry > data->hdr.cur_entry) {
+			WARN(1, "SNP: PSC processing going backward, end_entry %d (got %d) cur_entry %d (got %d)\n",
+			     end_entry, data->hdr.end_entry, cur_entry, data->hdr.cur_entry);
+			ret = 1;
+			goto out;
+		}
+	}
+
+out:
+	return ret;
+}
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index eabb8dd5be5b..48440933bde2 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -655,38 +655,6 @@ static u64 __init get_jump_table_addr(void)
 	return ret;
 }

-static void pvalidate_pages(struct snp_psc_desc *desc)
-{
-	struct psc_entry *e;
-	unsigned long vaddr;
-	unsigned int size;
-	unsigned int i;
-	bool validate;
-	int rc;
-
-	for (i = 0; i <= desc->hdr.end_entry; i++) {
-		e = &desc->entries[i];
-
-		vaddr = (unsigned long)pfn_to_kaddr(e->gfn);
-		size = e->pagesize ? RMP_PG_SIZE_2M : RMP_PG_SIZE_4K;
-		validate = (e->operation == SNP_PAGE_STATE_PRIVATE) ? true : false;
-
-		rc = pvalidate(vaddr, size, validate);
-		if (rc == PVALIDATE_FAIL_SIZEMISMATCH && size == RMP_PG_SIZE_2M) {
-			unsigned long vaddr_end = vaddr + PMD_PAGE_SIZE;
-
-			for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) {
-				rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
-				if (rc)
-					break;
-			}
-		}
-
-		if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc))
-			sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
-	}
-}
-
 static void early_set_pages_state(unsigned long vaddr, unsigned long paddr,
 				  unsigned int npages, enum psc_op op)
 {
@@ -782,72 +750,6 @@ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op
 	WARN(1, "invalid memory op %d\n", op);
 }

-static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
-{
-	int cur_entry, end_entry, ret = 0;
-	struct snp_psc_desc *data;
-	struct es_em_ctxt ctxt;
-
-	vc_ghcb_invalidate(ghcb);
-
-	/* Copy the input desc into GHCB shared buffer */
-	data = (struct snp_psc_desc *)ghcb->shared_buffer;
-	memcpy(ghcb->shared_buffer, desc, min_t(int, GHCB_SHARED_BUF_SIZE, sizeof(*desc)));
-
-	/*
-	 * As per the GHCB specification, the hypervisor can resume the guest
-	 * before processing all the entries. Check whether all the entries
-	 * are processed. If not, then keep retrying. Note, the hypervisor
-	 * will update the data memory directly to indicate the status, so
-	 * reference the data->hdr everywhere.
-	 *
-	 * The strategy here is to wait for the hypervisor to change the page
-	 * state in the RMP table before guest accesses the memory pages. If the
-	 * page state change was not successful, then later memory access will
-	 * result in a crash.
-	 */
-	cur_entry = data->hdr.cur_entry;
-	end_entry = data->hdr.end_entry;
-
-	while (data->hdr.cur_entry <= data->hdr.end_entry) {
-		ghcb_set_sw_scratch(ghcb, (u64)__pa(data));
-
-		/* This will advance the shared buffer data points to. */
-		ret = sev_es_ghcb_hv_call(ghcb, true, &ctxt, SVM_VMGEXIT_PSC, 0, 0);
-
-		/*
-		 * Page State Change VMGEXIT can pass error code through
-		 * exit_info_2.
-		 */
-		if (WARN(ret || ghcb->save.sw_exit_info_2,
-			 "SNP: PSC failed ret=%d exit_info_2=%llx\n",
-			 ret, ghcb->save.sw_exit_info_2)) {
-			ret = 1;
-			goto out;
-		}
-
-		/* Verify that reserved bit is not set */
-		if (WARN(data->hdr.reserved, "Reserved bit is set in the PSC header\n")) {
-			ret = 1;
-			goto out;
-		}
-
-		/*
-		 * Sanity check that entry processing is not going backwards.
-		 * This will happen only if hypervisor is tricking us.
-		 */
-		if (WARN(data->hdr.end_entry > end_entry || cur_entry > data->hdr.cur_entry,
-			 "SNP: PSC processing going backward, end_entry %d (got %d) cur_entry %d (got %d)\n",
-			 end_entry, data->hdr.end_entry, cur_entry, data->hdr.cur_entry)) {
-			ret = 1;
-			goto out;
-		}
-	}
-
-out:
-	return ret;
-}
-
 static unsigned long __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
 				       unsigned long vaddr_end, int op)
 {
@@ -952,6 +854,20 @@ void snp_set_memory_private(unsigned long vaddr, unsigned int npages)
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
 }

+void snp_accept_memory(phys_addr_t start, phys_addr_t end)
+{
+	unsigned long vaddr;
+	unsigned int npages;
+
+	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+		return;
+
+	vaddr = (unsigned long)__va(start);
+	npages = (end - start) >> PAGE_SHIFT;
+
+	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
+}
+
 static int snp_set_vmsa(void *va, bool vmsa)
 {
 	u64 attrs;
diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c
index 9ec2304272dc..b86ad6a8ddf5 100644
--- a/arch/x86/mm/unaccepted_memory.c
+++ b/arch/x86/mm/unaccepted_memory.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include

 /* Protects unaccepted memory bitmap */
 static DEFINE_SPINLOCK(unaccepted_memory_lock);
@@ -66,6 +67,9 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
 		tdx_accept_memory(range_start * PMD_SIZE,
 				  range_end * PMD_SIZE);
+	} else if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
+		snp_accept_memory(range_start * PMD_SIZE,
+				  range_end * PMD_SIZE);
 	} else {
 		panic("Cannot accept memory: unknown platform\n");
 	}
--
2.37.3