From: Jeremi Piotrowski
To: linux-kernel@vger.kernel.org
Cc: Jeremi Piotrowski, Wei Liu, Dexuan Cui, Tianyu Lan, Michael Kelley, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, linux-hyperv@vger.kernel.org, Brijesh Singh, Michael Roth, Ashish Kalra, Tom Lendacky
Subject: [RFC PATCH v1 3/6] x86/sev: Maintain shadow rmptable on Hyper-V
Date: Mon, 23 Jan 2023 16:51:25 +0000
Message-Id: <20230123165128.28185-4-jpiotrowski@linux.microsoft.com>
In-Reply-To:
<20230123165128.28185-1-jpiotrowski@linux.microsoft.com>

Hyper-V can expose the SEV-SNP feature to guests, and manages the
system-wide RMP (Reverse Map) table. The SNP implementation in the
kernel needs access to the rmptable for tracking pages and deciding
when/how to issue rmpupdate/psmash. When running as a Hyper-V guest
with SNP support, an rmptable is allocated by the kernel during boot
for this purpose. Keep the table in sync with issued rmpupdate/psmash
instructions.

The logic for how to update the rmptable comes from "AMD64 Architecture
Programmer's Manual, Volume 3", which describes the psmash and rmpupdate
instructions. To ensure correctness of the SNP host code, the most
important fields are "assigned" and "page size".

Signed-off-by: Jeremi Piotrowski
---
 arch/x86/kernel/sev.c | 59 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 95404c7e5150..edec1ccb80b1 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -26,6 +26,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -2566,6 +2567,11 @@ int snp_lookup_rmpentry(u64 pfn, int *level)
 }
 EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
 
+static bool hv_no_rmp_table(void)
+{
+	return ms_hyperv.nested_features & HV_X64_NESTED_NO_RMP_TABLE;
+}
+
 static bool virt_snp_msr(void)
 {
 	return boot_cpu_has(X86_FEATURE_NESTED_VIRT_SNP_MSR);
@@ -2584,6 +2590,26 @@ static u64 virt_psmash(u64 paddr)
 	return ret;
 }
 
+static void snp_update_rmptable_psmash(u64 pfn)
+{
+	int level;
+	struct rmpentry *entry = __snp_lookup_rmpentry(pfn, &level);
+
+	if (WARN_ON(IS_ERR_OR_NULL(entry)))
+		return;
+
+	if (level == PG_LEVEL_2M) {
+		int i;
+
+		entry->info.pagesize = RMP_PG_SIZE_4K;
+		for (i = 1; i < PTRS_PER_PMD; i++) {
+			struct rmpentry *it = &entry[i];
+			*it = *entry;
+			it->info.gpa = entry->info.gpa + i * PAGE_SIZE;
+		}
+	}
+}
+
 /*
  * psmash is used to smash a 2MB aligned page into 4K
  * pages while preserving the Validated bit in the RMP.
@@ -2601,6 +2627,8 @@ int psmash(u64 pfn)
 
 	if (virt_snp_msr()) {
 		ret = virt_psmash(paddr);
+		if (!ret && hv_no_rmp_table())
+			snp_update_rmptable_psmash(pfn);
 	} else {
 		/* Binutils version 2.36 supports the PSMASH mnemonic. */
 		asm volatile(".byte 0xF3, 0x0F, 0x01, 0xFF"
@@ -2638,6 +2666,35 @@ static u64 virt_rmpupdate(unsigned long paddr, struct rmp_state *val)
 	return ret;
 }
 
+static void snp_update_rmptable_rmpupdate(u64 pfn, int level, struct rmp_state *val)
+{
+	int prev_level;
+	struct rmpentry *entry = __snp_lookup_rmpentry(pfn, &prev_level);
+
+	if (WARN_ON(IS_ERR_OR_NULL(entry)))
+		return;
+
+	if (level > PG_LEVEL_4K) {
+		int i;
+		struct rmpentry tmp_rmp = {
+			.info = {
+				.assigned = val->assigned,
+			},
+		};
+		for (i = 1; i < PTRS_PER_PMD; i++)
+			entry[i] = tmp_rmp;
+	}
+	if (!val->assigned) {
+		memset(entry, 0, sizeof(*entry));
+	} else {
+		entry->info.assigned = val->assigned;
+		entry->info.pagesize = val->pagesize;
+		entry->info.immutable = val->immutable;
+		entry->info.gpa = val->gpa;
+		entry->info.asid = val->asid;
+	}
+}
+
 static int rmpupdate(u64 pfn, struct rmp_state *val)
 {
 	unsigned long paddr = pfn << PAGE_SHIFT;
@@ -2666,6 +2723,8 @@ static int rmpupdate(u64 pfn, struct rmp_state *val)
 
 	if (virt_snp_msr()) {
 		ret = virt_rmpupdate(paddr, val);
+		if (!ret && hv_no_rmp_table())
+			snp_update_rmptable_rmpupdate(pfn, level, val);
 	} else {
 		/* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
 		asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
-- 
2.25.1