From: Jeremi Piotrowski
To: linux-kernel@vger.kernel.org
Cc: Jeremi Piotrowski, Wei Liu, Dexuan Cui, Tianyu Lan, Michael Kelley, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, linux-hyperv@vger.kernel.org, Brijesh Singh, Michael Roth, Ashish Kalra, Tom Lendacky
Subject: [RFC PATCH v1 1/6] x86/hyperv: Allocate RMP table during boot
Date: Mon, 23 Jan 2023 16:51:23 +0000
Message-Id: <20230123165128.28185-2-jpiotrowski@linux.microsoft.com>
In-Reply-To:
 <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>
References: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>

Hyper-V VMs can be capable of hosting SNP isolated nested VMs on AMD CPUs. One of the pieces of SNP is the RMP (Reverse Map) table, which tracks page assignment to firmware, hypervisor or guest. On bare metal this table is allocated by UEFI, but on Hyper-V it is the responsibility of the OS to allocate one if necessary. The nested_feature 'HV_X64_NESTED_NO_RMP_TABLE' will be set to communicate that no RMP is available. The actual RMP table is exclusively controlled by the Hyper-V hypervisor and is not virtualized to the VM. The SNP code in the kernel uses the RMP table for its own tracking, so it is necessary for init code to allocate one.

While not strictly necessary, follow the requirements defined by the "SEV Secure Nested Paging Firmware ABI Specification" Rev 1.54, section 8.8.2 when allocating the RMP:

- RMP_BASE and RMP_END must be set identically across all cores.
- RMP_BASE must be 1 MB aligned.
- RMP_END - RMP_BASE + 1 must be a multiple of 1 MB.
- The RMP is large enough to protect itself.

The allocation is done in the init_mem_mapping() hook, which is the earliest hook I found that has both max_pfn and memblock initialized. At this point we are still under the memblock_set_current_limit(ISA_END_ADDRESS) condition, but explicitly passing the end to memblock_phys_alloc_range() allows us to allocate past that value.
Signed-off-by: Jeremi Piotrowski
---
 arch/x86/hyperv/hv_init.c          |  5 ++++
 arch/x86/include/asm/hyperv-tlfs.h |  3 +++
 arch/x86/include/asm/mshyperv.h    |  3 +++
 arch/x86/include/asm/sev.h         |  2 ++
 arch/x86/kernel/cpu/mshyperv.c     | 41 ++++++++++++++++++++++++++++++
 arch/x86/kernel/sev.c              |  1 -
 6 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 29774126e931..e7f5ac075e6d 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -117,6 +117,11 @@ static int hv_cpu_init(unsigned int cpu)
 		}
 	}
 
+	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT) && hv_needs_snp_rmp()) {
+		wrmsrl(MSR_AMD64_RMP_BASE, rmp_res.start);
+		wrmsrl(MSR_AMD64_RMP_END, rmp_res.end);
+	}
+
 	return hyperv_init_ghcb();
 }
 
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index e3efaf6e6b62..01cc2c3f9f20 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -152,6 +152,9 @@
  */
 #define HV_X64_NESTED_ENLIGHTENED_TLB	BIT(22)
 
+/* Nested SNP on Hyper-V */
+#define HV_X64_NESTED_NO_RMP_TABLE	BIT(23)
+
 /* HYPERV_CPUID_ISOLATION_CONFIG.EAX bits.
  */
 #define HV_PARAVISOR_PRESENT	BIT(0)
 
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 61f0c206bff0..3533b002cede 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -190,6 +190,9 @@ static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {}
 
 extern bool hv_isolation_type_snp(void);
 
+extern struct resource rmp_res;
+bool hv_needs_snp_rmp(void);
+
 static inline bool hv_is_synic_reg(unsigned int reg)
 {
 	if ((reg >= HV_REGISTER_SCONTROL) &&
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 2916f4150ac7..db5438663229 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -83,6 +83,8 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
 /* RMPUPDATE detected 4K page and 2MB page overlap. */
 #define RMPUPDATE_FAIL_OVERLAP	7
 
+#define RMPTABLE_CPU_BOOKKEEPING_SZ	0x4000
+
 /* RMP page size */
 #define RMP_PG_SIZE_4K	0
 #define RMP_PG_SIZE_2M	1
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index 831613959a92..e7f02412f3a1 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -488,6 +490,44 @@ static bool __init ms_hyperv_msi_ext_dest_id(void)
 	return eax & HYPERV_VS_PROPERTIES_EAX_EXTENDED_IOAPIC_RTE;
 }
 
+struct resource rmp_res = {
+	.name  = "RMP",
+	.start = 0,
+	.end   = 0,
+	.flags = IORESOURCE_SYSTEM_RAM,
+};
+
+bool hv_needs_snp_rmp(void)
+{
+	return boot_cpu_has(X86_FEATURE_SEV_SNP) &&
+	       (ms_hyperv.nested_features & HV_X64_NESTED_NO_RMP_TABLE);
+}
+
+static void __init ms_hyperv_init_mem_mapping(void)
+{
+	phys_addr_t addr;
+	u64 calc_rmp_sz;
+
+	if (!IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT))
+		return;
+	if (!hv_needs_snp_rmp())
+		return;
+
+	calc_rmp_sz = (max_pfn << 4) +
+		      RMPTABLE_CPU_BOOKKEEPING_SZ;
+	calc_rmp_sz = round_up(calc_rmp_sz, SZ_1M);
+	addr = memblock_phys_alloc_range(calc_rmp_sz, SZ_1M, 0, max_pfn << PAGE_SHIFT);
+	if (!addr) {
+		pr_warn("Unable to allocate RMP table\n");
+		return;
+	}
+	rmp_res.start = addr;
+	rmp_res.end = addr + calc_rmp_sz - 1;
+	wrmsrl(MSR_AMD64_RMP_BASE, rmp_res.start);
+	wrmsrl(MSR_AMD64_RMP_END, rmp_res.end);
+	insert_resource(&iomem_resource, &rmp_res);
+}
+
 const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
 	.name			= "Microsoft Hyper-V",
 	.detect			= ms_hyperv_platform,
@@ -495,4 +535,5 @@ const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
 	.init.x2apic_available	= ms_hyperv_x2apic_available,
 	.init.msi_ext_dest_id	= ms_hyperv_msi_ext_dest_id,
 	.init.init_platform	= ms_hyperv_init_platform,
+	.init.init_mem_mapping	= ms_hyperv_init_mem_mapping,
 };
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 1dd1b36bdfea..7fa39dc17edd 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -87,7 +87,6 @@ struct rmpentry {
  * The first 16KB from the RMP_BASE is used by the processor for the
  * bookkeeping, the range needs to be added during the RMP entry lookup.
  */
-#define RMPTABLE_CPU_BOOKKEEPING_SZ	0x4000
 #define RMPENTRY_SHIFT	8
 #define rmptable_page_offset(x)	(RMPTABLE_CPU_BOOKKEEPING_SZ + (((unsigned long)x) >> RMPENTRY_SHIFT))
 
--
2.25.1

From: Jeremi Piotrowski
To: linux-kernel@vger.kernel.org
Cc: Jeremi Piotrowski, Wei Liu, Dexuan Cui, Tianyu Lan, Michael Kelley, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, linux-hyperv@vger.kernel.org, Brijesh Singh, Michael Roth, Ashish Kalra, Tom Lendacky
Subject: [RFC PATCH v1 2/6] x86/sev: Add
 support for NestedVirtSnpMsr
Date: Mon, 23 Jan 2023 16:51:24 +0000
Message-Id: <20230123165128.28185-3-jpiotrowski@linux.microsoft.com>
In-Reply-To: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>
References: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>

The rmpupdate and psmash instructions, which are used in AMD's SEV-SNP to update the RMP (Reverse Map) table, can't be trapped. For nested scenarios, AMD defined MSR versions of these instructions, which can be emulated by the top-level hypervisor. One instance where these MSRs are used is Hyper-V VMs which expose SNP isolation features to the guest.

The MSRs are defined in "AMD64 Architecture Programmer's Manual, Volume 2: System Programming", section 15.36.19.

Signed-off-by: Jeremi Piotrowski
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/include/asm/msr-index.h   |  2 +
 arch/x86/kernel/sev.c              | 62 +++++++++++++++++++++++++-----
 3 files changed, 55 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 480b4eaef310..e6e2e824f67b 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -423,6 +423,7 @@
 #define X86_FEATURE_SEV_SNP		(19*32+ 4) /* AMD Secure Encrypted Virtualization - Secure Nested Paging */
 #define X86_FEATURE_V_TSC_AUX		(19*32+ 9) /* "" Virtual TSC_AUX */
 #define X86_FEATURE_SME_COHERENT	(19*32+10) /* "" AMD hardware-enforced cache coherency */
+#define X86_FEATURE_NESTED_VIRT_SNP_MSR	(19*32+29) /* Virtualizable RMPUPDATE and PSMASH MSR available */
 
 /*
  * BUG word(s)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 35100c630617..d6103e607896 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -567,6 +567,8 @@
 #define MSR_AMD64_SEV_SNP_ENABLED	BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT)
 #define MSR_AMD64_RMP_BASE	0xc0010132
 #define MSR_AMD64_RMP_END	0xc0010133
+#define MSR_AMD64_VIRT_RMPUPDATE	0xc001f001
+#define MSR_AMD64_VIRT_PSMASH	0xc001f002
 
 #define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011f
 
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 7fa39dc17edd..95404c7e5150 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -2566,6 +2566,24 @@ int snp_lookup_rmpentry(u64 pfn, int *level)
 }
 EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
 
+static bool virt_snp_msr(void)
+{
+	return boot_cpu_has(X86_FEATURE_NESTED_VIRT_SNP_MSR);
+}
+
+static u64 virt_psmash(u64 paddr)
+{
+	int ret;
+
+	asm volatile(
+		"wrmsr\n\t"
+		: "=a"(ret)
+		: "a"(paddr), "c"(MSR_AMD64_VIRT_PSMASH)
+		: "memory", "cc"
+	);
+	return ret;
+}
+
 /*
  * psmash is used to smash a 2MB aligned page into 4K
  * pages while preserving the Validated bit in the RMP.
@@ -2581,11 +2599,15 @@ int psmash(u64 pfn)
 	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
 		return -ENXIO;
 
-	/* Binutils version 2.36 supports the PSMASH mnemonic. */
-	asm volatile(".byte 0xF3, 0x0F, 0x01, 0xFF"
-		     : "=a"(ret)
-		     : "a"(paddr)
-		     : "memory", "cc");
+	if (virt_snp_msr()) {
+		ret = virt_psmash(paddr);
+	} else {
+		/* Binutils version 2.36 supports the PSMASH mnemonic.
*/ + asm volatile(".byte 0xF3, 0x0F, 0x01, 0xFF" + : "=3Da"(ret) + : "a"(paddr) + : "memory", "cc"); + } =20 return ret; } @@ -2601,6 +2623,21 @@ static int invalidate_direct_map(unsigned long pfn, = int npages) return set_memory_np((unsigned long)pfn_to_kaddr(pfn), npages); } =20 +static u64 virt_rmpupdate(unsigned long paddr, struct rmp_state *val) +{ + int ret; + register u64 hi asm("r8") =3D ((u64 *)val)[1]; + register u64 lo asm("rdx") =3D ((u64 *)val)[0]; + + asm volatile( + "wrmsr\n\t" + : "=3Da"(ret) + : "a"(paddr), "c"(MSR_AMD64_VIRT_RMPUPDATE), "r"(lo), "r"(hi) + : "memory", "cc" + ); + return ret; +} + static int rmpupdate(u64 pfn, struct rmp_state *val) { unsigned long paddr =3D pfn << PAGE_SHIFT; @@ -2626,11 +2663,16 @@ static int rmpupdate(u64 pfn, struct rmp_state *val) } =20 retry: - /* Binutils version 2.36 supports the RMPUPDATE mnemonic. */ - asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE" - : "=3Da"(ret) - : "a"(paddr), "c"((unsigned long)val) - : "memory", "cc"); + + if (virt_snp_msr()) { + ret =3D virt_rmpupdate(paddr, val); + } else { + /* Binutils version 2.36 supports the RMPUPDATE mnemonic. 
*/ + asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE" + : "=3Da"(ret) + : "a"(paddr), "c"((unsigned long)val) + : "memory", "cc"); + } =20 if (ret) { if (!retries) { --=20 2.25.1 From nobody Wed Apr 8 12:49:18 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5008C05027 for ; Mon, 23 Jan 2023 16:52:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233466AbjAWQwC (ORCPT ); Mon, 23 Jan 2023 11:52:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58120 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233467AbjAWQv6 (ORCPT ); Mon, 23 Jan 2023 11:51:58 -0500 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 041072DE6B; Mon, 23 Jan 2023 08:51:54 -0800 (PST) Received: from vm02.corp.microsoft.com (unknown [167.220.196.155]) by linux.microsoft.com (Postfix) with ESMTPSA id 109DD20E1ABC; Mon, 23 Jan 2023 08:51:51 -0800 (PST) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 109DD20E1ABC DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1674492714; bh=6PsJh8C0ku05+rJM5oIvpOU0bn9fPSKNSD8fD5LVGCU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=X3LcL/Xm5jLD2PrZBmB6TpJwAatXq61skO3THxpP6zzvSsZxyQ7BOBsg92nZS59G1 fZOmIxaUH1zSU+JFV20+Q4iNjQMjm3TrWBvrCRd1ipDIRdoEhYvMi4VFoqntXRVtGs 0slJnC8pbtxiwY7SEP2TCllbRz4eKHq1CFNWeVKk= From: Jeremi Piotrowski To: linux-kernel@vger.kernel.org Cc: Jeremi Piotrowski , Wei Liu , Dexuan Cui , Tianyu Lan , Michael Kelley , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, linux-hyperv@vger.kernel.org, Brijesh Singh , Michael Roth , Ashish Kalra , Tom Lendacky Subject: [RFC PATCH v1 3/6] x86/sev: Maintain shadow 
 rmptable on Hyper-V
Date: Mon, 23 Jan 2023 16:51:25 +0000
Message-Id: <20230123165128.28185-4-jpiotrowski@linux.microsoft.com>
In-Reply-To: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>
References: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>

Hyper-V can expose the SEV-SNP feature to guests, and manages the system-wide RMP (Reverse Map) table. The SNP implementation in the kernel needs access to the rmptable for tracking pages and deciding when/how to issue rmpupdate/psmash. When running as a Hyper-V guest with SNP support, an rmptable is allocated by the kernel during boot for this purpose. Keep the table in sync with issued rmpupdate/psmash instructions.

The logic for how to update the rmptable comes from "AMD64 Architecture Programmer's Manual, Volume 3", which describes the psmash and rmpupdate instructions. To ensure correctness of the SNP host code, the most important fields are "assigned" and "page size".
Signed-off-by: Jeremi Piotrowski
---
 arch/x86/kernel/sev.c | 59 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 95404c7e5150..edec1ccb80b1 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -26,6 +26,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -2566,6 +2567,11 @@ int snp_lookup_rmpentry(u64 pfn, int *level)
 }
 EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
 
+static bool hv_no_rmp_table(void)
+{
+	return ms_hyperv.nested_features & HV_X64_NESTED_NO_RMP_TABLE;
+}
+
 static bool virt_snp_msr(void)
 {
 	return boot_cpu_has(X86_FEATURE_NESTED_VIRT_SNP_MSR);
@@ -2584,6 +2590,26 @@ static u64 virt_psmash(u64 paddr)
 	return ret;
 }
 
+static void snp_update_rmptable_psmash(u64 pfn)
+{
+	int level;
+	struct rmpentry *entry = __snp_lookup_rmpentry(pfn, &level);
+
+	if (WARN_ON(IS_ERR_OR_NULL(entry)))
+		return;
+
+	if (level == PG_LEVEL_2M) {
+		int i;
+
+		entry->info.pagesize = RMP_PG_SIZE_4K;
+		for (i = 1; i < PTRS_PER_PMD; i++) {
+			struct rmpentry *it = &entry[i];
+			*it = *entry;
+			it->info.gpa = entry->info.gpa + i * PAGE_SIZE;
+		}
+	}
+}
+
 /*
  * psmash is used to smash a 2MB aligned page into 4K
  * pages while preserving the Validated bit in the RMP.
@@ -2601,6 +2627,8 @@ int psmash(u64 pfn)
 
 	if (virt_snp_msr()) {
 		ret = virt_psmash(paddr);
+		if (!ret && hv_no_rmp_table())
+			snp_update_rmptable_psmash(pfn);
 	} else {
 		/* Binutils version 2.36 supports the PSMASH mnemonic.
*/ asm volatile(".byte 0xF3, 0x0F, 0x01, 0xFF" @@ -2638,6 +2666,35 @@ static u64 virt_rmpupdate(unsigned long paddr, struc= t rmp_state *val) return ret; } =20 +static void snp_update_rmptable_rmpupdate(u64 pfn, int level, struct rmp_s= tate *val) +{ + int prev_level; + struct rmpentry *entry =3D __snp_lookup_rmpentry(pfn, &prev_level); + + if (WARN_ON(IS_ERR_OR_NULL(entry))) + return; + + if (level > PG_LEVEL_4K) { + int i; + struct rmpentry tmp_rmp =3D { + .info =3D { + .assigned =3D val->assigned, + }, + }; + for (i =3D 1; i < PTRS_PER_PMD; i++) + entry[i] =3D tmp_rmp; + } + if (!val->assigned) { + memset(entry, 0, sizeof(*entry)); + } else { + entry->info.assigned =3D val->assigned; + entry->info.pagesize =3D val->pagesize; + entry->info.immutable =3D val->immutable; + entry->info.gpa =3D val->gpa; + entry->info.asid =3D val->asid; + } +} + static int rmpupdate(u64 pfn, struct rmp_state *val) { unsigned long paddr =3D pfn << PAGE_SHIFT; @@ -2666,6 +2723,8 @@ static int rmpupdate(u64 pfn, struct rmp_state *val) =20 if (virt_snp_msr()) { ret =3D virt_rmpupdate(paddr, val); + if (!ret && hv_no_rmp_table()) + snp_update_rmptable_rmpupdate(pfn, level, val); } else { /* Binutils version 2.36 supports the RMPUPDATE mnemonic. 
*/ asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE" --=20 2.25.1 From nobody Wed Apr 8 12:49:18 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B75D2C38142 for ; Mon, 23 Jan 2023 16:52:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233476AbjAWQwG (ORCPT ); Mon, 23 Jan 2023 11:52:06 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58094 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233427AbjAWQwB (ORCPT ); Mon, 23 Jan 2023 11:52:01 -0500 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id C9A372C66B; Mon, 23 Jan 2023 08:51:58 -0800 (PST) Received: from vm02.corp.microsoft.com (unknown [167.220.196.155]) by linux.microsoft.com (Postfix) with ESMTPSA id D00BF20E1ABE; Mon, 23 Jan 2023 08:51:55 -0800 (PST) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com D00BF20E1ABE DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1674492718; bh=1q0vUA/+bYaUkIbGuVZC66EHLWsufbPUd970kd1iRGY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TNcToS0zofw2NUVWyE80xtxIOj8gXwl2ndk42rbTQnKwVuvk18CJO2Xmi4gYMm7iS Ojue38bjS4dB2x6Y7jyVXeJqvE3fgs5EWQot99JevbHbWQqkR5fjInL3U1nTeZLjo3 LWpe7t7Xfyk0ueoI5mxTQhkG98Fclix4IVqHfO/4= From: Jeremi Piotrowski To: linux-kernel@vger.kernel.org Cc: Jeremi Piotrowski , Wei Liu , Dexuan Cui , Tianyu Lan , Michael Kelley , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, linux-hyperv@vger.kernel.org, Brijesh Singh , Michael Roth , Ashish Kalra , Tom Lendacky , Jeremi Piotrowski Subject: [RFC PATCH v1 4/6] x86/amd: Configure necessary MSRs for SNP during CPU init when running as a guest Date: Mon, 23 Jan 2023 16:51:26 +0000 
Message-Id: <20230123165128.28185-5-jpiotrowski@linux.microsoft.com>
In-Reply-To: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>
References: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>

From: Jeremi Piotrowski

Hyper-V may expose the SEV/SEV-SNP CPU features to the guest, but it is up to the guest to use them. early_detect_mem_encrypt() checks SYSCFG[MEM_ENCRYPT] and HWCR[SMMLOCK], and if these are not set the SEV-SNP features are cleared. Check if we are running under a hypervisor and, if so, update SYSCFG and skip the HWCR check. It would be great to make this check more specific (checking for Hyper-V), but this code runs before hypervisor detection on the boot cpu.

Signed-off-by: Jeremi Piotrowski
---
 arch/x86/kernel/cpu/amd.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index c7884198ad5b..17d91ac62937 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -565,6 +565,12 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
	 * don't advertise the feature under CONFIG_X86_32.
	 */
 	if (cpu_has(c, X86_FEATURE_SME) || cpu_has(c, X86_FEATURE_SEV)) {
+		if (cpu_has(c, X86_FEATURE_HYPERVISOR)) {
+			rdmsrl(MSR_AMD64_SYSCFG, msr);
+			msr |= MSR_AMD64_SYSCFG_MEM_ENCRYPT;
+			wrmsrl(MSR_AMD64_SYSCFG, msr);
+		}
+
 		/* Check if memory encryption is enabled */
 		rdmsrl(MSR_AMD64_SYSCFG, msr);
 		if (!(msr & MSR_AMD64_SYSCFG_MEM_ENCRYPT))
@@ -584,7 +590,7 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
 		setup_clear_cpu_cap(X86_FEATURE_SME);
 
 	rdmsrl(MSR_K7_HWCR, msr);
-	if (!(msr & MSR_K7_HWCR_SMMLOCK))
+	if (!(msr & MSR_K7_HWCR_SMMLOCK) && !cpu_has(c, X86_FEATURE_HYPERVISOR))
 		goto clear_sev;
 
 	return;
--
2.25.1

From: Jeremi Piotrowski
To: linux-kernel@vger.kernel.org
Cc: Jeremi Piotrowski, Wei Liu, Dexuan Cui, Tianyu Lan, Michael Kelley, linux-hyperv@vger.kernel.org, Brijesh Singh, Michael Roth, Ashish Kalra, Tom Lendacky, Joerg Roedel, Suravee Suthikulpanit, iommu@lists.linux.dev, Jeremi Piotrowski
Subject: [RFC PATCH v1 5/6] iommu/amd: Don't fail snp_enable when running virtualized
Date: Mon, 23 Jan 2023 16:51:27 +0000
Message-Id: <20230123165128.28185-6-jpiotrowski@linux.microsoft.com>
In-Reply-To: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>
References: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>

From: Jeremi Piotrowski

Hyper-V VMs do not have access to an IOMMU but can support hosting SNP VMs. amd_iommu_snp_enable() is on the SNP init path and should not fail in that case.

Signed-off-by: Jeremi Piotrowski
---
 drivers/iommu/amd/init.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index d1270e3c5baf..8049dbe78a27 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -3619,6 +3619,12 @@ int amd_iommu_pc_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr, u8 fxn, u64
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 int amd_iommu_snp_enable(void)
 {
+	/*
+	 * If we're running virtualized there doesn't have to be an IOMMU for SNP to work.
+	 */
+	if (init_state == IOMMU_NOT_FOUND && boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		return 0;
+
 	/*
 	 * The SNP support requires that IOMMU must be enabled, and is
 	 * not configured in the passthrough mode.
--
2.25.1

From: Jeremi Piotrowski
To: linux-kernel@vger.kernel.org
Cc: Jeremi Piotrowski, Wei Liu, Dexuan Cui, Tianyu Lan, Michael Kelley, linux-hyperv@vger.kernel.org, Brijesh Singh, Michael Roth, Ashish Kalra, Tom Lendacky, linux-crypto@vger.kernel.org
Subject: [RFC PATCH v1 6/6] crypto: ccp - Introduce quirk to always reclaim pages after SEV-legacy commands
Date: Mon, 23 Jan 2023 16:51:28 +0000
Message-Id: <20230123165128.28185-7-jpiotrowski@linux.microsoft.com>
In-Reply-To:
 <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>
References: <20230123165128.28185-1-jpiotrowski@linux.microsoft.com>

On Hyper-V, the rmp_mark_pages_shared() call after a SEV_PLATFORM_STATUS command fails with return code 2 (FAIL_PERMISSION) due to the page having the immutable bit set in the RMP (SNP has been initialized). The comment above this spot mentions that firmware automatically clears the immutable bit, but I can't find any mention of this behavior in the SNP Firmware ABI Spec.

Introduce a quirk to always attempt the page reclaim and set it for the platform PSP. It would be possible to make this behavior unconditional, as the firmware spec defines that page reclaim results in success if the page does not have the immutable bit set.

Signed-off-by: Jeremi Piotrowski
---
 drivers/crypto/ccp/sev-dev.c     | 6 +++++-
 drivers/crypto/ccp/sp-dev.h      | 4 ++++
 drivers/crypto/ccp/sp-platform.c | 1 +
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index 6c4fdcaed72b..4719c0cafa28 100644
--- a/drivers/crypto/ccp/sev-dev.c
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -658,8 +658,12 @@ static int __snp_cmd_buf_copy(int cmd, void *cmd_buf, bool to_fw, int fw_err)
	 * no not need to reclaim the page.
	 */
 	if (from_fw && sev_legacy_cmd_buf_writable(cmd)) {
-		if (rmp_mark_pages_shared(__pa(cmd_buf), 1))
+		if (psp_master->vdata->quirks & PSP_QUIRK_ALWAYS_RECLAIM) {
+			if (snp_reclaim_pages(__pa(cmd_buf), 1, true))
+				return -EFAULT;
+		} else if (rmp_mark_pages_shared(__pa(cmd_buf), 1)) {
 			return -EFAULT;
+		}
 
		/* No need to go further if firmware failed to execute command.
		 */
 		if (fw_err)
diff --git a/drivers/crypto/ccp/sp-dev.h b/drivers/crypto/ccp/sp-dev.h
index 083e57652c7b..6fb065a7d1fd 100644
--- a/drivers/crypto/ccp/sp-dev.h
+++ b/drivers/crypto/ccp/sp-dev.h
@@ -28,6 +28,9 @@
 #define CACHE_NONE			0x00
 #define CACHE_WB_NO_ALLOC		0xb7
 
+/* PSP requires a reclaim after every firmware command */
+#define PSP_QUIRK_ALWAYS_RECLAIM	BIT(0)
+
 /* Structure to hold CCP device data */
 struct ccp_device;
 struct ccp_vdata {
@@ -59,6 +62,7 @@ struct psp_vdata {
	unsigned int feature_reg;
	unsigned int inten_reg;
	unsigned int intsts_reg;
+	unsigned int quirks;
 };
 
 /* Structure to hold SP device data */
diff --git a/drivers/crypto/ccp/sp-platform.c b/drivers/crypto/ccp/sp-platform.c
index d56b34255b97..cae3e7e8f289 100644
--- a/drivers/crypto/ccp/sp-platform.c
+++ b/drivers/crypto/ccp/sp-platform.c
@@ -43,6 +43,7 @@ static struct psp_vdata psp_platform = {
	.feature_reg		= -1,
	.inten_reg		= -1,
	.intsts_reg		= -1,
+	.quirks			= PSP_QUIRK_ALWAYS_RECLAIM,
 };
 #endif
 
--
2.25.1