From nobody Thu Dec 25 19:51:45 2025
Date: Thu, 11 Jan 2024 22:36:50 +0000
Message-ID: <20240111223650.3502633-1-kevinloughlin@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.43.0.275.g3460e3d667-goog
Subject: [RFC PATCH v2] x86/sev: enforce RIP-relative accesses in early SEV/SME code
From: Kevin Loughlin <kevinloughlin@google.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H.
 Peter Anvin", Andy Lutomirski, Peter Zijlstra, Nathan Chancellor,
 Nick Desaulniers, Bill Wendling, Justin Stitt, Tom Lendacky,
 Michael Kelley, Kevin Loughlin, Pankaj Gupta, Stephen Rothwell,
 Arnd Bergmann, Steve Rutherford, Alexander Shishkin, Hou Wenlong,
 Vegard Nossum, Josh Poimboeuf, Yuntao Wang, Wang Jinchao,
 David Woodhouse, Brian Gerst, Hugh Dickins, Ard Biesheuvel,
 Joerg Roedel, Randy Dunlap, Bjorn Helgaas, Dionna Glaze,
 Brijesh Singh, Michael Roth, "Kirill A. Shutemov",
 linux-kernel@vger.kernel.org, llvm@lists.linux.dev,
 linux-coco@lists.linux.dev, Ashish Kalra, Andi Kleen
Cc: Adam Dunlap, Peter Gonda, Jacob Xu, Sidharth Telang
Content-Type: text/plain; charset="utf-8"

SEV/SME code can execute prior to page table fixups for kernel
relocation. However, as with global variables accessed in
__startup_64(), the compiler is not required to generate RIP-relative
accesses for SEV/SME global variables, causing certain flavors of SEV
hosts and guests built with clang to crash during boot.

While an attempt was made to force RIP-relative addressing for certain
global SEV/SME variables via inline assembly (see
snp_cpuid_get_table() for example), RIP-relative addressing must be
pervasively enforced for SEV/SME global variables when accessed prior
to page table fixups.

__startup_64() already handles this issue for select non-SEV/SME
global variables using fixup_pointer(), which adjusts the pointer
relative to a `physaddr` argument. To avoid having to pass around this
`physaddr` argument across all functions needing to apply pointer
fixups, this patch introduces the macro GET_RIP_RELATIVE_PTR() (an
abstraction of the existing snp_cpuid_get_table()), which generates an
RIP-relative pointer to a passed variable. Similarly,
PTR_TO_RIP_RELATIVE_PTR() is introduced to fix up an existing pointer
value with RIP-relative logic.
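
The intended usage pattern is sketched below with a hypothetical
global; `example_var` and `get_example_var_fixup()` are illustrative
only, and the real conversions are in the diff:

	static u64 example_var;

	static u64 get_example_var_fixup(void)
	{
		/*
		 * GET_RIP_RELATIVE_PTR() expands to an RIP-relative
		 * "lea example_var(%rip), %reg", so the load below is
		 * correct both before and after the kernel relocation
		 * fixups.
		 */
		return *((u64 *) GET_RIP_RELATIVE_PTR(example_var));
	}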

Applying these macros to early SEV/SME code (alongside Adam Dunlap's
necessary "[PATCH v2] x86/asm: Force native_apic_mem_read to use mov")
enables previously failing boots of clang builds to succeed while
preserving successful boots of gcc builds. Tested with and without
SEV, SEV-ES, SEV-SNP enabled in guests built via both gcc and clang.

Fixes: 95d33bfaa3e1 ("x86/sev: Register GHCB memory when SEV-SNP is active")
Fixes: ee0bfa08a345 ("x86/compressed/64: Add support for SEV-SNP CPUID table in #VC handlers")
Fixes: 1cd9c22fee3a ("x86/mm/encrypt: Move page table helpers into separate translation unit")
Fixes: c9f09539e16e ("x86/head/64: Check SEV encryption before switching to kernel page-table")
Fixes: b577f542f93c ("x86/coco: Add API to handle encryption mask")
Tested-by: Kevin Loughlin <kevinloughlin@google.com>
Signed-off-by: Kevin Loughlin <kevinloughlin@google.com>
---
 arch/x86/coco/core.c               | 22 +++++++----
 arch/x86/include/asm/mem_encrypt.h | 37 +++++++++++++++++-
 arch/x86/kernel/head64.c           | 22 ++++++-----
 arch/x86/kernel/head_64.S          |  4 +-
 arch/x86/kernel/sev-shared.c       | 63 ++++++++++++++++--------------
 arch/x86/kernel/sev.c              | 15 +++++--
 arch/x86/mm/mem_encrypt_identity.c | 50 ++++++++++++------------
 7 files changed, 136 insertions(+), 77 deletions(-)
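
Note: at the instruction level, the fix amounts to replacing absolute
addressing with RIP-relative addressing, as in the head_64.S hunk
below. Illustrative only (not part of the diff):

	movq	sme_me_mask, %rax	# absolute VA; wrong while identity-mapped
	movq	sme_me_mask(%rip), %rax	# RIP-relative; correct at any load address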

diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c
index eeec9986570e..8c45b5643f48 100644
--- a/arch/x86/coco/core.c
+++ b/arch/x86/coco/core.c
@@ -5,6 +5,11 @@
  * Copyright (C) 2021 Advanced Micro Devices, Inc.
  *
  * Author: Tom Lendacky
+ *
+ * WARNING!!
+ * Select functions in this file can execute prior to page table fixups and thus
+ * require pointer fixups for global variable accesses. See WARNING in
+ * arch/x86/kernel/head64.c.
  */
 
 #include
@@ -61,33 +66,34 @@ static __maybe_unused __always_inline bool amd_cc_platform_vtom(enum cc_attr att
 static bool noinstr amd_cc_platform_has(enum cc_attr attr)
 {
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+	const u64 sev_status_fixed_up = sev_get_status_fixup();
 
-	if (sev_status & MSR_AMD64_SNP_VTOM)
+	if (sev_status_fixed_up & MSR_AMD64_SNP_VTOM)
 		return amd_cc_platform_vtom(attr);
 
 	switch (attr) {
 	case CC_ATTR_MEM_ENCRYPT:
-		return sme_me_mask;
+		return sme_get_me_mask_fixup();
 
 	case CC_ATTR_HOST_MEM_ENCRYPT:
-		return sme_me_mask && !(sev_status & MSR_AMD64_SEV_ENABLED);
+		return sme_get_me_mask_fixup() && !(sev_status_fixed_up & MSR_AMD64_SEV_ENABLED);
 
 	case CC_ATTR_GUEST_MEM_ENCRYPT:
-		return sev_status & MSR_AMD64_SEV_ENABLED;
+		return sev_status_fixed_up & MSR_AMD64_SEV_ENABLED;
 
 	case CC_ATTR_GUEST_STATE_ENCRYPT:
-		return sev_status & MSR_AMD64_SEV_ES_ENABLED;
+		return sev_status_fixed_up & MSR_AMD64_SEV_ES_ENABLED;
 
 	/*
 	 * With SEV, the rep string I/O instructions need to be unrolled
 	 * but SEV-ES supports them through the #VC handler.
 	 */
 	case CC_ATTR_GUEST_UNROLL_STRING_IO:
-		return (sev_status & MSR_AMD64_SEV_ENABLED) &&
-		       !(sev_status & MSR_AMD64_SEV_ES_ENABLED);
+		return (sev_status_fixed_up & MSR_AMD64_SEV_ENABLED) &&
+		       !(sev_status_fixed_up & MSR_AMD64_SEV_ES_ENABLED);
 
 	case CC_ATTR_GUEST_SEV_SNP:
-		return sev_status & MSR_AMD64_SEV_SNP_ENABLED;
+		return sev_status_fixed_up & MSR_AMD64_SEV_SNP_ENABLED;
 
 	default:
 		return false;
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 359ada486fa9..d007050a0edc 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -17,6 +17,34 @@
 
 #include
 
+/*
+ * Generates an RIP-relative pointer to a data variable "var".
+ * This macro can be used to safely access global data variables prior to kernel
+ * relocation, similar to fixup_pointer() in arch/x86/kernel/head64.c.
+ */
+#define GET_RIP_RELATIVE_PTR(var)				\
+({								\
+	void *rip_rel_ptr;					\
+	asm ("lea "#var"(%%rip), %0"				\
+	     : "=r" (rip_rel_ptr)				\
+	     : "p" (&var));					\
+	rip_rel_ptr;						\
+})
+
+/*
+ * Converts an existing pointer "ptr" to an RIP-relative pointer.
+ * This macro can be used to safely access global pointers prior to kernel
+ * relocation, similar to fixup_pointer() in arch/x86/kernel/head64.c.
+ */
+#define PTR_TO_RIP_RELATIVE_PTR(ptr)				\
+({								\
+	void *rip_rel_ptr;					\
+	asm ("lea "#ptr"(%%rip), %0"				\
+	     : "=r" (rip_rel_ptr)				\
+	     : "p" (ptr));					\
+	rip_rel_ptr;						\
+})
+
 #ifdef CONFIG_X86_MEM_ENCRYPT
 void __init mem_encrypt_init(void);
 void __init mem_encrypt_setup_arch(void);
@@ -106,9 +134,14 @@ void add_encrypt_protection_map(void);
 
 extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
 
-static inline u64 sme_get_me_mask(void)
+static inline u64 sme_get_me_mask_fixup(void)
+{
+	return *((u64 *) GET_RIP_RELATIVE_PTR(sme_me_mask));
+}
+
+static inline u64 sev_get_status_fixup(void)
 {
-	return sme_me_mask;
+	return *((u64 *) GET_RIP_RELATIVE_PTR(sev_status));
 }
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index dc0956067944..8df7a198094d 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -130,6 +130,7 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
 {
 	unsigned long vaddr, vaddr_end;
 	int i;
+	const u64 sme_me_mask_fixed_up = sme_get_me_mask_fixup();
 
 	/* Encrypt the kernel and related (if SME is active) */
 	sme_encrypt_kernel(bp);
@@ -140,7 +141,7 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
 	 * there is no need to zero it after changing the memory encryption
 	 * attribute.
 	 */
-	if (sme_get_me_mask()) {
+	if (sme_me_mask_fixed_up) {
 		vaddr = (unsigned long)__start_bss_decrypted;
 		vaddr_end = (unsigned long)__end_bss_decrypted;
 
@@ -158,7 +159,7 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
 			early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr), PTRS_PER_PMD);
 
 		i = pmd_index(vaddr);
-		pmd[i] -= sme_get_me_mask();
+		pmd[i] -= sme_me_mask_fixed_up;
 	}
 }
 
@@ -166,14 +167,16 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
 	 * Return the SME encryption mask (if SME is active) to be used as a
 	 * modifier for the initial pgdir entry programmed into CR3.
 	 */
-	return sme_get_me_mask();
+	return sme_me_mask_fixed_up;
 }
 
-/* Code in __startup_64() can be relocated during execution, but the compiler
+/*
+ * WARNING!!
+ * Code in __startup_64() can be relocated during execution, but the compiler
  * doesn't have to generate PC-relative relocations when accessing globals from
  * that function. Clang actually does not generate them, which leads to
  * boot-time crashes. To work around this problem, every global pointer must
- * be adjusted using fixup_pointer().
+ * be adjusted using fixup_pointer() or GET_RIP_RELATIVE_PTR().
  */
 unsigned long __head __startup_64(unsigned long physaddr,
 				  struct boot_params *bp)
@@ -188,6 +191,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	bool la57;
 	int i;
 	unsigned int *next_pgt_ptr;
+	const u64 sme_me_mask_fixed_up = sme_get_me_mask_fixup();
 
 	la57 = check_la57_support(physaddr);
 
@@ -206,7 +210,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 		for (;;);
 
 	/* Include the SME encryption mask in the fixup value */
-	load_delta += sme_get_me_mask();
+	load_delta += sme_me_mask_fixed_up;
 
 	/* Fixup the physical addresses in the page table */
 
@@ -242,7 +246,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	pud = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr);
 	pmd = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr);
 
-	pgtable_flags = _KERNPG_TABLE_NOENC + sme_get_me_mask();
+	pgtable_flags = _KERNPG_TABLE_NOENC + sme_me_mask_fixed_up;
 
 	if (la57) {
 		p4d = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++],
@@ -269,7 +273,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	/* Filter out unsupported __PAGE_KERNEL_* bits: */
 	mask_ptr = fixup_pointer(&__supported_pte_mask, physaddr);
 	pmd_entry &= *mask_ptr;
-	pmd_entry += sme_get_me_mask();
+	pmd_entry += sme_me_mask_fixed_up;
 	pmd_entry += physaddr;
 
 	for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
@@ -313,7 +317,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	 * Fixup phys_base - remove the memory encryption mask to obtain
 	 * the true physical address.
 	 */
-	*fixup_long(&phys_base, physaddr) += load_delta - sme_get_me_mask();
+	*fixup_long(&phys_base, physaddr) += load_delta - sme_me_mask_fixed_up;
 
 	return sme_postprocess_startup(bp, pmd);
 }
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index d4918d03efb4..b9e52cee6e00 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -176,9 +176,11 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 	/*
 	 * Retrieve the modifier (SME encryption mask if SME is active) to be
 	 * added to the initial pgdir entry that will be programmed into CR3.
+	 * Since we may not have completed page table fixups, use RIP-relative
+	 * addressing for sme_me_mask.
 	 */
 #ifdef CONFIG_AMD_MEM_ENCRYPT
-	movq	sme_me_mask, %rax
+	movq	sme_me_mask(%rip), %rax
 #else
 	xorq	%rax, %rax
 #endif
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 1d24ec679915..e71752c990ef 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -7,6 +7,11 @@
  * This file is not compiled stand-alone. It contains code shared
  * between the pre-decompression boot code and the running Linux kernel
  * and is included directly into both code-bases.
+ *
+ * WARNING!!
+ * Select functions in this file can execute prior to page table fixups and thus
+ * require pointer fixups for global variable accesses. See WARNING in
+ * arch/x86/kernel/head64.c.
  */
 
 #ifndef __BOOT_COMPRESSED
@@ -110,8 +115,9 @@ static void __noreturn sev_es_terminate(unsigned int set, unsigned int reason)
 static u64 get_hv_features(void)
 {
 	u64 val;
+	const u16 *ghcb_version_ptr = (const u16 *) GET_RIP_RELATIVE_PTR(ghcb_version);
 
-	if (ghcb_version < 2)
+	if (*ghcb_version_ptr < 2)
 		return 0;
 
 	sev_es_wr_ghcb_msr(GHCB_MSR_HV_FT_REQ);
@@ -143,6 +149,7 @@ static void snp_register_ghcb_early(unsigned long paddr)
 static bool sev_es_negotiate_protocol(void)
 {
 	u64 val;
+	u16 *ghcb_version_ptr;
 
 	/* Do the GHCB protocol version negotiation */
 	sev_es_wr_ghcb_msr(GHCB_MSR_SEV_INFO_REQ);
@@ -156,7 +163,8 @@ static bool sev_es_negotiate_protocol(void)
 	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTOCOL_MAX)
 		return false;
 
-	ghcb_version = min_t(size_t, GHCB_MSR_PROTO_MAX(val), GHCB_PROTOCOL_MAX);
+	ghcb_version_ptr = (u16 *) GET_RIP_RELATIVE_PTR(ghcb_version);
+	*ghcb_version_ptr = min_t(size_t, GHCB_MSR_PROTO_MAX(val), GHCB_PROTOCOL_MAX);
 
 	return true;
 }
@@ -318,23 +326,6 @@ static int sev_cpuid_hv(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpuid
 		       : __sev_cpuid_hv_msr(leaf);
 }
 
-/*
- * This may be called early while still running on the initial identity
- * mapping. Use RIP-relative addressing to obtain the correct address
- * while running with the initial identity mapping as well as the
- * switch-over to kernel virtual addresses later.
- */
-static const struct snp_cpuid_table *snp_cpuid_get_table(void)
-{
-	void *ptr;
-
-	asm ("lea cpuid_table_copy(%%rip), %0"
-	     : "=r" (ptr)
-	     : "p" (&cpuid_table_copy));
-
-	return ptr;
-}
-
 /*
  * The SNP Firmware ABI, Revision 0.9, Section 7.1, details the use of
  * XCR0_IN and XSS_IN to encode multiple versions of 0xD subfunctions 0
@@ -357,7 +348,8 @@ static const struct snp_cpuid_table *snp_cpuid_get_table(void)
  */
 static u32 snp_cpuid_calc_xsave_size(u64 xfeatures_en, bool compacted)
 {
-	const struct snp_cpuid_table *cpuid_table = snp_cpuid_get_table();
+	const struct snp_cpuid_table *cpuid_table = (const struct
+		snp_cpuid_table *) GET_RIP_RELATIVE_PTR(cpuid_table_copy);
 	u64 xfeatures_found = 0;
 	u32 xsave_size = 0x240;
 	int i;
@@ -394,7 +386,8 @@ static u32 snp_cpuid_calc_xsave_size(u64 xfeatures_en, bool compacted)
 static bool snp_cpuid_get_validated_func(struct cpuid_leaf *leaf)
 {
-	const struct snp_cpuid_table *cpuid_table = snp_cpuid_get_table();
+	const struct snp_cpuid_table *cpuid_table = (const struct
+		snp_cpuid_table *) GET_RIP_RELATIVE_PTR(cpuid_table_copy);
 	int i;
 
 	for (i = 0; i < cpuid_table->count; i++) {
@@ -530,7 +523,9 @@ static int snp_cpuid_postprocess(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
  */
 static int snp_cpuid(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpuid_leaf *leaf)
 {
-	const struct snp_cpuid_table *cpuid_table = snp_cpuid_get_table();
+	const struct snp_cpuid_table *cpuid_table = (const struct
+		snp_cpuid_table *) GET_RIP_RELATIVE_PTR(cpuid_table_copy);
+	const u32 *cpuid_std_range_max_ptr, *cpuid_hyp_range_max_ptr, *cpuid_ext_range_max_ptr;
 
 	if (!cpuid_table->count)
 		return -EOPNOTSUPP;
@@ -555,10 +550,14 @@ static int snp_cpuid(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpuid_le
 		 */
 		leaf->eax = leaf->ebx = leaf->ecx = leaf->edx = 0;
 
+		cpuid_std_range_max_ptr = (const u32 *) GET_RIP_RELATIVE_PTR(cpuid_std_range_max);
+		cpuid_hyp_range_max_ptr = (const u32 *) GET_RIP_RELATIVE_PTR(cpuid_hyp_range_max);
+		cpuid_ext_range_max_ptr = (const u32 *) GET_RIP_RELATIVE_PTR(cpuid_ext_range_max);
+
		/* Skip
 post-processing for out-of-range zero leafs. */
-		if (!(leaf->fn <= cpuid_std_range_max ||
-		      (leaf->fn >= 0x40000000 && leaf->fn <= cpuid_hyp_range_max) ||
-		      (leaf->fn >= 0x80000000 && leaf->fn <= cpuid_ext_range_max)))
+		if (!(leaf->fn <= *cpuid_std_range_max_ptr ||
+		      (leaf->fn >= 0x40000000 && leaf->fn <= *cpuid_hyp_range_max_ptr) ||
+		      (leaf->fn >= 0x80000000 && leaf->fn <= *cpuid_ext_range_max_ptr)))
 			return 0;
 	}
 
@@ -1046,6 +1045,7 @@ static struct cc_blob_sev_info *find_cc_blob_setup_data(struct boot_params *bp)
 static void __init setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
 {
 	const struct snp_cpuid_table *cpuid_table_fw, *cpuid_table;
+	u32 *cpuid_std_range_max_ptr, *cpuid_hyp_range_max_ptr, *cpuid_ext_range_max_ptr;
 	int i;
 
 	if (!cc_info || !cc_info->cpuid_phys || cc_info->cpuid_len < PAGE_SIZE)
@@ -1055,19 +1055,24 @@ static void __init setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
 	if (!cpuid_table_fw->count || cpuid_table_fw->count > SNP_CPUID_COUNT_MAX)
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_CPUID);
 
-	cpuid_table = snp_cpuid_get_table();
+	cpuid_table = (const struct snp_cpuid_table *) GET_RIP_RELATIVE_PTR(
+		cpuid_table_copy);
 	memcpy((void *)cpuid_table, cpuid_table_fw, sizeof(*cpuid_table));
 
+	cpuid_std_range_max_ptr = (u32 *) GET_RIP_RELATIVE_PTR(cpuid_std_range_max);
+	cpuid_hyp_range_max_ptr = (u32 *) GET_RIP_RELATIVE_PTR(cpuid_hyp_range_max);
+	cpuid_ext_range_max_ptr = (u32 *) GET_RIP_RELATIVE_PTR(cpuid_ext_range_max);
+
 	/* Initialize CPUID ranges for range-checking. */
 	for (i = 0; i < cpuid_table->count; i++) {
 		const struct snp_cpuid_fn *fn = &cpuid_table->fn[i];
 
 		if (fn->eax_in == 0x0)
-			cpuid_std_range_max = fn->eax;
+			*cpuid_std_range_max_ptr = fn->eax;
 		else if (fn->eax_in == 0x40000000)
-			cpuid_hyp_range_max = fn->eax;
+			*cpuid_hyp_range_max_ptr = fn->eax;
 		else if (fn->eax_in == 0x80000000)
-			cpuid_ext_range_max = fn->eax;
+			*cpuid_ext_range_max_ptr = fn->eax;
 	}
 }
 
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index c67285824e82..c966bc511949 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -5,6 +5,11 @@
 * Copyright (C) 2019 SUSE
 *
 * Author: Joerg Roedel
+ *
+ * WARNING!!
+ * Select functions in this file can execute prior to page table fixups and thus
+ * require pointer fixups for global variable accesses. See WARNING in
+ * arch/x86/kernel/head64.c.
 */
 
 #define pr_fmt(fmt)	"SEV: " fmt
@@ -748,7 +753,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
 	 * This eliminates worries about jump tables or checking boot_cpu_data
 	 * in the cc_platform_has() function.
 	 */
-	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (!(sev_get_status_fixup() & MSR_AMD64_SEV_SNP_ENABLED))
 		return;
 
 	/*
@@ -767,7 +772,7 @@ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr
 	 * This eliminates worries about jump tables or checking boot_cpu_data
 	 * in the cc_platform_has() function.
 	 */
-	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (!(sev_get_status_fixup() & MSR_AMD64_SEV_SNP_ENABLED))
 		return;
 
	/* Ask hypervisor to mark the memory pages shared in the RMP table.
 */
@@ -2114,7 +2119,8 @@ void __init __noreturn snp_abort(void)
 
 static void dump_cpuid_table(void)
 {
-	const struct snp_cpuid_table *cpuid_table = snp_cpuid_get_table();
+	const struct snp_cpuid_table *cpuid_table = (const struct
+		snp_cpuid_table *) GET_RIP_RELATIVE_PTR(cpuid_table_copy);
 	int i = 0;
 
 	pr_info("count=%d reserved=0x%x reserved2=0x%llx\n",
@@ -2138,7 +2144,8 @@ static void dump_cpuid_table(void)
  */
 static int __init report_cpuid_table(void)
 {
-	const struct snp_cpuid_table *cpuid_table = snp_cpuid_get_table();
+	const struct snp_cpuid_table *cpuid_table = (const struct
+		snp_cpuid_table *) GET_RIP_RELATIVE_PTR(cpuid_table_copy);
 
 	if (!cpuid_table->count)
 		return 0;
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index d73aeb16417f..f4c864ea2468 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -5,6 +5,11 @@
 * Copyright (C) 2016 Advanced Micro Devices, Inc.
 *
 * Author: Tom Lendacky
+ *
+ * WARNING!!
+ * Select functions in this file can execute prior to page table fixups and thus
+ * require pointer fixups for global variable accesses. See WARNING in
+ * arch/x86/kernel/head64.c.
 */
 
 #define DISABLE_BRANCH_PROFILING
@@ -305,7 +310,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 * instrumentation or checking boot_cpu_data in the cc_platform_has()
 	 * function.
 	 */
-	if (!sme_get_me_mask() || sev_status & MSR_AMD64_SEV_ENABLED)
+	if (!sme_get_me_mask_fixup() || sev_get_status_fixup() & MSR_AMD64_SEV_ENABLED)
 		return;
 
 	/*
@@ -346,9 +351,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 * We're running identity mapped, so we must obtain the address to the
 	 * SME encryption workarea using rip-relative addressing.
 	 */
-	asm ("lea sme_workarea(%%rip), %0"
-	     : "=r" (workarea_start)
-	     : "p" (sme_workarea));
+	workarea_start = (unsigned long) PTR_TO_RIP_RELATIVE_PTR(sme_workarea);
 
 	/*
 	 * Calculate required number of workarea bytes needed:
@@ -511,7 +514,7 @@ void __init sme_enable(struct boot_params *bp)
 	unsigned long me_mask;
 	char buffer[16];
 	bool snp;
-	u64 msr;
+	u64 msr, *sme_me_mask_ptr, *sev_status_ptr;
 
 	snp = snp_init(bp);
 
@@ -542,12 +545,14 @@ void __init sme_enable(struct boot_params *bp)
 
 	me_mask = 1UL << (ebx & 0x3f);
 
+	sev_status_ptr = (u64 *) GET_RIP_RELATIVE_PTR(sev_status);
+
 	/* Check the SEV MSR whether SEV or SME is enabled */
-	sev_status = __rdmsr(MSR_AMD64_SEV);
-	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
+	*sev_status_ptr = __rdmsr(MSR_AMD64_SEV);
+	feature_mask = (*sev_status_ptr & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
 
 	/* The SEV-SNP CC blob should never be present unless SEV-SNP is enabled. */
-	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (snp && !(*sev_status_ptr & MSR_AMD64_SEV_SNP_ENABLED))
 		snp_abort();
 
 	/* Check if memory encryption is enabled */
@@ -573,7 +578,8 @@ void __init sme_enable(struct boot_params *bp)
 		return;
 	} else {
 		/* SEV state cannot be controlled by a command line option */
-		sme_me_mask = me_mask;
+		sme_me_mask_ptr = (u64 *) GET_RIP_RELATIVE_PTR(sme_me_mask);
+		*sme_me_mask_ptr = me_mask;
 		goto out;
 	}
 
@@ -582,15 +588,9 @@ void __init sme_enable(struct boot_params *bp)
 	 * identity mapped, so we must obtain the address to the SME command
 	 * line argument data using rip-relative addressing.
 	 */
-	asm ("lea sme_cmdline_arg(%%rip), %0"
-	     : "=r" (cmdline_arg)
-	     : "p" (sme_cmdline_arg));
-	asm ("lea sme_cmdline_on(%%rip), %0"
-	     : "=r" (cmdline_on)
-	     : "p" (sme_cmdline_on));
-	asm ("lea sme_cmdline_off(%%rip), %0"
-	     : "=r" (cmdline_off)
-	     : "p" (sme_cmdline_off));
+	cmdline_arg = (const char *) PTR_TO_RIP_RELATIVE_PTR(sme_cmdline_arg);
+	cmdline_on = (const char *) PTR_TO_RIP_RELATIVE_PTR(sme_cmdline_on);
+	cmdline_off = (const char *) PTR_TO_RIP_RELATIVE_PTR(sme_cmdline_off);
 
 	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT))
 		active_by_default = true;
@@ -603,16 +603,18 @@ void __init sme_enable(struct boot_params *bp)
 	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0)
 		return;
 
+	sme_me_mask_ptr = (u64 *) GET_RIP_RELATIVE_PTR(sme_me_mask);
+
 	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
-		sme_me_mask = me_mask;
+		*sme_me_mask_ptr = me_mask;
 	else if (!strncmp(buffer, cmdline_off, sizeof(buffer)))
-		sme_me_mask = 0;
+		*sme_me_mask_ptr = 0;
 	else
-		sme_me_mask = active_by_default ? me_mask : 0;
+		*sme_me_mask_ptr = active_by_default ? me_mask : 0;
 out:
-	if (sme_me_mask) {
-		physical_mask &= ~sme_me_mask;
+	if (*sme_me_mask_ptr) {
+		physical_mask &= ~(*sme_me_mask_ptr);
 		cc_vendor = CC_VENDOR_AMD;
-		cc_set_mask(sme_me_mask);
+		cc_set_mask(*sme_me_mask_ptr);
 	}
 }
-- 
2.43.0.275.g3460e3d667-goog