Date: Tue, 18 Jun 2024 09:45:00 -0000
From: "tip-bot2 for Tom Lendacky"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: Tom Lendacky, "Borislav Petkov (AMD)", x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: x86/sev] x86/sev: Use the SVSM to create a vCPU when not in VMPL0
In-Reply-To: <bcdd95ecabe9723673b9693c7f1533a2b8f17781.1717600736.git.thomas.lendacky@amd.com>
References: <bcdd95ecabe9723673b9693c7f1533a2b8f17781.1717600736.git.thomas.lendacky@amd.com>
Message-ID: <171870390005.10875.6575607009408415159.tip-bot2@tip-bot2>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

The following commit has been merged into the x86/sev branch of tip:

Commit-ID:     d2b2931f19e48c9148909c2f45bccf21a8a83cfb
Gitweb:        https://git.kernel.org/tip/d2b2931f19e48c9148909c2f45bccf21a8a83cfb
Author:        Tom Lendacky
AuthorDate:    Wed, 05 Jun 2024 10:18:48 -05:00
Committer:     Borislav Petkov (AMD)
CommitterDate: Mon, 17 Jun 2024 20:42:56 +02:00

x86/sev: Use the SVSM to create a vCPU when not in VMPL0

Using the RMPADJUST instruction, the VMSA attribute can only be changed
at VMPL0. An SVSM will be present when running at VMPL1 or a lower
privilege level.

In that case, use the SVSM_CORE_CREATE_VCPU call or the
SVSM_CORE_DESTROY_VCPU call to perform VMSA attribute changes. Use the
VMPL level supplied by the SVSM for the VMSA when starting the AP.

  [ bp: Fix typo + touchups. ]

Signed-off-by: Tom Lendacky
Signed-off-by: Borislav Petkov (AMD)
Link: https://lore.kernel.org/r/bcdd95ecabe9723673b9693c7f1533a2b8f17781.1717600736.git.thomas.lendacky@amd.com
---
 arch/x86/include/asm/sev.h |  2 ++
 arch/x86/kernel/sev.c      | 74 +++++++++++++++++++++++++++----------
 2 files changed, 56 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 874295a..1d0b1b2 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -233,6 +233,8 @@ struct svsm_call {
 #define SVSM_CORE_CALL(x)		((0ULL << 32) | (x))
 #define SVSM_CORE_REMAP_CA		0
 #define SVSM_CORE_PVALIDATE		1
+#define SVSM_CORE_CREATE_VCPU		2
+#define SVSM_CORE_DELETE_VCPU		3
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 extern void __sev_es_ist_enter(struct pt_regs *regs);
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index f1d11e7..2586ba6 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1010,22 +1010,49 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
 }
 
-static int snp_set_vmsa(void *va, bool vmsa)
+static int snp_set_vmsa(void *va, void *caa, int apic_id, bool make_vmsa)
 {
-	u64 attrs;
+	int ret;
 
-	/*
-	 * Running at VMPL0 allows the kernel to change the VMSA bit for a page
-	 * using the RMPADJUST instruction. However, for the instruction to
-	 * succeed it must target the permissions of a lesser privileged
-	 * (higher numbered) VMPL level, so use VMPL1 (refer to the RMPADJUST
-	 * instruction in the AMD64 APM Volume 3).
-	 */
-	attrs = 1;
-	if (vmsa)
-		attrs |= RMPADJUST_VMSA_PAGE_BIT;
+	if (snp_vmpl) {
+		struct svsm_call call = {};
+		unsigned long flags;
+
+		local_irq_save(flags);
+
+		call.caa = this_cpu_read(svsm_caa);
+		call.rcx = __pa(va);
+
+		if (make_vmsa) {
+			/* Protocol 0, Call ID 2 */
+			call.rax = SVSM_CORE_CALL(SVSM_CORE_CREATE_VCPU);
+			call.rdx = __pa(caa);
+			call.r8  = apic_id;
+		} else {
+			/* Protocol 0, Call ID 3 */
+			call.rax = SVSM_CORE_CALL(SVSM_CORE_DELETE_VCPU);
+		}
 
-	return rmpadjust((unsigned long)va, RMP_PG_SIZE_4K, attrs);
+		ret = svsm_perform_call_protocol(&call);
+
+		local_irq_restore(flags);
+	} else {
+		/*
+		 * If the kernel runs at VMPL0, it can change the VMSA
+		 * bit for a page using the RMPADJUST instruction.
+		 * However, for the instruction to succeed it must
+		 * target the permissions of a lesser privileged (higher
+		 * numbered) VMPL level, so use VMPL1.
+		 */
+		u64 attrs = 1;
+
+		if (make_vmsa)
+			attrs |= RMPADJUST_VMSA_PAGE_BIT;
+
+		ret = rmpadjust((unsigned long)va, RMP_PG_SIZE_4K, attrs);
+	}
+
+	return ret;
 }
 
 #define __ATTR_BASE		(SVM_SELECTOR_P_MASK | SVM_SELECTOR_S_MASK)
@@ -1059,11 +1086,11 @@ static void *snp_alloc_vmsa_page(int cpu)
 	return page_address(p + 1);
 }
 
-static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa)
+static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa, int apic_id)
 {
 	int err;
 
-	err = snp_set_vmsa(vmsa, false);
+	err = snp_set_vmsa(vmsa, NULL, apic_id, false);
 	if (err)
 		pr_err("clear VMSA page failed (%u), leaking page\n", err);
 	else
@@ -1074,6 +1101,7 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
 {
 	struct sev_es_save_area *cur_vmsa, *vmsa;
 	struct ghcb_state state;
+	struct svsm_ca *caa;
 	unsigned long flags;
 	struct ghcb *ghcb;
 	u8 sipi_vector;
@@ -1120,6 +1148,9 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
 	if (!vmsa)
 		return -ENOMEM;
 
+	/* If an SVSM is present, the SVSM per-CPU CAA will be !NULL */
+	caa = per_cpu(svsm_caa, cpu);
+
 	/* CR4 should maintain the MCE value */
 	cr4 = native_read_cr4() & X86_CR4_MCE;
 
@@ -1167,11 +1198,11 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
 	 *   VMPL level
 	 *   SEV_FEATURES (matches the SEV STATUS MSR right shifted 2 bits)
 	 */
-	vmsa->vmpl		= 0;
+	vmsa->vmpl		= snp_vmpl;
 	vmsa->sev_features	= sev_status >> 2;
 
 	/* Switch the page over to a VMSA page now that it is initialized */
-	ret = snp_set_vmsa(vmsa, true);
+	ret = snp_set_vmsa(vmsa, caa, apic_id, true);
 	if (ret) {
 		pr_err("set VMSA page failed (%u)\n", ret);
 		free_page((unsigned long)vmsa);
@@ -1187,7 +1218,10 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
 	vc_ghcb_invalidate(ghcb);
 	ghcb_set_rax(ghcb, vmsa->sev_features);
 	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_CREATION);
-	ghcb_set_sw_exit_info_1(ghcb, ((u64)apic_id << 32) | SVM_VMGEXIT_AP_CREATE);
+	ghcb_set_sw_exit_info_1(ghcb,
+				((u64)apic_id << 32) |
+				((u64)snp_vmpl << 16) |
+				SVM_VMGEXIT_AP_CREATE);
 	ghcb_set_sw_exit_info_2(ghcb, __pa(vmsa));
 
 	sev_es_wr_ghcb_msr(__pa(ghcb));
@@ -1205,13 +1239,13 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
 
 	/* Perform cleanup if there was an error */
 	if (ret) {
-		snp_cleanup_vmsa(vmsa);
+		snp_cleanup_vmsa(vmsa, apic_id);
 		vmsa = NULL;
 	}
 
 	/* Free up any previous VMSA page */
 	if (cur_vmsa)
-		snp_cleanup_vmsa(cur_vmsa);
+		snp_cleanup_vmsa(cur_vmsa, apic_id);
 
 	/* Record the current VMSA page */
 	per_cpu(sev_vmsa, cpu) = vmsa;