Date: Mon, 12 May 2025 21:08:44 +0200
In-Reply-To: <20250512190834.332684-23-ardb+git@google.com>
References: <20250512190834.332684-23-ardb+git@google.com>
Message-ID: <20250512190834.332684-32-ardb+git@google.com>
Subject: [RFT PATCH v3 09/21] x86/sev: Pass SVSM calling area down to early page state change API
From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org
Cc: linux-efi@vger.kernel.org, x86@kernel.org, Ard Biesheuvel, Borislav Petkov, Ingo Molnar, Dionna Amalie Glaze, Kevin Loughlin, Tom Lendacky

From: Ard Biesheuvel

The early page state change API is mostly used very early, when only the
boot-time SVSM calling area is in use. However, this API is also called by
the kexec finishing code, which runs very late, and potentially from a
different CPU (which uses a different calling area).

To avoid pulling the per-CPU SVSM calling area pointers and related SEV
state into the startup code, refactor the page state change API so that the
SVSM calling area's virtual and physical addresses can be provided by the
caller.

No functional change intended.
Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/sev.c      | 12 +++++++++---
 arch/x86/boot/startup/sev-shared.c  | 18 ++++++++++--------
 arch/x86/boot/startup/sev-startup.c | 11 +++++++----
 arch/x86/coco/sev/core.c            |  3 ++-
 arch/x86/include/asm/sev-internal.h |  3 ++-
 5 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 7a01eef9ae01..04bc39d065ff 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -68,7 +68,9 @@ void snp_set_page_private(unsigned long paddr)
 		return;
 
 	msr = sev_es_rd_ghcb_msr();
-	__page_state_change(paddr, SNP_PAGE_STATE_PRIVATE);
+	__page_state_change(paddr, SNP_PAGE_STATE_PRIVATE,
+			    (struct svsm_ca *)boot_svsm_caa_pa,
+			    boot_svsm_caa_pa);
 	sev_es_wr_ghcb_msr(msr);
 }
 
@@ -80,7 +82,9 @@ void snp_set_page_shared(unsigned long paddr)
 		return;
 
 	msr = sev_es_rd_ghcb_msr();
-	__page_state_change(paddr, SNP_PAGE_STATE_SHARED);
+	__page_state_change(paddr, SNP_PAGE_STATE_SHARED,
+			    (struct svsm_ca *)boot_svsm_caa_pa,
+			    boot_svsm_caa_pa);
 	sev_es_wr_ghcb_msr(msr);
 }
 
@@ -109,7 +113,9 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
 	u64 msr = sev_es_rd_ghcb_msr();
 
 	for (phys_addr_t pa = start; pa < end; pa += PAGE_SIZE)
-		__page_state_change(pa, SNP_PAGE_STATE_PRIVATE);
+		__page_state_change(pa, SNP_PAGE_STATE_PRIVATE,
+				    (struct svsm_ca *)boot_svsm_caa_pa,
+				    boot_svsm_caa_pa);
 	sev_es_wr_ghcb_msr(msr);
 }
 
diff --git a/arch/x86/boot/startup/sev-shared.c b/arch/x86/boot/startup/sev-shared.c
index dae770327b50..70ad9a0aa023 100644
--- a/arch/x86/boot/startup/sev-shared.c
+++ b/arch/x86/boot/startup/sev-shared.c
@@ -538,7 +538,8 @@ static void __head setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
 	}
 }
 
-static void __head svsm_pval_4k_page(unsigned long paddr, bool validate)
+static void __head svsm_pval_4k_page(unsigned long paddr, bool validate,
+				     struct svsm_ca *caa, u64 caa_pa)
 {
 	struct svsm_pvalidate_call *pc;
 	struct svsm_call call = {};
@@ -552,10 +553,10 @@ static void __head svsm_pval_4k_page(unsigned long paddr, bool validate)
 	 */
 	flags = native_local_irq_save();
 
-	call.caa = svsm_get_caa();
+	call.caa = caa;
 
 	pc = (struct svsm_pvalidate_call *)call.caa->svsm_buffer;
-	pc_pa = svsm_get_caa_pa() + offsetof(struct svsm_ca, svsm_buffer);
+	pc_pa = caa_pa + offsetof(struct svsm_ca, svsm_buffer);
 
 	pc->num_entries = 1;
 	pc->cur_index   = 0;
@@ -578,12 +579,12 @@ static void __head svsm_pval_4k_page(unsigned long paddr, bool validate)
 }
 
 static void __head pvalidate_4k_page(unsigned long vaddr, unsigned long paddr,
-				     bool validate)
+				     bool validate, struct svsm_ca *caa, u64 caa_pa)
 {
 	int ret;
 
 	if (snp_vmpl) {
-		svsm_pval_4k_page(paddr, validate);
+		svsm_pval_4k_page(paddr, validate, caa, caa_pa);
 	} else {
 		ret = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
 		if (ret)
@@ -591,7 +592,8 @@ static void __head pvalidate_4k_page(unsigned long vaddr, unsigned long paddr,
 	}
 }
 
-static void __head __page_state_change(unsigned long paddr, enum psc_op op)
+static void __head __page_state_change(unsigned long paddr, enum psc_op op,
+				       struct svsm_ca *caa, u64 caa_pa)
 {
 	u64 val;
 
@@ -600,7 +602,7 @@ static void __head __page_state_change(unsigned long paddr, enum psc_op op)
 	 * state change in the RMP table.
	 */
 	if (op == SNP_PAGE_STATE_SHARED)
-		pvalidate_4k_page(paddr, paddr, false);
+		pvalidate_4k_page(paddr, paddr, false, caa, caa_pa);
 
 	/* Issue VMGEXIT to change the page state in RMP table. */
 	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
@@ -616,7 +618,7 @@ static void __head __page_state_change(unsigned long paddr, enum psc_op op)
 	 * consistent with the RMP entry.
 	 */
 	if (op == SNP_PAGE_STATE_PRIVATE)
-		pvalidate_4k_page(paddr, paddr, true);
+		pvalidate_4k_page(paddr, paddr, true, caa, caa_pa);
 }
 
 /*
diff --git a/arch/x86/boot/startup/sev-startup.c b/arch/x86/boot/startup/sev-startup.c
index 28bf68753580..7a3ad17d06f6 100644
--- a/arch/x86/boot/startup/sev-startup.c
+++ b/arch/x86/boot/startup/sev-startup.c
@@ -132,7 +132,8 @@ noinstr void __sev_put_ghcb(struct ghcb_state *state)
 
 void __head
 early_set_pages_state(unsigned long vaddr, unsigned long paddr,
-		      unsigned long npages, enum psc_op op)
+		      unsigned long npages, enum psc_op op,
+		      struct svsm_ca *caa, u64 caa_pa)
 {
 	unsigned long paddr_end;
 
@@ -142,7 +143,7 @@ early_set_pages_state(unsigned long vaddr, unsigned long paddr,
 	paddr_end = paddr + (npages << PAGE_SHIFT);
 
 	while (paddr < paddr_end) {
-		__page_state_change(paddr, op);
+		__page_state_change(paddr, op, caa, caa_pa);
 
 		vaddr += PAGE_SIZE;
 		paddr += PAGE_SIZE;
@@ -165,7 +166,8 @@ void __head early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
 	 * Ask the hypervisor to mark the memory pages as private in the RMP
 	 * table.
 	 */
-	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_PRIVATE);
+	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_PRIVATE,
+			      svsm_get_caa(), svsm_get_caa_pa());
 }
 
 void __head early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
@@ -181,7 +183,8 @@ void __head early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr
 		return;
 
 	/* Ask hypervisor to mark the memory pages shared in the RMP table. */
-	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_SHARED);
+	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_SHARED,
+			      svsm_get_caa(), svsm_get_caa_pa());
 }
 
 /*
diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index 0e0ddf4c92aa..39bbbea09c24 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -584,7 +584,8 @@ static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
 
 	/* Use the MSR protocol when a GHCB is not available. */
 	if (!boot_ghcb)
-		return early_set_pages_state(vaddr, __pa(vaddr), npages, op);
+		return early_set_pages_state(vaddr, __pa(vaddr), npages, op,
+					     svsm_get_caa(), svsm_get_caa_pa());
 
 	vaddr = vaddr & PAGE_MASK;
 	vaddr_end = vaddr + (npages << PAGE_SHIFT);
diff --git a/arch/x86/include/asm/sev-internal.h b/arch/x86/include/asm/sev-internal.h
index e3b203c280aa..08e2cfdef512 100644
--- a/arch/x86/include/asm/sev-internal.h
+++ b/arch/x86/include/asm/sev-internal.h
@@ -55,7 +55,8 @@ DECLARE_PER_CPU(struct sev_es_runtime_data*, runtime_data);
 DECLARE_PER_CPU(struct sev_es_save_area *, sev_vmsa);
 
 void early_set_pages_state(unsigned long vaddr, unsigned long paddr,
-			   unsigned long npages, enum psc_op op);
+			   unsigned long npages, enum psc_op op,
+			   struct svsm_ca *ca, u64 caa_pa);
 
 DECLARE_PER_CPU(struct svsm_ca *, svsm_caa);
 DECLARE_PER_CPU(u64, svsm_caa_pa);
-- 
2.49.0.1045.g170613ef41-goog
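
[Editor's note, not part of the patch: below is a standalone, simplified
sketch of the calling convention the commit message describes, where the
caller hands the SVSM calling area (virtual pointer plus physical address)
down to the page state change helper instead of the helper looking up
per-CPU SEV state itself. The types, function names, page addresses and the
userspace printf are hypothetical stand-ins; in the real code the physical
address comes from boot_svsm_caa_pa or svsm_get_caa_pa(), not from casting
a pointer.]

/*
 * Standalone sketch with stand-in types (not kernel code).
 */
#include <stdint.h>
#include <stdio.h>

struct svsm_ca {			/* stand-in for the kernel's struct svsm_ca */
	uint8_t svsm_buffer[4096];
};

enum psc_op { PAGE_STATE_PRIVATE, PAGE_STATE_SHARED };

/*
 * The low-level helper no longer decides which calling area to use; both
 * its virtual and physical addresses are supplied by the caller.
 */
static void page_state_change(uint64_t paddr, enum psc_op op,
			      struct svsm_ca *caa, uint64_t caa_pa)
{
	printf("PSC %-7s PA 0x%llx via calling area %p (PA 0x%llx)\n",
	       op == PAGE_STATE_PRIVATE ? "private" : "shared",
	       (unsigned long long)paddr, (void *)caa,
	       (unsigned long long)caa_pa);
}

/* Early boot: only the boot-time calling area exists. */
static struct svsm_ca boot_ca;

/* Late (e.g. kexec) path: each CPU has its own calling area. */
static struct svsm_ca percpu_ca[2];

int main(void)
{
	/* An early caller passes the boot calling area explicitly ... */
	page_state_change(0x100000, PAGE_STATE_PRIVATE,
			  &boot_ca, (uint64_t)(uintptr_t)&boot_ca);

	/* ... while a late caller passes the current CPU's calling area. */
	unsigned int cpu = 1;
	page_state_change(0x200000, PAGE_STATE_SHARED,
			  &percpu_ca[cpu], (uint64_t)(uintptr_t)&percpu_ca[cpu]);

	return 0;
}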