From: Ard Biesheuvel <ardb@kernel.org>
Commit
09d35045cd0f ("x86/sev: Avoid WARN()s and panic()s in early boot code")
replaced with a deadloop a panic() that could potentially trigger before
the kernel is even mapped, to ensure that execution does not proceed when
the condition in question hits.
As Tom suggests, it is better to terminate and return to the hypervisor
in this case, using a newly invented failure code to describe the
failure condition.
Suggested-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/all/9ce88603-20ca-e644-2d8a-aeeaf79cde69@amd.com
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/coco/sev/core.c | 4 ++--
arch/x86/include/asm/sev-common.h | 1 +
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index 499b41953e3c..86898547056e 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -2356,8 +2356,8 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
call.rax = SVSM_CORE_CALL(SVSM_CORE_REMAP_CA);
call.rcx = pa;
ret = svsm_perform_call_protocol(&call);
- while (ret)
- cpu_relax(); /* too early to panic */
+ if (ret)
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SVSM_CA_REMAP_FAIL);
RIP_REL_REF(boot_svsm_caa) = (struct svsm_ca *)pa;
RIP_REL_REF(boot_svsm_caa_pa) = pa;
diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 50f5666938c0..577b64dda8b4 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -206,6 +206,7 @@ struct snp_psc_desc {
#define GHCB_TERM_NO_SVSM 7 /* SVSM is not advertised in the secrets page */
#define GHCB_TERM_SVSM_VMPL0 8 /* SVSM is present but has set VMPL to 0 */
#define GHCB_TERM_SVSM_CAA 9 /* SVSM is present but CAA is not page aligned */
+#define GHCB_TERM_SVSM_CA_REMAP_FAIL 10 /* SVSM is present but CA could not be remapped */
#define GHCB_RESP_CODE(v) ((v) & GHCB_MSR_INFO_MASK)
--
2.47.1.613.gc27f4b7a9f-goog
On 1/6/25 09:57, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
>
> Commit
>
> 09d35045cd0f ("x86/sev: Avoid WARN()s and panic()s in early boot code")
>
> replaced with a deadloop a panic() that could potentially trigger before
> the kernel is even mapped, to ensure that execution does not proceed when
> the condition in question hits.
>
> As Tom suggests, it is better to terminate and return to the hypervisor
> in this case, using a newly invented failure code to describe the
> failure condition.
>
> Suggested-by: Tom Lendacky <thomas.lendacky@amd.com>
> Link: https://lore.kernel.org/all/9ce88603-20ca-e644-2d8a-aeeaf79cde69@amd.com
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Sorry Ard, I hadn't realized that the series was already merged or I
would have submitted the patch myself. Thanks for doing this!
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
> arch/x86/coco/sev/core.c | 4 ++--
> arch/x86/include/asm/sev-common.h | 1 +
> 2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
> index 499b41953e3c..86898547056e 100644
> --- a/arch/x86/coco/sev/core.c
> +++ b/arch/x86/coco/sev/core.c
> @@ -2356,8 +2356,8 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
> call.rax = SVSM_CORE_CALL(SVSM_CORE_REMAP_CA);
> call.rcx = pa;
> ret = svsm_perform_call_protocol(&call);
> - while (ret)
> - cpu_relax(); /* too early to panic */
> + if (ret)
> + sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SVSM_CA_REMAP_FAIL);
>
> RIP_REL_REF(boot_svsm_caa) = (struct svsm_ca *)pa;
> RIP_REL_REF(boot_svsm_caa_pa) = pa;
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 50f5666938c0..577b64dda8b4 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -206,6 +206,7 @@ struct snp_psc_desc {
> #define GHCB_TERM_NO_SVSM 7 /* SVSM is not advertised in the secrets page */
> #define GHCB_TERM_SVSM_VMPL0 8 /* SVSM is present but has set VMPL to 0 */
> #define GHCB_TERM_SVSM_CAA 9 /* SVSM is present but CAA is not page aligned */
> +#define GHCB_TERM_SVSM_CA_REMAP_FAIL 10 /* SVSM is present but CA could not be remapped */
>
> #define GHCB_RESP_CODE(v) ((v) & GHCB_MSR_INFO_MASK)
>
On Mon, 6 Jan 2025 at 17:10, Tom Lendacky <thomas.lendacky@amd.com> wrote:
>
> On 1/6/25 09:57, Ard Biesheuvel wrote:
> > [...]
>
> Sorry Ard, I hadn't realized that the series was already merged or I
> would have submitted the patch myself. Thanks for doing this!
>
No worries
> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
>
Thanks