[PATCH v2] x86/sev: Fix operator precedence in GHCB_MSR_VMPL_REQ_LEVEL macro

Seongmanlee posted 1 patch 7 months ago
From: leonardo-leecaprio <augustus92@kaist.ac.kr>

The GHCB_MSR_VMPL_REQ_LEVEL macro lacked parentheses around the bitmask
expression, causing the shift operation to bind too early. As a result,
when switching to VMPL2 from VMPL1 (e.g., GHCB_MSR_VMPL_REQ_LEVEL(1)),
incorrect values such as 0x000000016 were generated instead of the intended
0x100000016.

Fixes the precedence issue by grouping the masked value before applying
the shift.

Fixes: 34ff65901735 ("x86/sev: Use kernel provided SVSM Calling Areas")

Signed-off-by: Seongman Lee <augustus92@kaist.ac.kr>
---
 arch/x86/include/asm/sev-common.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index acb85b934..0020d77a0 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -116,7 +116,7 @@ enum psc_op {
 #define GHCB_MSR_VMPL_REQ		0x016
 #define GHCB_MSR_VMPL_REQ_LEVEL(v)			\
 	/* GHCBData[39:32] */				\
-	(((u64)(v) & GENMASK_ULL(7, 0) << 32) |		\
+	((((u64)(v) & GENMASK_ULL(7, 0)) << 32) |	\
 	/* GHCBDdata[11:0] */				\
 	GHCB_MSR_VMPL_REQ)
 
-- 
2.39.5 (Apple Git-154)
Re: [PATCH v2] x86/sev: Fix operator precedence in GHCB_MSR_VMPL_REQ_LEVEL macro
Posted by Borislav Petkov 7 months ago
On Sun, May 11, 2025 at 06:23:28PM +0900, Seongmanlee wrote:
> From: leonardo-leecaprio <augustus92@kaist.ac.kr>
	^^^^^^^^^^^^^

Right, when you fix the name, you need to fix the authorship too:

$ git commit --amend --author="Seongman Lee <augustus92@kaist.ac.kr>"

as this name will appear in the git history.

But no worries, I'll fix it up when applying - just something to think about
in the future.

You can also set your name in .git/config or .gitconfig and then it'll be
correct automagically.
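For reference, the identity can be set once per machine (in ~/.gitconfig) or per repository (in .git/config); the name and address below are the ones from this thread:

```shell
# Per-machine (written to ~/.gitconfig):
git config --global user.name "Seongman Lee"
git config --global user.email "augustus92@kaist.ac.kr"

# Or per-repository only (written to .git/config):
git config user.name "Seongman Lee"
```

After that, `git commit` records the correct author without needing `--amend --author=...` fixups.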

> The GHCB_MSR_VMPL_REQ_LEVEL macro lacked parentheses around the bitmask
> expression, causing the shift operation to bind too early. As a result,
> when switching to VMPL2 from VMPL1 (e.g., GHCB_MSR_VMPL_REQ_LEVEL(1)),
> incorrect values such as 0x000000016 were generated instead of the intended
> 0x100000016.
> 
> Fixes the precedence issue by grouping the masked value before applying
> the shift.
> 
> Fixes: 34ff65901735 ("x86/sev: Use kernel provided SVSM Calling Areas")
> 
> Signed-off-by: Seongman Lee <augustus92@kaist.ac.kr>
> ---
>  arch/x86/include/asm/sev-common.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index acb85b934..0020d77a0 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -116,7 +116,7 @@ enum psc_op {
>  #define GHCB_MSR_VMPL_REQ		0x016
>  #define GHCB_MSR_VMPL_REQ_LEVEL(v)			\
>  	/* GHCBData[39:32] */				\
> -	(((u64)(v) & GENMASK_ULL(7, 0) << 32) |		\
> +	((((u64)(v) & GENMASK_ULL(7, 0)) << 32) |	\
>  	/* GHCBDdata[11:0] */				\
>  	GHCB_MSR_VMPL_REQ)
>  
> -- 

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
[tip: x86/urgent] x86/sev: Fix operator precedence in GHCB_MSR_VMPL_REQ_LEVEL macro
Posted by tip-bot2 for Seongman Lee 7 months ago
The following commit has been merged into the x86/urgent branch of tip:

Commit-ID:     f7387eff4bad33d12719c66c43541c095556ae4e
Gitweb:        https://git.kernel.org/tip/f7387eff4bad33d12719c66c43541c095556ae4e
Author:        Seongman Lee <augustus92@kaist.ac.kr>
AuthorDate:    Sun, 11 May 2025 18:23:28 +09:00
Committer:     Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Sun, 11 May 2025 11:38:03 +02:00

x86/sev: Fix operator precedence in GHCB_MSR_VMPL_REQ_LEVEL macro

The GHCB_MSR_VMPL_REQ_LEVEL macro lacked parentheses around the bitmask
expression, causing the shift operation to bind too early. As a result,
when requesting VMPL1 (e.g., GHCB_MSR_VMPL_REQ_LEVEL(1)), incorrect
values such as 0x000000016 were generated instead of the intended
0x100000016 (the requested VMPL level is specified in GHCBData[39:32]).

Fix the precedence issue by grouping the masked value before applying
the shift.

  [ bp: Massage commit message. ]

Fixes: 34ff65901735 ("x86/sev: Use kernel provided SVSM Calling Areas")
Signed-off-by: Seongman Lee <augustus92@kaist.ac.kr>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250511092329.12680-1-cloudlee1719@gmail.com
---
 arch/x86/include/asm/sev-common.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index acb85b9..0020d77 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -116,7 +116,7 @@ enum psc_op {
 #define GHCB_MSR_VMPL_REQ		0x016
 #define GHCB_MSR_VMPL_REQ_LEVEL(v)			\
 	/* GHCBData[39:32] */				\
-	(((u64)(v) & GENMASK_ULL(7, 0) << 32) |		\
+	((((u64)(v) & GENMASK_ULL(7, 0)) << 32) |	\
 	/* GHCBDdata[11:0] */				\
 	GHCB_MSR_VMPL_REQ)