[PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction

Posted by Tom Lendacky 1 month, 4 weeks ago
The RMPREAD instruction returns an RMP table entry in an
architecture-defined format. This is the preferred method for examining
RMP entries.

The instruction is advertised in CPUID 0x8000001f_EAX[21]. Use this
instruction when available.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/virt/svm/sev.c            | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dd4682857c12..93620a4c5b15 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -447,6 +447,7 @@
 #define X86_FEATURE_V_TSC_AUX		(19*32+ 9) /* Virtual TSC_AUX */
 #define X86_FEATURE_SME_COHERENT	(19*32+10) /* AMD hardware-enforced cache coherency */
 #define X86_FEATURE_DEBUG_SWAP		(19*32+14) /* "debug_swap" AMD SEV-ES full debug state swap support */
+#define X86_FEATURE_RMPREAD		(19*32+21) /* RMPREAD instruction */
 #define X86_FEATURE_SVSM		(19*32+28) /* "svsm" SVSM present */
 
 /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index 103a2dd6e81d..73d4f422829a 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -301,6 +301,17 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
 {
 	struct rmpentry_raw *e;
 
+	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
+		int ret;
+
+		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
+			     : "=a" (ret)
+			     : "a" (pfn << PAGE_SHIFT), "c" (entry)
+			     : "memory", "cc");
+
+		return ret;
+	}
+
 	e = __get_rmpentry(pfn);
 	if (IS_ERR(e))
 		return PTR_ERR(e);
-- 
2.43.2
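
As background, and purely as an illustration (not part of the patch): the CPUID
bit named in the commit message, Fn8000_001F EAX[21], can be probed from
userspace with the compiler-provided <cpuid.h> helper. The kernel itself gates
on cpu_feature_enabled(X86_FEATURE_RMPREAD), as in the hunk above.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Leaf 0x8000001f: AMD memory encryption capabilities (SME/SEV). */
	if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x8000001f not available");
		return 1;
	}

	/* EAX bit 21 advertises the RMPREAD instruction. */
	printf("RMPREAD %ssupported\n", (eax & (1u << 21)) ? "" : "not ");
	return 0;
}
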
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Borislav Petkov 1 month, 1 week ago
On Mon, Sep 30, 2024 at 10:22:10AM -0500, Tom Lendacky wrote:
> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
> index 103a2dd6e81d..73d4f422829a 100644
> --- a/arch/x86/virt/svm/sev.c
> +++ b/arch/x86/virt/svm/sev.c
> @@ -301,6 +301,17 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
>  {
>  	struct rmpentry_raw *e;
>  
> +	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
> +		int ret;
> +
> +		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
> +			     : "=a" (ret)
> +			     : "a" (pfn << PAGE_SHIFT), "c" (entry)
> +			     : "memory", "cc");
> +
> +		return ret;
> +	}

I think this should be:

diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index 73d9295dd013..5500c5d64cba 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -303,12 +303,11 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
 	struct rmpentry_raw *e;
 
 	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
-		int ret;
+		int ret = pfn << PAGE_SHIFT;
 
 		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
-			     : "=a" (ret)
-			     : "a" (pfn << PAGE_SHIFT), "c" (entry)
-			     : "memory", "cc");
+			     : "+a" (ret), "+c" (entry)
+			     :: "memory", "cc");
 
 		return ret;
 	}

because "The RCX register provides the effective address of a 16-byte data
structure into which the RMP state is written."

So your %rcx is both an input and an output operand and you need to do the "+"
thing here too for that.

Same for %rax.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Tom Lendacky 1 month, 1 week ago
On 10/18/24 07:41, Borislav Petkov wrote:
> On Mon, Sep 30, 2024 at 10:22:10AM -0500, Tom Lendacky wrote:
>> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
>> index 103a2dd6e81d..73d4f422829a 100644
>> --- a/arch/x86/virt/svm/sev.c
>> +++ b/arch/x86/virt/svm/sev.c
>> @@ -301,6 +301,17 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
>>  {
>>  	struct rmpentry_raw *e;
>>  
>> +	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
>> +		int ret;
>> +
>> +		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
>> +			     : "=a" (ret)
>> +			     : "a" (pfn << PAGE_SHIFT), "c" (entry)
>> +			     : "memory", "cc");
>> +
>> +		return ret;
>> +	}
> 
> I think this should be:
> 
> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
> index 73d9295dd013..5500c5d64cba 100644
> --- a/arch/x86/virt/svm/sev.c
> +++ b/arch/x86/virt/svm/sev.c
> @@ -303,12 +303,11 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
>  	struct rmpentry_raw *e;
>  
>  	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
> -		int ret;
> +		int ret = pfn << PAGE_SHIFT;
>  
>  		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
> -			     : "=a" (ret)
> -			     : "a" (pfn << PAGE_SHIFT), "c" (entry)
> -			     : "memory", "cc");
> +			     : "+a" (ret), "+c" (entry)
> +			     :: "memory", "cc");
>  
>  		return ret;
>  	}
> 
> because "The RCX register provides the effective address of a 16-byte data
> structure into which the RMP state is written."
> 
> So your %rcx is both an input and an output operand and you need to do the "+"
> thing here too for that.

I don't think so. RCX does not change on output; the contents that RCX
points to change, but the register value does not, so the "+" is not
correct. The instruction doesn't take a memory location as part of
operands (like a MOV instruction could), which is why the "memory" clobber
is specified.

> 
> Same for %rax.

For RAX, yes, if I set "ret" to the input value then I can use "+"
specification. But the way it's coded now is also correct.

Thanks,
Tom

>
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Borislav Petkov 1 month, 1 week ago
On Fri, Oct 18, 2024 at 10:14:04AM -0500, Tom Lendacky wrote:
> I don't think so. RCX does not change on output; the contents that RCX
> points to change, but the register value does not, so the "+" is not
> correct. The instruction doesn't take a memory location as part of
> operands (like a MOV instruction could), which is why the "memory" clobber
> is specified.

Just confirmed it with my compiler guy: yes, you're right. The rule is this:
*if* RCX itself doesn't change but the memory it points to does change, then you
need the "memory" clobber. Otherwise the compiler can reorder accesses.
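
A minimal, self-contained illustration of that rule, using CLZERO purely as an
example (an unrelated AMD instruction that zeroes the cache line addressed by
RAX without modifying RAX; nothing to do with the patch):

static inline void clzero_line(void *line)
{
	/*
	 * RAX holds the address; the instruction writes the cache line it
	 * points to but leaves RAX itself unchanged, so the pointer is an
	 * input-only operand and "memory" tells the compiler that the
	 * pointed-to data changed.
	 */
	asm volatile("clzero" : : "a" (line) : "memory");
}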

> For RAX, yes, if I set "ret" to the input value then I can use "+"
> specification. But the way it's coded now is also correct.

If you set ret, it means a smaller and simpler inline asm which is always
better.

:-)

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Tom Lendacky 1 month, 1 week ago
On 10/21/24 10:41, Borislav Petkov wrote:
> On Fri, Oct 18, 2024 at 10:14:04AM -0500, Tom Lendacky wrote:
>> I don't think so. RCX does not change on output; the contents that RCX
>> points to change, but the register value does not, so the "+" is not
>> correct. The instruction doesn't take a memory location as part of
>> operands (like a MOV instruction could), which is why the "memory" clobber
>> is specified.
> 
> Just confirmed it with my compiler guy: yes, you're right. The rule is this:
> *if* RCX itself doesn't change but the memory it points to does change, then you
> need the "memory" clobber. Otherwise the compiler can reorder accesses.
> 
>> For RAX, yes, if I set "ret" to the input value then I can use "+"
>> specification. But the way it's coded now is also correct.
> 
> If you set ret, it means a smaller and simpler inline asm which is always
> better.

The input value is a 64-bit value and on output the return code is in
EAX, a 32-bit value. So the use of the "=a" (ret) for output and "a"
(pfn << PAGE_SHIFT) for input is more accurate.

It's not a complicated statement and is much clearer to me.

I can change it if you really want the "+a" thing (including changing
the ret variable to a u64), but would prefer not to do that.
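
For comparison, that "+a" variant would look roughly like this (a sketch of the
same get_rmpentry() hunk, not what the patch actually does):

	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
		/* ret has to widen to u64 to carry the 64-bit address on input. */
		u64 ret = pfn << PAGE_SHIFT;

		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
			     : "+a" (ret)
			     : "c" (entry)
			     : "memory", "cc");

		/* The 32-bit status in EAX is narrowed back to int on return. */
		return ret;
	}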

Thanks,
Tom

> 
> :-)
> 
> Thx.
>
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Borislav Petkov 1 month, 1 week ago
On Mon, Oct 21, 2024 at 12:10:55PM -0500, Tom Lendacky wrote:
> The input value is a 64-bit value and on output the return code is in
> EAX, a 32-bit value. So the use of the "=a" (ret) for output and "a"
> (pfn << PAGE_SHIFT) for input is more accurate.

Oh, they differ in width. Ok.

> It's not a complicated statement and is much clearer to me.
> 
> I can change it if you really want the "+a" thing (including changing
> the ret variable to a u64), but would prefer not to do that.

Nah, it'll get uglier if you do. Let's keep it this way.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Neeraj Upadhyay 1 month, 1 week ago

On 9/30/2024 8:52 PM, Tom Lendacky wrote:
> The RMPREAD instruction returns an RMP table entry in an
> architecture-defined format. This is the preferred method for examining
> RMP entries.
> 
> The instruction is advertised in CPUID 0x8000001f_EAX[21]. Use this
> instruction when available.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>

Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>


- Neeraj
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Borislav Petkov 1 month, 1 week ago
On Mon, Sep 30, 2024 at 10:22:10AM -0500, Tom Lendacky wrote:
> +	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
> +		int ret;
> +
> +		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
> +			     : "=a" (ret)
> +			     : "a" (pfn << PAGE_SHIFT), "c" (entry)
> +			     : "memory", "cc");
> +
> +		return ret;
> +	}
> +
>  	e = __get_rmpentry(pfn);

So dump_rmpentry() still calls this but it doesn't require the newly added
services of RMPREAD and so this is looking to be disambiguated: a function
which gives you the entry coming from RMPREAD, I guess the architectural one,
and the other one.

IOW, I am still unclear on the nomenclature:

The _raw* entries do not come from the insn but then what's the raw-ness about
them?

This convention sounds weird as it is now, I'd say.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Tom Lendacky 1 month, 1 week ago
On 10/17/24 10:26, Borislav Petkov wrote:
> On Mon, Sep 30, 2024 at 10:22:10AM -0500, Tom Lendacky wrote:
>> +	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
>> +		int ret;
>> +
>> +		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
>> +			     : "=a" (ret)
>> +			     : "a" (pfn << PAGE_SHIFT), "c" (entry)
>> +			     : "memory", "cc");
>> +
>> +		return ret;
>> +	}
>> +
>>  	e = __get_rmpentry(pfn);
> 
> So dump_rmpentry() still calls this but it doesn't require the newly added
> services of RMPREAD and so this is looking to be disambiguated: a function
> which gives you the entry coming from RMPREAD, I guess the architectural one,
> and the other one.

Right, because for debugging purposes we want to dump the raw RMP entry
that is in the RMP table, not just the information returned by RMPREAD
(since RMPREAD doesn't return everything defined in the RMP entry).

This is why dump_rmpentry() merely prints out the RMP entry as two u64
values.
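
As a rough sketch of that split (the struct and helper names here are made up
for illustration and are not the kernel's actual definitions), the raw dump
amounts to printing two opaque 64-bit words:

#include <linux/printk.h>
#include <linux/types.h>

/* Illustrative only: a raw RMP entry treated as two opaque words. */
struct raw_rmpentry_example {
	u64 lo;
	u64 hi;
};

static void dump_raw_rmpentry_example(u64 pfn, const struct raw_rmpentry_example *e)
{
	/* No field decoding: the raw layout is not architecturally fixed. */
	pr_info("PFN 0x%llx: raw RMP entry %016llx %016llx\n", pfn, e->hi, e->lo);
}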

> 
> IOW, I am still unclear on the nomenclature:
> 
> The _raw* entries do not come from the insn but then what's the raw-ness about
> them?

The raw-ness is that it is the actual data in the RMP table. The reason
for RMPREAD is that there is no guarantee that the raw data won't be
reformatted in a future program, which is why we only allow access to
the RMP entry for Milan and Genoa, where the format is known and the same.

Thanks,
Tom

> 
> This convention sounds weird as it is now, I'd say.
> 
> Thx.
>
Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
Posted by Nikunj A. Dadhania 1 month, 1 week ago
On 9/30/2024 8:52 PM, Tom Lendacky wrote:
> The RMPREAD instruction returns an RMP table entry in an
> architecture-defined format. This is the preferred method for examining
> RMP entries.
> 
> The instruction is advertised in CPUID 0x8000001f_EAX[21]. Use this
> instruction when available.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>

Reviewed-by: Nikunj A Dadhania <nikunj@amd.com>

> ---
>  arch/x86/include/asm/cpufeatures.h |  1 +
>  arch/x86/virt/svm/sev.c            | 11 +++++++++++
>  2 files changed, 12 insertions(+)
> 
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index dd4682857c12..93620a4c5b15 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -447,6 +447,7 @@
>  #define X86_FEATURE_V_TSC_AUX		(19*32+ 9) /* Virtual TSC_AUX */
>  #define X86_FEATURE_SME_COHERENT	(19*32+10) /* AMD hardware-enforced cache coherency */
>  #define X86_FEATURE_DEBUG_SWAP		(19*32+14) /* "debug_swap" AMD SEV-ES full debug state swap support */
> +#define X86_FEATURE_RMPREAD		(19*32+21) /* RMPREAD instruction */
>  #define X86_FEATURE_SVSM		(19*32+28) /* "svsm" SVSM present */
>  
>  /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
> index 103a2dd6e81d..73d4f422829a 100644
> --- a/arch/x86/virt/svm/sev.c
> +++ b/arch/x86/virt/svm/sev.c
> @@ -301,6 +301,17 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
>  {
>  	struct rmpentry_raw *e;
>  
> +	if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
> +		int ret;
> +
> +		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
> +			     : "=a" (ret)
> +			     : "a" (pfn << PAGE_SHIFT), "c" (entry)
> +			     : "memory", "cc");
> +
> +		return ret;
> +	}
> +
>  	e = __get_rmpentry(pfn);
>  	if (IS_ERR(e))
>  		return PTR_ERR(e);