[PATCH 0/6] Add RMPOPT support.
Posted by Ashish Kalra 1 month, 2 weeks ago
From: Ashish Kalra <ashish.kalra@amd.com>

In the SEV-SNP architecture, the hypervisor and non-SNP guests are subject
to RMP checks on writes to protect the integrity of SEV-SNP guest memory.

RMPOPT is a new instruction designed to minimize the performance
overhead of RMP checks for the hypervisor and non-SNP guests. It enables
optimizations whereby the RMP checks can be skipped if 1GB regions of
memory are known not to contain any SNP guest memory.

This patch series adds support to enable the RMPOPT optimizations globally
for all system RAM, and allows RMPUPDATE to disable those optimizations
as SNP guests are launched.

Additionally, it adds a configfs interface to re-enable RMP optimizations
at runtime and a debugfs interface to report per-CPU RMPOPT status across
all system RAM.

Ashish Kalra (6):
  x86/cpufeatures: Add X86_FEATURE_AMD_RMPOPT feature flag
  x86/sev: Add support for enabling RMPOPT
  x86/sev: Add support for the RMPOPT instruction
  x86/sev: Add interface to re-enable RMP optimizations
  x86/sev: Use configfs to re-enable RMP optimizations
  x86/sev: Add debugfs support for RMPOPT

 arch/x86/include/asm/cpufeatures.h |   2 +-
 arch/x86/include/asm/msr-index.h   |   3 +
 arch/x86/include/asm/sev.h         |   2 +
 arch/x86/kernel/cpu/scattered.c    |   1 +
 arch/x86/kvm/Kconfig               |   1 +
 arch/x86/virt/svm/sev.c            | 471 +++++++++++++++++++++++++++++
 drivers/crypto/ccp/sev-dev.c       |   4 +
 7 files changed, 483 insertions(+), 1 deletion(-)

-- 
2.43.0
Re: [PATCH 0/6] Add RMPOPT support.
Posted by Dave Hansen 1 month, 2 weeks ago
On 2/17/26 12:09, Ashish Kalra wrote:
> RMPOPT is a new instruction designed to minimize the performance
> overhead of RMP checks for the hypervisor and non-SNP guests. 

This needs a little theory of operation for the new instruction. It
seems like it will enable optimizations all by itself. You just call it,
and it figures out when the CPU can optimize things. The CPU also
figures out when the optimization must be flipped off.

That's not awful.

To be honest, though, I think this is misdesigned. Shouldn't the CPU
*boot* in a state where it is optimized? Why should software have to
tell it that coming out of reset, there is no SEV-SNP memory?
Re: [PATCH 0/6] Add RMPOPT support.
Posted by Kalra, Ashish 1 month, 1 week ago
Hello Dave,

On 2/17/2026 4:11 PM, Dave Hansen wrote:
> On 2/17/26 12:09, Ashish Kalra wrote:
>> RMPOPT is a new instruction designed to minimize the performance
>> overhead of RMP checks for the hypervisor and non-SNP guests. 
> 
> This needs a little theory of operation for the new instruction. It
> seems like it will enable optimizations all by itself. You just call it,
> and it figures out when the CPU can optimize things. The CPU also
> figures out when the optimization must be flipped off.

Yes, I will add more theory of operation for the new instruction.

With its verify-and-report-status operation, the RMPOPT instruction reads
the RMP contents and verifies that the entire 1GB region starting at the
provided SPA is HV-owned: it checks that all RMP entries in the region are
HV-owned (i.e., not in the assigned state), updates the RMPOPT table
accordingly to indicate whether the optimization has been enabled, and
reports to software whether the optimization was successful.

An RMPUPDATE instruction that marks new pages as assigned will automatically
clear the optimization and the appropriate bit in the RMPOPT table.

The RMPOPT table is managed by a combination of software and hardware.
Software uses the RMPOPT instruction to set bits in the table, indicating
that regions of memory are entirely HV-owned. Hardware automatically clears
bits in the RMPOPT table when RMP contents are changed by an RMPUPDATE
instruction.
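
The set/clear interplay above can be modeled with a small userspace sketch.
Everything here is illustrative: the function names, the bitmap layout, and
the one-assigned-bit-per-page RMP model are my own simplification, not the
architectural RMP or RMPOPT table formats.

```c
/* Toy model of the RMPOPT table interplay: software sets a region's bit
 * only after verifying every RMP entry is HV-owned; an RMPUPDATE that
 * assigns a page automatically clears the region's bit. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define GB           (1ULL << 30)
#define PAGE_SZ      4096ULL
#define PAGES_PER_GB (GB / PAGE_SZ)   /* 262144 RMP entries per 1GB region */
#define NR_REGIONS   8                /* model 8GB of system RAM */

static bool rmp_assigned[NR_REGIONS][PAGES_PER_GB]; /* toy RMP: assigned bit only */
static uint8_t rmpopt_table[NR_REGIONS];            /* 1 bit per region (a byte here) */

/* RMPOPT "verify and report status": enable the optimization for a region
 * only if every RMP entry in it is HV-owned (not assigned). */
static bool rmpopt_set(unsigned int region)
{
	for (uint64_t i = 0; i < PAGES_PER_GB; i++)
		if (rmp_assigned[region][i])
			return false;   /* optimization not enabled, reported to software */
	rmpopt_table[region] = 1;
	return true;
}

/* RMPUPDATE marking a page assigned: hardware clears the region's bit. */
static void rmpupdate_assign(uint64_t spa)
{
	unsigned int region = spa / GB;

	rmp_assigned[region][(spa % GB) / PAGE_SZ] = true;
	rmpopt_table[region] = 0;       /* automatic clearing on RMP change */
}
```

The point of the model is the invariant: a region's RMPOPT bit is only ever
set while every RMP entry in it is HV-owned, and any RMPUPDATE that breaks
that invariant clears the bit itself.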

> 
> That's not awful.
> 
> To be honest, though, I think this is misdesigned. Shouldn't the CPU
> *boot* in a state where it is optimized? Why should software have to
> tell it that coming out of reset, there is no SEV-SNP memory?

When the CPU boots, RMP checks are not done, so the CPU does boot in a
state where it is optimized.

RMP checks are not enabled until SEV-SNP is enabled, which happens during
kernel boot (as part of iommu_snp_enable() -> snp_rmptable_init()).

Once SNP is enabled as part of kernel boot, the hypervisor and non-SNP guests
are subject to RMP checks on writes to protect the integrity of SEV-SNP guest
memory.

Therefore, we need to enable these RMP optimizations after SNP has been
enabled, to indicate which 1GB regions of memory are known not to contain
any SEV-SNP guest memory.

I will add the above details to the cover letter for the next revision of this
patch series.

Thanks,
Ashish
Re: [PATCH 0/6] Add RMPOPT support.
Posted by Dave Hansen 1 month, 1 week ago
On 2/17/26 20:12, Kalra, Ashish wrote:
>> That's not awful.
>>
>> To be honest, though, I think this is misdesigned. Shouldn't the CPU
>> *boot* in a state where it is optimized? Why should software have to
>> tell it that coming out of reset, there is no SEV-SNP memory?
> When the CPU boots, RMP checks are not done, so the CPU does boot in a
> state where it is optimized.
> 
> RMP checks are not enabled until SEV-SNP is enabled, which happens during
> kernel boot (as part of iommu_snp_enable() -> snp_rmptable_init()).
> 
> Once SNP is enabled as part of kernel boot, the hypervisor and non-SNP guests
> are subject to RMP checks on writes to protect the integrity of SEV-SNP guest
> memory.
> 
> Therefore, we need to enable these RMP optimizations after SNP has been
> enabled, to indicate which 1GB regions of memory are known not to contain
> any SEV-SNP guest memory.

They are known not to contain any SEV-SNP guest memory at the moment
snp_rmptable_init() finishes, no?
Re: [PATCH 0/6] Add RMPOPT support.
Posted by Kalra, Ashish 1 month, 1 week ago

On 2/18/2026 9:03 AM, Dave Hansen wrote:
> On 2/17/26 20:12, Kalra, Ashish wrote:
>>> That's not awful.
>>>
>>> To be honest, though, I think this is misdesigned. Shouldn't the CPU
>>> *boot* in a state where it is optimized? Why should software have to
>>> tell it that coming out of reset, there is no SEV-SNP memory?
>> When the CPU boots, RMP checks are not done, so the CPU does boot in a
>> state where it is optimized.
>>
>> RMP checks are not enabled until SEV-SNP is enabled, which happens during
>> kernel boot (as part of iommu_snp_enable() -> snp_rmptable_init()).
>>
>> Once SNP is enabled as part of kernel boot, the hypervisor and non-SNP guests
>> are subject to RMP checks on writes to protect the integrity of SEV-SNP guest
>> memory.
>>
>> Therefore, we need to enable these RMP optimizations after SNP has been
>> enabled, to indicate which 1GB regions of memory are known not to contain
>> any SEV-SNP guest memory.
> 
> They are known not to contain any SEV-SNP guest memory at the moment
> snp_rmptable_init() finishes, no?

Yes, but RMP checks are still performed and they affect performance.

Testing a bit in the per-CPU RMPOPT table to avoid RMP checks significantly
improves performance.

Thanks,
Ashish
Re: [PATCH 0/6] Add RMPOPT support.
Posted by Dave Hansen 1 month, 1 week ago
On 2/18/26 09:03, Kalra, Ashish wrote:
>> They are known not to contain any SEV-SNP guest memory at the
>> moment snp_rmptable_init() finishes, no?
> Yes, but RMP checks are still performed and they affect performance.
> 
> Testing a bit in the per-CPU RMPOPT table to avoid RMP checks
> significantly improves performance.

Sorry, Ashish, I don't think I'm explaining myself very well. Let me try
again, please.

First, my goal here is to ensure that the system as a whole has good
performance, with minimal kernel code, and in the most common
configurations.

I would wager that the most common SEV-SNP configuration in the whole
world is a system that has booted, enabled SEV-SNP, and has never run an
SEV-SNP guest. If it's not *the* most common, it's certainly going to be
common enough to care about deeply.

Do you agree?

If you agree, I hope we can also agree that a "SNP enabled but never ran
a guest" state is deserving of good performance with minimal kernel code.

My assumption (which is maybe a bad one) is that there is a natural
point when SEV-SNP is enabled on the system when the system as a whole
can easily assert that no SEV-SNP guest has ever run. I'm assuming that
there is *a* point where, for instance, the RMP table gets atomically
flipped from being unprotected to being protected. At that point, its
state *must* be known. It must also be naturally obvious that no guest
has had a chance to run at this point.

If that point can be leveraged, and the RMPOPT optimization can be
applied at SEV-SNP enabled time, then an important SEV-SNP configuration
would be optimized by default and with zero or little kernel code needed
to drive it.

To me, that seems like a valuable goal.

Do you agree?
Re: [PATCH 0/6] Add RMPOPT support.
Posted by Kalra, Ashish 1 month, 1 week ago
Hello Dave,

On 2/18/2026 11:15 AM, Dave Hansen wrote:
> On 2/18/26 09:03, Kalra, Ashish wrote:
>>> They are known not to contain any SEV-SNP guest memory at the
>>> moment snp_rmptable_init() finishes, no?
>> Yes, but RMP checks are still performed and they affect performance.
>>
>> Testing a bit in the per-CPU RMPOPT table to avoid RMP checks
>> significantly improves performance.
> 
> Sorry, Ashish, I don't think I'm explaining myself very well. Let me try
> again, please.
> 
> First, my goal here is to ensure that the system as a whole has good
> performance, with minimal kernel code, and in the most common
> configurations.
> 
> I would wager that the most common SEV-SNP configuration in the whole
> world is a system that has booted, enabled SEV-SNP, and has never run an
> SEV-SNP guest. If it's not *the* most common, it's certainly going to be
> common enough to care about deeply.
> 
> Do you agree?

Yes.

> 
> If you agree, I hope we can also agree that a "SNP enabled but never ran
> a guest" state is deserving of good performance with minimal kernel code.
> 
> My assumption (which is maybe a bad one) is that there is a natural
> point when SEV-SNP is enabled on the system when the system as a whole
> can easily assert that no SEV-SNP guest has ever run. I'm assuming that
> there is *a* point where, for instance, the RMP table gets atomically
> flipped from being unprotected to being protected. At that point, its
> state *must* be known. It must also be naturally obvious that no guest
> has had a chance to run at this point.
> 
> If that point can be leveraged, and the RMPOPT optimization can be
> applied at SEV-SNP enabled time, then an important SEV-SNP configuration
> would be optimized by default and with zero or little kernel code needed
> to drive it.
> 
> To me, that seems like a valuable goal.
> 
> Do you agree?

Now, the RMP gets protected at the *same* point where SNP is enabled and RMP
checking starts, and that is the same point at which the RMPOPT optimizations
are enabled with this patch series.

I believe you are suggesting that the hardware do this as part of SNP
enablement, but that isn't how it is implemented: it would take too long
(in CPU terms) for a single WRMSR, and that is not supported.

Also, if the RMP has been allocated, that means you are going to be running
SNP guests; otherwise you wouldn't have allocated the RMP and enabled SNP
in the BIOS.

The RMPOPT feature addresses the RMP checks associated with non-SNP guests
and the hypervisor itself. In theory, a cloud provider with good memory
placement for guests can benefit even while launching/running SNP guests.

We can simplify this initial series to just use the RMPOPT feature to enable
RMP optimizations for 0 to 2TB across the system, and then support larger
systems in a follow-on series.
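
As a side note on sizing, covering 0 to 2TB at 1GB granularity is a small
table if the RMPOPT table is one bit per region (an assumption based on the
per-CPU bit-test description earlier in the thread; the helpers below are
hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define GB (1ULL << 30)
#define TB (1ULL << 40)

/* Number of 1GB regions needed to cover [0, limit). */
static uint64_t rmpopt_nr_regions(uint64_t limit)
{
	return (limit + GB - 1) / GB;
}

/* Bitmap bytes for one RMPOPT table covering that range, 1 bit per region. */
static uint64_t rmpopt_table_bytes(uint64_t limit)
{
	return (rmpopt_nr_regions(limit) + 7) / 8;
}
```

2TB splits into 2048 regions, so a table covering that range costs only 256
bytes per CPU under this assumption.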

That will address your concern about performing the RMPOPT optimizations at
SEV-SNP enable time, so that this important SEV-SNP configuration is
optimized by default with little kernel code needed to drive it.

Thanks,
Ashish