Optimize XCR0/XSS loads that are currently done on every VM-Enter and VM-Exit,
by handling them outside of KVM's fastpath inner loop.
Context switching XCR0/XSS at every entry/exit is unnecessary behavior inherited
from a hack-a-fix that papered over an egregious #MC handling bug where the
kernel's #MC handler would call schedule() from atomic context. The resulting
#GP, due to trying to swap FPU state with guest XCR0/XSS loaded, was "fixed" by
loading the host values before handling #MCs that occurred in the guest.
Thankfully, the #MC mess has long since been cleaned up, so it's once again
safe to swap XCR0/XSS outside of the fastpath (but with IRQs still disabled!).
Note, Binbin's kvm_load_xfeatures() still applies cleanly on top, so I
deliberately didn't include it here (but am still planning on applying it).
v2:
- Collect reviews. [Jon, Rick]
- Fix TDX (surprisingly, not servicing host IRQs is problematic, /s). [Tony]
v1: https://lore.kernel.org/all/20251030224246.3456492-1-seanjc@google.com
Sean Christopherson (4):
KVM: SVM: Handle #MCs in guest outside of fastpath
KVM: VMX: Handle #MCs on VM-Enter/TD-Enter outside of the fastpath
KVM: x86: Load guest/host XCR0 and XSS outside of the fastpath run
loop
KVM: x86: Load guest/host PKRU outside of the fastpath run loop
arch/x86/kvm/svm/svm.c | 20 ++++++++---------
arch/x86/kvm/vmx/tdx.c | 3 ---
arch/x86/kvm/vmx/vmx.c | 20 +++++++++--------
arch/x86/kvm/x86.c | 51 +++++++++++++++++++++++++++++-------------
arch/x86/kvm/x86.h | 2 --
5 files changed, 55 insertions(+), 41 deletions(-)
base-commit: 4531ff85d9251ff429a633bdb55209d3360f39f2
--
2.52.0.rc1.455.g30608eb744-goog