When establishing a "host access map", look up the gfn in the vCPU-specific
memslots, as the intent is that the mapping will be established for the
current vCPU context. Specifically, using __kvm_vcpu_map() in x86's SMM
context should create mappings based on the SMM memslots, not the non-SMM
memslots.
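For reference, the difference boils down to which set of memslots gets
searched (a simplified sketch of the generic helpers; the exact
implementations vary by kernel version and omit last-used-slot caching):

	/* Always uses address space 0, regardless of vCPU state. */
	struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
	{
		return __gfn_to_memslot(kvm_memslots(kvm), gfn);
	}

	/* Uses the memslots for the vCPU's current address space. */
	struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu,
							gfn_t gfn)
	{
		return __gfn_to_memslot(kvm_vcpu_memslots(vcpu), gfn);
	}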
Luckily, the bug is benign as x86 is the only architecture with multiple
memslot address spaces, and all of x86's usage is limited to non-SMM. The
calls in (or reachable by) {svm,vmx}_enter_smm() are made before
enter_smm() sets HF_SMM_MASK, and the calls in {svm,vmx}_leave_smm() are
made after emulator_leave_smm() clears HF_SMM_MASK.
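For context, x86's address space selection reduces to the SMM flag (again
a simplified sketch; the exact helper differs across kernel versions):

	/* x86 gives SMM its own memslot address space. */
	int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
	{
		return (vcpu->arch.hflags & HF_SMM_MASK) ? 1 : 0;
	}

i.e. so long as HF_SMM_MASK is clear when __kvm_vcpu_map() is called, both
lookups resolve to the same (non-SMM) memslots.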
Note, kvm_vcpu_unmap() already uses the vCPU-specific memslots; only the
map() side of things is broken.
Fixes: 357a18ad230f ("KVM: Kill kvm_map_gfn() / kvm_unmap_gfn() and gfn_to_pfn_cache")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
virt/kvm/kvm_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 9eca084bdcbe..afe13451ce7f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3118,7 +3118,7 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
bool writable)
{
struct kvm_follow_pfn kfp = {
- .slot = gfn_to_memslot(vcpu->kvm, gfn),
+ .slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn),
.gfn = gfn,
.flags = writable ? FOLL_WRITE : 0,
.refcounted_page = &map->pinned_page,
--
2.52.0.rc2.455.g230fcf2819-goog