Guard the call to kvm_x86_call(get_mt_mask) with an explicit check on
kvm_x86_ops.get_mt_mask so as to avoid unnecessarily calling
kvm_is_mmio_pfn(), which is moderately expensive for some backing types.
E.g. lookup_memtype() conditionally takes a system-wide spinlock if KVM
ends up calling pat_pfn_immune_to_uc_mtrr(), e.g. for DAX memory.
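
For reference, the pre-patch call site (removed, then re-added behind a
guard, in the diff below) is:

  spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn));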

While the call to kvm_x86_ops.get_mt_mask() itself is elided when the hook
is NULL, the compiler still needs to compute all of its parameters, as it
can't know at build time that the call will be squashed.  In the pre-patch
code, kvm_is_mmio_pfn() is invoked unconditionally before the static call:

<+243>: call 0xffffffff812ad880 <kvm_is_mmio_pfn>
<+248>: mov %r13,%rsi
<+251>: mov %rbx,%rdi
<+254>: movzbl %al,%edx
<+257>: call 0xffffffff81c26af0 <__SCT__kvm_x86_get_mt_mask>
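
The same effect can be seen in a standalone sketch (purely illustrative,
made-up names, not kernel code): the argument expression is evaluated
before the call is made, regardless of whether the hook does anything
with it, unless the caller checks the hook first.

  #include <stdio.h>

  /* Stand-in for an expensive helper like kvm_is_mmio_pfn(). */
  static int expensive_lookup(void)
  {
          puts("expensive_lookup() ran");
          return 0;
  }

  /* Stand-in for the optional kvm_x86_ops.get_mt_mask hook. */
  static unsigned long (*get_mt_mask)(int is_mmio) = NULL;

  /*
   * Stand-in for the static-call wrapper: with no hook installed it
   * degenerates to "return 0", but its argument has already been
   * evaluated by the caller at that point.
   */
  static unsigned long call_get_mt_mask(int is_mmio)
  {
          return get_mt_mask ? get_mt_mask(is_mmio) : 0;
  }

  int main(void)
  {
          unsigned long spte = 0;

          /* Unguarded: expensive_lookup() runs even though its result is unused. */
          spte |= call_get_mt_mask(expensive_lookup());

          /* Guarded: the argument computation is skipped entirely. */
          if (get_mt_mask)
                  spte |= call_get_mt_mask(expensive_lookup());

          return (int)spte;
  }
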
Fixes: 3fee4837ef40 ("KVM: x86: remove shadow_memtype_mask")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/spte.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cfce03d8f123..f262c380f40e 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -209,7 +209,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
if (level > PG_LEVEL_4K)
 	spte |= PT_PAGE_SIZE_MASK;
 
- spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn));
+ if (kvm_x86_ops.get_mt_mask)
+ spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn));
+
if (host_writable)
spte |= shadow_host_writable_mask;
else
--
2.49.0.1151.ga128411c76-goog