This is a variation of David's series to change tdp_mmu to a RO param[*].
The key difference is that instead of moving the TDP MMU page fault handler
to its own function, use static branches to make TDP MMU page faults (and
a few other paths) effectively branch free.
I'm not dead set against having a dedicated TDP MMU page fault handler, but
IMO it's not really better once the TDP MMU vs. shadow MMU is reduced to a
static branch, just different. The read vs. write mmu_lock is the most
visible ugliness, and that can be buried in helpers if we really want to
make the page fault handler easier on the eyes, e.g.
	direct_page_fault_mmu_lock(vcpu);

	if (is_page_fault_stale(vcpu, fault))
		goto out_unlock;

	if (is_tdp_mmu_enabled()) {
		r = kvm_tdp_mmu_map(vcpu, fault);
	} else {
		r = make_mmu_pages_available(vcpu);
		if (r)
			goto out_unlock;

		r = __direct_map(vcpu, fault);
	}

out_unlock:
	direct_page_fault_mmu_unlock(vcpu);
v4:
- Keep is_tdp_mmu_page() in patch 1.
- Collect reviews. [Isaku]
- Skip "make MMU pages available" for root allocations.
- Rework "is TDP MMU" checks to take advantage of read-only param.
- Use a static key to track TDP MMU enabling.
[*] https://lkml.kernel.org/r/20220921173546.2674386-1-dmatlack@google.com
David Matlack (7):
KVM: x86/mmu: Change tdp_mmu to a read-only parameter
KVM: x86/mmu: Move TDP MMU VM init/uninit behind tdp_mmu_enabled
KVM: x86/mmu: Grab mmu_invalidate_seq in kvm_faultin_pfn()
KVM: x86/mmu: Handle error PFNs in kvm_faultin_pfn()
KVM: x86/mmu: Avoid memslot lookup during KVM_PFN_ERR_HWPOISON
handling
KVM: x86/mmu: Handle no-slot faults in kvm_faultin_pfn()
KVM: x86/mmu: Stop needlessly making MMU pages available for TDP MMU
Sean Christopherson (4):
KVM: x86/mmu: Pivot on "TDP MMU enabled" when handling direct page
faults
KVM: x86/mmu: Pivot on "TDP MMU enabled" to check if active MMU is TDP
MMU
KVM: x86/mmu: Replace open coded usage of tdp_mmu_page with
is_tdp_mmu_page()
KVM: x86/mmu: Use static key/branches for checking if TDP MMU is
enabled
arch/x86/include/asm/kvm_host.h | 9 --
arch/x86/kvm/mmu.h | 14 ++-
arch/x86/kvm/mmu/mmu.c | 212 ++++++++++++++++++++------------
arch/x86/kvm/mmu/mmu_internal.h | 1 +
arch/x86/kvm/mmu/paging_tmpl.h | 12 +-
arch/x86/kvm/mmu/tdp_mmu.c | 13 +-
arch/x86/kvm/mmu/tdp_mmu.h | 25 +---
7 files changed, 149 insertions(+), 137 deletions(-)
base-commit: e18d6152ff0f41b7f01f9817372022df04e0d354
--
2.38.0.rc1.362.ged0d419d3c-goog
On Wed, Oct 12, 2022 at 11:17 AM Sean Christopherson <seanjc@google.com> wrote:
>
> This is a variation of David's series to change tdp_mmu to a RO param[*].
> The key difference is that instead of moving the TDP MMU page fault handler
> to its own function, use static branches to make TDP MMU page faults (and
> a few other paths) effectively branch free.
>
> I'm not dead set against having a dedicated TDP MMU page fault handler, but
> IMO it's not really better once the TDP MMU vs. shadow MMU is reduced to a
> static branch, just different. The read vs. write mmu_lock is the most
> visible ugliness, and that can be buried in helpers if we really want to
> make the page fault handler easier on the eyes, e.g.
>
> 	direct_page_fault_mmu_lock(vcpu);
>
> 	if (is_page_fault_stale(vcpu, fault))
> 		goto out_unlock;
>
> 	if (is_tdp_mmu_enabled()) {
> 		r = kvm_tdp_mmu_map(vcpu, fault);
> 	} else {
> 		r = make_mmu_pages_available(vcpu);
> 		if (r)
> 			goto out_unlock;
>
> 		r = __direct_map(vcpu, fault);
> 	}
>
> out_unlock:
> 	direct_page_fault_mmu_unlock(vcpu);
Thanks for the respin.
My preference is still separate handlers. When I am reading this code,
I only care about one path (TDP MMU or Shadow MMU, usually TDP MMU).
Having separate handlers makes it easy to read since I don't have to
care about the implementation details of the other MMU.
And more importantly (but less certain), the TDP MMU fault handler is
going to diverge further from the Shadow MMU fault handler in the near
future. i.e. There will be more and more branches in a common fault
handler, and the value of having a common fault handler diminishes.
Specifically, to support moving the TDP MMU to common code, the TDP
MMU is no longer going to topup the same mem caches as the Shadow MMU
(TDP MMU is not going to use struct kvm_mmu_page), and the TDP MMU
will probably have its own fast_page_fault() handler eventually.
If we do go the common handler route, I don't prefer the
direct_page_fault_mmu_lock/unlock() wrapper since it further obscures
the differences between the 2 MMUs.
>
> v4:
> - Keep is_tdp_mmu_page() in patch 1.
> - Collect reviews. [Isaku]
> - Skip "make MMU pages available" for root allocations.
> - Rework "is TDP MMU" checks to take advantage of read-only param.
> - Use a static key to track TDP MMU enabling.
>
> [*] https://lkml.kernel.org/r/20220921173546.2674386-1-dmatlack@google.com
>
> David Matlack (7):
> KVM: x86/mmu: Change tdp_mmu to a read-only parameter
> KVM: x86/mmu: Move TDP MMU VM init/uninit behind tdp_mmu_enabled
> KVM: x86/mmu: Grab mmu_invalidate_seq in kvm_faultin_pfn()
> KVM: x86/mmu: Handle error PFNs in kvm_faultin_pfn()
> KVM: x86/mmu: Avoid memslot lookup during KVM_PFN_ERR_HWPOISON
> handling
> KVM: x86/mmu: Handle no-slot faults in kvm_faultin_pfn()
> KVM: x86/mmu: Stop needlessly making MMU pages available for TDP MMU
>
> Sean Christopherson (4):
> KVM: x86/mmu: Pivot on "TDP MMU enabled" when handling direct page
> faults
> KVM: x86/mmu: Pivot on "TDP MMU enabled" to check if active MMU is TDP
> MMU
> KVM: x86/mmu: Replace open coded usage of tdp_mmu_page with
> is_tdp_mmu_page()
> KVM: x86/mmu: Use static key/branches for checking if TDP MMU is
> enabled
>
> arch/x86/include/asm/kvm_host.h | 9 --
> arch/x86/kvm/mmu.h | 14 ++-
> arch/x86/kvm/mmu/mmu.c | 212 ++++++++++++++++++++------------
> arch/x86/kvm/mmu/mmu_internal.h | 1 +
> arch/x86/kvm/mmu/paging_tmpl.h | 12 +-
> arch/x86/kvm/mmu/tdp_mmu.c | 13 +-
> arch/x86/kvm/mmu/tdp_mmu.h | 25 +---
> 7 files changed, 149 insertions(+), 137 deletions(-)
>
>
> base-commit: e18d6152ff0f41b7f01f9817372022df04e0d354
> --
> 2.38.0.rc1.362.ged0d419d3c-goog
>
On Thu, Oct 13, 2022, David Matlack wrote:
> On Wed, Oct 12, 2022 at 11:17 AM Sean Christopherson <seanjc@google.com> wrote:
> > I'm not dead set against having a dedicated TDP MMU page fault handler, but
> > IMO it's not really better once the TDP MMU vs. shadow MMU is reduced to a
> > static branch, just different. The read vs. write mmu_lock is the most
> > visible ugliness, and that can be buried in helpers if we really want to
> > make the page fault handler easier on the eyes, e.g.

...

> My preference is still separate handlers. When I am reading this code,
> I only care about one path (TDP MMU or Shadow MMU, usually TDP MMU).
> Having separate handlers makes it easy to read since I don't have to
> care about the implementation details of the other MMU.
>
> And more importantly (but less certain), the TDP MMU fault handler is
> going to diverge further from the Shadow MMU fault handler in the near
> future. i.e. There will be more and more branches in a common fault
> handler, and the value of having a common fault handler diminishes.
> Specifically, to support moving the TDP MMU to common code, the TDP
> MMU is no longer going to topup the same mem caches as the Shadow MMU
> (TDP MMU is not going to use struct kvm_mmu_page), and the TDP MMU
> will probably have its own fast_page_fault() handler eventually.

What if we hold off on the split for the moment, and then revisit the handler when
a common MMU is closer to reality? I agree that a separate handler makes sense
once things start diverging, but until that happens, supporting two flows instead
of one seems like it would add (minor) maintenance cost without much benefit.

> If we do go the common handler route, I don't prefer the
> direct_page_fault_mmu_lock/unlock() wrapper since it further obscures
> the differences between the 2 MMUs.

Yeah, I don't like the wrappers either.
On Thu, Oct 13, 2022 at 1:12 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Oct 13, 2022, David Matlack wrote:
> > On Wed, Oct 12, 2022 at 11:17 AM Sean Christopherson <seanjc@google.com> wrote:
> > > I'm not dead set against having a dedicated TDP MMU page fault handler, but
> > > IMO it's not really better once the TDP MMU vs. shadow MMU is reduced to a
> > > static branch, just different. The read vs. write mmu_lock is the most
> > > visible ugliness, and that can be buried in helpers if we really want to
> > > make the page fault handler easier on the eyes, e.g.
>
> ...
>
> > My preference is still separate handlers. When I am reading this code,
> > I only care about one path (TDP MMU or Shadow MMU, usually TDP MMU).
> > Having separate handlers makes it easy to read since I don't have to
> > care about the implementation details of the other MMU.
> >
> > And more importantly (but less certain), the TDP MMU fault handler is
> > going to diverge further from the Shadow MMU fault handler in the near
> > future. i.e. There will be more and more branches in a common fault
> > handler, and the value of having a common fault handler diminishes.
> > Specifically, to support moving the TDP MMU to common code, the TDP
> > MMU is no longer going to topup the same mem caches as the Shadow MMU
> > (TDP MMU is not going to use struct kvm_mmu_page), and the TDP MMU
> > will probably have its own fast_page_fault() handler eventually.
>
> What if we hold off on the split for the moment, and then revisit the handler when
> a common MMU is closer to reality? I agree that a separate handler makes sense
> once things start diverging, but until that happens, supporting two flows instead
> of one seems like it would add (minor) maintenance cost without much benefit.

Sure thing. I'll do the split as part of my series to split out the TDP
MMU to common code and we can revisit the discussion then.

> > If we do go the common handler route, I don't prefer the
> > direct_page_fault_mmu_lock/unlock() wrapper since it further obscures
> > the differences between the 2 MMUs.
>
> Yeah, I don't like the wrappers either.