Allocate TDP MMU roots while holding mmu_lock for read, and instead use
tdp_mmu_pages_lock to guard against duplicate roots. This allows KVM to
create new roots without forcing kvm_tdp_mmu_zap_invalidated_roots() to
yield, e.g. allows vCPUs to load new roots after memslot deletion without
forcing the zap thread to detect contention and yield (or complete if the
kernel isn't preemptible).

Note, creating a new TDP MMU root as an mmu_lock reader is safe for two
reasons: (1) paths that must guarantee all roots/SPTEs are *visited* take
mmu_lock for write and so are still mutually exclusive, e.g. mmu_notifier
invalidations, and (2) paths that require all roots/SPTEs to *observe*
some given state without holding mmu_lock for write must ensure freshness
through some other means, e.g. toggling dirty logging must first wait for
SRCU readers to recognize the memslot flags change before processing
existing roots/SPTEs.
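
In rough pseudocode, the resulting flow is a double-checked locking
pattern (the helper names below are purely illustrative, see the diff for
the real code):

        read_lock(&kvm->mmu_lock);
        /* Fast path: reuse an existing valid root, no serialization. */
        root = find_usable_root(kvm, role);
        if (!root) {
                spin_lock(&kvm->arch.tdp_mmu_pages_lock);
                /* Recheck: another vCPU may have raced ahead. */
                root = find_usable_root_locked(kvm, role);
                if (!root) {
                        root = alloc_and_init_root(vcpu, role);
                        list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots);
                }
                spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
        }
        read_unlock(&kvm->mmu_lock);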

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 55 +++++++++++++++-----------------------
 1 file changed, 22 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 9a8250a14fc1..d078157e62aa 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -223,51 +223,42 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp,
         tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role);
 }
 
-static struct kvm_mmu_page *kvm_tdp_mmu_try_get_root(struct kvm_vcpu *vcpu)
-{
-        union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
-        int as_id = kvm_mmu_role_as_id(role);
-        struct kvm *kvm = vcpu->kvm;
-        struct kvm_mmu_page *root;
-
-        for_each_valid_tdp_mmu_root_yield_safe(kvm, root, as_id) {
-                if (root->role.word == role.word)
-                        return root;
-        }
-
-        return NULL;
-}
-
 int kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu)
 {
         struct kvm_mmu *mmu = vcpu->arch.mmu;
         union kvm_mmu_page_role role = mmu->root_role;
+        int as_id = kvm_mmu_role_as_id(role);
         struct kvm *kvm = vcpu->kvm;
         struct kvm_mmu_page *root;
 
         /*
-         * Check for an existing root while holding mmu_lock for read to avoid
+         * Check for an existing root before acquiring the pages lock to avoid
          * unnecessary serialization if multiple vCPUs are loading a new root.
          * E.g. when bringing up secondary vCPUs, KVM will already have created
          * a valid root on behalf of the primary vCPU.
          */
         read_lock(&kvm->mmu_lock);
-        root = kvm_tdp_mmu_try_get_root(vcpu);
-        read_unlock(&kvm->mmu_lock);
 
-        if (root)
-                goto out;
+        for_each_valid_tdp_mmu_root_yield_safe(kvm, root, as_id) {
+                if (root->role.word == role.word)
+                        goto out_read_unlock;
+        }
 
-        write_lock(&kvm->mmu_lock);
+        spin_lock(&kvm->arch.tdp_mmu_pages_lock);
 
         /*
-         * Recheck for an existing root after acquiring mmu_lock for write. It
-         * is possible a new usable root was created between dropping mmu_lock
-         * (for read) and acquiring it for write.
+         * Recheck for an existing root after acquiring the pages lock, another
+         * vCPU may have raced ahead and created a new usable root.  Manually
+         * walk the list of roots as the standard macros assume that the pages
+         * lock is *not* held.  WARN if grabbing a reference to a usable root
+         * fails, as the last reference to a root can only be put *after* the
+         * root has been invalidated, which requires holding mmu_lock for write.
          */
-        root = kvm_tdp_mmu_try_get_root(vcpu);
-        if (root)
-                goto out_unlock;
+        list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
+                if (root->role.word == role.word &&
+                    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root)))
+                        goto out_spin_unlock;
+        }
 
         root = tdp_mmu_alloc_sp(vcpu);
         tdp_mmu_init_sp(root, NULL, 0, role);
@@ -280,14 +271,12 @@ int kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu)
          * is ultimately put by kvm_tdp_mmu_zap_invalidated_roots().
          */
         refcount_set(&root->tdp_mmu_root_count, 2);
-
-        spin_lock(&kvm->arch.tdp_mmu_pages_lock);
         list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots);
-        spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
 
-out_unlock:
-        write_unlock(&kvm->mmu_lock);
-out:
+out_spin_unlock:
+        spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+out_read_unlock:
+        read_unlock(&kvm->mmu_lock);
         /*
          * Note, KVM_REQ_MMU_FREE_OBSOLETE_ROOTS will prevent entering the guest
          * and actually consuming the root if it's invalidated after dropping
--
2.43.0.275.g3460e3d667-goog

On Wed, Jan 10, 2024 at 06:00:47PM -0800, Sean Christopherson wrote:
> Allocate TDP MMU roots while holding mmu_lock for read, and instead use
> tdp_mmu_pages_lock to guard against duplicate roots. This allows KVM to
> create new roots without forcing kvm_tdp_mmu_zap_invalidated_roots() to
> yield, e.g. allows vCPUs to load new roots after memslot deletion without
> forcing the zap thread to detect contention and yield (or complete if the
> kernel isn't preemptible).
>
> [...]
>
>          read_lock(&kvm->mmu_lock);
> -        root = kvm_tdp_mmu_try_get_root(vcpu);
> -        read_unlock(&kvm->mmu_lock);
>
> -        if (root)
> -                goto out;
> +        for_each_valid_tdp_mmu_root_yield_safe(kvm, root, as_id) {
> +                if (root->role.word == role.word)
> +                        goto out_read_unlock;
> +        }
>
> -        write_lock(&kvm->mmu_lock);

It seems really complex to me...

I failed to understand why the following KVM_BUG_ON() could be avoided
without holding mmu_lock for write. I thought a valid root could be added
during zapping:

void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
{
        struct kvm_mmu_page *root;

        read_lock(&kvm->mmu_lock);

        for_each_tdp_mmu_root_yield_safe(kvm, root) {
                if (!root->tdp_mmu_scheduled_root_to_zap)
                        continue;

                root->tdp_mmu_scheduled_root_to_zap = false;
                KVM_BUG_ON(!root->role.invalid, kvm);

Thanks,
Yilun

On Tue, Feb 06, 2024, Xu Yilun wrote:
> On Wed, Jan 10, 2024 at 06:00:47PM -0800, Sean Christopherson wrote:
>
> [...]
>
> It seems really complex to me...
>
> I failed to understand why the following KVM_BUG_ON() could be avoided
> without holding mmu_lock for write. I thought a valid root could be added
> during zapping:
>
> void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
> {
>         struct kvm_mmu_page *root;
>
>         read_lock(&kvm->mmu_lock);
>
>         for_each_tdp_mmu_root_yield_safe(kvm, root) {
>                 if (!root->tdp_mmu_scheduled_root_to_zap)
>                         continue;
>
>                 root->tdp_mmu_scheduled_root_to_zap = false;
>                 KVM_BUG_ON(!root->role.invalid, kvm);

tdp_mmu_scheduled_root_to_zap is set only when mmu_lock is held for write,
i.e. it's mutually exclusive with allocating a new root.

And tdp_mmu_scheduled_root_to_zap is cleared if and only if root->role.invalid
is already set, and is only processed by kvm_tdp_mmu_zap_invalidated_roots(),
which runs under slots_lock (a mutex).

So a new, valid root can be added, but it won't have tdp_mmu_scheduled_root_to_zap
set, at least not until the current "fast zap" completes and a new one begins,
which as above requires taking mmu_lock for write.
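
For reference, the flag is set in kvm_tdp_mmu_invalidate_all_roots(),
roughly like below (an abridged sketch, not a verbatim copy of the
upstream code):

        void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
        {
                struct kvm_mmu_page *root;

                /*
                 * Holding mmu_lock for write makes this mutually exclusive
                 * with kvm_tdp_mmu_alloc_root() publishing a new root.
                 */
                lockdep_assert_held_write(&kvm->mmu_lock);

                list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
                        /*
                         * The to-zap flag and role.invalid are set together,
                         * so any root with the flag set is guaranteed to be
                         * invalid by the time it is zapped.
                         */
                        if (!root->role.invalid) {
                                root->tdp_mmu_scheduled_root_to_zap = true;
                                root->role.invalid = true;
                        }
                }
        }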

On Tue, Feb 06, 2024 at 10:10:44AM -0800, Sean Christopherson wrote:
> On Tue, Feb 06, 2024, Xu Yilun wrote:
>
> [...]
>
> tdp_mmu_scheduled_root_to_zap is set only when mmu_lock is held for write,
> i.e. it's mutually exclusive with allocating a new root.
>
> And tdp_mmu_scheduled_root_to_zap is cleared if and only if root->role.invalid
> is already set, and is only processed by kvm_tdp_mmu_zap_invalidated_roots(),
> which runs under slots_lock (a mutex).
>
> So a new, valid root can be added, but it won't have tdp_mmu_scheduled_root_to_zap
> set, at least not until the current "fast zap" completes and a new one begins,
> which as above requires taking mmu_lock for write.

It's clear to me. Thanks for the detailed explanation.