When the TDP MMU is enabled, i.e. when the shadow MMU isn't used unless a
nested TDP VM is run, defer allocation of the array of hashed lists used
to track shadow MMU pages until the first shadow root is allocated.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 40 ++++++++++++++++++++++++++++++----------
1 file changed, 30 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6b9c72405860..213009cdba15 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1982,14 +1982,25 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp)
return true;
}
+static __ro_after_init HLIST_HEAD(empty_page_hash);
+
+static struct hlist_head *kvm_get_mmu_page_hash(struct kvm *kvm, gfn_t gfn)
+{
+ struct hlist_head *page_hash = READ_ONCE(kvm->arch.mmu_page_hash);
+
+ if (!page_hash)
+ return &empty_page_hash;
+
+ return &page_hash[kvm_page_table_hashfn(gfn)];
+}
+
#define for_each_valid_sp(_kvm, _sp, _list) \
hlist_for_each_entry(_sp, _list, hash_link) \
if (is_obsolete_sp((_kvm), (_sp))) { \
} else
#define for_each_gfn_valid_sp_with_gptes(_kvm, _sp, _gfn) \
- for_each_valid_sp(_kvm, _sp, \
- &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)]) \
+ for_each_valid_sp(_kvm, _sp, kvm_get_mmu_page_hash(_kvm, _gfn)) \
if ((_sp)->gfn != (_gfn) || !sp_has_gptes(_sp)) {} else
static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
@@ -2357,6 +2368,7 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
struct kvm_mmu_page *sp;
bool created = false;
+ BUG_ON(!kvm->arch.mmu_page_hash);
sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
sp = kvm_mmu_find_shadow_page(kvm, vcpu, gfn, sp_list, role);
@@ -3884,11 +3896,14 @@ static int kvm_mmu_alloc_page_hash(struct kvm *kvm)
{
typeof(kvm->arch.mmu_page_hash) h;
+ if (kvm->arch.mmu_page_hash)
+ return 0;
+
h = kcalloc(KVM_NUM_MMU_PAGES, sizeof(*h), GFP_KERNEL_ACCOUNT);
if (!h)
return -ENOMEM;
- kvm->arch.mmu_page_hash = h;
+ WRITE_ONCE(kvm->arch.mmu_page_hash, h);
return 0;
}
@@ -3911,9 +3926,13 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm)
if (kvm_shadow_root_allocated(kvm))
goto out_unlock;
+ r = kvm_mmu_alloc_page_hash(kvm);
+ if (r)
+ goto out_unlock;
+
/*
- * Check if anything actually needs to be allocated, e.g. all metadata
- * will be allocated upfront if TDP is disabled.
+ * Check if memslot metadata actually needs to be allocated, e.g. all
+ * metadata will be allocated upfront if TDP is disabled.
*/
if (kvm_memslots_have_rmaps(kvm) &&
kvm_page_track_write_tracking_enabled(kvm))
@@ -6694,12 +6713,13 @@ int kvm_mmu_init_vm(struct kvm *kvm)
INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
- r = kvm_mmu_alloc_page_hash(kvm);
- if (r)
- return r;
-
- if (tdp_mmu_enabled)
+ if (tdp_mmu_enabled) {
kvm_mmu_init_tdp_mmu(kvm);
+ } else {
+ r = kvm_mmu_alloc_page_hash(kvm);
+ if (r)
+ return r;
+ }
kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
--
2.49.0.472.ge94155a9ec-goog
On 2025-04-01 08:57:14, Sean Christopherson wrote:
> +static __ro_after_init HLIST_HEAD(empty_page_hash);
> +
> +static struct hlist_head *kvm_get_mmu_page_hash(struct kvm *kvm, gfn_t gfn)
> +{
> + struct hlist_head *page_hash = READ_ONCE(kvm->arch.mmu_page_hash);
> +
> + if (!page_hash)
> + return &empty_page_hash;
> +
> + return &page_hash[kvm_page_table_hashfn(gfn)];
> +}
> +
>
> @@ -2357,6 +2368,7 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
> struct kvm_mmu_page *sp;
> bool created = false;
>
> + BUG_ON(!kvm->arch.mmu_page_hash);
> sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
Why do we need READ_ONCE() at kvm_get_mmu_page_hash() but not here? My
understanding is that it is in kvm_get_mmu_page_hash() to avoid the compiler
doing a read tear. If so, then the same concern applies here, doesn't it?
On Tue, Apr 15, 2025, Vipin Sharma wrote:
> On 2025-04-01 08:57:14, Sean Christopherson wrote:
> > +static __ro_after_init HLIST_HEAD(empty_page_hash);
> > +
> > +static struct hlist_head *kvm_get_mmu_page_hash(struct kvm *kvm, gfn_t gfn)
> > +{
> > + struct hlist_head *page_hash = READ_ONCE(kvm->arch.mmu_page_hash);
> > +
> > + if (!page_hash)
> > + return &empty_page_hash;
> > +
> > + return &page_hash[kvm_page_table_hashfn(gfn)];
> > +}
> > +
> >
> > @@ -2357,6 +2368,7 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
> > struct kvm_mmu_page *sp;
> > bool created = false;
> >
> > + BUG_ON(!kvm->arch.mmu_page_hash);
> > sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
>
> Why do we need READ_ONCE() at kvm_get_mmu_page_hash() but not here?
We don't (need it in kvm_get_mmu_page_hash()). I suspect past me was thinking
it could be accessed without holding mmu_lock, but that's simply not true. Unless
I'm forgetting something, I'll drop the READ_ONCE() and WRITE_ONCE() in
kvm_mmu_alloc_page_hash(), and instead assert that mmu_lock is held for write.
> My understanding is that it is in kvm_get_mmu_page_hash() to avoid the compiler
> doing a read tear. If so, then the same concern applies here, doesn't it?
The intent wasn't to guard against a tear, but to instead ensure mmu_page_hash
couldn't be re-read and end up with a NULL pointer deref, e.g. if KVM set
mmu_page_hash and then nullified it because some later step failed. But if
mmu_lock is held for write, that is simply impossible.
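As an illustration of the pattern being described, here is a minimal
userspace sketch; the names are made up and a C11 relaxed atomic load
stands in for READ_ONCE(), so treat it as an analogy rather than KVM code.

#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

struct bucket { int nr_pages; };

static struct bucket empty_bucket;		/* analogue of empty_page_hash */
static _Atomic(struct bucket *) page_hash;	/* analogue of mmu_page_hash, starts NULL */

static struct bucket *get_bucket(size_t idx)
{
	/* Load the pointer exactly once; never re-read the global after the check. */
	struct bucket *h = atomic_load_explicit(&page_hash, memory_order_relaxed);

	if (!h)
		return &empty_bucket;

	return &h[idx];
}

int main(void)
{
	/* page_hash is still NULL here, so the empty fallback is returned. */
	printf("%d\n", get_bucket(0)->nr_pages);
	return 0;
}

With the value cached in a local via a single load, the NULL check and the
dereference always operate on the same snapshot of the pointer, which is
the property being described above.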
On Tue, Apr 15, 2025, Sean Christopherson wrote:
> On Tue, Apr 15, 2025, Vipin Sharma wrote:
> > On 2025-04-01 08:57:14, Sean Christopherson wrote:
> > > +static __ro_after_init HLIST_HEAD(empty_page_hash);
> > > +
> > > +static struct hlist_head *kvm_get_mmu_page_hash(struct kvm *kvm, gfn_t gfn)
> > > +{
> > > + struct hlist_head *page_hash = READ_ONCE(kvm->arch.mmu_page_hash);
> > > +
> > > + if (!page_hash)
> > > + return &empty_page_hash;
> > > +
> > > + return &page_hash[kvm_page_table_hashfn(gfn)];
> > > +}
> > > +
> > >
> > > @@ -2357,6 +2368,7 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
> > > struct kvm_mmu_page *sp;
> > > bool created = false;
> > >
> > > + BUG_ON(!kvm->arch.mmu_page_hash);
> > > sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
> >
> > Why do we need READ_ONCE() at kvm_get_mmu_page_hash() but not here?
>
> We don't (need it in kvm_get_mmu_page_hash()). I suspect past me was thinking
> it could be accessed without holding mmu_lock, but that's simply not true. Unless
> I'm forgetting something, I'll drop the READ_ONCE() and WRITE_ONCE() in
> kvm_mmu_alloc_page_hash(), and instead assert that mmu_lock is held for write.
I remembered what I was trying to do. The _writer_, kvm_mmu_alloc_page_hash(),
doesn't hold mmu_lock, and so the READ/WRITE_ONCE() is needed.
But looking at this again, there's really no point in such games. All readers
hold mmu_lock for write, so kvm_mmu_alloc_page_hash() can take mmu_lock for read
to ensure correctness. That's far easier to reason about than taking a dependency
on shadow_root_allocated.
For performance, taking mmu_lock for read is unlikely to generate contention, as
this is only reachable at runtime if the TDP MMU is enabled. And mmu_lock is
going to be taken for write anyways (to allocate the shadow root).
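For concreteness, a userspace sketch of the alternative being floated here,
with pthread primitives standing in for mmu_lock and slots_arch_lock; all
names are invented, and as the follow-up below explains, the extra locking
turns out to be unnecessary.

#include <pthread.h>
#include <stdlib.h>

#define NR_BUCKETS 4096

struct bucket { void *first; };

static pthread_rwlock_t mmu_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t slots_arch_lock = PTHREAD_MUTEX_INITIALIZER;
static struct bucket *page_hash;

/* Consumers dereference page_hash while holding mmu_lock for write. */
static int hash_is_populated(void)
{
	int populated;

	pthread_rwlock_wrlock(&mmu_lock);
	populated = page_hash != NULL;
	pthread_rwlock_unlock(&mmu_lock);
	return populated;
}

/*
 * The allocator takes mmu_lock for *read*: shared acquisition alone is
 * mutually exclusive with every consumer (they take it for write), so
 * plain accesses to page_hash suffice.  slots_arch_lock serializes
 * concurrent allocators.
 */
static int alloc_page_hash(void)
{
	int r = 0;

	pthread_mutex_lock(&slots_arch_lock);
	pthread_rwlock_rdlock(&mmu_lock);
	if (!page_hash) {
		page_hash = calloc(NR_BUCKETS, sizeof(*page_hash));
		if (!page_hash)
			r = -1;
	}
	pthread_rwlock_unlock(&mmu_lock);
	pthread_mutex_unlock(&slots_arch_lock);
	return r;
}

int main(void)
{
	if (alloc_page_hash())
		return 1;
	return hash_is_populated() ? 0 : 1;
}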
> > My understanding is that it is in kvm_get_mmu_page_hash() to avoid the compiler
> > doing a read tear. If so, then the same concern applies here, doesn't it?
>
> The intent wasn't to guard against a tear, but to instead ensure mmu_page_hash
> couldn't be re-read and end up with a NULL pointer deref, e.g. if KVM set
> mmu_page_hash and then nullified it because some later step failed. But if
> mmu_lock is held for write, that is simply impossible.
On Mon, Apr 21, 2025, Sean Christopherson wrote:
> On Tue, Apr 15, 2025, Sean Christopherson wrote:
> > On Tue, Apr 15, 2025, Vipin Sharma wrote:
> > > On 2025-04-01 08:57:14, Sean Christopherson wrote:
> > > > +static __ro_after_init HLIST_HEAD(empty_page_hash);
> > > > +
> > > > +static struct hlist_head *kvm_get_mmu_page_hash(struct kvm *kvm, gfn_t gfn)
> > > > +{
> > > > + struct hlist_head *page_hash = READ_ONCE(kvm->arch.mmu_page_hash);
> > > > +
> > > > + if (!page_hash)
> > > > + return &empty_page_hash;
> > > > +
> > > > + return &page_hash[kvm_page_table_hashfn(gfn)];
> > > > +}
> > > > +
> > > >
> > > > @@ -2357,6 +2368,7 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
> > > > struct kvm_mmu_page *sp;
> > > > bool created = false;
> > > >
> > > > + BUG_ON(!kvm->arch.mmu_page_hash);
> > > > sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
> > >
> > > Why do we need READ_ONCE() at kvm_get_mmu_page_hash() but not here?
> >
> > We don't (need it in kvm_get_mmu_page_hash()). I suspect past me was thinking
> > it could be accessed without holding mmu_lock, but that's simply not true. Unless
> > I'm forgetting something, I'll drop the READ_ONCE() and WRITE_ONCE() in
> > kvm_mmu_alloc_page_hash(), and instead assert that mmu_lock is held for write.
>
> I remembered what I was trying to do. The _writer_, kvm_mmu_alloc_page_hash(),
> doesn't hold mmu_lock, and so the READ/WRITE_ONCE() is needed.
>
> But looking at this again, there's really no point in such games. All readers
> hold mmu_lock for write, so kvm_mmu_alloc_page_hash() can take mmu_lock for read
> to ensure correctness. That's far easier to reason about than taking a dependency
> on shadow_root_allocated.
>
> For performance, taking mmu_lock for read is unlikely to generate contention, as
> this is only reachable at runtime if the TDP MMU is enabled. And mmu_lock is
> going to be taken for write anyways (to allocate the shadow root).
Wrong again. After way, way too many failed attempts (I tried some truly stupid
ideas) and staring, I finally remembered why it's a-ok to set arch.mmu_page_hash
outside of mmu_lock, and why it's a-ok for __kvm_mmu_get_shadow_page() to not use
READ_ONCE(). I guess that's my penance for not writing a decent changelog or
comments.
Setting the list outside of mmu_lock is safe, as concurrent readers must hold
mmu_lock in some capacity, shadow pages can only be added (or removed) from the
list when mmu_lock is held for write, and tasks that are creating a shadow root
are serialized by slots_arch_lock. I.e. it's impossible for the list to become
non-empty until all readers go away, and so readers are guaranteed to see an empty
list even if they make multiple calls to kvm_get_mmu_page_hash() in a single
mmu_lock critical section.
__kvm_mmu_get_shadow_page() doesn't need READ_ONCE() because it's only reachable
after the task has gone through mmu_first_shadow_root_alloc(), i.e. access to
mmu_page_hash in that context is fully serialized by slots_arch_lock.
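A minimal userspace sketch of the publish-then-use ordering being relied
on; the names are invented and C11 release/acquire stands in for the
kernel's smp_store_release()/smp_load_acquire(), so this is an analogy
rather than KVM code.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

#define NR_BUCKETS 4096

struct bucket { void *first; };

static pthread_mutex_t slots_arch_lock = PTHREAD_MUTEX_INITIALIZER;
static struct bucket *page_hash;	/* plain pointer, no READ_ONCE() needed */
static atomic_bool root_allocated;	/* analogue of shadow_root_allocated */

/* Analogue of mmu_first_shadow_root_alloc(): allocate, then publish. */
static int first_root_alloc(void)
{
	int r = 0;

	pthread_mutex_lock(&slots_arch_lock);
	if (!atomic_load_explicit(&root_allocated, memory_order_acquire)) {
		page_hash = calloc(NR_BUCKETS, sizeof(*page_hash));
		if (!page_hash)
			r = -1;
		else
			atomic_store_explicit(&root_allocated, true,
					      memory_order_release);
	}
	pthread_mutex_unlock(&slots_arch_lock);
	return r;
}

/*
 * Analogue of __kvm_mmu_get_shadow_page(): only reachable after
 * first_root_alloc() has succeeded on this path, so the plain load of
 * page_hash is ordered after the allocation and cannot observe NULL.
 */
static struct bucket *get_shadow_page_bucket(unsigned long idx)
{
	return &page_hash[idx % NR_BUCKETS];
}

int main(void)
{
	if (first_root_alloc())
		return 1;
	return get_shadow_page_bucket(42) ? 0 : 1;
}

Any task that reaches the "get shadow page" path has either allocated the
hash itself or observed the flag with acquire semantics, which pairs with
the release store after the allocation, so the plain load of the pointer
is safe.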
> > > My understanding is that it is in kvm_get_mmu_page_hash() to avoid the compiler
> > > doing a read tear. If so, then the same concern applies here, doesn't it?
> >
> > The intent wasn't to guard against a tear, but to instead ensure mmu_page_hash
> > couldn't be re-read and end up with a NULL pointer deref, e.g. if KVM set
> > mmu_page_hash and then nullified it because some later step failed. But if
> > mmu_lock is held for write, that is simply impossible.
So yes, you were 100% correct, the only reason for WRITE_ONCE/READ_ONCE is to
ensure the compiler doesn't do something stupid and tear the accesses.