[PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
Posted by Sean Christopherson 7 months ago
Fix issues with dirty ring harvesting where KVM doesn't bound the processing
of entries in any way, which allows userspace to keep KVM in a tight loop
indefinitely.

E.g.

        struct kvm_dirty_gfn *dirty_gfns = vcpu_map_dirty_ring(vcpu);

        if (fork()) {
                int r;

                /* Parent: reset the VM's dirty rings in a tight loop. */
                for (;;) {
                        r = kvm_vm_reset_dirty_ring(vcpu->vm);
                        if (r)
                                printf("RESET %d dirty ring entries\n", r);
                }
        } else {
                int i;

                /* Child: fill every entry with a valid slot and offset. */
                for (i = 0; i < test_dirty_ring_count; i++) {
                        dirty_gfns[i].slot = TEST_MEM_SLOT_INDEX;
                        dirty_gfns[i].offset = (i * 64) % host_num_pages;
                }

                /*
                 * Endlessly (re)mark all entries harvested, so the parent's
                 * reset loop never runs out of entries to process.
                 */
                for (;;) {
                        for (i = 0; i < test_dirty_ring_count; i++)
                                WRITE_ONCE(dirty_gfns[i].flags, KVM_DIRTY_GFN_F_RESET);
                }
        }

Patches 1-3 address that class of bugs.  Patches 4-6 are cleanups.
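
For reference, a rough sketch of the shape the reset loop takes once
patches 1-3 land (distilled from the patches themselves;
kvm_dirty_ring_harvest_one() is a made-up placeholder for the per-entry
harvesting logic):

        int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
                                 int *nr_entries_reset)
        {
                /* Patch 1: bound the number of entries reset per call. */
                while (likely((*nr_entries_reset) < INT_MAX)) {
                        /* Patch 2: give userspace a way out of the loop. */
                        if (signal_pending(current))
                                return -EINTR;

                        /* Patch 3: don't hog the CPU while draining the ring. */
                        cond_resched();

                        /* Placeholder for harvesting/resetting one entry. */
                        if (!kvm_dirty_ring_harvest_one(ring))
                                break;

                        (*nr_entries_reset)++;
                }

                return 0;
        }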

v3:
 - Fix typos (I apparently can't spell opportunistically to save my life).
   [Binbin, James]
 - Clean up stale comments. [Binbin]
 - Collect reviews. [James, Pankaj]
 - Add a lockdep assertion on slots_lock, along with a comment. [James]

v2:
 - https://lore.kernel.org/all/20250508141012.1411952-1-seanjc@google.com
 - Expand on comments in dirty ring harvesting code. [Yan]

v1: https://lore.kernel.org/all/20250111010409.1252942-1-seanjc@google.com

Sean Christopherson (6):
  KVM: Bound the number of dirty ring entries in a single reset at
    INT_MAX
  KVM: Bail from the dirty ring reset flow if a signal is pending
  KVM: Conditionally reschedule when resetting the dirty ring
  KVM: Check for empty mask of harvested dirty ring entries in caller
  KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
    resets
  KVM: Assert that slots_lock is held when resetting per-vCPU dirty
    rings

 include/linux/kvm_dirty_ring.h |  18 ++----
 virt/kvm/dirty_ring.c          | 111 +++++++++++++++++++++++----------
 virt/kvm/kvm_main.c            |   9 ++-
 3 files changed, 89 insertions(+), 49 deletions(-)


base-commit: 7ef51a41466bc846ad794d505e2e34ff97157f7f
-- 
2.49.0.1112.g889b7c5bd8-goog
Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
Posted by Yan Zhao 6 months, 4 weeks ago
Aside from the nits,

Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>

Hi Sean,
Do I need to rebase and repost [1]?

[1] https://lore.kernel.org/all/20241223070427.29583-1-yan.y.zhao@intel.com

Thanks
Yan
Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
Posted by Peter Xu 6 months, 4 weeks ago
On Fri, May 16, 2025 at 02:35:34PM -0700, Sean Christopherson wrote:
> Sean Christopherson (6):
>   KVM: Bound the number of dirty ring entries in a single reset at
>     INT_MAX
>   KVM: Bail from the dirty ring reset flow if a signal is pending
>   KVM: Conditionally reschedule when resetting the dirty ring
>   KVM: Check for empty mask of harvested dirty ring entries in caller
>   KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
>     resets
>   KVM: Assert that slots_lock is held when resetting per-vCPU dirty
>     rings

For the last one, I'd think it's mainly because of the memslot accesses
(or CONFIG_LOCKDEP=y should already yell on resets?).  The "serialization
of concurrent RESETs" part could be a nice side effect.  After all, the
dirty rings rely a lot on userspace to do the right thing; for example,
userspace had better also remember to reset before any slot changes, or
it's possible to collect a dirty pfn with a slot index that was already
removed and reused for a new one.
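
In userspace pseudocode, the required ordering is roughly the following
(harvest_all_dirty_rings() is an illustrative placeholder, not a real API):

        /* Mark the collected entries harvested and let KVM recycle them... */
        harvest_all_dirty_rings(vm);            /* sets KVM_DIRTY_GFN_F_RESET */
        ioctl(vm_fd, KVM_RESET_DIRTY_RINGS);

        /* ...and only then delete/reuse the memslot. */
        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);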

Maybe we could switch the sentences there in the comment of the last patch,
but not a huge deal.

Reviewed-by: Peter Xu <peterx@redhat.com>

Thanks!

-- 
Peter Xu
Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
Posted by Sean Christopherson 6 months, 4 weeks ago
On Tue, May 20, 2025, Peter Xu wrote:
> On Fri, May 16, 2025 at 02:35:34PM -0700, Sean Christopherson wrote:
> > Sean Christopherson (6):
> >   KVM: Bound the number of dirty ring entries in a single reset at
> >     INT_MAX
> >   KVM: Bail from the dirty ring reset flow if a signal is pending
> >   KVM: Conditionally reschedule when resetting the dirty ring
> >   KVM: Check for empty mask of harvested dirty ring entries in caller
> >   KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
> >     resets
> >   KVM: Assert that slots_lock is held when resetting per-vCPU dirty
> >     rings
> 
> For the last one, I'd think it's mainly because of the memslot accesses
> (or CONFIG_LOCKDEP=y should already yell on resets?).

No?  If KVM only needed to ensure stable memslot accesses, then SRCU would suffice.
It sounds like holding slots_lock may have been somewhat unintentional, but the
reason KVM can't switch to SRCU is that doing so would break ordering, not because
slots_lock is needed to protect the memslot accesses.

> The "serialization of concurrent RESETs" part could be a good side effect.
> After all, the dirty rings rely a lot on the userspace to do right things..
> for example, the userspace better also remember to reset before any slot
> changes, or it's possible to collect a dirty pfn with a slot index that was
> already removed and reused with a new one..
> 
> Maybe we could switch the sentences there in the comment of the last patch,
> but not a huge deal.
> 
> Reviewed-by: Peter Xu <peterx@redhat.com>
> 
> Thanks!
> 
> -- 
> Peter Xu
>
Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
Posted by Peter Xu 6 months, 4 weeks ago
On Tue, May 20, 2025 at 04:16:00PM -0700, Sean Christopherson wrote:
> On Tue, May 20, 2025, Peter Xu wrote:
> > On Fri, May 16, 2025 at 02:35:34PM -0700, Sean Christopherson wrote:
> > > Sean Christopherson (6):
> > >   KVM: Bound the number of dirty ring entries in a single reset at
> > >     INT_MAX
> > >   KVM: Bail from the dirty ring reset flow if a signal is pending
> > >   KVM: Conditionally reschedule when resetting the dirty ring
> > >   KVM: Check for empty mask of harvested dirty ring entries in caller
> > >   KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
> > >     resets
> > >   KVM: Assert that slots_lock is held when resetting per-vCPU dirty
> > >     rings
> > 
> > For the last one, I'd think it's mainly because of the memslot accesses
> > (or CONFIG_LOCKDEP=y should already yell on resets?).
> 
> No?  If KVM only needed to ensure stable memslot accesses, then SRCU would suffice.
> It sounds like holding slots_lock may have been somewhat unintentional, but the
> reason KVM can't switch to SRCU is that doing so would break ordering, not because
> slots_lock is needed to protect the memslot accesses.

Hmm.. doesn't what you said exactly mean a "yes"? :)

I mean, I would still expect lockdep to report this ioctl without the
slots_lock; please correct me if that's not the case.  And if using RCU is
not trivial (or not necessary either), then so far the slots_lock is still
required to make sure the memslot accesses are legal?
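
For context, the memslot accessor's check looks roughly like this (my
paraphrase of include/linux/kvm_host.h, exact details vary by version), so
lockdep is satisfied by holding either kvm->srcu or slots_lock:

        static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
        {
                as_id = array_index_nospec(as_id, KVM_MAX_NR_ADDRESS_SPACES);
                return srcu_dereference_check(kvm->memslots[as_id], &kvm->srcu,
                                              lockdep_is_held(&kvm->slots_lock) ||
                                              !refcount_read(&kvm->users_count));
        }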

-- 
Peter Xu
Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
Posted by Sean Christopherson 6 months, 4 weeks ago
On Tue, May 20, 2025, Peter Xu wrote:
> On Tue, May 20, 2025 at 04:16:00PM -0700, Sean Christopherson wrote:
> > On Tue, May 20, 2025, Peter Xu wrote:
> > > On Fri, May 16, 2025 at 02:35:34PM -0700, Sean Christopherson wrote:
> > > > Sean Christopherson (6):
> > > >   KVM: Bound the number of dirty ring entries in a single reset at
> > > >     INT_MAX
> > > >   KVM: Bail from the dirty ring reset flow if a signal is pending
> > > >   KVM: Conditionally reschedule when resetting the dirty ring
> > > >   KVM: Check for empty mask of harvested dirty ring entries in caller
> > > >   KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
> > > >     resets
> > > >   KVM: Assert that slots_lock is held when resetting per-vCPU dirty
> > > >     rings
> > > 
> > > For the last one, I'd think it's mainly because of the memslot accesses
> > > (or CONFIG_LOCKDEP=y should already yell on resets?).
> > 
> > No?  If KVM only needed to ensure stable memslot accesses, then SRCU would suffice.
> > It sounds like holding slots_lock may have been somewhat unintentional, but the
> > reason KVM can't switch to SRCU is that doing so would break ordering, not because
> > slots_lock is needed to protect the memslot accesses.
> 
> Hmm.. doesn't what you said exactly mean a "yes"? :)
> 
> I mean, I would still expect lockdep to report this ioctl without the
> slots_lock; please correct me if that's not the case.

Yes, one of slots_lock or SRCU needs to be held.

> And if using RCU is not trivial (or not necessary either), then so far the
> slots_lock is still required to make sure the memslot accesses are legal?

I don't follow this part.  The intent of the comment is to document why slots_lock
is required, which is exceptional because memslot accesses for readers are protected
by kvm->srcu.  The fact that slots_lock also protects memslots is notable only
because it makes acquiring kvm->srcu superfluous.  But grabbing kvm->srcu is still
safe/legal/ok:

diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index 1ba02a06378c..6bf4f9e2f291 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -121,18 +121,26 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
        u64 cur_offset, next_offset;
        unsigned long mask = 0;
        struct kvm_dirty_gfn *entry;
+       int idx;
 
        /*
         * Ensure concurrent calls to KVM_RESET_DIRTY_RINGS are serialized,
         * e.g. so that KVM fully resets all entries processed by a given call
-        * before returning to userspace.  Holding slots_lock also protects
-        * the various memslot accesses.
+        * before returning to userspace.
         */
        lockdep_assert_held(&kvm->slots_lock);
 
+       /*
+        * Holding slots_lock also protects the various memslot accesses, but
+        * acquiring kvm->srcu for read here is still safe, just unnecessary.
+        */
+       idx = srcu_read_lock(&kvm->srcu);
+
        while (likely((*nr_entries_reset) < INT_MAX)) {
-               if (signal_pending(current))
+               if (signal_pending(current)) {
+                       srcu_read_unlock(&kvm->srcu, idx);
                        return -EINTR;
+               }
 
                entry = &ring->dirty_gfns[ring->reset_index & (ring->size - 1)];
 
@@ -205,6 +213,8 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
        if (mask)
                kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
 
+       srcu_read_unlock(&kvm->srcu, idx);
+
        /*
         * The request KVM_REQ_DIRTY_RING_SOFT_FULL will be cleared
         * by the VCPU thread next time when it enters the guest.
--

And unless there are other behaviors that are protected by slots_lock (which is
entirely possible), serializing the processing of each ring could be done via a
dedicated mutex (for example only, the dedicated mutex could/should be per-vCPU, not
global).

This diff in particular shows why I ordered and phrased the comment the way I
did.  The blurb about protecting memslot accesses is purely a friendly reminder
to readers.  The sole reason for an assert and comment is to call out the need
for ordering.

diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index 1ba02a06378c..92ac82b535fe 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -102,6 +102,8 @@ static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
        return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
 }
 
+static DEFINE_MUTEX(per_ring_lock);
+
 int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
                         int *nr_entries_reset)
 {
@@ -121,18 +123,22 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
        u64 cur_offset, next_offset;
        unsigned long mask = 0;
        struct kvm_dirty_gfn *entry;
+       int idx;
 
        /*
         * Ensure concurrent calls to KVM_RESET_DIRTY_RINGS are serialized,
         * e.g. so that KVM fully resets all entries processed by a given call
-        * before returning to userspace.  Holding slots_lock also protects
-        * the various memslot accesses.
+        * before returning to userspace.
         */
-       lockdep_assert_held(&kvm->slots_lock);
+       guard(mutex)(&per_ring_lock);
+
+       idx = srcu_read_lock(&kvm->srcu);
 
        while (likely((*nr_entries_reset) < INT_MAX)) {
-               if (signal_pending(current))
+               if (signal_pending(current)) {
+                       srcu_read_unlock(&kvm->srcu, idx);
                        return -EINTR;
+               }
 
                entry = &ring->dirty_gfns[ring->reset_index & (ring->size - 1)];
 
@@ -205,6 +211,8 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
        if (mask)
                kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
 
+       srcu_read_unlock(&kvm->srcu, idx);
+
        /*
         * The request KVM_REQ_DIRTY_RING_SOFT_FULL will be cleared
         * by the VCPU thread next time when it enters the guest.
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 571688507204..45729a6f6451 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4908,16 +4908,12 @@ static int kvm_vm_ioctl_reset_dirty_pages(struct kvm *kvm)
        if (!kvm->dirty_ring_size)
                return -EINVAL;
 
-       mutex_lock(&kvm->slots_lock);
-
        kvm_for_each_vcpu(i, vcpu, kvm) {
                r = kvm_dirty_ring_reset(vcpu->kvm, &vcpu->dirty_ring, &cleared);
                if (r)
                        break;
        }
 
-       mutex_unlock(&kvm->slots_lock);
-
        if (cleared)
                kvm_flush_remote_tlbs(kvm);
--
Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
Posted by Peter Xu 6 months, 4 weeks ago
On Wed, May 21, 2025 at 07:50:10AM -0700, Sean Christopherson wrote:
> On Tue, May 20, 2025, Peter Xu wrote:
> > On Tue, May 20, 2025 at 04:16:00PM -0700, Sean Christopherson wrote:
> > > On Tue, May 20, 2025, Peter Xu wrote:
> > > > On Fri, May 16, 2025 at 02:35:34PM -0700, Sean Christopherson wrote:
> > > > > Sean Christopherson (6):
> > > > >   KVM: Bound the number of dirty ring entries in a single reset at
> > > > >     INT_MAX
> > > > >   KVM: Bail from the dirty ring reset flow if a signal is pending
> > > > >   KVM: Conditionally reschedule when resetting the dirty ring
> > > > >   KVM: Check for empty mask of harvested dirty ring entries in caller
> > > > >   KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
> > > > >     resets
> > > > >   KVM: Assert that slots_lock is held when resetting per-vCPU dirty
> > > > >     rings
> > > > 
> > > > For the last one, I'd think it's mainly because of the memslot accesses
> > > > (or CONFIG_LOCKDEP=y should already yell on resets?).
> > > 
> > > No?  If KVM only needed to ensure stable memslot accesses, then SRCU would suffice.
> > > It sounds like holding slots_lock may have been somewhat unintentional, but the
> > > reason KVM can't switch to SRCU is that doing so would break ordering, not because
> > > slots_lock is needed to protect the memslot accesses.
> > 
> > Hmm.. doesn't what you said exactly mean a "yes"? :)
> > 
> > I mean, I would still expect lockdep to report this ioctl without the
> > slots_lock; please correct me if that's not the case.
> 
> Yes, one of slots_lock or SRCU needs to be held.
> 
> > And if using RCU is not trivial (or not necessary either), then so far the
> > slots_lock is still required to make sure the memslot accesses are legal?
> 
> I don't follow this part.  The intent of the comment is to document why slots_lock
> is required, which is exceptional because memslot accesses for readers are protected
> by kvm->srcu.

I've always thought it's fine to take slots_lock for readers too.  RCU can
definitely be better in most cases, though.

> The fact that slots_lock also protects memslots is notable only
> because it makes acquiring kvm->srcu superfluous.  But grabbing kvm->srcu is still
> safe/legal/ok:
> 
> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> index 1ba02a06378c..6bf4f9e2f291 100644
> --- a/virt/kvm/dirty_ring.c
> +++ b/virt/kvm/dirty_ring.c
> @@ -121,18 +121,26 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
>         u64 cur_offset, next_offset;
>         unsigned long mask = 0;
>         struct kvm_dirty_gfn *entry;
> +       int idx;
>  
>         /*
>          * Ensure concurrent calls to KVM_RESET_DIRTY_RINGS are serialized,
>          * e.g. so that KVM fully resets all entries processed by a given call
> -        * before returning to userspace.  Holding slots_lock also protects
> -        * the various memslot accesses.
> +        * before returning to userspace.
>          */
>         lockdep_assert_held(&kvm->slots_lock);
>  
> +       /*
> +        * Holding slots_lock also protects the various memslot accesses, but
> +        * acquiring kvm->srcu for read here is still safe, just unnecessary.
> +        */
> +       idx = srcu_read_lock(&kvm->srcu);
> +
>         while (likely((*nr_entries_reset) < INT_MAX)) {
> -               if (signal_pending(current))
> +               if (signal_pending(current)) {
> +                       srcu_read_unlock(&kvm->srcu, idx);
>                         return -EINTR;
> +               }
>  
>                 entry = &ring->dirty_gfns[ring->reset_index & (ring->size - 1)];
>  
> @@ -205,6 +213,8 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
>         if (mask)
>                 kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
>  
> +       srcu_read_unlock(&kvm->srcu, idx);
> +
>         /*
>          * The request KVM_REQ_DIRTY_RING_SOFT_FULL will be cleared
>          * by the VCPU thread next time when it enters the guest.
> --
> 
> And unless there are other behaviors that are protected by slots_lock (which is
> entirely possible), serializing the processing of each ring could be done via a

Yes, I am not the original author, but from when I was working on it I don't
remember anything relying on that.  Still, it's possible it serializes some
operations under the hood (which would be a true side effect of using this
lock..).

> dedicated mutex (for example only, the dedicated mutex could/should be per-vCPU, not
> global).
> 
> This diff in particular shows why I ordered and phrased the comment the way I
> did.  The blurb about protecting memslot accesses is purely a friendly reminder
> to readers.  The sole reason for an assert and comment is to call out the need
> for ordering.
> 
> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> index 1ba02a06378c..92ac82b535fe 100644
> --- a/virt/kvm/dirty_ring.c
> +++ b/virt/kvm/dirty_ring.c
> @@ -102,6 +102,8 @@ static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
>         return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
>  }
>  
> +static DEFINE_MUTEX(per_ring_lock);
> +
>  int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
>                          int *nr_entries_reset)
>  {
> @@ -121,18 +123,22 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
>         u64 cur_offset, next_offset;
>         unsigned long mask = 0;
>         struct kvm_dirty_gfn *entry;
> +       int idx;
>  
>         /*
>          * Ensure concurrent calls to KVM_RESET_DIRTY_RINGS are serialized,
>          * e.g. so that KVM fully resets all entries processed by a given call
> -        * before returning to userspace.  Holding slots_lock also protects
> -        * the various memslot accesses.
> +        * before returning to userspace.
>          */
> -       lockdep_assert_held(&kvm->slots_lock);
> +       guard(mutex)(&per_ring_lock);
> +
> +       idx = srcu_read_lock(&kvm->srcu);
>  
>         while (likely((*nr_entries_reset) < INT_MAX)) {
> -               if (signal_pending(current))
> +               if (signal_pending(current)) {
> +                       srcu_read_unlock(&kvm->srcu, idx);
>                         return -EINTR;
> +               }
>  
>                 entry = &ring->dirty_gfns[ring->reset_index & (ring->size - 1)];
>  
> @@ -205,6 +211,8 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
>         if (mask)
>                 kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
>  
> +       srcu_read_unlock(&kvm->srcu, idx);
> +
>         /*
>          * The request KVM_REQ_DIRTY_RING_SOFT_FULL will be cleared
>          * by the VCPU thread next time when it enters the guest.
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 571688507204..45729a6f6451 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4908,16 +4908,12 @@ static int kvm_vm_ioctl_reset_dirty_pages(struct kvm *kvm)
>         if (!kvm->dirty_ring_size)
>                 return -EINVAL;
>  
> -       mutex_lock(&kvm->slots_lock);
> -
>         kvm_for_each_vcpu(i, vcpu, kvm) {
>                 r = kvm_dirty_ring_reset(vcpu->kvm, &vcpu->dirty_ring, &cleared);
>                 if (r)
>                         break;
>         }
>  
> -       mutex_unlock(&kvm->slots_lock);
> -
>         if (cleared)
>                 kvm_flush_remote_tlbs(kvm);
> --
> 

I think we mostly agree with each other, and I don't see anything
controversial.

It's just that for this path, using srcu may have a slight risk of breaking
what used to be serialized, as you said.  That said, I'd be surprised if
so.. even though aarch64 is normally even trickier and it now also supports
the rings.  So it seems unnecessary to switch to srcu yet, because we don't
expect any concurrent writers anyway.

So, totally no strong opinion on how the comment should be laid out in the
last patch - please feel free to ignore my request.  But I hope I stated
the fact that in the current code base the slots_lock is required to
access memslots safely when rcu isn't around.

Thanks,

-- 
Peter Xu
Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
Posted by Sean Christopherson 5 months, 3 weeks ago
On Fri, 16 May 2025 14:35:34 -0700, Sean Christopherson wrote:
> Fix issues with dirty ring harvesting where KVM doesn't bound the processing
> of entries in any way, which allows userspace to keep KVM in a tight loop
> indefinitely.
> 
> E.g.
> 
>         struct kvm_dirty_gfn *dirty_gfns = vcpu_map_dirty_ring(vcpu);
> 
> [...]

Applied to kvm-x86 dirty_ring, thanks!

[1/6] KVM: Bound the number of dirty ring entries in a single reset at INT_MAX
      https://github.com/kvm-x86/linux/commit/530a8ba71b4c
[2/6] KVM: Bail from the dirty ring reset flow if a signal is pending
      https://github.com/kvm-x86/linux/commit/49005a2a3d2a
[3/6] KVM: Conditionally reschedule when resetting the dirty ring
      https://github.com/kvm-x86/linux/commit/1333c35c4eea
[4/6] KVM: Check for empty mask of harvested dirty ring entries in caller
      https://github.com/kvm-x86/linux/commit/ee188dea1677
[5/6] KVM: Use mask of harvested dirty ring entries to coalesce dirty ring resets
      https://github.com/kvm-x86/linux/commit/e46ad851150f
[6/6] KVM: Assert that slots_lock is held when resetting per-vCPU dirty rings
      https://github.com/kvm-x86/linux/commit/614fb9d1479b

--
https://github.com/kvm-x86/kvm-unit-tests/tree/next