[PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation

huangy81@chinatelecom.cn posted 10 patches 2 years, 11 months ago
Maintainers: Paolo Bonzini <pbonzini@redhat.com>, Juan Quintela <quintela@redhat.com>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Eric Blake <eblake@redhat.com>, Markus Armbruster <armbru@redhat.com>, Thomas Huth <thuth@redhat.com>, Laurent Vivier <lvivier@redhat.com>
[PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation
Posted by huangy81@chinatelecom.cn 2 years, 11 months ago
From: Peter Xu <peterx@redhat.com>

It's possible that we try to reap the dirty ring of a vcpu that is still
being created, because the vcpu is put onto the list (visible to
CPU_FOREACH) before its structures are initialized.  In this case:

x86_cpu_realizefn
    cpu_exec_realizefn
        cpu_list_add      <---- can be probed by CPU_FOREACH
    qemu_init_vcpu
        cpus_accel->create_vcpu_thread(cpu);
            kvm_init_vcpu
                map kvm_dirty_gfns  <--- kvm_dirty_gfns valid
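
The reaping side walks that list, roughly like this (as
kvm_dirty_ring_reap_locked() does today):

    CPU_FOREACH(cpu) {
        total += kvm_dirty_ring_reap_one(s, cpu);
    }

so it can observe the vcpu before kvm_dirty_gfns is mapped.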

Don't try to reap the dirty ring of a vcpu that is still being created, or
it will crash.

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2124756
Reported-by: Xiaohui Li <xiaohli@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 accel/kvm/kvm-all.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 9b26582655..47483cdfa0 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -685,6 +685,15 @@ static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
     uint32_t ring_size = s->kvm_dirty_ring_size;
     uint32_t count = 0, fetch = cpu->kvm_fetch_index;
 
+    /*
+     * It's possible that we race with the vcpu creation code, where the
+     * vcpu has been put onto the vcpu list but its dirty ring structures
+     * are not yet initialized.  If so, skip it.
+     */
+    if (!cpu->created) {
+        return 0;
+    }
+
     assert(dirty_gfns && ring_size);
     trace_kvm_dirty_ring_reap_vcpu(cpu->cpu_index);
 
-- 
2.17.1
Re: [PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation
Posted by Paolo Bonzini 2 years, 10 months ago
On 2/16/23 17:18, huangy81@chinatelecom.cn wrote:
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index 9b26582655..47483cdfa0 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -685,6 +685,15 @@ static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
>       uint32_t ring_size = s->kvm_dirty_ring_size;
>       uint32_t count = 0, fetch = cpu->kvm_fetch_index;
>   
> +    /*
> +     * It's possible that we race with the vcpu creation code, where the
> +     * vcpu has been put onto the vcpu list but its dirty ring structures
> +     * are not yet initialized.  If so, skip it.
> +     */
> +    if (!cpu->created) {
> +        return 0;
> +    }
> +

Is there a lock that protects cpu->created?

If you don't want to use a lock you need to use qatomic_load_acquire
together with

diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index fed20ffb5dd2..15b64e7f4592 100644
--- a/softmmu/cpus.c
+++ b/softmmu/cpus.c
@@ -525,7 +525,7 @@ void qemu_cond_timedwait_iothread(QemuCond *cond, int ms)
  /* signal CPU creation */
  void cpu_thread_signal_created(CPUState *cpu)
  {
-    cpu->created = true;
+    qatomic_store_release(&cpu->created, true);
      qemu_cond_signal(&qemu_cpu_cond);
  }
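
The reader side would then pair with it; a minimal sketch, assuming the
check sits at the top of kvm_dirty_ring_reap_one():

    /* Pairs with qatomic_store_release() in cpu_thread_signal_created() */
    if (!qatomic_load_acquire(&cpu->created)) {
        return 0;
    }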
  

Paolo
Re: [PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation
Posted by Peter Xu 2 years, 10 months ago
Hi, Paolo!

On Tue, Apr 04, 2023 at 03:32:38PM +0200, Paolo Bonzini wrote:
> On 2/16/23 17:18, huangy81@chinatelecom.cn wrote:
> > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > index 9b26582655..47483cdfa0 100644
> > --- a/accel/kvm/kvm-all.c
> > +++ b/accel/kvm/kvm-all.c
> > @@ -685,6 +685,15 @@ static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
> >       uint32_t ring_size = s->kvm_dirty_ring_size;
> >       uint32_t count = 0, fetch = cpu->kvm_fetch_index;
> > +    /*
> > +     * It's possible that we race with the vcpu creation code, where the
> > +     * vcpu has been put onto the vcpu list but its dirty ring structures
> > +     * are not yet initialized.  If so, skip it.
> > +     */
> > +    if (!cpu->created) {
> > +        return 0;
> > +    }
> > +
> 
> Is there a lock that protects cpu->created?
> 
> If you don't want to use a lock you need to use qatomic_load_acquire
> together with
> 
> diff --git a/softmmu/cpus.c b/softmmu/cpus.c
> index fed20ffb5dd2..15b64e7f4592 100644
> --- a/softmmu/cpus.c
> +++ b/softmmu/cpus.c
> @@ -525,7 +525,7 @@ void qemu_cond_timedwait_iothread(QemuCond *cond, int ms)
>  /* signal CPU creation */
>  void cpu_thread_signal_created(CPUState *cpu)
>  {
> -    cpu->created = true;
> +    qatomic_store_release(&cpu->created, true);
>      qemu_cond_signal(&qemu_cpu_cond);
>  }

Makes sense.

While looking at this race, I also found another relevant issue on the
destroy side, where we flip "vcpu->created" only after destroying the
vcpu.  IIUC it means the same issue can occur when a vcpu is unplugged?

Meanwhile I think the memory ordering trick won't work there, because to
use it we'd first need to clear created before the destroy:

-    kvm_destroy_vcpu(cpu);
     cpu_thread_signal_destroyed(cpu);
+    kvm_destroy_vcpu(cpu);

And even if we order the operations, we still cannot assume the data is
safe to access just because created==true.

Maybe we (unfortunately) need a per-vcpu mutex to protect both
cases?
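
Something like this, perhaps (a rough sketch of the reap caller; the
dirty_ring_lock field is made up):

    /* Hypothetical per-vcpu lock guarding dirty ring setup/teardown */
    qemu_mutex_lock(&cpu->dirty_ring_lock);
    if (cpu->created && cpu->kvm_dirty_gfns) {
        total += kvm_dirty_ring_reap_one(s, cpu);
    }
    qemu_mutex_unlock(&cpu->dirty_ring_lock);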

Thanks,

-- 
Peter Xu
Re: [PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation
Posted by Paolo Bonzini 2 years, 10 months ago
On Tue, Apr 4, 2023 at 16:11, Peter Xu <peterx@redhat.com> wrote:

> Hi, Paolo!
>
> On Tue, Apr 04, 2023 at 03:32:38PM +0200, Paolo Bonzini wrote:
> > On 2/16/23 17:18, huangy81@chinatelecom.cn wrote:
> > > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > > index 9b26582655..47483cdfa0 100644
> > > --- a/accel/kvm/kvm-all.c
> > > +++ b/accel/kvm/kvm-all.c
> > > @@ -685,6 +685,15 @@ static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
> > >       uint32_t ring_size = s->kvm_dirty_ring_size;
> > >       uint32_t count = 0, fetch = cpu->kvm_fetch_index;
> > > +    /*
> > > +     * It's possible that we race with the vcpu creation code, where the
> > > +     * vcpu has been put onto the vcpu list but its dirty ring structures
> > > +     * are not yet initialized.  If so, skip it.
> > > +     */
> > > +    if (!cpu->created) {
> > > +        return 0;
> > > +    }
> > > +
> >
> > Is there a lock that protects cpu->created?
> >
> > If you don't want to use a lock you need to use qatomic_load_acquire
> > together with
> >
> > diff --git a/softmmu/cpus.c b/softmmu/cpus.c
> > index fed20ffb5dd2..15b64e7f4592 100644
> > --- a/softmmu/cpus.c
> > +++ b/softmmu/cpus.c
> > @@ -525,7 +525,7 @@ void qemu_cond_timedwait_iothread(QemuCond *cond, int ms)
> >  /* signal CPU creation */
> >  void cpu_thread_signal_created(CPUState *cpu)
> >  {
> > -    cpu->created = true;
> > +    qatomic_store_release(&cpu->created, true);
> >      qemu_cond_signal(&qemu_cpu_cond);
> >  }
>
> Makes sense.
>
> While looking at this race, I also found another relevant issue on the
> destroy side, where we flip "vcpu->created" only after destroying the
> vcpu.  IIUC it means the same issue can occur when a vcpu is unplugged?
>
> Meanwhile I think the memory ordering trick won't work there, because to
> use it we'd first need to clear created before the destroy:
>
> -    kvm_destroy_vcpu(cpu);
>      cpu_thread_signal_destroyed(cpu);
> +    kvm_destroy_vcpu(cpu);
>
> And even if we order the operations, we still cannot assume the data is
> safe to access just because created==true.
>

Yes, this would need some kind of synchronize_rcu() before clearing
created, and rcu_read_lock() when reading the dirty ring.

(Note that synchronize_rcu() can only be used outside the BQL. The
alternative would be to defer what's after created=false using call_rcu().)
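
On the reaper side that would pair with something like this (a sketch of
the caller; it assumes the destroy path clears created and only tears the
ring down after a grace period):

    rcu_read_lock();
    if (!qatomic_load_acquire(&cpu->created)) {
        rcu_read_unlock();
        return 0;
    }
    count = kvm_dirty_ring_reap_one(s, cpu);
    rcu_read_unlock();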

> Maybe we (unfortunately) need a per-vcpu mutex to protect both
> cases?


If RCU can work it's obviously better, but if not then yes. It's per-CPU so
it's only about the complexity, not the overhead.

Paolo

Re: [PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation
Posted by Peter Xu 2 years, 10 months ago
On Tue, Apr 04, 2023 at 06:08:41PM +0200, Paolo Bonzini wrote:
> On Tue, Apr 4, 2023 at 16:11, Peter Xu <peterx@redhat.com> wrote:
> 
> > Hi, Paolo!
> >
> > On Tue, Apr 04, 2023 at 03:32:38PM +0200, Paolo Bonzini wrote:
> > > On 2/16/23 17:18, huangy81@chinatelecom.cn wrote:
> > > > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > > > index 9b26582655..47483cdfa0 100644
> > > > --- a/accel/kvm/kvm-all.c
> > > > +++ b/accel/kvm/kvm-all.c
> > > > @@ -685,6 +685,15 @@ static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
> > > >       uint32_t ring_size = s->kvm_dirty_ring_size;
> > > >       uint32_t count = 0, fetch = cpu->kvm_fetch_index;
> > > > +    /*
> > > > +     * It's possible that we race with the vcpu creation code, where the
> > > > +     * vcpu has been put onto the vcpu list but its dirty ring structures
> > > > +     * are not yet initialized.  If so, skip it.
> > > > +     */
> > > > +    if (!cpu->created) {
> > > > +        return 0;
> > > > +    }
> > > > +
> > >
> > > Is there a lock that protects cpu->created?
> > >
> > > If you don't want to use a lock you need to use qatomic_load_acquire
> > > together with
> > >
> > > diff --git a/softmmu/cpus.c b/softmmu/cpus.c
> > > index fed20ffb5dd2..15b64e7f4592 100644
> > > --- a/softmmu/cpus.c
> > > +++ b/softmmu/cpus.c
> > > @@ -525,7 +525,7 @@ void qemu_cond_timedwait_iothread(QemuCond *cond, int ms)
> > >  /* signal CPU creation */
> > >  void cpu_thread_signal_created(CPUState *cpu)
> > >  {
> > > -    cpu->created = true;
> > > +    qatomic_store_release(&cpu->created, true);
> > >      qemu_cond_signal(&qemu_cpu_cond);
> > >  }
> >
> > Makes sense.
> >
> > While looking at this race, I also found another relevant issue on the
> > destroy side, where we flip "vcpu->created" only after destroying the
> > vcpu.  IIUC it means the same issue can occur when a vcpu is unplugged?
> >
> > Meanwhile I think the memory ordering trick won't work there, because to
> > use it we'd first need to clear created before the destroy:
> >
> > -    kvm_destroy_vcpu(cpu);
> >      cpu_thread_signal_destroyed(cpu);
> > +    kvm_destroy_vcpu(cpu);
> >
> > And even if we order the operations, we still cannot assume the data is
> > safe to access just because created==true.
> >
> 
> Yes, this would need some kind of synchronize_rcu() before clearing
> created, and rcu_read_lock() when reading the dirty ring.
> 
> (Note that synchronize_rcu() can only be used outside the BQL. The
> alternative would be to defer what's after created=false using call_rcu().)
> 
> > Maybe we (unfortunately) need a per-vcpu mutex to protect both
> > cases?
> 
> 
> If RCU can work it's obviously better, but if not then yes. It's per-CPU so
> it's only about the complexity, not the overhead.

Oh... I just noticed that both vcpu creation and destruction require the
BQL, while right now dirty ring reaping also requires the BQL (taken by
all callers of kvm_dirty_ring_reap())... so I assume even the current
patch is already race-free?
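
If we rely on that, it might be worth documenting with an assertion at
the top of kvm_dirty_ring_reap_one(), e.g. (a sketch;
qemu_mutex_iothread_locked() is the existing BQL predicate):

    /* Both cpu->created flips and all reap callers hold the BQL */
    assert(qemu_mutex_iothread_locked());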

I'm not sure whether it's ideal, though.  I think holding the BQL at
least makes sure there are no concurrent memory slot updates, so the slot
IDs stay stable during dirty ring reaping, but I can't remember the
details.  However, that seems to be a separate topic to discuss.

Thanks,

-- 
Peter Xu
Re: [PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation
Posted by Paolo Bonzini 2 years, 10 months ago
On 4/4/23 18:36, Peter Xu wrote:
> Oh... I just noticed that both vcpu creation and destruction require the
> BQL, while right now dirty ring reaping also requires the BQL (taken by
> all callers of kvm_dirty_ring_reap())... so I assume even the current
> patch is already race-free?

Oh, indeed!  Queued then, thanks.

Thanks,

Paolo