Put the async #PF worker's reference to the VM's address space as soon as
the worker is done with the mm. This will allow deferring getting a
reference to the worker itself without having to track whether or not
getting a reference succeeded.
Note, if the vCPU is still alive, there is no danger of the worker getting
stuck with tearing down the host page tables, as userspace also holds a
reference (obviously), i.e. there is no risk of delaying the page-present
notification due to triggering the slow path in mmput().
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
virt/kvm/async_pf.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 876927a558ad..d5dc50318aa6 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -64,6 +64,7 @@ static void async_pf_execute(struct work_struct *work)
 	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, &locked);
 	if (locked)
 		mmap_read_unlock(mm);
+	mmput(mm);
 
 	if (IS_ENABLED(CONFIG_KVM_ASYNC_PF_SYNC))
 		kvm_arch_async_page_present(vcpu, apf);
@@ -85,8 +86,6 @@ static void async_pf_execute(struct work_struct *work)
 	trace_kvm_async_pf_completed(addr, cr2_or_gpa);
 
 	__kvm_vcpu_wake_up(vcpu);
-
-	mmput(mm);
 }
 
 static void kvm_flush_and_free_async_pf_work(struct kvm_async_pf *work)
--
2.43.0.472.g3155946c3a-goog
Sean Christopherson <seanjc@google.com> writes:

> Put the async #PF worker's reference to the VM's address space as soon as
> the worker is done with the mm. This will allow deferring getting a
> reference to the worker itself without having to track whether or not
> getting a reference succeeded.

[...]

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly
On Tue, Jan 09, 2024 at 05:15:31PM -0800, Sean Christopherson wrote:
> Put the async #PF worker's reference to the VM's address space as soon as
> the worker is done with the mm. This will allow deferring getting a
> reference to the worker itself without having to track whether or not
> getting a reference succeeded.

[...]

Reviewed-by: Xu Yilun <yilun.xu@intel.com>