From: Paul Durrant <pdurrant@amazon.com>
The following text from the original cover letter still serves as an
introduction to the series:
"Currently we treat the shared_info page as guest memory and the VMM
informs KVM of its location using a GFN. However it is not guest memory as
such; it's an overlay page. So we pointlessly invalidate and re-cache a
mapping to the *same page* of memory every time the guest requests that
shared_info be mapped into its address space. Let's avoid doing that by
modifying the pfncache code to allow activation using a fixed userspace HVA
as well as a GPA."
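To make the intended usage concrete, here is a rough VMM-side sketch of
the new path. This is not code taken from the series: the
KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA name and the 'hva' union field are
assumptions based on the patch titles below, it needs the uAPI headers
from this series to build, and the documentation patch has the
authoritative definitions.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hand KVM the fixed userspace address of the shared_info overlay page. */
static int set_shared_info_by_hva(int vm_fd, void *shinfo)
{
	struct kvm_xen_hvm_attr attr = {
		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA,
		.u.shared_info.hva = (uintptr_t)shinfo,
	};

	/*
	 * Only valid if KVM_CHECK_EXTENSION(KVM_CAP_XEN_HVM) advertises
	 * KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA; otherwise fall back to the
	 * existing KVM_XEN_ATTR_TYPE_SHARED_INFO attribute with a GFN.
	 */
	return ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &attr);
}

The GFN-based attribute is still accepted, so VMMs that do not care
about the overlay-page semantics need not change anything.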
This version of the series is functionally the same as version 6. I have
simply added David Woodhouse's R-b to patch 11 to indicate that he has
now fully reviewed the series.
Paul Durrant (11):
KVM: pfncache: add a map helper function
KVM: pfncache: add a mark-dirty helper
KVM: pfncache: add a helper to get the gpa
KVM: pfncache: base offset check on khva rather than gpa
KVM: pfncache: allow a cache to be activated with a fixed (userspace)
HVA
KVM: xen: allow shared_info to be mapped by fixed HVA
KVM: xen: allow vcpu_info to be mapped by fixed HVA
KVM: selftests / xen: map shared_info using HVA rather than GFN
KVM: selftests / xen: re-map vcpu_info using HVA rather than GPA
KVM: xen: advertize the KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA capability
KVM: xen: allow vcpu_info content to be 'safely' copied
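For reference, the kernel-facing shape of the pfncache change is
sketched below; the Xen attribute handlers (patches 6 and 7) pick
between the two activation paths depending on what the VMM supplied.
Illustrative only: the helper names follow the patch titles above and
the exact signatures are as defined in the series.

#include <linux/kvm_host.h>

/*
 * Activate the shared_info cache either by GPA (legacy) or by the fixed
 * userspace HVA of the overlay page.
 */
static int activate_shinfo_cache(struct gfn_to_pfn_cache *gpc,
				 u64 addr, bool addr_is_hva)
{
	if (addr_is_hva)
		/*
		 * Fixed HVA: there is no GPA->HVA translation to redo, so
		 * memslot changes no longer force the same page to be
		 * invalidated and re-mapped.
		 */
		return kvm_gpc_activate_hva(gpc, addr, PAGE_SIZE);

	/* Legacy path: activate by GPA, re-translated on memslot changes. */
	return kvm_gpc_activate(gpc, addr, PAGE_SIZE);
}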
Documentation/virt/kvm/api.rst | 53 +++++--
arch/x86/kvm/x86.c | 5 +-
arch/x86/kvm/xen.c | 92 +++++++++----
include/linux/kvm_host.h | 43 ++++++
include/linux/kvm_types.h | 3 +-
include/uapi/linux/kvm.h | 9 +-
.../selftests/kvm/x86_64/xen_shinfo_test.c | 59 ++++++--
virt/kvm/pfncache.c | 129 +++++++++++++-----
8 files changed, 302 insertions(+), 91 deletions(-)
---
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
--
2.39.2
On Mon, 2023-10-02 at 09:57 +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> [...]
>
> This version of the series is functionally the same as version 6. I have
> simply added David Woodhouse's R-b to patch 11 to indicate that he has
> now fully reviewed the series.

Thanks. I believe Sean is probably waiting for us to stop going back
and forth, and for the dust to settle. So for the record: I think I'm
done heckling and this is ready to go in.

Are you doing the QEMU patches or am I?
On 05/10/2023 07:41, David Woodhouse wrote:
> On Mon, 2023-10-02 at 09:57 +0000, Paul Durrant wrote:
>> [...]
>
> Thanks. I believe Sean is probably waiting for us to stop going back
> and forth, and for the dust to settle. So for the record: I think I'm
> done heckling and this is ready to go in.
>

Nudge. Sean, is there anything more I need to do on this series?

  Paul
On 05/10/2023 07:41, David Woodhouse wrote:
> On Mon, 2023-10-02 at 09:57 +0000, Paul Durrant wrote:
>> [...]
>
> Thanks. I believe Sean is probably waiting for us to stop going back
> and forth, and for the dust to settle. So for the record: I think I'm
> done heckling and this is ready to go in.
>
> Are you doing the QEMU patches or am I?
>

I'll do the QEMU changes, once the patches hit kvm/next.
On Thu, 2023-10-05 at 09:36 +0100, Paul Durrant wrote:
> On 05/10/2023 07:41, David Woodhouse wrote:
> > [...]
> >
> > Are you doing the QEMU patches or am I?
> >
>
> I'll do the QEMU changes, once the patches hit kvm/next.

Note that I disabled migration support in QEMU for emulated Xen
guests. You might want that for testing, since the reason for this work
is to enable pause/serialize workflows.

Migration does work all the way up to XenStore itself, and
https://gitlab.com/qemu-project/qemu/-/commit/766804b101d *was* tested
with migration enabled. There are also unit tests for XenStore
serialize/deserialize.

I disabled it because the PV backends on the XenBus don't have
suspend/resume support. But a guest using other emulated net/disk
devices should still be able to suspend/resume OK if we just remove the
'unmigratable' flag from xen_xenstore, I believe.
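(For reference, the flag in question is the standard VMStateDescription
'unmigratable' marker on the xen_xenstore device. Sketch only; the real
vmstate in QEMU's hw/i386/kvm/xen_xenstore.c has its own version, field
list and hooks, and may use a different identifier.)

#include "migration/vmstate.h"

static const VMStateDescription xen_xenstore_vmstate_sketch = {
    .name = "xen_xenstore",
    .unmigratable = 1,    /* drop this to let the device migrate */
    .version_id = 1,
    .minimum_version_id = 1,
};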
On 09/11/2023 10:02, David Woodhouse wrote:
> [...]
>
> I disabled it because the PV backends on the XenBus don't have
> suspend/resume support. But a guest using other emulated net/disk
> devices should still be able to suspend/resume OK if we just remove the
> 'unmigratable' flag from xen_xenstore, I believe.

Ok. Enabling suspend/resume for backends really ought not to be that
hard. The main reason for this series was to enable
pause-for-memory-reconfiguration, but I can look into
suspend/resume/migrate once I've done the necessary re-work.