These are the QEMU-side patches that work with the associated Xen
patches to enable vNVDIMM support for Xen HVM domains. Xen relies on
QEMU to build the guest NFIT and NVDIMM namespace devices, and to
allocate guest address space for vNVDIMM devices.

All patches can also be found at
  Xen:  https://github.com/hzzhan9/xen.git nvdimm-rfc-v4
  QEMU: https://github.com/hzzhan9/qemu.git xen-nvdimm-rfc-v4

RFC v3 can be found at
https://lists.nongnu.org/archive/html/qemu-devel/2017-09/msg02406.html

Changes in v4:
* The primary change in this version is to use the existing fw_cfg and
  BIOSLinkerLoader interface to pass ACPI to the Xen guest, rather than
  introducing a Xen-specific mechanism. (Patch 5-10)

  The following Xen-specific parts are still left in the ACPI code:
  (1) (Patch 6) xen_acpi_build() is called in acpi_build() to build only
      the ACPI tables required by Xen guests; the subsequent code path
      in acpi_build() is bypassed.
  (2) (Patch 8) Add Xen-specific functions to access DSM memory, because
      the existing cpu_physical_memory_rw() does not work on Xen.
  (3) (Patch 9) Implement a workaround for the different AML integer
      widths between ACPI 1.0 (QEMU) and ACPI 2.0 (Xen).

Patch 1 is a trivial code cleanup.

Patch 2-3 add a memory backend dedicated to Xen usage and a hotplug
memory region for Xen guests, in order to make the existing NVDIMM
device plugging path work on Xen.

Patch 4 avoids dereferencing a NULL pointer to non-existent label data,
as the Xen-side support for labels is not implemented yet.

Patch 5-10 enable building ACPI tables and passing them to Xen HVM
domains.
Haozhong Zhang (10):
  [01/10] xen-hvm: remove a trailing space
  [02/10] xen-hvm: create the hotplug memory region on Xen
  [03/10] hostmem-xen: add a host memory backend for Xen
  [04/10] nvdimm: do not intiailize nvdimm->label_data if label size is zero
  [05/10] xen-hvm: initialize fw_cfg interface
  [06/10] hw/acpi-build, xen-hvm: introduce a Xen-specific ACPI builder
  [07/10] xen-hvm: add functions to copy data from/to HVM memory
  [08/10] nvdimm acpi: add functions to access DSM memory on Xen
  [09/10] nvdimm acpi: add compatibility for 64-bit integer in ACPI 2.0 and later
  [10/10] xen-hvm: enable building NFIT and SSDT of vNVDIMM for HVM domains

 backends/Makefile.objs      |   1 +
 backends/hostmem-xen.c      | 108 ++++++++++++++++++++++++++++++++++++++++++++
 backends/hostmem.c          |   9 ++++
 hw/acpi/nvdimm.c            |  51 +++++++++++++++++----
 hw/i386/acpi-build.c        |   9 +++-
 hw/i386/pc.c                |  86 +++++++++++++++++++----------------
 hw/i386/xen/xen-hvm.c       | 105 ++++++++++++++++++++++++++++++++++++++++--
 hw/mem/nvdimm.c             |  10 +++-
 hw/mem/pc-dimm.c            |   6 ++-
 include/hw/acpi/aml-build.h |   4 ++
 include/hw/i386/pc.h        |   1 +
 include/hw/xen/xen.h        |   7 +++
 stubs/xen-hvm.c             |  15 ++++++
 13 files changed, 359 insertions(+), 53 deletions(-)
 create mode 100644 backends/hostmem-xen.c

-- 
2.15.1
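For context, an NVDIMM device of the kind this series builds NFIT/SSDT
entries for is configured with QEMU's generic NVDIMM options. The
invocation below is an illustrative sketch using the standard pc
machine syntax, not a command taken from the series; how the Xen
toolstack (libxl) wires the equivalent options for a xenfv guest is an
assumption here, and the paths and sizes are placeholders.

```shell
# Hypothetical example: a 4G vNVDIMM backed by a host DAX/pmem file.
# The hotplug area (slots/maxmem) must leave room for the NVDIMM.
qemu-system-x86_64 \
  -machine pc,nvdimm=on \
  -m 4G,slots=2,maxmem=8G \
  -object memory-backend-file,id=mem1,share=on,mem-path=/dev/pmem0,size=4G \
  -device nvdimm,id=nvdimm1,memdev=mem1
```

On Xen, patches 2-3 of this series provide the dedicated hostmem-xen
backend and the hotplug memory region that make this plugging path work
for HVM domains.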
On Thu, Dec 07, 2017 at 06:18:02PM +0800, Haozhong Zhang wrote:
> These are the QEMU-side patches that work with the associated Xen
> patches to enable vNVDIMM support for Xen HVM domains. Xen relies on
> QEMU to build the guest NFIT and NVDIMM namespace devices, and to
> allocate guest address space for vNVDIMM devices.

I've got another question, and maybe a possible improvement.

When QEMU builds the ACPI tables, it also initializes some
MemoryRegions, which use more guest memory. Do you know if those
regions are used with your patch series on Xen? Otherwise, we could
try to avoid their creation with this, in xenfv_machine_options():

    m->rom_file_has_mr = false;

(Setting this in xen_hvm_init() would probably be better, but I
haven't tried.)

If this is possible, libxl would not need to allocate more memory for
the guest (dm_acpi_size).

-- 
Anthony PERARD
On 02/27/18 17:22 +0000, Anthony PERARD wrote:
> On Thu, Dec 07, 2017 at 06:18:02PM +0800, Haozhong Zhang wrote:
> > These are the QEMU-side patches that work with the associated Xen
> > patches to enable vNVDIMM support for Xen HVM domains. Xen relies on
> > QEMU to build the guest NFIT and NVDIMM namespace devices, and to
> > allocate guest address space for vNVDIMM devices.
>
> I've got another question, and maybe a possible improvement.
>
> When QEMU builds the ACPI tables, it also initializes some
> MemoryRegions, which use more guest memory. Do you know if those
> regions are used with your patch series on Xen?

Yes, that's why dm_acpi_size is introduced.

> Otherwise, we could try to avoid their creation with this, in
> xenfv_machine_options():
>     m->rom_file_has_mr = false;
> (Setting this in xen_hvm_init() would probably be better, but I
> haven't tried.)

If my memory is correct, simply setting rom_file_has_mr to false does
not work (though I cannot recall the exact reason). I'll have a look at
the code to refresh my memory.

Haozhong

> If this is possible, libxl would not need to allocate more memory for
> the guest (dm_acpi_size).
>
> -- 
> Anthony PERARD
On Wed, Feb 28, 2018 at 05:36:59PM +0800, Haozhong Zhang wrote:
> On 02/27/18 17:22 +0000, Anthony PERARD wrote:
> > On Thu, Dec 07, 2017 at 06:18:02PM +0800, Haozhong Zhang wrote:
> > > These are the QEMU-side patches that work with the associated Xen
> > > patches to enable vNVDIMM support for Xen HVM domains. Xen relies
> > > on QEMU to build the guest NFIT and NVDIMM namespace devices, and
> > > to allocate guest address space for vNVDIMM devices.
> >
> > I've got another question, and maybe a possible improvement.
> >
> > When QEMU builds the ACPI tables, it also initializes some
> > MemoryRegions, which use more guest memory. Do you know if those
> > regions are used with your patch series on Xen?
>
> Yes, that's why dm_acpi_size is introduced.
>
> > Otherwise, we could try to avoid their creation with this, in
> > xenfv_machine_options():
> >     m->rom_file_has_mr = false;
> > (Setting this in xen_hvm_init() would probably be better, but I
> > haven't tried.)
>
> If my memory is correct, simply setting rom_file_has_mr to false does
> not work (though I cannot recall the exact reason). I'll have a look
> at the code to refresh my memory.

I've played a bit with this idea, but without a proper NVDIMM available
for the guest, so I don't know if it's going to work properly without
the MR.

To make it work, I had to disable some code in acpi_build_update() that
makes use of the MemoryRegions, as well as an assert in acpi_setup().
After those small hacks, I could boot the guest, and I checked that the
expected ACPI tables were there, and they looked correct to my eyes. At
least `ndctl list` works and showed the NVDIMM (that I had configured
on QEMU's command line).

But I may not have gone far enough with my tests, and maybe something
later relies on the MRs, especially the _DSM method, which I don't know
was working properly.

Anyway, that's why I proposed the idea, and if we can avoid more
uncertainty about how much guest memory QEMU is going to use, that
would be good.

Thanks,

-- 
Anthony PERARD
On 03/02/18 12:03 +0000, Anthony PERARD wrote:
> On Wed, Feb 28, 2018 at 05:36:59PM +0800, Haozhong Zhang wrote:
> > On 02/27/18 17:22 +0000, Anthony PERARD wrote:
> > > On Thu, Dec 07, 2017 at 06:18:02PM +0800, Haozhong Zhang wrote:
> > > > These are the QEMU-side patches that work with the associated
> > > > Xen patches to enable vNVDIMM support for Xen HVM domains. Xen
> > > > relies on QEMU to build the guest NFIT and NVDIMM namespace
> > > > devices, and to allocate guest address space for vNVDIMM
> > > > devices.
> > >
> > > I've got another question, and maybe a possible improvement.
> > >
> > > When QEMU builds the ACPI tables, it also initializes some
> > > MemoryRegions, which use more guest memory. Do you know if those
> > > regions are used with your patch series on Xen?
> >
> > Yes, that's why dm_acpi_size is introduced.
> >
> > > Otherwise, we could try to avoid their creation with this, in
> > > xenfv_machine_options():
> > >     m->rom_file_has_mr = false;
> > > (Setting this in xen_hvm_init() would probably be better, but I
> > > haven't tried.)
> >
> > If my memory is correct, simply setting rom_file_has_mr to false
> > does not work (though I cannot recall the exact reason). I'll have a
> > look at the code to refresh my memory.
>
> I've played a bit with this idea, but without a proper NVDIMM
> available for the guest, so I don't know if it's going to work
> properly without the MR.
>
> To make it work, I had to disable some code in acpi_build_update()
> that makes use of the MemoryRegions, as well as an assert in
> acpi_setup(). After those small hacks, I could boot the guest, and I
> checked that the expected ACPI tables were there, and they looked
> correct to my eyes. At least `ndctl list` works and showed the NVDIMM
> (that I had configured on QEMU's command line).
>
> But I may not have gone far enough with my tests, and maybe something
> later relies on the MRs, especially the _DSM method, which I don't
> know was working properly.
>
> Anyway, that's why I proposed the idea, and if we can avoid more
> uncertainty about how much guest memory QEMU is going to use, that
> would be good.

Yes, I also tested some non-trivial _DSM methods, and it looks like ROM
files without memory regions can work with Xen after some modifications.
I'll apply this idea in the next version if no other issues are found.

Thanks,
Haozhong
On Tue, Mar 06, 2018 at 12:16:08PM +0800, Haozhong Zhang wrote:
> On 03/02/18 12:03 +0000, Anthony PERARD wrote:
> > On Wed, Feb 28, 2018 at 05:36:59PM +0800, Haozhong Zhang wrote:
> > > On 02/27/18 17:22 +0000, Anthony PERARD wrote:
> > > > On Thu, Dec 07, 2017 at 06:18:02PM +0800, Haozhong Zhang wrote:
> > > > > These are the QEMU-side patches that work with the associated
> > > > > Xen patches to enable vNVDIMM support for Xen HVM domains. Xen
> > > > > relies on QEMU to build the guest NFIT and NVDIMM namespace
> > > > > devices, and to allocate guest address space for vNVDIMM
> > > > > devices.
> > > >
> > > > I've got another question, and maybe a possible improvement.
> > > >
> > > > When QEMU builds the ACPI tables, it also initializes some
> > > > MemoryRegions, which use more guest memory. Do you know if those
> > > > regions are used with your patch series on Xen?
> > >
> > > Yes, that's why dm_acpi_size is introduced.
> > >
> > > > Otherwise, we could try to avoid their creation with this, in
> > > > xenfv_machine_options():
> > > >     m->rom_file_has_mr = false;
> > > > (Setting this in xen_hvm_init() would probably be better, but I
> > > > haven't tried.)
> > >
> > > If my memory is correct, simply setting rom_file_has_mr to false
> > > does not work (though I cannot recall the exact reason). I'll have
> > > a look at the code to refresh my memory.
> >
> > I've played a bit with this idea, but without a proper NVDIMM
> > available for the guest, so I don't know if it's going to work
> > properly without the MR.
> >
> > To make it work, I had to disable some code in acpi_build_update()
> > that makes use of the MemoryRegions, as well as an assert in
> > acpi_setup(). After those small hacks, I could boot the guest, and I
> > checked that the expected ACPI tables were there, and they looked
> > correct to my eyes. At least `ndctl list` works and showed the
> > NVDIMM (that I had configured on QEMU's command line).
> >
> > But I may not have gone far enough with my tests, and maybe
> > something later relies on the MRs, especially the _DSM method, which
> > I don't know was working properly.
> >
> > Anyway, that's why I proposed the idea, and if we can avoid more
> > uncertainty about how much guest memory QEMU is going to use, that
> > would be good.
>
> Yes, I also tested some non-trivial _DSM methods, and it looks like
> ROM files without memory regions can work with Xen after some
> modifications. I'll apply this idea in the next version if no other
> issues are found.

Awesome, thanks.

-- 
Anthony PERARD