This patch documents the steps to use virtio pmem.
It also documents other useful information about
virtio pmem, e.g. use case, comparison with the QEMU
NVDIMM backend, and current limitations.
Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
docs/virtio-pmem.txt | 65 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 65 insertions(+)
create mode 100644 docs/virtio-pmem.txt
diff --git a/docs/virtio-pmem.txt b/docs/virtio-pmem.txt
new file mode 100644
index 0000000000..fc61eebb20
--- /dev/null
+++ b/docs/virtio-pmem.txt
@@ -0,0 +1,65 @@
+
+QEMU virtio pmem
+===================
+
+ This document explains the setup and usage of the virtio pmem device,
+ which is available since QEMU v4.1.0.
+
+ The virtio pmem device is a paravirtualized persistent memory device
+ on regular (i.e. non-NVDIMM) storage.
+
+Use case
+--------
+ Virtio pmem allows the guest to bypass the guest page cache and directly
+ use the host page cache. This reduces the guest memory footprint, as the
+ host can make efficient memory reclaim decisions under memory pressure.
+
+o How does virtio-pmem compare to the NVDIMM emulation supported by QEMU?
+
+ NVDIMM emulation on regular (i.e. non-NVDIMM) host storage does not
+ persist the guest writes, as there are no defined semantics in the
+ device specification. The virtio pmem device provides a way to support
+ guest write persistence on non-NVDIMM storage.
+
+virtio pmem usage
+-----------------
+ A virtio pmem device backed by a memory-backend-file can be created on
+ the QEMU command line as in the following example:
+
+ -m 8G,slots=$N,maxmem=$MAX_SIZE
+ -object memory-backend-file,id=mem1,share,mem-path=$PATH,size=$SIZE
+ -device virtio-pmem-pci,memdev=mem1,id=nv1
+
+ where:
+ - "-object memory-backend-file,id=mem1,share,mem-path=$PATH,size=$SIZE"
+ creates a memory backend of size $SIZE on the file $PATH. All
+ accesses to the virtio pmem device go to the file $PATH.
+
+ - "-device virtio-pmem-pci,memdev=mem1,id=nv1" creates a virtio pmem
+ PCI device whose storage is provided by the above memory backend.
+
+ Multiple virtio pmem devices can be created if multiple pairs of "-object"
+ and "-device" are provided.
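
 As a concrete illustration of the options above (the image path and all
 sizes here are made-up example values, not defaults), the following sketch
 prepares a backing file and assembles the corresponding arguments; it
 stops short of actually launching QEMU:

```shell
# Hypothetical backing file on the host; 4G is an illustrative size.
IMG=/tmp/virtio_pmem1.img
truncate -s 4G "$IMG"    # sparse file, takes no space until written

# Assemble the virtio pmem related arguments piece by piece.
QEMU_ARGS="-m 4G,slots=2,maxmem=16G"
QEMU_ARGS="$QEMU_ARGS -object memory-backend-file,id=mem1,share=on,mem-path=$IMG,size=4G"
QEMU_ARGS="$QEMU_ARGS -device virtio-pmem-pci,memdev=mem1,id=nv1"

# Print the assembled arguments; append them to your usual QEMU command.
echo "$QEMU_ARGS"
```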
+
+Hotplug
+-------
+Virtio pmem devices can be hotplugged via the QEMU monitor. First, the
+memory backing has to be added via "object_add"; afterwards, the virtio
+pmem device can be added via "device_add".
+
+For example, the following commands add another 4GB virtio pmem device to
+the guest:
+
+ (qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=virtio_pmem2.img,size=4G
+ (qemu) device_add virtio-pmem-pci,id=virtio_pmem2,memdev=mem2
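+
 The hotplug example above assumes the backing file already exists on the
 host; a minimal way to create it (the filename matches the object_add
 example, the 4G size is illustrative):

```shell
# Create a sparse backing image for the hotplug example; 4G matches the
# size passed to object_add (the path is relative, as in the example).
truncate -s 4G virtio_pmem2.img

# Sanity check: name and size of the freshly created image.
stat -c '%n %s' virtio_pmem2.img
```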
+
+Guest Data Persistence
+----------------------
+Guest data persistence on non-NVDIMM storage requires guest userspace
+applications to perform fsync/msync. This is different from a real nvdimm
+backend, where no additional fsync/msync is required for data persistence.
+Applications that skip these calls risk losing writes that are still
+sitting in the host page cache if the host crashes or loses power.
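
 For example, from inside the guest, after the pmem device has been
 formatted and mounted (the device name and mount point below are
 illustrative, not fixed names), a write can be made durable like this:

```shell
# MNT should point at the pmem filesystem mounted inside the guest,
# e.g. after:  mkfs.ext4 /dev/pmem0 && mount -o dax /dev/pmem0 /mnt/pmem
# (a throwaway directory is used as a fallback so the sketch also runs
# outside a guest).
MNT=${MNT:-$(mktemp -d)}

# conv=fsync makes dd call fsync(2) on the output file before exiting;
# on a virtio pmem backed filesystem this is what asks the host to flush
# the data to the backing file.
dd if=/dev/zero of="$MNT/data.bin" bs=1M count=1 conv=fsync

# An already-written file can also be flushed explicitly:
sync "$MNT/data.bin"
```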
+
+Limitations
+------------
+- Real nvdimm device backend is not supported.
+- virtio pmem hotunplug is not supported.
+- ACPI NVDIMM features like regions/namespaces are not supported.
+- ndctl command is not supported.
--
2.21.0
On Tue, 30 Jul 2019 12:16:57 +0530
Pankaj Gupta <pagupta@redhat.com> wrote:

> This patch documents the steps to use virtio pmem.
> It also documents other useful information about
> virtio pmem e.g use-case, comparison with Qemu NVDIMM
> backend and current limitations.
>
> Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> ---
>  docs/virtio-pmem.txt | 65 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 65 insertions(+)
>  create mode 100644 docs/virtio-pmem.txt
>
> diff --git a/docs/virtio-pmem.txt b/docs/virtio-pmem.txt

Maybe make this ReST from the start? Should be trivial enough.

> new file mode 100644
> index 0000000000..fc61eebb20
> --- /dev/null
> +++ b/docs/virtio-pmem.txt
> @@ -0,0 +1,65 @@
> +
> +QEMU virtio pmem
> +===================
> +
> + This document explains the usage of virtio pmem device

"setup and usage" ?

> + which is available since QEMU v4.1.0.
> +
> + The virtio pmem is paravirtualized persistent memory device

"The virtio pmem device is a paravirtualized..."

> + on regular(non-NVDIMM) storage.
> +
> +Usecase
> +--------
> + Allows to bypass the guest page cache and directly use host page cache.
> + This reduces guest memory footprint as host can make efficient memory

s/as host/,as the host/

> + reclaim decisions under memory pressure.
> +
> +o How does virtio-pmem compare to the nvdimm emulation supported by QEMU?
> +
> + NVDIMM emulation on regular(non-NVDIMM) host storage does not persists

s/regular(non-NVDIMM)/regular (i.e. non-NVDIMM)/ ?
s/persists/persist/

> + the guest writes as there are no defined semantecs in the device specification.

s/semantecs/semantics/

> + With virtio pmem device, guest write persistence on non-NVDIMM storage is
> + supported.

"The virtio pmem device provides a way to support guest write
persistence on non-NVDIMM storage." ?

> +
> +virtio pmem usage
> +-----------------
> + virtio pmem device is created with a memory-backend-file with the below
> + options:

"A virtio pmem device backed by a memory-backend-file can be created on
the QEMU command line as in the following example:" ?

> +
> + -machine pc -m 8G,slots=$N,maxmem=$MAX_SIZE

I'm not sure you should explicitly specify the machine type in this
example. I think it is fine to say that something is only supported on
a subset of machine types, but it should not make its way into an
example on how to configure a device and its backing.

Also, maybe fill in more concrete values here? Or split it into a part
specifying the syntax (where I'd use <max_size> instead of $MAX_SIZE
etc.), and a more concrete example?

> + -object memory-backend-file,id=mem1,share,mem-path=$PATH,size=$SIZE
> + -device virtio-pmem-pci,memdev=mem1,id=nv1
> +
> + where:
> + - "object memory-backend-file,id=mem1,share,mem-path=$PATH,size=$VIRTIO_PMEM_SIZE"
> + creates a backend storage of size $SIZE on a file $PATH. All
> + accesses to the virtio pmem device go to the file $PATH.
> +
> + - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
> + device whose storage is provided by above memory backend device.

"a virtio pmem PCI device" ?

> +
> + Multiple virtio pmem devices can be created if multiple pairs of "-object"
> + and "-device" are provided.
> +
> +Hotplug
> +-------
> +Accomplished by two monitor commands "object_add" and "device_add".

Hm... what about the following instead:

"Virtio pmem devices can be hotplugged via the QEMU monitor. First, the
memory backing has to be added via 'object_add'; afterwards, the virtio
pmem device can be added via 'device_add'."

> +
> +For example, the following commands add another 4GB virtio pmem device to
> +the guest:
> +
> + (qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=virtio_pmem2.img,size=4G
> + (qemu) device_add virtio-pmem-pci,id=virtio_pmem2,memdev=mem2
> +
> +Guest Data Persistence
> +----------------------
> +Guest data persistence on non-NVDIMM requires guest userspace application to

s/application/applications/ ?

> +perform fsync/msync. This is different than real nvdimm backend where no additional

s/than/from a/ ?

> +fsync/msync is required for data persistence.

Should we be a bit more verbose on what which guest applications are
supposed to do? I.e., how do they know they need to do fsync/msync,
when should they do it, and what are the consequences if they don't?

> +
> +Limitations
> +------------
> +- Real nvdimm device backend is not supported.
> +- virtio pmem hotunplug is not supported.
> +- ACPI NVDIMM features like regions/namespaces are not supported.
> +- ndctl command is not supported.
> On Tue, 30 Jul 2019 12:16:57 +0530
> Pankaj Gupta <pagupta@redhat.com> wrote:
>
> > diff --git a/docs/virtio-pmem.txt b/docs/virtio-pmem.txt
>
> Maybe make this ReST from the start? Should be trivial enough.

o.k

> > + This document explains the usage of virtio pmem device
>
> "setup and usage" ?

o.k

> > + The virtio pmem is paravirtualized persistent memory device
>
> "The virtio pmem device is a paravirtualized..."

sure.

> > + This reduces guest memory footprint as host can make efficient memory
>
> s/as host/,as the host/

sure

> > + NVDIMM emulation on regular(non-NVDIMM) host storage does not persists
>
> s/regular(non-NVDIMM)/regular (i.e. non-NVDIMM)/ ?
> s/persists/persist/

yes, to both.

> > + the guest writes as there are no defined semantecs in the device
> > specification.
>
> s/semantecs/semantics/

ah...spell checker :(

> > + With virtio pmem device, guest write persistence on non-NVDIMM storage
> > is
> > + supported.
>
> "The virtio pmem device provides a way to support guest write
> persistence on non-NVDIMM storage." ?

o.k

> > + virtio pmem device is created with a memory-backend-file with the below
> > + options:
>
> "A virtio pmem device backed by a memory-backend-file can be created on
> the QEMU command line as in the following example:" ?

o.k

> > + -machine pc -m 8G,slots=$N,maxmem=$MAX_SIZE
>
> I'm not sure you should explicitly specify the machine type in this
> example. I think it is fine to say that something is only supported on
> a subset of machine types, but it should not make its way into an
> example on how to configure a device and its backing.

o.k

> Also, maybe fill in more concrete values here? Or split it into a part
> specifying the syntax (where I'd use <max_size> instead of $MAX_SIZE
> etc.), and a more concrete example?

o.k

> > + - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
> > + device whose storage is provided by above memory backend device.
>
> "a virtio pmem PCI device" ?

o.k

> > +Hotplug
> > +-------
> > +Accomplished by two monitor commands "object_add" and "device_add".
>
> Hm... what about the following instead:
>
> "Virtio pmem devices can be hotplugged via the QEMU monitor. First, the
> memory backing has to be added via 'object_add'; afterwards, the virtio
> pmem device can be added via 'device_add'."

o.k

> > +Guest data persistence on non-NVDIMM requires guest userspace application
> > to
>
> s/application/applications/ ?
>
> > +perform fsync/msync. This is different than real nvdimm backend where no
> > additional
>
> s/than/from a/ ?

yes.

> > +fsync/msync is required for data persistence.
>
> Should we be a bit more verbose on what which guest applications are
> supposed to do? I.e., how do they know they need to do fsync/msync,
> when should they do it, and what are the consequences if they don't?

o.k.

Thank you for the review.

Best regards,
Pankaj