vmem->bitmap indexes the memory region of the virtio-mem backend at a
granularity of block_size. To calculate the bit index of a target section
offset, the offset should be divided by block_size, not by bitmap_size.
Fixes: 2044969f0b ("virtio-mem: Implement RamDiscardManager interface")
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
---
hw/virtio/virtio-mem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index ed170def48..e19ee817fe 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -235,7 +235,7 @@ static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
     uint64_t offset, size;
     int ret = 0;
 
-    first_bit = s->offset_within_region / vmem->bitmap_size;
+    first_bit = s->offset_within_region / vmem->block_size;
     first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
     while (first_bit < vmem->bitmap_size) {
         MemoryRegionSection tmp = *s;
@@ -267,7 +267,7 @@ static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
     uint64_t offset, size;
     int ret = 0;
 
-    first_bit = s->offset_within_region / vmem->bitmap_size;
+    first_bit = s->offset_within_region / vmem->block_size;
     first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
     while (first_bit < vmem->bitmap_size) {
         MemoryRegionSection tmp = *s;
--
2.17.1
On Fri, Dec 16, 2022 at 02:22:31PM +0800, Chenyi Qiang wrote:
> vmem->bitmap indexes the memory region of the virtio-mem backend at a
> granularity of block_size. To calculate the bit index of a target section
> offset, the offset should be divided by block_size, not by bitmap_size.
>
> Fixes: 2044969f0b ("virtio-mem: Implement RamDiscardManager interface")
> Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
I see David's queueing this.
> ---
> hw/virtio/virtio-mem.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
> index ed170def48..e19ee817fe 100644
> --- a/hw/virtio/virtio-mem.c
> +++ b/hw/virtio/virtio-mem.c
> @@ -235,7 +235,7 @@ static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
> uint64_t offset, size;
> int ret = 0;
>
> - first_bit = s->offset_within_region / vmem->bitmap_size;
> + first_bit = s->offset_within_region / vmem->block_size;
> first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
> while (first_bit < vmem->bitmap_size) {
> MemoryRegionSection tmp = *s;
> @@ -267,7 +267,7 @@ static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
> uint64_t offset, size;
> int ret = 0;
>
> - first_bit = s->offset_within_region / vmem->bitmap_size;
> + first_bit = s->offset_within_region / vmem->block_size;
> first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
> while (first_bit < vmem->bitmap_size) {
> MemoryRegionSection tmp = *s;
> --
> 2.17.1
On 16.12.22 07:22, Chenyi Qiang wrote:
> vmem->bitmap indexes the memory region of the virtio-mem backend at a
> granularity of block_size. To calculate the bit index of a target section
> offset, the offset should be divided by block_size, not by bitmap_size.
I'm curious, what's the user-visible effect and how did you identify
this issue?
IIUC, we could end up starting our search for a plugged/unplugged block
"too late", such that we fail to process some blocks.
That would be the case if bitmap_size < block_size, which should
effectively always happen ...
unplug_all and migration would be affected, which is why a simple test
case without a guest reboot/migration wouldn't run into it.
>
> Fixes: 2044969f0b ("virtio-mem: Implement RamDiscardManager interface")
> Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
> ---
> hw/virtio/virtio-mem.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
> index ed170def48..e19ee817fe 100644
> --- a/hw/virtio/virtio-mem.c
> +++ b/hw/virtio/virtio-mem.c
> @@ -235,7 +235,7 @@ static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
> uint64_t offset, size;
> int ret = 0;
>
> - first_bit = s->offset_within_region / vmem->bitmap_size;
> + first_bit = s->offset_within_region / vmem->block_size;
> first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
> while (first_bit < vmem->bitmap_size) {
> MemoryRegionSection tmp = *s;
> @@ -267,7 +267,7 @@ static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
> uint64_t offset, size;
> int ret = 0;
>
> - first_bit = s->offset_within_region / vmem->bitmap_size;
> + first_bit = s->offset_within_region / vmem->block_size;
> first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
> while (first_bit < vmem->bitmap_size) {
> MemoryRegionSection tmp = *s;
Looks correct to me
Reviewed-by: David Hildenbrand <david@redhat.com>
Thanks!
--
Thanks,
David / dhildenb
On 16.12.22 09:52, David Hildenbrand wrote:
> On 16.12.22 07:22, Chenyi Qiang wrote:
>> vmem->bitmap indexes the memory region of the virtio-mem backend at a
>> granularity of block_size. To calculate the bit index of a target
>> section offset, the offset should be divided by block_size, not by
>> bitmap_size.
>
> I'm curious, what's the user-visible effect and how did you identify
> this issue?
>
> IIUC, we could end up starting our search for a plugged/unplugged block
> "too late", such that we fail to process some blocks.
>
> That would be the case if bitmap_size < block_size, which should
> effectively always happen ...
>
> unplug_all and migration would be affected, which is why a simple test
> case without a guest reboot/migration wouldn't run into it.

I just realized that unplug_all is fine because only vfio implements the
ram_discard_listener so far and always sets double_discard_supported=true.
So migration should be the issue (and IIRC migration with VFIO is still
shaky).

--
Thanks,
David / dhildenb
On 12/16/2022 6:30 PM, David Hildenbrand wrote:
> On 16.12.22 09:52, David Hildenbrand wrote:
>> On 16.12.22 07:22, Chenyi Qiang wrote:
>>> vmem->bitmap indexes the memory region of the virtio-mem backend at a
>>> granularity of block_size. To calculate the bit index of a target
>>> section offset, the offset should be divided by block_size, not by
>>> bitmap_size.
>>
>> I'm curious, what's the user-visible effect and how did you identify
>> this issue?
>>
>> IIUC, we could end up starting our search for a plugged/unplugged block
>> "too late", such that we fail to process some blocks.
>>
>> That would be the case if bitmap_size < block_size, which should
>> effectively always happen ...
>>
>> unplug_all and migration would be affected, which is why a simple test
>> case without a guest reboot/migration wouldn't run into it.
>
> I just realized that unplug_all is fine because only vfio implements the
> ram_discard_listener so far and always sets
> double_discard_supported=true. So migration should be the issue (and
> IIRC migration with VFIO is still shaky).

Yes, actually, no obvious visible effect on my side. I was just learning
the RamDiscardManager interface and found this issue. :)
On 19.12.22 02:21, Chenyi Qiang wrote:
> On 12/16/2022 6:30 PM, David Hildenbrand wrote:
>> On 16.12.22 09:52, David Hildenbrand wrote:
>>> On 16.12.22 07:22, Chenyi Qiang wrote:
>>>> vmem->bitmap indexes the memory region of the virtio-mem backend at a
>>>> granularity of block_size. To calculate the bit index of a target
>>>> section offset, the offset should be divided by block_size, not by
>>>> bitmap_size.
>>>
>>> I'm curious, what's the user-visible effect and how did you identify
>>> this issue?
>>>
>>> IIUC, we could end up starting our search for a plugged/unplugged
>>> block "too late", such that we fail to process some blocks.
>>>
>>> That would be the case if bitmap_size < block_size, which should
>>> effectively always happen ...
>>>
>>> unplug_all and migration would be affected, which is why a simple test
>>> case without a guest reboot/migration wouldn't run into it.
>>
>> I just realized that unplug_all is fine because only vfio implements
>> the ram_discard_listener so far and always sets
>> double_discard_supported=true. So migration should be the issue (and
>> IIRC migration with VFIO is still shaky).
>
> Yes, actually, no obvious visible effect on my side. I was just learning
> the RamDiscardManager interface and found this issue. :)

Good, thanks. Queuing this to

https://github.com/davidhildenbrand/qemu.git mem-next

--
Thanks,
David / dhildenb