nvme_create_queue_pair() allocates resources with qemu_vfio_dma_map(),
but we never release them. Do it in nvme_free_queue(), which is called
from nvme_free_queue_pair().
Reported by valgrind:
==252858== 520,192 bytes in 1 blocks are still reachable in loss record 8,293 of 8,302
==252858== at 0x4846803: memalign (vg_replace_malloc.c:1265)
==252858== by 0x484691F: posix_memalign (vg_replace_malloc.c:1429)
==252858== by 0xB8AFE4: qemu_try_memalign (oslib-posix.c:210)
==252858== by 0xA9E315: nvme_create_queue_pair (nvme.c:229)
==252858== by 0xAA0125: nvme_init (nvme.c:799)
==252858== by 0xAA081C: nvme_file_open (nvme.c:953)
==252858== by 0xA23DDD: bdrv_open_driver (block.c:1550)
==252858== by 0xA24806: bdrv_open_common (block.c:1827)
==252858== by 0xA2889B: bdrv_open_inherit (block.c:3747)
==252858== by 0xA28DE4: bdrv_open (block.c:3840)
==252858== by 0x9E0F8E: bds_tree_init (blockdev.c:675)
==252858== by 0x9E7C74: qmp_blockdev_add (blockdev.c:3551)
Fixes: bdd6a90a9e5 ("block: Add VFIO based NVMe driver")
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
block/nvme.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/block/nvme.c b/block/nvme.c
index 6e476f54b9f..903c8ffa060 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -185,6 +185,7 @@ static bool nvme_init_queue(BDRVNVMeState *s, NVMeQueue *q,
static void nvme_free_queue(BDRVNVMeState *s, NVMeQueue *q)
{
+ qemu_vfio_dma_unmap(s->vfio, q->queue);
qemu_vfree(q->queue);
}
--
2.31.1
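For context, the function extended above, nvme_free_queue(), tears down what nvme_init_queue() sets up when nvme_create_queue_pair() builds a queue. A rough sketch of that setup path follows; it is reconstructed from the function names in the trace, not copied from the QEMU tree, so the field names, the exact qemu_vfio_dma_map() signature and the error handling are approximations that may differ between QEMU versions:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/vfio-helpers.h"   /* qemu_vfio_dma_map() / qemu_vfio_dma_unmap() */

/* Approximate shape of the setup path that nvme_free_queue() undoes.
 * Two resources are created per queue:
 *   1. a page-aligned host buffer (qemu_try_memalign)
 *   2. a VFIO DMA mapping of that buffer (qemu_vfio_dma_map)
 */
static bool nvme_init_queue(BDRVNVMeState *s, NVMeQueue *q,
                            unsigned nentries, size_t entry_bytes, Error **errp)
{
    size_t bytes = ROUND_UP(nentries * entry_bytes, qemu_real_host_page_size);
    int r;

    /* Host memory: the posix_memalign() allocation that valgrind tracks. */
    q->queue = qemu_try_memalign(qemu_real_host_page_size, bytes);
    if (!q->queue) {
        error_setg(errp, "Cannot allocate queue");
        return false;
    }
    memset(q->queue, 0, bytes);

    /* DMA (IOVA) mapping so the device can access the buffer; this is the
     * resource that had no matching qemu_vfio_dma_unmap() on teardown. */
    r = qemu_vfio_dma_map(s->vfio, q->queue, bytes, false, &q->iova, errp);
    if (r) {
        return false;
    }
    return true;
}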
On 10/6/21 18:49, Philippe Mathieu-Daudé wrote:
> ==252858== 520,192 bytes in 1 blocks are still reachable in loss record 8,293 of 8,302
BTW the "8302" number is kinda depressing...
Good news, with this patch I'm now at 8301.
On Wed, Oct 06, 2021 at 06:49:31PM +0200, Philippe Mathieu-Daudé wrote:
> @@ -185,6 +185,7 @@ static bool nvme_init_queue(BDRVNVMeState *s, NVMeQueue *q,
>
> static void nvme_free_queue(BDRVNVMeState *s, NVMeQueue *q)
> {
> + qemu_vfio_dma_unmap(s->vfio, q->queue);
> qemu_vfree(q->queue);
> }
I can't figure out the issue. qemu_vfree(q->queue) was already called
before this patch. How does adding qemu_vfio_dma_unmap() help with the
valgrind report in the commit description?
Stefan
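The distinction behind Stefan's question can be spelled out by annotating the patched function. Valgrind's memcheck only tracks host heap allocations, so of the two teardown calls, only qemu_vfree() can influence the quoted "still reachable" record; qemu_vfio_dma_unmap() releases the VFIO/IOMMU mapping, a kernel-side resource valgrind never sees. A sketch of the function as it looks after the patch, with annotations that are not part of the upstream code:

static void nvme_free_queue(BDRVNVMeState *s, NVMeQueue *q)
{
    /* Undoes qemu_vfio_dma_map(): drops the IOVA mapping held by the VFIO
     * container.  Kernel-side state, invisible to valgrind, so it cannot
     * change the record quoted in the commit message. */
    qemu_vfio_dma_unmap(s->vfio, q->queue);

    /* Undoes qemu_try_memalign(): frees the host buffer, which is the
     * allocation valgrind actually reports.  This call existed before
     * the patch. */
    qemu_vfree(q->queue);
}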
On 10/7/21 15:29, Stefan Hajnoczi wrote:
> I can't figure out the issue. qemu_vfree(q->queue) was already called
> before this patch. How does adding qemu_vfio_dma_unmap() help with the
> valgrind report in the commit description?
You are right, I think I didn't select the correct record
among the 8302 reported by valgrind. I will revisit, thanks.
On 07.10.2021 at 15:34, Philippe Mathieu-Daudé wrote:
> On 10/7/21 15:29, Stefan Hajnoczi wrote:
> > I can't figure out the issue. qemu_vfree(q->queue) was already called
> > before this patch. How does adding qemu_vfio_dma_unmap() help with the
> > valgrind report in the commit description?
>
> You are right, I think I didn't select the correct record
> among the 8302 reported by valgrind. I will revisit, thanks.
Should we still merge (parts of) this series for 6.2? Or does this mean
that we don't want it at all?
Kevin
On 11/2/21 13:33, Kevin Wolf wrote:
> Should we still merge (parts of) this series for 6.2? Or does this mean
> that we don't want it at all?
Patches #1-4 are cleanups welcome for 6.2 :) However we do not want #5.
On 02.11.2021 at 13:36, Philippe Mathieu-Daudé wrote:
> On 11/2/21 13:33, Kevin Wolf wrote:
> > Should we still merge (parts of) this series for 6.2? Or does this mean
> > that we don't want it at all?
>
> Patches #1-4 are cleanups welcome for 6.2 :) However we do not want #5.
Thanks. Patch 4 doesn't seem to make sense without 5 (and definitely not
without rewriting the commit message), but I'm applying patches 1-3.
Kevin
On 11/2/21 15:50, Kevin Wolf wrote:
> On 02.11.2021 at 13:36, Philippe Mathieu-Daudé wrote:
>> On 11/2/21 13:33, Kevin Wolf wrote:
>>> Should we still merge (parts of) this series for 6.2? Or does this mean
>>> that we don't want it at all?
>>
>> Patches #1-4 are cleanups welcome for 6.2 :) However we do not want #5.
>
> Thanks. Patch 4 doesn't seem to make sense without 5 (and definitely not
> without rewriting the commit message), but I'm applying patches 1-3.

Oops indeed. Thank you, that helps.