We end up not closing the file descriptor, resulting in leaking one
file descriptor for each VHOST_USER_REM_MEM_REG message.
Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Coiby Xu <coiby.xu@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
subprojects/libvhost-user/libvhost-user.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index bf09693255..bb5c3b3280 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -839,6 +839,8 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
         vu_panic(dev, "Specified region not found\n");
     }
 
+    close(vmsg->fds[0]);
+
     return true;
 }
--
2.31.1
On Tue, Oct 12, 2021 at 08:38:32PM +0200, David Hildenbrand wrote:
> We end up not closing the file descriptor, resulting in leaking one
> file descriptor for each VHOST_USER_REM_MEM_REG message.
>
> Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
> Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Coiby Xu <coiby.xu@gmail.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> subprojects/libvhost-user/libvhost-user.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index bf09693255..bb5c3b3280 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -839,6 +839,8 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> vu_panic(dev, "Specified region not found\n");
> }
>
> + close(vmsg->fds[0]);
Does anything check that exactly 1 fd was received? For example,
vu_set_log_fd_exec() does:
    if (vmsg->fd_num != 1) {
        vu_panic(dev, "Invalid log_fd message");
        return false;
    }
I think that's necessary both to make vhost-user master development
easier and because fds[] is not initialized to -1.
Stefan
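Stefan's suggested check, applied before touching `fds[0]`, might look like the sketch below. The `Msg` struct and `handle_one_fd_msg()` are stand-ins invented for illustration, not the real libvhost-user definitions; only the `fd_num`/`fds[]` fields, the vu_set_log_fd_exec()-style validation, and the `close()` from the patch come from the thread.

```c
#include <assert.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Minimal stand-in for the message type; only the fd fields matter here. */
#define MAX_FDS 8
typedef struct {
    int fds[MAX_FDS];
    int fd_num;
} Msg;

/* Reject messages that did not carry exactly one fd before touching
 * fds[0]; the real code would call vu_panic() instead of fprintf(). */
static bool handle_one_fd_msg(Msg *vmsg)
{
    if (vmsg->fd_num != 1) {
        fprintf(stderr, "Invalid message: expected 1 fd, got %d\n",
                vmsg->fd_num);
        return false;
    }
    close(vmsg->fds[0]);   /* the fix from this patch */
    vmsg->fd_num = 0;
    return true;
}
```

Without the count check, a message carrying zero fds would make the handler close whatever garbage happens to sit in the uninitialized `fds[0]`, which is exactly the hazard Stefan points out.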
On 13.10.21 11:48, Stefan Hajnoczi wrote:
> On Tue, Oct 12, 2021 at 08:38:32PM +0200, David Hildenbrand wrote:
>> We end up not closing the file descriptor, resulting in leaking one
>> file descriptor for each VHOST_USER_REM_MEM_REG message.
>>
>> Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
>> Cc: Michael S. Tsirkin <mst@redhat.com>
>> Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
>> Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
>> Cc: Stefan Hajnoczi <stefanha@redhat.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Coiby Xu <coiby.xu@gmail.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> subprojects/libvhost-user/libvhost-user.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
>> index bf09693255..bb5c3b3280 100644
>> --- a/subprojects/libvhost-user/libvhost-user.c
>> +++ b/subprojects/libvhost-user/libvhost-user.c
>> @@ -839,6 +839,8 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>> vu_panic(dev, "Specified region not found\n");
>> }
>>
>> + close(vmsg->fds[0]);
>
> Does anything check that exactly 1 fd was received? For example,
> vu_set_log_fd_exec() does:
>
> >     if (vmsg->fd_num != 1) {
> >         vu_panic(dev, "Invalid log_fd message");
> >         return false;
> >     }
>
> I think that's necessary both to make vhost-user master development
> easier and because fds[] is not initialized to -1.
Similarly, vu_add_mem_reg() assumes exactly one fd was sent, AFAICS.
If we panic, do we still have to call vmsg_close_fds() ?
--
Thanks,
David / dhildenb
On Wed, Oct 13, 2021 at 11:51:24AM +0200, David Hildenbrand wrote:
> On 13.10.21 11:48, Stefan Hajnoczi wrote:
> > On Tue, Oct 12, 2021 at 08:38:32PM +0200, David Hildenbrand wrote:
> > > We end up not closing the file descriptor, resulting in leaking one
> > > file descriptor for each VHOST_USER_REM_MEM_REG message.
> > >
> > > Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
> > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > > Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
> > > Cc: Stefan Hajnoczi <stefanha@redhat.com>
> > > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > Cc: Coiby Xu <coiby.xu@gmail.com>
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> > > ---
> > > subprojects/libvhost-user/libvhost-user.c | 2 ++
> > > 1 file changed, 2 insertions(+)
> > >
> > > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > > index bf09693255..bb5c3b3280 100644
> > > --- a/subprojects/libvhost-user/libvhost-user.c
> > > +++ b/subprojects/libvhost-user/libvhost-user.c
> > > @@ -839,6 +839,8 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> > > vu_panic(dev, "Specified region not found\n");
> > > }
> > > + close(vmsg->fds[0]);
> >
> > Does anything check that exactly 1 fd was received? For example,
> > vu_set_log_fd_exec() does:
> >
> >     if (vmsg->fd_num != 1) {
> >         vu_panic(dev, "Invalid log_fd message");
> >         return false;
> >     }
> >
> > I think that's necessary both to make vhost-user master development
> > easier and because fds[] is not initialized to -1.
Ack - will add that.
>
> Similarly, vu_add_mem_reg() assumes exactly one was sent AFAIKS.
Ack
>
> If we panic, do we still have to call vmsg_close_fds() ?
>
I think so. What else will close the FDs?
AFAICT a vu_panic does not imply that the overall process has to die if that's
what you mean. What if one process is exposing multiple devices and only one of
them panics?
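The loop that has to run on the panic path is simple; this sketch mirrors what `vmsg_close_fds()` does as described in the discussion (the `Msg` type and `close_msg_fds` name are illustrative stand-ins, not the actual libvhost-user code). Because `vu_panic()` need not kill the process, skipping this on error would leak every received descriptor.

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

#define MAX_FDS 8
typedef struct {
    int fds[MAX_FDS];
    int fd_num;
} Msg;

/* Close every fd that arrived with the message, then mark the message
 * as carrying none, so a later cleanup pass cannot double-close. */
static void close_msg_fds(Msg *vmsg)
{
    for (int i = 0; i < vmsg->fd_num; i++) {
        close(vmsg->fds[i]);
    }
    vmsg->fd_num = 0;
}
```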
> --
> Thanks,
>
> David / dhildenb
>
On 14.10.21 07:29, Raphael Norwitz wrote:
> On Wed, Oct 13, 2021 at 11:51:24AM +0200, David Hildenbrand wrote:
>> On 13.10.21 11:48, Stefan Hajnoczi wrote:
>>> On Tue, Oct 12, 2021 at 08:38:32PM +0200, David Hildenbrand wrote:
>>>> We end up not closing the file descriptor, resulting in leaking one
>>>> file descriptor for each VHOST_USER_REM_MEM_REG message.
>>>>
>>>> Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
>>>> Cc: Michael S. Tsirkin <mst@redhat.com>
>>>> Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
>>>> Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
>>>> Cc: Stefan Hajnoczi <stefanha@redhat.com>
>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>> Cc: Coiby Xu <coiby.xu@gmail.com>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>> subprojects/libvhost-user/libvhost-user.c | 2 ++
>>>> 1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
>>>> index bf09693255..bb5c3b3280 100644
>>>> --- a/subprojects/libvhost-user/libvhost-user.c
>>>> +++ b/subprojects/libvhost-user/libvhost-user.c
>>>> @@ -839,6 +839,8 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>>>> vu_panic(dev, "Specified region not found\n");
>>>> }
>>>> + close(vmsg->fds[0]);
>>>
>>> Does anything check that exactly 1 fd was received? For example,
>>> vu_set_log_fd_exec() does:
>>>
>>>     if (vmsg->fd_num != 1) {
>>>         vu_panic(dev, "Invalid log_fd message");
>>>         return false;
>>>     }
>>>
>>> I think that's necessary both to make vhost-user master development
>>> easier and because fds[] is not initialized to -1.
>
> Ack - will add that.
>
>>
>> Similarly, vu_add_mem_reg() assumes exactly one fd was sent, AFAICS.
>
> Ack
>
>>
>> If we panic, do we still have to call vmsg_close_fds() ?
>>
>
> I think so. What else will close the FDs?
>
> AFAICT a vu_panic does not imply that the overall process has to die if that's
> what you mean. What if one process is exposing multiple devices and only one of
> them panics?
So IIUC, you'll send some patches to tackle the fd checks?
While at it, we might want to simplify VHOST_USER_REM_MEM_REG.
I have a patch there that needs tweaking to cover the point Stefan raised
regarding duplicate ranges. We might want to do the memmove within the loop
instead and drop the "break" to process all elements.
commit 34d71b6531c74a61442432b37e5829a76a7017c5
Author: David Hildenbrand <david@redhat.com>
Date:   Tue Oct 12 13:25:43 2021 +0200

    libvhost-user: Simplify VHOST_USER_REM_MEM_REG

    Let's avoid having to manually copy all elements. Copy only the ones
    necessary to close the hole and perform the operation in-place without
    a second array.

    Signed-off-by: David Hildenbrand <david@redhat.com>

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 7b0e40256e..499c31dc68 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -796,10 +796,8 @@ static inline bool reg_equal(VuDevRegion *vudev_reg,
 
 static bool
 vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
-    int i, j;
-    bool found = false;
-    VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
     VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
+    int i;
 
     DPRINT("Removing region:\n");
     DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
@@ -811,28 +809,27 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     DPRINT("    mmap_offset      0x%016"PRIx64"\n",
            msg_region->mmap_offset);
 
-    for (i = 0, j = 0; i < dev->nregions; i++) {
-        if (!reg_equal(&dev->regions[i], msg_region)) {
-            shadow_regions[j].gpa = dev->regions[i].gpa;
-            shadow_regions[j].size = dev->regions[i].size;
-            shadow_regions[j].qva = dev->regions[i].qva;
-            shadow_regions[j].mmap_addr = dev->regions[i].mmap_addr;
-            shadow_regions[j].mmap_offset = dev->regions[i].mmap_offset;
-            j++;
-        } else {
-            found = true;
+    for (i = 0; i < dev->nregions; i++) {
+        if (reg_equal(&dev->regions[i], msg_region)) {
             VuDevRegion *r = &dev->regions[i];
             void *m = (void *) (uintptr_t) r->mmap_addr;
 
             if (m) {
                 munmap(m, r->size + r->mmap_offset);
             }
+            break;
         }
     }
 
-    if (found) {
-        memcpy(dev->regions, shadow_regions,
-               sizeof(VuDevRegion) * VHOST_USER_MAX_RAM_SLOTS);
+    if (i < dev->nregions) {
+        /*
+         * Shift all affected entries by 1 to close the hole at index i and
+         * zero out the last entry.
+         */
+        memmove(dev->regions + i, dev->regions + i + 1,
+                sizeof(VuDevRegion) * (dev->nregions - i - 1));
+        memset(dev->regions + dev->nregions - 1, 0,
+               sizeof(VuDevRegion));
         DPRINT("Successfully removed a region\n");
         dev->nregions--;
         vmsg_set_reply_u64(vmsg, 0);
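The in-place removal above can be seen in isolation on a plain array. This toy sketch (the `close_hole` helper is a hypothetical name, not from the patch) shows the same memmove/memset pattern that closes the hole at index i:

```c
#include <assert.h>
#include <string.h>

/* Remove element i from an n-element array in place: shift the tail left
 * by one slot and zero the freed last slot, as the patch does for
 * dev->regions[]. memmove (unlike memcpy) is safe for the overlapping
 * source and destination here. */
static void close_hole(int *arr, int n, int i)
{
    memmove(arr + i, arr + i + 1, sizeof(arr[0]) * (n - i - 1));
    memset(arr + n - 1, 0, sizeof(arr[0]));
}
```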
On a related note, I proposed increasing the memslot count in an RFC series:
https://lkml.kernel.org/r/20211013103330.26869-1-david@redhat.com
--
Thanks,
David / dhildenb
On Thu, Oct 14, 2021 at 09:06:51AM +0200, David Hildenbrand wrote:
> On 14.10.21 07:29, Raphael Norwitz wrote:
> > On Wed, Oct 13, 2021 at 11:51:24AM +0200, David Hildenbrand wrote:
> >> On 13.10.21 11:48, Stefan Hajnoczi wrote:
> >>> On Tue, Oct 12, 2021 at 08:38:32PM +0200, David Hildenbrand wrote:
> >>>> We end up not closing the file descriptor, resulting in leaking one
> >>>> file descriptor for each VHOST_USER_REM_MEM_REG message.
> >>>>
> >>>> Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
> >>>> Cc: Michael S. Tsirkin <mst@redhat.com>
> >>>> Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
> >>>> Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
> >>>> Cc: Stefan Hajnoczi <stefanha@redhat.com>
> >>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
> >>>> Cc: Coiby Xu <coiby.xu@gmail.com>
> >>>> Signed-off-by: David Hildenbrand <david@redhat.com>
> >>>> ---
> >>>> subprojects/libvhost-user/libvhost-user.c | 2 ++
> >>>> 1 file changed, 2 insertions(+)
> >>>>
> >>>> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> >>>> index bf09693255..bb5c3b3280 100644
> >>>> --- a/subprojects/libvhost-user/libvhost-user.c
> >>>> +++ b/subprojects/libvhost-user/libvhost-user.c
> >>>> @@ -839,6 +839,8 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> >>>> vu_panic(dev, "Specified region not found\n");
> >>>> }
> >>>> + close(vmsg->fds[0]);
> >>>
> >>> Does anything check that exactly 1 fd was received? For example,
> >>> vu_set_log_fd_exec() does:
> >>>
> >>> if (vmsg->fd_num != 1) {
> >>> vu_panic(dev, "Invalid log_fd message");
> >>> return false;
> >>> }
> >>>
> >>> I think that's necessary both to make vhost-user master development
> >>> easier and because fds[] is not initialized to -1.
> >
> > Ack - will add that.
> >
> >>
> >> Similarly, vu_add_mem_reg() assumes exactly one was sent AFAIKS.
> >
> > Ack
> >
> >>
> >> If we panic, do we still have to call vmsg_close_fds() ?
> >>
> >
> > I think so. What else will close the FDs?
> >
> > AFAICT a vu_panic does not imply that the overall process has to die if that's
> > what you mean. What if one process is exposing multiple devices and only one of
> > them panics?
>
> So IIUC, you'll send some patches to tackle the fd checks?
>
Yes
> While at it, we might want to simplify VHOST_USER_REM_MEM_REG.
> I have a patch there that needs tweaking to cover the point Stefan raised
> regarding duplicate ranges. We might want to do the memmove within the loop
> instead and drop the "break" to process all elements.
>
>
Sure - let me include this in the series.
> commit 34d71b6531c74a61442432b37e5829a76a7017c5
> Author: David Hildenbrand <david@redhat.com>
> Date: Tue Oct 12 13:25:43 2021 +0200
>
> libvhost-user: Simplify VHOST_USER_REM_MEM_REG
>
> Let's avoid having to manually copy all elements. Copy only the ones
> necessary to close the hole and perform the operation in-place without
> a second array.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 7b0e40256e..499c31dc68 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -796,10 +796,8 @@ static inline bool reg_equal(VuDevRegion *vudev_reg,
>
> static bool
> vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> - int i, j;
> - bool found = false;
> - VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
> VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
> + int i;
>
> DPRINT("Removing region:\n");
> DPRINT(" guest_phys_addr: 0x%016"PRIx64"\n",
> @@ -811,28 +809,27 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> DPRINT(" mmap_offset 0x%016"PRIx64"\n",
> msg_region->mmap_offset);
>
> - for (i = 0, j = 0; i < dev->nregions; i++) {
> - if (!reg_equal(&dev->regions[i], msg_region)) {
> - shadow_regions[j].gpa = dev->regions[i].gpa;
> - shadow_regions[j].size = dev->regions[i].size;
> - shadow_regions[j].qva = dev->regions[i].qva;
> - shadow_regions[j].mmap_addr = dev->regions[i].mmap_addr;
> - shadow_regions[j].mmap_offset = dev->regions[i].mmap_offset;
> - j++;
> - } else {
> - found = true;
> + for (i = 0; i < dev->nregions; i++) {
> + if (reg_equal(&dev->regions[i], msg_region)) {
> VuDevRegion *r = &dev->regions[i];
> void *m = (void *) (uintptr_t) r->mmap_addr;
>
> if (m) {
> munmap(m, r->size + r->mmap_offset);
> }
> + break;
> }
> }
>
> - if (found) {
> - memcpy(dev->regions, shadow_regions,
> - sizeof(VuDevRegion) * VHOST_USER_MAX_RAM_SLOTS);
> + if (i < dev->nregions) {
> + /*
> + * Shift all affected entries by 1 to close the hole at index i and
> + * zero out the last entry.
> + */
> + memmove(dev->regions + i, dev->regions + i + 1,
> + sizeof(VuDevRegion) * (dev->nregions - i - 1));
> + memset(dev->regions + dev->nregions - 1, 0,
> + sizeof(VuDevRegion));
> DPRINT("Successfully removed a region\n");
> dev->nregions--;
> vmsg_set_reply_u64(vmsg, 0);
>
>
>
> On a related note, I proposed in a RFC series to increase the memslot count:
>
> https://lore.kernel.org/all/20211013103330.26869-1-david@redhat.com/T/#mbaa35cebb311e7ab9c029f9f99fb2ba41e993b9f
>
Thanks for pointing that out. I don't see any problem with bumping the
memslot count. I don't expect it will conflict with anything I'm doing
but will keep it in mind.
> --
> Thanks,
>
> David / dhildenb
>
On Tue, Oct 12, 2021 at 08:38:32PM +0200, David Hildenbrand wrote:
> We end up not closing the file descriptor, resulting in leaking one
> file descriptor for each VHOST_USER_REM_MEM_REG message.
>
> Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
> Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Coiby Xu <coiby.xu@gmail.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
> subprojects/libvhost-user/libvhost-user.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index bf09693255..bb5c3b3280 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -839,6 +839,8 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> vu_panic(dev, "Specified region not found\n");
> }
>
> + close(vmsg->fds[0]);
> +
> return true;
> }
>
> --
> 2.31.1
>