From: German Maglione <gmaglione@redhat.com>
QEMU sends all of VHOST_USER_SET_VRING_KICK, _CALL, and _ERR without
setting the NEED_REPLY flag, i.e. by the time the respective
vhost_user_set_vring_*() function returns, it is completely up to chance
whether the back-end has already processed the request and
switched over to the new FD for interrupts.
At least for vhost_user_set_vring_call(), that is a problem: It is
called through vhost_virtqueue_mask(), which is generally used in the
VirtioDeviceClass.guest_notifier_mask() implementation, which is in turn
called by virtio_pci_one_vector_unmask(). The fact that we do not wait
for the back-end to install the FD leads to a race there:
Masking interrupts is implemented by redirecting interrupts to an
internal event FD that is not connected to the guest. Unmasking then
re-installs the guest-connected IRQ FD, then checks if there are pending
interrupts left on the masked event FD, and if so, issues an interrupt
to the guest.
Because guest_notifier_mask() (through vhost_user_set_vring_call())
doesn't wait for the back-end to switch over to the actual IRQ FD, it's
possible we check for pending interrupts while the back-end is still
using the masked event FD, and then we will lose interrupts that occur
before the back-end finally does switch over.
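To illustrate the window (an editorial sketch, condensed from the
virtio_pci_one_vector_unmask() path; simplified names, not verbatim
QEMU code):

    /* 1. Point the back-end at the guest-connected IRQ FD again: */
    k->guest_notifier_mask(vdev, queue_no, false);
    /* ...this reaches vhost_user_set_vring_call(); without NEED_REPLY
     * it returns before the back-end has actually switched FDs. */

    /* 2. Flush interrupts that arrived while masked: */
    if (k->guest_notifier_pending(vdev, queue_no)) {
        event_notifier_set(irq_notifier);  /* inject into the guest */
    }
    /* Race: if the back-end is still signalling the masked FD at this
     * point, an interrupt raised after the check above is lost. */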
Fix this by setting NEED_REPLY on those VHOST_USER_SET_VRING_* messages,
so when we get that reply, we know that the back-end is now using the
new FD.
Signed-off-by: German Maglione <gmaglione@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
hw/virtio/vhost-user.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 36c9c2e04d..641960293b 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -1327,8 +1327,11 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
VhostUserRequest request,
struct vhost_vring_file *file)
{
+ int ret;
int fds[VHOST_USER_MAX_RAM_SLOTS];
size_t fd_num = 0;
+ bool reply_supported = virtio_has_feature(dev->protocol_features,
+ VHOST_USER_PROTOCOL_F_REPLY_ACK);
VhostUserMsg msg = {
.hdr.request = request,
.hdr.flags = VHOST_USER_VERSION,
@@ -1336,13 +1339,26 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
.hdr.size = sizeof(msg.payload.u64),
};
+ if (reply_supported) {
+ msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
+ }
+
if (file->fd > 0) {
fds[fd_num++] = file->fd;
} else {
msg.payload.u64 |= VHOST_USER_VRING_NOFD_MASK;
}
- return vhost_user_write(dev, &msg, fds, fd_num);
+ ret = vhost_user_write(dev, &msg, fds, fd_num);
+ if (ret < 0) {
+ return ret;
+ }
+
+ if (reply_supported) {
+ return process_message_reply(dev, &msg);
+ }
+
+ return 0;
}
static int vhost_user_set_vring_kick(struct vhost_dev *dev,
--
2.49.0
If you need to resend, please add a `vhost-user: ` prefix in the title.
On Wed, Oct 22, 2025 at 03:32:28PM +0200, gmaglione@redhat.com wrote:
>From: German Maglione <gmaglione@redhat.com>
>
>QEMU sends all of VHOST_USER_SET_VRING_KICK, _CALL, and _ERR without
>setting the NEED_REPLY flag, i.e. by the time the respective
>vhost_user_set_vring_*() function returns, it is completely up to chance
>whether whether the back-end has already processed the request and
^
There is an extra "whether" here.
>switched over to the new FD for interrupts.
>
>At least for vhost_user_set_vring_call(), that is a problem: It is
>called through vhost_virtqueue_mask(), which is generally used in the
>VirtioDeviceClass.guest_notifier_mask() implementation, which is in turn
>called by virtio_pci_one_vector_unmask(). The fact that we do not wait
>for the back-end to install the FD leads to a race there:
>
>Masking interrupts is implemented by redirecting interrupts to an
>internal event FD that is not connected to the guest. Unmasking then
>re-installs the guest-connected IRQ FD, then checks if there are pending
>interrupts left on the masked event FD, and if so, issues an interrupt
>to the guest.
>
>Because guest_notifier_mask() (through vhost_user_set_vring_call())
>doesn't wait for the back-end to switch over to the actual IRQ FD, it's
>possible we check for pending interrupts while the back-end is still
>using the masked event FD, and then we will lose interrupts that occur
>before the back-end finally does switch over.
>
>Fix this by setting NEED_REPLY on those VHOST_USER_SET_VRING_* messages,
>so when we get that reply, we know that the back-end is now using the
>new FD.
IIUC this is related to an issue discovered in virtiofsd; should we put
a link to that?
>
>Signed-off-by: German Maglione <gmaglione@redhat.com>
>Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
>---
> hw/virtio/vhost-user.c | 18 +++++++++++++++++-
> 1 file changed, 17 insertions(+), 1 deletion(-)
>
>diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
>index 36c9c2e04d..641960293b 100644
>--- a/hw/virtio/vhost-user.c
>+++ b/hw/virtio/vhost-user.c
>@@ -1327,8 +1327,11 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
> VhostUserRequest request,
> struct vhost_vring_file *file)
> {
>+ int ret;
> int fds[VHOST_USER_MAX_RAM_SLOTS];
> size_t fd_num = 0;
>+ bool reply_supported = virtio_has_feature(dev->protocol_features,
>+ VHOST_USER_PROTOCOL_F_REPLY_ACK);
> VhostUserMsg msg = {
> .hdr.request = request,
> .hdr.flags = VHOST_USER_VERSION,
>@@ -1336,13 +1339,26 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
> .hdr.size = sizeof(msg.payload.u64),
> };
>
>+ if (reply_supported) {
>+ msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
>+ }
>+
> if (file->fd > 0) {
> fds[fd_num++] = file->fd;
> } else {
> msg.payload.u64 |= VHOST_USER_VRING_NOFD_MASK;
> }
>
>- return vhost_user_write(dev, &msg, fds, fd_num);
>+ ret = vhost_user_write(dev, &msg, fds, fd_num);
>+ if (ret < 0) {
>+ return ret;
>+ }
>+
>+ if (reply_supported) {
What about adding a comment here that summarizes the excellent
description of this commit?
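For example, something along these lines (one possible wording, not
taken from the patch):

    if (reply_supported) {
        /*
         * Wait for the back-end to acknowledge the message: once the
         * reply arrives, the back-end has switched over to the new
         * FD, so callers like the unmask path can safely check the
         * old (masked) notifier for pending interrupts.
         */
        return process_message_reply(dev, &msg);
    }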
BTW all minor comments, the fix LGTM:
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Thanks,
Stefano
>+ return process_message_reply(dev, &msg);
>+ }
>+
>+ return 0;
> }
>
> static int vhost_user_set_vring_kick(struct vhost_dev *dev,
>--
>2.49.0
>
On Wed, Oct 22, 2025 at 3:32 PM <gmaglione@redhat.com> wrote:
>
> From: German Maglione <gmaglione@redhat.com>
>
> QEMU sends all of VHOST_USER_SET_VRING_KICK, _CALL, and _ERR without
> setting the NEED_REPLY flag, i.e. by the time the respective
> vhost_user_set_vring_*() function returns, it is completely up to chance
> whether the back-end has already processed the request and
> switched over to the new FD for interrupts.
>
> At least for vhost_user_set_vring_call(), that is a problem: It is
> called through vhost_virtqueue_mask(), which is generally used in the
> VirtioDeviceClass.guest_notifier_mask() implementation, which is in turn
> called by virtio_pci_one_vector_unmask(). The fact that we do not wait
> for the back-end to install the FD leads to a race there:
>
> Masking interrupts is implemented by redirecting interrupts to an
> internal event FD that is not connected to the guest. Unmasking then
> re-installs the guest-connected IRQ FD, then checks if there are pending
> interrupts left on the masked event FD, and if so, issues an interrupt
> to the guest.
>
> Because guest_notifier_mask() (through vhost_user_set_vring_call())
> doesn't wait for the back-end to switch over to the actual IRQ FD, it's
> possible we check for pending interrupts while the back-end is still
> using the masked event FD, and then we will lose interrupts that occur
> before the back-end finally does switch over.
>
> Fix this by setting NEED_REPLY on those VHOST_USER_SET_VRING_* messages,
> so when we get that reply, we know that the back-end is now using the
> new FD.
>
Fixes: 5f6f6664bf24 ("Add vhost-user as a vhost backend.") ?
> Signed-off-by: German Maglione <gmaglione@redhat.com>
> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
> ---
> hw/virtio/vhost-user.c | 18 +++++++++++++++++-
> 1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index 36c9c2e04d..641960293b 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -1327,8 +1327,11 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
> VhostUserRequest request,
> struct vhost_vring_file *file)
> {
> + int ret;
> int fds[VHOST_USER_MAX_RAM_SLOTS];
> size_t fd_num = 0;
> + bool reply_supported = virtio_has_feature(dev->protocol_features,
> + VHOST_USER_PROTOCOL_F_REPLY_ACK);
Why not use vhost_user_write_sync directly?
> VhostUserMsg msg = {
> .hdr.request = request,
> .hdr.flags = VHOST_USER_VERSION,
> @@ -1336,13 +1339,26 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
> .hdr.size = sizeof(msg.payload.u64),
> };
>
> + if (reply_supported) {
> + msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
> + }
> +
> if (file->fd > 0) {
> fds[fd_num++] = file->fd;
> } else {
> msg.payload.u64 |= VHOST_USER_VRING_NOFD_MASK;
> }
>
> - return vhost_user_write(dev, &msg, fds, fd_num);
> + ret = vhost_user_write(dev, &msg, fds, fd_num);
> + if (ret < 0) {
> + return ret;
> + }
> +
> + if (reply_supported) {
> + return process_message_reply(dev, &msg);
> + }
> +
> + return 0;
> }
>
> static int vhost_user_set_vring_kick(struct vhost_dev *dev,
> --
> 2.49.0
>
On Wed, Oct 22, 2025 at 3:50 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Wed, Oct 22, 2025 at 3:32 PM <gmaglione@redhat.com> wrote:
> >
> > From: German Maglione <gmaglione@redhat.com>
> >
> > QEMU sends all of VHOST_USER_SET_VRING_KICK, _CALL, and _ERR without
> > setting the NEED_REPLY flag, i.e. by the time the respective
> > vhost_user_set_vring_*() function returns, it is completely up to chance
> > whether the back-end has already processed the request and
> > switched over to the new FD for interrupts.
> >
> > At least for vhost_user_set_vring_call(), that is a problem: It is
> > called through vhost_virtqueue_mask(), which is generally used in the
> > VirtioDeviceClass.guest_notifier_mask() implementation, which is in turn
> > called by virtio_pci_one_vector_unmask(). The fact that we do not wait
> > for the back-end to install the FD leads to a race there:
> >
> > Masking interrupts is implemented by redirecting interrupts to an
> > internal event FD that is not connected to the guest. Unmasking then
> > re-installs the guest-connected IRQ FD, then checks if there are pending
> > interrupts left on the masked event FD, and if so, issues an interrupt
> > to the guest.
> >
> > Because guest_notifier_mask() (through vhost_user_set_vring_call())
> > doesn't wait for the back-end to switch over to the actual IRQ FD, it's
> > possible we check for pending interrupts while the back-end is still
> > using the masked event FD, and then we will lose interrupts that occur
> > before the back-end finally does switch over.
> >
> > Fix this by setting NEED_REPLY on those VHOST_USER_SET_VRING_* messages,
> > so when we get that reply, we know that the back-end is now using the
> > new FD.
> >
>
> Fixes: 5f6f6664bf24 ("Add vhost-user as a vhost backend.") ?
Sorry, I missed this. Michael, should I add this in a new patch version?
>
> > Signed-off-by: German Maglione <gmaglione@redhat.com>
> > Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
> > ---
> > hw/virtio/vhost-user.c | 18 +++++++++++++++++-
> > 1 file changed, 17 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > index 36c9c2e04d..641960293b 100644
> > --- a/hw/virtio/vhost-user.c
> > +++ b/hw/virtio/vhost-user.c
> > @@ -1327,8 +1327,11 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
> > VhostUserRequest request,
> > struct vhost_vring_file *file)
> > {
> > + int ret;
> > int fds[VHOST_USER_MAX_RAM_SLOTS];
> > size_t fd_num = 0;
> > + bool reply_supported = virtio_has_feature(dev->protocol_features,
> > + VHOST_USER_PROTOCOL_F_REPLY_ACK);
>
> Why not use vhost_user_write_sync directly?
>
> > VhostUserMsg msg = {
> > .hdr.request = request,
> > .hdr.flags = VHOST_USER_VERSION,
> > @@ -1336,13 +1339,26 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
> > .hdr.size = sizeof(msg.payload.u64),
> > };
> >
> > + if (reply_supported) {
> > + msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
> > + }
> > +
> > if (file->fd > 0) {
> > fds[fd_num++] = file->fd;
> > } else {
> > msg.payload.u64 |= VHOST_USER_VRING_NOFD_MASK;
> > }
> >
> > - return vhost_user_write(dev, &msg, fds, fd_num);
> > + ret = vhost_user_write(dev, &msg, fds, fd_num);
> > + if (ret < 0) {
> > + return ret;
> > + }
> > +
> > + if (reply_supported) {
> > + return process_message_reply(dev, &msg);
> > + }
> > +
> > + return 0;
> > }
> >
> > static int vhost_user_set_vring_kick(struct vhost_dev *dev,
> > --
> > 2.49.0
> >
>
--
German
On Wed, Oct 22, 2025 at 03:49:25PM +0200, Eugenio Perez Martin wrote:
>On Wed, Oct 22, 2025 at 3:32 PM <gmaglione@redhat.com> wrote:
>>
>> From: German Maglione <gmaglione@redhat.com>
>>
>> QEMU sends all of VHOST_USER_SET_VRING_KICK, _CALL, and _ERR without
>> setting the NEED_REPLY flag, i.e. by the time the respective
>> vhost_user_set_vring_*() function returns, it is completely up to chance
>> whether the back-end has already processed the request and
>> switched over to the new FD for interrupts.
>>
>> At least for vhost_user_set_vring_call(), that is a problem: It is
>> called through vhost_virtqueue_mask(), which is generally used in the
>> VirtioDeviceClass.guest_notifier_mask() implementation, which is in turn
>> called by virtio_pci_one_vector_unmask(). The fact that we do not wait
>> for the back-end to install the FD leads to a race there:
>>
>> Masking interrupts is implemented by redirecting interrupts to an
>> internal event FD that is not connected to the guest. Unmasking then
>> re-installs the guest-connected IRQ FD, then checks if there are pending
>> interrupts left on the masked event FD, and if so, issues an interrupt
>> to the guest.
>>
>> Because guest_notifier_mask() (through vhost_user_set_vring_call())
>> doesn't wait for the back-end to switch over to the actual IRQ FD, it's
>> possible we check for pending interrupts while the back-end is still
>> using the masked event FD, and then we will lose interrupts that occur
>> before the back-end finally does switch over.
>>
>> Fix this by setting NEED_REPLY on those VHOST_USER_SET_VRING_* messages,
>> so when we get that reply, we know that the back-end is now using the
>> new FD.
>>
>
>Fixes: 5f6f6664bf24 ("Add vhost-user as a vhost backend.") ?
>
>> Signed-off-by: German Maglione <gmaglione@redhat.com>
>> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
>> ---
>> hw/virtio/vhost-user.c | 18 +++++++++++++++++-
>> 1 file changed, 17 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
>> index 36c9c2e04d..641960293b 100644
>> --- a/hw/virtio/vhost-user.c
>> +++ b/hw/virtio/vhost-user.c
>> @@ -1327,8 +1327,11 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
>> VhostUserRequest request,
>> struct vhost_vring_file *file)
>> {
>> + int ret;
>> int fds[VHOST_USER_MAX_RAM_SLOTS];
>> size_t fd_num = 0;
>> + bool reply_supported = virtio_has_feature(dev->protocol_features,
>> + VHOST_USER_PROTOCOL_F_REPLY_ACK);
>
>Why not use vhost_user_write_sync directly?
I was about to suggest the same thing, but IIUC it does not support
passing fds.
However, yes, I agree that we should extend vhost_user_write_sync() in
another patch and then use it both here and in other places (e.g.
vhost_user_set_mem_table(), vhost_setup_backend_channel()).
But maybe that could be a follow-up later, since this is a fix to
backport without touching too much surrounding code. Up to German and
you, I'm fine with both.
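Such an extension might look roughly like this (an editorial sketch;
the fd-capable variant and its name are hypothetical, not existing
QEMU API):

    static int vhost_user_write_sync_fds(struct vhost_dev *dev,
                                         VhostUserMsg *msg,
                                         int *fds, size_t fd_num,
                                         bool wait_for_reply)
    {
        int ret;
        bool reply_supported = virtio_has_feature(dev->protocol_features,
                                                  VHOST_USER_PROTOCOL_F_REPLY_ACK);

        /* Ask for an ack only when the caller wants to synchronize
         * and the back-end advertises REPLY_ACK support. */
        if (wait_for_reply && reply_supported) {
            msg->hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
        }

        ret = vhost_user_write(dev, msg, fds, fd_num);
        if (ret < 0) {
            return ret;
        }

        if (wait_for_reply && reply_supported) {
            return process_message_reply(dev, msg);
        }

        return 0;
    }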
Thanks,
Stefano
On Wed, Oct 22, 2025 at 4:04 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Wed, Oct 22, 2025 at 03:49:25PM +0200, Eugenio Perez Martin wrote:
> >On Wed, Oct 22, 2025 at 3:32 PM <gmaglione@redhat.com> wrote:
> >>
> >> From: German Maglione <gmaglione@redhat.com>
> >>
> >> QEMU sends all of VHOST_USER_SET_VRING_KICK, _CALL, and _ERR without
> >> setting the NEED_REPLY flag, i.e. by the time the respective
> >> vhost_user_set_vring_*() function returns, it is completely up to chance
> >> whether the back-end has already processed the request and
> >> switched over to the new FD for interrupts.
> >>
> >> At least for vhost_user_set_vring_call(), that is a problem: It is
> >> called through vhost_virtqueue_mask(), which is generally used in the
> >> VirtioDeviceClass.guest_notifier_mask() implementation, which is in turn
> >> called by virtio_pci_one_vector_unmask(). The fact that we do not wait
> >> for the back-end to install the FD leads to a race there:
> >>
> >> Masking interrupts is implemented by redirecting interrupts to an
> >> internal event FD that is not connected to the guest. Unmasking then
> >> re-installs the guest-connected IRQ FD, then checks if there are pending
> >> interrupts left on the masked event FD, and if so, issues an interrupt
> >> to the guest.
> >>
> >> Because guest_notifier_mask() (through vhost_user_set_vring_call())
> >> doesn't wait for the back-end to switch over to the actual IRQ FD, it's
> >> possible we check for pending interrupts while the back-end is still
> >> using the masked event FD, and then we will lose interrupts that occur
> >> before the back-end finally does switch over.
> >>
> >> Fix this by setting NEED_REPLY on those VHOST_USER_SET_VRING_* messages,
> >> so when we get that reply, we know that the back-end is now using the
> >> new FD.
> >>
> >
> >Fixes: 5f6f6664bf24 ("Add vhost-user as a vhost backend.") ?
> >
> >> Signed-off-by: German Maglione <gmaglione@redhat.com>
> >> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
> >> ---
> >> hw/virtio/vhost-user.c | 18 +++++++++++++++++-
> >> 1 file changed, 17 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> >> index 36c9c2e04d..641960293b 100644
> >> --- a/hw/virtio/vhost-user.c
> >> +++ b/hw/virtio/vhost-user.c
> >> @@ -1327,8 +1327,11 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
> >> VhostUserRequest request,
> >> struct vhost_vring_file *file)
> >> {
> >> + int ret;
> >> int fds[VHOST_USER_MAX_RAM_SLOTS];
> >> size_t fd_num = 0;
> >> + bool reply_supported = virtio_has_feature(dev->protocol_features,
> >> + VHOST_USER_PROTOCOL_F_REPLY_ACK);
> >
> >Why not use vhost_user_write_sync directly?
>
> I was about to suggest the same thing, but IIUC it does not support
> passing fds.
>
> However, yes, I agree that we should extend vhost_user_write_sync() in
> another patch and then use it both here and in other places (e.g.
> vhost_user_set_mem_table(), vhost_setup_backend_channel()).
>
> But maybe that could be a follow-up later, since this is a fix to
> backport without touching too much surrounding code. Up to German and
> you, I'm fine with both.
If you don't mind, I prefer to keep this as small as possible, and extend
vhost_user_write_sync() in a follow-up.
>
> Thanks,
> Stefano
>
--
German
On Wed, Oct 22, 2025 at 6:24 PM German Maglione <gmaglione@redhat.com> wrote:
>
> On Wed, Oct 22, 2025 at 4:04 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
> >
> > On Wed, Oct 22, 2025 at 03:49:25PM +0200, Eugenio Perez Martin wrote:
> > >On Wed, Oct 22, 2025 at 3:32 PM <gmaglione@redhat.com> wrote:
> > >>
> > >> From: German Maglione <gmaglione@redhat.com>
> > >>
> > >> QEMU sends all of VHOST_USER_SET_VRING_KICK, _CALL, and _ERR without
> > >> setting the NEED_REPLY flag, i.e. by the time the respective
> > >> vhost_user_set_vring_*() function returns, it is completely up to chance
> > >> whether the back-end has already processed the request and
> > >> switched over to the new FD for interrupts.
> > >>
> > >> At least for vhost_user_set_vring_call(), that is a problem: It is
> > >> called through vhost_virtqueue_mask(), which is generally used in the
> > >> VirtioDeviceClass.guest_notifier_mask() implementation, which is in turn
> > >> called by virtio_pci_one_vector_unmask(). The fact that we do not wait
> > >> for the back-end to install the FD leads to a race there:
> > >>
> > >> Masking interrupts is implemented by redirecting interrupts to an
> > >> internal event FD that is not connected to the guest. Unmasking then
> > >> re-installs the guest-connected IRQ FD, then checks if there are pending
> > >> interrupts left on the masked event FD, and if so, issues an interrupt
> > >> to the guest.
> > >>
> > >> Because guest_notifier_mask() (through vhost_user_set_vring_call())
> > >> doesn't wait for the back-end to switch over to the actual IRQ FD, it's
> > >> possible we check for pending interrupts while the back-end is still
> > >> using the masked event FD, and then we will lose interrupts that occur
> > >> before the back-end finally does switch over.
> > >>
> > >> Fix this by setting NEED_REPLY on those VHOST_USER_SET_VRING_* messages,
> > >> so when we get that reply, we know that the back-end is now using the
> > >> new FD.
> > >>
> > >
> > >Fixes: 5f6f6664bf24 ("Add vhost-user as a vhost backend.") ?
> > >
> > >> Signed-off-by: German Maglione <gmaglione@redhat.com>
> > >> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
> > >> ---
> > >> hw/virtio/vhost-user.c | 18 +++++++++++++++++-
> > >> 1 file changed, 17 insertions(+), 1 deletion(-)
> > >>
> > >> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > >> index 36c9c2e04d..641960293b 100644
> > >> --- a/hw/virtio/vhost-user.c
> > >> +++ b/hw/virtio/vhost-user.c
> > >> @@ -1327,8 +1327,11 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
> > >> VhostUserRequest request,
> > >> struct vhost_vring_file *file)
> > >> {
> > >> + int ret;
> > >> int fds[VHOST_USER_MAX_RAM_SLOTS];
> > >> size_t fd_num = 0;
> > >> + bool reply_supported = virtio_has_feature(dev->protocol_features,
> > >> + VHOST_USER_PROTOCOL_F_REPLY_ACK);
> > >
> > >Why not use vhost_user_write_sync directly?
> >
> > I was about to suggest the same thing, but IIUC it does not support
> > passing fds.
> >
> > However, yes, I agree that we should extend vhost_user_write_sync() in
> > another patch and then use it both here and in other places (e.g.
> > vhost_user_set_mem_table(), vhost_setup_backend_channel()).
> >
> > But maybe that could be a follow-up later, since this is a fix to
> > backport without touching too much surrounding code. Up to German and
> > you, I'm fine with both.
>
>
> If you don't mind, I prefer to keep this as small as possible, and extend
> vhost_user_write_sync() in a follow-up.
>
Absolutely, it makes sense.
Out of curiosity, what about using the vhost_user_get_features trick
to also support backends without VHOST_USER_PROTOCOL_F_REPLY_ACK?
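(For context, the trick: VHOST_USER_GET_FEATURES always elicits a
reply, and the back-end processes messages in order, so waiting for
the GET_FEATURES reply guarantees that every earlier message,
including an unacked SET_VRING_CALL, has been handled. A rough sketch,
assuming the existing vhost_user_get_features() helper; the wrapper
name is made up:)

    /* Editorial sketch of an ordering barrier for back-ends without
     * VHOST_USER_PROTOCOL_F_REPLY_ACK: the GET_FEATURES reply implies
     * that all previously sent messages have been processed. */
    static int vhost_user_sync_with_backend(struct vhost_dev *dev)
    {
        uint64_t features;

        return vhost_user_get_features(dev, &features);
    }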