> -----Original Message-----
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Wednesday, July 27, 2022 10:25 AM
> To: Eli Cohen <elic@nvidia.com>
> Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> virtualization@lists.linux-foundation.org
> Subject: Re: VIRTIO_NET_F_MTU not negotiated
>
> On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > I found out that the reason I could not enforce the MTU stems from the fact
> > that I did not configure a max MTU for the net device (e.g. through libvirt
> > <mtu size="9000"/>).
> > Libvirt does not allow this configuration for vdpa devices, probably for a
> > reason. The vdpa backend driver has the freedom to do it using its copy of
> > virtio_net_config.
> >
> > The code in qemu that allows the device's MTU restriction to be taken into account is here:
> >
> > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > {
> >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> >     VirtIONet *n = VIRTIO_NET(dev);
> >     NetClientState *nc;
> >     int i;
> >
> >     if (n->net_conf.mtu) {
> >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> >     }
> >
> > The above code can be interpreted as follows: if the command line arguments
> > of qemu indicate that the MTU should be limited, then we read this MTU
> > limitation from the device (the actual value is ignored).
> >
> > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU
> > in the host features. As said, it only indicates that we should read the
> > actual limitation from the device.
> >
> > If this makes sense I can send a patch to fix this.
>
> Well it will then either have to be for vdpa only, or have
> compat machinery to avoid breaking migration.
>
How about this one:
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 1067e72b3975..e464e4645c79 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
 static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
 {
     virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
+    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
 
     n->config_size = virtio_feature_get_config_size(feature_sizes,
                                                     host_features);
@@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
 
     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
         struct virtio_net_config netcfg = {};
+        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
         memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
         vhost_net_set_config(get_vhost_net(nc->peer),
                              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> [...]
> How about this one:
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 1067e72b3975..e464e4645c79 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
>  static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
>  {
>      virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> +    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
>
>      n->config_size = virtio_feature_get_config_size(feature_sizes,
>                                                      host_features);

Seems to increase config size unconditionally?

> @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>
>      if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
>          struct virtio_net_config netcfg = {};
> +        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
>          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>          vhost_net_set_config(get_vhost_net(nc->peer),
>                               (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);

And the point is vdpa does not support migration anyway ATM, right?

--
MST
> -----Original Message-----
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Wednesday, July 27, 2022 12:35 PM
> To: Eli Cohen <elic@nvidia.com>
> Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> virtualization@lists.linux-foundation.org
> Subject: Re: VIRTIO_NET_F_MTU not negotiated
>
> On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> [...]
> > +    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
> >
> >      n->config_size = virtio_feature_get_config_size(feature_sizes,
> >                                                      host_features);
>
> Seems to increase config size unconditionally?

Right, but you pay for reading two more bytes. Is that such a high price to pay?

> [...]
> And the point is vdpa does not support migration anyway ATM, right?

I don't see how this can affect vdpa live migration. Am I missing something?
On Wed, Jul 27, 2022 at 10:16:19AM +0000, Eli Cohen wrote:
> [...]
> > Seems to increase config size unconditionally?
>
> Right, but you pay for reading two more bytes. Is that such a high price to pay?

That's not a performance question. The issue is compatibility: size should not
change for a given machine type.

> > And the point is vdpa does not support migration anyway ATM, right?
>
> I don't see how this can affect vdpa live migration. Am I missing something?

Config size affects things like pci BAR size. This must not change during
migration.

--
MST
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Wednesday, July 27, 2022 6:44 PM
> To: Eli Cohen <elic@nvidia.com>
> Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> virtualization@lists.linux-foundation.org
> Subject: Re: VIRTIO_NET_F_MTU not negotiated
>
> On Wed, Jul 27, 2022 at 10:16:19AM +0000, Eli Cohen wrote:
> [...]
> That's not a performance question. The issue is compatibility: size should not
> change for a given machine type.

Did you mean it should not change for virtio_net pci devices?
Can't management controlling the live migration process take care of this?

> [...]
> Config size affects things like pci BAR size. This must not change during
> migration.

Why should this change during live migration?
On Thu, Jul 28, 2022 at 05:51:32AM +0000, Eli Cohen wrote:
> [...]
> > That's not a performance question. The issue is compatibility: size should
> > not change for a given machine type.
>
> Did you mean it should not change for virtio_net pci devices?

Yes.

> Can't management controlling the live migration process take care of this?

Management does what it always did, which is set flags consistently.
If we tweak them with virtio_add_feature it can do nothing.

> > Config size affects things like pci BAR size. This must not change during
> > migration.
>
> Why should this change during live migration?

Simply put, features need to match on both ends.

--
MST