Git apply log
Switched to a new branch '20180112145658.17121-1-maxime.coquelin@redhat.com'
Applying: vhost-user: fix multiple queue specification
Applying: vhost-user: specify and implement VHOST_USER_SET_QUEUE_NUM request
Applying: vhost-net: add vhost_net_set_queue_num helper
Applying: virtio-net: notify backend with number of queue pairs setup
To https://github.com/patchew-project/qemu
 * [new tag]               patchew/20180112145658.17121-1-maxime.coquelin@redhat.com -> patchew/20180112145658.17121-1-maxime.coquelin@redhat.com
Test passed: docker

Test passed: checkpatch

Test passed: s390x

Test passed: ppc

[Qemu-devel] [PATCH 0/4] vhost-user: notify backend with number of queues setup
Posted by Maxime Coquelin, 6 days ago
This series introduces a new vhost-user protocol request for QEMU
to notify the backend of the number of queues set up by the
guest driver.

When the user backend cannot add queues dynamically (i.e. once the
port is running), it waits for all queues to be initialized.

The problem is that QEMU sends messages for all queues declared on
the command line, even the ones not set up by the guest driver.
Without the fix from Zheng Xiang [0], which skips starting queues
that haven't been set up by the guest, memory around GPA 0 ends up
corrupted because SET_VRING_ADDR is sent with uninitialized ring
addresses.
With the fix, the DPDK backend gets stuck forever, waiting
for unused queues to be set up, which will never happen.

Note that these problems occur neither with the virtio-net Linux
driver, nor with DPDK's Virtio PMD, because these drivers always set
up all queues provided by QEMU, even if they don't use all of them.

However, the problem can be reproduced with a Windows guest when
QEMU declares more queue pairs than vcpus. In this case, the
Windows virtio-net driver only sets up as many queue pairs as vcpus.

[0]: https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg02484.html

Maxime Coquelin (4):
  vhost-user: fix multiple queue specification
  vhost-user: specify and implement VHOST_USER_SET_QUEUE_NUM request
  vhost-net: add vhost_net_set_queue_num helper
  virtio-net: notify backend with number of queue pairs setup

 docs/interop/vhost-user.txt       | 22 +++++++++++++++++++---
 hw/net/vhost_net.c                | 17 +++++++++++++++++
 hw/net/virtio-net.c               |  5 +++++
 hw/virtio/vhost-user.c            | 24 ++++++++++++++++++++++++
 include/hw/virtio/vhost-backend.h |  3 +++
 include/net/vhost_net.h           |  1 +
 6 files changed, 69 insertions(+), 3 deletions(-)

-- 
2.14.3


[Qemu-devel] [PATCH 1/4] vhost-user: fix multiple queue specification
Posted by Maxime Coquelin, 6 days ago
The number of queues supported by the slave is queried with
message VHOST_USER_GET_QUEUE_NUM, not with message
VHOST_USER_GET_PROTOCOL_FEATURES.

Also, looking at the master and slave implementations, the payload
returned by the slave is the number of queue pairs supported
by the slave, not the number of queues.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 docs/interop/vhost-user.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/interop/vhost-user.txt b/docs/interop/vhost-user.txt
index d49444e037..8a14191a1e 100644
--- a/docs/interop/vhost-user.txt
+++ b/docs/interop/vhost-user.txt
@@ -214,8 +214,8 @@ Multiple queue is treated as a protocol extension, hence the slave has to
 implement protocol features first. The multiple queues feature is supported
 only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set.
 
-The max number of queues the slave supports can be queried with message
-VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
+The max number of queue pairs the slave supports can be queried with message
+VHOST_USER_GET_QUEUE_NUM. Master should stop when the number of
 requested queues is bigger than that.
 
 As all queues share one connection, the master uses a unique index for each
@@ -537,7 +537,7 @@ Master message types
       Master payload: N/A
       Slave payload: u64
 
-      Query how many queues the backend supports. This request should be
+      Query how many queue pairs the backend supports. This request should be
       sent only when VHOST_USER_PROTOCOL_F_MQ is set in queried protocol
       features by VHOST_USER_GET_PROTOCOL_FEATURES.
 
-- 
2.14.3


Re: [Qemu-devel] [PATCH 1/4] vhost-user: fix multiple queue specification
Posted by Michael S. Tsirkin, 2 days ago
On Fri, Jan 12, 2018 at 03:56:55PM +0100, Maxime Coquelin wrote:
> The number of queues supported by the slave is queried with
> message VHOST_USER_GET_QUEUE_NUM, not with message
> VHOST_USER_GET_PROTOCOL_FEATURES.
> 
> Also, looking at the master and slave implementations, the payload
> returned by the slave is the number of queue pairs supported
> by the slave, not the number of queues.

virtio doesn't have a concept of queue pairs. virtio net does
have a concept of a tx/rx pair for purposes of steering.

Would this be a slave bug then?

I've applied the 1st chunk for now.

> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  docs/interop/vhost-user.txt | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/docs/interop/vhost-user.txt b/docs/interop/vhost-user.txt
> index d49444e037..8a14191a1e 100644
> --- a/docs/interop/vhost-user.txt
> +++ b/docs/interop/vhost-user.txt
> @@ -214,8 +214,8 @@ Multiple queue is treated as a protocol extension, hence the slave has to
>  implement protocol features first. The multiple queues feature is supported
>  only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set.
>  
> -The max number of queues the slave supports can be queried with message
> -VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
> +The max number of queue pairs the slave supports can be queried with message
> +VHOST_USER_GET_QUEUE_NUM. Master should stop when the number of
>  requested queues is bigger than that.
>  
>  As all queues share one connection, the master uses a unique index for each
> @@ -537,7 +537,7 @@ Master message types
>        Master payload: N/A
>        Slave payload: u64
>  
> -      Query how many queues the backend supports. This request should be
> +      Query how many queue pairs the backend supports. This request should be
>        sent only when VHOST_USER_PROTOCOL_F_MQ is set in queried protocol
>        features by VHOST_USER_GET_PROTOCOL_FEATURES.
>  
> -- 
> 2.14.3

Re: [Qemu-devel] [PATCH 1/4] vhost-user: fix multiple queue specification
Posted by Maxime Coquelin, 2 days ago

On 01/16/2018 04:00 AM, Michael S. Tsirkin wrote:
> On Fri, Jan 12, 2018 at 03:56:55PM +0100, Maxime Coquelin wrote:
>> The number of queues supported by the slave is queried with
>> message VHOST_USER_GET_QUEUE_NUM, not with message
>> VHOST_USER_GET_PROTOCOL_FEATURES.
>>
>> Also, looking at the master and slave implementations, the payload
>> returned by the slave is the number of queue pairs supported
>> by the slave, not the number of queues.
> 
> virtio doesn't have a concept of queue pairs. virtio net does
> have a concept of a tx/rx pair for purposes of steering.

Ok, thanks for the clarification. I will have a look at how vhost-user
SCSI implements it.

> Would this be a slave bug then?

If I'm not mistaken, the bug is in QEMU:

VHOST_USER_GET_QUEUE_NUM is stored in (struct vhost_dev).max_queues.
vhost_net_get_max_queues() returns (struct vhost_dev).max_queues.
And vhost_user_start() from net/vhost-user.c calls
vhost_net_get_max_queues() to get the max number of tx/rx pairs.

If we want to fix QEMU, I think we will need a new flag for
compatibility with older/current backends that assume it represents
the number of queue pairs.

> I've applied the 1st chunk for now.

Thanks.

>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   docs/interop/vhost-user.txt | 6 +++---
>>   1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/docs/interop/vhost-user.txt b/docs/interop/vhost-user.txt
>> index d49444e037..8a14191a1e 100644
>> --- a/docs/interop/vhost-user.txt
>> +++ b/docs/interop/vhost-user.txt
>> @@ -214,8 +214,8 @@ Multiple queue is treated as a protocol extension, hence the slave has to
>>   implement protocol features first. The multiple queues feature is supported
>>   only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set.
>>   
>> -The max number of queues the slave supports can be queried with message
>> -VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
>> +The max number of queue pairs the slave supports can be queried with message
>> +VHOST_USER_GET_QUEUE_NUM. Master should stop when the number of
>>   requested queues is bigger than that.
>>   
>>   As all queues share one connection, the master uses a unique index for each
>> @@ -537,7 +537,7 @@ Master message types
>>         Master payload: N/A
>>         Slave payload: u64
>>   
>> -      Query how many queues the backend supports. This request should be
>> +      Query how many queue pairs the backend supports. This request should be
>>         sent only when VHOST_USER_PROTOCOL_F_MQ is set in queried protocol
>>         features by VHOST_USER_GET_PROTOCOL_FEATURES.
>>   
>> -- 
>> 2.14.3

[Qemu-devel] [PATCH 2/4] vhost-user: specify and implement VHOST_USER_SET_QUEUE_NUM request
Posted by Maxime Coquelin, 6 days ago
When the slave cannot add queues dynamically, it needs to know
how many queues it should wait for to be initialized.

This patch introduces a new vhost-user protocol feature and request
for the master to send the number of queue pairs allocated by the
driver.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 docs/interop/vhost-user.txt       | 16 ++++++++++++++++
 hw/virtio/vhost-user.c            | 24 ++++++++++++++++++++++++
 include/hw/virtio/vhost-backend.h |  3 +++
 3 files changed, 43 insertions(+)

diff --git a/docs/interop/vhost-user.txt b/docs/interop/vhost-user.txt
index 8a14191a1e..85c0e03a95 100644
--- a/docs/interop/vhost-user.txt
+++ b/docs/interop/vhost-user.txt
@@ -218,6 +218,9 @@ The max number of queue pairs the slave supports can be queried with message
 VHOST_USER_GET_QUEUE_NUM. Master should stop when the number of
 requested queues is bigger than that.
 
+When VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM is negotiated, the master must send
+the number of queue pairs initialized with message VHOST_USER_SET_QUEUE_NUM.
+
 As all queues share one connection, the master uses a unique index for each
 queue in the sent message to identify a specified queue. One queue pair
 is enabled initially. More queues are enabled dynamically, by sending
@@ -354,6 +357,7 @@ Protocol features
 #define VHOST_USER_PROTOCOL_F_MTU            4
 #define VHOST_USER_PROTOCOL_F_SLAVE_REQ      5
 #define VHOST_USER_PROTOCOL_F_CROSS_ENDIAN   6
+#define VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM  7
 
 Master message types
 --------------------
@@ -623,6 +627,18 @@ Master message types
       and expect this message once (per VQ) during device configuration
       (ie. before the master starts the VQ).
 
+ * VHOST_USER_SET_QUEUE_NUM
+
+      Id: 24
+      Equivalent ioctl: N/A
+      Master payload: u64
+
+      Set the number of queue pairs initialized.
+      Master sends such request to notify the slave the number of queue pairs
+      that have been initialized.
+      This request should only be sent if VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM
+      feature has been successfully negotiated.
+
 Slave message types
 -------------------
 
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 093675ed98..9e7728d2da 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -34,6 +34,7 @@ enum VhostUserProtocolFeature {
     VHOST_USER_PROTOCOL_F_NET_MTU = 4,
     VHOST_USER_PROTOCOL_F_SLAVE_REQ = 5,
     VHOST_USER_PROTOCOL_F_CROSS_ENDIAN = 6,
+    VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM = 7,
 
     VHOST_USER_PROTOCOL_F_MAX
 };
@@ -65,6 +66,7 @@ typedef enum VhostUserRequest {
     VHOST_USER_SET_SLAVE_REQ_FD = 21,
     VHOST_USER_IOTLB_MSG = 22,
     VHOST_USER_SET_VRING_ENDIAN = 23,
+    VHOST_USER_SET_QUEUE_NUM = 24,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -922,6 +924,27 @@ static void vhost_user_set_iotlb_callback(struct vhost_dev *dev, int enabled)
     /* No-op as the receive channel is not dedicated to IOTLB messages. */
 }
 
+static int vhost_user_set_queue_num(struct vhost_dev *dev, uint64_t queues)
+{
+    VhostUserMsg msg = {
+        .request = VHOST_USER_SET_QUEUE_NUM,
+        .size = sizeof(msg.payload.u64),
+        .flags = VHOST_USER_VERSION,
+        .payload.u64 = queues,
+    };
+
+    if (!(dev->protocol_features &
+                (1ULL << VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM))) {
+        return 0;
+    }
+
+    if (vhost_user_write(dev, &msg, NULL, 0) < 0) {
+        return -1;
+    }
+
+    return 0;
+}
+
 const VhostOps user_ops = {
         .backend_type = VHOST_BACKEND_TYPE_USER,
         .vhost_backend_init = vhost_user_init,
@@ -948,4 +971,5 @@ const VhostOps user_ops = {
         .vhost_net_set_mtu = vhost_user_net_set_mtu,
         .vhost_set_iotlb_callback = vhost_user_set_iotlb_callback,
         .vhost_send_device_iotlb_msg = vhost_user_send_device_iotlb_msg,
+        .vhost_set_queue_num = vhost_user_set_queue_num,
 };
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index a7a5f22bc6..1dd3e4bbf3 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -84,6 +84,8 @@ typedef void (*vhost_set_iotlb_callback_op)(struct vhost_dev *dev,
                                            int enabled);
 typedef int (*vhost_send_device_iotlb_msg_op)(struct vhost_dev *dev,
                                               struct vhost_iotlb_msg *imsg);
+typedef int (*vhost_set_queue_num_op)(struct vhost_dev *dev,
+                                      uint64_t queues);
 
 typedef struct VhostOps {
     VhostBackendType backend_type;
@@ -118,6 +120,7 @@ typedef struct VhostOps {
     vhost_vsock_set_running_op vhost_vsock_set_running;
     vhost_set_iotlb_callback_op vhost_set_iotlb_callback;
     vhost_send_device_iotlb_msg_op vhost_send_device_iotlb_msg;
+    vhost_set_queue_num_op vhost_set_queue_num;
 } VhostOps;
 
 extern const VhostOps user_ops;
-- 
2.14.3
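The patch above defines what the master sends, but not how a slave would consume it. Assuming a backend (such as DPDK's vhost library) tracks how many vrings have been fully initialized, the readiness check enabled by VHOST_USER_SET_QUEUE_NUM could be sketched roughly as follows. All names here (`slave_dev`, `handle_set_queue_num`, `device_ready`) are illustrative, not taken from any real backend:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical slave-side bookkeeping for the proposed
 * VHOST_USER_SET_QUEUE_NUM request.
 */
struct slave_dev {
    uint64_t expected_qpairs; /* announced by the master, 0 = unknown */
    uint64_t ready_vrings;    /* vrings that received ADDR + KICK */
};

/* Handler for the new request: just record the announced count. */
static void handle_set_queue_num(struct slave_dev *dev, uint64_t qpairs)
{
    dev->expected_qpairs = qpairs;
}

/*
 * Called after each vring is fully initialized.  The device starts
 * once every vring of the announced queue pairs is ready, instead of
 * waiting for all queues declared on the QEMU command line.
 */
static bool device_ready(const struct slave_dev *dev)
{
    if (dev->expected_qpairs == 0) {
        return false; /* count not announced yet */
    }
    return dev->ready_vrings >= dev->expected_qpairs * 2; /* rx + tx */
}
```

With this, a guest that configures fewer queue pairs than QEMU declares no longer leaves the backend waiting forever.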


Re: [Qemu-devel] [PATCH 2/4] vhost-user: specify and implement VHOST_USER_SET_QUEUE_NUM request
Posted by Michael S. Tsirkin, 2 days ago
On Fri, Jan 12, 2018 at 03:56:56PM +0100, Maxime Coquelin wrote:
> When the slave cannot add queues dynamically,


Could you please clarify the motivation a bit?
Why is it such a big deal to resize the queue array?
We know no queues are used before all of them are initialized,
so you don't need to worry about synchronizing with threads
processing the queues at the same time.

> it needs to know how many queues it should wait for to be
> initialized.
> 
> This patch introduces a new vhost-user protocol feature and request
> for the master to send the number of queue pairs allocated by the
> driver.

Assuming we can fix the previous message, I think specifying
the # of queues would be better.


> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  docs/interop/vhost-user.txt       | 16 ++++++++++++++++
>  hw/virtio/vhost-user.c            | 24 ++++++++++++++++++++++++
>  include/hw/virtio/vhost-backend.h |  3 +++
>  3 files changed, 43 insertions(+)
> 
> diff --git a/docs/interop/vhost-user.txt b/docs/interop/vhost-user.txt
> index 8a14191a1e..85c0e03a95 100644
> --- a/docs/interop/vhost-user.txt
> +++ b/docs/interop/vhost-user.txt
> @@ -218,6 +218,9 @@ The max number of queue pairs the slave supports can be queried with message
>  VHOST_USER_GET_QUEUE_NUM. Master should stop when the number of
>  requested queues is bigger than that.
>  
> +When VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM is negotiated, the master must send
> +the number of queue pairs initialized with message VHOST_USER_SET_QUEUE_NUM.
> +
>  As all queues share one connection, the master uses a unique index for each
>  queue in the sent message to identify a specified queue. One queue pair
>  is enabled initially. More queues are enabled dynamically, by sending
> @@ -354,6 +357,7 @@ Protocol features
>  #define VHOST_USER_PROTOCOL_F_MTU            4
>  #define VHOST_USER_PROTOCOL_F_SLAVE_REQ      5
>  #define VHOST_USER_PROTOCOL_F_CROSS_ENDIAN   6
> +#define VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM  7
>  
>  Master message types
>  --------------------
> @@ -623,6 +627,18 @@ Master message types
>        and expect this message once (per VQ) during device configuration
>        (ie. before the master starts the VQ).
>  
> + * VHOST_USER_SET_QUEUE_NUM
> +
> +      Id: 24
> +      Equivalent ioctl: N/A
> +      Master payload: u64
> +
> +      Set the number of queue pairs initialized.
> +      Master sends such request to notify the slave the number of queue pairs
> +      that have been initialized.
> +      This request should only be sent if VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM
> +      feature has been successfully negotiated.
> +
>  Slave message types
>  -------------------
>  
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index 093675ed98..9e7728d2da 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -34,6 +34,7 @@ enum VhostUserProtocolFeature {
>      VHOST_USER_PROTOCOL_F_NET_MTU = 4,
>      VHOST_USER_PROTOCOL_F_SLAVE_REQ = 5,
>      VHOST_USER_PROTOCOL_F_CROSS_ENDIAN = 6,
> +    VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM = 7,
>  
>      VHOST_USER_PROTOCOL_F_MAX
>  };
> @@ -65,6 +66,7 @@ typedef enum VhostUserRequest {
>      VHOST_USER_SET_SLAVE_REQ_FD = 21,
>      VHOST_USER_IOTLB_MSG = 22,
>      VHOST_USER_SET_VRING_ENDIAN = 23,
> +    VHOST_USER_SET_QUEUE_NUM = 24,
>      VHOST_USER_MAX
>  } VhostUserRequest;
>  
> @@ -922,6 +924,27 @@ static void vhost_user_set_iotlb_callback(struct vhost_dev *dev, int enabled)
>      /* No-op as the receive channel is not dedicated to IOTLB messages. */
>  }
>  
> +static int vhost_user_set_queue_num(struct vhost_dev *dev, uint64_t queues)
> +{
> +    VhostUserMsg msg = {
> +        .request = VHOST_USER_SET_QUEUE_NUM,
> +        .size = sizeof(msg.payload.u64),
> +        .flags = VHOST_USER_VERSION,
> +        .payload.u64 = queues,
> +    };
> +
> +    if (!(dev->protocol_features &
> +                (1ULL << VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM))) {
> +        return 0;
> +    }
> +
> +    if (vhost_user_write(dev, &msg, NULL, 0) < 0) {
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
>  const VhostOps user_ops = {
>          .backend_type = VHOST_BACKEND_TYPE_USER,
>          .vhost_backend_init = vhost_user_init,
> @@ -948,4 +971,5 @@ const VhostOps user_ops = {
>          .vhost_net_set_mtu = vhost_user_net_set_mtu,
>          .vhost_set_iotlb_callback = vhost_user_set_iotlb_callback,
>          .vhost_send_device_iotlb_msg = vhost_user_send_device_iotlb_msg,
> +        .vhost_set_queue_num = vhost_user_set_queue_num,
>  };
> diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> index a7a5f22bc6..1dd3e4bbf3 100644
> --- a/include/hw/virtio/vhost-backend.h
> +++ b/include/hw/virtio/vhost-backend.h
> @@ -84,6 +84,8 @@ typedef void (*vhost_set_iotlb_callback_op)(struct vhost_dev *dev,
>                                             int enabled);
>  typedef int (*vhost_send_device_iotlb_msg_op)(struct vhost_dev *dev,
>                                                struct vhost_iotlb_msg *imsg);
> +typedef int (*vhost_set_queue_num_op)(struct vhost_dev *dev,
> +                                      uint64_t queues);
>  
>  typedef struct VhostOps {
>      VhostBackendType backend_type;
> @@ -118,6 +120,7 @@ typedef struct VhostOps {
>      vhost_vsock_set_running_op vhost_vsock_set_running;
>      vhost_set_iotlb_callback_op vhost_set_iotlb_callback;
>      vhost_send_device_iotlb_msg_op vhost_send_device_iotlb_msg;
> +    vhost_set_queue_num_op vhost_set_queue_num;
>  } VhostOps;
>  
>  extern const VhostOps user_ops;
> -- 
> 2.14.3

Re: [Qemu-devel] [PATCH 2/4] vhost-user: specify and implement VHOST_USER_SET_QUEUE_NUM request
Posted by Maxime Coquelin, 2 days ago

On 01/16/2018 04:05 AM, Michael S. Tsirkin wrote:
> On Fri, Jan 12, 2018 at 03:56:56PM +0100, Maxime Coquelin wrote:
>> When the slave cannot add queues dynamically,
> 
> 
> Could you please clarify the motivation a bit?
> Why is it such a big deal to resize the queue array?
> We know no queues are used before all of them are initialized,
> so you don't need to worry about synchronizing with threads
> processing the queues at the same time.

The problem is knowing how many queues to wait for to be initialized.
We need this information in DPDK to start the "vhost device".

Before the guest driver is initialized, the slave receives from QEMU
VHOST_USER_SET_VRING_CALL and VHOST_USER_SET_VRING_ENABLE for all queues
declared in QEMU. In DPDK, queue metadata is allocated each time a
message targets a new vring index, and the vring count is incremented
accordingly.
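That allocation pattern can be modeled in a few lines. This is a simplified sketch, not DPDK's actual code; the struct layout, `MAX_VRINGS` cap, and function name are illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_VRINGS 64 /* illustrative cap, not DPDK's actual limit */

struct vring_state {
    int inited;
};

struct virtio_dev {
    struct vring_state vrings[MAX_VRINGS];
    uint32_t nr_vring; /* grows as new vring indexes are seen */
};

/*
 * Any message carrying a vring index extends the known vring count:
 * the slave learns about queues one message at a time and never knows
 * up front how many the guest driver will actually configure.
 */
static void touch_vring(struct virtio_dev *dev, uint32_t idx)
{
    if (idx >= MAX_VRINGS) {
        return; /* real code would report an error */
    }
    if (idx >= dev->nr_vring) {
        dev->nr_vring = idx + 1;
    }
    dev->vrings[idx].inited = 1;
}
```

This is why the backend cannot tell "queue not configured yet" apart from "queue will never be configured" without an explicit count from the master.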

Currently, it seems to work because QEMU calls vhost_virtqueue_start()
for all queues, even the ones not configured by the guest.

But with the patch "[PATCH] vhost: fix corrupting GPA 0 when using
uninitialized queues" from Zheng Xiang, QEMU will no longer send
_SET_VRING_ADDR and _SET_VRING_KICK for unconfigured queues, and
the backend will wait forever for these messages to start the device.

I pasted below a log of messages received by the slave, so you can get
an idea of the sequence.

>> it needs to know how many queues it should wait for to be
>> initialized.
>>
>> This patch introduces a new vhost-user protocol feature and request
>> for the master to send the number of queue pairs allocated by the
>> driver.
> 
> Assuming we can fix the previous message, I think specifying
> the # of queues would be better.

Thanks,
Maxime


EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:156f net_e1000_em
PMD: Initializing pmd_vhost for net_vhost0
PMD: Creating VHOST-USER backend on numa socket 0
VHOST_CONFIG: vhost-user server: socket created, fd: 13
VHOST_CONFIG: bind to /tmp/vhost-user2
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, 
size=2176, socket=0

Warning! Cannot handle an odd number of ports with the current port 
topology. Configuration must be changed to have an even number of ports, 
or relaunch application with --port-topology=chained

Configuring Port 0 (socket 0)
Port 0: 56:48:4F:53:54:00
Checking link statuses...
Done
testpmd> VHOST_CONFIG: new vhost user connection is 15
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:17
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:18
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:20
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:21
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
PMD: vring0 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
PMD: vring1 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
PMD: vring2 is disabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
PMD: vring3 is disabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
PMD: vring0 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
PMD: vring1 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
PMD: vring2 is disabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
PMD: vring3 is disabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
	 guest physical addr: 0x100000000
	 guest virtual  addr: 0x7f9cae200000
	 host  virtual  addr: 0x7f633da00000
	 mmap addr : 0x7f62bda00000
	 mmap size : 0x100000000
	 mmap align: 0x200000
	 mmap off  : 0x80000000
VHOST_CONFIG: guest memory region 1, size: 0xa0000
	 guest physical addr: 0x0
	 guest virtual  addr: 0x7f9c2e200000
	 host  virtual  addr: 0x7f64bda00000
	 mmap addr : 0x7f64bda00000
	 mmap size : 0x200000
	 mmap align: 0x200000
	 mmap off  : 0x0
VHOST_CONFIG: guest memory region 2, size: 0x7ff40000
	 guest physical addr: 0xc0000
	 guest virtual  addr: 0x7f9c2e2c0000
	 host  virtual  addr: 0x7f64380c0000
	 mmap addr : 0x7f6438000000
	 mmap size : 0x80000000
	 mmap align: 0x200000
	 mmap off  : 0xc0000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:26
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:17
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:27
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
PMD: vring0 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
PMD: vring1 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:18
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:28
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:20
VHOST_CONFIG: virtio is now ready for processing.
PMD: New connection established

Port 0: LSC event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:29
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
PMD: vring0 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
PMD: vring1 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2
PMD: vring2 is enabled

Port 0: Queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3
PMD: vring3 is enabled

Port 0: Queue state event

[Qemu-devel] [PATCH 3/4] vhost-net: add vhost_net_set_queue_num helper
Posted by Maxime Coquelin, 6 days ago
This patch adds a new helper to notify the backend of
the number of queue pairs set up by the guest driver.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 hw/net/vhost_net.c      | 17 +++++++++++++++++
 include/net/vhost_net.h |  1 +
 2 files changed, 18 insertions(+)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index e037db63a3..d60e237a34 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -124,6 +124,18 @@ uint64_t vhost_net_get_max_queues(VHostNetState *net)
     return net->dev.max_queues;
 }
 
+int vhost_net_set_queue_num(NetClientState *nc, uint64_t queues)
+{
+    VHostNetState *net = get_vhost_net(nc);
+    const VhostOps *vhost_ops = net->dev.vhost_ops;
+
+    if (vhost_ops->vhost_set_queue_num) {
+        return vhost_ops->vhost_set_queue_num(&net->dev, queues);
+    }
+
+    return 0;
+}
+
 uint64_t vhost_net_get_acked_features(VHostNetState *net)
 {
     return net->dev.acked_features;
@@ -456,6 +468,11 @@ uint64_t vhost_net_get_max_queues(VHostNetState *net)
     return 1;
 }
 
+int vhost_net_set_queue_num(NetClientState *nc, uint64_t queues)
+{
+    return 0;
+}
+
 struct vhost_net *vhost_net_init(VhostNetOptions *options)
 {
     error_report("vhost-net support is not compiled in");
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index afc1499eb9..39e639d014 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -15,6 +15,7 @@ typedef struct VhostNetOptions {
 } VhostNetOptions;
 
 uint64_t vhost_net_get_max_queues(VHostNetState *net);
+int vhost_net_set_queue_num(NetClientState *nc, uint64_t queues);
 struct vhost_net *vhost_net_init(VhostNetOptions *options);
 
 int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
-- 
2.14.3


[Qemu-devel] [PATCH 4/4] virtio-net: notify backend with number of queue pairs setup
Posted by Maxime Coquelin, 6 days ago
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 hw/net/virtio-net.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 38674b08aa..b8908c98ed 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -561,6 +561,7 @@ static int peer_detach(VirtIONet *n, int index)
 
 static void virtio_net_set_queues(VirtIONet *n)
 {
+    NetClientState *nc = qemu_get_queue(n->nic);
     int i;
     int r;
 
@@ -568,6 +569,10 @@ static void virtio_net_set_queues(VirtIONet *n)
         return;
     }
 
+    if (get_vhost_net(nc->peer)) {
+        vhost_net_set_queue_num(nc->peer, n->curr_queues);
+    }
+
     for (i = 0; i < n->max_queues; i++) {
         if (i < n->curr_queues) {
             r = peer_attach(n, i);
-- 
2.14.3


Re: [Qemu-devel] [PATCH 4/4] virtio-net: notify backend with number of queue pairs setup
Posted by Michael S. Tsirkin, 2 days ago
On Fri, Jan 12, 2018 at 03:56:58PM +0100, Maxime Coquelin wrote:
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  hw/net/virtio-net.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 38674b08aa..b8908c98ed 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -561,6 +561,7 @@ static int peer_detach(VirtIONet *n, int index)
>  
>  static void virtio_net_set_queues(VirtIONet *n)
>  {
> +    NetClientState *nc = qemu_get_queue(n->nic);
>      int i;
>      int r;
>  
> @@ -568,6 +569,10 @@ static void virtio_net_set_queues(VirtIONet *n)
>          return;
>      }
>  
> +    if (get_vhost_net(nc->peer)) {
> +        vhost_net_set_queue_num(nc->peer, n->curr_queues);
> +    }
> +
>      for (i = 0; i < n->max_queues; i++) {
>          if (i < n->curr_queues) {
>              r = peer_attach(n, i);

Seems wrong to me.
curr_queues isn't the max # of queues configured as the documentation says.
It's the number of queues currently in use by the driver.


> -- 
> 2.14.3

Re: [Qemu-devel] [PATCH 4/4] virtio-net: notify backend with number of queue pairs setup
Posted by Maxime Coquelin, 2 days ago

On 01/16/2018 04:07 AM, Michael S. Tsirkin wrote:
> On Fri, Jan 12, 2018 at 03:56:58PM +0100, Maxime Coquelin wrote:
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   hw/net/virtio-net.c | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index 38674b08aa..b8908c98ed 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -561,6 +561,7 @@ static int peer_detach(VirtIONet *n, int index)
>>   
>>   static void virtio_net_set_queues(VirtIONet *n)
>>   {
>> +    NetClientState *nc = qemu_get_queue(n->nic);
>>       int i;
>>       int r;
>>   
>> @@ -568,6 +569,10 @@ static void virtio_net_set_queues(VirtIONet *n)
>>           return;
>>       }
>>   
>> +    if (get_vhost_net(nc->peer)) {
>> +        vhost_net_set_queue_num(nc->peer, n->curr_queues);
>> +    }
>> +
>>       for (i = 0; i < n->max_queues; i++) {
>>           if (i < n->curr_queues) {
>>               r = peer_attach(n, i);
> 
> Seems wrong to me.
> curr_queues isn't the max # of queues configured as the documentation says.
> It's the number of queues currently in use by the driver.

Ok. What about detecting the number of queues configured by checking,
for example, that desc_phys, avail_phys and used_phys are different?

Thanks,
Maxime
> 
>> -- 
>> 2.14.3
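The address check floated in that last message could look roughly like the sketch below. The structure and the reset-value assumption are illustrative (the names `desc_phys`, `avail_phys` and `used_phys` follow the discussion, not any actual QEMU structure), and whether unconfigured rings really keep equal or zero addresses would need verifying against the vhost code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of a vring's guest-physical addresses. */
struct vring_addrs {
    uint64_t desc_phys;
    uint64_t avail_phys;
    uint64_t used_phys;
};

/*
 * Heuristic from the proposal: a ring the driver never configured
 * typically still has its addresses at their reset value, so three
 * distinct, non-zero addresses suggest the queue was set up.
 */
static int vring_configured(const struct vring_addrs *a)
{
    return a->desc_phys != 0 &&
           a->desc_phys != a->avail_phys &&
           a->avail_phys != a->used_phys;
}
```

Counting how many queue pairs satisfy this predicate would give the number set up by the driver without a new protocol message, at the cost of relying on an implicit reset-state convention.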