From nobody Tue Oct 7 18:23:15 2025
From: Jason Wang
To: mst@redhat.com, jasowang@redhat.com, eperezma@redhat.com
Cc: kvm@vger.kernel.org, virtualization@lists.linux.dev, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, jonah.palmer@oracle.com
Subject: [PATCH net-next 1/2] vhost: basic in order support
Date: Tue, 8 Jul 2025 14:48:18 +0800
Message-ID: <20250708064819.35282-2-jasowang@redhat.com>
In-Reply-To: <20250708064819.35282-1-jasowang@redhat.com>
References: <20250708064819.35282-1-jasowang@redhat.com>
This patch adds basic in order support for vhost. Two optimizations
are implemented in this patch:

1) Since the driver uses descriptors in order, vhost can deduce the
   next avail ring head by counting the number of descriptors that
   have been used, in next_avail_head. This eliminates the need to
   access the available ring in vhost.

2) vhost_add_used_and_signal_n() is extended to accept the number of
   batched buffers per used elem. While this increases the number of
   userspace memory accesses, it helps to reduce the chance of used
   ring access by both the driver and vhost.

Vhost-net will be the first user for this.

Signed-off-by: Jason Wang
Acked-by: Jonah Palmer
---
 drivers/vhost/net.c   |   6 ++-
 drivers/vhost/vhost.c | 121 +++++++++++++++++++++++++++++++++++-------
 drivers/vhost/vhost.h |   8 ++-
 3 files changed, 111 insertions(+), 24 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 7cbfc7d718b3..4f9c67f17b49 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -374,7 +374,8 @@ static void vhost_zerocopy_signal_used(struct vhost_net *net,
 	while (j) {
 		add = min(UIO_MAXIOV - nvq->done_idx, j);
 		vhost_add_used_and_signal_n(vq->dev, vq,
-					    &vq->heads[nvq->done_idx], add);
+					    &vq->heads[nvq->done_idx],
+					    NULL, add);
 		nvq->done_idx = (nvq->done_idx + add) % UIO_MAXIOV;
 		j -= add;
 	}
@@ -457,7 +458,8 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
 	if (!nvq->done_idx)
 		return;
 
-	vhost_add_used_and_signal_n(dev, vq, vq->heads, nvq->done_idx);
+	vhost_add_used_and_signal_n(dev, vq, vq->heads, NULL,
+				    nvq->done_idx);
 	nvq->done_idx = 0;
 }
 
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 3a5ebb973dba..c7ed069fc49e 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -364,6 +364,7 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 	vq->avail = NULL;
 	vq->used = NULL;
 	vq->last_avail_idx = 0;
+	vq->next_avail_head = 0;
 	vq->avail_idx = 0;
 	vq->last_used_idx = 0;
 	vq->signalled_used = 0;
@@ -455,6 +456,8 @@ static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
 	vq->log = NULL;
 	kfree(vq->heads);
 	vq->heads = NULL;
+	kfree(vq->nheads);
+	vq->nheads = NULL;
 }
 
 /* Helper to allocate iovec buffers for all vqs. */
@@ -472,7 +475,9 @@ static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
 				  GFP_KERNEL);
 		vq->heads = kmalloc_array(dev->iov_limit, sizeof(*vq->heads),
 					  GFP_KERNEL);
-		if (!vq->indirect || !vq->log || !vq->heads)
+		vq->nheads = kmalloc_array(dev->iov_limit, sizeof(*vq->nheads),
+					   GFP_KERNEL);
+		if (!vq->indirect || !vq->log || !vq->heads || !vq->nheads)
 			goto err_nomem;
 	}
 	return 0;
@@ -1990,14 +1995,15 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
 			break;
 		}
 		if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
-			vq->last_avail_idx = s.num & 0xffff;
+			vq->next_avail_head = vq->last_avail_idx =
+					      s.num & 0xffff;
 			vq->last_used_idx = (s.num >> 16) & 0xffff;
 		} else {
 			if (s.num > 0xffff) {
 				r = -EINVAL;
 				break;
 			}
-			vq->last_avail_idx = s.num;
+			vq->next_avail_head = vq->last_avail_idx = s.num;
 		}
 		/* Forget the cached index value. */
 		vq->avail_idx = vq->last_avail_idx;
@@ -2590,11 +2596,12 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
 		      unsigned int *out_num, unsigned int *in_num,
 		      struct vhost_log *log, unsigned int *log_num)
 {
+	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
 	struct vring_desc desc;
 	unsigned int i, head, found = 0;
 	u16 last_avail_idx = vq->last_avail_idx;
 	__virtio16 ring_head;
-	int ret, access;
+	int ret, access, c = 0;
 
 	if (vq->avail_idx == vq->last_avail_idx) {
 		ret = vhost_get_avail_idx(vq);
@@ -2605,17 +2612,21 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
 			return vq->num;
 	}
 
-	/* Grab the next descriptor number they're advertising, and increment
-	 * the index we've seen. */
-	if (unlikely(vhost_get_avail_head(vq, &ring_head, last_avail_idx))) {
-		vq_err(vq, "Failed to read head: idx %d address %p\n",
-		       last_avail_idx,
-		       &vq->avail->ring[last_avail_idx % vq->num]);
-		return -EFAULT;
+	if (in_order)
+		head = vq->next_avail_head & (vq->num - 1);
+	else {
+		/* Grab the next descriptor number they're
+		 * advertising, and increment the index we've seen. */
+		if (unlikely(vhost_get_avail_head(vq, &ring_head,
+						  last_avail_idx))) {
+			vq_err(vq, "Failed to read head: idx %d address %p\n",
+			       last_avail_idx,
+			       &vq->avail->ring[last_avail_idx % vq->num]);
+			return -EFAULT;
+		}
+		head = vhost16_to_cpu(vq, ring_head);
 	}
 
-	head = vhost16_to_cpu(vq, ring_head);
-
 	/* If their number is silly, that's an error. */
 	if (unlikely(head >= vq->num)) {
 		vq_err(vq, "Guest says index %u > %u is available",
@@ -2658,6 +2669,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
 				       "in indirect descriptor at idx %d\n", i);
 				return ret;
 			}
+			++c;
 			continue;
 		}
 
@@ -2693,10 +2705,12 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
 			}
 			*out_num += ret;
 		}
+		++c;
 	} while ((i = next_desc(vq, &desc)) != -1);
 
 	/* On success, increment avail index. */
 	vq->last_avail_idx++;
+	vq->next_avail_head += c;
 
 	/* Assume notifications from guest are disabled at this point,
 	 * if they aren't we would need to update avail_event index. */
@@ -2720,8 +2734,9 @@ int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
 		cpu_to_vhost32(vq, head),
 		cpu_to_vhost32(vq, len)
 	};
+	u16 nheads = 1;
 
-	return vhost_add_used_n(vq, &heads, 1);
+	return vhost_add_used_n(vq, &heads, &nheads, 1);
 }
 EXPORT_SYMBOL_GPL(vhost_add_used);
 
@@ -2757,10 +2772,10 @@ static int __vhost_add_used_n(struct vhost_virtqueue *vq,
 	return 0;
 }
 
-/* After we've used one of their buffers, we tell them about it.  We'll then
- * want to notify the guest, using eventfd. */
-int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
-		     unsigned count)
+static int vhost_add_used_n_ooo(struct vhost_virtqueue *vq,
+				struct vring_used_elem *heads,
+				u16 *nheads,
+				unsigned count)
 {
 	int start, n, r;
 
@@ -2775,6 +2790,70 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
 	}
 	r = __vhost_add_used_n(vq, heads, count);
 
+	return r;
+}
+
+static int vhost_add_used_n_in_order(struct vhost_virtqueue *vq,
+				     struct vring_used_elem *heads,
+				     u16 *nheads,
+				     unsigned count)
+{
+	vring_used_elem_t __user *used;
+	u16 old, new = vq->last_used_idx;
+	int start, i;
+
+	if (!nheads)
+		return -EINVAL;
+
+	start = vq->last_used_idx & (vq->num - 1);
+	used = vq->used->ring + start;
+
+	for (i = 0; i < count; i++) {
+		if (vhost_put_used(vq, &heads[i], start, 1)) {
+			vq_err(vq, "Failed to write used");
+			return -EFAULT;
+		}
+		start += nheads[i];
+		new += nheads[i];
+		if (start >= vq->num)
+			start -= vq->num;
+	}
+
+	if (unlikely(vq->log_used)) {
+		/* Make sure data is seen before log. */
+		smp_wmb();
+		/* Log used ring entry write. */
+		log_used(vq, ((void __user *)used - (void __user *)vq->used),
+			 (vq->num - start) * sizeof *used);
+		if (start + count > vq->num)
+			log_used(vq, 0,
+				 (start + count - vq->num) * sizeof *used);
+	}
+
+	old = vq->last_used_idx;
+	vq->last_used_idx = new;
+	/* If the driver never bothers to signal in a very long while,
+	 * used index might wrap around. If that happens, invalidate
+	 * signalled_used index we stored. TODO: make sure driver
+	 * signals at least once in 2^16 and remove this. */
+	if (unlikely((u16)(new - vq->signalled_used) < (u16)(new - old)))
+		vq->signalled_used_valid = false;
+	return 0;
+}
+
+/* After we've used one of their buffers, we tell them about it.  We'll then
+ * want to notify the guest, using eventfd. */
+int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
+		     u16 *nheads, unsigned count)
+{
+	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
+	int r;
+
+	if (!in_order || !nheads)
+		r = vhost_add_used_n_ooo(vq, heads, nheads, count);
+	else
+		r = vhost_add_used_n_in_order(vq, heads, nheads, count);
+
 	/* Make sure buffer is written before we update index. */
 	smp_wmb();
 	if (vhost_put_used_idx(vq)) {
@@ -2853,9 +2932,11 @@ EXPORT_SYMBOL_GPL(vhost_add_used_and_signal);
 /* multi-buffer version of vhost_add_used_and_signal */
 void vhost_add_used_and_signal_n(struct vhost_dev *dev,
 				 struct vhost_virtqueue *vq,
-				 struct vring_used_elem *heads, unsigned count)
+				 struct vring_used_elem *heads,
+				 u16 *nheads,
+				 unsigned count)
 {
-	vhost_add_used_n(vq, heads, count);
+	vhost_add_used_n(vq, heads, nheads, count);
 	vhost_signal(dev, vq);
 }
 EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index bb75a292d50c..dca9f309d396 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -103,6 +103,8 @@ struct vhost_virtqueue {
 	 * Values are limited to 0x7fff, and the high bit is used as
 	 * a wrap counter when using VIRTIO_F_RING_PACKED. */
 	u16 last_avail_idx;
+	/* Next avail ring head when VIRTIO_F_IN_ORDER is negotiated */
+	u16 next_avail_head;
 
 	/* Caches available index value from user. */
 	u16 avail_idx;
@@ -129,6 +131,7 @@ struct vhost_virtqueue {
 	struct iovec iotlb_iov[64];
 	struct iovec *indirect;
 	struct vring_used_elem *heads;
+	u16 *nheads;
 	/* Protected by virtqueue mutex. */
 	struct vhost_iotlb *umem;
 	struct vhost_iotlb *iotlb;
@@ -213,11 +216,12 @@ bool vhost_vq_is_setup(struct vhost_virtqueue *vq);
 int vhost_vq_init_access(struct vhost_virtqueue *);
 int vhost_add_used(struct vhost_virtqueue *, unsigned int head, int len);
 int vhost_add_used_n(struct vhost_virtqueue *, struct vring_used_elem *heads,
-		     unsigned count);
+		     u16 *nheads, unsigned count);
 void vhost_add_used_and_signal(struct vhost_dev *, struct vhost_virtqueue *,
 			       unsigned int id, int len);
 void vhost_add_used_and_signal_n(struct vhost_dev *, struct vhost_virtqueue *,
-				 struct vring_used_elem *heads, unsigned count);
+				 struct vring_used_elem *heads, u16 *nheads,
+				 unsigned count);
 void vhost_signal(struct vhost_dev *, struct vhost_virtqueue *);
 void vhost_disable_notify(struct vhost_dev *, struct vhost_virtqueue *);
 bool vhost_vq_avail_empty(struct vhost_dev *, struct vhost_virtqueue *);
-- 
2.31.1

From nobody Tue Oct 7 18:23:15 2025
From: Jason Wang
To: mst@redhat.com, jasowang@redhat.com, eperezma@redhat.com
Cc: kvm@vger.kernel.org, virtualization@lists.linux.dev, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, jonah.palmer@oracle.com
Subject: [PATCH net-next 2/2] vhost_net: basic in_order support
Date: Tue, 8 Jul 2025 14:48:19 +0800
Message-ID: <20250708064819.35282-3-jasowang@redhat.com>
In-Reply-To: <20250708064819.35282-1-jasowang@redhat.com>
References: <20250708064819.35282-1-jasowang@redhat.com>

This patch introduces basic in-order support for vhost-net. By
recording the number of batched buffers in an array when calling
`vhost_add_used_and_signal_n()`, we can reduce the number of userspace
accesses. Note that the vhost-net batching logic is kept as we still
count the number of buffers there.

Testing Results:

With testpmd:

- TX: txonly mode + vhost_net with XDP_DROP on TAP shows a 17.5%
  improvement, from 4.75 Mpps to 5.35 Mpps.
- RX: No obvious improvements were observed.

With virtio-ring in-order experimental code in the guest:

- TX: pktgen in the guest + XDP_DROP on TAP shows a 19% improvement,
  from 5.2 Mpps to 6.2 Mpps.
- RX: pktgen on TAP with vhost_net + XDP_DROP in the guest achieves a
  6.1% improvement, from 3.47 Mpps to 3.61 Mpps.
Signed-off-by: Jason Wang
Acked-by: Eugenio Pérez
Acked-by: Jonah Palmer
---
 drivers/vhost/net.c | 86 ++++++++++++++++++++++++++++++++-------------
 1 file changed, 61 insertions(+), 25 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 4f9c67f17b49..8ac994b3228a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -74,7 +74,8 @@ enum {
 	(1ULL << VHOST_NET_F_VIRTIO_NET_HDR) |
 	(1ULL << VIRTIO_NET_F_MRG_RXBUF) |
 	(1ULL << VIRTIO_F_ACCESS_PLATFORM) |
-	(1ULL << VIRTIO_F_RING_RESET)
+	(1ULL << VIRTIO_F_RING_RESET) |
+	(1ULL << VIRTIO_F_IN_ORDER)
 };
 
 enum {
@@ -450,7 +451,8 @@ static int vhost_net_enable_vq(struct vhost_net *n,
 	return vhost_poll_start(poll, sock->file);
 }
 
-static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
+static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq,
+				  unsigned int count)
 {
 	struct vhost_virtqueue *vq = &nvq->vq;
 	struct vhost_dev *dev = vq->dev;
@@ -458,8 +460,8 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
 	if (!nvq->done_idx)
 		return;
 
-	vhost_add_used_and_signal_n(dev, vq, vq->heads, NULL,
-				    nvq->done_idx);
+	vhost_add_used_and_signal_n(dev, vq, vq->heads,
+				    vq->nheads, count);
 	nvq->done_idx = 0;
 }
 
@@ -468,6 +470,8 @@ static void vhost_tx_batch(struct vhost_net *net,
 			   struct socket *sock,
 			   struct msghdr *msghdr)
 {
+	struct vhost_virtqueue *vq = &nvq->vq;
+	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
 	struct tun_msg_ctl ctl = {
 		.type = TUN_MSG_PTR,
 		.num = nvq->batched_xdp,
@@ -475,6 +479,11 @@ static void vhost_tx_batch(struct vhost_net *net,
 	};
 	int i, err;
 
+	if (in_order) {
+		vq->heads[0].len = 0;
+		vq->nheads[0] = nvq->done_idx;
+	}
+
 	if (nvq->batched_xdp == 0)
 		goto signal_used;
 
@@ -496,7 +505,7 @@ static void vhost_tx_batch(struct vhost_net *net,
 	}
 
 signal_used:
-	vhost_net_signal_used(nvq);
+	vhost_net_signal_used(nvq, in_order ? 1 : nvq->done_idx);
 	nvq->batched_xdp = 0;
 }
 
@@ -758,6 +767,7 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
 	int sent_pkts = 0;
 	bool sock_can_batch = (sock->sk->sk_sndbuf == INT_MAX);
 	bool busyloop_intr;
+	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
 
 	do {
 		busyloop_intr = false;
@@ -794,11 +804,13 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
 				break;
 			}
 
-			/* We can't build XDP buff, go for single
-			 * packet path but let's flush batched
-			 * packets.
-			 */
-			vhost_tx_batch(net, nvq, sock, &msg);
+			if (nvq->batched_xdp) {
+				/* We can't build XDP buff, go for single
+				 * packet path but let's flush batched
+				 * packets.
+				 */
+				vhost_tx_batch(net, nvq, sock, &msg);
+			}
 			msg.msg_control = NULL;
 		} else {
 			if (tx_can_batch(vq, total_len))
@@ -819,8 +831,12 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
 			pr_debug("Truncated TX packet: len %d != %zd\n",
				 err, len);
 done:
-		vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
-		vq->heads[nvq->done_idx].len = 0;
+		if (in_order) {
+			vq->heads[0].id = cpu_to_vhost32(vq, head);
+		} else {
+			vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
+			vq->heads[nvq->done_idx].len = 0;
+		}
 		++nvq->done_idx;
 	} while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
 
@@ -999,7 +1015,7 @@ static int peek_head_len(struct vhost_net_virtqueue *rvq, struct sock *sk)
 }
 
 static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
-				      bool *busyloop_intr)
+				      bool *busyloop_intr, unsigned int count)
 {
 	struct vhost_net_virtqueue *rnvq = &net->vqs[VHOST_NET_VQ_RX];
 	struct vhost_net_virtqueue *tnvq = &net->vqs[VHOST_NET_VQ_TX];
@@ -1009,7 +1025,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
 
 	if (!len && rvq->busyloop_timeout) {
 		/* Flush batched heads first */
-		vhost_net_signal_used(rnvq);
+		vhost_net_signal_used(rnvq, count);
 		/* Both tx vq and rx socket were polled here */
 		vhost_net_busy_poll(net, rvq, tvq, busyloop_intr, true);
 
@@ -1021,7 +1037,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
 
 /* This is a multi-buffer version of vhost_get_desc, that works if
  * vq has read descriptors only.
- * @vq - the relevant virtqueue
+ * @nvq - the relevant vhost_net virtqueue
 * @datalen - data length we'll be reading
 * @iovcount - returned count of io vectors we fill
 * @log - vhost log
@@ -1029,14 +1045,17 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
 * @quota - headcount quota, 1 for big buffer
 * returns number of buffer heads allocated, negative on error
 */
-static int get_rx_bufs(struct vhost_virtqueue *vq,
+static int get_rx_bufs(struct vhost_net_virtqueue *nvq,
 		       struct vring_used_elem *heads,
+		       u16 *nheads,
 		       int datalen,
 		       unsigned *iovcount,
 		       struct vhost_log *log,
 		       unsigned *log_num,
 		       unsigned int quota)
 {
+	struct vhost_virtqueue *vq = &nvq->vq;
+	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
 	unsigned int out, in;
 	int seg = 0;
 	int headcount = 0;
@@ -1073,14 +1092,16 @@ static int get_rx_bufs(struct vhost_virtqueue *vq,
 			nlogs += *log_num;
 			log += *log_num;
 		}
-		heads[headcount].id = cpu_to_vhost32(vq, d);
 		len = iov_length(vq->iov + seg, in);
-		heads[headcount].len = cpu_to_vhost32(vq, len);
-		datalen -= len;
+		if (!in_order) {
+			heads[headcount].id = cpu_to_vhost32(vq, d);
+			heads[headcount].len = cpu_to_vhost32(vq, len);
+		}
 		++headcount;
+		datalen -= len;
 		seg += in;
 	}
-	heads[headcount - 1].len = cpu_to_vhost32(vq, len + datalen);
+
 	*iovcount = seg;
 	if (unlikely(log))
 		*log_num = nlogs;
@@ -1090,6 +1111,15 @@ static int get_rx_bufs(struct vhost_virtqueue *vq,
 		r = UIO_MAXIOV + 1;
 		goto err;
 	}
+
+	if (!in_order)
+		heads[headcount - 1].len = cpu_to_vhost32(vq, len + datalen);
+	else {
+		heads[0].len = cpu_to_vhost32(vq, len + datalen);
+		heads[0].id = cpu_to_vhost32(vq, d);
+		nheads[0] = headcount;
+	}
+
 	return headcount;
 err:
 	vhost_discard_vq_desc(vq, headcount);
@@ -1102,6 +1132,8 @@ static void handle_rx(struct vhost_net *net)
 {
 	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_RX];
 	struct vhost_virtqueue *vq = &nvq->vq;
+	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
+	unsigned int count = 0;
 	unsigned in, log;
 	struct vhost_log *vq_log;
 	struct msghdr msg = {
@@ -1149,12 +1181,13 @@ static void handle_rx(struct vhost_net *net)
 
 	do {
 		sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
-						      &busyloop_intr);
+						      &busyloop_intr, count);
 		if (!sock_len)
 			break;
 		sock_len += sock_hlen;
 		vhost_len = sock_len + vhost_hlen;
-		headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
+		headcount = get_rx_bufs(nvq, vq->heads + count,
+					vq->nheads + count,
 					vhost_len, &in, vq_log, &log,
 					likely(mergeable) ? UIO_MAXIOV : 1);
 		/* On error, stop handling until the next kick. */
@@ -1230,8 +1263,11 @@ static void handle_rx(struct vhost_net *net)
 			goto out;
 		}
 		nvq->done_idx += headcount;
-		if (nvq->done_idx > VHOST_NET_BATCH)
-			vhost_net_signal_used(nvq);
+		count += in_order ? 1 : headcount;
+		if (nvq->done_idx > VHOST_NET_BATCH) {
+			vhost_net_signal_used(nvq, count);
+			count = 0;
+		}
 		if (unlikely(vq_log))
 			vhost_log_write(vq, vq_log, log, vhost_len,
 					vq->iov, in);
@@ -1243,7 +1279,7 @@ static void handle_rx(struct vhost_net *net)
 	else if (!sock_len)
 		vhost_net_enable_vq(net, vq);
 out:
-	vhost_net_signal_used(nvq);
+	vhost_net_signal_used(nvq, count);
 	mutex_unlock(&vq->mutex);
 }
 
-- 
2.31.1