This series introduces an optimization for vsock/virtio to reduce latency
and increase throughput: when the guest sends a packet to the host and
the intermediate queue (send_pkt_queue) is empty, the packet is put
directly into the virtqueue, provided there is enough space.
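
A minimal sketch of the idea follows, reusing the driver's existing
types and helpers (struct virtio_vsock, virtio_vsock_skb_queue_tail(),
virtio_vsock_workqueue). The wrapper name
virtio_transport_send_skb_or_queue() and the exact fallback handling are
assumptions for illustration only, not the patch's code:

static void virtio_transport_send_skb_or_queue(struct virtio_vsock *vsock,
					       struct sk_buff *skb)
{
	/*
	 * Illustrative sketch, not the patch itself. Try the fast path
	 * only when the intermediate queue is empty, so packet ordering
	 * is preserved: anything already queued must be sent by the
	 * worker first.
	 */
	if (skb_queue_empty_lockless(&vsock->send_pkt_queue) &&
	    !virtio_transport_send_skb_fast_path(vsock, skb))
		return;

	/* Slow path: queue the packet and let the existing worker send it. */
	virtio_vsock_skb_queue_tail(&vsock->send_pkt_queue, skb);
	queue_work(virtio_vsock_workqueue, &vsock->send_pkt_work);
}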
v3->v4
While running fio experiments with a 64B payload, I realized there was
a mistake in my fio configuration, so I re-ran all the experiments; the
latency numbers are now indeed lower with the patch applied. I also
noticed that I was kicking the host without holding the tx_lock.
- Fixed a configuration mistake on fio and re-ran all experiments.
- Measured latency with fio using a 64B payload.
- virtio_transport_send_skb_fast_path now sends the kick with the tx_lock held.
- Addressed all minor style changes requested by the maintainer.
- Rebased on latest net-next
- Link to v3: https://lore.kernel.org/r/20240711-pinna-v3-0-697d4164fe80@outlook.com
v2->v3
- Performed more experiments with iperf3 using multiple streams
- Handling of reply packets removed from virtio_transport_send_skb,
  as it is needed only by the worker.
- Removed atomic_inc/atomic_sub when queuing directly to the vq.
- Introduced virtio_transport_send_skb_fast_path that handles the
steps for sending on the vq.
- Fixed a missing mutex_unlock in error path.
- Changed authorship of the second commit
- Rebased on latest net-next
v1->v2
In this v2 I replaced a mutex_lock with a mutex_trylock because it is
called inside an RCU critical section. I also added a check on tx_run,
so the packet is not queued if the module is being removed. I'd like to
thank Stefano for reporting the tx_run issue.
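
To illustrate the trylock and tx_run points above (and the v3->v4 fix of
kicking while tx_lock is held), here is a hypothetical sketch of the fast
path. It reuses the driver's existing fields (tx_lock, tx_run,
vqs[VSOCK_VQ_TX]), but the signature of virtio_transport_send_skb() and
the error handling are assumptions, not the patch's exact code:

static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock,
					       struct sk_buff *skb)
{
	struct virtqueue *vq = vsock->vqs[VSOCK_VQ_TX];
	int ret;

	/*
	 * The caller runs inside an RCU read-side critical section, so
	 * sleeping is not allowed: use mutex_trylock() and fall back to
	 * the worker if the lock cannot be taken.
	 */
	if (!mutex_trylock(&vsock->tx_lock))
		return -EBUSY;

	/* Module is being removed: do not touch the virtqueue. */
	if (!vsock->tx_run) {
		ret = -ENODEV;
		goto out;
	}

	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
	if (!ret)
		virtqueue_kick(vq);	/* kick while tx_lock is still held */

out:
	mutex_unlock(&vsock->tx_lock);
	return ret;
}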
Applied all Stefano's suggestions:
- Minor code style changes
- Minor commit text rewrite
Performed more experiments:
 - Checked whether all the packets go directly to the vq (Matias' suggestion)
 - Used iperf3 to see if there is any improvement in overall throughput
   from guest to host
 - Pinned the vhost process to a pCPU.
 - Ran fio using a 512B payload
Rebased on latest net-next
---
Luigi Leonardi (1):
vsock/virtio: avoid queuing packets when intermediate queue is empty
Marco Pinna (1):
vsock/virtio: refactor virtio_transport_send_pkt_work
net/vmw_vsock/virtio_transport.c | 144 +++++++++++++++++++++++++--------------
1 file changed, 94 insertions(+), 50 deletions(-)
---
base-commit: 1722389b0d863056d78287a120a1d6cadb8d4f7b
change-id: 20240730-pinna-db8cc1b8b037
Best regards,
--
Luigi Leonardi <luigi.leonardi@outlook.com>
Hi Michael,
this series is marked as "Not Applicable" for the net-next tree:
https://patchwork.kernel.org/project/netdevbpf/patch/20240730-pinna-v4-2-5c9179164db5@outlook.com/

Actually this is more about the virtio-vsock driver, so can you queue
this on your tree?

Thanks,
Stefano

On Tue, Jul 30, 2024 at 09:47:30PM GMT, Luigi Leonardi via B4 Relay wrote:
> [...]
On Mon, 5 Aug 2024 10:39:23 +0200 Stefano Garzarella wrote:
> this series is marked as "Not Applicable" for the net-next tree:
> https://patchwork.kernel.org/project/netdevbpf/patch/20240730-pinna-v4-2-5c9179164db5@outlook.com/
>
> Actually this is more about the virtio-vsock driver, so can you queue
> this on your tree?

We can revive it in our patchwork, too, if that's easier.
Not entirely sure why it was discarded, seems borderline.
On Tue, Aug 06, 2024 at 09:02:57AM GMT, Jakub Kicinski wrote:
>On Mon, 5 Aug 2024 10:39:23 +0200 Stefano Garzarella wrote:
>> this series is marked as "Not Applicable" for the net-next tree:
>> https://patchwork.kernel.org/project/netdevbpf/patch/20240730-pinna-v4-2-5c9179164db5@outlook.com/
>>
>> Actually this is more about the virtio-vsock driver, so can you queue
>> this on your tree?
>
>We can revive it in our patchwork, too, if that's easier.

That's perfectly fine with me, if Michael hasn't already queued it.

>Not entirely sure why it was discarded, seems borderline.
>

Yes, even to me it's not super clear when to expect net and when virtio.
Usually the other vsock transports (VMCI and HyperV) go with net, so
virtio-vsock is a bit of an exception.

I don't have any particular preferences, so whatever works best for you
and Michael is fine with me.

Thanks,
Stefano