[PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Simon Schippers 1 week, 2 days ago
This patch series deals with TUN, TAP and vhost_net, which drop incoming 
SKBs whenever their internal ptr_ring buffer is full. With this patch 
series, the associated netdev queue is instead stopped before that 
happens. This allows the connected qdisc to function correctly, as 
reported in [1], and improves application-layer performance; see our 
paper [2]. Meanwhile, raw throughput differs only slightly:

+------------------------+----------+----------+
| pktgen benchmarks      | Stock    | Patched  |
| i5 6300HQ, 20M packets |          |          |
+------------------------+----------+----------+
| TAP                    | 2.10Mpps | 1.99Mpps |
+------------------------+----------+----------+
| TAP+vhost_net          | 6.05Mpps | 6.14Mpps |
+------------------------+----------+----------+
| Note: Patched had no TX drops at all,        |
| while stock suffered numerous drops.         |
+----------------------------------------------+
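
The producer-side idea, as a rough sketch (this is not the actual patch 
code; the signature of the __ptr_ring_full_next helper introduced in 
patch 1 is an assumption, and the tfile/queue naming follows current 
tun.c):

/* Rough sketch only, not the code from this series. Stop the netdev
 * queue *before* producing the last entry that still fits, so the
 * consumer is guaranteed to observe the stopped state and can wake
 * the queue once it frees a slot. Further packets then wait in the
 * qdisc instead of being tail-dropped here.
 */
static netdev_tx_t example_xmit(struct sk_buff *skb,
				struct net_device *dev,
				struct tun_file *tfile)
{
	struct netdev_queue *queue =
		netdev_get_tx_queue(dev, tfile->queue_index);

	spin_lock(&tfile->tx_ring.producer_lock);

	if (__ptr_ring_full_next(&tfile->tx_ring))
		netif_tx_stop_queue(queue);

	if (__ptr_ring_produce(&tfile->tx_ring, skb)) {
		/* Ring already full; mirrors the existing drop path. */
		spin_unlock(&tfile->tx_ring.producer_lock);
		kfree_skb(skb);
		return NET_XMIT_DROP;
	}

	spin_unlock(&tfile->tx_ring.producer_lock);
	return NETDEV_TX_OK;
}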

This patch series includes TUN, TAP, and vhost_net because they share 
logic. Adjusting only one of them would break the others. Therefore, the 
patch series is structured as follows:
1+2: New ptr_ring helpers for 3 & 4
3: TUN & TAP: Stop netdev queue upon reaching a full ptr_ring
4: TUN & TAP: Wake netdev queue after consuming an entry
5+6+7: TUN & TAP: ptr_ring wrappers and other helpers to be called by 
vhost_net
8: vhost_net: Call the wrappers & helpers

Possible future work:
- Introduction of Byte Queue Limits as suggested by Stephen Hemminger 
(a rough sketch of the standard BQL hooks follows below)
- Adaptation of the netdev queue flow control for ipvtap & macvtap
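
For reference, BQL integration would roughly mean pairing the standard 
netdev_tx_sent_queue()/netdev_tx_completed_queue() hooks around the 
ptr_ring; the snippet below only sketches that generic API and is not 
part of this series (dev, txq and consumed_bytes are placeholders):

/* Producer side, after queuing an skb to the ptr_ring: */
netdev_tx_sent_queue(netdev_get_tx_queue(dev, txq), skb->len);

/* Consumer side, e.g. in tun_ring_recv() once userspace has read the
 * skb, so BQL can release budget and the queue can be woken:
 */
netdev_tx_completed_queue(netdev_get_tx_queue(dev, txq),
			  1 /* packets */, consumed_bytes);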

[1] Link: 
https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
[2] Link: 
https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf

Links to previous versions:
V4: 
https://lore.kernel.org/netdev/20250902080957.47265-1-simon.schippers@tu-dortmund.de/T/#u
V3: 
https://lore.kernel.org/netdev/20250825211832.84901-1-simon.schippers@tu-dortmund.de/T/#u
V2: 
https://lore.kernel.org/netdev/20250811220430.14063-1-simon.schippers@tu-dortmund.de/T/#u
V1: 
https://lore.kernel.org/netdev/20250808153721.261334-1-simon.schippers@tu-dortmund.de/T/#u

Changelog:
V4 -> V5:
- Stop the netdev queue prior to producing the final fitting ptr_ring entry
-> Ensures the consumer has the latest netdev queue state, making it safe 
to wake the queue
-> Resolves an issue in vhost_net where the netdev queue could remain 
stopped despite being empty
-> For TUN/TAP, the netdev queue no longer needs to be woken in the 
blocking loop
-> Introduces new helpers __ptr_ring_full_next and 
__ptr_ring_will_invalidate for this purpose (an illustrative sketch of 
the former follows below)

- vhost_net now uses the TUN/TAP wrappers for ptr_ring consumption 
rather than maintaining its own rx_ring pointer
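
For illustration, a helper with the semantics of __ptr_ring_full_next 
could take roughly the following shape (hypothetical name and body; the 
implementation in patch 1 may differ):

/* True if one more successful insertion would leave the ring full.
 * Like the other __ptr_ring_* helpers, this must be called with the
 * producer lock held.
 */
static inline bool ring_full_after_next_produce(struct ptr_ring *r)
{
	int next = r->producer + 1;

	if (unlikely(next >= r->size))
		next = 0;

	/* The ring is full when the current producer slot is occupied,
	 * so it will be full after the next insertion if the slot
	 * *after* it is still occupied.
	 */
	return r->queue[next];
}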

V3 -> V4:
- Target net-next instead of net
- Changed to patch series instead of single patch
- Renamed the series from the old title
"TUN/TAP: Improving throughput and latency by avoiding SKB drops"
- Wake the netdev queue with the new wake_netdev_queue helpers whenever 
there is spare capacity in the ptr_ring, instead of waiting for it to 
be empty
- Use tun_file instead of tun_struct in tun_ring_recv for more 
consistent logic
- Use an smp_wmb()/smp_rmb() barrier pair, which avoids the rare packet 
drops seen before (a generic sketch of the pairing follows below)
- Use safer logic in vhost_net, taking RCU read locks to access TUN/TAP 
data
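
The generic shape of that pairing (illustration only, not the exact 
placement in the patches; queue, ring and skb are placeholders, and the 
producer/consumer locking around the __ptr_ring_* calls is omitted):

/* Producer (xmit path): publish the stopped state before the entry. */
netif_tx_stop_queue(queue);
smp_wmb();				/* pairs with smp_rmb() below */
__ptr_ring_produce(ring, skb);

/* Consumer (read path): after taking an entry, the stopped state is
 * guaranteed to be visible, so waking the queue is safe.
 */
skb = __ptr_ring_consume(ring);
smp_rmb();				/* pairs with smp_wmb() above */
if (netif_tx_queue_stopped(queue))
	netif_tx_wake_queue(queue);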

V2 -> V3: Added support for TAP and TAP+vhost_net.

V1 -> V2: Removed NETDEV_TX_BUSY return case in tun_net_xmit and removed 
unnecessary netif_tx_wake_queue in tun_ring_recv.

Thanks,
Simon :)

Simon Schippers (8):
  __ptr_ring_full_next: Returns if ring will be full after next
    insertion
  Move the decision of invalidation out of __ptr_ring_discard_one
  TUN, TAP & vhost_net: Stop netdev queue before reaching a full
    ptr_ring
  TUN & TAP: Wake netdev queue after consuming an entry
  TUN & TAP: Provide ptr_ring_consume_batched wrappers for vhost_net
  TUN & TAP: Provide ptr_ring_unconsume wrappers for vhost_net
  TUN & TAP: Methods to determine whether file is TUN/TAP for vhost_net
  vhost_net: Replace rx_ring with calls of TUN/TAP wrappers

 drivers/net/tap.c        | 115 +++++++++++++++++++++++++++++++--
 drivers/net/tun.c        | 136 +++++++++++++++++++++++++++++++++++----
 drivers/vhost/net.c      |  90 +++++++++++++++++---------
 include/linux/if_tap.h   |  15 +++++
 include/linux/if_tun.h   |  18 ++++++
 include/linux/ptr_ring.h |  54 +++++++++++++---
 6 files changed, 367 insertions(+), 61 deletions(-)

-- 
2.43.0
Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Michael S. Tsirkin 1 week ago
On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> This patch series deals with TUN, TAP and vhost_net which drop incoming 
> SKBs whenever their internal ptr_ring buffer is full. Instead, with this 
> patch series, the associated netdev queue is stopped before this happens. 
> This allows the connected qdisc to function correctly as reported by [1] 
> and improves application-layer performance, see our paper [2]. Meanwhile 
> the theoretical performance differs only slightly:


About this whole approach.
What if userspace is not consuming packets?
Won't the watchdog warnings appear?
Is it safe to allow userspace to block a tx queue
indefinitely?

-- 
MST
Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Jason Wang 1 week ago
On Wed, Sep 24, 2025 at 3:18 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> > This patch series deals with TUN, TAP and vhost_net which drop incoming
> > SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> > patch series, the associated netdev queue is stopped before this happens.
> > This allows the connected qdisc to function correctly as reported by [1]
> > and improves application-layer performance, see our paper [2]. Meanwhile
> > the theoretical performance differs only slightly:
>
>
> About this whole approach.
> What if userspace is not consuming packets?
> Won't the watchdog warnings appear?
> Is it safe to allow userspace to block a tx queue
> indefinitely?

I think it's safe as it's a userspace device, there's no way to
guarantee the userspace can process the packet in time (so no watchdog
for TUN).

Thanks

>
> --
> MST
>
Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Michael S. Tsirkin 1 week ago
On Wed, Sep 24, 2025 at 03:33:08PM +0800, Jason Wang wrote:
> On Wed, Sep 24, 2025 at 3:18 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> > > This patch series deals with TUN, TAP and vhost_net which drop incoming
> > > SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> > > patch series, the associated netdev queue is stopped before this happens.
> > > This allows the connected qdisc to function correctly as reported by [1]
> > > and improves application-layer performance, see our paper [2]. Meanwhile
> > > the theoretical performance differs only slightly:
> >
> >
> > About this whole approach.
> > What if userspace is not consuming packets?
> > Won't the watchdog warnings appear?
> > Is it safe to allow userspace to block a tx queue
> > indefinitely?
> 
> I think it's safe as it's a userspace device, there's no way to
> guarantee the userspace can process the packet in time (so no watchdog
> for TUN).
> 
> Thanks

Hmm. Anyway, I guess if we ever want to enable timeout for tun,
we can worry about it then. Does not need to block this patchset.

> >
> > --
> > MST
> >

Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Jason Wang 1 week ago
On Wed, Sep 24, 2025 at 3:42 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Wed, Sep 24, 2025 at 03:33:08PM +0800, Jason Wang wrote:
> > On Wed, Sep 24, 2025 at 3:18 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> > > > This patch series deals with TUN, TAP and vhost_net which drop incoming
> > > > SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> > > > patch series, the associated netdev queue is stopped before this happens.
> > > > This allows the connected qdisc to function correctly as reported by [1]
> > > > and improves application-layer performance, see our paper [2]. Meanwhile
> > > > the theoretical performance differs only slightly:
> > >
> > >
> > > About this whole approach.
> > > What if userspace is not consuming packets?
> > > Won't the watchdog warnings appear?
> > > Is it safe to allow userspace to block a tx queue
> > > indefinitely?
> >
> > I think it's safe as it's a userspace device, there's no way to
> > guarantee the userspace can process the packet in time (so no watchdog
> > for TUN).
> >
> > Thanks
>
> Hmm. Anyway, I guess if we ever want to enable timeout for tun,
> we can worry about it then.

The problem is that the skb is not freed until userspace calls recvmsg(),
so it would be tricky to implement a watchdog. (Or if we can do that, we
can do BQL as well.)

> Does not need to block this patchset.

Yes.

Thanks

>
> > >
> > > --
> > > MST
> > >
>
Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Michael S. Tsirkin 1 week ago
On Wed, Sep 24, 2025 at 04:08:33PM +0800, Jason Wang wrote:
> On Wed, Sep 24, 2025 at 3:42 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Wed, Sep 24, 2025 at 03:33:08PM +0800, Jason Wang wrote:
> > > On Wed, Sep 24, 2025 at 3:18 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> > > > > This patch series deals with TUN, TAP and vhost_net which drop incoming
> > > > > SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> > > > > patch series, the associated netdev queue is stopped before this happens.
> > > > > This allows the connected qdisc to function correctly as reported by [1]
> > > > > and improves application-layer performance, see our paper [2]. Meanwhile
> > > > > the theoretical performance differs only slightly:
> > > >
> > > >
> > > > About this whole approach.
> > > > What if userspace is not consuming packets?
> > > > Won't the watchdog warnings appear?
> > > > Is it safe to allow userspace to block a tx queue
> > > > indefinitely?
> > >
> > > I think it's safe as it's a userspace device, there's no way to
> > > guarantee the userspace can process the packet in time (so no watchdog
> > > for TUN).
> > >
> > > Thanks
> >
> > Hmm. Anyway, I guess if we ever want to enable timeout for tun,
> > we can worry about it then.
> 
> The problem is that the skb is not freed until userspace calls recvmsg(),
> so it would be tricky to implement a watchdog. (Or if we can do that, we
> can do BQL as well.)

I thought the watchdog generally watches queues not individual skbs?

> > Does not need to block this patchset.
> 
> Yes.
> 
> Thanks
> 
> >
> > > >
> > > > --
> > > > MST
> > > >
> >

Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Jason Wang 1 week ago
On Wed, Sep 24, 2025 at 4:10 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Wed, Sep 24, 2025 at 04:08:33PM +0800, Jason Wang wrote:
> > On Wed, Sep 24, 2025 at 3:42 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > On Wed, Sep 24, 2025 at 03:33:08PM +0800, Jason Wang wrote:
> > > > On Wed, Sep 24, 2025 at 3:18 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > >
> > > > > On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> > > > > > This patch series deals with TUN, TAP and vhost_net which drop incoming
> > > > > > SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> > > > > > patch series, the associated netdev queue is stopped before this happens.
> > > > > > This allows the connected qdisc to function correctly as reported by [1]
> > > > > > and improves application-layer performance, see our paper [2]. Meanwhile
> > > > > > the theoretical performance differs only slightly:
> > > > >
> > > > >
> > > > > About this whole approach.
> > > > > What if userspace is not consuming packets?
> > > > > Won't the watchdog warnings appear?
> > > > > Is it safe to allow userspace to block a tx queue
> > > > > indefinitely?
> > > >
> > > > I think it's safe as it's a userspace device, there's no way to
> > > > guarantee the userspace can process the packet in time (so no watchdog
> > > > for TUN).
> > > >
> > > > Thanks
> > >
> > > Hmm. Anyway, I guess if we ever want to enable timeout for tun,
> > > we can worry about it then.
> >
> > The problem is that the skb is not freed until userspace calls recvmsg(),
> > so it would be tricky to implement a watchdog. (Or if we can do that, we
> > can do BQL as well.)
>
> I thought the watchdog generally watches queues not individual skbs?

Yes, but only if ndo_tx_timeout is implemented.

I mean it would be tricky if we want to implement ndo_tx_timeout since
we can't choose a good timeout.
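
For reference, opting in would mean wiring up the hook roughly like
this (sketch only; TUN does not implement it today, and the
watchdog_timeo value is exactly the part that is hard to pick):

static void example_tx_timeout(struct net_device *dev, unsigned int txqueue)
{
	/* Called by the stack watchdog when a stopped queue has made no
	 * progress for dev->watchdog_timeo jiffies.
	 */
	netdev_warn(dev, "TX queue %u stalled\n", txqueue);
}

static const struct net_device_ops example_netdev_ops = {
	.ndo_tx_timeout	= example_tx_timeout,
	/* ... */
};

/* plus, at setup time, something like: dev->watchdog_timeo = 5 * HZ; */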

Thanks
Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Michael S. Tsirkin 1 week ago
On Wed, Sep 24, 2025 at 04:30:45PM +0800, Jason Wang wrote:
> On Wed, Sep 24, 2025 at 4:10 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Wed, Sep 24, 2025 at 04:08:33PM +0800, Jason Wang wrote:
> > > On Wed, Sep 24, 2025 at 3:42 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > On Wed, Sep 24, 2025 at 03:33:08PM +0800, Jason Wang wrote:
> > > > > On Wed, Sep 24, 2025 at 3:18 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > >
> > > > > > On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> > > > > > > This patch series deals with TUN, TAP and vhost_net which drop incoming
> > > > > > > SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> > > > > > > patch series, the associated netdev queue is stopped before this happens.
> > > > > > > This allows the connected qdisc to function correctly as reported by [1]
> > > > > > > and improves application-layer performance, see our paper [2]. Meanwhile
> > > > > > > the theoretical performance differs only slightly:
> > > > > >
> > > > > >
> > > > > > About this whole approach.
> > > > > > What if userspace is not consuming packets?
> > > > > > Won't the watchdog warnings appear?
> > > > > > Is it safe to allow userspace to block a tx queue
> > > > > > indefinitely?
> > > > >
> > > > > I think it's safe as it's a userspace device, there's no way to
> > > > > guarantee the userspace can process the packet in time (so no watchdog
> > > > > for TUN).
> > > > >
> > > > > Thanks
> > > >
> > > > Hmm. Anyway, I guess if we ever want to enable timeout for tun,
> > > > we can worry about it then.
> > >
> > > The problem is that the skb is not freed until userspace calls recvmsg(),
> > > so it would be tricky to implement a watchdog. (Or if we can do that, we
> > > can do BQL as well.)
> >
> > I thought the watchdog generally watches queues not individual skbs?
> 
> Yes, but only if ndo_tx_timeout is implemented.
> 
> I mean it would be tricky if we want to implement ndo_tx_timeout since
> we can't choose a good timeout.
> 
> Thanks

userspace could supply that, thinkably. anyway, we can worry
about that when we need that.

-- 
MST

Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Michael S. Tsirkin 1 week, 1 day ago
On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> This patch series deals with TUN, TAP and vhost_net which drop incoming 
> SKBs whenever their internal ptr_ring buffer is full. Instead, with this 
> patch series, the associated netdev queue is stopped before this happens. 
> This allows the connected qdisc to function correctly as reported by [1] 
> and improves application-layer performance, see our paper [2]. Meanwhile 
> the theoretical performance differs only slightly:
> 
> +------------------------+----------+----------+
> | pktgen benchmarks      | Stock    | Patched  |
> | i5 6300HQ, 20M packets |          |          |
> +------------------------+----------+----------+
> | TAP                    | 2.10Mpps | 1.99Mpps |
> +------------------------+----------+----------+
> | TAP+vhost_net          | 6.05Mpps | 6.14Mpps |
> +------------------------+----------+----------+
> | Note: Patched had no TX drops at all,        |
> | while stock suffered numerous drops.         |
> +----------------------------------------------+
> 
> This patch series includes TUN, TAP, and vhost_net because they share 
> logic. Adjusting only one of them would break the others. Therefore, the 
> patch series is structured as follows:
> 1+2: New ptr_ring helpers for 3 & 4
> 3: TUN & TAP: Stop netdev queue upon reaching a full ptr_ring


so what happens if you only apply patches 1-3?

[PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Simon Schippers 1 week ago
On 23.09.25 16:55, Michael S. Tsirkin wrote:
> On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
>> This patch series deals with TUN, TAP and vhost_net which drop incoming 
>> SKBs whenever their internal ptr_ring buffer is full. Instead, with this 
>> patch series, the associated netdev queue is stopped before this happens. 
>> This allows the connected qdisc to function correctly as reported by [1] 
>> and improves application-layer performance, see our paper [2]. Meanwhile 
>> the theoretical performance differs only slightly:
>>
>> +------------------------+----------+----------+
>> | pktgen benchmarks      | Stock    | Patched  |
>> | i5 6300HQ, 20M packets |          |          |
>> +------------------------+----------+----------+
>> | TAP                    | 2.10Mpps | 1.99Mpps |
>> +------------------------+----------+----------+
>> | TAP+vhost_net          | 6.05Mpps | 6.14Mpps |
>> +------------------------+----------+----------+
>> | Note: Patched had no TX drops at all,        |
>> | while stock suffered numerous drops.         |
>> +----------------------------------------------+
>>
>> This patch series includes TUN, TAP, and vhost_net because they share 
>> logic. Adjusting only one of them would break the others. Therefore, the 
>> patch series is structured as follows:
>> 1+2: New ptr_ring helpers for 3 & 4
>> 3: TUN & TAP: Stop netdev queue upon reaching a full ptr_ring
> 
> 
> so what happens if you only apply patches 1-3?
> 

The netdev queue of vhost_net would be stopped by tun_net_xmit but would
never be woken again.

Re: [PATCH net-next v5 0/8] TUN/TAP & vhost_net: netdev queue flow control to avoid ptr_ring tail drop
Posted by Michael S. Tsirkin 1 week ago
On Wed, Sep 24, 2025 at 07:59:46AM +0200, Simon Schippers wrote:
> On 23.09.25 16:55, Michael S. Tsirkin wrote:
> > On Tue, Sep 23, 2025 at 12:15:45AM +0200, Simon Schippers wrote:
> >> This patch series deals with TUN, TAP and vhost_net which drop incoming 
> >> SKBs whenever their internal ptr_ring buffer is full. Instead, with this 
> >> patch series, the associated netdev queue is stopped before this happens. 
> >> This allows the connected qdisc to function correctly as reported by [1] 
> >> and improves application-layer performance, see our paper [2]. Meanwhile 
> >> the theoretical performance differs only slightly:
> >>
> >> +------------------------+----------+----------+
> >> | pktgen benchmarks      | Stock    | Patched  |
> >> | i5 6300HQ, 20M packets |          |          |
> >> +------------------------+----------+----------+
> >> | TAP                    | 2.10Mpps | 1.99Mpps |
> >> +------------------------+----------+----------+
> >> | TAP+vhost_net          | 6.05Mpps | 6.14Mpps |
> >> +------------------------+----------+----------+
> >> | Note: Patched had no TX drops at all,        |
> >> | while stock suffered numerous drops.         |
> >> +----------------------------------------------+
> >>
> >> This patch series includes TUN, TAP, and vhost_net because they share 
> >> logic. Adjusting only one of them would break the others. Therefore, the 
> >> patch series is structured as follows:
> >> 1+2: New ptr_ring helpers for 3 & 4
> >> 3: TUN & TAP: Stop netdev queue upon reaching a full ptr_ring
> > 
> > 
> > so what happens if you only apply patches 1-3?
> > 
> 
> The netdev queue of vhost_net would be stopped by tun_net_xmit but will
> never be woken again.

So this breaks bisect. Don't split patches like this please.


> >