This patch series deals with tun/tap and vhost-net which drop incoming
SKBs whenever their internal ptr_ring buffer is full. Instead, with this
patch series, the associated netdev queue is stopped before this happens.
This allows the connected qdisc to function correctly as reported by [1]
and improves application-layer performance, see our paper [2]. Meanwhile
the theoretical performance differs only slightly:
+--------------------------------+-----------+----------+
| pktgen benchmarks to Debian VM | Stock | Patched |
| i5 6300HQ, 20M packets | | |
+-----------------+--------------+-----------+----------+
| TAP | Transmitted | 195 Kpps | 183 Kpps |
| +--------------+-----------+----------+
| | Lost | 1615 Kpps | 0 pps |
+-----------------+--------------+-----------+----------+
| TAP+vhost_net | Transmitted | 589 Kpps | 588 Kpps |
| +--------------+-----------+----------+
| | Lost | 1164 Kpps | 0 pps |
+-----------------+--------------+-----------+----------+
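The core idea can be sketched as a small userspace model (all names below are illustrative stand-ins for the kernel primitives, not the actual patch code): the producer stops the queue before producing the final fitting entry, and the consumer wakes it once space is freed.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace sketch of the flow-control idea. ring_full_next() mirrors
 * the role of the new __ptr_ring_full_next() helper; queue_stopped
 * stands in for netif_tx_stop_queue()/netif_tx_wake_queue(). */
#define RING_SIZE 4

struct ring {
	void *slot[RING_SIZE];
	size_t head, tail, count;
	bool queue_stopped;
};

/* Would producing one more entry make the ring full? */
static bool ring_full_next(const struct ring *r)
{
	return r->count + 1 == RING_SIZE;
}

/* Produce one entry; stop the queue *before* the final fitting entry
 * goes in, so the consumer always sees the latest queue state. */
static bool ring_produce(struct ring *r, void *p)
{
	if (r->count == RING_SIZE)
		return false; /* cannot happen while the queue is stopped */
	if (ring_full_next(r))
		r->queue_stopped = true; /* models netif_tx_stop_queue() */
	r->slot[r->head] = p;
	r->head = (r->head + 1) % RING_SIZE;
	r->count++;
	return true;
}

/* Consume one entry and wake the queue now that there is spare space. */
static void *ring_consume(struct ring *r)
{
	void *p;

	if (r->count == 0)
		return NULL;
	p = r->slot[r->tail];
	r->tail = (r->tail + 1) % RING_SIZE;
	r->count--;
	r->queue_stopped = false; /* models netif_tx_wake_queue() */
	return p;
}
```

Because the queue is stopped instead of the SKB being dropped, the qdisc above it holds the excess packets, which is what lets traffic shaping behave as expected in the report [1].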
This patch series includes tun/tap and vhost-net because they share
logic. Adjusting only one of them would break the others. Therefore, the
patch series is structured as follows:
1+2: new ptr_ring helpers for 3
3: tun/tap: add synchronized ring produce/consume with queue
management
4+5+6: tun/tap: ptr_ring wrappers and other helpers to be called by
vhost-net
7: tun/tap & vhost-net: only now use the previously implemented
functions, to not break git bisect
8: tun/tap: drop get ring exports (not used anymore)
Possible future work:
- Introduction of Byte Queue Limits as suggested by Stephen Hemminger
- Adaptation of the netdev queue flow control for ipvtap & macvtap
[1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
[2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
Changelog:
V6:
General:
- Major adjustments to the descriptions. Special thanks to Jon Kohler!
- Fix git bisect by moving most logic into dedicated functions and only
start using them in patch 7.
- Moved the main logic of the coupled producer and consumer into a single
patch to avoid a chicken-and-egg dependency between commits :-)
- Rebased to 6.18-rc5 and re-ran the benchmarks, which now also include
lost packets (previously I missed a 0, so all benchmark results were
too high by a factor of 10...).
- Also include the benchmark in patch 7.
Producer:
- Move logic into the new helper tun_ring_produce()
- Added an smp_rmb() paired with the consumer, ensuring the space freed
by the consumer is visible
- Assume that ptr_ring is not full when __ptr_ring_full_next() is called
Consumer:
- Use an unpaired smp_rmb() instead of barrier() to ensure that the
netdev_tx_queue_stopped() call completes before discarding
- Also wake the netdev queue if it was stopped before discarding and then
becomes empty
-> Fixes race with producer as identified by MST in V5
-> Waking the netdev queues upon resize is not required anymore
- Use __ptr_ring_consume_created_space() instead of messing with ptr_ring
internals
-> Batched consume now just calls
__tun_ring_consume()/__tap_ring_consume() in a loop
- Added an smp_wmb() before waking the netdev queue which is paired with
the smp_rmb() discussed above
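The smp_wmb()/smp_rmb() pairing described above can be modeled in userspace with C11 fences (an illustrative one-slot model with made-up names, not the kernel code): the consumer publishes the freed space before waking the queue, and the producer's read barrier ensures it observes that space once it sees the queue awake.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* One-slot model of the consumer/producer barrier pairing. */
static void *ring_slot;
static atomic_bool queue_stopped;

/* Consumer: free the slot, then wake the queue. The release fence
 * (modeling smp_wmb()) orders the slot write before the wakeup. */
static void consume_and_wake(void)
{
	ring_slot = NULL;
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&queue_stopped, false, memory_order_relaxed);
}

/* Producer: once the queue is seen awake, the acquire fence (modeling
 * smp_rmb()) guarantees the freed slot is also visible. */
static bool producer_sees_space(void)
{
	if (atomic_load_explicit(&queue_stopped, memory_order_relaxed))
		return false;
	atomic_thread_fence(memory_order_acquire);
	return ring_slot == NULL;
}
```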
V5: https://lore.kernel.org/netdev/20250922221553.47802-1-simon.schippers@tu-dortmund.de/T/#u
- Stop the netdev queue prior to producing the final fitting ptr_ring entry
-> Ensures the consumer has the latest netdev queue state, making it safe
to wake the queue
-> Resolves an issue in vhost-net where the netdev queue could remain
stopped despite being empty
-> For TUN/TAP, the netdev queue no longer needs to be woken in the
blocking loop
-> Introduces new helpers __ptr_ring_full_next and
__ptr_ring_will_invalidate for this purpose
- vhost-net now uses wrappers of TUN/TAP for ptr_ring consumption rather
than maintaining its own rx_ring pointer
V4: https://lore.kernel.org/netdev/20250902080957.47265-1-simon.schippers@tu-dortmund.de/T/#u
- Target net-next instead of net
- Changed to patch series instead of single patch
- Changed to new title from old title
"TUN/TAP: Improving throughput and latency by avoiding SKB drops"
- Wake netdev queue with new helpers wake_netdev_queue when there is any
spare capacity in the ptr_ring instead of waiting for it to be empty
- Use tun_file instead of tun_struct in tun_ring_recv as a more consistent
logic
- Use smp_wmb() and smp_rmb() barrier pair, which avoids any packet drops
that happened rarely before
- Use safer logic for vhost-net using RCU read locks to access TUN/TAP data
V3: https://lore.kernel.org/netdev/20250825211832.84901-1-simon.schippers@tu-dortmund.de/T/#u
- Added support for TAP and TAP+vhost-net.
V2: https://lore.kernel.org/netdev/20250811220430.14063-1-simon.schippers@tu-dortmund.de/T/#u
- Removed NETDEV_TX_BUSY return case in tun_net_xmit and removed
unnecessary netif_tx_wake_queue in tun_ring_recv.
V1: https://lore.kernel.org/netdev/20250808153721.261334-1-simon.schippers@tu-dortmund.de/T/#u
Thanks,
Simon :)
Simon Schippers (8):
ptr_ring: add __ptr_ring_full_next() to predict imminent fullness
ptr_ring: add helper to check if consume created space
tun/tap: add synchronized ring produce/consume with queue management
tun/tap: add batched ring consume function
tun/tap: add unconsume function for returning entries to ring
tun/tap: add helper functions to check file type
tun/tap & vhost-net: use {tun|tap}_ring_{consume|produce} to avoid
tail drops
tun/tap: drop get ring exports
drivers/net/tap.c | 106 +++++++++++++++++++++++++--
drivers/net/tun.c | 154 +++++++++++++++++++++++++++++++++++----
drivers/vhost/net.c | 92 +++++++++++++++--------
include/linux/if_tap.h | 16 +++-
include/linux/if_tun.h | 18 ++++-
include/linux/ptr_ring.h | 42 +++++++++++
6 files changed, 372 insertions(+), 56 deletions(-)
--
2.43.0
On Thu, Nov 20, 2025 at 11:30 PM Simon Schippers
<simon.schippers@tu-dortmund.de> wrote:
>
> This patch series deals with tun/tap and vhost-net which drop incoming
> SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> patch series, the associated netdev queue is stopped before this happens.
> This allows the connected qdisc to function correctly as reported by [1]
> and improves application-layer performance, see our paper [2]. Meanwhile
> the theoretical performance differs only slightly:
>
> +--------------------------------+-----------+----------+
> | pktgen benchmarks to Debian VM | Stock | Patched |
> | i5 6300HQ, 20M packets | | |
> +-----------------+--------------+-----------+----------+
> | TAP | Transmitted | 195 Kpps | 183 Kpps |
> | +--------------+-----------+----------+
> | | Lost | 1615 Kpps | 0 pps |
> +-----------------+--------------+-----------+----------+
> | TAP+vhost_net | Transmitted | 589 Kpps | 588 Kpps |
> | +--------------+-----------+----------+
> | | Lost | 1164 Kpps | 0 pps |
> +-----------------+--------------+-----------+----------+
PPS drops somehow for TAP, any reason for that?
Btw, I had some questions:
1) most of the patches in this series would introduce non-trivial
impact on the performance, we probably need to benchmark each or split
the series. What's more, we need to run TCP benchmarks
(throughput/latency) as well as pktgen to see the real impact
2) I see this:
        if (unlikely(tun_ring_produce(&tfile->tx_ring, queue, skb))) {
                drop_reason = SKB_DROP_REASON_FULL_RING;
                goto drop;
        }
So there could still be packet drop? Or is this related to the XDP path?
3) The LLTX change would have performance implications, but the
benchmark doesn't cover the case where multiple transmissions are done
in parallel
4) After the LLTX change, it seems we've lost the synchronization with
the XDP_TX and XDP_REDIRECT path?
5) The series introduces various ptr_ring helpers with lots of
ordering stuff which is complicated, I wonder if we could first have a
simple patch to implement the zero packet loss
>
> This patch series includes tun/tap, and vhost-net because they share
> logic. Adjusting only one of them would break the others. Therefore, the
> patch series is structured as follows:
> 1+2: new ptr_ring helpers for 3
> 3: tun/tap: tun/tap: add synchronized ring produce/consume with queue
> management
> 4+5+6: tun/tap: ptr_ring wrappers and other helpers to be called by
> vhost-net
> 7: tun/tap & vhost-net: only now use the previous implemented functions to
> not break git bisect
> 8: tun/tap: drop get ring exports (not used anymore)
>
> Possible future work:
> - Introduction of Byte Queue Limits as suggested by Stephen Hemminger
This seems to be not easy. The tx completion depends on the userspace behaviour.
> - Adaption of the netdev queue flow control for ipvtap & macvtap
>
> [1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
> [2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
>
Thanks
On 11/21/25 07:19, Jason Wang wrote:
> On Thu, Nov 20, 2025 at 11:30 PM Simon Schippers
> <simon.schippers@tu-dortmund.de> wrote:
>>
>> This patch series deals with tun/tap and vhost-net which drop incoming
>> SKBs whenever their internal ptr_ring buffer is full. Instead, with this
>> patch series, the associated netdev queue is stopped before this happens.
>> This allows the connected qdisc to function correctly as reported by [1]
>> and improves application-layer performance, see our paper [2]. Meanwhile
>> the theoretical performance differs only slightly:
>>
>> +--------------------------------+-----------+----------+
>> | pktgen benchmarks to Debian VM | Stock | Patched |
>> | i5 6300HQ, 20M packets | | |
>> +-----------------+--------------+-----------+----------+
>> | TAP | Transmitted | 195 Kpps | 183 Kpps |
>> | +--------------+-----------+----------+
>> | | Lost | 1615 Kpps | 0 pps |
>> +-----------------+--------------+-----------+----------+
>> | TAP+vhost_net | Transmitted | 589 Kpps | 588 Kpps |
>> | +--------------+-----------+----------+
>> | | Lost | 1164 Kpps | 0 pps |
>> +-----------------+--------------+-----------+----------+
>
Hi Jason,
thank you for your reply!
> PPS drops somehow for TAP, any reason for that?
I have no explicit explanation for that except the general overhead
introduced by this implementation.
>
> Btw, I had some questions:
>
> 1) most of the patches in this series would introduce non-trivial
> impact on the performance, we probably need to benchmark each or split
> the series. What's more we need to run TCP benchmark
> (throughput/latency) as well as pktgen see the real impact
What could be done, IMO, is to activate tun_ring_consume() /
tap_ring_consume() before enabling tun_ring_produce(). Then we could see
if this alone drops performance.
For TCP benchmarks, you mean userspace performance like iperf3 between a
host and a guest system?
>
> 2) I see this:
>
>         if (unlikely(tun_ring_produce(&tfile->tx_ring, queue, skb))) {
>                 drop_reason = SKB_DROP_REASON_FULL_RING;
>                 goto drop;
>         }
>
> So there could still be packet drop? Or is this related to the XDP path?
Yes, there can be packet drops after a ptr_ring resize or a ptr_ring
unconsume. Since those two happen so rarely, I figured we should just
drop in this case.
>
> 3) The LLTX change would have performance implications, but the
> benmark doesn't cover the case where multiple transmission is done in
> parallel
Do you mean multiple applications that produce traffic and potentially
run on different CPUs?
>
> 4) After the LLTX change, it seems we've lost the synchronization with
> the XDP_TX and XDP_REDIRECT path?
I must admit I did not take a look at XDP and cannot really judge
if/how LLTX has an impact on XDP. But from my point of view,
__netif_tx_lock(), rather than __netif_tx_acquire(), is executed before
the tun_net_xmit() call, and I do not see an impact on XDP, which calls
its own methods.
>
> 5) The series introduces various ptr_ring helpers with lots of
> ordering stuff which is complicated, I wonder if we first have a
> simple patch to implement the zero packet loss
I personally don't see how a simpler patch is possible without using
discouraged practices like returning NETDEV_TX_BUSY in tun_net_xmit or
spin locking between producer and consumer. But I am open to
suggestions :)
>
>>
>> This patch series includes tun/tap, and vhost-net because they share
>> logic. Adjusting only one of them would break the others. Therefore, the
>> patch series is structured as follows:
>> 1+2: new ptr_ring helpers for 3
>> 3: tun/tap: tun/tap: add synchronized ring produce/consume with queue
>> management
>> 4+5+6: tun/tap: ptr_ring wrappers and other helpers to be called by
>> vhost-net
>> 7: tun/tap & vhost-net: only now use the previous implemented functions to
>> not break git bisect
>> 8: tun/tap: drop get ring exports (not used anymore)
>>
>> Possible future work:
>> - Introduction of Byte Queue Limits as suggested by Stephen Hemminger
>
> This seems to be not easy. The tx completion depends on the userspace behaviour.
I agree, but I really would like to reduce the buffer bloat caused by the
default 500 TUN / 1000 TAP packet queue without losing performance.
>
>> - Adaption of the netdev queue flow control for ipvtap & macvtap
>>
>> [1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
>> [2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
>>
>
> Thanks
>
Thanks! :)
On Fri, Nov 21, 2025 at 10:22:54AM +0100, Simon Schippers wrote:
> I agree, but I really would like to reduce the buffer bloat caused by the
> default 500 TUN / 1000 TAP packet queue without losing performance.

that default is part of the userspace API and can't be changed.
just change whatever userspace is creating your device.

--
MST
On 11/26/25 08:15, Michael S. Tsirkin wrote:
> On Fri, Nov 21, 2025 at 10:22:54AM +0100, Simon Schippers wrote:
>> I agree, but I really would like to reduce the buffer bloat caused by the
>> default 500 TUN / 1000 TAP packet queue without losing performance.
>
> that default is part of the userspace API and can't be changed.
> just change whatever userspace is creating your device.

Yes, but I'm thinking about introducing a new interface flag like
IFF_BQL. However, as noted earlier, there are significant implementation
challenges. I think something like this can benefit VPNs on mobile
devices, where the throughput varies between a few Mbit/s (a small
TUN/TAP queue is fine) and multiple Gbit/s (a bigger queue is needed).
On Fri, Nov 21, 2025 at 5:23 PM Simon Schippers
<simon.schippers@tu-dortmund.de> wrote:
>
> On 11/21/25 07:19, Jason Wang wrote:
> > On Thu, Nov 20, 2025 at 11:30 PM Simon Schippers
> > <simon.schippers@tu-dortmund.de> wrote:
> >>
> >> This patch series deals with tun/tap and vhost-net which drop incoming
> >> SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> >> patch series, the associated netdev queue is stopped before this happens.
> >> This allows the connected qdisc to function correctly as reported by [1]
> >> and improves application-layer performance, see our paper [2]. Meanwhile
> >> the theoretical performance differs only slightly:
> >>
> >> +--------------------------------+-----------+----------+
> >> | pktgen benchmarks to Debian VM | Stock | Patched |
> >> | i5 6300HQ, 20M packets | | |
> >> +-----------------+--------------+-----------+----------+
> >> | TAP | Transmitted | 195 Kpps | 183 Kpps |
> >> | +--------------+-----------+----------+
> >> | | Lost | 1615 Kpps | 0 pps |
> >> +-----------------+--------------+-----------+----------+
> >> | TAP+vhost_net | Transmitted | 589 Kpps | 588 Kpps |
> >> | +--------------+-----------+----------+
> >> | | Lost | 1164 Kpps | 0 pps |
> >> +-----------------+--------------+-----------+----------+
> >
>
> Hi Jason,
>
> thank you for your reply!
>
> > PPS drops somehow for TAP, any reason for that?
>
> I have no explicit explanation for that except general overheads coming
> with this implementation.
It would be better to fix that.
>
> >
> > Btw, I had some questions:
> >
> > 1) most of the patches in this series would introduce non-trivial
> > impact on the performance, we probably need to benchmark each or split
> > the series. What's more we need to run TCP benchmark
> > (throughput/latency) as well as pktgen see the real impact
>
> What could be done, IMO, is to activate tun_ring_consume() /
> tap_ring_consume() before enabling tun_ring_produce(). Then we could see
> if this alone drops performance.
>
> For TCP benchmarks, you mean userspace performance like iperf3 between a
> host and a guest system?
Yes,
>
> >
> > 2) I see this:
> >
> > if (unlikely(tun_ring_produce(&tfile->tx_ring, queue, skb))) {
> > drop_reason = SKB_DROP_REASON_FULL_RING;
> > goto drop;
> > }
> >
> > So there could still be packet drop? Or is this related to the XDP path?
>
> Yes, there can be packet drops after a ptr_ring resize or a ptr_ring
> unconsume. Since those two happen so rarely, I figured we should just
> drop in this case.
>
> >
> > 3) The LLTX change would have performance implications, but the
> > benmark doesn't cover the case where multiple transmission is done in
> > parallel
>
> Do you mean multiple applications that produce traffic and potentially
> run on different CPUs?
Yes.
>
> >
> > 4) After the LLTX change, it seems we've lost the synchronization with
> > the XDP_TX and XDP_REDIRECT path?
>
> I must admit I did not take a look at XDP and cannot really judge if/how
> lltx has an impact on XDP. But from my point of view, __netif_tx_lock()
> instead of __netif_tx_acquire(), is executed before the tun_net_xmit()
> call and I do not see the impact for XDP, which calls its own methods.
Without LLTX tun_net_xmit is protected by tx lock but it is not the
case of tun_xdp_xmit. This is because, unlike other devices, tun
doesn't have a dedicated TX queue for XDP, so the queue is shared by
both XDP and skb. So XDP xmit path needs to be protected with tx lock
as well, and since we don't have queue discipline for XDP, it means we
could still drop packets when XDP is enabled. I'm not sure this would
defeat the whole idea or not.
> >
> > 5) The series introduces various ptr_ring helpers with lots of
> > ordering stuff which is complicated, I wonder if we first have a
> > simple patch to implement the zero packet loss
>
> I personally don't see how a simpler patch is possible without using
> discouraged practices like returning NETDEV_TX_BUSY in tun_net_xmit or
> spin locking between producer and consumer. But I am open for
> suggestions :)
I see NETDEV_TX_BUSY is used by veth:
static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb)
{
        if (unlikely(ptr_ring_produce(&rq->xdp_ring, skb)))
                return NETDEV_TX_BUSY; /* signal qdisc layer */

        return NET_RX_SUCCESS; /* same as NETDEV_TX_OK */
}
Maybe it would be simpler to start from that (probably with a new tun->flags?).
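For reference, the veth-style behavior can be modeled in userspace like this (constants and names are illustrative, not the kernel definitions): a full ring makes xmit report "busy", so the qdisc layer holds on to the packet and retries instead of the driver dropping it.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the NETDEV_TX_BUSY approach: a full ring pushes
 * back on the caller instead of dropping the packet. */
enum { MODEL_TX_OK = 0, MODEL_TX_BUSY = 1 };

#define MODEL_RING_SIZE 2
static void *model_ring[MODEL_RING_SIZE];
static size_t model_count;

static int model_xmit(void *skb)
{
	if (model_count == MODEL_RING_SIZE)
		return MODEL_TX_BUSY; /* caller requeues and retries */
	model_ring[model_count++] = skb;
	return MODEL_TX_OK;
}
```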
Thanks
>
> >
> >>
> >> This patch series includes tun/tap, and vhost-net because they share
> >> logic. Adjusting only one of them would break the others. Therefore, the
> >> patch series is structured as follows:
> >> 1+2: new ptr_ring helpers for 3
> >> 3: tun/tap: tun/tap: add synchronized ring produce/consume with queue
> >> management
> >> 4+5+6: tun/tap: ptr_ring wrappers and other helpers to be called by
> >> vhost-net
> >> 7: tun/tap & vhost-net: only now use the previous implemented functions to
> >> not break git bisect
> >> 8: tun/tap: drop get ring exports (not used anymore)
> >>
> >> Possible future work:
> >> - Introduction of Byte Queue Limits as suggested by Stephen Hemminger
> >
> > This seems to be not easy. The tx completion depends on the userspace behaviour.
>
> I agree, but I really would like to reduce the buffer bloat caused by the
> default 500 TUN / 1000 TAP packet queue without losing performance.
>
> >
> >> - Adaption of the netdev queue flow control for ipvtap & macvtap
> >>
> >> [1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
> >> [2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
> >>
> >
> > Thanks
> >
>
> Thanks! :)
>
On 11/24/25 02:04, Jason Wang wrote:
> On Fri, Nov 21, 2025 at 5:23 PM Simon Schippers
> <simon.schippers@tu-dortmund.de> wrote:
>>
>> On 11/21/25 07:19, Jason Wang wrote:
>>> On Thu, Nov 20, 2025 at 11:30 PM Simon Schippers
>>> <simon.schippers@tu-dortmund.de> wrote:
>>>>
>>>> This patch series deals with tun/tap and vhost-net which drop incoming
>>>> SKBs whenever their internal ptr_ring buffer is full. Instead, with this
>>>> patch series, the associated netdev queue is stopped before this happens.
>>>> This allows the connected qdisc to function correctly as reported by [1]
>>>> and improves application-layer performance, see our paper [2]. Meanwhile
>>>> the theoretical performance differs only slightly:
>>>>
>>>> +--------------------------------+-----------+----------+
>>>> | pktgen benchmarks to Debian VM | Stock | Patched |
>>>> | i5 6300HQ, 20M packets | | |
>>>> +-----------------+--------------+-----------+----------+
>>>> | TAP | Transmitted | 195 Kpps | 183 Kpps |
>>>> | +--------------+-----------+----------+
>>>> | | Lost | 1615 Kpps | 0 pps |
>>>> +-----------------+--------------+-----------+----------+
>>>> | TAP+vhost_net | Transmitted | 589 Kpps | 588 Kpps |
>>>> | +--------------+-----------+----------+
>>>> | | Lost | 1164 Kpps | 0 pps |
>>>> +-----------------+--------------+-----------+----------+
>>>
>>
>> Hi Jason,
>>
>> thank you for your reply!
>>
>>> PPS drops somehow for TAP, any reason for that?
>>
>> I have no explicit explanation for that except general overheads coming
>> with this implementation.
>
> It would be better to fix that.
>
>>
>>>
>>> Btw, I had some questions:
>>>
>>> 1) most of the patches in this series would introduce non-trivial
>>> impact on the performance, we probably need to benchmark each or split
>>> the series. What's more we need to run TCP benchmark
>>> (throughput/latency) as well as pktgen see the real impact
>>
>> What could be done, IMO, is to activate tun_ring_consume() /
>> tap_ring_consume() before enabling tun_ring_produce(). Then we could see
>> if this alone drops performance.
>>
>> For TCP benchmarks, you mean userspace performance like iperf3 between a
>> host and a guest system?
>
> Yes,
>
>>
>>>
>>> 2) I see this:
>>>
>>> if (unlikely(tun_ring_produce(&tfile->tx_ring, queue, skb))) {
>>> drop_reason = SKB_DROP_REASON_FULL_RING;
>>> goto drop;
>>> }
>>>
>>> So there could still be packet drop? Or is this related to the XDP path?
>>
>> Yes, there can be packet drops after a ptr_ring resize or a ptr_ring
>> unconsume. Since those two happen so rarely, I figured we should just
>> drop in this case.
>>
>>>
>>> 3) The LLTX change would have performance implications, but the
>>> benmark doesn't cover the case where multiple transmission is done in
>>> parallel
>>
>> Do you mean multiple applications that produce traffic and potentially
>> run on different CPUs?
>
> Yes.
>
>>
>>>
>>> 4) After the LLTX change, it seems we've lost the synchronization with
>>> the XDP_TX and XDP_REDIRECT path?
>>
>> I must admit I did not take a look at XDP and cannot really judge if/how
>> lltx has an impact on XDP. But from my point of view, __netif_tx_lock()
>> instead of __netif_tx_acquire(), is executed before the tun_net_xmit()
>> call and I do not see the impact for XDP, which calls its own methods.
>
> Without LLTX tun_net_xmit is protected by tx lock but it is not the
> case of tun_xdp_xmit. This is because, unlike other devices, tun
> doesn't have a dedicated TX queue for XDP, so the queue is shared by
> both XDP and skb. So XDP xmit path needs to be protected with tx lock
> as well, and since we don't have queue discipline for XDP, it means we
> could still drop packets when XDP is enabled. I'm not sure this would
> defeat the whole idea or not.
Good point.
>
>>>
>>> 5) The series introduces various ptr_ring helpers with lots of
>>> ordering stuff which is complicated, I wonder if we first have a
>>> simple patch to implement the zero packet loss
>>
>> I personally don't see how a simpler patch is possible without using
>> discouraged practices like returning NETDEV_TX_BUSY in tun_net_xmit or
>> spin locking between producer and consumer. But I am open for
>> suggestions :)
>
> I see NETDEV_TX_BUSY is used by veth:
>
> static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb)
> {
>         if (unlikely(ptr_ring_produce(&rq->xdp_ring, skb)))
>                 return NETDEV_TX_BUSY; /* signal qdisc layer */
>
>         return NET_RX_SUCCESS; /* same as NETDEV_TX_OK */
> }
>
> Maybe it would be simpler to start from that (probably with a new tun->flags?).
>
> Thanks
Do you mean that this patchset could be implemented using the same
approach that was used for veth in [1]?
This could then also fix the XDP path.
But is returning NETDEV_TX_BUSY fine in our case?
Do you mean a flag that enables or disables the no-drop behavior?
Thanks!
[1] Link: https://lore.kernel.org/netdev/174559288731.827981.8748257839971869213.stgit@firesoul/T/#u
>
>>
>>>
>>>>
>>>> This patch series includes tun/tap, and vhost-net because they share
>>>> logic. Adjusting only one of them would break the others. Therefore, the
>>>> patch series is structured as follows:
>>>> 1+2: new ptr_ring helpers for 3
>>>> 3: tun/tap: tun/tap: add synchronized ring produce/consume with queue
>>>> management
>>>> 4+5+6: tun/tap: ptr_ring wrappers and other helpers to be called by
>>>> vhost-net
>>>> 7: tun/tap & vhost-net: only now use the previous implemented functions to
>>>> not break git bisect
>>>> 8: tun/tap: drop get ring exports (not used anymore)
>>>>
>>>> Possible future work:
>>>> - Introduction of Byte Queue Limits as suggested by Stephen Hemminger
>>>
>>> This seems to be not easy. The tx completion depends on the userspace behaviour.
>>
>> I agree, but I really would like to reduce the buffer bloat caused by the
>> default 500 TUN / 1000 TAP packet queue without losing performance.
>>
>>>
>>>> - Adaption of the netdev queue flow control for ipvtap & macvtap
>>>>
>>>> [1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
>>>> [2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
>>>>
>>>
>>> Thanks
>>>
>>
>> Thanks! :)
>>
>
On Mon, Nov 24, 2025 at 5:20 PM Simon Schippers
<simon.schippers@tu-dortmund.de> wrote:
>
> On 11/24/25 02:04, Jason Wang wrote:
> > On Fri, Nov 21, 2025 at 5:23 PM Simon Schippers
> > <simon.schippers@tu-dortmund.de> wrote:
> >>
> >> On 11/21/25 07:19, Jason Wang wrote:
> >>> On Thu, Nov 20, 2025 at 11:30 PM Simon Schippers
> >>> <simon.schippers@tu-dortmund.de> wrote:
> >>>>
> >>>> This patch series deals with tun/tap and vhost-net which drop incoming
> >>>> SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> >>>> patch series, the associated netdev queue is stopped before this happens.
> >>>> This allows the connected qdisc to function correctly as reported by [1]
> >>>> and improves application-layer performance, see our paper [2]. Meanwhile
> >>>> the theoretical performance differs only slightly:
> >>>>
> >>>> +--------------------------------+-----------+----------+
> >>>> | pktgen benchmarks to Debian VM | Stock | Patched |
> >>>> | i5 6300HQ, 20M packets | | |
> >>>> +-----------------+--------------+-----------+----------+
> >>>> | TAP | Transmitted | 195 Kpps | 183 Kpps |
> >>>> | +--------------+-----------+----------+
> >>>> | | Lost | 1615 Kpps | 0 pps |
> >>>> +-----------------+--------------+-----------+----------+
> >>>> | TAP+vhost_net | Transmitted | 589 Kpps | 588 Kpps |
> >>>> | +--------------+-----------+----------+
> >>>> | | Lost | 1164 Kpps | 0 pps |
> >>>> +-----------------+--------------+-----------+----------+
> >>>
> >>
> >> Hi Jason,
> >>
> >> thank you for your reply!
> >>
> >>> PPS drops somehow for TAP, any reason for that?
> >>
> >> I have no explicit explanation for that except general overheads coming
> >> with this implementation.
> >
> > It would be better to fix that.
> >
> >>
> >>>
> >>> Btw, I had some questions:
> >>>
> >>> 1) most of the patches in this series would introduce non-trivial
> >>> impact on the performance, we probably need to benchmark each or split
> >>> the series. What's more we need to run TCP benchmark
> >>> (throughput/latency) as well as pktgen see the real impact
> >>
> >> What could be done, IMO, is to activate tun_ring_consume() /
> >> tap_ring_consume() before enabling tun_ring_produce(). Then we could see
> >> if this alone drops performance.
> >>
> >> For TCP benchmarks, you mean userspace performance like iperf3 between a
> >> host and a guest system?
> >
> > Yes,
> >
> >>
> >>>
> >>> 2) I see this:
> >>>
> >>> if (unlikely(tun_ring_produce(&tfile->tx_ring, queue, skb))) {
> >>> drop_reason = SKB_DROP_REASON_FULL_RING;
> >>> goto drop;
> >>> }
> >>>
> >>> So there could still be packet drop? Or is this related to the XDP path?
> >>
> >> Yes, there can be packet drops after a ptr_ring resize or a ptr_ring
> >> unconsume. Since those two happen so rarely, I figured we should just
> >> drop in this case.
> >>
> >>>
> >>> 3) The LLTX change would have performance implications, but the
> >>> benmark doesn't cover the case where multiple transmission is done in
> >>> parallel
> >>
> >> Do you mean multiple applications that produce traffic and potentially
> >> run on different CPUs?
> >
> > Yes.
> >
> >>
> >>>
> >>> 4) After the LLTX change, it seems we've lost the synchronization with
> >>> the XDP_TX and XDP_REDIRECT path?
> >>
> >> I must admit I did not take a look at XDP and cannot really judge if/how
> >> lltx has an impact on XDP. But from my point of view, __netif_tx_lock()
> >> instead of __netif_tx_acquire(), is executed before the tun_net_xmit()
> >> call and I do not see the impact for XDP, which calls its own methods.
> >
> > Without LLTX tun_net_xmit is protected by tx lock but it is not the
> > case of tun_xdp_xmit. This is because, unlike other devices, tun
> > doesn't have a dedicated TX queue for XDP, so the queue is shared by
> > both XDP and skb. So XDP xmit path needs to be protected with tx lock
> > as well, and since we don't have queue discipline for XDP, it means we
> > could still drop packets when XDP is enabled. I'm not sure this would
> > defeat the whole idea or not.
>
> Good point.
>
> >
> >>>
> >>> 5) The series introduces various ptr_ring helpers with lots of
> >>> ordering stuff which is complicated, I wonder if we first have a
> >>> simple patch to implement the zero packet loss
> >>
> >> I personally don't see how a simpler patch is possible without using
> >> discouraged practices like returning NETDEV_TX_BUSY in tun_net_xmit or
> >> spin locking between producer and consumer. But I am open for
> >> suggestions :)
> >
> > I see NETDEV_TX_BUSY is used by veth:
> >
> > static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb)
> > {
> > if (unlikely(ptr_ring_produce(&rq->xdp_ring, skb)))
> > return NETDEV_TX_BUSY; /* signal qdisc layer */
> >
> > return NET_RX_SUCCESS; /* same as NETDEV_TX_OK */
> > }
> >
> > Maybe it would be simpler to start from that (probably with a new tun->flags?).
> >
> > Thanks
>
> Do you mean that this patchset could be implemented using the same
> approach that was used for veth in [1]?
> This could then also fix the XDP path.
I think so.
>
> But is returning NETDEV_TX_BUSY fine in our case?
If it helps to avoid packet drops. But I'm not sure if a qdisc is a must
in your case.
>
> Do you mean a flag that enables or disables the no-drop behavior?
Yes, via a new flag that could be set via TUNSETIFF.
Thanks
>
> Thanks!
>
> [1] Link: https://lore.kernel.org/netdev/174559288731.827981.8748257839971869213.stgit@firesoul/T/#u
>
> >
> >>
> >>>
> >>>>
> >>>> This patch series includes tun/tap and vhost-net because they share
> >>>> logic. Adjusting only one of them would break the others. Therefore,
> >>>> the patch series is structured as follows:
> >>>> 1+2: new ptr_ring helpers for 3
> >>>> 3: tun/tap: add synchronized ring produce/consume with queue
> >>>> management
> >>>> 4+5+6: tun/tap: ptr_ring wrappers and other helpers to be called by
> >>>> vhost-net
> >>>> 7: tun/tap & vhost-net: only now use the previously implemented
> >>>> functions to not break git bisect
> >>>> 8: tun/tap: drop get ring exports (not used anymore)
> >>>>
> >>>> Possible future work:
> >>>> - Introduction of Byte Queue Limits as suggested by Stephen Hemminger
> >>>
> >>> This does not seem easy. The TX completion depends on the userspace behaviour.
> >>
> >> I agree, but I really would like to reduce the buffer bloat caused by the
> >> default 500 TUN / 1000 TAP packet queue without losing performance.
> >>
> >>>
> >>>> - Adaption of the netdev queue flow control for ipvtap & macvtap
> >>>>
> >>>> [1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
> >>>> [2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
> >>>>
> >>>
> >>> Thanks
> >>>
> >>
> >> Thanks! :)
> >>
> >
>
On 11/25/25 02:34, Jason Wang wrote:
> On Mon, Nov 24, 2025 at 5:20 PM Simon Schippers
> <simon.schippers@tu-dortmund.de> wrote:
>>
>> On 11/24/25 02:04, Jason Wang wrote:
>>> On Fri, Nov 21, 2025 at 5:23 PM Simon Schippers
>>> <simon.schippers@tu-dortmund.de> wrote:
>>>>
>>>> On 11/21/25 07:19, Jason Wang wrote:
>>>>> On Thu, Nov 20, 2025 at 11:30 PM Simon Schippers
>>>>> <simon.schippers@tu-dortmund.de> wrote:
>>>>>>
>>>>>> This patch series deals with tun/tap and vhost-net which drop incoming
>>>>>> SKBs whenever their internal ptr_ring buffer is full. Instead, with this
>>>>>> patch series, the associated netdev queue is stopped before this happens.
>>>>>> This allows the connected qdisc to function correctly as reported by [1]
>>>>>> and improves application-layer performance, see our paper [2]. Meanwhile
>>>>>> the theoretical performance differs only slightly:
>>>>>>
>>>>>> +--------------------------------+-----------+----------+
>>>>>> | pktgen benchmarks to Debian VM | Stock | Patched |
>>>>>> | i5 6300HQ, 20M packets | | |
>>>>>> +-----------------+--------------+-----------+----------+
>>>>>> | TAP | Transmitted | 195 Kpps | 183 Kpps |
>>>>>> | +--------------+-----------+----------+
>>>>>> | | Lost | 1615 Kpps | 0 pps |
>>>>>> +-----------------+--------------+-----------+----------+
>>>>>> | TAP+vhost_net | Transmitted | 589 Kpps | 588 Kpps |
>>>>>> | +--------------+-----------+----------+
>>>>>> | | Lost | 1164 Kpps | 0 pps |
>>>>>> +-----------------+--------------+-----------+----------+
>>>>>
>>>>
>>>> Hi Jason,
>>>>
>>>> thank you for your reply!
>>>>
>>>>> PPS drops somehow for TAP, any reason for that?
>>>>
>>>> I have no definite explanation for that, other than the general
>>>> overhead introduced by this implementation.
>>>
>>> It would be better to fix that.
>>>
>>>>
>>>>>
>>>>> Btw, I had some questions:
>>>>>
>>>>> 1) Most of the patches in this series would introduce a non-trivial
>>>>> performance impact, so we probably need to benchmark each one or
>>>>> split the series. What's more, we need to run TCP benchmarks
>>>>> (throughput/latency) as well as pktgen to see the real impact.
>>>>
>>>> What could be done, IMO, is to activate tun_ring_consume() /
>>>> tap_ring_consume() before enabling tun_ring_produce(). Then we could
>>>> see if this alone degrades performance.
>>>>
>>>> For TCP benchmarks, you mean userspace performance like iperf3 between a
>>>> host and a guest system?
>>>
>>> Yes,
>>>
>>>>
>>>>>
>>>>> 2) I see this:
>>>>>
>>>>> if (unlikely(tun_ring_produce(&tfile->tx_ring, queue, skb))) {
>>>>> 	drop_reason = SKB_DROP_REASON_FULL_RING;
>>>>> 	goto drop;
>>>>> }
>>>>>
>>>>> So there could still be packet drop? Or is this related to the XDP path?
>>>>
>>>> Yes, there can be packet drops after a ptr_ring resize or a ptr_ring
>>>> unconsume. Since those two happen so rarely, I figured we should just
>>>> drop in this case.
>>>>
>>>>>
>>>>> 3) The LLTX change would have performance implications, but the
>>>>> benchmark doesn't cover the case where multiple transmissions are
>>>>> done in parallel
>>>>
>>>> Do you mean multiple applications that produce traffic and potentially
>>>> run on different CPUs?
>>>
>>> Yes.
>>>
>>>>
>>>>>
>>>>> 4) After the LLTX change, it seems we've lost the synchronization with
>>>>> the XDP_TX and XDP_REDIRECT path?
>>>>
>>>> I must admit I did not take a look at XDP and cannot really judge
>>>> if/how LLTX has an impact on XDP. But from my point of view,
>>>> __netif_tx_lock(), instead of __netif_tx_acquire(), is executed before
>>>> the tun_net_xmit() call, and I do not see the impact for XDP, which
>>>> calls its own methods.
>>>
>>> Without LLTX, tun_net_xmit is protected by the tx lock, but this is
>>> not the case for tun_xdp_xmit. This is because, unlike other devices,
>>> tun doesn't have a dedicated TX queue for XDP, so the queue is shared
>>> by both XDP and skb. So the XDP xmit path needs to be protected with
>>> the tx lock as well, and since we don't have a queue discipline for
>>> XDP, we could still drop packets when XDP is enabled. I'm not sure
>>> whether this would defeat the whole idea.
>>
>> Good point.
>>
>>>
>>>>>
>>>>> 5) The series introduces various ptr_ring helpers with lots of
>>>>> ordering stuff which is complicated, I wonder if we first have a
>>>>> simple patch to implement the zero packet loss
>>>>
>>>> I personally don't see how a simpler patch is possible without using
>>>> discouraged practices like returning NETDEV_TX_BUSY in tun_net_xmit or
>>>> spin locking between producer and consumer. But I am open to
>>>> suggestions :)
>>>
>>> I see NETDEV_TX_BUSY is used by veth:
>>>
>>> static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb)
>>> {
>>> 	if (unlikely(ptr_ring_produce(&rq->xdp_ring, skb)))
>>> 		return NETDEV_TX_BUSY; /* signal qdisc layer */
>>>
>>> 	return NET_RX_SUCCESS; /* same as NETDEV_TX_OK */
>>> }
>>>
>>> Maybe it would be simpler to start from that (probably with a new tun->flags?).
>>>
>>> Thanks
>>
>> Do you mean that this patchset could be implemented using the same
>> approach that was used for veth in [1]?
>> This could then also fix the XDP path.
>
> I think so.
Okay, I will do so and submit a v7 when net-next opens again for 6.19.
>
>>
>> But is returning NETDEV_TX_BUSY fine in our case?
>
> If it helps to avoid packet drops. But I'm not sure if a qdisc is a must
> in your case.
I will try to avoid returning it.
When no qdisc is connected, I will just drop like veth does.
>
>>
>> Do you mean a flag that enables or disables the no-drop behavior?
>
> Yes, via a new flag that could be set via TUNSETIFF.
>
> Thanks
I am not a fan of that, since I cannot imagine a use case where
dropping packets is desired. veth does not introduce a flag either.
Of course, if there is a major performance degradation, it makes sense.
But I will benchmark it, and we will see.
Thank you!
>
>>
>> Thanks!
>>
>> [1] Link: https://lore.kernel.org/netdev/174559288731.827981.8748257839971869213.stgit@firesoul/T/#u
>>
>>>
>>>>
>>>>>
>>>>>>
>>>>>> This patch series includes tun/tap and vhost-net because they share
>>>>>> logic. Adjusting only one of them would break the others. Therefore,
>>>>>> the patch series is structured as follows:
>>>>>> 1+2: new ptr_ring helpers for 3
>>>>>> 3: tun/tap: add synchronized ring produce/consume with queue
>>>>>> management
>>>>>> 4+5+6: tun/tap: ptr_ring wrappers and other helpers to be called by
>>>>>> vhost-net
>>>>>> 7: tun/tap & vhost-net: only now use the previously implemented
>>>>>> functions to not break git bisect
>>>>>> 8: tun/tap: drop get ring exports (not used anymore)
>>>>>>
>>>>>> Possible future work:
>>>>>> - Introduction of Byte Queue Limits as suggested by Stephen Hemminger
>>>>>
>>>>> This does not seem easy. The TX completion depends on the userspace behaviour.
>>>>
>>>> I agree, but I really would like to reduce the buffer bloat caused by the
>>>> default 500 TUN / 1000 TAP packet queue without losing performance.
>>>>
>>>>>
>>>>>> - Adaption of the netdev queue flow control for ipvtap & macvtap
>>>>>>
>>>>>> [1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
>>>>>> [2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
>>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>
>>>> Thanks! :)
>>>>
>>>
>>
>
On Tue, Nov 25, 2025 at 10:05 PM Simon Schippers
<simon.schippers@tu-dortmund.de> wrote:
>
> On 11/25/25 02:34, Jason Wang wrote:
> > On Mon, Nov 24, 2025 at 5:20 PM Simon Schippers
> > <simon.schippers@tu-dortmund.de> wrote:
> >>
> >> On 11/24/25 02:04, Jason Wang wrote:
> >>> On Fri, Nov 21, 2025 at 5:23 PM Simon Schippers
> >>> <simon.schippers@tu-dortmund.de> wrote:
> >>>>
> >>>> On 11/21/25 07:19, Jason Wang wrote:
> >>>>> On Thu, Nov 20, 2025 at 11:30 PM Simon Schippers
> >>>>> <simon.schippers@tu-dortmund.de> wrote:
> >>>>>>
> >>>>>> This patch series deals with tun/tap and vhost-net which drop incoming
> >>>>>> SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> >>>>>> patch series, the associated netdev queue is stopped before this happens.
> >>>>>> This allows the connected qdisc to function correctly as reported by [1]
> >>>>>> and improves application-layer performance, see our paper [2]. Meanwhile
> >>>>>> the theoretical performance differs only slightly:
> >>>>>>
> >>>>>> +--------------------------------+-----------+----------+
> >>>>>> | pktgen benchmarks to Debian VM | Stock | Patched |
> >>>>>> | i5 6300HQ, 20M packets | | |
> >>>>>> +-----------------+--------------+-----------+----------+
> >>>>>> | TAP | Transmitted | 195 Kpps | 183 Kpps |
> >>>>>> | +--------------+-----------+----------+
> >>>>>> | | Lost | 1615 Kpps | 0 pps |
> >>>>>> +-----------------+--------------+-----------+----------+
> >>>>>> | TAP+vhost_net | Transmitted | 589 Kpps | 588 Kpps |
> >>>>>> | +--------------+-----------+----------+
> >>>>>> | | Lost | 1164 Kpps | 0 pps |
> >>>>>> +-----------------+--------------+-----------+----------+
> >>>>>
> >>>>
> >>>> Hi Jason,
> >>>>
> >>>> thank you for your reply!
> >>>>
> >>>>> PPS drops somehow for TAP, any reason for that?
> >>>>
> >>>> I have no definite explanation for that, other than the general
> >>>> overhead introduced by this implementation.
> >>>
> >>> It would be better to fix that.
> >>>
> >>>>
> >>>>>
> >>>>> Btw, I had some questions:
> >>>>>
> >>>>> 1) Most of the patches in this series would introduce a non-trivial
> >>>>> performance impact, so we probably need to benchmark each one or
> >>>>> split the series. What's more, we need to run TCP benchmarks
> >>>>> (throughput/latency) as well as pktgen to see the real impact.
> >>>>
> >>>> What could be done, IMO, is to activate tun_ring_consume() /
> >>>> tap_ring_consume() before enabling tun_ring_produce(). Then we could
> >>>> see if this alone degrades performance.
> >>>>
> >>>> For TCP benchmarks, you mean userspace performance like iperf3 between a
> >>>> host and a guest system?
> >>>
> >>> Yes,
> >>>
> >>>>
> >>>>>
> >>>>> 2) I see this:
> >>>>>
> >>>>> if (unlikely(tun_ring_produce(&tfile->tx_ring, queue, skb))) {
> >>>>> 	drop_reason = SKB_DROP_REASON_FULL_RING;
> >>>>> 	goto drop;
> >>>>> }
> >>>>>
> >>>>> So there could still be packet drop? Or is this related to the XDP path?
> >>>>
> >>>> Yes, there can be packet drops after a ptr_ring resize or a ptr_ring
> >>>> unconsume. Since those two happen so rarely, I figured we should just
> >>>> drop in this case.
> >>>>
> >>>>>
> >>>>> 3) The LLTX change would have performance implications, but the
> >>>>> benchmark doesn't cover the case where multiple transmissions are
> >>>>> done in parallel
> >>>>
> >>>> Do you mean multiple applications that produce traffic and potentially
> >>>> run on different CPUs?
> >>>
> >>> Yes.
> >>>
> >>>>
> >>>>>
> >>>>> 4) After the LLTX change, it seems we've lost the synchronization with
> >>>>> the XDP_TX and XDP_REDIRECT path?
> >>>>
> >>>> I must admit I did not take a look at XDP and cannot really judge
> >>>> if/how LLTX has an impact on XDP. But from my point of view,
> >>>> __netif_tx_lock(), instead of __netif_tx_acquire(), is executed
> >>>> before the tun_net_xmit() call, and I do not see the impact for XDP,
> >>>> which calls its own methods.
> >>>
> >>> Without LLTX, tun_net_xmit is protected by the tx lock, but this is
> >>> not the case for tun_xdp_xmit. This is because, unlike other devices,
> >>> tun doesn't have a dedicated TX queue for XDP, so the queue is shared
> >>> by both XDP and skb. So the XDP xmit path needs to be protected with
> >>> the tx lock as well, and since we don't have a queue discipline for
> >>> XDP, we could still drop packets when XDP is enabled. I'm not sure
> >>> whether this would defeat the whole idea.
> >>
> >> Good point.
> >>
> >>>
> >>>>>
> >>>>> 5) The series introduces various ptr_ring helpers with lots of
> >>>>> ordering stuff which is complicated, I wonder if we first have a
> >>>>> simple patch to implement the zero packet loss
> >>>>
> >>>> I personally don't see how a simpler patch is possible without using
> >>>> discouraged practices like returning NETDEV_TX_BUSY in tun_net_xmit
> >>>> or spin locking between producer and consumer. But I am open to
> >>>> suggestions :)
> >>>
> >>> I see NETDEV_TX_BUSY is used by veth:
> >>>
> >>> static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb)
> >>> {
> >>> 	if (unlikely(ptr_ring_produce(&rq->xdp_ring, skb)))
> >>> 		return NETDEV_TX_BUSY; /* signal qdisc layer */
> >>>
> >>> 	return NET_RX_SUCCESS; /* same as NETDEV_TX_OK */
> >>> }
> >>>
> >>> Maybe it would be simpler to start from that (probably with a new tun->flags?).
> >>>
> >>> Thanks
> >>
> >> Do you mean that this patchset could be implemented using the same
> >> approach that was used for veth in [1]?
> >> This could then also fix the XDP path.
> >
> > I think so.
>
> Okay, I will do so and submit a v7 when net-next opens again for 6.19.
>
> >
> >>
> >> But is returning NETDEV_TX_BUSY fine in our case?
> >
> > If it helps to avoid packet drops. But I'm not sure if a qdisc is a
> > must in your case.
>
> I will try to avoid returning it.
>
> When no qdisc is connected, I will just drop like veth does.
>
> >
> >>
> >> Do you mean a flag that enables or disables the no-drop behavior?
> >
> > Yes, via a new flag that could be set via TUNSETIFF.
> >
> > Thanks
>
> I am not a fan of that, since I cannot imagine a use case where
> dropping packets is desired.
Right, it's just for the case where we see a regression in some specific test.
> veth does not introduce a flag either.
>
> Of course, if there is a major performance degradation, it makes sense.
> But I will benchmark it, and we will see.
Exactly.
Thanks
>
> Thank you!
>
> >
> >>
> >> Thanks!
> >>
> >> [1] Link: https://lore.kernel.org/netdev/174559288731.827981.8748257839971869213.stgit@firesoul/T/#u
> >>
> >>>
> >>>>
> >>>>>
> >>>>>>
> >>>>>> This patch series includes tun/tap and vhost-net because they
> >>>>>> share logic. Adjusting only one of them would break the others.
> >>>>>> Therefore, the patch series is structured as follows:
> >>>>>> 1+2: new ptr_ring helpers for 3
> >>>>>> 3: tun/tap: add synchronized ring produce/consume with queue
> >>>>>> management
> >>>>>> 4+5+6: tun/tap: ptr_ring wrappers and other helpers to be called
> >>>>>> by vhost-net
> >>>>>> 7: tun/tap & vhost-net: only now use the previously implemented
> >>>>>> functions to not break git bisect
> >>>>>> 8: tun/tap: drop get ring exports (not used anymore)
> >>>>>>
> >>>>>> Possible future work:
> >>>>>> - Introduction of Byte Queue Limits as suggested by Stephen Hemminger
> >>>>>
> >>>>> This does not seem easy. The TX completion depends on the userspace behaviour.
> >>>>
> >>>> I agree, but I really would like to reduce the buffer bloat caused by the
> >>>> default 500 TUN / 1000 TAP packet queue without losing performance.
> >>>>
> >>>>>
> >>>>>> - Adaption of the netdev queue flow control for ipvtap & macvtap
> >>>>>>
> >>>>>> [1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
> >>>>>> [2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
> >>>>>>
> >>>>>
> >>>>> Thanks
> >>>>>
> >>>>
> >>>> Thanks! :)
> >>>>
> >>>
> >>
> >
>
syzbot ci has tested the following series
[v6] tun/tap & vhost-net: netdev queue flow control to avoid ptr_ring tail drop
https://lore.kernel.org/all/20251120152914.1127975-1-simon.schippers@tu-dortmund.de
* [PATCH net-next v6 1/8] ptr_ring: add __ptr_ring_full_next() to predict imminent fullness
* [PATCH net-next v6 2/8] ptr_ring: add helper to check if consume created space
* [PATCH net-next v6 3/8] tun/tap: add synchronized ring produce/consume with queue management
* [PATCH net-next v6 4/8] tun/tap: add batched ring consume function
* [PATCH net-next v6 5/8] tun/tap: add unconsume function for returning entries to ring
* [PATCH net-next v6 6/8] tun/tap: add helper functions to check file type
* [PATCH net-next v6 7/8] tun/tap/vhost: use {tun|tap}_ring_{consume|produce} to avoid tail drops
* [PATCH net-next v6 8/8] tun/tap: drop get ring exports
and found the following issue:
general protection fault in tun_net_xmit
Full report is available here:
https://ci.syzbot.org/series/63c35694-3fa6-48b6-ba11-f893f55bcc1a
***
general protection fault in tun_net_xmit
tree: net-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
base: 45a1cd8346ca245a1ca475b26eb6ceb9d8b7c6f0
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/e1084fb4-2e0a-4c87-8e42-bc8fa70e1c77/config
syz repro: https://ci.syzbot.org/findings/cf1c9121-7e31-4bc8-a254-f9e6c8ee2d26/syz_repro
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000002: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000010-0x0000000000000017]
CPU: 1 UID: 0 PID: 13 Comm: kworker/u8:1 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: ipv6_addrconf addrconf_dad_work
RIP: 0010:__ptr_ring_full include/linux/ptr_ring.h:51 [inline]
RIP: 0010:tun_ring_produce drivers/net/tun.c:1023 [inline]
RIP: 0010:tun_net_xmit+0xdf0/0x1840 drivers/net/tun.c:1164
Code: 00 00 00 fc ff df 48 89 44 24 50 0f b6 04 18 84 c0 0f 85 1f 07 00 00 4c 89 7c 24 30 4d 63 37 4f 8d 3c f4 4c 89 f8 48 c1 e8 03 <80> 3c 18 00 74 08 4c 89 ff e8 92 ba e3 fb 49 83 3f 00 74 0a e8 17
RSP: 0018:ffffc90000126f80 EFLAGS: 00010202
RAX: 0000000000000002 RBX: dffffc0000000000 RCX: dffffc0000000000
RDX: 0000000000000001 RSI: 0000000000000004 RDI: ffffc90000126f00
RBP: ffffc900001270b0 R08: 0000000000000003 R09: 0000000000000004
R10: dffffc0000000000 R11: fffff52000024de0 R12: 0000000000000010
R13: ffff8881730b6a48 R14: 0000000000000000 R15: 0000000000000010
FS: 0000000000000000(0000) GS:ffff8882a9f38000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000002280 CR3: 00000001bb189000 CR4: 0000000000352ef0
Call Trace:
<TASK>
__netdev_start_xmit include/linux/netdevice.h:5259 [inline]
netdev_start_xmit include/linux/netdevice.h:5268 [inline]
xmit_one net/core/dev.c:3853 [inline]
dev_hard_start_xmit+0x2d7/0x830 net/core/dev.c:3869
__dev_queue_xmit+0x172a/0x3740 net/core/dev.c:4811
neigh_output include/net/neighbour.h:556 [inline]
ip6_finish_output2+0xfb3/0x1480 net/ipv6/ip6_output.c:136
NF_HOOK_COND include/linux/netfilter.h:307 [inline]
ip6_output+0x340/0x550 net/ipv6/ip6_output.c:247
NF_HOOK include/linux/netfilter.h:318 [inline]
ndisc_send_skb+0xbce/0x1510 net/ipv6/ndisc.c:512
addrconf_dad_completed+0x7ae/0xd60 net/ipv6/addrconf.c:4360
addrconf_dad_work+0xc36/0x14b0 net/ipv6/addrconf.c:-1
process_one_work kernel/workqueue.c:3263 [inline]
process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
worker_thread+0x8a0/0xda0 kernel/workqueue.c:3427
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:__ptr_ring_full include/linux/ptr_ring.h:51 [inline]
RIP: 0010:tun_ring_produce drivers/net/tun.c:1023 [inline]
RIP: 0010:tun_net_xmit+0xdf0/0x1840 drivers/net/tun.c:1164
Code: 00 00 00 fc ff df 48 89 44 24 50 0f b6 04 18 84 c0 0f 85 1f 07 00 00 4c 89 7c 24 30 4d 63 37 4f 8d 3c f4 4c 89 f8 48 c1 e8 03 <80> 3c 18 00 74 08 4c 89 ff e8 92 ba e3 fb 49 83 3f 00 74 0a e8 17
RSP: 0018:ffffc90000126f80 EFLAGS: 00010202
RAX: 0000000000000002 RBX: dffffc0000000000 RCX: dffffc0000000000
RDX: 0000000000000001 RSI: 0000000000000004 RDI: ffffc90000126f00
RBP: ffffc900001270b0 R08: 0000000000000003 R09: 0000000000000004
R10: dffffc0000000000 R11: fffff52000024de0 R12: 0000000000000010
R13: ffff8881730b6a48 R14: 0000000000000000 R15: 0000000000000010
FS: 0000000000000000(0000) GS:ffff8882a9f38000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000002280 CR3: 00000001bb189000 CR4: 0000000000352ef0
----------------
Code disassembly (best guess), 5 bytes skipped:
0: df 48 89 fisttps -0x77(%rax)
3: 44 24 50 rex.R and $0x50,%al
6: 0f b6 04 18 movzbl (%rax,%rbx,1),%eax
a: 84 c0 test %al,%al
c: 0f 85 1f 07 00 00 jne 0x731
12: 4c 89 7c 24 30 mov %r15,0x30(%rsp)
17: 4d 63 37 movslq (%r15),%r14
1a: 4f 8d 3c f4 lea (%r12,%r14,8),%r15
1e: 4c 89 f8 mov %r15,%rax
21: 48 c1 e8 03 shr $0x3,%rax
* 25: 80 3c 18 00 cmpb $0x0,(%rax,%rbx,1) <-- trapping instruction
29: 74 08 je 0x33
2b: 4c 89 ff mov %r15,%rdi
2e: e8 92 ba e3 fb call 0xfbe3bac5
33: 49 83 3f 00 cmpq $0x0,(%r15)
37: 74 0a je 0x43
39: e8 .byte 0xe8
3a: 17 (bad)
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
© 2016 - 2025 Red Hat, Inc.