Use xdp_get_frame_len helper to ensure xdp frame size is calculated
correctly in both single buffer and multi buffer configurations.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
drivers/net/tun.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 7babd1e9a378..1c879467e696 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1993,7 +1993,7 @@ static ssize_t tun_put_user_xdp(struct tun_struct *tun,
struct iov_iter *iter)
{
int vnet_hdr_sz = 0;
- size_t size = xdp_frame->len;
+ size_t size = xdp_get_frame_len(xdp_frame);
	ssize_t ret;

	if (tun->flags & IFF_VNET_HDR) {
@@ -2579,7 +2579,7 @@ static int tun_ptr_peek_len(void *ptr)
if (tun_is_xdp_frame(ptr)) {
		struct xdp_frame *xdpf = tun_ptr_to_xdp(ptr);

-		return xdpf->len;
+ return xdp_get_frame_len(xdpf);
}
return __skb_array_len_with_tag(ptr);
} else {
--
2.43.0
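
For context, xdp_get_frame_len() is a helper in include/net/xdp.h. A
simplified sketch of its logic, paraphrased rather than copied verbatim
from the kernel source, shows why it covers both configurations the
commit message mentions:

	static inline unsigned int xdp_get_frame_len(const struct xdp_frame *xdpf)
	{
		struct skb_shared_info *sinfo;
		unsigned int len = xdpf->len;

		/* Single-buffer frame: the length is fully described by len. */
		if (likely(!xdp_frame_has_frags(xdpf)))
			return len;

		/* Multi-buffer frame: add the bytes held in fragments, which
		 * are accounted in the skb_shared_info at the buffer tail. */
		sinfo = xdp_get_shared_info_from_frame(xdpf);
		len += sinfo->xdp_frags_size;
		return len;
	}

A bare xdpf->len, as used at the two call sites this patch replaces,
only counts the linear part of the frame.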
Jon Kohler wrote:
> Use xdp_get_frame_len helper to ensure xdp frame size is calculated
> correctly in both single buffer and multi buffer configurations.
Not necessarily opposed, but multi buffer is not actually possible
in this code path, right?

tun_put_user_xdp only copies xdp_frame->data, for one.

Else this would also be a fix, not net-next material.
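
(To make the observation concrete: the final copy in tun_put_user_xdp()
is, paraphrasing drivers/net/tun.c, essentially the single line

	ret = copy_to_iter(xdp_frame->data, size, iter) + vnet_hdr_sz;

so even with a frag-aware size, bytes held in multi-buffer frags would
never be walked and copied to the iterator.)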
>
> Signed-off-by: Jon Kohler <jon@nutanix.com>
> ---
> drivers/net/tun.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> index 7babd1e9a378..1c879467e696 100644
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -1993,7 +1993,7 @@ static ssize_t tun_put_user_xdp(struct tun_struct *tun,
> struct iov_iter *iter)
> {
> int vnet_hdr_sz = 0;
> - size_t size = xdp_frame->len;
> + size_t size = xdp_get_frame_len(xdp_frame);
> ssize_t ret;
>
> if (tun->flags & IFF_VNET_HDR) {
> @@ -2579,7 +2579,7 @@ static int tun_ptr_peek_len(void *ptr)
> if (tun_is_xdp_frame(ptr)) {
> struct xdp_frame *xdpf = tun_ptr_to_xdp(ptr);
>
> - return xdpf->len;
> + return xdp_get_frame_len(xdpf);
> }
> return __skb_array_len_with_tag(ptr);
> } else {
> --
> 2.43.0
>
> On May 7, 2025, at 4:56 PM, Willem de Bruijn <willemdebruijn.kernel@gmail.com> wrote:
>
> Jon Kohler wrote:
>> Use xdp_get_frame_len helper to ensure xdp frame size is calculated
>> correctly in both single buffer and multi buffer configurations.
>
> Not necessarily opposed, but multi buffer is not actually possible
> in this code path, right?
>
> tun_put_user_xdp only copies xdp_frame->data, for one.
>
> Else this would also be a fix, not net-next material.
Correct, this is a prep patch for future multi buffer support,
I’m not aware of any path that can currently do that thru
this code.
The reason for pursuing multi-buffer is to allow vhost/net
batching to work again for large payloads.
>>
>> Signed-off-by: Jon Kohler <jon@nutanix.com>
>> ---
>> drivers/net/tun.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
>> index 7babd1e9a378..1c879467e696 100644
>> --- a/drivers/net/tun.c
>> +++ b/drivers/net/tun.c
>> @@ -1993,7 +1993,7 @@ static ssize_t tun_put_user_xdp(struct tun_struct *tun,
>> struct iov_iter *iter)
>> {
>> int vnet_hdr_sz = 0;
>> - size_t size = xdp_frame->len;
>> + size_t size = xdp_get_frame_len(xdp_frame);
>> ssize_t ret;
>>
>> if (tun->flags & IFF_VNET_HDR) {
>> @@ -2579,7 +2579,7 @@ static int tun_ptr_peek_len(void *ptr)
>> if (tun_is_xdp_frame(ptr)) {
>> struct xdp_frame *xdpf = tun_ptr_to_xdp(ptr);
>>
>> - return xdpf->len;
>> + return xdp_get_frame_len(xdpf);
>> }
>> return __skb_array_len_with_tag(ptr);
>> } else {
>> --
>> 2.43.0
>>
>
>
Jon Kohler wrote:
>
>
> > On May 7, 2025, at 4:56 PM, Willem de Bruijn <willemdebruijn.kernel@gmail.com> wrote:
> >
> > Jon Kohler wrote:
> >> Use xdp_get_frame_len helper to ensure xdp frame size is calculated
> >> correctly in both single buffer and multi buffer configurations.
> >
> > Not necessarily opposed, but multi buffer is not actually possible
> > in this code path, right?
> >
> > tun_put_user_xdp only copies xdp_frame->data, for one.
> >
> > Else this would also be a fix, not net-next material.
>
> Correct, this is a prep patch for future multi buffer support,
> I’m not aware of any path that can currently do that thru
> this code.
>
> The reason for pursuing multi-buffer is to allow vhost/net
> batching to work again for large payloads.

I was not aware of that context. I'd add a comment to that in the
commit message, and send it as part of that series.
On 08/05/2025 15.29, Willem de Bruijn wrote:
> Jon Kohler wrote:
>>
>>
>>> On May 7, 2025, at 4:56 PM, Willem de Bruijn <willemdebruijn.kernel@gmail.com> wrote:
>>>
>>> Jon Kohler wrote:
>>>> Use xdp_get_frame_len helper to ensure xdp frame size is calculated
>>>> correctly in both single buffer and multi buffer configurations.
>>>
>>> Not necessarily opposed, but multi buffer is not actually possible
>>> in this code path, right?
>>>
>>> tun_put_user_xdp only copies xdp_frame->data, for one.
>>>
>>> Else this would also be a fix, not net-next material.
>>
>> Correct, this is a prep patch for future multi buffer support,
>> I’m not aware of any path that can currently do that thru
>> this code.
>>

This is a good example of a performance paper-cut, from my rant.
Adding xdp_get_frame_len() where it is not needed adds extra code,
in the form of an if-statement and a potential touching of a colder
cache line in the skb_shared_info area.

>> The reason for pursuing multi-buffer is to allow vhost/net
>> batching to work again for large payloads.
>
> I was not aware of that context. I'd add a comment to that in the
> commit message, and send it as part of that series.

It needs to be part of that series, as that batching change should
bring a larger performance benefit that outweighs the paper-cut.

AFAICR there is also some dual packet handling code path for XDP in
vhost_net/tun. I'm also willing to take the paper-cut, for cleaning
that up.

--Jesper
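
To make the paper-cut concrete: in the frags case the helper
dereferences the skb_shared_info that sits at the tail of the frame's
buffer. A paraphrased sketch of that tail-pointer computation, loosely
based on xdp_get_shared_info_from_frame() in include/net/xdp.h (details
may differ by kernel version):

	static inline struct skb_shared_info *
	xdp_get_shared_info_from_frame(struct xdp_frame *frame)
	{
		void *data_hard_start = frame->data - frame->headroom -
					sizeof(struct xdp_frame);

		/* The shared info lives at the very end of the frame's
		 * buffer, usually on a different cache line than the hot
		 * xdp_frame metadata, hence the potential cold miss. */
		return (struct skb_shared_info *)(data_hard_start +
				frame->frame_sz -
				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
	}

Even when xdp_frame_has_frags() is false, the extra branch remains,
which is the added code being referred to.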
> On May 8, 2025, at 10:16 AM, Jesper Dangaard Brouer <hawk@kernel.org> wrote:
>
> On 08/05/2025 15.29, Willem de Bruijn wrote:
>> Jon Kohler wrote:
>>> [...]
>>> Correct, this is a prep patch for future multi buffer support,
>>> I’m not aware of any path that can currently do that thru
>>> this code.
>
> This is a good example of a performance paper-cut, from my rant.
> Adding xdp_get_frame_len() where it is not needed adds extra code,
> in the form of an if-statement and a potential touching of a colder
> cache line in the skb_shared_info area.
>
>>> The reason for pursuing multi-buffer is to allow vhost/net
>>> batching to work again for large payloads.
>>
>> I was not aware of that context. I'd add a comment to that in the
>> commit message, and send it as part of that series.
>
> It needs to be part of that series, as that batching change should
> bring a larger performance benefit that outweighs the paper-cut.

Gotcha, mission understood. Sorry for the confusion, and thank you for
taking the time to walk me through that, I appreciate it. I’ll come
back to list when the larger series is ready for eyes.

> AFAICR there is also some dual packet handling code path for XDP in
> vhost_net/tun. I'm also willing to take the paper-cut, for cleaning
> that up.
>
> --Jesper

When you say dual packet handling, what are you referring to
specifically?
On 08/05/2025 16.24, Jon Kohler wrote:
>
>> On May 8, 2025, at 10:16 AM, Jesper Dangaard Brouer <hawk@kernel.org> wrote:
>> [...]
>>
>> AFAICR there is also some dual packet handling code path for XDP in
>> vhost_net/tun. I'm also willing to take the paper-cut, for cleaning
>> that up.
>>
>> --Jesper
>
> When you say dual packet handling, what are you referring to specifically?

The important part of the sentence was *code path*, as in multiple
code paths for packets.

You tricked me into looking up the code for you...

It was in drivers/net/virtio_net.c where the function receive_buf()
calls[1] three different functions based on different checks. Some
cases support XDP and others don't. I thought you talked about this in
another thread?

--Jesper

[1] https://elixir.bootlin.com/linux/v6.15-rc5/source/drivers/net/virtio_net.c#L2570-L2573
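
For the reference above, the dispatch in receive_buf() looks roughly
like this (paraphrased; the argument lists are abbreviated and may
differ from the exact source):

	/* receive_buf() picks one of three receive paths depending on
	 * how the device was negotiated; paraphrased, not verbatim: */
	if (vi->mergeable_rx_bufs)
		skb = receive_mergeable(dev, vi, rq, buf, ctx, len, ...);
	else if (vi->big_packets)
		skb = receive_big(dev, vi, rq, buf, len, ...);
	else
		skb = receive_small(dev, vi, rq, buf, ctx, len, ...);

IIRC the big-packets path is the one without XDP support, which is the
dual handling being described.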
> On May 8, 2025, at 11:04 AM, Jesper Dangaard Brouer <hawk@kernel.org> wrote:
>
> On 08/05/2025 16.24, Jon Kohler wrote:
>> [...]
>> When you say dual packet handling, what are you referring to specifically?
>
> The important part of the sentence was *code path*, as in multiple
> code paths for packets.

Ah right, sorry, coffee hadn’t kicked in yet; apologies for the trickery!

> You tricked me into looking up the code for you...
>
> It was in drivers/net/virtio_net.c where the function receive_buf()
> calls[1] three different functions based on different checks. Some
> cases support XDP and others don't. I thought you talked about this
> in another thread?

I was talking about the vhost/net side, not the virtio_net side. In
vhost's net.c there is roughly the same thing, where payloads under
PAGE_SIZE use an xdp_buff as a means to an end for batching, either to
be dispatched as proper XDP or just flipped to an skb and the
traditional net stack. Anything above PAGE_SIZE takes a wildly
different, non-batched path. That’s what I’m actively working through
now. The series I’m working on aims to unify that handling again, but
we'll see if it blows up in my face or not.

> --Jesper
>
> [1] https://elixir.bootlin.com/linux/v6.15-rc5/source/drivers/net/virtio_net.c#L2570-L2573
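
To make the size split concrete: a hedged sketch of the shape of this
logic, loosely based on vhost_net_build_xdp() in drivers/vhost/net.c;
the exact bound, padding terms, and surrounding code below are
approximations, not the verbatim kernel source:

	static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
				       struct iov_iter *from)
	{
		size_t len = iov_iter_count(from);

		/* Only payloads that fit in a single page-sized buffer,
		 * together with XDP headroom and the skb_shared_info
		 * tail, can be staged as an xdp_buff for batching. */
		if (SKB_DATA_ALIGN(len + pad) +
		    SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) > PAGE_SIZE)
			return -ENOSPC;

		/* ... copy the payload, init the xdp_buff, and add it
		 * to the batch that is flushed into tun in one go ... */
		return 0;
	}

The caller treats the error as "fall back to the slower, non-batched
sendmsg path", which is the wildly different large-payload path Jon
mentions.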