[PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Alexander Lobakin 1 week, 2 days ago
XDP for idpf is currently split into 5 chapters:
* convert Rx to libeth;
* convert Tx and stats to libeth;
* generic XDP and XSk code changes (this);
* actual XDP for idpf via libeth_xdp;
* XSk for idpf (^).

Part III does the following:
* does some cleanups with marking read-only bpf_prog and xdp_buff
  arguments const for some generic functions;
* allows attaching already registered XDP memory model to Rxq info;
* allows mixing pages from several Page Pools within one XDP frame;
* optimizes &xdp_frame structure and removes no-more-used field;
* adds generic functions to build skbs from xdp_buffs (regular and
  XSk) and attach frags to xdp_buffs (regular and XSk);
* adds helper to optimize XSk xmit in drivers;
* extends libeth Rx to support XDP requirements (headroom etc.) on Rx;
* adds libeth_xdp -- libeth module with common XDP and XSk routines.
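
To give a rough idea of how the new generic helpers are meant to be
used, below is a minimal sketch of a driver Rx path. The helper names
follow the patch titles above, but the exact usage is an assumption
for illustration, not code copied from the patches:

/* Hypothetical driver function: build an skb from a (potentially
 * multi-frag) xdp_buff via the new generic helper; after this series,
 * the frags may even come from several Page Pools.
 */
static struct sk_buff *drv_rx_build_skb(struct xdp_buff *xdp)
{
	struct sk_buff *skb;

	skb = xdp_build_skb_from_buff(xdp);
	if (unlikely(!skb))
		/* OOM: recycle the buffer(s) back to their Page Pool(s) */
		xdp_return_buff(xdp);

	return skb;
}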

These helpers are implemented mostly as inlines with inline callback
arguments. They then get uninlined in the drivers into functions of
sane size, but without any indirect calls (see the sketch below).
All those inlines and macros remove tons of driver code, which is
mostly the same across drivers minus the HW-specific part. You
basically only need functions which read Rx descriptors and fill Tx
descriptors, plus a couple of macro calls, and that's it. The rest is
written once in libeth_xdp.
All exception and cold code is external: error handling and anything
else that doesn't happen at line rate. Only the hottest things are
inlined, ensuring driver code doesn't bloat for no gain and that cold
code won't push hot code into more cachelines than wanted.
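
As a rough illustration of this pattern (all names below are
hypothetical, not the actual libeth_xdp API):

/* generic hot-path logic, written once in the library; the callback
 * is a compile-time argument, so it gets inlined -- no indirect call
 */
static __always_inline u32
example_xdp_run(struct xdp_buff *xdp, u32 (*xmit_desc)(struct xdp_buff *))
{
	/* ...common XDP processing, the same for every driver... */
	return xmit_desc(xdp);
}

/* the only HW-specific part the driver has to provide */
static u32 drv_xmit_desc(struct xdp_buff *xdp)
{
	/* read/fill the HW Tx descriptor from @xdp here */
	return 0;
}

/* the driver "uninlines" the whole thing into one sane-sized function */
static noinline u32 drv_xdp_run(struct xdp_buff *xdp)
{
	return example_xdp_run(xdp, drv_xmit_desc);
}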

Note on the diffstat: don't be scared, almost 1500 lines are
documentation explaining everything in detail. The actual new code is
around 2500 lines.

Alexander Lobakin (18):
  jump_label: export static_key_slow_{inc,dec}_cpuslocked()
  skbuff: allow 2-4-argument skb_frag_dma_map()
  unroll: add generic loop unroll helpers
  bpf, xdp: constify some bpf_prog * function arguments
  xdp, xsk: constify read-only arguments of some static inline helpers
  xdp: allow attaching already registered memory model to xdp_rxq_info
  page_pool: make page_pool_put_page_bulk() actually handle array of
    pages
  page_pool: allow mixing PPs within one bulk
  xdp: get rid of xdp_frame::mem.id
  xdp: add generic xdp_buff_add_frag()
  xdp: add generic xdp_build_skb_from_buff()
  xsk: align &xdp_buff_xsk harder
  xsk: allow attaching XSk pool via xdp_rxq_info_reg_mem_model()
  xsk: make xsk_buff_add_frag really add a frag via
    __xdp_buff_add_frag()
  xsk: add generic XSk &xdp_buff -> skb conversion
  xsk: add helper to get &xdp_desc's DMA and meta pointer in one go
  libeth: support native XDP and register memory model
  libeth: add a couple of XDP helpers (libeth_xdp)

Toke Høiland-Jørgensen (1):
  xdp: register system page pool as an XDP memory model

 drivers/net/ethernet/intel/libeth/Kconfig     |    6 +
 drivers/net/ethernet/intel/libeth/Makefile    |    6 +
 include/net/libeth/types.h                    |  102 +-
 include/net/page_pool/types.h                 |    6 +-
 drivers/net/ethernet/intel/libeth/priv.h      |   37 +
 include/linux/bpf.h                           |   12 +-
 include/linux/filter.h                        |    9 +-
 include/linux/netdevice.h                     |    7 +-
 include/linux/skbuff.h                        |   49 +-
 include/linux/unroll.h                        |   43 +
 include/net/libeth/rx.h                       |    6 +-
 include/net/libeth/tx.h                       |   34 +-
 include/net/libeth/xdp.h                      | 1861 +++++++++++++++++
 include/net/libeth/xsk.h                      |  684 ++++++
 include/net/xdp.h                             |  185 +-
 include/net/xdp_sock_drv.h                    |   52 +-
 include/net/xsk_buff_pool.h                   |   12 +-
 .../net/ethernet/freescale/dpaa/dpaa_eth.c    |    2 +-
 drivers/net/ethernet/intel/i40e/i40e_xsk.c    |   30 +-
 drivers/net/ethernet/intel/ice/ice_xsk.c      |   32 +-
 drivers/net/ethernet/intel/libeth/rx.c        |   22 +-
 drivers/net/ethernet/intel/libeth/tx.c        |   39 +
 drivers/net/ethernet/intel/libeth/xdp.c       |  446 ++++
 drivers/net/ethernet/intel/libeth/xsk.c       |  264 +++
 drivers/net/veth.c                            |    4 +-
 kernel/bpf/cpumap.c                           |    2 +-
 kernel/bpf/devmap.c                           |    8 +-
 kernel/jump_label.c                           |    2 +
 net/bpf/test_run.c                            |    4 +-
 net/core/dev.c                                |   20 +-
 net/core/filter.c                             |   41 +-
 net/core/page_pool.c                          |   60 +-
 net/core/skbuff.c                             |    2 +-
 net/core/xdp.c                                |  311 ++-
 net/xdp/xsk_buff_pool.c                       |   40 +
 35 files changed, 4219 insertions(+), 221 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/libeth/priv.h
 create mode 100644 include/net/libeth/xdp.h
 create mode 100644 include/net/libeth/xsk.h
 create mode 100644 drivers/net/ethernet/intel/libeth/tx.c
 create mode 100644 drivers/net/ethernet/intel/libeth/xdp.c
 create mode 100644 drivers/net/ethernet/intel/libeth/xsk.c

---
From v4[0]:
* 12: pick RB from Toke;
* 19: drop redundant ';'s (Jakub);
* 19: fix a couple context imbalance warnings by moving __acquires() /
  __releases() to the proper place (smatch);
* no functional changes.

From v3[1]:
* rebase on top of the latest net-next to solve a conflict (Jakub);
* 09: use iterative approach instead of recursive to not blow the stack
  (Toke);
* 12: rephrase the commitmsg, since the functionality changed and the
  old wording was no longer accurate (Toke);
* align &xdp_buff_xsk a bit harder since its alignment degraded
  recently;
* pick RBs from Toke.

From v2[2]:
* cover: rename the series;
* collect RBs and Acks from Maciej;
* 007: reword the commitmsg;
* 011: fix typos in the commitmsg (M);
* 012: 'ts' -> 'tsize' (M; not 'truesize' to fit into 80 cols =\);
* 016: fix the intro sentence (M);
* no functional changes.

From v1[3]:
* rebase on top of the latest net-next;
* no other changes.

[0] https://lore.kernel.org/netdev/20241107161026.2903044-1-aleksander.lobakin@intel.com
[1] https://lore.kernel.org/netdev/20241030165201.442301-1-aleksander.lobakin@intel.com
[2] https://lore.kernel.org/netdev/20241015145350.4077765-1-aleksander.lobakin@intel.com
[3] https://lore.kernel.org/netdev/20241009152756.3113697-1-aleksander.lobakin@intel.com
-- 
2.47.0

Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Jakub Kicinski 1 week ago
On Wed, 13 Nov 2024 16:24:23 +0100 Alexander Lobakin wrote:
> Part III does the following:
> * does some cleanups with marking read-only bpf_prog and xdp_buff
>   arguments const for some generic functions;
> * allows attaching already registered XDP memory model to Rxq info;
> * allows mixing pages from several Page Pools within one XDP frame;
> * optimizes &xdp_frame structure and removes no-more-used field;
> * adds generic functions to build skbs from xdp_buffs (regular and
>   XSk) and attach frags to xdp_buffs (regular and XSk);
> * adds helper to optimize XSk xmit in drivers;
> * extends libeth Rx to support XDP requirements (headroom etc.) on Rx;
> * adds libeth_xdp -- libeth module with common XDP and XSk routines.

This clearly could be multiple series, please don't go over the limit.
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Alexander Lobakin 3 days, 17 hours ago
From: Jakub Kicinski <kuba@kernel.org>
Date: Fri, 15 Nov 2024 18:43:01 -0800

> On Wed, 13 Nov 2024 16:24:23 +0100 Alexander Lobakin wrote:
>> Part III does the following:
>> * does some cleanups with marking read-only bpf_prog and xdp_buff
>>   arguments const for some generic functions;
>> * allows attaching already registered XDP memory model to Rxq info;
>> * allows mixing pages from several Page Pools within one XDP frame;
>> * optimizes &xdp_frame structure and removes no-more-used field;
>> * adds generic functions to build skbs from xdp_buffs (regular and
>>   XSk) and attach frags to xdp_buffs (regular and XSk);
>> * adds helper to optimize XSk xmit in drivers;
>> * extends libeth Rx to support XDP requirements (headroom etc.) on Rx;
>> * adds libeth_xdp -- libeth module with common XDP and XSk routines.
> 
> This clearly could be multiple series, please don't go over the limit.

Sorta.
I think I'll split it into two: changing the current logic + adding new
functions and libeth_xdp. Maybe I could merge the second one with the
Chapter IV, will see.

Thanks,
Olek
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Jakub Kicinski 3 days, 14 hours ago
On Tue, 19 Nov 2024 13:06:56 +0100 Alexander Lobakin wrote:
> > This clearly could be multiple series, please don't go over the limit.  
> 
> Sorta.
> I think I'll split it into two: changing the current logic + adding new
> functions and libeth_xdp. Maybe I could merge the second one with the
> Chapter IV, will see.

FWIW I was tempted to cherry-pick patches 2, 4, 5, 6, and 7, since they
look uncontroversial and stand on their own. But wasn't 100% sure I'm
not missing a dependency. Then I thought - why take the risk, let me
complain instead :) Easier to make progress with smaller series..
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Alexander Lobakin 2 days, 14 hours ago
From: Jakub Kicinski <kuba@kernel.org>
Date: Tue, 19 Nov 2024 06:25:43 -0800

> On Tue, 19 Nov 2024 13:06:56 +0100 Alexander Lobakin wrote:
>>> This clearly could be multiple series, please don't go over the limit.  
>>
>> Sorta.
>> I think I'll split it into two: changing the current logic + adding new
>> functions and libeth_xdp. Maybe I could merge the second one with the
>> Chapter IV, will see.
> 
> FWIW I was tempted to cherry-pick patches 2, 4, 5, 6, and 7, since they
> look uncontroversial and stand on their own. But wasn't 100% sure I'm
> not missing a dependency. Then I thought - why take the risk, let me
> complain instead :) Easier to make progress with smaller series..

Sure, I understand. I wasn't sure whether it was too much for one
series; in such cases, I let the reviewers choose :)

Thanks,
Olek
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Jacob Keller 1 day, 9 hours ago

On 11/20/2024 6:40 AM, Alexander Lobakin wrote:
> From: Jakub Kicinski <kuba@kernel.org>
> Date: Tue, 19 Nov 2024 06:25:43 -0800
> 
>> On Tue, 19 Nov 2024 13:06:56 +0100 Alexander Lobakin wrote:
>>>> This clearly could be multiple series, please don't go over the limit.  
>>>
>>> Sorta.
>>> I think I'll split it into two: changing the current logic + adding new
>>> functions and libeth_xdp. Maybe I could merge the second one with the
>>> Chapter IV, will see.
>>
>> FWIW I was tempted to cherry-pick patches 2, 4, 5, 6, and 7, since they
>> look uncontroversial and stand on their own. But wasn't 100% sure I'm
>> not missing a dependency. Then I thought - why take the risk, let me
>> complain instead :) Easier to make progress with smaller series..
> 
> Sure, I understand. I wasn't sure whether it was too much for one
> series; in such cases, I let the reviewers choose :)
> 
> Thanks,
> Olek
> 

The 15-patch limit has been the policy for quite some time, and I
don't recall seeing >15 get merged in recent memory.

Even 15 can sometimes be pushing it when there are obvious good
breakpoints in a series.
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Willem de Bruijn 6 days, 13 hours ago
Jakub Kicinski wrote:
> On Wed, 13 Nov 2024 16:24:23 +0100 Alexander Lobakin wrote:
> > Part III does the following:
> > * does some cleanups with marking read-only bpf_prog and xdp_buff
> >   arguments const for some generic functions;
> > * allows attaching already registered XDP memory model to Rxq info;
> > * allows mixing pages from several Page Pools within one XDP frame;
> > * optimizes &xdp_frame structure and removes no-more-used field;
> > * adds generic functions to build skbs from xdp_buffs (regular and
> >   XSk) and attach frags to xdp_buffs (regular and XSk);
> > * adds helper to optimize XSk xmit in drivers;
> > * extends libeth Rx to support XDP requirements (headroom etc.) on Rx;
> > * adds libeth_xdp -- libeth module with common XDP and XSk routines.
> 
> This clearly could be multiple series, please don't go over the limit.

Targeting different subsystems and thus reviewers. The XDP, page_pool
and AF_XDP changes might move faster on their own.

If pulling those out into separate series, that also allows splitting
up the last patch. That weighs in at 3481 LoC, out of 4400 for the
series.

The first 3 patches are not essential to IDPF XDP + AF_XDP either.
The IDPF feature does not have to depend on them.

Does not matter for upstream, but for the purpose of backporting this
to distro kernels, it helps if the driver feature minimizes dependency
on core kernel API changes. If patch 19 can be made to work without
some of the changes in 1..18, that makes it more robust from that PoV.
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Alexander Lobakin 3 days, 16 hours ago
From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
Date: Sat, 16 Nov 2024 10:31:08 -0500

> Jakub Kicinski wrote:
>> On Wed, 13 Nov 2024 16:24:23 +0100 Alexander Lobakin wrote:
>>> Part III does the following:
>>> * does some cleanups with marking read-only bpf_prog and xdp_buff
>>>   arguments const for some generic functions;
>>> * allows attaching already registered XDP memory model to Rxq info;
>>> * allows mixing pages from several Page Pools within one XDP frame;
>>> * optimizes &xdp_frame structure and removes no-more-used field;
>>> * adds generic functions to build skbs from xdp_buffs (regular and
>>>   XSk) and attach frags to xdp_buffs (regular and XSk);
>>> * adds helper to optimize XSk xmit in drivers;
>>> * extends libeth Rx to support XDP requirements (headroom etc.) on Rx;
>>> * adds libeth_xdp -- libeth module with common XDP and XSk routines.
>>
>> This clearly could be multiple series, please don't go over the limit.
> 
> Targeting different subsystems and thus reviewers. The XDP, page_pool
> and AF_XDP changes might move faster on their own.

Reviewers for page_pool, XDP and XSk (no idea why everyone names it
AF_XDP) are 90% of the time the same people.
Oftentimes, you can't avoid cross-subsystem patches. These three are
closely tied to each other.

> 
> If pulling those out into separate series, that also allows splitting
> up the last patch. That weighs in at 3481 LoC, out of 4400 for the
> series.

1500 of which is kdoc if you read the cover letter.

libeth_xdp depends on every patch in the series. I don't know why you
believe this would move any faster. Almost the whole series got
reviewed relatively quickly, except for the drivers/intel folder,
which people often tend to avoid.

I remind you that the initial libeth + iavf series (11 patches) was
baking on LKML for one year. Here, 2 Chapters went into the kernel
within 2 windows, and only this one (clearly much bigger than the
previous ones, and containing only generic changes in contrast to the
previous ones, which had only /intel code) didn't follow this rule,
which doesn't necessarily mean it will be stuck for too long.

(+ I clearly mentioned several times that Chapter III will take longer
 than the rest and each time you had no issues with that)

> 
> The first 3 patches are not essential to IDPF XDP + AF_XDP either.

You don't seem to read the code. libeth_xdp won't even build without them.
I don't believe the model taken by some developers (not naming names
out loud) of "let's submit minimal changes and almost-draft code, I
promise I'll create a todo list and will be polishing it within the
next x years" works at all, let alone that it works better than
sending polished, mature code (which I hope this is).

> The IDPF feature does not have to depend on them.
> 
> Does not matter for upstream, but for the purpose of backporting this
> to distro kernels, it helps if the driver feature minimizes dependency
> on core kernel API changes. If patch 19 can be made to work without

OOT style of thinking.
Minimizing core changes == artificially limiting optimization and
functionality potential.
New kernels > LTSes, and especially custom kernels which receive
non-upstream (== not officially supported by the community) feature
backports. Upstream shouldn't sacrifice anything in favor of those;
that way, we'd end up one day sacrificing stuff for out-of-tree
drivers (which I know some people already try to do).

> some of the changes in 1..18, that makes it more robust from that PoV.

No it can't, I thought people first read the code and only then comment,
otherwise it's just wasting time.

Thanks,
Olek
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Willem de Bruijn 3 days, 13 hours ago
Alexander Lobakin wrote:
> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
> Date: Sat, 16 Nov 2024 10:31:08 -0500
> 
> > Jakub Kicinski wrote:
> >> On Wed, 13 Nov 2024 16:24:23 +0100 Alexander Lobakin wrote:
> >>> Part III does the following:
> >>> * does some cleanups with marking read-only bpf_prog and xdp_buff
> >>>   arguments const for some generic functions;
> >>> * allows attaching already registered XDP memory model to Rxq info;
> >>> * allows mixing pages from several Page Pools within one XDP frame;
> >>> * optimizes &xdp_frame structure and removes no-more-used field;
> >>> * adds generic functions to build skbs from xdp_buffs (regular and
> >>>   XSk) and attach frags to xdp_buffs (regular and XSk);
> >>> * adds helper to optimize XSk xmit in drivers;
> >>> * extends libeth Rx to support XDP requirements (headroom etc.) on Rx;
> >>> * adds libeth_xdp -- libeth module with common XDP and XSk routines.
> >>
> >> This clearly could be multiple series, please don't go over the limit.
> > 
> > Targeting different subsystems and thus reviewers. The XDP, page_pool
> > and AF_XDP changes might move faster on their own.
> 
> Reviewers for page_pool, XDP and XSk (no idea why everyone names it
> AF_XDP) are 90% of the time the same people.
> Oftentimes, you can't avoid cross-subsystem patches. These three are
> closely tied to each other.
> 
> > 
> > If pulling those out into separate series, that also allows splitting
> > up the last patch. That weighs in at 3481 LoC, out of 4400 for the
> > series.
> 
> 1500 of which is kdoc if you read the cover letter.
> 
> libeth_xdp depends on every patch in the series. I don't know why you
> believe this would move any faster. Almost the whole series got
> reviewed relatively quickly, except for the drivers/intel folder,
> which people often tend to avoid.

Smaller focused series might have been merged already.
 
> I remind you that the initial libeth + iavf series (11 patches) was
> baking on LKML for one year. Here, 2 Chapters went into the kernel
> within 2 windows, and only this one (clearly much bigger than the
> previous ones, and containing only generic changes in contrast to the
> previous ones, which had only /intel code) didn't follow this rule,
> which doesn't necessarily mean it will be stuck for too long.
> 
> (+ I clearly mentioned several times that Chapter III will take longer
>  than the rest and each time you had no issues with that)

This is a misunderstanding. I need a working feature, on a predictable
timeline, in distro kernels.

> > 
> > The first 3 patches are not essential to IDPF XDP + AF_XDP either.
> 
> You don't seem to read the code. libeth_xdp won't even build without them.

Not as written, no, obviously.

> I don't believe the model taken by some developers (not naming names
> out loud) of "let's submit minimal changes and almost-draft code, I
> promise I'll create a todo list and will be polishing it within the
> next x years" works at all, let alone that it works better than
> sending polished, mature code (which I hope this is).
> 
> > The IDPF feature does not have to depend on them.
> > 
> > Does not matter for upstream, but for the purpose of backporting this
> > to distro kernels, it helps if the driver feature minimizes dependency
> > on core kernel API changes. If patch 19 can be made to work without
> 
> OOT style of thinking.
> Minimizing core changes == artificially limiting optimization and
> functionality potential.
> New kernels > LTSes, and especially custom kernels which receive
> non-upstream (== not officially supported by the community) feature
> backports. Upstream shouldn't sacrifice anything in favor of those;
> that way, we'd end up one day sacrificing stuff for out-of-tree
> drivers (which I know some people already try to do).

Opinionated positions. Nice if you have unlimited time.

> > some of the changes in 1..18, that makes it more robust from that PoV.
> 
> No it can't, I thought people first read the code and only then comment,
> otherwise it's just wasting time.
>
> Thanks,
> Olek
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Alexander Lobakin 2 days, 13 hours ago
From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
Date: Tue, 19 Nov 2024 10:14:28 -0500

> Alexander Lobakin wrote:
>> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
>> Date: Sat, 16 Nov 2024 10:31:08 -0500

[...]

>> libeth_xdp depends on every patch in the series. I don't know why you
>> believe this would move any faster. Almost the whole series got
>> reviewed relatively quickly, except for the drivers/intel folder,
>> which people often tend to avoid.
> 
> Smaller focused series might have been merged already.

Half of this series merged wouldn't change that the whole set wouldn't
fit into one window (which is what you want). Half of this series merged
wouldn't allow sending idpf XDP parts.

>  
>> I remind you that the initial libeth + iavf series (11 patches) was
>> baking on LKML for one year. Here, 2 Chapters went into the kernel
>> within 2 windows, and only this one (clearly much bigger than the
>> previous ones, and containing only generic changes in contrast to the
>> previous ones, which had only /intel code) didn't follow this rule,
>> which doesn't necessarily mean it will be stuck for too long.
>>
>> (+ I clearly mentioned several times that Chapter III will take longer
>>  than the rest and each time you had no issues with that)
> 
> This is a misunderstanding. I need a working feature, on a predictable
> timeline, in distro kernels.

A predictable timeline is not what upstream is about, at least when it
comes to series which introduce a lot of generic changes / additions.
A good example is PFCP offload in ice: the initial support was done
and sent in spring 2022, then it took almost 2 years until it landed
in the kernel. The first series was of good quality, but there'll
always be discussions, different opinions, etc.

I've no idea what misunderstanding you are talking about; I quoted
what Oregon told me quoting you. The email I sent with a per-patch
breakdown of why none of them can be dropped from upstream XDP for
idpf, you seemed to ignore; at least I haven't seen any reply. I've no
idea what they promise you each kernel release, but I haven't promised
anything except sending the first working RFC by the end of 2023,
which was done back then, because promising that feature X will
definitely land in upstream release Y would mean lying. There's always
a risk that even a small series can easily miss 1-3 kernel releases.
Take a look at Amit's comment. It involves additional work which I
didn't expect. I'm planning to do it while the window is closed, as
the suggestion is perfectly valid and I don't have any arguments
against it.
Feel free to go there and argue that the comment is not valid because
you want the series merged ASAP, if you think that this "argument"
works upstream.

> 
>>>
>>> The first 3 patches are not essential to IDPF XDP + AF_XDP either.
>>
>> You don't seem to read the code. libeth_xdp won't even build without them.
> 
> Not as written, no, obviously.

If you want to compare with the OOT implementation for the 10th time,
let me remind you that it differs from the upstream version of idpf a
ton. The OOT driver still doesn't use Page Pool (without which idpf
wouldn't have been accepted upstream at all), for example, which
automatically drops the dependency on several big patches from this
series. The OOT implementation performs X times worse than upstream
ice. It still forces header split to be turned off when an XDP prog is
installed. It still uses hardcoded Rx buffer sizes. I could go on
forever enumerating things in the OOT driver that are unacceptable
upstream.

> 
>> I don't believe the model taken by some developers (not naming names
>> out loud) of "let's submit minimal changes and almost-draft code, I
>> promise I'll create a todo list and will be polishing it within the
>> next x years" works at all, let alone that it works better than
>> sending polished, mature code (which I hope this is).
>>
>>> The IDPF feature does not have to depend on them.
>>>
>>> Does not matter for upstream, but for the purpose of backporting this
>>> to distro kernels, it helps if the driver feature minimizes dependency
>>> on core kernel API changes. If patch 19 can be made to work without
>>
>> OOT style of thinking.
>> Minimizing core changes == artificially limiting optimization and
>> functionality potential.
>> New kernels > LTSes, and especially custom kernels which receive
>> non-upstream (== not officially supported by the community) feature
>> backports. Upstream shouldn't sacrifice anything in favor of those;
>> that way, we'd end up one day sacrificing stuff for out-of-tree
>> drivers (which I know some people already try to do).
> 
> Opinionated positions. Nice if you have unlimited time.

I clearly remember Kuba's position that he wants good quality of
networking core and driver code. I'm pretty sure every netdev maintainer
has the same position. Again, feel free to argue with them, saying they
must take whatever trash is sent to LKML because customer X wants it
backported to his custom kernel Y ASAP.

> 
>>> some of the changes in 1..18, that makes it more robust from that PoV.
>>
>> No it can't, I thought people first read the code and only then comment,
>> otherwise it's just wasting time.

Thanks,
Olek
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Willem de Bruijn 1 day, 13 hours ago
Alexander Lobakin wrote:
> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
> Date: Tue, 19 Nov 2024 10:14:28 -0500
> 
> > Alexander Lobakin wrote:
> >> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
> >> Date: Sat, 16 Nov 2024 10:31:08 -0500
> 
> [...]
> 
> >> libeth_xdp depends on every patch in the series. I don't know why you
> >> believe this would move any faster. Almost the whole series got
> >> reviewed relatively quickly, except for the drivers/intel folder,
> >> which people often tend to avoid.
> > 
> > Smaller focused series might have been merged already.
> 
> Half of this series merged wouldn't change that the whole set wouldn't
> fit into one window (which is what you want). Half of this series merged
> wouldn't allow sending idpf XDP parts.

I meant that smaller series are easier to facilitate feedback and
iterate on quickly. So multiple focused series can make the same
window.
 
> >  
> >> I remind you that the initial libeth + iavf series (11 patches) was
> >> baking on LKML for one year. Here, 2 Chapters went into the kernel
> >> within 2 windows, and only this one (clearly much bigger than the
> >> previous ones, and containing only generic changes in contrast to the
> >> previous ones, which had only /intel code) didn't follow this rule,
> >> which doesn't necessarily mean it will be stuck for too long.
> >>
> >> (+ I clearly mentioned several times that Chapter III will take longer
> >>  than the rest and each time you had no issues with that)
> > 
> > This is a misunderstanding. I need a working feature, on a predictable
> > timeline, in distro kernels.
> 
> A predictable timeline is not what upstream is about, at least when it
> comes to series which introduce a lot of generic changes / additions.
> A good example is PFCP offload in ice: the initial support was done
> and sent in spring 2022, then it took almost 2 years until it landed
> in the kernel. The first series was of good quality, but there'll
> always be discussions, different opinions, etc.
>
> I've no idea what misunderstanding you are talking about; I quoted
> what Oregon told me quoting you. The email I sent with a per-patch
> breakdown of why none of them can be dropped from upstream XDP for
> idpf, you seemed to ignore; at least I haven't seen any reply. I've no
> idea what they promise you each kernel release, but I haven't promised
> anything except sending the first working RFC by the end of 2023,
> which was done back then, because promising that feature X will
> definitely land in upstream release Y would mean lying. There's always
> a risk that even a small series can easily miss 1-3 kernel releases.
> Take a look at Amit's comment. It involves additional work which I
> didn't expect. I'm planning to do it while the window is closed, as
> the suggestion is perfectly valid and I don't have any arguments
> against it.
> Feel free to go there and argue that the comment is not valid because
> you want the series merged ASAP, if you think that this "argument"
> works upstream.
> 
> > 
> >>>
> >>> The first 3 patches are not essential to IDPF XDP + AF_XDP either.
> >>
> >> You don't seem to read the code. libeth_xdp won't even build without them.
> > 
> > Not as written, no, obviously.
> 
> If you want to compare with the OOT implementation for the 10th time,
> let me remind you that it differs from the upstream version of idpf a
> ton. The OOT driver still doesn't use Page Pool (without which idpf
> wouldn't have been accepted upstream at all), for example, which
> automatically drops the dependency on several big patches from this
> series. The OOT implementation performs X times worse than upstream
> ice. It still forces header split to be turned off when an XDP prog is
> installed. It still uses hardcoded Rx buffer sizes. I could go on
> forever enumerating things in the OOT driver that are unacceptable
> upstream.

I was not referring to the OOT series. See below.

> > 
> >> I don't believe the model taken by some developers (not naming names
> >> out loud) of "let's submit minimal changes and almost-draft code, I
> >> promise I'll create a todo list and will be polishing it within the
> >> next x years" works at all, let alone that it works better than
> >> sending polished, mature code (which I hope this is).
> >>
> >>> The IDPF feature does not have to depend on them.
> >>>
> >>> Does not matter for upstream, but for the purpose of backporting this
> >>> to distro kernels, it helps if the driver feature minimizes dependency
> >>> on core kernel API changes. If patch 19 can be made to work without
> >>
> >> OOT style of thinking.
> >> Minimizing core changes == artificially limiting optimization and
> >> functionality potential.
> >> New kernels > LTSes, and especially custom kernels which receive
> >> non-upstream (== not officially supported by the community) feature
> >> backports. Upstream shouldn't sacrifice anything in favor of those;
> >> that way, we'd end up one day sacrificing stuff for out-of-tree
> >> drivers (which I know some people already try to do).
> > 
> > Opinionated positions. Nice if you have unlimited time.
> 
> I clearly remember Kuba's position that he wants good quality of
> networking core and driver code. I'm pretty sure every netdev maintainer
> has the same position. Again, feel free to argue with them, saying they
> must take whatever trash is sent to LKML because customer X wants it
> backported to his custom kernel Y ASAP.

Not asking for massive changes, just suggesting a different patch
order. That does not affect code quality.

The core feature set does not depend on loop unrolling, constification,
removal of xdp_frame::mem.id, etcetera. These can probably be reviewed
and merged more quickly independently of this driver change, even.

Within IDPF, the same goes for per-queue(set) link up/down and chunked
virtchannel messages. A deeper analysis can probably carve out
other changes not critical to landing XDP/XSK (sw marker removal).

> > 
> >>> some of the changes in 1..18, that makes it more robust from that PoV.
> >>
> >> No it can't, I thought people first read the code and only then comment,
> >> otherwise it's just wasting time.
> 
> Thanks,
> Olek
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Alexander Lobakin 1 day, 11 hours ago
From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
Date: Thu, 21 Nov 2024 10:43:12 -0500

> Alexander Lobakin wrote:
>> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
>> Date: Tue, 19 Nov 2024 10:14:28 -0500
>>
>>> Alexander Lobakin wrote:
>>>> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
>>>> Date: Sat, 16 Nov 2024 10:31:08 -0500
>>
>> [...]
>>
>>>> libeth_xdp depends on every patch in the series. I don't know why you
>>>> believe this would move any faster. Almost the whole series got
>>>> reviewed relatively quickly, except for the drivers/intel folder,
>>>> which people often tend to avoid.
>>>
>>> Smaller focused series might have been merged already.
>>
>> Half of this series merged wouldn't change that the whole set wouldn't
>> fit into one window (which is what you want). Half of this series merged
>> wouldn't allow sending idpf XDP parts.
> 
> I meant that smaller series are easier to facilitate feedback and
> iterate on quickly. So multiple focused series can make the same
> window.

You get reviews on more patches with bigger series. I'm not saying 19
is fine, but I don't see how this series, split into two, could get
reviewed and accepted in full when the whole series didn't manage
that.
And please don't say that the delays between different revisions were
too long. I don't remember Mina sending devmem every single day. I
already hit the top negative review:series ratio score this window,
while also reviewing stuff when I had time.
Chapter II also got delayed a bit because most of the maintainers were
on vacation, and I was helping with the reviews back then as well.
It's not only about code when it comes to upstream; it's not just you
and me here.

[...]

>> I clearly remember Kuba's position that he wants good quality of
>> networking core and driver code. I'm pretty sure every netdev maintainer
>> has the same position. Again, feel free to argue with them, saying they
>> must take whatever trash is sent to LKML because customer X wants it
>> backported to his custom kernel Y ASAP.
> 
> Not asking for massive changes, just suggesting a different patch
> order. That does not affect code quality.
> 
> The core feature set does not depend on loop unrolling, constification,

Do I need to remind you once again that you have mail from me
somewhere describing every patch in detail and why it's needed?
When Kuba and I agreed to drop stats from Chapter II, it took me a
while to resolve all the dependencies, but you keep saying that
wasting time on downgrading code is better and faster than upstreaming
what was already done and works well.

> removal of xdp_frame::mem.id, etcetera. These can probably be reviewed

You can already see that I'm receiving additional requests (Amit).
Sometimes generic (and not only generic) changes cause a chain
reaction, and you can't say to people "let me handle this later",
because once again there's no "later" here.
idpf still has 3 big lists of todos/followups pending since it was
upstreamed, and I haven't seen any activity there (not my team's
responsibility), so I don't believe in "laters".

> and merged more quickly independently of this driver change, even.
> 
> Within IDPF, the same goes for per-queue(set) link up/down and chunked
> virtchannel messages. A deeper analysis can probably carve out
> other changes not critical to landing XDP/XSK (sw marker removal).

You also said once that XDP Rx Hints could be implemented in 3 lines,
while it took a couple hundred lines and several revisions for Larysa
to implement them in ice.

Just BTW, even if Chapters 3 + 4 + 5 were taken literally today, XDP
doesn't work on C0 board revisions at all because the FW is broken,
and I also haven't seen any activity on fixing this for half a year
already.

> 
>>>
>>>>> some of the changes in 1..18, that makes it more robust from that PoV.
>>>>
>>>> No it can't, I thought people first read the code and only then comment,
>>>> otherwise it's just wasting time.

Thanks,
Olek
Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Posted by Willem de Bruijn 1 day, 10 hours ago
Alexander Lobakin wrote:
> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
> Date: Thu, 21 Nov 2024 10:43:12 -0500
> 
> > Alexander Lobakin wrote:
> >> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
> >> Date: Tue, 19 Nov 2024 10:14:28 -0500
> >>
> >>> Alexander Lobakin wrote:
> >>>> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
> >>>> Date: Sat, 16 Nov 2024 10:31:08 -0500
> >>
> >> [...]
> >>
> >>>> libeth_xdp depends on every patch in the series. I don't know why you
> >>>> believe this would move any faster. Almost the whole series got
> >>>> reviewed relatively quickly, except for the drivers/intel folder,
> >>>> which people often tend to avoid.
> >>>
> >>> Smaller focused series might have been merged already.
> >>
> >> Half of this series merged wouldn't change that the whole set wouldn't
> >> fit into one window (which is what you want). Half of this series merged
> >> wouldn't allow sending idpf XDP parts.
> > 
> > I meant that smaller series are easier to facilitate feedback and
> > iterate on quickly. So multiple focused series can make the same
> > window.
> 
> You get reviews on more patches with bigger series. I'm not saying 19
> is fine, but I don't see how this series, split into two, could get
> reviewed and accepted in full when the whole series didn't manage
> that.
> And please don't say that the delays between different revisions were
> too long. I don't remember Mina sending devmem every single day. I
> already hit the top negative review:series ratio score this window,
> while also reviewing stuff when I had time.
> Chapter II also got delayed a bit because most of the maintainers were
> on vacation, and I was helping with the reviews back then as well.
> It's not only about code when it comes to upstream; it's not just you
> and me here.
> 
> [...]
> 
> >> I clearly remember Kuba's position that he wants good quality of
> >> networking core and driver code. I'm pretty sure every netdev maintainer
> >> has the same position. Again, feel free to argue with them, saying they
> >> must take whatever trash is sent to LKML because customer X wants it
> >> backported to his custom kernel Y ASAP.
> > 
> > Not asking for massive changes, just suggesting a different patch
> > order. That does not affect code quality.
> > 
> > The core feature set does not depend on loop unrolling, constification,
> 
> Do I need to remind you once again that you have mail from me
> somewhere describing every patch in detail and why it's needed?
> When Kuba and I agreed to drop stats from Chapter II, it took me a
> while to resolve all the dependencies, but you keep saying that
> wasting time on downgrading code is better and faster than upstreaming
> what was already done and works well.
> 
> > removal of xdp_frame::mem.id, etcetera. These can probably be reviewed
> 
> You can already see that I'm receiving additional requests (Amit).
> Sometimes generic (and not only generic) changes cause a chain
> reaction, and you can't say to people "let me handle this later",
> because once again there's no "later" here.
> idpf still has 3 big lists of todos/followups pending since it was
> upstreamed, and I haven't seen any activity there (not my team's
> responsibility), so I don't believe in "laters".
> 
> > and merged more quickly independently of this driver change, even.
> > 
> > Within IDPF, the same goes for per-queue(set) link up/down and chunked
> > virtchannel messages. A deeper analysis can probably carve out
> > other changes not critical to landing XDP/XSK (sw marker removal).
> 
> You also said once that XDP Rx Hints could be implemented in 3 lines,
> while it took a couple hundred lines and several revisions for Larysa
> to implement them in ice.
>
> Just BTW, even if Chapters 3 + 4 + 5 were taken literally today, XDP
> doesn't work on C0 board revisions at all because the FW is broken,
> and I also haven't seen any activity on fixing this for half a year
> already.

I am not aware of this restriction. I've definitely been running
variants of your github version on various boards. Let's discuss
offline.