[PATCH net v1] net: stmmac: Apply new page pool parameters when SPH is enabled

Posted by Furong Xu 1 year ago
Commit df542f669307 ("net: stmmac: Switch to zero-copy in
non-XDP RX path") makes the DMA write the received frame into the
buffer at an offset of NET_SKB_PAD and sets the page pool parameters
to sync from an offset of NET_SKB_PAD. But when Header Payload Split
is enabled, the header is written at an offset of NET_SKB_PAD, while
the payload is written at an offset of zero. The incorrect offset
parameter for the payload breaks DMA coherence [1], since both the
CPU and the DMA touch the page buffer from offset zero, which is not
covered by the page pool sync parameter.

And in case the DMA cannot split the received frame, for example a
large L2 frame, pp_params.max_len should grow to cover the tail of
the entire frame.

[1] https://lore.kernel.org/netdev/d465f277-bac7-439f-be1d-9a47dfe2d951@nvidia.com/
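
To make the coherence problem concrete, below is a minimal sketch of
the sync-for-device step the page pool performs before handing a
recycled page back to the hardware. This is a simplified illustration
of the mechanism, not the exact net/core/page_pool.c code, and the
helper name is invented for this example:

#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

/* Simplified illustration: before a recycled page is handed back to
 * the hardware, the page pool syncs only the window
 * [pool->p.offset, pool->p.offset + sync_size) for the device.
 */
static void pp_sync_for_device_sketch(struct page_pool *pool,
				      struct page *page,
				      unsigned int sync_size)
{
	dma_addr_t dma = page_pool_get_dma_addr(page);

	/* With pp_params.offset == NET_SKB_PAD and SPH enabled, the
	 * payload the hardware writes at offset zero lies before this
	 * window, so it is never synced and the CPU and device views
	 * of those cachelines can diverge.
	 */
	dma_sync_single_range_for_device(pool->p.dev, dma,
					 pool->p.offset, sync_size,
					 pool->p.dma_dir);
}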

Reported-by: Jon Hunter <jonathanh@nvidia.com>
Reported-by: Brad Griffis <bgriffis@nvidia.com>
Suggested-by: Ido Schimmel <idosch@idosch.org>
Fixes: df542f669307 ("net: stmmac: Switch to zero-copy in non-XDP RX path")
Signed-off-by: Furong Xu <0x1207@gmail.com>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index b34ebb916b89..c0ae7db96f46 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2094,6 +2094,11 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
 	pp_params.offset = stmmac_rx_offset(priv);
 	pp_params.max_len = dma_conf->dma_buf_sz;
 
+	if (priv->sph) {
+		pp_params.offset = 0;
+		pp_params.max_len += stmmac_rx_offset(priv);
+	}
+
 	rx_q->page_pool = page_pool_create(&pp_params);
 	if (IS_ERR(rx_q->page_pool)) {
 		ret = PTR_ERR(rx_q->page_pool);
-- 
2.34.1
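
Restated outside of diff context, the net effect of the hunk above is
roughly the following. This is a sketch mirroring the
__alloc_dma_rx_desc_resources() setup, under the commit message's
assumption that stmmac_rx_offset() resolves to NET_SKB_PAD when XDP is
not in use; "headroom" is an illustrative local name:

/* Sketch of the resulting parameters, not a literal copy of the
 * driver code.
 */
unsigned int headroom = stmmac_rx_offset(priv); /* NET_SKB_PAD without XDP */

pp_params.offset  = headroom;
pp_params.max_len = dma_conf->dma_buf_sz;

if (priv->sph) {
	/* SPH: header DMA lands at "headroom", payload DMA at offset
	 * zero. Sync from the start of the page and widen the window
	 * to [0, headroom + dma_buf_sz) so it covers the header, the
	 * payload, and an unsplit frame whose tail can reach
	 * headroom + dma_buf_sz into the page.
	 */
	pp_params.offset = 0;
	pp_params.max_len = dma_conf->dma_buf_sz + headroom;
}

In the non-SPH case the parameters are unchanged, so the headroom-based
sync window from commit df542f669307 still applies there.
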
Re: [PATCH net v1] net: stmmac: Apply new page pool parameters when SPH is enabled
Posted by Ido Schimmel 12 months ago
On Fri, Feb 07, 2025 at 04:56:39PM +0800, Furong Xu wrote:
> Commit df542f669307 ("net: stmmac: Switch to zero-copy in
> non-XDP RX path") makes the DMA write the received frame into the
> buffer at an offset of NET_SKB_PAD and sets the page pool parameters
> to sync from an offset of NET_SKB_PAD. But when Header Payload Split
> is enabled, the header is written at an offset of NET_SKB_PAD, while
> the payload is written at an offset of zero. The incorrect offset
> parameter for the payload breaks DMA coherence [1], since both the
> CPU and the DMA touch the page buffer from offset zero, which is not
> covered by the page pool sync parameter.
> 
> And in case the DMA cannot split the received frame, for example a
> large L2 frame, pp_params.max_len should grow to cover the tail of
> the entire frame.
> 
> [1] https://lore.kernel.org/netdev/d465f277-bac7-439f-be1d-9a47dfe2d951@nvidia.com/
> 
> Reported-by: Jon Hunter <jonathanh@nvidia.com>
> Reported-by: Brad Griffis <bgriffis@nvidia.com>
> Suggested-by: Ido Schimmel <idosch@idosch.org>
> Fixes: df542f669307 ("net: stmmac: Switch to zero-copy in non-XDP RX path")
> Signed-off-by: Furong Xu <0x1207@gmail.com>

Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Re: [PATCH net v1] net: stmmac: Apply new page pool parameters when SPH is enabled
Posted by Jon Hunter 1 year ago
Hi Furong,

On 07/02/2025 08:56, Furong Xu wrote:
> Commit df542f669307 ("net: stmmac: Switch to zero-copy in
> non-XDP RX path") makes the DMA write the received frame into the
> buffer at an offset of NET_SKB_PAD and sets the page pool parameters
> to sync from an offset of NET_SKB_PAD. But when Header Payload Split
> is enabled, the header is written at an offset of NET_SKB_PAD, while
> the payload is written at an offset of zero. The incorrect offset
> parameter for the payload breaks DMA coherence [1], since both the
> CPU and the DMA touch the page buffer from offset zero, which is not
> covered by the page pool sync parameter.
> 
> And in case the DMA cannot split the received frame, for example a
> large L2 frame, pp_params.max_len should grow to cover the tail of
> the entire frame.
> 
> [1] https://lore.kernel.org/netdev/d465f277-bac7-439f-be1d-9a47dfe2d951@nvidia.com/
> 
> Reported-by: Jon Hunter <jonathanh@nvidia.com>
> Reported-by: Brad Griffis <bgriffis@nvidia.com>
> Suggested-by: Ido Schimmel <idosch@idosch.org>
> Fixes: df542f669307 ("net: stmmac: Switch to zero-copy in non-XDP RX path")
> Signed-off-by: Furong Xu <0x1207@gmail.com>
> ---
>   drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index b34ebb916b89..c0ae7db96f46 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -2094,6 +2094,11 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
>   	pp_params.offset = stmmac_rx_offset(priv);
>   	pp_params.max_len = dma_conf->dma_buf_sz;
>   
> +	if (priv->sph) {
> +		pp_params.offset = 0;
> +		pp_params.max_len += stmmac_rx_offset(priv);
> +	}
> +
>   	rx_q->page_pool = page_pool_create(&pp_params);
>   	if (IS_ERR(rx_q->page_pool)) {
>   		ret = PTR_ERR(rx_q->page_pool);


Thanks for sending this. I can confirm that it fixes the issue we are 
seeing and so ...

Tested-by: Jon Hunter <jonathanh@nvidia.com>

Cheers
Jon

-- 
nvpublic
Re: [PATCH net v1] net: stmmac: Apply new page pool parameters when SPH is enabled
Posted by Thierry Reding 1 year ago
On Fri, Feb 07, 2025 at 01:41:49PM +0000, Jon Hunter wrote:
> Hi Furong,
> 
> On 07/02/2025 08:56, Furong Xu wrote:
> > Commit df542f669307 ("net: stmmac: Switch to zero-copy in
> > non-XDP RX path") makes the DMA write the received frame into the
> > buffer at an offset of NET_SKB_PAD and sets the page pool parameters
> > to sync from an offset of NET_SKB_PAD. But when Header Payload Split
> > is enabled, the header is written at an offset of NET_SKB_PAD, while
> > the payload is written at an offset of zero. The incorrect offset
> > parameter for the payload breaks DMA coherence [1], since both the
> > CPU and the DMA touch the page buffer from offset zero, which is not
> > covered by the page pool sync parameter.
> > 
> > And in case the DMA cannot split the received frame, for example a
> > large L2 frame, pp_params.max_len should grow to cover the tail of
> > the entire frame.
> > 
> > [1] https://lore.kernel.org/netdev/d465f277-bac7-439f-be1d-9a47dfe2d951@nvidia.com/
> > 
> > Reported-by: Jon Hunter <jonathanh@nvidia.com>
> > Reported-by: Brad Griffis <bgriffis@nvidia.com>
> > Suggested-by: Ido Schimmel <idosch@idosch.org>
> > Fixes: df542f669307 ("net: stmmac: Switch to zero-copy in non-XDP RX path")
> > Signed-off-by: Furong Xu <0x1207@gmail.com>
> > ---
> >   drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 5 +++++
> >   1 file changed, 5 insertions(+)
> > 
> > diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> > index b34ebb916b89..c0ae7db96f46 100644
> > --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> > +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> > @@ -2094,6 +2094,11 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
> >   	pp_params.offset = stmmac_rx_offset(priv);
> >   	pp_params.max_len = dma_conf->dma_buf_sz;
> > +	if (priv->sph) {
> > +		pp_params.offset = 0;
> > +		pp_params.max_len += stmmac_rx_offset(priv);
> > +	}
> > +
> >   	rx_q->page_pool = page_pool_create(&pp_params);
> >   	if (IS_ERR(rx_q->page_pool)) {
> >   		ret = PTR_ERR(rx_q->page_pool);
> 
> 
> Thanks for sending this. I can confirm that it fixes the issue we are seeing
> and so ...
> 
> Tested-by: Jon Hunter <jonathanh@nvidia.com>

Yes, I can confirm as well. I've tested based on the discussion in the
earlier thread and had the equivalent of this patch (modulo the ->sph
check, but that's always true on this system), so:

Tested-by: Thierry Reding <treding@nvidia.com>