From: Saeed Mahameed <saeedm@nvidia.com>
On netdev_rx_queue_restart, a special type of page pool may be
expected.

Declare support for UNREADABLE netmem iov pages in the page pool
params only when the header/data split (SHAMPO) RQ mode is enabled,
and set the queue index in the page pool params struct.

SHAMPO mode is required because, without header split, RX needs to
peek at the data, so UNREADABLE_NETMEM cannot be used.
The patch also enables the use of a separate page pool for headers
when a memory provider is installed for the queue; otherwise, the same
common page pool continues to be used.
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 5e649705e35f..a51e204bd364 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -749,7 +749,9 @@ static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
 
 static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
 {
-	return false;
+	struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
+
+	return !!rxq->mp_params.mp_ops;
 }
 
 static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
@@ -964,6 +966,11 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 	pp_params.netdev = rq->netdev;
 	pp_params.dma_dir = rq->buff.map_dir;
 	pp_params.max_len = PAGE_SIZE;
+	pp_params.queue_idx = rq->ix;
+
+	/* Shampo header data split allows for unreadable netmem */
+	if (test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state))
+		pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
 
 	/* page_pool can be used even when there is no rq->xdp_prog,
 	 * given page_pool does not handle DMA mapping there is no
--
2.34.1
On Tue, Jun 10, 2025 at 8:20 AM Mark Bloch <mbloch@nvidia.com> wrote:
>
> From: Saeed Mahameed <saeedm@nvidia.com>
>
> [...]
>
>  static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
>  {
> -	return false;
> +	struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
> +
> +	return !!rxq->mp_params.mp_ops;

This is kinda assuming that all future memory providers will return
unreadable memory, which is not a restriction I have in mind... in
theory there is nothing wrong with memory providers that feed readable
pages. Technically the right thing to do here is to define a new
helper page_pool_is_readable() and have the mp report to the pp if
it's all readable or not.

But all this sounds like a huge hassle for an unnecessary amount of
future proofing, so I guess this is fine.

Reviewed-by: Mina Almasry <almasrymina@google.com>

--
Thanks,
Mina
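As a rough illustration of the helper being suggested above: the
memory provider would report readability and the page pool would
expose it. This is a hypothetical sketch only; neither the
is_readable op nor page_pool_is_readable() exists upstream in this
form, and the real memory_provider_ops members are elided.

	/* Hypothetical sketch: an mp-reported readability flag
	 * surfaced through a new page pool helper.
	 */
	struct memory_provider_ops {
		/* ... existing ops elided ... */
		bool (*is_readable)(void *mp_priv);	/* hypothetical */
	};

	static inline bool page_pool_is_readable(const struct page_pool *pool)
	{
		/* No provider installed: the pool hands out plain
		 * kernel pages, which are always CPU-readable.
		 */
		if (!pool->mp_ops)
			return true;

		/* Otherwise, let the provider itself report whether
		 * the memory it feeds the pool is CPU-readable.
		 */
		return pool->mp_ops->is_readable &&
		       pool->mp_ops->is_readable(pool->mp_priv);
	}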
On Wed, Jun 11, 2025 at 10:16:18PM -0700, Mina Almasry wrote:
> On Tue, Jun 10, 2025 at 8:20 AM Mark Bloch <mbloch@nvidia.com> wrote:
> >
> > From: Saeed Mahameed <saeedm@nvidia.com>
> >
> > [...]
> >
> >  static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
> >  {
> > -	return false;
> > +	struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
> > +
> > +	return !!rxq->mp_params.mp_ops;
>
> This is kinda assuming that all future memory providers will return
> unreadable memory, which is not a restriction I have in mind... in
> theory there is nothing wrong with memory providers that feed readable
> pages. Technically the right thing to do here is to define a new
> helper page_pool_is_readable() and have the mp report to the pp if
> it's all readable or not.
>
The API is already there: page_pool_is_unreadable(). But it uses the
same logic...

However, having a pp level API is a bit limiting: as Cosmin pointed out,
mlx5 can't use it because it needs to know in advance if this page_pool
is for unreadable memory to correctly size the data page_pool (with or
without headers).

> But all this sounds like a huge hassle for an unnecessary amount of
> future proofing, so I guess this is fine.
>
> Reviewed-by: Mina Almasry <almasrymina@google.com>
>
Thanks!

Dragos
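For reference, the existing helper mentioned above boils down to the
same provider-presence check that the mlx5 code performs. This is a
paraphrase, not the exact upstream definition:

	/* Approximation of the existing helper: "unreadable" is
	 * inferred purely from a memory provider being attached,
	 * mirroring the rxq->mp_params.mp_ops test in
	 * mlx5_rq_needs_separate_hd_pool().
	 */
	static inline bool page_pool_is_unreadable(struct page_pool *pool)
	{
		return !!pool->mp_ops;
	}

The limitation Dragos describes is one of timing: this helper needs
an already-created page_pool, while mlx5 has to decide whether to
create a separate header pool before any pool exists, so it checks
the queue's mp_params directly instead.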
On Thu, Jun 12, 2025 at 1:46 AM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> On Wed, Jun 11, 2025 at 10:16:18PM -0700, Mina Almasry wrote:
> > On Tue, Jun 10, 2025 at 8:20 AM Mark Bloch <mbloch@nvidia.com> wrote:
> > >
> > > From: Saeed Mahameed <saeedm@nvidia.com>
> > >
> > > [...]
> > >
> > >  static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
> > >  {
> > > -	return false;
> > > +	struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
> > > +
> > > +	return !!rxq->mp_params.mp_ops;
> >
> > This is kinda assuming that all future memory providers will return
> > unreadable memory, which is not a restriction I have in mind... in
> > theory there is nothing wrong with memory providers that feed readable
> > pages. Technically the right thing to do here is to define a new
> > helper page_pool_is_readable() and have the mp report to the pp if
> > it's all readable or not.
> >
> The API is already there: page_pool_is_unreadable(). But it uses the
> same logic...
>
Ugh, I was evidently not paying attention when that was added. I guess
everyone thinks memory provider == unreadable memory. I think it's more
a coincidence that the first 2 memory providers give unreadable memory.
Whatever I guess; it's good enough for now :D

> However, having a pp level API is a bit limiting: as Cosmin pointed out,
> mlx5 can't use it because it needs to know in advance if this page_pool
> is for unreadable memory to correctly size the data page_pool (with or
> without headers).
>
Yeah, in that case mlx5 would do something like:

	return !rxq->mp_params.mp_ops->is_readable();

If we decided that mp's could report if they're readable or not.

For now I guess assuming all mps are unreadable is fine.

--
Thanks,
Mina
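Expanding that one-liner into context, the driver-side check would
look roughly like the sketch below. Again, is_readable() is a
hypothetical memory_provider_ops member, not a real upstream API, and
the mp_priv argument is assumed plumbing:

	/* Hypothetical variant of mlx5_rq_needs_separate_hd_pool():
	 * split headers into their own pool only when the installed
	 * provider actually serves CPU-unreadable memory.
	 */
	static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
	{
		struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);

		/* No provider installed: the common pool is fine. */
		if (!rxq->mp_params.mp_ops)
			return false;

		/* hypothetical op; assumes all-or-nothing readability */
		return !rxq->mp_params.mp_ops->is_readable(rxq->mp_params.mp_priv);
	}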