From: Jakub Kicinski <kuba@kernel.org>
If the user decides to increase the buffer size for the agg ring, we
need to ask the page pool for higher order pages. There is no need to
use larger pages for header frags; if the user increases the size of
the agg ring buffers, switch to a separate header page pool
automatically.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
[pavel: adjust max_len]
Reviewed-by: Mina Almasry <almasrymina@google.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
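
Side note: below is a minimal userspace sketch of the order/max_len
arithmetic used by this patch, assuming 4 KiB base pages; get_order()
is re-implemented here purely for illustration, the patch itself uses
the kernel helper of the same name.

#include <stdio.h>

#define PAGE_SHIFT	12			/* assumed 4 KiB base pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* illustrative stand-in for the kernel's get_order() helper */
static unsigned int get_order(unsigned long size)
{
	unsigned int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	/* example agg buffer sizes a user might configure */
	unsigned long sizes[] = { 4096, 8192, 16384, 32768 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned int order = get_order(sizes[i]);

		/* max_len spans the whole (compound) page, as in the patch */
		printf("rx_page_size=%lu -> order=%u max_len=%lu\n",
		       sizes[i], order, PAGE_SIZE << order);
	}
	return 0;
}

For a 4 KiB rx_page_size this keeps order 0 and max_len == PAGE_SIZE,
so behaviour is unchanged; only larger agg buffers bump the order and,
with this patch, force the separate header pool.
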
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 13286f4a2fa7..5c57b2a5c51c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3829,11 +3829,13 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size / rx_size_fac;
+
+	pp.order = get_order(bp->rx_page_size);
 	pp.nid = numa_node;
 	pp.netdev = bp->dev;
 	pp.dev = &bp->pdev->dev;
 	pp.dma_dir = bp->rx_dir;
-	pp.max_len = PAGE_SIZE;
+	pp.max_len = PAGE_SIZE << pp.order;
 	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV |
 		   PP_FLAG_ALLOW_UNREADABLE_NETMEM;
 	pp.queue_idx = rxr->bnapi->index;
@@ -3844,7 +3846,10 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	rxr->page_pool = pool;
 
 	rxr->need_head_pool = page_pool_is_unreadable(pool);
+	rxr->need_head_pool |= !!pp.order;
 	if (bnxt_separate_head_pool(rxr)) {
+		pp.order = 0;
+		pp.max_len = PAGE_SIZE;
 		pp.pool_size = min(bp->rx_ring_size / rx_size_fac, 1024);
 		pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 		pool = page_pool_create(&pp);
--
2.49.0