From nobody Sat Feb 7 14:51:04 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v9 1/9] net: memzero mp params when closing a queue
Date: Thu, 15 Jan 2026 17:11:54 +0000
Message-ID: <7073bb4b696f5593c1f2e0b9451f0120ca624182.1768493907.git.asml.silence@gmail.com>

Instead of resetting the memory provider parameters one by one in
__net_mp_{open,close}_rxq(), memzero the entire structure. This prepares
for extending the structure in a later patch.
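For illustration only (the struct layout below is taken from patch 4/9 of
this series, which adds rx_page_size): once pp_memory_provider_params
grows extra members, the open-coded variant would have to be updated in
both functions, while the memset keeps covering the whole struct:

	struct pp_memory_provider_params {
		void *mp_priv;
		const struct memory_provider_ops *mp_ops;
		u32 rx_page_size;	/* added later in the series */
	};

	/* wipes mp_priv, mp_ops and any future members in one go */
	memset(&rxq->mp_params, 0, sizeof(rxq->mp_params));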
Signed-off-by: Pavel Begunkov
---
 net/core/netdev_rx_queue.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index c7d9341b7630..a0083f176a9c 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -139,10 +139,9 @@ int __net_mp_open_rxq(struct net_device *dev, unsigned int rxq_idx,
 
 	rxq->mp_params = *p;
 	ret = netdev_rx_queue_restart(dev, rxq_idx);
-	if (ret) {
-		rxq->mp_params.mp_ops = NULL;
-		rxq->mp_params.mp_priv = NULL;
-	}
+	if (ret)
+		memset(&rxq->mp_params, 0, sizeof(rxq->mp_params));
+
 	return ret;
 }
 
@@ -179,8 +178,7 @@ void __net_mp_close_rxq(struct net_device *dev, unsigned int ifq_idx,
 		     rxq->mp_params.mp_priv != old_p->mp_priv))
 		return;
 
-	rxq->mp_params.mp_ops = NULL;
-	rxq->mp_params.mp_priv = NULL;
+	memset(&rxq->mp_params, 0, sizeof(rxq->mp_params));
 	err = netdev_rx_queue_restart(dev, ifq_idx);
 	WARN_ON(err && err != -ENETDOWN);
 }
-- 
2.52.0

From nobody Sat Feb 7 14:51:04 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v9 2/9] net: reduce indent of struct netdev_queue_mgmt_ops members
Date: Thu, 15 Jan 2026 17:11:55 +0000
Message-ID: <92d76cf96dcbc3c58daa84dbbf71a3ca8d9de53d.1768493907.git.asml.silence@gmail.com>

From: Jakub Kicinski

Trivial change, reduce the indent. I think the original indentation was
copied from the real NDO definitions. It's unnecessarily deep and makes
passing struct arguments awkward.
Signed-off-by: Jakub Kicinski
Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
---
 include/net/netdev_queues.h | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index cd00e0406cf4..541e7d9853b1 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -135,20 +135,20 @@ void netdev_stat_queue_sum(struct net_device *netdev,
  * be called for an interface which is open.
  */
 struct netdev_queue_mgmt_ops {
-	size_t			ndo_queue_mem_size;
-	int			(*ndo_queue_mem_alloc)(struct net_device *dev,
-						       void *per_queue_mem,
-						       int idx);
-	void			(*ndo_queue_mem_free)(struct net_device *dev,
-						      void *per_queue_mem);
-	int			(*ndo_queue_start)(struct net_device *dev,
-						   void *per_queue_mem,
-						   int idx);
-	int			(*ndo_queue_stop)(struct net_device *dev,
-						  void *per_queue_mem,
-						  int idx);
-	struct device *		(*ndo_queue_get_dma_dev)(struct net_device *dev,
-							 int idx);
+	size_t	ndo_queue_mem_size;
+	int	(*ndo_queue_mem_alloc)(struct net_device *dev,
+				       void *per_queue_mem,
+				       int idx);
+	void	(*ndo_queue_mem_free)(struct net_device *dev,
+			      void *per_queue_mem);
+	int	(*ndo_queue_start)(struct net_device *dev,
+			   void *per_queue_mem,
+			   int idx);
+	int	(*ndo_queue_stop)(struct net_device *dev,
+			  void *per_queue_mem,
+			  int idx);
+	struct device *	(*ndo_queue_get_dma_dev)(struct net_device *dev,
+						 int idx);
 };
 
 bool netif_rxq_has_unreadable_mp(struct net_device *dev, int idx);
-- 
2.52.0

From nobody Sat Feb 7 14:51:04 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v9 3/9] net: add bare bone queue configs
Date: Thu, 15 Jan 2026 17:11:56 +0000

We'll need to pass extra parameters when allocating a queue for memory
providers. Define a new structure for queue configurations and pass it
to the queue API (qapi) callbacks. It's empty for now; actual parameters
will be added in following patches.

Configurations should persist across resets, so they are
default-initialised on device registration and stored in
struct netdev_rx_queue.

We also add a new qapi callback, ndo_default_qcfg, for filling in the
defaults of a given config. It must be implemented if a driver wants to
use queue configs and is optional otherwise.
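For illustration only, a rough sketch of how a driver would plug into the
new callback; the foo_* names and the foo_queue_mem type are made up, and
the config struct has no fields until later patches in this series:

	struct foo_queue_mem { int dummy; /* driver per-queue restart state */ };

	/* fill in the driver's default per-queue settings */
	static void foo_default_qcfg(struct net_device *dev,
				     struct netdev_queue_config *qcfg)
	{
		/* nothing to set while netdev_queue_config is still empty */
	}

	static int foo_queue_start(struct net_device *dev,
				   struct netdev_queue_config *qcfg,
				   void *per_queue_mem, int idx)
	{
		/* qcfg carries either the defaults or per-queue overrides */
		return 0;
	}

	static const struct netdev_queue_mgmt_ops foo_queue_mgmt_ops = {
		.ndo_queue_mem_size	= sizeof(struct foo_queue_mem),
		.ndo_default_qcfg	= foo_default_qcfg,
		.ndo_queue_start	= foo_queue_start,
		/* ndo_queue_mem_alloc/mem_free/stop omitted for brevity */
	};

With this patch, register_netdevice() calls ndo_default_qcfg() for every
rx queue, and netdev_rx_queue_restart() passes the resulting config to
the mem_alloc and start callbacks.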
Suggested-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |  8 ++++++--
 drivers/net/ethernet/google/gve/gve_main.c    |  9 ++++++---
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 10 ++++++----
 drivers/net/ethernet/meta/fbnic/fbnic_txrx.c  |  8 ++++++--
 drivers/net/netdevsim/netdev.c                |  7 +++++--
 include/net/netdev_queues.h                   |  9 +++++++++
 include/net/netdev_rx_queue.h                 |  2 ++
 net/core/dev.c                                | 17 +++++++++++++++++
 net/core/netdev_rx_queue.c                    | 12 +++++++++---
 9 files changed, 66 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 8419d1eb4035..a0abe991f79a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -15911,7 +15911,9 @@ static const struct netdev_stat_ops bnxt_stat_ops = {
 	.get_base_stats		= bnxt_get_base_stats,
 };
 
-static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+static int bnxt_queue_mem_alloc(struct net_device *dev,
+				struct netdev_queue_config *qcfg,
+				void *qmem, int idx)
 {
 	struct bnxt_rx_ring_info *rxr, *clone;
 	struct bnxt *bp = netdev_priv(dev);
@@ -16077,7 +16079,9 @@ static void bnxt_copy_rx_ring(struct bnxt *bp,
 	dst->rx_agg_bmap = src->rx_agg_bmap;
 }
 
-static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+static int bnxt_queue_start(struct net_device *dev,
+			    struct netdev_queue_config *qcfg,
+			    void *qmem, int idx)
 {
 	struct bnxt *bp = netdev_priv(dev);
 	struct bnxt_rx_ring_info *rxr, *clone;
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 7eb64e1e4d85..c42640da15a5 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -2616,8 +2616,9 @@ static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
 	gve_rx_free_ring_dqo(priv, gve_per_q_mem, &cfg);
 }
 
-static int gve_rx_queue_mem_alloc(struct net_device *dev, void *per_q_mem,
-				  int idx)
+static int gve_rx_queue_mem_alloc(struct net_device *dev,
+				  struct netdev_queue_config *qcfg,
+				  void *per_q_mem, int idx)
 {
 	struct gve_priv *priv = netdev_priv(dev);
 	struct gve_rx_alloc_rings_cfg cfg = {0};
@@ -2638,7 +2639,9 @@ static int gve_rx_queue_mem_alloc(struct net_device *dev, void *per_q_mem,
 	return err;
 }
 
-static int gve_rx_queue_start(struct net_device *dev, void *per_q_mem, int idx)
+static int gve_rx_queue_start(struct net_device *dev,
+			      struct netdev_queue_config *qcfg,
+			      void *per_q_mem, int idx)
 {
 	struct gve_priv *priv = netdev_priv(dev);
 	struct gve_rx_ring *gve_per_q_mem;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 07fc4d2c8fad..0e2132b58257 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -5596,8 +5596,9 @@ struct mlx5_qmgmt_data {
 	struct mlx5e_channel_param cparam;
 };
 
-static int mlx5e_queue_mem_alloc(struct net_device *dev, void *newq,
-				 int queue_index)
+static int mlx5e_queue_mem_alloc(struct net_device *dev,
+				 struct netdev_queue_config *qcfg,
+				 void *newq, int queue_index)
 {
 	struct mlx5_qmgmt_data *new = (struct mlx5_qmgmt_data *)newq;
 	struct mlx5e_priv *priv = netdev_priv(dev);
@@ -5658,8 +5659,9 @@ static int mlx5e_queue_stop(struct net_device *dev, void *oldq, int queue_index)
 	return 0;
 }
 
-static int mlx5e_queue_start(struct net_device *dev, void *newq,
-			     int queue_index)
+static int mlx5e_queue_start(struct net_device *dev,
+			     struct netdev_queue_config *qcfg,
+			     void *newq, int queue_index)
 {
 	struct mlx5_qmgmt_data *new = (struct mlx5_qmgmt_data *)newq;
 	struct mlx5e_priv *priv = netdev_priv(dev);
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 13d508ce637f..e36ed25462b4 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2809,7 +2809,9 @@ void fbnic_napi_depletion_check(struct net_device *netdev)
 	fbnic_wrfl(fbd);
 }
 
-static int fbnic_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+static int fbnic_queue_mem_alloc(struct net_device *dev,
+				 struct netdev_queue_config *qcfg,
+				 void *qmem, int idx)
 {
 	struct fbnic_net *fbn = netdev_priv(dev);
 	const struct fbnic_q_triad *real;
@@ -2861,7 +2863,9 @@ static void __fbnic_nv_restart(struct fbnic_net *fbn,
 		netif_wake_subqueue(fbn->netdev, nv->qt[i].sub0.q_idx);
 }
 
-static int fbnic_queue_start(struct net_device *dev, void *qmem, int idx)
+static int fbnic_queue_start(struct net_device *dev,
+			     struct netdev_queue_config *qcfg,
+			     void *qmem, int idx)
 {
 	struct fbnic_net *fbn = netdev_priv(dev);
 	struct fbnic_napi_vector *nv;
diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
index 6927c1962277..6285fbefe38a 100644
--- a/drivers/net/netdevsim/netdev.c
+++ b/drivers/net/netdevsim/netdev.c
@@ -758,7 +758,9 @@ struct nsim_queue_mem {
 };
 
 static int
-nsim_queue_mem_alloc(struct net_device *dev, void *per_queue_mem, int idx)
+nsim_queue_mem_alloc(struct net_device *dev,
+		     struct netdev_queue_config *qcfg,
+		     void *per_queue_mem, int idx)
 {
 	struct nsim_queue_mem *qmem = per_queue_mem;
 	struct netdevsim *ns = netdev_priv(dev);
@@ -807,7 +809,8 @@ static void nsim_queue_mem_free(struct net_device *dev, void *per_queue_mem)
 }
 
 static int
-nsim_queue_start(struct net_device *dev, void *per_queue_mem, int idx)
+nsim_queue_start(struct net_device *dev, struct netdev_queue_config *qcfg,
+		 void *per_queue_mem, int idx)
 {
 	struct nsim_queue_mem *qmem = per_queue_mem;
 	struct netdevsim *ns = netdev_priv(dev);
diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index 541e7d9853b1..f6f1f71a24e1 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -14,6 +14,9 @@ struct netdev_config {
 	u8 hds_config;
 };
 
+struct netdev_queue_config {
+};
+
 /* See the netdev.yaml spec for definition of each statistic */
 struct netdev_queue_stats_rx {
 	u64 bytes;
@@ -130,6 +133,8 @@ void netdev_stat_queue_sum(struct net_device *netdev,
  * @ndo_queue_get_dma_dev: Get dma device for zero-copy operations to be used
  *			   for this queue. Return NULL on error.
  *
+ * @ndo_default_qcfg: Populate queue config struct with defaults. Optional.
+ *
  * Note that @ndo_queue_mem_alloc and @ndo_queue_mem_free may be called while
  * the interface is closed. @ndo_queue_start and @ndo_queue_stop will only
  * be called for an interface which is open.
@@ -137,16 +142,20 @@ void netdev_stat_queue_sum(struct net_device *netdev,
 struct netdev_queue_mgmt_ops {
 	size_t	ndo_queue_mem_size;
 	int	(*ndo_queue_mem_alloc)(struct net_device *dev,
+				       struct netdev_queue_config *qcfg,
 				       void *per_queue_mem,
 				       int idx);
 	void	(*ndo_queue_mem_free)(struct net_device *dev,
 			      void *per_queue_mem);
 	int	(*ndo_queue_start)(struct net_device *dev,
+			   struct netdev_queue_config *qcfg,
 			   void *per_queue_mem,
 			   int idx);
 	int	(*ndo_queue_stop)(struct net_device *dev,
 			  void *per_queue_mem,
 			  int idx);
+	void	(*ndo_default_qcfg)(struct net_device *dev,
+				    struct netdev_queue_config *qcfg);
 	struct device *	(*ndo_queue_get_dma_dev)(struct net_device *dev,
 						 int idx);
 };
diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index 8cdcd138b33f..cfa72c485387 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 
 /* This structure contains an instance of an RX queue. */
 struct netdev_rx_queue {
@@ -27,6 +28,7 @@ struct netdev_rx_queue {
 	struct xsk_buff_pool *pool;
 #endif
 	struct napi_struct *napi;
+	struct netdev_queue_config qcfg;
 	struct pp_memory_provider_params mp_params;
 } ____cacheline_aligned_in_smp;
diff --git a/net/core/dev.c b/net/core/dev.c
index 36dc5199037e..a1d394addaef 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -11270,6 +11270,21 @@ static void netdev_free_phy_link_topology(struct net_device *dev)
 	}
 }
 
+static void init_rx_queue_cfgs(struct net_device *dev)
+{
+	const struct netdev_queue_mgmt_ops *qops = dev->queue_mgmt_ops;
+	struct netdev_rx_queue *rxq;
+	int i;
+
+	if (!qops || !qops->ndo_default_qcfg)
+		return;
+
+	for (i = 0; i < dev->num_rx_queues; i++) {
+		rxq = __netif_get_rx_queue(dev, i);
+		qops->ndo_default_qcfg(dev, &rxq->qcfg);
+	}
+}
+
 /**
  * register_netdevice() - register a network device
  * @dev: device to register
@@ -11315,6 +11330,8 @@ int register_netdevice(struct net_device *dev)
 	if (!dev->name_node)
 		goto out;
 
+	init_rx_queue_cfgs(dev);
+
 	/* Init, if this function is available */
 	if (dev->netdev_ops->ndo_init) {
 		ret = dev->netdev_ops->ndo_init(dev);
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index a0083f176a9c..86d1c0a925e3 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -22,6 +22,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 {
 	struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);
 	const struct netdev_queue_mgmt_ops *qops = dev->queue_mgmt_ops;
+	struct netdev_queue_config qcfg;
 	void *new_mem, *old_mem;
 	int err;
 
@@ -31,6 +32,10 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 
 	netdev_assert_locked(dev);
 
+	memset(&qcfg, 0, sizeof(qcfg));
+	if (qops->ndo_default_qcfg)
+		qops->ndo_default_qcfg(dev, &qcfg);
+
 	new_mem = kvzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
 	if (!new_mem)
 		return -ENOMEM;
@@ -41,7 +46,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 		goto err_free_new_mem;
 	}
 
-	err = qops->ndo_queue_mem_alloc(dev, new_mem, rxq_idx);
+	err = qops->ndo_queue_mem_alloc(dev, &qcfg, new_mem, rxq_idx);
 	if (err)
 		goto err_free_old_mem;
 
@@ -54,7 +59,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 		if (err)
 			goto err_free_new_queue_mem;
 
-		err = qops->ndo_queue_start(dev, new_mem, rxq_idx);
+		err = qops->ndo_queue_start(dev, &qcfg, new_mem, rxq_idx);
 		if (err)
 			goto err_start_queue;
 	} else {
@@ -66,6 +71,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 	kvfree(old_mem);
 	kvfree(new_mem);
 
+	rxq->qcfg = qcfg;
 	return 0;
 
 err_start_queue:
@@ -76,7 +82,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 	 * WARN if we fail to recover the old rx queue, and at least free
 	 * old_mem so we don't also leak that.
 	 */
-	if (qops->ndo_queue_start(dev, old_mem, rxq_idx)) {
+	if (qops->ndo_queue_start(dev, &rxq->qcfg, old_mem, rxq_idx)) {
 		WARN(1, "Failed to restart old queue in error path. RX queue %d may be unhealthy.",
 		     rxq_idx);
-- 
2.52.0

From nobody Sat Feb 7 14:51:04 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v9 4/9] net: pass queue rx page size from memory provider
Date: Thu, 15 Jan 2026 17:11:57 +0000

Allow memory providers to configure rx queues with a custom receive page
size. It's passed in struct pp_memory_provider_params, which is copied
into the queue, so it's preserved across queue restarts. From there it's
propagated to the driver in a new queue config parameter.

Drivers must explicitly opt in by setting QCFG_RX_PAGE_SIZE in
supported_params, in which case they must implement ndo_default_qcfg,
validate the size on queue restart, and honour the current config in
case of a reset.
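For illustration only, the opt-in contract described above, with made-up
foo_* driver names (the core-side check is the one added to
netdev_rx_queue_restart() in this patch):

	/* memory provider side: request a 32K receive page size */
	rxq->mp_params.rx_page_size = SZ_32K;

	/* driver side: advertise support and provide a default */
	static void foo_default_qcfg(struct net_device *dev,
				     struct netdev_queue_config *qcfg)
	{
		qcfg->rx_page_size = PAGE_SIZE;	/* driver default */
	}

	static const struct netdev_queue_mgmt_ops foo_queue_mgmt_ops = {
		.ndo_default_qcfg	= foo_default_qcfg,
		.supported_params	= QCFG_RX_PAGE_SIZE,
		/* remaining callbacks as in the previous sketch */
	};

If the provider asks for a non-zero rx_page_size and the driver does not
set QCFG_RX_PAGE_SIZE in supported_params, the queue restart fails with
-EOPNOTSUPP.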
Signed-off-by: Pavel Begunkov
---
 include/net/netdev_queues.h   | 10 ++++++++++
 include/net/page_pool/types.h |  1 +
 net/core/netdev_rx_queue.c    |  9 +++++++++
 3 files changed, 20 insertions(+)

diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index f6f1f71a24e1..feca25131930 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -15,6 +15,7 @@ struct netdev_config {
 };
 
 struct netdev_queue_config {
+	u32 rx_page_size;
 };
 
 /* See the netdev.yaml spec for definition of each statistic */
@@ -114,6 +115,11 @@ void netdev_stat_queue_sum(struct net_device *netdev,
 			   int tx_start, int tx_end,
 			   struct netdev_queue_stats_tx *tx_sum);
 
+enum {
+	/* The queue checks and honours the page size qcfg parameter */
+	QCFG_RX_PAGE_SIZE	= 0x1,
+};
+
 /**
  * struct netdev_queue_mgmt_ops - netdev ops for queue management
  *
@@ -135,6 +141,8 @@ void netdev_stat_queue_sum(struct net_device *netdev,
  *
  * @ndo_default_qcfg: Populate queue config struct with defaults. Optional.
  *
+ * @supported_params: Bitmask of supported parameters, see QCFG_*.
+ *
  * Note that @ndo_queue_mem_alloc and @ndo_queue_mem_free may be called while
  * the interface is closed. @ndo_queue_start and @ndo_queue_stop will only
  * be called for an interface which is open.
@@ -158,6 +166,8 @@ struct netdev_queue_mgmt_ops {
 				    struct netdev_queue_config *qcfg);
 	struct device *	(*ndo_queue_get_dma_dev)(struct net_device *dev,
 						 int idx);
+
+	unsigned int supported_params;
 };
 
 bool netif_rxq_has_unreadable_mp(struct net_device *dev, int idx);
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 1509a536cb85..0d453484a585 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -161,6 +161,7 @@ struct memory_provider_ops;
 struct pp_memory_provider_params {
 	void *mp_priv;
 	const struct memory_provider_ops *mp_ops;
+	u32 rx_page_size;
 };
 
 struct page_pool {
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index 86d1c0a925e3..b81cad90ba2f 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -30,12 +30,21 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 	    !qops->ndo_queue_mem_alloc || !qops->ndo_queue_start)
 		return -EOPNOTSUPP;
 
+	if (WARN_ON_ONCE(qops->supported_params && !qops->ndo_default_qcfg))
+		return -EINVAL;
+
 	netdev_assert_locked(dev);
 
 	memset(&qcfg, 0, sizeof(qcfg));
 	if (qops->ndo_default_qcfg)
 		qops->ndo_default_qcfg(dev, &qcfg);
 
+	if (rxq->mp_params.rx_page_size) {
+		if (!(qops->supported_params & QCFG_RX_PAGE_SIZE))
+			return -EOPNOTSUPP;
+		qcfg.rx_page_size = rxq->mp_params.rx_page_size;
+	}
+
 	new_mem = kvzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
 	if (!new_mem)
 		return -ENOMEM;
-- 
2.52.0

From nobody Sat Feb 7 14:51:04 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v9 5/9] eth: bnxt: store rx buffer size per queue
Date: Thu, 15 Jan 2026 17:11:58 +0000

Instead of using a constant buffer length, allow configuring the size
for each queue separately. There is no way to change the length yet; it
will be passed in from memory providers in a later patch.

Suggested-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 56 +++++++++++--------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  6 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h |  2 +-
 4 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index a0abe991f79a..196b972263bd 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -905,7 +905,7 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 
 static bool bnxt_separate_head_pool(struct bnxt_rx_ring_info *rxr)
 {
-	return rxr->need_head_pool || PAGE_SIZE > BNXT_RX_PAGE_SIZE;
+	return rxr->need_head_pool || rxr->rx_page_size < PAGE_SIZE;
 }
 
 static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
@@ -915,9 +915,9 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 {
 	struct page *page;
 
-	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
+	if (rxr->rx_page_size < PAGE_SIZE) {
 		page = page_pool_dev_alloc_frag(rxr->page_pool, offset,
-						BNXT_RX_PAGE_SIZE);
+						rxr->rx_page_size);
 	} else {
 		page = page_pool_dev_alloc_pages(rxr->page_pool);
 		*offset = 0;
@@ -936,8 +936,9 @@ static netmem_ref __bnxt_alloc_rx_netmem(struct bnxt *bp, dma_addr_t *mapping,
 {
 	netmem_ref netmem;
 
-	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
-		netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset, BNXT_RX_PAGE_SIZE, gfp);
+	if (rxr->rx_page_size < PAGE_SIZE) {
+		netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset,
+						     rxr->rx_page_size, gfp);
 	} else {
 		netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
 		*offset = 0;
@@ -1155,9 +1156,9 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, rxr->rx_page_size,
 				bp->rx_dir);
-	skb = napi_build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE);
+	skb = napi_build_skb(data_ptr - bp->rx_offset, rxr->rx_page_size);
 	if (!skb) {
 		page_pool_recycle_direct(rxr->page_pool, page);
 		return NULL;
@@ -1189,7 +1190,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, rxr->rx_page_size,
 				bp->rx_dir);
 
 	if (unlikely(!payload))
@@ -1203,7 +1204,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 
 	skb_mark_for_recycle(skb);
 	off = (void *)data_ptr - page_address(page);
-	skb_add_rx_frag(skb, 0, page, off, len, BNXT_RX_PAGE_SIZE);
+	skb_add_rx_frag(skb, 0, page, off, len, rxr->rx_page_size);
 	memcpy(skb->data - NET_IP_ALIGN, data_ptr - NET_IP_ALIGN,
 	       payload + NET_IP_ALIGN);
 
@@ -1288,7 +1289,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 		if (skb) {
 			skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem,
 					       cons_rx_buf->offset,
-					       frag_len, BNXT_RX_PAGE_SIZE);
+					       frag_len, rxr->rx_page_size);
 		} else {
 			skb_frag_t *frag = &shinfo->frags[i];
 
@@ -1313,7 +1314,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 			if (skb) {
 				skb->len -= frag_len;
 				skb->data_len -= frag_len;
-				skb->truesize -= BNXT_RX_PAGE_SIZE;
+				skb->truesize -= rxr->rx_page_size;
 			}
 
 			--shinfo->nr_frags;
@@ -1328,7 +1329,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 		}
 
 		page_pool_dma_sync_netmem_for_cpu(rxr->page_pool, netmem, 0,
-						  BNXT_RX_PAGE_SIZE);
+						  rxr->rx_page_size);
 
 		total_frag_len += frag_len;
 		prod = NEXT_RX_AGG(prod);
@@ -2290,8 +2291,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 			if (!skb)
 				goto oom_next_rx;
 		} else {
-			skb = bnxt_xdp_build_skb(bp, skb, agg_bufs,
-						 rxr->page_pool, &xdp);
+			skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, rxr, &xdp);
 			if (!skb) {
 				/* we should be able to free the old skb here */
 				bnxt_xdp_buff_frags_free(rxr, &xdp);
@@ -3837,11 +3837,13 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size / rx_size_fac;
+
+	pp.order = get_order(rxr->rx_page_size);
 	pp.nid = numa_node;
 	pp.netdev = bp->dev;
 	pp.dev = &bp->pdev->dev;
 	pp.dma_dir = bp->rx_dir;
-	pp.max_len = PAGE_SIZE;
+	pp.max_len = PAGE_SIZE << pp.order;
 	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV |
 		   PP_FLAG_ALLOW_UNREADABLE_NETMEM;
 	pp.queue_idx = rxr->bnapi->index;
@@ -3852,7 +3854,10 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	rxr->page_pool = pool;
 
 	rxr->need_head_pool = page_pool_is_unreadable(pool);
+	rxr->need_head_pool |= !!pp.order;
 	if (bnxt_separate_head_pool(rxr)) {
+		pp.order = 0;
+		pp.max_len = PAGE_SIZE;
 		pp.pool_size = min(bp->rx_ring_size / rx_size_fac, 1024);
 		pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 		pool = page_pool_create(&pp);
@@ -4328,6 +4333,8 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		if (!rxr)
 			goto skip_rx;
 
+		rxr->rx_page_size = BNXT_RX_PAGE_SIZE;
+
 		ring = &rxr->rx_ring_struct;
 		rmem = &ring->ring_mem;
 		rmem->nr_pages = bp->rx_nr_pages;
@@ -4487,7 +4494,7 @@ static void bnxt_init_one_rx_agg_ring_rxbd(struct bnxt *bp,
 	ring = &rxr->rx_agg_ring_struct;
 	ring->fw_ring_id = INVALID_HW_RING_ID;
 	if ((bp->flags & BNXT_FLAG_AGG_RINGS)) {
-		type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) |
+		type = ((u32)rxr->rx_page_size << RX_BD_LEN_SHIFT) |
 		       RX_BD_TYPE_RX_AGG_BD;
 
 		/* On P7, setting EOP will cause the chip to disable
@@ -7065,6 +7072,7 @@ static void bnxt_hwrm_ring_grp_free(struct bnxt *bp)
 
 static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
 				       struct hwrm_ring_alloc_input *req,
+				       struct bnxt_rx_ring_info *rxr,
 				       struct bnxt_ring_struct *ring)
 {
 	struct bnxt_ring_grp_info *grp_info = &bp->grp_info[ring->grp_idx];
@@ -7074,7 +7082,7 @@ static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
 	if (ring_type == HWRM_RING_ALLOC_AGG) {
 		req->ring_type = RING_ALLOC_REQ_RING_TYPE_RX_AGG;
 		req->rx_ring_id = cpu_to_le16(grp_info->rx_fw_ring_id);
-		req->rx_buf_size = cpu_to_le16(BNXT_RX_PAGE_SIZE);
+		req->rx_buf_size = cpu_to_le16(rxr->rx_page_size);
 		enables |= RING_ALLOC_REQ_ENABLES_RX_RING_ID_VALID;
 	} else {
 		req->rx_buf_size = cpu_to_le16(bp->rx_buf_use_size);
@@ -7088,6 +7096,7 @@ static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
 }
 
 static int hwrm_ring_alloc_send_msg(struct bnxt *bp,
+				    struct bnxt_rx_ring_info *rxr,
 				    struct bnxt_ring_struct *ring,
 				    u32 ring_type, u32 map_index)
 {
@@ -7144,7 +7153,8 @@ static int hwrm_ring_alloc_send_msg(struct bnxt *bp,
 			cpu_to_le32(bp->rx_ring_mask + 1) :
 			cpu_to_le32(bp->rx_agg_ring_mask + 1);
 		if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)
-			bnxt_set_rx_ring_params_p5(bp, ring_type, req, ring);
+			bnxt_set_rx_ring_params_p5(bp, ring_type, req,
+						   rxr, ring);
 		break;
 	case HWRM_RING_ALLOC_CMPL:
 		req->ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL;
@@ -7292,7 +7302,7 @@ static int bnxt_hwrm_rx_ring_alloc(struct bnxt *bp,
 	u32 map_idx = bnapi->index;
 	int rc;
 
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, rxr, ring, type, map_idx);
 	if (rc)
 		return rc;
 
@@ -7312,7 +7322,7 @@ static int bnxt_hwrm_rx_agg_ring_alloc(struct bnxt *bp,
 	int rc;
 
 	map_idx = grp_idx + bp->rx_nr_rings;
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, rxr, ring, type, map_idx);
 	if (rc)
 		return rc;
 
@@ -7336,7 +7346,7 @@ static int bnxt_hwrm_cp_ring_alloc_p5(struct bnxt *bp,
 
 	ring = &cpr->cp_ring_struct;
 	ring->handle = BNXT_SET_NQ_HDL(cpr);
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, map_idx);
 	if (rc)
 		return rc;
 	bnxt_set_db(bp, &cpr->cp_db, type, map_idx, ring->fw_ring_id);
@@ -7351,7 +7361,7 @@ static int bnxt_hwrm_tx_ring_alloc(struct bnxt *bp,
 	const u32 type = HWRM_RING_ALLOC_TX;
 	int rc;
 
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, tx_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, tx_idx);
 	if (rc)
 		return rc;
 	bnxt_set_db(bp, &txr->tx_db, type, tx_idx, ring->fw_ring_id);
@@ -7377,7 +7387,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 
 		vector = bp->irq_tbl[map_idx].vector;
 		disable_irq_nosync(vector);
-		rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+		rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, map_idx);
 		if (rc) {
 			enable_irq(vector);
 			goto err_out;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index f88e7769a838..9eaef6d7c150 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1105,6 +1105,7 @@ struct bnxt_rx_ring_info {
 
 	unsigned long		*rx_agg_bmap;
 	u16			rx_agg_bmap_size;
+	u32			rx_page_size;
 	bool			need_head_pool;
 
 	dma_addr_t		rx_desc_mapping[MAX_RX_PAGES];
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index c94a391b1ba5..85cbeb35681c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -183,7 +183,7 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 			u16 cons, u8 *data_ptr, unsigned int len,
 			struct xdp_buff *xdp)
 {
-	u32 buflen = BNXT_RX_PAGE_SIZE;
+	u32 buflen = rxr->rx_page_size;
 	struct bnxt_sw_rx_bd *rx_buf;
 	struct pci_dev *pdev;
 	dma_addr_t mapping;
@@ -460,7 +460,7 @@ int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 
 struct sk_buff *
 bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags,
-		   struct page_pool *pool, struct xdp_buff *xdp)
+		   struct bnxt_rx_ring_info *rxr, struct xdp_buff *xdp)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 
@@ -468,7 +468,7 @@ bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags,
 		return NULL;
 
 	xdp_update_skb_frags_info(skb, num_frags, sinfo->xdp_frags_size,
-				  BNXT_RX_PAGE_SIZE * num_frags,
+				  rxr->rx_page_size * num_frags,
 				  xdp_buff_get_skb_flags(xdp));
 	return skb;
 }
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
index 220285e190fc..8933a0dec09a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
@@ -32,6 +32,6 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr,
 			      struct xdp_buff *xdp);
 struct sk_buff *bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb,
-				   u8 num_frags, struct page_pool *pool,
+				   u8 num_frags, struct bnxt_rx_ring_info *rxr,
 				   struct xdp_buff *xdp);
 #endif
-- 
2.52.0
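A worked example for the page pool sizing in the patch above, assuming a
4 KiB PAGE_SIZE: with rxr->rx_page_size = 32768, get_order(32768) = 3, so
pp.order = 3 and pp.max_len = PAGE_SIZE << 3 = 32768. Because pp.order is
non-zero, rxr->need_head_pool is forced on (need_head_pool |= !!pp.order),
and the separate head pool is then created with pp.order = 0 and
pp.max_len = PAGE_SIZE, so header buffers keep coming from normal-sized
pages.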
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Uh9CH+0P" Received: by mail-wm1-f52.google.com with SMTP id 5b1f17b1804b1-47ee974e230so10869725e9.2 for ; Thu, 15 Jan 2026 09:12:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1768497155; x=1769101955; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=wz1yS4KnsLbiuKsdPkEkvSuKGyQ7NQwQMS3x7qa8DNs=; b=Uh9CH+0PtQFoy2xX4gBfxnddb1lL+DfYCnOwBWThGJGyKEhSKwlEOJD4+E9X0CnwSz pkLyHv82T0T+DMKi3nhFSclTexl9/CncwFV87UxsnoBcDzarZCMsreQzhQBEvZ3yJYXj LdZyv8Swohkgt1nr+PEYKOAut1ysA/XlKHcBC/nDWWksu+2qptZ+Ujf/6av7aIyJNtIK bw/doxp+F8kFjkCsakvon4Dut0+ACoS/Ca3/fBJsHyiWHvjIOmUSLQCHJqy/I/SEPL7b PFyb7EVK7oB5ApOBjDwZy/tsyfi2CJYhXUQaEeyIWCLt4vrZGrfZyqiA3G0a21Hj9TgT el3g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1768497155; x=1769101955; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=wz1yS4KnsLbiuKsdPkEkvSuKGyQ7NQwQMS3x7qa8DNs=; b=K3FR0zS9linX+L0759K2m4ZTEghbmfKNK8wfmxtekhHa2oJH/NPhTpc+OWcIpM4Fjo IT3fp5tP2fKQZHXYgWgfmF8HUwL4tjYvotHkmSHnGyxrOfg6DJ1sH/gkxulS7tk088O2 R8cHQoc+nGyEItk2lb5Clz/hgnbHFbhS7PXDqo+4PIxRzNP0PzZmvzenuUlOjPC2OTZo QZ6Zk+SSnErJ1ykAwVKJN5T/Oy8Pizo8Ca1jj7MQbOf22NdY5+E1DvPlkJjiTA+pXkI6 LjzdBCFp+05oG3/RTEjfG6IOnQ6nqeAs5EB6PNcWBHThSWghnEOfs8O2HIdSFVfYeQcc oPCg== X-Forwarded-Encrypted: i=1; AJvYcCWo1ioMzIxACmz2ecX4EI0duTfqpU13XtBiPi/TeQKxrRvD5kZrawH48tbzFgmk+Tjg/MzlJJbr3tksQpw=@vger.kernel.org X-Gm-Message-State: AOJu0YxzVfzN58974rFQbR69whpomVgT93eA+Q6VNGvsNzqT5K77aA4c 1+A6M21tFp0vf/nrDTBvfJMGTiFSqka/P1CBDs/SCfXsjYIvXD8x5fwq X-Gm-Gg: AY/fxX5G2icgiCi9eiNHf8NdYo7bDGfCmGTnfbX5yzblaayC6YQu7kcgwiVJs6Wmu2W O7NspExw15mTpHtZdyXgtE5oLQSBzkeSFL/r33l0+XPaRtjGssVt0CDM5OkvAc11djfKeNO1IkP PU9DmgtYDucazPCnHSdoy6Q2ztDQJ7Xh1ch8RtLd0v391nKESjWQLBupTJ9Dl/+L5F9mJ1+KRwC oNML+8wyaqMMO3/RwckafP4cJIg6BlsUzqMQa3wDQ76FpQBSz4N+5sB7FLfp+4BaRlwHCe5T9R3 0JBP25lgdaflowVhHnKNMHQVMolIBxFTWwU0u5IynU88tvTxiY5Lsq19ik+IDGfWDqQfXot/KLP 1pmNXh+B3vF4iytYrlj+P6JP7UjVY9BVzJVvfKdMr+ghb8YJ6vIgjkzog2NekID8hZGeW4Qdtv3 A2eZ13zXX1phnbFhoilC6wjP69NDxChzz5s9i4CqU54q4blTwVB15/Y5SfrGRF+6jObxU3uIxE0 5eQernX57r1uJu5zA== X-Received: by 2002:a05:600c:1c05:b0:47e:e952:86ca with SMTP id 5b1f17b1804b1-4801e2f28edmr6191145e9.2.1768497155155; Thu, 15 Jan 2026 09:12:35 -0800 (PST) Received: from 127.mynet ([2a01:4b00:bd21:4f00:7cc6:d3ca:494:116c]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-47f429071a2sm54741645e9.11.2026.01.15.09.12.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 15 Jan 2026 09:12:34 -0800 (PST) From: Pavel Begunkov To: netdev@vger.kernel.org Cc: "David S . 
Subject: [PATCH net-next v9 6/9] eth: bnxt: adjust the fill level of agg queues with larger buffers
Date: Thu, 15 Jan 2026 17:11:59 +0000

From: Jakub Kicinski

The driver tries to provision more agg buffers than header buffers since
multiple agg segments can reuse the same header. The calculation /
heuristic tries to provide enough pages for 65k of data for each header
(or 4 frags per header if the result is too big). This calculation is
currently global to the adapter. If we increase the buffer sizes 8x we
don't want 8x the amount of memory sitting on the rings. Luckily we don't
have to fill the rings completely; adjust the fill level dynamically in
case a particular queue has buffers larger than the global size.

Signed-off-by: Jakub Kicinski
[pavel: rebase on top of agg_size_fac]
Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 25 +++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 196b972263bd..f011cf792abe 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3825,16 +3825,31 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
 	}
 }
 
+static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr)
+{
+	/* User may have chosen larger than default rx_page_size,
+	 * we keep the ring sizes uniform and also want uniform amount
+	 * of bytes consumed per ring, so cap how much of the rings we fill.
+	 */
+	int fill_level = bp->rx_agg_ring_size;
+
+	if (rxr->rx_page_size > BNXT_RX_PAGE_SIZE)
+		fill_level /= rxr->rx_page_size / BNXT_RX_PAGE_SIZE;
+
+	return fill_level;
+}
+
 static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 				   struct bnxt_rx_ring_info *rxr,
 				   int numa_node)
 {
-	const unsigned int agg_size_fac = PAGE_SIZE / BNXT_RX_PAGE_SIZE;
+	unsigned int agg_size_fac = rxr->rx_page_size / BNXT_RX_PAGE_SIZE;
 	const unsigned int rx_size_fac = PAGE_SIZE / SZ_4K;
 	struct page_pool_params pp = { 0 };
 	struct page_pool *pool;
 
-	pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
+	pp.pool_size = bnxt_rx_agg_ring_fill_level(bp, rxr) / agg_size_fac;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size / rx_size_fac;
 
@@ -4412,11 +4427,13 @@ static void bnxt_alloc_one_rx_ring_netmem(struct bnxt *bp,
 					  struct bnxt_rx_ring_info *rxr,
 					  int ring_nr)
 {
+	int fill_level, i;
 	u32 prod;
-	int i;
+
+	fill_level = bnxt_rx_agg_ring_fill_level(bp, rxr);
 
 	prod = rxr->rx_agg_prod;
-	for (i = 0; i < bp->rx_agg_ring_size; i++) {
+	for (i = 0; i < fill_level; i++) {
 		if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_KERNEL)) {
 			netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n",
 				    ring_nr, i, bp->rx_agg_ring_size);
--
2.52.0
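
For illustration only (not part of the patch): the heuristic above divides
the nominal agg fill level by the ratio of the per-queue page size to the
base BNXT_RX_PAGE_SIZE. A minimal standalone sketch of that arithmetic,
using made-up EXAMPLE_* constants rather than the driver's real values:

#include <stdio.h>

/* Illustrative constants, not taken from the driver headers. */
#define EXAMPLE_BNXT_RX_PAGE_SIZE	4096u
#define EXAMPLE_RX_AGG_RING_SIZE	2048u

/* Mirror of the heuristic: fill fewer slots when each buffer is larger
 * than the base size, so the bytes parked on the ring stay roughly flat.
 */
static unsigned int example_fill_level(unsigned int rx_page_size)
{
	unsigned int fill_level = EXAMPLE_RX_AGG_RING_SIZE;

	if (rx_page_size > EXAMPLE_BNXT_RX_PAGE_SIZE)
		fill_level /= rx_page_size / EXAMPLE_BNXT_RX_PAGE_SIZE;
	return fill_level;
}

int main(void)
{
	/* 32K buffers are 8x larger, so only 1/8 of the ring is filled. */
	printf("4K pages: %u slots, 32K pages: %u slots\n",
	       example_fill_level(4096), example_fill_level(32768));
	return 0;
}

With a 2048-entry agg ring, the 8x larger buffer size leaves 256 slots
filled, which is the behaviour the commit message describes.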

From nobody Sat Feb 7 14:51:04 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v9 7/9] eth: bnxt: support qcfg provided rx page size
Date: Thu, 15 Jan 2026 17:12:00 +0000

Implement support for qcfg-provided rx page sizes. For that, implement
the ndo_default_qcfg callback and validate the config on restart. Also,
use the current config's value in bnxt_init_ring_struct to retain the
correct size across resets.

Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 36 ++++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
 2 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index f011cf792abe..f4f265a25a4a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -4331,6 +4331,7 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		struct bnxt_rx_ring_info *rxr;
 		struct bnxt_tx_ring_info *txr;
 		struct bnxt_ring_struct *ring;
+		struct netdev_rx_queue *rxq;
 
 		if (!bnapi)
 			continue;
@@ -4348,7 +4349,8 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		if (!rxr)
 			goto skip_rx;
 
-		rxr->rx_page_size = BNXT_RX_PAGE_SIZE;
+		rxq = __netif_get_rx_queue(bp->dev, i);
+		rxr->rx_page_size = rxq->qcfg.rx_page_size;
 
 		ring = &rxr->rx_ring_struct;
 		rmem = &ring->ring_mem;
@@ -15938,6 +15940,29 @@ static const struct netdev_stat_ops bnxt_stat_ops = {
 	.get_base_stats		= bnxt_get_base_stats,
 };
 
+static void bnxt_queue_default_qcfg(struct net_device *dev,
+				    struct netdev_queue_config *qcfg)
+{
+	qcfg->rx_page_size = BNXT_RX_PAGE_SIZE;
+}
+
+static int bnxt_validate_qcfg(struct bnxt *bp, struct netdev_queue_config *qcfg)
+{
+	/* Older chips need MSS calc so rx_page_size is not supported */
+	if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS) &&
+	    qcfg->rx_page_size != BNXT_RX_PAGE_SIZE)
+		return -EINVAL;
+
+	if (!is_power_of_2(qcfg->rx_page_size))
+		return -ERANGE;
+
+	if (qcfg->rx_page_size < BNXT_RX_PAGE_SIZE ||
+	    qcfg->rx_page_size > BNXT_MAX_RX_PAGE_SIZE)
+		return -ERANGE;
+
+	return 0;
+}
+
 static int bnxt_queue_mem_alloc(struct net_device *dev,
 				struct netdev_queue_config *qcfg,
 				void *qmem, int idx)
@@ -15950,6 +15975,10 @@ static int bnxt_queue_mem_alloc(struct net_device *dev,
 	if (!bp->rx_ring)
 		return -ENETDOWN;
 
+	rc = bnxt_validate_qcfg(bp, qcfg);
+	if (rc < 0)
+		return rc;
+
 	rxr = &bp->rx_ring[idx];
 	clone = qmem;
 	memcpy(clone, rxr, sizeof(*rxr));
@@ -15961,6 +15990,7 @@ static int bnxt_queue_mem_alloc(struct net_device *dev,
 	clone->rx_sw_agg_prod = 0;
 	clone->rx_next_cons = 0;
 	clone->need_head_pool = false;
+	clone->rx_page_size = qcfg->rx_page_size;
 
 	rc = bnxt_alloc_rx_page_pool(bp, clone, rxr->page_pool->p.nid);
 	if (rc)
@@ -16087,6 +16117,8 @@ static void bnxt_copy_rx_ring(struct bnxt *bp,
 	src_ring = &src->rx_agg_ring_struct;
 	src_rmem = &src_ring->ring_mem;
 
+	dst->rx_page_size = src->rx_page_size;
+
 	WARN_ON(dst_rmem->nr_pages != src_rmem->nr_pages);
 	WARN_ON(dst_rmem->page_size != src_rmem->page_size);
 	WARN_ON(dst_rmem->flags != src_rmem->flags);
@@ -16241,6 +16273,8 @@ static const struct netdev_queue_mgmt_ops bnxt_queue_mgmt_ops = {
 	.ndo_queue_mem_free	= bnxt_queue_mem_free,
 	.ndo_queue_start	= bnxt_queue_start,
 	.ndo_queue_stop		= bnxt_queue_stop,
+	.ndo_default_qcfg	= bnxt_queue_default_qcfg,
+	.supported_params	= QCFG_RX_PAGE_SIZE,
 };
 
 static void bnxt_remove_one(struct pci_dev *pdev)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 9eaef6d7c150..dc7227a69b7b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -760,6 +760,7 @@ struct nqe_cn {
 #endif
 
 #define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT)
+#define BNXT_MAX_RX_PAGE_SIZE BIT(15)
 
 #define BNXT_MAX_MTU		9500
 
--
2.52.0
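
For illustration only (not part of the patch): ignoring the chip-generation
check, the qcfg validation above boils down to a power-of-two test plus a
range check against the driver's minimum and maximum page sizes. A small
userspace sketch of the same constraints, with EXAMPLE_* stand-ins for
BNXT_RX_PAGE_SIZE and BNXT_MAX_RX_PAGE_SIZE (BIT(15) == 32K in the patch):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the driver's limits. */
#define EXAMPLE_MIN_RX_PAGE_SIZE	4096u
#define EXAMPLE_MAX_RX_PAGE_SIZE	(1u << 15)

static bool example_is_power_of_2(unsigned int n)
{
	return n && !(n & (n - 1));
}

/* Same shape as bnxt_validate_qcfg(): reject sizes that are not a power
 * of two or that fall outside the supported range. */
static int example_validate_rx_page_size(unsigned int rx_page_size)
{
	if (!example_is_power_of_2(rx_page_size))
		return -ERANGE;
	if (rx_page_size < EXAMPLE_MIN_RX_PAGE_SIZE ||
	    rx_page_size > EXAMPLE_MAX_RX_PAGE_SIZE)
		return -ERANGE;
	return 0;
}

int main(void)
{
	printf("%d %d %d\n",
	       example_validate_rx_page_size(16384),	/* accepted */
	       example_validate_rx_page_size(12288),	/* not a power of 2 */
	       example_validate_rx_page_size(65536));	/* above the cap */
	return 0;
}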

From nobody Sat Feb 7 14:51:04 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v9 8/9] selftests: iou-zcrx: test large chunk sizes
Date: Thu, 15 Jan 2026 17:12:01 +0000

Add a test using large chunks for the zcrx memory area.

Signed-off-by: Pavel Begunkov
---
 .../selftests/drivers/net/hw/iou-zcrx.c  | 72 +++++++++++++++----
 .../selftests/drivers/net/hw/iou-zcrx.py | 39 ++++++++++
 2 files changed, 99 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/hw/iou-zcrx.c b/tools/testing/selftests/drivers/net/hw/iou-zcrx.c
index 62456df947bc..240d13dbc54e 100644
--- a/tools/testing/selftests/drivers/net/hw/iou-zcrx.c
+++ b/tools/testing/selftests/drivers/net/hw/iou-zcrx.c
@@ -12,6 +12,7 @@
 #include
 
 #include
+#include
 #include
 #include
 #include
@@ -37,6 +38,23 @@
 
 #include
 
+#define SKIP_CODE 42
+
+struct t_io_uring_zcrx_ifq_reg {
+	__u32 if_idx;
+	__u32 if_rxq;
+	__u32 rq_entries;
+	__u32 flags;
+
+	__u64 area_ptr; /* pointer to struct io_uring_zcrx_area_reg */
+	__u64 region_ptr; /* struct io_uring_region_desc * */
+
+	struct io_uring_zcrx_offsets offsets;
+	__u32 zcrx_id;
+	__u32 rx_buf_len;
+	__u64 __resv[3];
+};
+
 static long page_size;
 #define AREA_SIZE (8192 * page_size)
 #define SEND_SIZE (512 * 4096)
@@ -65,6 +83,8 @@ static bool cfg_oneshot;
 static int cfg_oneshot_recvs;
 static int cfg_send_size = SEND_SIZE;
 static struct sockaddr_in6 cfg_addr;
+static unsigned int cfg_rx_buf_len;
+static bool cfg_dry_run;
 
 static char *payload;
 static void *area_ptr;
@@ -128,14 +148,28 @@ static void setup_zcrx(struct io_uring *ring)
 	if (!ifindex)
 		error(1, 0, "bad interface name: %s", cfg_ifname);
 
-	area_ptr = mmap(NULL,
-			AREA_SIZE,
-			PROT_READ | PROT_WRITE,
-			MAP_ANONYMOUS | MAP_PRIVATE,
-			0,
-			0);
-	if (area_ptr == MAP_FAILED)
-		error(1, 0, "mmap(): zero copy area");
+	if (cfg_rx_buf_len && cfg_rx_buf_len != page_size) {
+		area_ptr = mmap(NULL,
+				AREA_SIZE,
+				PROT_READ | PROT_WRITE,
+				MAP_ANONYMOUS | MAP_PRIVATE |
+				MAP_HUGETLB | MAP_HUGE_2MB,
+				-1,
+				0);
+		if (area_ptr == MAP_FAILED) {
+			printf("Can't allocate huge pages\n");
+			exit(SKIP_CODE);
+		}
+	} else {
+		area_ptr = mmap(NULL,
+				AREA_SIZE,
+				PROT_READ | PROT_WRITE,
+				MAP_ANONYMOUS | MAP_PRIVATE,
+				0,
+				0);
+		if (area_ptr == MAP_FAILED)
+			error(1, 0, "mmap(): zero copy area");
+	}
 
 	ring_size = get_refill_ring_size(rq_entries);
 	ring_ptr = mmap(NULL,
@@ -157,17 +191,23 @@ static void setup_zcrx(struct io_uring *ring)
 		.flags = 0,
 	};
 
-	struct io_uring_zcrx_ifq_reg reg = {
+	struct t_io_uring_zcrx_ifq_reg reg = {
 		.if_idx = ifindex,
 		.if_rxq = cfg_queue_id,
 		.rq_entries = rq_entries,
 		.area_ptr = (__u64)(unsigned long)&area_reg,
 		.region_ptr = (__u64)(unsigned long)&region_reg,
+		.rx_buf_len = cfg_rx_buf_len,
 	};
 
-	ret = io_uring_register_ifq(ring, &reg);
-	if (ret)
+	ret = io_uring_register_ifq(ring, (void *)&reg);
+	if (cfg_rx_buf_len && (ret == -EINVAL || ret == -EOPNOTSUPP ||
+			       ret == -ERANGE)) {
+		printf("Large chunks are not supported %i\n", ret);
+		exit(SKIP_CODE);
+	} else if (ret) {
 		error(1, 0, "io_uring_register_ifq(): %d", ret);
+	}
 
 	rq_ring.khead = (unsigned int *)((char *)ring_ptr + reg.offsets.head);
 	rq_ring.ktail = (unsigned int *)((char *)ring_ptr + reg.offsets.tail);
@@ -323,6 +363,8 @@ static void run_server(void)
 	io_uring_queue_init(512, &ring, flags);
 
 	setup_zcrx(&ring);
+	if (cfg_dry_run)
+		return;
 
 	add_accept(&ring, fd);
 
@@ -383,7 +425,7 @@ static void parse_opts(int argc, char **argv)
 		usage(argv[0]);
 	cfg_payload_len = max_payload_len;
 
-	while ((c = getopt(argc, argv, "sch:p:l:i:q:o:z:")) != -1) {
+	while ((c = getopt(argc, argv, "sch:p:l:i:q:o:z:x:d")) != -1) {
 		switch (c) {
 		case 's':
 			if (cfg_client)
@@ -418,6 +460,12 @@ static void parse_opts(int argc, char **argv)
 		case 'z':
 			cfg_send_size = strtoul(optarg, NULL, 0);
 			break;
+		case 'x':
+			cfg_rx_buf_len = page_size * strtoul(optarg, NULL, 0);
+			break;
+		case 'd':
+			cfg_dry_run = true;
+			break;
 		}
 	}
 
diff --git a/tools/testing/selftests/drivers/net/hw/iou-zcrx.py b/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
index 712c806508b5..7f596a33eb2b 100755
--- a/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
+++ b/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
@@ -7,6 +7,7 @@ from lib.py import ksft_run, ksft_exit, KsftSkipEx
 from lib.py import NetDrvEpEnv
 from lib.py import bkg, cmd, defer, ethtool, rand_port, wait_port_listen
 
+SKIP_CODE = 42
 
 def _get_current_settings(cfg):
     output = ethtool(f"-g {cfg.ifname}", json=True)[0]
@@ -132,6 +133,44 @@ def test_zcrx_rss(cfg) -> None:
         cmd(tx_cmd, host=cfg.remote)
 
 
+def test_zcrx_large_chunks(cfg) -> None:
+    """Test zcrx with large buffer chunks."""
+
+    cfg.require_ipver('6')
+
+    combined_chans = _get_combined_channels(cfg)
+    if combined_chans < 2:
+        raise KsftSkipEx('at least 2 combined channels required')
+    (rx_ring, hds_thresh) = _get_current_settings(cfg)
+    port = rand_port()
+
+    ethtool(f"-G {cfg.ifname} tcp-data-split on")
+    defer(ethtool, f"-G {cfg.ifname} tcp-data-split auto")
+
+    ethtool(f"-G {cfg.ifname} hds-thresh 0")
+    defer(ethtool, f"-G {cfg.ifname} hds-thresh {hds_thresh}")
+
+    ethtool(f"-G {cfg.ifname} rx 64")
+    defer(ethtool, f"-G {cfg.ifname} rx {rx_ring}")
+
+    ethtool(f"-X {cfg.ifname} equal {combined_chans - 1}")
+    defer(ethtool, f"-X {cfg.ifname} default")
+
+    flow_rule_id = _set_flow_rule(cfg, port, combined_chans - 1)
+    defer(ethtool, f"-N {cfg.ifname} delete {flow_rule_id}")
+
+    rx_cmd = f"{cfg.bin_local} -s -p {port} -i {cfg.ifname} -q {combined_chans - 1} -x 2"
+    tx_cmd = f"{cfg.bin_remote} -c -h {cfg.addr_v['6']} -p {port} -l 12840"
+
+    probe = cmd(rx_cmd + " -d", fail=False)
+    if probe.ret == SKIP_CODE:
+        raise KsftSkipEx(probe.stdout)
+
+    with bkg(rx_cmd, exit_wait=True):
+        wait_port_listen(port, proto="tcp")
+        cmd(tx_cmd, host=cfg.remote)
+
+
 def main() -> None:
     with NetDrvEpEnv(__file__) as cfg:
         cfg.bin_local = path.abspath(path.dirname(__file__) + "/../../../drivers/net/hw/iou-zcrx")
--
2.52.0
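
For illustration only (not part of the patch): larger chunks need physically
contiguous backing, which the selftest above gets from 2MB huge pages. A
rough sketch of that allocation pattern, using hypothetical example_* names
and falling back to regular anonymous pages when huge pages are unavailable:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif
#ifndef MAP_HUGE_2MB
#define MAP_HUGE_2MB	(21 << MAP_HUGE_SHIFT)	/* 2^21 byte huge pages */
#endif

/* Illustrative size; the selftest sizes the area as 8192 * page_size. */
#define EXAMPLE_AREA_SIZE	(8192 * 4096UL)

/* Allocate an area backed by 2MB huge pages so large chunks have
 * physically contiguous backing; fall back to regular anonymous pages,
 * much as the selftest does when only the default chunk size is used. */
static void *example_alloc_area(int want_huge)
{
	void *p;

	if (want_huge) {
		p = mmap(NULL, EXAMPLE_AREA_SIZE, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE | MAP_HUGETLB | MAP_HUGE_2MB,
			 -1, 0);
		if (p != MAP_FAILED)
			return p;
		fprintf(stderr, "huge page allocation failed, falling back\n");
	}
	p = mmap(NULL, EXAMPLE_AREA_SIZE, PROT_READ | PROT_WRITE,
		 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	return p == MAP_FAILED ? NULL : p;
}

int main(void)
{
	void *area = example_alloc_area(1);

	if (!area)
		return 1;
	munmap(area, EXAMPLE_AREA_SIZE);
	return 0;
}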

From nobody Sat Feb 7 14:51:04 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v9 9/9] io_uring/zcrx: document area chunking parameter
Date: Thu, 15 Jan 2026 17:12:02 +0000

struct io_uring_zcrx_ifq_reg::rx_buf_len is used as a hint telling the
kernel what buffer size it should use. Document the API and its
limitations.

Signed-off-by: Pavel Begunkov
---
 Documentation/networking/iou-zcrx.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/Documentation/networking/iou-zcrx.rst b/Documentation/networking/iou-zcrx.rst
index 54a72e172bdc..7f3f4b2e6cf2 100644
--- a/Documentation/networking/iou-zcrx.rst
+++ b/Documentation/networking/iou-zcrx.rst
@@ -196,6 +196,26 @@ Return buffers back to the kernel to be used again::
     rqe->len = cqe->res;
     IO_URING_WRITE_ONCE(*refill_ring.ktail, ++refill_ring.rq_tail);
 
+Area chunking
+-------------
+
+zcrx splits the memory area into fixed-length physically contiguous chunks.
+This limits the maximum buffer size returned in a single io_uring CQE. Users
+can provide a hint to the kernel to use larger chunks by setting the
+``rx_buf_len`` field of ``struct io_uring_zcrx_ifq_reg`` to the desired length
+during registration. If this field is set to zero, the kernel defaults to
+the system page size.
+
+To use larger sizes, the memory area must be backed by physically contiguous
+ranges whose sizes are multiples of ``rx_buf_len``. It also requires kernel
+and hardware support. If registration fails, users are generally expected to
+fall back to defaults by setting ``rx_buf_len`` to zero.
+
+Larger chunks don't give any additional guarantees about buffer sizes returned
+in CQEs, and they can vary depending on many factors like traffic pattern,
+hardware offload, etc. It doesn't require any application changes beyond zcrx
+registration.
+
 Testing
 =======
 
--
2.52.0
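
As a usage sketch (not part of the patch): the fallback the documentation
describes could look roughly like the following. It assumes a liburing build
that exposes io_uring_register_ifq() and the zcrx types used by the selftest
earlier in the series; the local struct mirrors the selftest's copy of the
proposed registration layout and is an assumption, not the final uapi.

#include <errno.h>
#include <liburing.h>

/* Local mirror of the proposed registration struct with the new
 * rx_buf_len field, as in the selftest; adjust to the final uapi. */
struct zcrx_ifq_reg_hint {
	__u32 if_idx;
	__u32 if_rxq;
	__u32 rq_entries;
	__u32 flags;
	__u64 area_ptr;
	__u64 region_ptr;
	struct io_uring_zcrx_offsets offsets;
	__u32 zcrx_id;
	__u32 rx_buf_len;
	__u64 __resv[3];
};

/* Try to register with a larger chunk size first, then retry with the
 * default (rx_buf_len == 0, i.e. system page size) if the kernel or the
 * device rejects the hint. */
static int register_zcrx_with_hint(struct io_uring *ring,
				   struct zcrx_ifq_reg_hint *reg,
				   unsigned int large_buf_len)
{
	int ret;

	reg->rx_buf_len = large_buf_len;
	ret = io_uring_register_ifq(ring, (struct io_uring_zcrx_ifq_reg *)reg);
	if (ret == -EINVAL || ret == -EOPNOTSUPP || ret == -ERANGE) {
		reg->rx_buf_len = 0;	/* fall back to default chunking */
		ret = io_uring_register_ifq(ring,
					    (struct io_uring_zcrx_ifq_reg *)reg);
	}
	return ret;
}

Treating -EINVAL, -EOPNOTSUPP and -ERANGE as "hint not supported" mirrors
what the selftest does before retrying with the default chunk size.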