From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Jonathan Corbet, Michael Chan, Pavan Chebbi, Andrew Lunn,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, Joshua Washington, Harshitha Ramamurthy,
 Saeed Mahameed, Tariq Toukan, Mark Bloch, Leon Romanovsky,
 Alexander Duyck, Ilias Apalodimas, Shuah Khan, Willem de Bruijn,
 Ankit Garg, Tim Hostetler, Alok Tiwari, Ziwei Xiao, John Fraker,
 Praveen Kaligineedi, Mohsin Bashir, Joe Damato, Mina Almasry,
 Dimitri Daskalakis, Stanislav Fomichev, Kuniyuki Iwashima,
 Samiullah Khawaja, Ahmed Zaki, Alexander Lobakin, Pavel Begunkov,
 David Wei, Yue Haibing, Haiyue Wang, Jens Axboe, Simon Horman,
 Vishwanath Seshagiri, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-kselftest@vger.kernel.org,
 dtatulea@nvidia.com, io-uring@vger.kernel.org
Subject: [PATCH net-next v8 1/9] net: memzero mp params when closing a queue
Date: Fri, 9 Jan 2026 11:28:40 +0000
Message-ID: <1414450be1abf13a812cbcfc3747beb6b6f767a4.1767819709.git.asml.silence@gmail.com>
X-Mailer: git-send-email 2.52.0
Instead of resetting the memory provider parameters one by one in
__net_mp_{open,close}_rxq, memzero the entire structure. This makes it
easier to extend the structure in later patches.

Signed-off-by: Pavel Begunkov
---
 net/core/netdev_rx_queue.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index c7d9341b7630..a0083f176a9c 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -139,10 +139,9 @@ int __net_mp_open_rxq(struct net_device *dev, unsigned int rxq_idx,
 
 	rxq->mp_params = *p;
 	ret = netdev_rx_queue_restart(dev, rxq_idx);
-	if (ret) {
-		rxq->mp_params.mp_ops = NULL;
-		rxq->mp_params.mp_priv = NULL;
-	}
+	if (ret)
+		memset(&rxq->mp_params, 0, sizeof(rxq->mp_params));
+
 	return ret;
 }
 
@@ -179,8 +178,7 @@ void __net_mp_close_rxq(struct net_device *dev, unsigned int ifq_idx,
 		    rxq->mp_params.mp_priv != old_p->mp_priv))
 		return;
 
-	rxq->mp_params.mp_ops = NULL;
-	rxq->mp_params.mp_priv = NULL;
+	memset(&rxq->mp_params, 0, sizeof(rxq->mp_params));
 	err = netdev_rx_queue_restart(dev, ifq_idx);
 	WARN_ON(err && err != -ENETDOWN);
 }
-- 
2.52.0

From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v8 2/9] net: reduce indent of struct netdev_queue_mgmt_ops members
Date: Fri, 9 Jan 2026 11:28:41 +0000
X-Mailer: git-send-email 2.52.0

From: Jakub Kicinski

Trivial change: reduce the indentation. The original layout looks
copied from the real NDOs; it is unnecessarily deep and makes passing
struct arguments awkward.

Signed-off-by: Jakub Kicinski
Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
---
 include/net/netdev_queues.h | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index cd00e0406cf4..541e7d9853b1 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -135,20 +135,20 @@ void netdev_stat_queue_sum(struct net_device *netdev,
  * be called for an interface which is open.
  */
 struct netdev_queue_mgmt_ops {
-	size_t			ndo_queue_mem_size;
-	int			(*ndo_queue_mem_alloc)(struct net_device *dev,
-						       void *per_queue_mem,
-						       int idx);
-	void			(*ndo_queue_mem_free)(struct net_device *dev,
-						      void *per_queue_mem);
-	int			(*ndo_queue_start)(struct net_device *dev,
-						   void *per_queue_mem,
-						   int idx);
-	int			(*ndo_queue_stop)(struct net_device *dev,
-						  void *per_queue_mem,
-						  int idx);
-	struct device *		(*ndo_queue_get_dma_dev)(struct net_device *dev,
-							 int idx);
+	size_t	ndo_queue_mem_size;
+	int	(*ndo_queue_mem_alloc)(struct net_device *dev,
+				       void *per_queue_mem,
+				       int idx);
+	void	(*ndo_queue_mem_free)(struct net_device *dev,
+				      void *per_queue_mem);
+	int	(*ndo_queue_start)(struct net_device *dev,
+				   void *per_queue_mem,
+				   int idx);
+	int	(*ndo_queue_stop)(struct net_device *dev,
+				  void *per_queue_mem,
+				  int idx);
+	struct device *	(*ndo_queue_get_dma_dev)(struct net_device *dev,
+						 int idx);
 };
 
 bool netif_rxq_has_unreadable_mp(struct net_device *dev, int idx);
-- 
2.52.0

From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v8 3/9] net: add bare bone queue configs
Date: Fri, 9 Jan 2026 11:28:42 +0000
Message-ID: <6280519f4d4dcd9500f04fc1a79677a2df9b2fca.1767819709.git.asml.silence@gmail.com>
X-Mailer: git-send-email 2.52.0

We'll need to pass extra parameters when allocating a queue for memory
providers. Define a new structure for queue configurations and pass it
to the queue API (qapi) callbacks. It's empty for now; actual
parameters will be added in the following patches.

Configurations should persist across queue resets, so they are
default-initialised on device registration and stored in struct
netdev_rx_queue. We also add a new qapi callback that fills in the
default config for a queue. It must be implemented by drivers that
want to use queue configs and is optional otherwise.
Suggested-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |  8 ++++++--
 drivers/net/ethernet/google/gve/gve_main.c    |  9 ++++++---
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 10 ++++++----
 drivers/net/ethernet/meta/fbnic/fbnic_txrx.c  |  8 ++++++--
 drivers/net/netdevsim/netdev.c                |  7 +++++--
 include/net/netdev_queues.h                   |  9 +++++++++
 include/net/netdev_rx_queue.h                 |  2 ++
 net/core/dev.c                                | 17 +++++++++++++++++
 net/core/netdev_rx_queue.c                    | 12 +++++++++---
 9 files changed, 66 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index d17d0ea89c36..73f954da39b9 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -15902,7 +15902,9 @@ static const struct netdev_stat_ops bnxt_stat_ops = {
 	.get_base_stats		= bnxt_get_base_stats,
 };
 
-static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+static int bnxt_queue_mem_alloc(struct net_device *dev,
+				struct netdev_queue_config *qcfg,
+				void *qmem, int idx)
 {
 	struct bnxt_rx_ring_info *rxr, *clone;
 	struct bnxt *bp = netdev_priv(dev);
@@ -16068,7 +16070,9 @@ static void bnxt_copy_rx_ring(struct bnxt *bp,
 	dst->rx_agg_bmap = src->rx_agg_bmap;
 }
 
-static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+static int bnxt_queue_start(struct net_device *dev,
+			    struct netdev_queue_config *qcfg,
+			    void *qmem, int idx)
 {
 	struct bnxt *bp = netdev_priv(dev);
 	struct bnxt_rx_ring_info *rxr, *clone;
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 7eb64e1e4d85..c42640da15a5 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -2616,8 +2616,9 @@ static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
 	gve_rx_free_ring_dqo(priv, gve_per_q_mem, &cfg);
 }
 
-static int gve_rx_queue_mem_alloc(struct net_device *dev, void *per_q_mem,
-				  int idx)
+static int gve_rx_queue_mem_alloc(struct net_device *dev,
+				  struct netdev_queue_config *qcfg,
+				  void *per_q_mem, int idx)
 {
 	struct gve_priv *priv = netdev_priv(dev);
 	struct gve_rx_alloc_rings_cfg cfg = {0};
@@ -2638,7 +2639,9 @@ static int gve_rx_queue_mem_alloc(struct net_device *dev, void *per_q_mem,
 	return err;
 }
 
-static int gve_rx_queue_start(struct net_device *dev, void *per_q_mem, int idx)
+static int gve_rx_queue_start(struct net_device *dev,
+			      struct netdev_queue_config *qcfg,
+			      void *per_q_mem, int idx)
 {
 	struct gve_priv *priv = netdev_priv(dev);
 	struct gve_rx_ring *gve_per_q_mem;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 07fc4d2c8fad..0e2132b58257 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -5596,8 +5596,9 @@ struct mlx5_qmgmt_data {
 	struct mlx5e_channel_param cparam;
 };
 
-static int mlx5e_queue_mem_alloc(struct net_device *dev, void *newq,
-				 int queue_index)
+static int mlx5e_queue_mem_alloc(struct net_device *dev,
+				 struct netdev_queue_config *qcfg,
+				 void *newq, int queue_index)
 {
 	struct mlx5_qmgmt_data *new = (struct mlx5_qmgmt_data *)newq;
 	struct mlx5e_priv *priv = netdev_priv(dev);
@@ -5658,8 +5659,9 @@ static int mlx5e_queue_stop(struct net_device *dev, void *oldq, int queue_index)
 	return 0;
 }
 
-static int mlx5e_queue_start(struct net_device *dev, void *newq,
-			     int queue_index)
+static int mlx5e_queue_start(struct net_device *dev,
+			     struct netdev_queue_config *qcfg,
+			     void *newq, int queue_index)
 {
 	struct mlx5_qmgmt_data *new = (struct mlx5_qmgmt_data *)newq;
 	struct mlx5e_priv *priv = netdev_priv(dev);
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 13d508ce637f..e36ed25462b4 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2809,7 +2809,9 @@ void fbnic_napi_depletion_check(struct net_device *netdev)
 	fbnic_wrfl(fbd);
 }
 
-static int fbnic_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+static int fbnic_queue_mem_alloc(struct net_device *dev,
+				 struct netdev_queue_config *qcfg,
+				 void *qmem, int idx)
 {
 	struct fbnic_net *fbn = netdev_priv(dev);
 	const struct fbnic_q_triad *real;
@@ -2861,7 +2863,9 @@ static void __fbnic_nv_restart(struct fbnic_net *fbn,
 		netif_wake_subqueue(fbn->netdev, nv->qt[i].sub0.q_idx);
 }
 
-static int fbnic_queue_start(struct net_device *dev, void *qmem, int idx)
+static int fbnic_queue_start(struct net_device *dev,
+			     struct netdev_queue_config *qcfg,
+			     void *qmem, int idx)
 {
 	struct fbnic_net *fbn = netdev_priv(dev);
 	struct fbnic_napi_vector *nv;
diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
index 6927c1962277..6285fbefe38a 100644
--- a/drivers/net/netdevsim/netdev.c
+++ b/drivers/net/netdevsim/netdev.c
@@ -758,7 +758,9 @@ struct nsim_queue_mem {
 };
 
 static int
-nsim_queue_mem_alloc(struct net_device *dev, void *per_queue_mem, int idx)
+nsim_queue_mem_alloc(struct net_device *dev,
+		     struct netdev_queue_config *qcfg,
+		     void *per_queue_mem, int idx)
 {
 	struct nsim_queue_mem *qmem = per_queue_mem;
 	struct netdevsim *ns = netdev_priv(dev);
@@ -807,7 +809,8 @@ static void nsim_queue_mem_free(struct net_device *dev, void *per_queue_mem)
 }
 
 static int
-nsim_queue_start(struct net_device *dev, void *per_queue_mem, int idx)
+nsim_queue_start(struct net_device *dev, struct netdev_queue_config *qcfg,
+		 void *per_queue_mem, int idx)
 {
 	struct nsim_queue_mem *qmem = per_queue_mem;
 	struct netdevsim *ns = netdev_priv(dev);
diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index 541e7d9853b1..f6f1f71a24e1 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -14,6 +14,9 @@ struct netdev_config {
 	u8 hds_config;
 };
 
+struct netdev_queue_config {
+};
+
 /* See the netdev.yaml spec for definition of each statistic */
 struct netdev_queue_stats_rx {
 	u64 bytes;
@@ -130,6 +133,8 @@ void netdev_stat_queue_sum(struct net_device *netdev,
  * @ndo_queue_get_dma_dev: Get dma device for zero-copy operations to be used
  *			   for this queue. Return NULL on error.
  *
+ * @ndo_default_qcfg: Populate queue config struct with defaults. Optional.
+ *
  * Note that @ndo_queue_mem_alloc and @ndo_queue_mem_free may be called while
  * the interface is closed. @ndo_queue_start and @ndo_queue_stop will only
  * be called for an interface which is open.
@@ -137,16 +142,20 @@ void netdev_stat_queue_sum(struct net_device *netdev,
 struct netdev_queue_mgmt_ops {
 	size_t	ndo_queue_mem_size;
 	int	(*ndo_queue_mem_alloc)(struct net_device *dev,
+				       struct netdev_queue_config *qcfg,
 				       void *per_queue_mem,
 				       int idx);
 	void	(*ndo_queue_mem_free)(struct net_device *dev,
 				      void *per_queue_mem);
 	int	(*ndo_queue_start)(struct net_device *dev,
+			   struct netdev_queue_config *qcfg,
 			   void *per_queue_mem,
 			   int idx);
 	int	(*ndo_queue_stop)(struct net_device *dev,
 				  void *per_queue_mem,
 				  int idx);
+	void	(*ndo_default_qcfg)(struct net_device *dev,
+				    struct netdev_queue_config *qcfg);
 	struct device *	(*ndo_queue_get_dma_dev)(struct net_device *dev,
 						 int idx);
 };
diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index 8cdcd138b33f..cfa72c485387 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 
 /* This structure contains an instance of an RX queue.
  */
 struct netdev_rx_queue {
@@ -27,6 +28,7 @@ struct netdev_rx_queue {
 	struct xsk_buff_pool *pool;
 #endif
 	struct napi_struct *napi;
+	struct netdev_queue_config qcfg;
 	struct pp_memory_provider_params mp_params;
 } ____cacheline_aligned_in_smp;
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 36dc5199037e..a1d394addaef 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -11270,6 +11270,21 @@ static void netdev_free_phy_link_topology(struct net_device *dev)
 	}
 }
 
+static void init_rx_queue_cfgs(struct net_device *dev)
+{
+	const struct netdev_queue_mgmt_ops *qops = dev->queue_mgmt_ops;
+	struct netdev_rx_queue *rxq;
+	int i;
+
+	if (!qops || !qops->ndo_default_qcfg)
+		return;
+
+	for (i = 0; i < dev->num_rx_queues; i++) {
+		rxq = __netif_get_rx_queue(dev, i);
+		qops->ndo_default_qcfg(dev, &rxq->qcfg);
+	}
+}
+
 /**
  * register_netdevice() - register a network device
  * @dev: device to register
@@ -11315,6 +11330,8 @@ int register_netdevice(struct net_device *dev)
 	if (!dev->name_node)
 		goto out;
 
+	init_rx_queue_cfgs(dev);
+
 	/* Init, if this function is available */
 	if (dev->netdev_ops->ndo_init) {
 		ret = dev->netdev_ops->ndo_init(dev);
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index a0083f176a9c..86d1c0a925e3 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -22,6 +22,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 {
 	struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);
 	const struct netdev_queue_mgmt_ops *qops = dev->queue_mgmt_ops;
+	struct netdev_queue_config qcfg;
 	void *new_mem, *old_mem;
 	int err;
 
@@ -31,6 +32,10 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 
 	netdev_assert_locked(dev);
 
+	memset(&qcfg, 0, sizeof(qcfg));
+	if (qops->ndo_default_qcfg)
+		qops->ndo_default_qcfg(dev, &qcfg);
+
 	new_mem = kvzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
 	if (!new_mem)
 		return -ENOMEM;
@@ -41,7 +46,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 		goto err_free_new_mem;
 	}
 
-	err = qops->ndo_queue_mem_alloc(dev, new_mem, rxq_idx);
+	err = qops->ndo_queue_mem_alloc(dev, &qcfg, new_mem, rxq_idx);
 	if (err)
 		goto err_free_old_mem;
 
@@ -54,7 +59,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 		if (err)
 			goto err_free_new_queue_mem;
 
-		err = qops->ndo_queue_start(dev, new_mem, rxq_idx);
+		err = qops->ndo_queue_start(dev, &qcfg, new_mem, rxq_idx);
 		if (err)
 			goto err_start_queue;
 	} else {
@@ -66,6 +71,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 	kvfree(old_mem);
 	kvfree(new_mem);
 
+	rxq->qcfg = qcfg;
 	return 0;
 
 err_start_queue:
@@ -76,7 +82,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 	 * WARN if we fail to recover the old rx queue, and at least free
 	 * old_mem so we don't also leak that.
 	 */
-	if (qops->ndo_queue_start(dev, old_mem, rxq_idx)) {
+	if (qops->ndo_queue_start(dev, &rxq->qcfg, old_mem, rxq_idx)) {
 		WARN(1, "Failed to restart old queue in error path. RX queue %d may be unhealthy.",
 		     rxq_idx);
-- 
2.52.0

From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Cc: "David S . Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Jonathan Corbet, Michael Chan, Pavan Chebbi, Andrew Lunn,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, Joshua Washington, Harshitha Ramamurthy, Saeed Mahameed,
 Tariq Toukan, Mark Bloch, Leon Romanovsky, Alexander Duyck,
 Ilias Apalodimas, Shuah Khan, Willem de Bruijn, Ankit Garg,
 Tim Hostetler, Alok Tiwari, Ziwei Xiao, John Fraker,
 Praveen Kaligineedi, Mohsin Bashir, Joe Damato, Mina Almasry,
 Dimitri Daskalakis, Stanislav Fomichev, Kuniyuki Iwashima,
 Samiullah Khawaja, Ahmed Zaki, Alexander Lobakin, Pavel Begunkov,
 David Wei, Yue Haibing, Haiyue Wang, Jens Axboe, Simon Horman,
 Vishwanath Seshagiri, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-kselftest@vger.kernel.org,
 dtatulea@nvidia.com, io-uring@vger.kernel.org
Subject: [PATCH net-next v8 4/9] net: pass queue rx page size from memory provider
Date: Fri, 9 Jan 2026 11:28:43 +0000
X-Mailer: git-send-email 2.52.0

Allow memory providers to configure rx queues with a custom receive page
size. It's passed in struct pp_memory_provider_params, which is copied
into the queue, so it's preserved across queue restarts. It's then
propagated to the driver in a new queue config parameter. Drivers should
explicitly opt into using it by setting QCFG_RX_PAGE_SIZE, in which case
they should implement ndo_default_qcfg, validate the size on queue
restart, and honour the current config in case of a reset.

Signed-off-by: Pavel Begunkov
---
 include/net/netdev_queues.h   | 10 ++++++++++
 include/net/page_pool/types.h |  1 +
 net/core/netdev_rx_queue.c    |  9 +++++++++
 3 files changed, 20 insertions(+)

diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index f6f1f71a24e1..feca25131930 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -15,6 +15,7 @@ struct netdev_config {
 };
 
 struct netdev_queue_config {
+	u32 rx_page_size;
 };
 
 /* See the netdev.yaml spec for definition of each statistic */
@@ -114,6 +115,11 @@ void netdev_stat_queue_sum(struct net_device *netdev,
 			   int tx_start, int tx_end,
 			   struct netdev_queue_stats_tx *tx_sum);
 
+enum {
+	/* The queue checks and honours the page size qcfg parameter */
+	QCFG_RX_PAGE_SIZE	= 0x1,
+};
+
 /**
  * struct netdev_queue_mgmt_ops - netdev ops for queue management
  *
@@ -135,6 +141,8 @@ void netdev_stat_queue_sum(struct net_device *netdev,
  *
  * @ndo_default_qcfg: Populate queue config struct with defaults. Optional.
  *
+ * @supported_params: Bitmask of supported parameters, see QCFG_*.
+ *
  * Note that @ndo_queue_mem_alloc and @ndo_queue_mem_free may be called while
  * the interface is closed. @ndo_queue_start and @ndo_queue_stop will only
  * be called for an interface which is open.
@@ -158,6 +166,8 @@ struct netdev_queue_mgmt_ops {
 						 struct netdev_queue_config *qcfg);
 	struct device *	(*ndo_queue_get_dma_dev)(struct net_device *dev,
 						 int idx);
+
+	unsigned int	supported_params;
 };
 
 bool netif_rxq_has_unreadable_mp(struct net_device *dev, int idx);
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 1509a536cb85..0d453484a585 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -161,6 +161,7 @@ struct memory_provider_ops;
 struct pp_memory_provider_params {
 	void *mp_priv;
 	const struct memory_provider_ops *mp_ops;
+	u32 rx_page_size;
 };
 
 struct page_pool {
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index 86d1c0a925e3..b81cad90ba2f 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -30,12 +30,21 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 	    !qops->ndo_queue_mem_alloc || !qops->ndo_queue_start)
 		return -EOPNOTSUPP;
 
+	if (WARN_ON_ONCE(qops->supported_params && !qops->ndo_default_qcfg))
+		return -EINVAL;
+
 	netdev_assert_locked(dev);
 
 	memset(&qcfg, 0, sizeof(qcfg));
 	if (qops->ndo_default_qcfg)
 		qops->ndo_default_qcfg(dev, &qcfg);
 
+	if (rxq->mp_params.rx_page_size) {
+		if (!(qops->supported_params & QCFG_RX_PAGE_SIZE))
+			return -EOPNOTSUPP;
+		qcfg.rx_page_size = rxq->mp_params.rx_page_size;
+	}
+
 	new_mem = kvzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
 	if (!new_mem)
 		return -ENOMEM;
-- 
2.52.0

From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v8 5/9] eth: bnxt: store rx buffer size per queue
Date: Fri, 9 Jan 2026 11:28:44 +0000

Instead of using a constant buffer length, allow configuring the size
for each queue separately. There is no way to change the length yet;
it will be passed in from memory providers in a later patch.
Suggested-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 56 +++++++++++--------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  6 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h |  2 +-
 4 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 73f954da39b9..8f42885a7c86 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -905,7 +905,7 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 
 static bool bnxt_separate_head_pool(struct bnxt_rx_ring_info *rxr)
 {
-	return rxr->need_head_pool || PAGE_SIZE > BNXT_RX_PAGE_SIZE;
+	return rxr->need_head_pool || rxr->rx_page_size < PAGE_SIZE;
 }
 
 static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
@@ -915,9 +915,9 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 {
 	struct page *page;
 
-	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
+	if (rxr->rx_page_size < PAGE_SIZE) {
 		page = page_pool_dev_alloc_frag(rxr->page_pool, offset,
-						BNXT_RX_PAGE_SIZE);
+						rxr->rx_page_size);
 	} else {
 		page = page_pool_dev_alloc_pages(rxr->page_pool);
 		*offset = 0;
@@ -936,8 +936,9 @@ static netmem_ref __bnxt_alloc_rx_netmem(struct bnxt *bp, dma_addr_t *mapping,
 {
 	netmem_ref netmem;
 
-	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
-		netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset, BNXT_RX_PAGE_SIZE, gfp);
+	if (rxr->rx_page_size < PAGE_SIZE) {
+		netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset,
+						     rxr->rx_page_size, gfp);
 	} else {
 		netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
 		*offset = 0;
@@ -1155,9 +1156,9 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, rxr->rx_page_size,
				bp->rx_dir);
-	skb = napi_build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE);
+	skb = napi_build_skb(data_ptr - bp->rx_offset, rxr->rx_page_size);
 	if (!skb) {
 		page_pool_recycle_direct(rxr->page_pool, page);
 		return NULL;
@@ -1189,7 +1190,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, rxr->rx_page_size,
				bp->rx_dir);
 
 	if (unlikely(!payload))
@@ -1203,7 +1204,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 
 	skb_mark_for_recycle(skb);
 	off = (void *)data_ptr - page_address(page);
-	skb_add_rx_frag(skb, 0, page, off, len, BNXT_RX_PAGE_SIZE);
+	skb_add_rx_frag(skb, 0, page, off, len, rxr->rx_page_size);
 	memcpy(skb->data - NET_IP_ALIGN, data_ptr - NET_IP_ALIGN,
 	       payload + NET_IP_ALIGN);
 
@@ -1288,7 +1289,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 		if (skb) {
 			skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem,
 					       cons_rx_buf->offset,
-					       frag_len, BNXT_RX_PAGE_SIZE);
+					       frag_len, rxr->rx_page_size);
 		} else {
 			skb_frag_t *frag = &shinfo->frags[i];
 
@@ -1313,7 +1314,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 			if (skb) {
 				skb->len -= frag_len;
 				skb->data_len -= frag_len;
-				skb->truesize -= BNXT_RX_PAGE_SIZE;
+				skb->truesize -= rxr->rx_page_size;
 			}
 
 			--shinfo->nr_frags;
@@ -1328,7 +1329,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 		}
 
 		page_pool_dma_sync_netmem_for_cpu(rxr->page_pool, netmem, 0,
-						  BNXT_RX_PAGE_SIZE);
+						  rxr->rx_page_size);
 
 		total_frag_len += frag_len;
 		prod = NEXT_RX_AGG(prod);
@@ -2281,8 +2282,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 			if (!skb)
 				goto oom_next_rx;
 		} else {
-			skb = bnxt_xdp_build_skb(bp, skb, agg_bufs,
-						 rxr->page_pool, &xdp);
+			skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, rxr, &xdp);
 			if (!skb) {
 				/* we should be able to free the old skb here */
 				bnxt_xdp_buff_frags_free(rxr, &xdp);
@@ -3828,11 +3828,13 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size / rx_size_fac;
+
+	pp.order = get_order(rxr->rx_page_size);
 	pp.nid = numa_node;
 	pp.netdev = bp->dev;
 	pp.dev = &bp->pdev->dev;
 	pp.dma_dir = bp->rx_dir;
-	pp.max_len = PAGE_SIZE;
+	pp.max_len = PAGE_SIZE << pp.order;
 	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV |
 		   PP_FLAG_ALLOW_UNREADABLE_NETMEM;
 	pp.queue_idx = rxr->bnapi->index;
@@ -3843,7 +3845,10 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	rxr->page_pool = pool;
 
 	rxr->need_head_pool = page_pool_is_unreadable(pool);
+	rxr->need_head_pool |= !!pp.order;
 	if (bnxt_separate_head_pool(rxr)) {
+		pp.order = 0;
+		pp.max_len = PAGE_SIZE;
 		pp.pool_size = min(bp->rx_ring_size / rx_size_fac, 1024);
 		pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 		pool = page_pool_create(&pp);
@@ -4319,6 +4324,8 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		if (!rxr)
 			goto skip_rx;
 
+		rxr->rx_page_size = BNXT_RX_PAGE_SIZE;
+
 		ring = &rxr->rx_ring_struct;
 		rmem = &ring->ring_mem;
 		rmem->nr_pages = bp->rx_nr_pages;
@@ -4478,7 +4485,7 @@ static void bnxt_init_one_rx_agg_ring_rxbd(struct bnxt *bp,
 	ring = &rxr->rx_agg_ring_struct;
 	ring->fw_ring_id = INVALID_HW_RING_ID;
 	if ((bp->flags & BNXT_FLAG_AGG_RINGS)) {
-		type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) |
+		type = ((u32)rxr->rx_page_size << RX_BD_LEN_SHIFT) |
			RX_BD_TYPE_RX_AGG_BD;
 
		/* On P7, setting EOP will cause the chip to disable
@@ -7056,6 +7063,7 @@ static void bnxt_hwrm_ring_grp_free(struct bnxt *bp)
 
 static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
				       struct hwrm_ring_alloc_input *req,
+				       struct bnxt_rx_ring_info *rxr,
				       struct bnxt_ring_struct *ring)
 {
 	struct bnxt_ring_grp_info *grp_info = &bp->grp_info[ring->grp_idx];
@@ -7065,7 +7073,7 @@ static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
 	if (ring_type == HWRM_RING_ALLOC_AGG) {
 		req->ring_type = RING_ALLOC_REQ_RING_TYPE_RX_AGG;
 		req->rx_ring_id = cpu_to_le16(grp_info->rx_fw_ring_id);
-		req->rx_buf_size = cpu_to_le16(BNXT_RX_PAGE_SIZE);
+		req->rx_buf_size = cpu_to_le16(rxr->rx_page_size);
 		enables |= RING_ALLOC_REQ_ENABLES_RX_RING_ID_VALID;
 	} else {
 		req->rx_buf_size = cpu_to_le16(bp->rx_buf_use_size);
@@ -7079,6 +7087,7 @@ static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
 }
 
 static int hwrm_ring_alloc_send_msg(struct bnxt *bp,
+				    struct bnxt_rx_ring_info *rxr,
				    struct bnxt_ring_struct *ring,
				    u32 ring_type, u32 map_index)
 {
@@ -7135,7 +7144,8 @@ static int hwrm_ring_alloc_send_msg(struct bnxt *bp,
			cpu_to_le32(bp->rx_ring_mask + 1) :
			cpu_to_le32(bp->rx_agg_ring_mask + 1);
 		if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)
-			bnxt_set_rx_ring_params_p5(bp, ring_type, req, ring);
+			bnxt_set_rx_ring_params_p5(bp, ring_type, req,
+						   rxr, ring);
 		break;
 	case HWRM_RING_ALLOC_CMPL:
 		req->ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL;
@@ -7283,7 +7293,7 @@ static int bnxt_hwrm_rx_ring_alloc(struct bnxt *bp,
 	u32 map_idx = bnapi->index;
 	int rc;
 
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, rxr, ring, type, map_idx);
 	if (rc)
 		return rc;
 
@@ -7303,7 +7313,7 @@ static int bnxt_hwrm_rx_agg_ring_alloc(struct bnxt *bp,
 	int rc;
 
 	map_idx = grp_idx + bp->rx_nr_rings;
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, rxr, ring, type, map_idx);
 	if (rc)
 		return rc;
 
@@ -7327,7 +7337,7 @@ static int bnxt_hwrm_cp_ring_alloc_p5(struct bnxt *bp,
 
 	ring = &cpr->cp_ring_struct;
 	ring->handle = BNXT_SET_NQ_HDL(cpr);
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, map_idx);
 	if (rc)
 		return rc;
 	bnxt_set_db(bp, &cpr->cp_db, type, map_idx, ring->fw_ring_id);
@@ -7342,7 +7352,7 @@ static int bnxt_hwrm_tx_ring_alloc(struct bnxt *bp,
 	const u32 type = HWRM_RING_ALLOC_TX;
 	int rc;
 
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, tx_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, tx_idx);
 	if (rc)
 		return rc;
 	bnxt_set_db(bp, &txr->tx_db, type, tx_idx, ring->fw_ring_id);
@@ -7368,7 +7378,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 
 		vector = bp->irq_tbl[map_idx].vector;
 		disable_irq_nosync(vector);
-		rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+		rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, map_idx);
 		if (rc) {
 			enable_irq(vector);
 			goto err_out;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index f5f07a7e6b29..4c880a9fba92 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1107,6 +1107,7 @@ struct bnxt_rx_ring_info {
 
 	unsigned long		*rx_agg_bmap;
 	u16			rx_agg_bmap_size;
+	u16			rx_page_size;
 	bool			need_head_pool;
 
 	dma_addr_t		rx_desc_mapping[MAX_RX_PAGES];
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index c94a391b1ba5..85cbeb35681c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -183,7 +183,7 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
			u16 cons, u8 *data_ptr, unsigned int len,
			struct xdp_buff *xdp)
 {
-	u32 buflen = BNXT_RX_PAGE_SIZE;
+	u32 buflen = rxr->rx_page_size;
 	struct bnxt_sw_rx_bd *rx_buf;
 	struct pci_dev *pdev;
 	dma_addr_t mapping;
@@ -460,7 +460,7 @@ int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 
 struct sk_buff *
 bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags,
-		   struct page_pool *pool, struct xdp_buff *xdp)
+		   struct bnxt_rx_ring_info *rxr, struct xdp_buff *xdp)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 
@@ -468,7 +468,7 @@ bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags,
 		return NULL;
 
 	xdp_update_skb_frags_info(skb, num_frags, sinfo->xdp_frags_size,
-				  BNXT_RX_PAGE_SIZE * num_frags,
+				  rxr->rx_page_size * num_frags,
				  xdp_buff_get_skb_flags(xdp));
 	return skb;
 }
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
index 220285e190fc..8933a0dec09a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
@@ -32,6 +32,6 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr,
			      struct xdp_buff *xdp);
 struct sk_buff *bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb,
-				   u8 num_frags, struct page_pool *pool,
+				   u8 num_frags, struct bnxt_rx_ring_info *rxr,
				   struct xdp_buff *xdp);
 #endif
-- 
2.52.0

From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v8 6/9] eth: bnxt: adjust the fill level of agg
 queues with larger buffers
Date: Fri, 9 Jan 2026 11:28:45 +0000
Message-ID: <8b6486d8a498875c4157f28171b5b0d26593c3d8.1767819709.git.asml.silence@gmail.com>

From: Jakub Kicinski

The driver tries to provision more agg buffers than header buffers
since multiple agg segments can reuse the same header. The calculation
/ heuristic tries to provide enough pages for 65k of data for each
header (or 4 frags per header if the result is too big).

This calculation is currently global to the adapter. If we increase the
buffer sizes 8x, we don't want 8x the amount of memory sitting on the
rings. Luckily we don't have to fill the rings completely; adjust the
fill level dynamically in case a particular queue has buffers larger
than the global size.
Signed-off-by: Jakub Kicinski
[pavel: rebase on top of agg_size_fac, assert agg_size_fac]
Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 28 +++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 8f42885a7c86..137e348d2b9c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3816,16 +3816,34 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
 	}
 }
 
+static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr)
+{
+	/* User may have chosen larger than default rx_page_size,
+	 * we keep the ring sizes uniform and also want uniform amount
+	 * of bytes consumed per ring, so cap how much of the rings we fill.
+	 */
+	int fill_level = bp->rx_agg_ring_size;
+
+	if (rxr->rx_page_size > BNXT_RX_PAGE_SIZE)
+		fill_level /= rxr->rx_page_size / BNXT_RX_PAGE_SIZE;
+
+	return fill_level;
+}
+
 static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
				   struct bnxt_rx_ring_info *rxr,
				   int numa_node)
 {
-	const unsigned int agg_size_fac = PAGE_SIZE / BNXT_RX_PAGE_SIZE;
+	unsigned int agg_size_fac = rxr->rx_page_size / BNXT_RX_PAGE_SIZE;
 	const unsigned int rx_size_fac = PAGE_SIZE / SZ_4K;
 	struct page_pool_params pp = { 0 };
 	struct page_pool *pool;
 
-	pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
+	if (WARN_ON_ONCE(agg_size_fac == 0))
+		agg_size_fac = 1;
+
+	pp.pool_size = bnxt_rx_agg_ring_fill_level(bp, rxr) / agg_size_fac;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size / rx_size_fac;
 
@@ -4403,11 +4421,13 @@ static void bnxt_alloc_one_rx_ring_netmem(struct bnxt *bp,
					  struct bnxt_rx_ring_info *rxr,
					  int ring_nr)
 {
+	int fill_level, i;
 	u32 prod;
-	int i;
+
+	fill_level = bnxt_rx_agg_ring_fill_level(bp, rxr);
 
 	prod = rxr->rx_agg_prod;
-	for (i = 0; i < bp->rx_agg_ring_size; i++) {
+	for (i = 0; i < fill_level; i++) {
 		if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_KERNEL)) {
			netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n",
				    ring_nr, i, bp->rx_agg_ring_size);
-- 
2.52.0

From nobody Fri Apr 17 06:35:59 2026
From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v8 7/9] eth: bnxt: support qcfg provided rx page size
Date: Fri, 9 Jan 2026 11:28:46 +0000
Message-ID: <28028611f572ded416b8ab653f1b9515b0337fba.1767819709.git.asml.silence@gmail.com>
Implement support for qcfg-provided rx page sizes. For that, implement
the ndo_default_qcfg callback and validate the config on restart. Also,
use the current config's value in bnxt_init_ring_struct to retain the
correct size across resets.

Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 36 ++++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 +
 2 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 137e348d2b9c..3ffe4fe159d3 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -4325,6 +4325,7 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		struct bnxt_rx_ring_info *rxr;
 		struct bnxt_tx_ring_info *txr;
 		struct bnxt_ring_struct *ring;
+		struct netdev_rx_queue *rxq;
 
 		if (!bnapi)
 			continue;
@@ -4342,7 +4343,8 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		if (!rxr)
 			goto skip_rx;
 
-		rxr->rx_page_size = BNXT_RX_PAGE_SIZE;
+		rxq = __netif_get_rx_queue(bp->dev, i);
+		rxr->rx_page_size = rxq->qcfg.rx_page_size;
 
 		ring = &rxr->rx_ring_struct;
 		rmem = &ring->ring_mem;
@@ -15932,6 +15934,29 @@ static const struct netdev_stat_ops bnxt_stat_ops = {
 	.get_base_stats = bnxt_get_base_stats,
 };
 
+static void bnxt_queue_default_qcfg(struct net_device *dev,
+				    struct netdev_queue_config *qcfg)
+{
+	qcfg->rx_page_size = BNXT_RX_PAGE_SIZE;
+}
+
+static int bnxt_validate_qcfg(struct bnxt *bp, struct netdev_queue_config *qcfg)
+{
+	/* Older chips need MSS calc so rx_page_size is not supported */
+	if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS) &&
+	    qcfg->rx_page_size != BNXT_RX_PAGE_SIZE)
+		return -EINVAL;
+
+	if (!is_power_of_2(qcfg->rx_page_size))
+		return -ERANGE;
+
+	if (qcfg->rx_page_size < BNXT_RX_PAGE_SIZE ||
+	    qcfg->rx_page_size > BNXT_MAX_RX_PAGE_SIZE)
+		return -ERANGE;
+
+	return 0;
+}
+
 static int bnxt_queue_mem_alloc(struct net_device *dev,
 				struct netdev_queue_config *qcfg,
 				void *qmem, int idx)
@@ -15944,6 +15969,10 @@ static int bnxt_queue_mem_alloc(struct net_device *dev,
 	if (!bp->rx_ring)
 		return -ENETDOWN;
 
+	rc = bnxt_validate_qcfg(bp, qcfg);
+	if (rc < 0)
+		return rc;
+
 	rxr = &bp->rx_ring[idx];
 	clone = qmem;
 	memcpy(clone, rxr, sizeof(*rxr));
@@ -15955,6 +15984,7 @@ static int bnxt_queue_mem_alloc(struct net_device *dev,
 	clone->rx_sw_agg_prod = 0;
 	clone->rx_next_cons = 0;
 	clone->need_head_pool = false;
+	clone->rx_page_size = qcfg->rx_page_size;
 
 	rc = bnxt_alloc_rx_page_pool(bp, clone, rxr->page_pool->p.nid);
 	if (rc)
@@ -16081,6 +16111,8 @@ static void bnxt_copy_rx_ring(struct bnxt *bp,
 	src_ring = &src->rx_agg_ring_struct;
 	src_rmem = &src_ring->ring_mem;
 
+	dst->rx_page_size = src->rx_page_size;
+
 	WARN_ON(dst_rmem->nr_pages != src_rmem->nr_pages);
 	WARN_ON(dst_rmem->page_size != src_rmem->page_size);
 	WARN_ON(dst_rmem->flags != src_rmem->flags);
@@ -16235,6 +16267,8 @@ static const struct netdev_queue_mgmt_ops bnxt_queue_mgmt_ops = {
 	.ndo_queue_mem_free = bnxt_queue_mem_free,
 	.ndo_queue_start = bnxt_queue_start,
 	.ndo_queue_stop = bnxt_queue_stop,
+	.ndo_default_qcfg = bnxt_queue_default_qcfg,
+	.supported_params = QCFG_RX_PAGE_SIZE,
 };
 
 static void bnxt_remove_one(struct pci_dev *pdev)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 4c880a9fba92..d245eefbbdda 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -760,6 +760,7 @@ struct nqe_cn {
 #endif
 
 #define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT)
+#define BNXT_MAX_RX_PAGE_SIZE BIT(15)
 
 #define BNXT_MAX_MTU 9500
 
-- 
2.52.0
From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v8 8/9] selftests: iou-zcrx: test large chunk sizes
Date: Fri, 9 Jan 2026 11:28:47 +0000

Add a test using large chunks for the zcrx memory area.
Signed-off-by: Pavel Begunkov
---
 .../selftests/drivers/net/hw/iou-zcrx.c  | 72 +++++++++++++++----
 .../selftests/drivers/net/hw/iou-zcrx.py | 37 ++++++++++
 2 files changed, 97 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/hw/iou-zcrx.c b/tools/testing/selftests/drivers/net/hw/iou-zcrx.c
index 62456df947bc..0a19b573f4f5 100644
--- a/tools/testing/selftests/drivers/net/hw/iou-zcrx.c
+++ b/tools/testing/selftests/drivers/net/hw/iou-zcrx.c
@@ -12,6 +12,7 @@
 #include
 
 #include
+#include
 #include
 #include
 #include
@@ -37,6 +38,23 @@
 
 #include
 
+#define SKIP_CODE 42
+
+struct t_io_uring_zcrx_ifq_reg {
+	__u32 if_idx;
+	__u32 if_rxq;
+	__u32 rq_entries;
+	__u32 flags;
+
+	__u64 area_ptr; /* pointer to struct io_uring_zcrx_area_reg */
+	__u64 region_ptr; /* struct io_uring_region_desc * */
+
+	struct io_uring_zcrx_offsets offsets;
+	__u32 zcrx_id;
+	__u32 rx_buf_len;
+	__u64 __resv[3];
+};
+
 static long page_size;
 #define AREA_SIZE (8192 * page_size)
 #define SEND_SIZE (512 * 4096)
@@ -65,6 +83,8 @@ static bool cfg_oneshot;
 static int cfg_oneshot_recvs;
 static int cfg_send_size = SEND_SIZE;
 static struct sockaddr_in6 cfg_addr;
+static unsigned cfg_rx_buf_len;
+static bool cfg_dry_run;
 
 static char *payload;
 static void *area_ptr;
@@ -128,14 +148,28 @@ static void setup_zcrx(struct io_uring *ring)
 	if (!ifindex)
 		error(1, 0, "bad interface name: %s", cfg_ifname);
 
-	area_ptr = mmap(NULL,
-			AREA_SIZE,
-			PROT_READ | PROT_WRITE,
-			MAP_ANONYMOUS | MAP_PRIVATE,
-			0,
-			0);
-	if (area_ptr == MAP_FAILED)
-		error(1, 0, "mmap(): zero copy area");
+	if (cfg_rx_buf_len && cfg_rx_buf_len != page_size) {
+		area_ptr = mmap(NULL,
+				AREA_SIZE,
+				PROT_READ | PROT_WRITE,
+				MAP_ANONYMOUS | MAP_PRIVATE |
+				MAP_HUGETLB | MAP_HUGE_2MB,
+				-1,
+				0);
+		if (area_ptr == MAP_FAILED) {
+			printf("Can't allocate huge pages\n");
+			exit(SKIP_CODE);
+		}
+	} else {
+		area_ptr = mmap(NULL,
+				AREA_SIZE,
+				PROT_READ | PROT_WRITE,
+				MAP_ANONYMOUS | MAP_PRIVATE,
+				0,
+				0);
+		if (area_ptr == MAP_FAILED)
+			error(1, 0, "mmap(): zero copy area");
+	}
 
 	ring_size = get_refill_ring_size(rq_entries);
 	ring_ptr = mmap(NULL,
@@ -157,17 +191,23 @@ static void setup_zcrx(struct io_uring *ring)
 		.flags = 0,
 	};
 
-	struct io_uring_zcrx_ifq_reg reg = {
+	struct t_io_uring_zcrx_ifq_reg reg = {
 		.if_idx = ifindex,
 		.if_rxq = cfg_queue_id,
 		.rq_entries = rq_entries,
 		.area_ptr = (__u64)(unsigned long)&area_reg,
 		.region_ptr = (__u64)(unsigned long)&region_reg,
+		.rx_buf_len = cfg_rx_buf_len,
 	};
 
-	ret = io_uring_register_ifq(ring, &reg);
-	if (ret)
+	ret = io_uring_register_ifq(ring, (void *)&reg);
+	if (cfg_rx_buf_len && (ret == -EINVAL || ret == -EOPNOTSUPP ||
+			       ret == -ERANGE)) {
+		printf("Large chunks are not supported %i\n", ret);
+		exit(SKIP_CODE);
+	} else if (ret) {
 		error(1, 0, "io_uring_register_ifq(): %d", ret);
+	}
 
 	rq_ring.khead = (unsigned int *)((char *)ring_ptr + reg.offsets.head);
 	rq_ring.ktail = (unsigned int *)((char *)ring_ptr + reg.offsets.tail);
@@ -323,6 +363,8 @@ static void run_server(void)
 	io_uring_queue_init(512, &ring, flags);
 
 	setup_zcrx(&ring);
+	if (cfg_dry_run)
+		return;
 
 	add_accept(&ring, fd);
 
@@ -383,7 +425,7 @@ static void parse_opts(int argc, char **argv)
 		usage(argv[0]);
 	cfg_payload_len = max_payload_len;
 
-	while ((c = getopt(argc, argv, "sch:p:l:i:q:o:z:")) != -1) {
+	while ((c = getopt(argc, argv, "sch:p:l:i:q:o:z:x:d")) != -1) {
 		switch (c) {
 		case 's':
 			if (cfg_client)
@@ -418,6 +460,12 @@ static void parse_opts(int argc, char **argv)
 		case 'z':
 			cfg_send_size = strtoul(optarg, NULL, 0);
 			break;
+		case 'x':
+			cfg_rx_buf_len = page_size * strtoul(optarg, NULL, 0);
+			break;
+		case 'd':
+			cfg_dry_run = true;
+			break;
 		}
 	}
 
diff --git a/tools/testing/selftests/drivers/net/hw/iou-zcrx.py b/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
index 712c806508b5..83061b27f2f2 100755
--- a/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
+++ b/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
@@ -7,6 +7,7 @@ from lib.py import ksft_run, ksft_exit, KsftSkipEx
 from lib.py import NetDrvEpEnv
 from lib.py import bkg, cmd, defer, ethtool, rand_port, wait_port_listen
 
+SKIP_CODE = 42
 
 def _get_current_settings(cfg):
     output = ethtool(f"-g {cfg.ifname}", json=True)[0]
@@ -132,6 +133,42 @@ def test_zcrx_rss(cfg) -> None:
         cmd(tx_cmd, host=cfg.remote)
 
 
+def test_zcrx_large_chunks(cfg) -> None:
+    cfg.require_ipver('6')
+
+    combined_chans = _get_combined_channels(cfg)
+    if combined_chans < 2:
+        raise KsftSkipEx('at least 2 combined channels required')
+    (rx_ring, hds_thresh) = _get_current_settings(cfg)
+    port = rand_port()
+
+    ethtool(f"-G {cfg.ifname} tcp-data-split on")
+    defer(ethtool, f"-G {cfg.ifname} tcp-data-split auto")
+
+    ethtool(f"-G {cfg.ifname} hds-thresh 0")
+    defer(ethtool, f"-G {cfg.ifname} hds-thresh {hds_thresh}")
+
+    ethtool(f"-G {cfg.ifname} rx 64")
+    defer(ethtool, f"-G {cfg.ifname} rx {rx_ring}")
+
+    ethtool(f"-X {cfg.ifname} equal {combined_chans - 1}")
+    defer(ethtool, f"-X {cfg.ifname} default")
+
+    flow_rule_id = _set_flow_rule(cfg, port, combined_chans - 1)
+    defer(ethtool, f"-N {cfg.ifname} delete {flow_rule_id}")
+
+    rx_cmd = f"{cfg.bin_local} -s -p {port} -i {cfg.ifname} -q {combined_chans - 1} -x 2"
+    tx_cmd = f"{cfg.bin_remote} -c -h {cfg.addr_v['6']} -p {port} -l 12840"
+
+    probe = cmd(rx_cmd + " -d", fail=False)
+    if probe.ret == SKIP_CODE:
+        raise KsftSkipEx(probe.stdout)
+
+    with bkg(rx_cmd, exit_wait=True):
+        wait_port_listen(port, proto="tcp")
+        cmd(tx_cmd, host=cfg.remote)
+
+
 def main() -> None:
     with NetDrvEpEnv(__file__) as cfg:
         cfg.bin_local = path.abspath(path.dirname(__file__) + "/../../../drivers/net/hw/iou-zcrx")
-- 
2.52.0
From nobody Fri Apr 17 06:35:59 2026
From: Pavel Begunkov
To: netdev@vger.kernel.org
Subject: [PATCH net-next v8 9/9] io_uring/zcrx: document area chunking parameter
Date: Fri, 9 Jan 2026 11:28:48 +0000
Message-ID: <65585c411f066a0565880ef0a9843e244d511bcf.1767819709.git.asml.silence@gmail.com>

struct io_uring_zcrx_ifq_reg::rx_buf_len is used as a hint telling the
kernel which buffer size it should use. Document the API and its
limitations.

Signed-off-by: Pavel Begunkov
---
 Documentation/networking/iou-zcrx.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/Documentation/networking/iou-zcrx.rst b/Documentation/networking/iou-zcrx.rst
index 54a72e172bdc..7f3f4b2e6cf2 100644
--- a/Documentation/networking/iou-zcrx.rst
+++ b/Documentation/networking/iou-zcrx.rst
@@ -196,6 +196,26 @@ Return buffers back to the kernel to be used again::
     rqe->len = cqe->res;
     IO_URING_WRITE_ONCE(*refill_ring.ktail, ++refill_ring.rq_tail);
 
+Area chunking
+-------------
+
+zcrx splits the memory area into fixed-length physically contiguous chunks.
+This limits the maximum buffer size returned in a single io_uring CQE. Users
+can provide a hint to the kernel to use larger chunks by setting the
+``rx_buf_len`` field of ``struct io_uring_zcrx_ifq_reg`` to the desired length
+during registration. If this field is set to zero, the kernel defaults to
+the system page size.
+
+To use larger sizes, the memory area must be backed by physically contiguous
+ranges whose sizes are multiples of ``rx_buf_len``. It also requires kernel
+and hardware support. If registration fails, users are generally expected to
+fall back to defaults by setting ``rx_buf_len`` to zero.
+
+Larger chunks don't give any additional guarantees about buffer sizes returned
+in CQEs, and they can vary depending on many factors like traffic pattern,
+hardware offload, etc. It doesn't require any application changes beyond zcrx
+registration.
+
 Testing
 =======
 
-- 
2.52.0