Since the tagged commit, ice stopped respecting Rx buffer length
passed from VFs.
At that point, the buffer length was hardcoded in ice, so VFs still
worked up to a point (until, for example, a VF wanted an MTU
larger than its PF's).
The next commit 93f53db9f9dc ("ice: switch to Page Pool") broke
Rx on VFs completely since ice started accounting per-queue buffer
lengths again, but now VF queues always had their length zeroed, as
ice was already ignoring what iavf was passing to it.
Restore the line that initializes the buffer length on VF queues
based on the virtchnl messages.
Fixes: 3a4f419f7509 ("ice: drop page splitting and recycling")
Reported-by: Jakub Slepecki <jakub.slepecki@intel.com>
Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
I'd like this to go directly to net-next to quickly unbreak VFs
(the related commits are not in the mainline yet).
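
For reviewers without the tree at hand, here is a minimal user-space
sketch of what the restored assignment does. It is an illustration only,
not the ice code: every demo_* name and the two bound macros are made up
for this example, while databuffer_size corresponds to the field the VF
fills in its VIRTCHNL_OP_CONFIG_VSI_QUEUES message (qpi->rxq.databuffer_size
in the hunk below).

/*
 * Illustrative demo only -- not the kernel code. Everything named
 * demo_* and both DEMO_* bounds are invented for this sketch; only
 * databuffer_size mirrors the virtchnl field the VF passes.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_RX_BUF_LEN_MIN	1024u
#define DEMO_RX_BUF_LEN_MAX	((16u * 1024u) - 128u)

struct demo_vf_rxq_cfg {
	uint32_t databuffer_size;	/* Rx buffer length requested by the VF */
};

struct demo_rx_ring {
	uint32_t rx_buf_len;		/* length the PF actually programs */
};

/* Validate the VF-provided length, then apply it to the ring. */
static bool demo_cfg_vf_rxq(const struct demo_vf_rxq_cfg *cfg,
			    struct demo_rx_ring *ring)
{
	if (cfg->databuffer_size &&
	    (cfg->databuffer_size > DEMO_RX_BUF_LEN_MAX ||
	     cfg->databuffer_size < DEMO_RX_BUF_LEN_MIN))
		return false;

	/* The line this patch restores: honour what the VF asked for. */
	ring->rx_buf_len = cfg->databuffer_size;

	return true;
}

int main(void)
{
	struct demo_vf_rxq_cfg cfg = { .databuffer_size = 2048 };
	struct demo_rx_ring ring = { 0 };

	if (demo_cfg_vf_rxq(&cfg, &ring))
		printf("rx_buf_len = %u\n", (unsigned int)ring.rx_buf_len);

	return 0;
}

Compiled with any C compiler, this prints "rx_buf_len = 2048". With the
regression, the equivalent assignment is simply missing, so VF rings keep
a zero buffer length, which is what killed Rx once the Page Pool
conversion made buffer lengths per-queue.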
---
drivers/net/ethernet/intel/ice/virt/queues.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/intel/ice/virt/queues.c b/drivers/net/ethernet/intel/ice/virt/queues.c
index 7928f4e8e788..f73d5a3e83d4 100644
--- a/drivers/net/ethernet/intel/ice/virt/queues.c
+++ b/drivers/net/ethernet/intel/ice/virt/queues.c
@@ -842,6 +842,9 @@ int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
(qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||
qpi->rxq.databuffer_size < 1024))
goto error_param;
+
+ ring->rx_buf_len = qpi->rxq.databuffer_size;
+
if (qpi->rxq.max_pkt_size > max_frame_size ||
qpi->rxq.max_pkt_size < 64)
goto error_param;
--
2.51.1
Tested-by: Jakub Slepecki <jakub.slepecki@intel.com>
As expected, the issue reproduced with commit 53ffcce6fe91 ("ixd: add
devlink support", 2025-11-17). Applying this patch on top of that commit
allows VFs to receive packets. Network configuration used:
ip netns add $pf_netns
ip l set $pf netns $pf_netns
ip netns exec $pf_netns ip l set lo up
ip netns exec $pf_netns ip l set $pf address $pf_mac up
ip netns exec $pf_netns ip a add 10.0.0.1/24 dev $pf
ip netns add $vf0_netns
ip l set $vf0 netns $vf0_netns
ip netns exec $vf0_netns ip l set lo up
ip netns exec $vf0_netns ip l set $vf0 up
ip netns exec $vf0_netns ip a add 10.0.0.2/24 dev $vf0
ip netns add $vf1_netns
ip l set $vf1 netns $vf1_netns
ip netns exec $vf1_netns ip l set lo up
ip netns exec $vf1_netns ip l set $vf1 up
ip netns exec $vf1_netns ip a add 10.0.0.3/24 dev $vf1
Assume all variables are known and network namespaces are distinct.
An external host was able to successfully ping each of 10.0.0.[123].
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Date: Mon, 24 Nov 2025 18:07:35 +0100
Ooops, missed a tag, sorry...
> Since the tagged commit, ice stopped respecting Rx buffer length
> passed from VFs.
> At that point, the buffer length was hardcoded in ice, so VFs still
> worked up to a point (until, for example, a VF wanted an MTU
> larger than its PF's).
> The next commit 93f53db9f9dc ("ice: switch to Page Pool") broke
> Rx on VFs completely since ice started accounting per-queue buffer
> lengths again, but now VF queues always had their length zeroed, as
> ice was already ignoring what iavf was passing to it.
>
> Restore the line that initializes the buffer length on VF queues
> based on the virtchnl messages.
>
> Fixes: 3a4f419f7509 ("ice: drop page splitting and recycling")
> Reported-by: Jakub Slepecki <jakub.slepecki@intel.com>
Suggested-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
> I'd like this to go directly to net-next to quickly unbreak VFs
> (the related commits are not in the mainline yet).
Thanks,
Olek
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>