Subject: [PATCH net V1 1/3] veth: enable dev_watchdog for detecting stalled TXQs
From: Jesper Dangaard Brouer
To: netdev@vger.kernel.org, makita.toshiaki@lab.ntt.co.jp
Cc: Jesper Dangaard Brouer, Eric Dumazet, "David S. Miller", Jakub Kicinski,
    Paolo Abeni, ihor.solodrai@linux.dev, toshiaki.makita1@gmail.com,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@cloudflare.com
Date: Thu, 23 Oct 2025 16:59:31 +0200
Message-ID: <176123157173.2281302.7040578942230212638.stgit@firesoul>
In-Reply-To: <176123150256.2281302.7000617032469740443.stgit@firesoul>

The changes introduced in commit dc82a33297fc ("veth: apply qdisc
backpressure on full ptr_ring to reduce TX drops") have been found to
cause a race condition in production environments. Under specific
circumstances, observed exclusively on ARM64 (aarch64) systems with
Ampere Altra Max CPUs, a transmit queue (TXQ) can become permanently
stalled. The race leaves the TXQ in the QUEUE_STATE_DRV_XOFF state
without a corresponding queue wake-up, preventing the attached qdisc
from dequeueing packets and halting the network link.
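For readers less familiar with the TXQ flow-control machinery referenced
above, the helpers involved behave roughly as sketched below. This is a
hedged paraphrase of include/linux/netdevice.h, simplified for
illustration; the sketch_* names are placeholders, not kernel symbols.

static inline void sketch_tx_stop_queue(struct netdev_queue *txq)
{
	/* Tell the stack to stop calling ndo_start_xmit() on this queue */
	set_bit(__QUEUE_STATE_DRV_XOFF, &txq->state);
}

static inline void sketch_tx_wake_queue(struct netdev_queue *txq)
{
	/* Clearing the bit re-enables transmit; the real helper also
	 * reschedules the qdisc so already-queued packets get flushed.
	 */
	if (test_and_clear_bit(__QUEUE_STATE_DRV_XOFF, &txq->state))
		__netif_schedule(txq->qdisc);
}

Once the XOFF bit is set, only a wake clears it; if the party that was
supposed to issue that wake never runs, the queue stays stopped, which
is exactly the failure mode described above.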
As a first step towards resolving this issue, this patch introduces a
failsafe mechanism. It enables the net device watchdog by setting a
timeout value and implements the .ndo_tx_timeout callback. If a TXQ
stalls, the watchdog triggers veth_tx_timeout(), which logs a warning
and calls netif_tx_wake_queue() to unstall the queue and allow traffic
to resume. The log messages look like this:

 veth42: NETDEV WATCHDOG: CPU: 34: transmit queue 0 timed out 5393 ms
 veth42: veth backpressure stalled(n:1) TXQ(0) re-enable

This provides a necessary recovery mechanism while the underlying race
condition is investigated further. Subsequent patches will address the
root cause and add more robust state handling in ndo_open/ndo_stop.

Fixes: dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to reduce TX drops")
Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/veth.c |   16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index a3046142cb8e..7b1a9805b270 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -959,8 +959,10 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
 	rq->stats.vs.xdp_packets += done;
 	u64_stats_update_end(&rq->stats.syncp);
 
-	if (peer_txq && unlikely(netif_tx_queue_stopped(peer_txq)))
+	if (peer_txq && unlikely(netif_tx_queue_stopped(peer_txq))) {
+		txq_trans_cond_update(peer_txq);
 		netif_tx_wake_queue(peer_txq);
+	}
 
 	return done;
 }
@@ -1373,6 +1375,16 @@ static int veth_set_channels(struct net_device *dev,
 	goto out;
 }
 
+static void veth_tx_timeout(struct net_device *dev, unsigned int txqueue)
+{
+	struct netdev_queue *txq = netdev_get_tx_queue(dev, txqueue);
+
+	netdev_err(dev, "veth backpressure stalled(n:%ld) TXQ(%u) re-enable\n",
+		   atomic_long_read(&txq->trans_timeout), txqueue);
+
+	netif_tx_wake_queue(txq);
+}
+
 static int veth_open(struct net_device *dev)
 {
 	struct veth_priv *priv = netdev_priv(dev);
@@ -1711,6 +1723,7 @@ static const struct net_device_ops veth_netdev_ops = {
 	.ndo_bpf		= veth_xdp,
 	.ndo_xdp_xmit		= veth_ndo_xdp_xmit,
 	.ndo_get_peer_dev	= veth_peer_dev,
+	.ndo_tx_timeout		= veth_tx_timeout,
 };
 
 static const struct xdp_metadata_ops veth_xdp_metadata_ops = {
@@ -1749,6 +1762,7 @@ static void veth_setup(struct net_device *dev)
 	dev->priv_destructor = veth_dev_free;
 	dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;
 	dev->max_mtu = ETH_MAX_MTU;
+	dev->watchdog_timeo = msecs_to_jiffies(5000);
 
 	dev->hw_features = VETH_FEATURES;
 	dev->hw_enc_features = VETH_FEATURES;
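The failsafe leans on the stack's generic TX watchdog. As a rough,
hedged sketch (paraphrasing dev_watchdog() in net/sched/sch_generic.c,
not the verbatim code), the per-queue check that ends up invoking
veth_tx_timeout() looks roughly like this:

static void sketch_watchdog_check(struct net_device *dev, unsigned int i)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
	unsigned long trans_start = READ_ONCE(txq->trans_start);

	/* Stalled == driver stopped the queue and no transmit has been
	 * recorded for longer than dev->watchdog_timeo jiffies.
	 */
	if (netif_xmit_stopped(txq) &&
	    time_after(jiffies, trans_start + dev->watchdog_timeo)) {
		atomic_long_inc(&txq->trans_timeout);
		dev->netdev_ops->ndo_tx_timeout(dev, i);
	}
}

This also appears to be why the first hunk above adds
txq_trans_cond_update() before netif_tx_wake_queue(): refreshing
trans_start when the consumer wakes the peer TXQ keeps the watchdog
from firing on a queue that has just recovered.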
Subject: [PATCH net V1 2/3] veth: stop and start all TX queues in netdev down/up
From: Jesper Dangaard Brouer
To: netdev@vger.kernel.org, makita.toshiaki@lab.ntt.co.jp
Cc: Jesper Dangaard Brouer, Eric Dumazet, "David S. Miller", Jakub Kicinski,
    Paolo Abeni, ihor.solodrai@linux.dev, toshiaki.makita1@gmail.com,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@cloudflare.com
Date: Thu, 23 Oct 2025 16:59:37 +0200
Message-ID: <176123157775.2281302.5972243809904783041.stgit@firesoul>
In-Reply-To: <176123150256.2281302.7000617032469740443.stgit@firesoul>

The veth driver started manipulating TXQ states in commit dc82a33297fc
("veth: apply qdisc backpressure on full ptr_ring to reduce TX drops").
Other drivers that manipulate TXQ states take care of stopping and
starting the TXQs in their NDOs. Do the same for veth by adding this to
.ndo_open and .ndo_stop.
Fixes: dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to reduce TX drops")
Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/veth.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 7b1a9805b270..3976ddda5fb8 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1404,6 +1404,9 @@ static int veth_open(struct net_device *dev)
 			return err;
 	}
 
+	netif_tx_start_all_queues(dev);
+	netif_tx_start_all_queues(peer);
+
 	if (peer->flags & IFF_UP) {
 		netif_carrier_on(dev);
 		netif_carrier_on(peer);
@@ -1423,6 +1426,10 @@ static int veth_close(struct net_device *dev)
 	if (peer)
 		netif_carrier_off(peer);
 
+	netif_tx_stop_all_queues(dev);
+	if (peer)
+		netif_tx_stop_all_queues(peer);
+
 	if (priv->_xdp_prog)
 		veth_disable_xdp(dev);
 	else if (veth_gro_requested(dev))
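For reference, the two helpers used above are thin wrappers that walk
every TX queue; a hedged paraphrase of include/linux/netdevice.h
(simplified, with placeholder sketch_* names):

static inline void sketch_tx_start_all_queues(struct net_device *dev)
{
	unsigned int i;

	for (i = 0; i < dev->num_tx_queues; i++)
		netif_tx_start_queue(netdev_get_tx_queue(dev, i));
}

static inline void sketch_tx_stop_all_queues(struct net_device *dev)
{
	unsigned int i;

	for (i = 0; i < dev->num_tx_queues; i++)
		netif_tx_stop_queue(netdev_get_tx_queue(dev, i));
}

Starting all queues in .ndo_open also clears any XOFF state a queue may
have been left in while the device was down.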
Subject: [PATCH net V1 3/3] veth: more robust handling of race to avoid txq getting stuck
From: Jesper Dangaard Brouer
To: netdev@vger.kernel.org, makita.toshiaki@lab.ntt.co.jp
Cc: Jesper Dangaard Brouer, Eric Dumazet, "David S. Miller", Jakub Kicinski,
    Paolo Abeni, ihor.solodrai@linux.dev, toshiaki.makita1@gmail.com,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@cloudflare.com
Date: Thu, 23 Oct 2025 16:59:44 +0200
Message-ID: <176123158453.2281302.11061466460805684097.stgit@firesoul>
In-Reply-To: <176123150256.2281302.7000617032469740443.stgit@firesoul>

Commit dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to
reduce TX drops") introduced a race condition that can lead to a
permanently stalled TXQ. This was observed in production on ARM64
systems (Ampere Altra Max).

The race occurs in veth_xmit(). The producer observes a full ptr_ring
and stops the queue (netif_tx_stop_queue()). The subsequent conditional
logic, intended to re-wake the queue if the consumer had just emptied it
(if (__ptr_ring_empty(...)) netif_tx_wake_queue()), can fail. This leads
to a "lost wakeup" where the TXQ remains stopped (QUEUE_STATE_DRV_XOFF)
and traffic halts.

This failure is caused by an incorrect use of the __ptr_ring_empty() API
from the producer side. As noted in kernel comments, this check is not
guaranteed to be correct if a consumer is operating on another CPU. The
empty test is based on ptr_ring->consumer_head, making it reliable only
for the consumer. Using this check from the producer side is
fundamentally racy.

This patch fixes the race by adopting the more robust logic from an
earlier version (V4) of the patchset, which always flushed the peer:

(1) In veth_xmit(), the racy conditional wake-up logic and its memory
barrier are removed. Instead, after stopping the queue, we
unconditionally call __veth_xdp_flush(rq). This guarantees that the
NAPI consumer is scheduled, making it solely responsible for re-waking
the TXQ.

(2) On the consumer side, the logic for waking the peer TXQ is
centralized. It is moved out of veth_xdp_rcv() (which processes a
batch) and placed at the end of veth_poll(). This ensures
netif_tx_wake_queue() is called once per complete NAPI poll cycle.

(3) Finally, the NAPI completion check in veth_poll() is updated. If
NAPI is about to complete (napi_complete_done()), it now also checks
whether the peer TXQ is stopped. If the ring is empty but the peer TXQ
is stopped, NAPI reschedules itself. This prevents a new race where the
producer stops the queue just as the consumer is finishing its poll,
ensuring the wakeup is not missed.

Fixes: dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to reduce TX drops")
Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/veth.c |   42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)
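The consumer_head argument above can be seen directly in the ring
helper. A hedged paraphrase of __ptr_ring_empty() from
include/linux/ptr_ring.h (simplified; see the upstream header and its
locking comments for the authoritative version):

static inline bool sketch_ptr_ring_empty(struct ptr_ring *r)
{
	/* Emptiness is judged by whether the slot at consumer_head still
	 * holds a pointer -- state that only the consumer advances -- so
	 * the answer is only dependable when asked from the consumer side.
	 */
	if (likely(r->size))
		return !r->queue[READ_ONCE(r->consumer_head)];
	return true;
}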
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 3976ddda5fb8..1d70377481eb 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -392,14 +392,12 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
 		}
 		/* Restore Eth hdr pulled by dev_forward_skb/eth_type_trans */
 		__skb_push(skb, ETH_HLEN);
-		/* Depend on prior success packets started NAPI consumer via
-		 * __veth_xdp_flush(). Cancel TXQ stop if consumer stopped,
-		 * paired with empty check in veth_poll().
-		 */
 		netif_tx_stop_queue(txq);
-		smp_mb__after_atomic();
-		if (unlikely(__ptr_ring_empty(&rq->xdp_ring)))
-			netif_tx_wake_queue(txq);
+		/* Handle race: Makes sure NAPI peer consumer runs. Consumer is
+		 * responsible for starting txq again, until then ndo_start_xmit
+		 * (this function) will not be invoked by the netstack again.
+		 */
+		__veth_xdp_flush(rq);
 		break;
 	case NET_RX_DROP: /* same as NET_XMIT_DROP */
 drop:
@@ -900,17 +898,9 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
 			struct veth_xdp_tx_bq *bq,
 			struct veth_stats *stats)
 {
-	struct veth_priv *priv = netdev_priv(rq->dev);
-	int queue_idx = rq->xdp_rxq.queue_index;
-	struct netdev_queue *peer_txq;
-	struct net_device *peer_dev;
 	int i, done = 0, n_xdpf = 0;
 	void *xdpf[VETH_XDP_BATCH];
 
-	/* NAPI functions as RCU section */
-	peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held());
-	peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL;
-
 	for (i = 0; i < budget; i++) {
 		void *ptr = __ptr_ring_consume(&rq->xdp_ring);
 
@@ -959,11 +949,6 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
 	rq->stats.vs.xdp_packets += done;
 	u64_stats_update_end(&rq->stats.syncp);
 
-	if (peer_txq && unlikely(netif_tx_queue_stopped(peer_txq))) {
-		txq_trans_cond_update(peer_txq);
-		netif_tx_wake_queue(peer_txq);
-	}
-
 	return done;
 }
 
@@ -971,12 +956,20 @@ static int veth_poll(struct napi_struct *napi, int budget)
 {
 	struct veth_rq *rq =
 		container_of(napi, struct veth_rq, xdp_napi);
+	struct veth_priv *priv = netdev_priv(rq->dev);
+	int queue_idx = rq->xdp_rxq.queue_index;
+	struct netdev_queue *peer_txq;
 	struct veth_stats stats = {};
+	struct net_device *peer_dev;
 	struct veth_xdp_tx_bq bq;
 	int done;
 
 	bq.count = 0;
 
+	/* NAPI functions as RCU section */
+	peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held());
+	peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL;
+
 	xdp_set_return_frame_no_direct();
 	done = veth_xdp_rcv(rq, budget, &bq, &stats);
 
@@ -986,7 +979,8 @@ static int veth_poll(struct napi_struct *napi, int budget)
 	if (done < budget && napi_complete_done(napi, done)) {
 		/* Write rx_notify_masked before reading ptr_ring */
 		smp_store_mb(rq->rx_notify_masked, false);
-		if (unlikely(!__ptr_ring_empty(&rq->xdp_ring))) {
+		if (unlikely(!__ptr_ring_empty(&rq->xdp_ring) ||
+			     (peer_txq && netif_tx_queue_stopped(peer_txq)))) {
 			if (napi_schedule_prep(&rq->xdp_napi)) {
 				WRITE_ONCE(rq->rx_notify_masked, true);
 				__napi_schedule(&rq->xdp_napi);
@@ -998,6 +992,12 @@ static int veth_poll(struct napi_struct *napi, int budget)
 	veth_xdp_flush(rq, &bq);
 	xdp_clear_return_frame_no_direct();
 
+	/* Release backpressure per NAPI poll */
+	if (peer_txq && netif_tx_queue_stopped(peer_txq)) {
+		txq_trans_cond_update(peer_txq);
+		netif_tx_wake_queue(peer_txq);
+	}
+
 	return done;
 }
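Why the unconditional __veth_xdp_flush(rq) in veth_xmit() is enough to
guarantee a consumer run can be seen from the existing helper, shown
here as a hedged paraphrase of drivers/net/veth.c (not the verbatim
source):

static void sketch_veth_xdp_flush(struct veth_rq *rq)
{
	/* Write ptr_ring before reading rx_notify_masked */
	smp_mb();
	if (!READ_ONCE(rq->rx_notify_masked) &&
	    napi_schedule_prep(&rq->xdp_napi)) {
		WRITE_ONCE(rq->rx_notify_masked, true);
		__napi_schedule(&rq->xdp_napi);
	}
}

Either NAPI gets scheduled here, or rx_notify_masked indicates a poll
is already pending or running; in the latter case the smp_store_mb()
and the new peer-TXQ recheck in veth_poll() above are what ensure the
stopped queue is noticed before NAPI completes.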