Fix an AA deadlock in refill_skbs(), where allocating memory while holding
skb_pool->lock can trigger a recursive lock acquisition attempt.
The deadlock scenario occurs when the system is under severe memory
pressure:
1. refill_skbs() acquires skb_pool->lock (spinlock)
2. alloc_skb() is called while holding the lock
3. Memory allocator fails and calls slab_out_of_memory()
4. This triggers printk() for the OOM warning
5. The console output path (netconsole) calls netpoll_send_udp()
6. netpoll_send_udp() attempts to acquire the same skb_pool->lock
7. Deadlock: the lock is already held by the same CPU
Call stack:
  refill_skbs()
    spin_lock_irqsave(&skb_pool->lock)    <- lock acquired
    __alloc_skb()
      kmem_cache_alloc_node_noprof()
        slab_out_of_memory()
          printk()
            console_flush_all()
              netpoll_send_udp()
                skb_dequeue()
                  spin_lock_irqsave()     <- deadlock attempt
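For reference, the pre-patch code (reconstructed from the hunk below)
issues the allocation with the lock held:

        spin_lock_irqsave(&skb_pool->lock, flags);
        while (skb_pool->qlen < MAX_SKBS) {
                /* GFP_ATOMIC allocation with skb_pool->lock held; when
                 * it fails, the OOM warning is printed, and with
                 * netconsole registered the printk path re-enters
                 * netpoll and tries to take the same lock. */
                skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC);
                if (!skb)
                        break;

                __skb_queue_tail(skb_pool, skb);
        }
        spin_unlock_irqrestore(&skb_pool->lock, flags);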
Refactor refill_skbs() to never allocate memory while holding
the spinlock.
Fixes: 1da177e4c3f41 ("Linux-2.6.12-rc2")
Signed-off-by: Breno Leitao <leitao@debian.org>
---
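For easier review, the resulting function in full; the local variable
declarations above the hunk are inferred from context and are not part
of this diff:

static void refill_skbs(struct netpoll *np)
{
        struct sk_buff_head *skb_pool;
        struct sk_buff *skb;
        unsigned long flags;

        skb_pool = &np->skb_pool;

        while (1) {
                /* Peek at the pool size; never hold the lock across an
                 * allocation. */
                spin_lock_irqsave(&skb_pool->lock, flags);
                if (skb_pool->qlen >= MAX_SKBS)
                        goto unlock;
                spin_unlock_irqrestore(&skb_pool->lock, flags);

                /* Allocate with no locks held: an OOM printk can now go
                 * through netconsole and take skb_pool->lock safely. */
                skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC);
                if (!skb)
                        return;

                /* Re-check under the lock: another CPU may have refilled
                 * the pool while we were allocating (TOCTOU). */
                spin_lock_irqsave(&skb_pool->lock, flags);
                if (skb_pool->qlen >= MAX_SKBS)
                        goto discard;
                __skb_queue_tail(skb_pool, skb);
                spin_unlock_irqrestore(&skb_pool->lock, flags);
        }

discard:
        dev_kfree_skb_any(skb);
unlock:
        spin_unlock_irqrestore(&skb_pool->lock, flags);
}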
net/core/netpoll.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 60a05d3b7c249..788cec4d527f8 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -232,14 +232,26 @@ static void refill_skbs(struct netpoll *np)
 
         skb_pool = &np->skb_pool;
 
-        spin_lock_irqsave(&skb_pool->lock, flags);
-        while (skb_pool->qlen < MAX_SKBS) {
+        while (1) {
+                spin_lock_irqsave(&skb_pool->lock, flags);
+                if (skb_pool->qlen >= MAX_SKBS)
+                        goto unlock;
+                spin_unlock_irqrestore(&skb_pool->lock, flags);
+
                 skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC);
                 if (!skb)
-                        break;
+                        return;
 
+                spin_lock_irqsave(&skb_pool->lock, flags);
+                if (skb_pool->qlen >= MAX_SKBS)
+                        /* Discard if len got increased (TOCTOU) */
+                        goto discard;
                 __skb_queue_tail(skb_pool, skb);
+                spin_unlock_irqrestore(&skb_pool->lock, flags);
         }
+
+discard:
+        dev_kfree_skb_any(skb);
+unlock:
         spin_unlock_irqrestore(&skb_pool->lock, flags);
 }
---
base-commit: 0b4b77eff5f8cd9be062783a1c1e198d46d0a753
change-id: 20251013-fix_netpoll_aa-c991ac5f2138
Best regards,
--
Breno Leitao <leitao@debian.org>
On Mon, Oct 13, 2025 at 02:42:29AM -0700, Breno Leitao wrote:
> diff --git a/net/core/netpoll.c b/net/core/netpoll.c
> index 60a05d3b7c249..788cec4d527f8 100644
> --- a/net/core/netpoll.c
> +++ b/net/core/netpoll.c
> @@ -232,14 +232,26 @@ static void refill_skbs(struct netpoll *np)
> 
>          skb_pool = &np->skb_pool;
> 
> -        spin_lock_irqsave(&skb_pool->lock, flags);
> -        while (skb_pool->qlen < MAX_SKBS) {
> +        while (1) {
> +                spin_lock_irqsave(&skb_pool->lock, flags);
> +                if (skb_pool->qlen >= MAX_SKBS)
> +                        goto unlock;
> +                spin_unlock_irqrestore(&skb_pool->lock, flags);
> +
>                  skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC);
>                  if (!skb)
> -                        break;
> +                        return;
> 
> +                spin_lock_irqsave(&skb_pool->lock, flags);
> +                if (skb_pool->qlen >= MAX_SKBS)
> +                        /* Discard if len got increased (TOCTOU) */
> +                        goto discard;
>                  __skb_queue_tail(skb_pool, skb);
> +                spin_unlock_irqrestore(&skb_pool->lock, flags);
>          }
We probably want to return here instead, as Rik van Riel pointed out
offline.
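Something along these lines, i.e. free the skb and return straight from
the TOCTOU branch instead of going through the shared discard/unlock
labels (untested sketch of my reading of the suggestion):

        spin_lock_irqsave(&skb_pool->lock, flags);
        if (skb_pool->qlen >= MAX_SKBS) {
                /* Pool was refilled while we allocated: drop ours, stop. */
                spin_unlock_irqrestore(&skb_pool->lock, flags);
                kfree_skb(skb);
                return;
        }
        __skb_queue_tail(skb_pool, skb);
        spin_unlock_irqrestore(&skb_pool->lock, flags);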
If there are no more concerns, I will wait out the 24-hour period and
send a v2.