From nobody Sat Feb 7 23:10:58 2026
From: Bui Quang Minh <minhquangbui99@gmail.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
 Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
 virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Bui Quang Minh <minhquangbui99@gmail.com>
Subject: [PATCH net 1/3] virtio-net: make refill work a per receive queue work
Date: Tue, 23 Dec 2025 22:25:31 +0700
Message-ID: <20251223152533.24364-2-minhquangbui99@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20251223152533.24364-1-minhquangbui99@gmail.com>
References: <20251223152533.24364-1-minhquangbui99@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, the refill work is a single global delayed work shared by all
the receive queues. This commit makes the refill work per receive queue
so that each queue can be managed separately and further mistakes
avoided. It also spares a successfully refilled queue the napi_disable()
that the global delayed refill work used to apply to every queue.

Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
Reported-by: Paolo Abeni
---
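Note (illustration only, not driver code): the sketch below shows the
general shape this patch moves to -- a delayed work embedded in each
queue structure, with the owning queue recovered via container_of() in
the handler, so every queue can be refilled and rescheduled on its own.
All example_* names are hypothetical.

#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/jiffies.h>

struct example_queue {
	spinlock_t lock;		/* protects 'enabled' */
	bool enabled;			/* may this queue's work be scheduled? */
	struct delayed_work work;	/* one work item per queue */
};

/* Hypothetical stand-in for the driver's refill attempt; returns true
 * when the ring could be refilled.
 */
static bool example_try_fill(struct example_queue *q)
{
	return true;
}

static void example_work(struct work_struct *work)
{
	/* The handler receives the embedded work_struct, which lives at
	 * 'work.work' inside the delayed_work, so container_of() yields
	 * exactly the queue that scheduled us.
	 */
	struct example_queue *q =
		container_of(work, struct example_queue, work.work);

	/* Retry later if this queue is still empty; the other queues
	 * are completely unaffected.
	 */
	if (!example_try_fill(q))
		schedule_delayed_work(&q->work, HZ / 2);
}

static void example_queue_init(struct example_queue *q)
{
	spin_lock_init(&q->lock);
	q->enabled = false;
	INIT_DELAYED_WORK(&q->work, example_work);
}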
 drivers/net/virtio_net.c | 155 ++++++++++++++++++---------------------
 1 file changed, 72 insertions(+), 83 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 1bb3aeca66c6..63126e490bda 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -379,6 +379,15 @@ struct receive_queue {
 	struct xdp_rxq_info xsk_rxq_info;
 
 	struct xdp_buff **xsk_buffs;
+
+	/* Is delayed refill enabled? */
+	bool refill_enabled;
+
+	/* The lock to synchronize the access to refill_enabled */
+	spinlock_t refill_lock;
+
+	/* Work struct for delayed refilling if we run low on memory. */
+	struct delayed_work refill;
 };
 
 #define VIRTIO_NET_RSS_MAX_KEY_SIZE 40
@@ -441,9 +450,6 @@ struct virtnet_info {
 	/* Packet virtio header size */
 	u8 hdr_len;
 
-	/* Work struct for delayed refilling if we run low on memory. */
-	struct delayed_work refill;
-
 	/* UDP tunnel support */
 	bool tx_tnl;
 
@@ -451,12 +457,6 @@ struct virtnet_info {
 
 	bool rx_tnl_csum;
 
-	/* Is delayed refill enabled? */
-	bool refill_enabled;
-
-	/* The lock to synchronize the access to refill_enabled */
-	spinlock_t refill_lock;
-
 	/* Work struct for config space updates */
 	struct work_struct config_work;
 
@@ -720,18 +720,18 @@ static void virtnet_rq_free_buf(struct virtnet_info *vi,
 	put_page(virt_to_head_page(buf));
 }
 
-static void enable_delayed_refill(struct virtnet_info *vi)
+static void enable_delayed_refill(struct receive_queue *rq)
 {
-	spin_lock_bh(&vi->refill_lock);
-	vi->refill_enabled = true;
-	spin_unlock_bh(&vi->refill_lock);
+	spin_lock_bh(&rq->refill_lock);
+	rq->refill_enabled = true;
+	spin_unlock_bh(&rq->refill_lock);
 }
 
-static void disable_delayed_refill(struct virtnet_info *vi)
+static void disable_delayed_refill(struct receive_queue *rq)
 {
-	spin_lock_bh(&vi->refill_lock);
-	vi->refill_enabled = false;
-	spin_unlock_bh(&vi->refill_lock);
+	spin_lock_bh(&rq->refill_lock);
+	rq->refill_enabled = false;
+	spin_unlock_bh(&rq->refill_lock);
 }
 
 static void enable_rx_mode_work(struct virtnet_info *vi)
@@ -2950,38 +2950,19 @@ static void virtnet_napi_disable(struct receive_queue *rq)
 
 static void refill_work(struct work_struct *work)
 {
-	struct virtnet_info *vi =
-		container_of(work, struct virtnet_info, refill.work);
+	struct receive_queue *rq =
+		container_of(work, struct receive_queue, refill.work);
 	bool still_empty;
-	int i;
-
-	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		struct receive_queue *rq = &vi->rq[i];
 
-		/*
-		 * When queue API support is added in the future and the call
-		 * below becomes napi_disable_locked, this driver will need to
-		 * be refactored.
-		 *
-		 * One possible solution would be to:
-		 * - cancel refill_work with cancel_delayed_work (note:
-		 *   non-sync)
-		 * - cancel refill_work with cancel_delayed_work_sync in
-		 *   virtnet_remove after the netdev is unregistered
-		 * - wrap all of the work in a lock (perhaps the netdev
-		 *   instance lock)
-		 * - check netif_running() and return early to avoid a race
-		 */
-		napi_disable(&rq->napi);
-		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
-		virtnet_napi_do_enable(rq->vq, &rq->napi);
+	napi_disable(&rq->napi);
+	still_empty = !try_fill_recv(rq->vq->vdev->priv, rq, GFP_KERNEL);
+	virtnet_napi_do_enable(rq->vq, &rq->napi);
 
-		/* In theory, this can happen: if we don't get any buffers in
-		 * we will *never* try to fill again.
-		 */
-		if (still_empty)
-			schedule_delayed_work(&vi->refill, HZ/2);
-	}
+	/* In theory, this can happen: if we don't get any buffers in
+	 * we will *never* try to fill again.
+	 */
+	if (still_empty)
+		schedule_delayed_work(&rq->refill, HZ / 2);
 }
 
 static int virtnet_receive_xsk_bufs(struct virtnet_info *vi,
@@ -3048,10 +3029,10 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 
 	if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {
 		if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
-			spin_lock(&vi->refill_lock);
-			if (vi->refill_enabled)
-				schedule_delayed_work(&vi->refill, 0);
-			spin_unlock(&vi->refill_lock);
+			spin_lock(&rq->refill_lock);
+			if (rq->refill_enabled)
+				schedule_delayed_work(&rq->refill, 0);
+			spin_unlock(&rq->refill_lock);
 		}
 	}
 
@@ -3226,13 +3207,13 @@ static int virtnet_open(struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i, err;
 
-	enable_delayed_refill(vi);
-
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		if (i < vi->curr_queue_pairs)
+		if (i < vi->curr_queue_pairs) {
+			enable_delayed_refill(&vi->rq[i]);
 			/* Make sure we have some buffers: if oom use wq. */
 			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->refill, 0);
+				schedule_delayed_work(&vi->rq[i].refill, 0);
+		}
 
 		err = virtnet_enable_queue_pair(vi, i);
 		if (err < 0)
@@ -3251,10 +3232,9 @@ static int virtnet_open(struct net_device *dev)
 	return 0;
 
 err_enable_qp:
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
-
 	for (i--; i >= 0; i--) {
+		disable_delayed_refill(&vi->rq[i]);
+		cancel_delayed_work_sync(&vi->rq[i].refill);
 		virtnet_disable_queue_pair(vi, i);
 		virtnet_cancel_dim(vi, &vi->rq[i].dim);
 	}
@@ -3447,14 +3427,15 @@ static void virtnet_rx_pause_all(struct virtnet_info *vi)
 {
 	int i;
 
-	/*
-	 * Make sure refill_work does not run concurrently to
-	 * avoid napi_disable race which leads to deadlock.
-	 */
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
-	for (i = 0; i < vi->max_queue_pairs; i++)
+	for (i = 0; i < vi->max_queue_pairs; i++) {
+		/*
+		 * Make sure refill_work does not run concurrently to
+		 * avoid napi_disable race which leads to deadlock.
+		 */
+		disable_delayed_refill(&vi->rq[i]);
+		cancel_delayed_work_sync(&vi->rq[i].refill);
 		__virtnet_rx_pause(vi, &vi->rq[i]);
+	}
 }
 
 static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
@@ -3463,8 +3444,8 @@ static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
 	 * Make sure refill_work does not run concurrently to
 	 * avoid napi_disable race which leads to deadlock.
 	 */
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
+	disable_delayed_refill(rq);
+	cancel_delayed_work_sync(&rq->refill);
 	__virtnet_rx_pause(vi, rq);
 }
 
@@ -3481,25 +3462,26 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
 	virtnet_napi_enable(rq);
 
 	if (schedule_refill)
-		schedule_delayed_work(&vi->refill, 0);
+		schedule_delayed_work(&rq->refill, 0);
 }
 
 static void virtnet_rx_resume_all(struct virtnet_info *vi)
 {
 	int i;
 
-	enable_delayed_refill(vi);
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		if (i < vi->curr_queue_pairs)
+		if (i < vi->curr_queue_pairs) {
+			enable_delayed_refill(&vi->rq[i]);
 			__virtnet_rx_resume(vi, &vi->rq[i], true);
-		else
+		} else {
 			__virtnet_rx_resume(vi, &vi->rq[i], false);
+		}
 	}
 }
 
 static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
 {
-	enable_delayed_refill(vi);
+	enable_delayed_refill(rq);
 	__virtnet_rx_resume(vi, rq, true);
 }
 
@@ -3830,10 +3812,16 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
 succ:
 	vi->curr_queue_pairs = queue_pairs;
 	/* virtnet_open() will refill when device is going to up. */
-	spin_lock_bh(&vi->refill_lock);
-	if (dev->flags & IFF_UP && vi->refill_enabled)
-		schedule_delayed_work(&vi->refill, 0);
-	spin_unlock_bh(&vi->refill_lock);
+	if (dev->flags & IFF_UP) {
+		int i;
+
+		for (i = 0; i < vi->curr_queue_pairs; i++) {
+			spin_lock_bh(&vi->rq[i].refill_lock);
+			if (vi->rq[i].refill_enabled)
+				schedule_delayed_work(&vi->rq[i].refill, 0);
+			spin_unlock_bh(&vi->rq[i].refill_lock);
+		}
+	}
 
 	return 0;
 }
@@ -3843,10 +3831,6 @@ static int virtnet_close(struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i;
 
-	/* Make sure NAPI doesn't schedule refill work */
-	disable_delayed_refill(vi);
-	/* Make sure refill_work doesn't re-enable napi! */
-	cancel_delayed_work_sync(&vi->refill);
 	/* Prevent the config change callback from changing carrier
 	 * after close
 	 */
@@ -3857,6 +3841,10 @@ static int virtnet_close(struct net_device *dev)
 	cancel_work_sync(&vi->config_work);
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
+		/* Make sure NAPI doesn't schedule refill work */
+		disable_delayed_refill(&vi->rq[i]);
+		/* Make sure refill_work doesn't re-enable napi! */
+		cancel_delayed_work_sync(&vi->rq[i].refill);
 		virtnet_disable_queue_pair(vi, i);
 		virtnet_cancel_dim(vi, &vi->rq[i].dim);
 	}
@@ -5802,7 +5790,6 @@ static int virtnet_restore_up(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
-	enable_delayed_refill(vi);
 	enable_rx_mode_work(vi);
 
 	if (netif_running(vi->dev)) {
@@ -6559,8 +6546,9 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
 	if (!vi->rq)
 		goto err_rq;
 
-	INIT_DELAYED_WORK(&vi->refill, refill_work);
 	for (i = 0; i < vi->max_queue_pairs; i++) {
+		INIT_DELAYED_WORK(&vi->rq[i].refill, refill_work);
+		spin_lock_init(&vi->rq[i].refill_lock);
 		vi->rq[i].pages = NULL;
 		netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
 				      i);
@@ -6901,7 +6889,6 @@ static int virtnet_probe(struct virtio_device *vdev)
 
 	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
 	INIT_WORK(&vi->rx_mode_work, virtnet_rx_mode_work);
-	spin_lock_init(&vi->refill_lock);
 
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) {
 		vi->mergeable_rx_bufs = true;
@@ -7165,7 +7152,9 @@ static int virtnet_probe(struct virtio_device *vdev)
 	net_failover_destroy(vi->failover);
 free_vqs:
 	virtio_reset_device(vdev);
-	cancel_delayed_work_sync(&vi->refill);
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		cancel_delayed_work_sync(&vi->rq[i].refill);
+
 	free_receive_page_frags(vi);
 	virtnet_del_vqs(vi);
free:
-- 
2.43.0

From nobody Sat Feb 7 23:10:58 2026
From: Bui Quang Minh <minhquangbui99@gmail.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
 Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
 virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Bui Quang Minh <minhquangbui99@gmail.com>
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Stanislav Fomichev , virtualization@lists.linux.dev, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Bui Quang Minh Subject: [PATCH net 2/3] virtio-net: ensure rx NAPI is enabled before enabling refill work Date: Tue, 23 Dec 2025 22:25:32 +0700 Message-ID: <20251223152533.24364-3-minhquangbui99@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20251223152533.24364-1-minhquangbui99@gmail.com> References: <20251223152533.24364-1-minhquangbui99@gmail.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Calling napi_disable() on an already disabled napi can cause the deadlock. Because the delayed refill work will call napi_disable(), we must ensure that refill work is only enabled and scheduled after we have enabled the rx queue's NAPI. Signed-off-by: Bui Quang Minh Reported-by: Paolo Abeni --- drivers/net/virtio_net.c | 31 ++++++++++++++++++++++++------- 1 file changed, 24 insertions(+), 7 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 63126e490bda..8016d2b378cf 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -3208,16 +3208,31 @@ static int virtnet_open(struct net_device *dev) int i, err; =20 for (i =3D 0; i < vi->max_queue_pairs; i++) { + bool schedule_refill =3D false; + + /* - We must call try_fill_recv before enabling napi of the same + * receive queue so that it doesn't race with the call in + * virtnet_receive. + * - We must enable and schedule delayed refill work only when + * we have enabled all the receive queue's napi. Otherwise, in + * refill_work, we have a deadlock when calling napi_disable on + * an already disabled napi. + */ if (i < vi->curr_queue_pairs) { - enable_delayed_refill(&vi->rq[i]); /* Make sure we have some buffers: if oom use wq. */ if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL)) - schedule_delayed_work(&vi->rq[i].refill, 0); + schedule_refill =3D true; } =20 err =3D virtnet_enable_queue_pair(vi, i); if (err < 0) goto err_enable_qp; + + if (i < vi->curr_queue_pairs) { + enable_delayed_refill(&vi->rq[i]); + if (schedule_refill) + schedule_delayed_work(&vi->rq[i].refill, 0); + } } =20 if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) { @@ -3456,11 +3471,16 @@ static void __virtnet_rx_resume(struct virtnet_info= *vi, bool running =3D netif_running(vi->dev); bool schedule_refill =3D false; =20 + /* See the comment in virtnet_open for the ordering rule + * of try_fill_recv, receive queue napi_enable and delayed + * refill enable/schedule. 
 drivers/net/virtio_net.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 63126e490bda..8016d2b378cf 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3208,16 +3208,31 @@ static int virtnet_open(struct net_device *dev)
 	int i, err;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
+		bool schedule_refill = false;
+
+		/* - We must call try_fill_recv before enabling napi of the same
+		 *   receive queue so that it doesn't race with the call in
+		 *   virtnet_receive.
+		 * - We must enable and schedule delayed refill work only when
+		 *   we have enabled all the receive queue's napi. Otherwise, in
+		 *   refill_work, we have a deadlock when calling napi_disable on
+		 *   an already disabled napi.
+		 */
 		if (i < vi->curr_queue_pairs) {
-			enable_delayed_refill(&vi->rq[i]);
 			/* Make sure we have some buffers: if oom use wq. */
 			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->rq[i].refill, 0);
+				schedule_refill = true;
 		}
 
 		err = virtnet_enable_queue_pair(vi, i);
 		if (err < 0)
 			goto err_enable_qp;
+
+		if (i < vi->curr_queue_pairs) {
+			enable_delayed_refill(&vi->rq[i]);
+			if (schedule_refill)
+				schedule_delayed_work(&vi->rq[i].refill, 0);
+		}
 	}
 
 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
@@ -3456,11 +3471,16 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
 	bool running = netif_running(vi->dev);
 	bool schedule_refill = false;
 
+	/* See the comment in virtnet_open for the ordering rule
+	 * of try_fill_recv, receive queue napi_enable and delayed
+	 * refill enable/schedule.
+	 */
 	if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
 		schedule_refill = true;
 	if (running)
 		virtnet_napi_enable(rq);
 
+	enable_delayed_refill(rq);
 	if (schedule_refill)
 		schedule_delayed_work(&rq->refill, 0);
 }
@@ -3470,18 +3490,15 @@ static void virtnet_rx_resume_all(struct virtnet_info *vi)
 	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		if (i < vi->curr_queue_pairs) {
-			enable_delayed_refill(&vi->rq[i]);
+		if (i < vi->curr_queue_pairs)
 			__virtnet_rx_resume(vi, &vi->rq[i], true);
-		} else {
+		else
 			__virtnet_rx_resume(vi, &vi->rq[i], false);
-		}
 	}
 }
 
 static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
 {
-	enable_delayed_refill(rq);
 	__virtnet_rx_resume(vi, rq, true);
 }
 
-- 
2.43.0

From nobody Sat Feb 7 23:10:58 2026
From: Bui Quang Minh <minhquangbui99@gmail.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
 Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
 virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Bui Quang Minh <minhquangbui99@gmail.com>
Subject: [PATCH net 3/3] virtio-net: schedule the pending refill work after
 being enabled
Date: Tue, 23 Dec 2025 22:25:33 +0700
Message-ID: <20251223152533.24364-4-minhquangbui99@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20251223152533.24364-1-minhquangbui99@gmail.com>
References: <20251223152533.24364-1-minhquangbui99@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Since enable_delayed_refill() now has to run after napi_enable(),
virtnet_receive() may need to schedule a refill work at a moment when it
is not allowed to. This can leave the receive side stuck: once we run
out of receive buffers, nothing is left to trigger the refill logic. To
handle this case, virtnet_receive() sets the rx queue's refill_pending
flag, and when delayed refill is enabled again, the pending refill work
is scheduled.

Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
Reported-by: Paolo Abeni
---
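Note (illustration only): refill_pending is an instance of a generic
"remember the lost kick" pattern, sketched below with the hypothetical
example_* names from the earlier patches and an assumed 'bool pending'
member. Unlike the patch, this sketch also clears the flag once it has
been replayed; it shows the shape, not the exact driver behavior.

/* Producer side: request the work; if it is currently disabled,
 * record the request under the same lock instead of dropping it.
 */
static void example_request_work(struct example_queue *q)
{
	spin_lock(&q->lock);
	if (q->enabled)
		schedule_delayed_work(&q->work, 0);
	else
		q->pending = true;
	spin_unlock(&q->lock);
}

/* Enable side: re-enable the work and replay any request that arrived
 * while it was disabled, so a kick is never silently lost.
 */
static void example_enable_work(struct example_queue *q, bool kick)
{
	spin_lock_bh(&q->lock);
	q->enabled = true;
	if (q->pending || kick) {
		q->pending = false;
		schedule_delayed_work(&q->work, 0);
	}
	spin_unlock_bh(&q->lock);
}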
 drivers/net/virtio_net.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 8016d2b378cf..ddc62dab2f9a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -383,6 +383,9 @@ struct receive_queue {
 	/* Is delayed refill enabled? */
 	bool refill_enabled;
 
+	/* A refill work needs to be scheduled when delayed refill is enabled */
+	bool refill_pending;
+
 	/* The lock to synchronize the access to refill_enabled */
 	spinlock_t refill_lock;
 
@@ -720,10 +723,13 @@ static void virtnet_rq_free_buf(struct virtnet_info *vi,
 	put_page(virt_to_head_page(buf));
 }
 
-static void enable_delayed_refill(struct receive_queue *rq)
+static void enable_delayed_refill(struct receive_queue *rq,
+				  bool schedule_refill)
 {
 	spin_lock_bh(&rq->refill_lock);
 	rq->refill_enabled = true;
+	if (rq->refill_pending || schedule_refill)
+		schedule_delayed_work(&rq->refill, 0);
 	spin_unlock_bh(&rq->refill_lock);
 }
 
@@ -3032,6 +3038,8 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 			spin_lock(&rq->refill_lock);
 			if (rq->refill_enabled)
 				schedule_delayed_work(&rq->refill, 0);
+			else
+				rq->refill_pending = true;
 			spin_unlock(&rq->refill_lock);
 		}
 	}
@@ -3228,11 +3236,8 @@ static int virtnet_open(struct net_device *dev)
 		if (err < 0)
 			goto err_enable_qp;
 
-		if (i < vi->curr_queue_pairs) {
-			enable_delayed_refill(&vi->rq[i]);
-			if (schedule_refill)
-				schedule_delayed_work(&vi->rq[i].refill, 0);
-		}
+		if (i < vi->curr_queue_pairs)
+			enable_delayed_refill(&vi->rq[i], schedule_refill);
 	}
 
 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
@@ -3480,9 +3485,7 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
 	if (running)
 		virtnet_napi_enable(rq);
 
-	enable_delayed_refill(rq);
-	if (schedule_refill)
-		schedule_delayed_work(&rq->refill, 0);
+	enable_delayed_refill(rq, schedule_refill);
 }
 
 static void virtnet_rx_resume_all(struct virtnet_info *vi)
-- 
2.43.0