From: Bui Quang Minh
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
 Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
 virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Bui Quang Minh, stable@vger.kernel.org
Subject: [PATCH net v2 1/3] virtio-net: don't schedule delayed refill worker
Date: Fri, 2 Jan 2026 22:20:21 +0700
Message-ID: <20260102152023.10773-2-minhquangbui99@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260102152023.10773-1-minhquangbui99@gmail.com>
References: <20260102152023.10773-1-minhquangbui99@gmail.com>

When we fail to refill the receive buffers, we schedule a delayed worker
to retry later. However, this worker creates some concurrency issues such
as races and deadlocks. To simplify the logic and avoid further problems,
we will instead retry refilling in the next NAPI poll.

Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
Reported-by: Paolo Abeni
Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
Cc: stable@vger.kernel.org
Suggested-by: Xuan Zhuo
Signed-off-by: Bui Quang Minh
Acked-by: Michael S. Tsirkin
---
 drivers/net/virtio_net.c | 55 ++++++++++++++++++++++------------------
 1 file changed, 30 insertions(+), 25 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 1bb3aeca66c6..ac514c9383ae 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3035,7 +3035,7 @@ static int virtnet_receive_packets(struct virtnet_info *vi,
 }
 
 static int virtnet_receive(struct receive_queue *rq, int budget,
-			   unsigned int *xdp_xmit)
+			   unsigned int *xdp_xmit, bool *retry_refill)
 {
 	struct virtnet_info *vi = rq->vq->vdev->priv;
 	struct virtnet_rq_stats stats = {};
@@ -3047,12 +3047,8 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 	packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats);
 
 	if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {
-		if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
-			spin_lock(&vi->refill_lock);
-			if (vi->refill_enabled)
-				schedule_delayed_work(&vi->refill, 0);
-			spin_unlock(&vi->refill_lock);
-		}
+		if (!try_fill_recv(vi, rq, GFP_ATOMIC))
+			*retry_refill = true;
 	}
 
 	u64_stats_set(&stats.packets, packets);
@@ -3129,18 +3125,18 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 	struct send_queue *sq;
 	unsigned int received;
 	unsigned int xdp_xmit = 0;
-	bool napi_complete;
+	bool napi_complete, retry_refill = false;
 
 	virtnet_poll_cleantx(rq, budget);
 
-	received = virtnet_receive(rq, budget, &xdp_xmit);
+	received = virtnet_receive(rq, budget, &xdp_xmit, &retry_refill);
 	rq->packets_in_napi += received;
 
 	if (xdp_xmit & VIRTIO_XDP_REDIR)
 		xdp_do_flush();
 
 	/* Out of packets? */
-	if (received < budget) {
+	if (received < budget && !retry_refill) {
 		napi_complete = virtqueue_napi_complete(napi, rq->vq, received);
 		/* Intentionally not taking dim_lock here. This may result in a
 		 * spurious net_dim call. But if that happens virtnet_rx_dim_work
@@ -3160,7 +3156,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 		virtnet_xdp_put_sq(vi, sq);
 	}
 
-	return received;
+	return retry_refill ? budget : received;
 }
 
 static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
@@ -3230,9 +3226,11 @@ static int virtnet_open(struct net_device *dev)
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
-			/* Make sure we have some buffers: if oom use wq. */
-			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->refill, 0);
+			/* If this fails, we will retry later in
+			 * NAPI poll, which is scheduled in the below
+			 * virtnet_enable_queue_pair
+			 */
+			try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
 
 		err = virtnet_enable_queue_pair(vi, i);
 		if (err < 0)
@@ -3473,15 +3471,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
 				bool refill)
 {
 	bool running = netif_running(vi->dev);
-	bool schedule_refill = false;
 
-	if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
-		schedule_refill = true;
+	if (refill)
+		/* If this fails, we will retry later in NAPI poll, which is
+		 * scheduled in the below virtnet_napi_enable
+		 */
+		try_fill_recv(vi, rq, GFP_KERNEL);
+
 	if (running)
 		virtnet_napi_enable(rq);
-
-	if (schedule_refill)
-		schedule_delayed_work(&vi->refill, 0);
 }
 
 static void virtnet_rx_resume_all(struct virtnet_info *vi)
@@ -3777,6 +3775,7 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
 	struct virtio_net_rss_config_trailer old_rss_trailer;
 	struct net_device *dev = vi->dev;
 	struct scatterlist sg;
+	int i;
 
 	if (!vi->has_cvq || !virtio_has_feature(vi->vdev, VIRTIO_NET_F_MQ))
 		return 0;
@@ -3829,11 +3828,17 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
 	}
 succ:
 	vi->curr_queue_pairs = queue_pairs;
-	/* virtnet_open() will refill when device is going to up. */
-	spin_lock_bh(&vi->refill_lock);
-	if (dev->flags & IFF_UP && vi->refill_enabled)
-		schedule_delayed_work(&vi->refill, 0);
-	spin_unlock_bh(&vi->refill_lock);
+	if (dev->flags & IFF_UP) {
+		/* Let the NAPI poll refill the receive buffer for us. We can't
+		 * safely call try_fill_recv() here because the NAPI might be
+		 * enabled already.
+		 */
+		local_bh_disable();
+		for (i = 0; i < vi->curr_queue_pairs; i++)
+			virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
+
+		local_bh_enable();
+	}
 
 	return 0;
 }
-- 
2.43.0
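
The fix relies on the standard NAPI convention: a poll handler that still
has outstanding work reports the full budget and skips completing NAPI, so
the core re-invokes it and the failed refill is retried on the next poll.
A minimal, hypothetical sketch of that pattern follows; the example_* names
are made up for illustration and are not the virtio-net code.

#include <linux/netdevice.h>
#include <linux/gfp.h>

struct example_rx_queue {
	struct napi_struct napi;
	/* ring state, buffer pool, ... */
};

/* Returns true when the ring was refilled successfully. */
static bool example_refill(struct example_rx_queue *rq, gfp_t gfp);
static int example_receive(struct example_rx_queue *rq, int budget);

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	struct example_rx_queue *rq =
		container_of(napi, struct example_rx_queue, napi);
	bool retry_refill = false;
	int received;

	received = example_receive(rq, budget);

	/* A GFP_ATOMIC refill can fail under memory pressure. */
	if (!example_refill(rq, GFP_ATOMIC))
		retry_refill = true;

	/* Complete NAPI only when all packets were processed and the
	 * refill succeeded; otherwise report the full budget so the
	 * core keeps polling and the refill is retried shortly.
	 */
	if (received < budget && !retry_refill)
		napi_complete_done(napi, received);

	return retry_refill ? budget : received;
}
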
From: Bui Quang Minh
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
 Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
 virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Bui Quang Minh
Subject: [PATCH net v2 2/3] virtio-net: remove unused delayed refill worker
Date: Fri, 2 Jan 2026 22:20:22 +0700
Message-ID: <20260102152023.10773-3-minhquangbui99@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260102152023.10773-1-minhquangbui99@gmail.com>
References: <20260102152023.10773-1-minhquangbui99@gmail.com>

Now that we retry refilling the receive buffers from the NAPI poll instead
of from a delayed worker, remove all of the now-unused delayed refill
worker code.

Signed-off-by: Bui Quang Minh
Acked-by: Michael S. Tsirkin
---
 drivers/net/virtio_net.c | 86 ----------------------------------------
 1 file changed, 86 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index ac514c9383ae..7e77a05b5662 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -441,9 +441,6 @@ struct virtnet_info {
 	/* Packet virtio header size */
 	u8 hdr_len;
 
-	/* Work struct for delayed refilling if we run low on memory. */
-	struct delayed_work refill;
-
 	/* UDP tunnel support */
 	bool tx_tnl;
 
@@ -451,12 +448,6 @@ struct virtnet_info {
 
 	bool rx_tnl_csum;
 
-	/* Is delayed refill enabled? */
-	bool refill_enabled;
-
-	/* The lock to synchronize the access to refill_enabled */
-	spinlock_t refill_lock;
-
 	/* Work struct for config space updates */
 	struct work_struct config_work;
 
@@ -720,20 +711,6 @@ static void virtnet_rq_free_buf(struct virtnet_info *vi,
 	put_page(virt_to_head_page(buf));
 }
 
-static void enable_delayed_refill(struct virtnet_info *vi)
-{
-	spin_lock_bh(&vi->refill_lock);
-	vi->refill_enabled = true;
-	spin_unlock_bh(&vi->refill_lock);
-}
-
-static void disable_delayed_refill(struct virtnet_info *vi)
-{
-	spin_lock_bh(&vi->refill_lock);
-	vi->refill_enabled = false;
-	spin_unlock_bh(&vi->refill_lock);
-}
-
 static void enable_rx_mode_work(struct virtnet_info *vi)
 {
 	rtnl_lock();
@@ -2948,42 +2925,6 @@ static void virtnet_napi_disable(struct receive_queue *rq)
 	napi_disable(napi);
 }
 
-static void refill_work(struct work_struct *work)
-{
-	struct virtnet_info *vi =
-		container_of(work, struct virtnet_info, refill.work);
-	bool still_empty;
-	int i;
-
-	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		struct receive_queue *rq = &vi->rq[i];
-
-		/*
-		 * When queue API support is added in the future and the call
-		 * below becomes napi_disable_locked, this driver will need to
-		 * be refactored.
-		 *
-		 * One possible solution would be to:
-		 * - cancel refill_work with cancel_delayed_work (note:
-		 *   non-sync)
-		 * - cancel refill_work with cancel_delayed_work_sync in
-		 *   virtnet_remove after the netdev is unregistered
-		 * - wrap all of the work in a lock (perhaps the netdev
-		 *   instance lock)
-		 * - check netif_running() and return early to avoid a race
-		 */
-		napi_disable(&rq->napi);
-		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
-		virtnet_napi_do_enable(rq->vq, &rq->napi);
-
-		/* In theory, this can happen: if we don't get any buffers in
-		 * we will *never* try to fill again.
-		 */
-		if (still_empty)
-			schedule_delayed_work(&vi->refill, HZ/2);
-	}
-}
-
 static int virtnet_receive_xsk_bufs(struct virtnet_info *vi,
 				    struct receive_queue *rq,
 				    int budget,
@@ -3222,8 +3163,6 @@ static int virtnet_open(struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i, err;
 
-	enable_delayed_refill(vi);
-
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
 			/* If this fails, we will retry later in
@@ -3249,9 +3188,6 @@ static int virtnet_open(struct net_device *dev)
 	return 0;
 
 err_enable_qp:
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
-
 	for (i--; i >= 0; i--) {
 		virtnet_disable_queue_pair(vi, i);
 		virtnet_cancel_dim(vi, &vi->rq[i].dim);
@@ -3445,24 +3381,12 @@ static void virtnet_rx_pause_all(struct virtnet_info *vi)
 {
 	int i;
 
-	/*
-	 * Make sure refill_work does not run concurrently to
-	 * avoid napi_disable race which leads to deadlock.
-	 */
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
 	for (i = 0; i < vi->max_queue_pairs; i++)
 		__virtnet_rx_pause(vi, &vi->rq[i]);
 }
 
 static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
 {
-	/*
-	 * Make sure refill_work does not run concurrently to
-	 * avoid napi_disable race which leads to deadlock.
-	 */
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
 	__virtnet_rx_pause(vi, rq);
 }
 
@@ -3486,7 +3410,6 @@ static void virtnet_rx_resume_all(struct virtnet_info *vi)
 {
 	int i;
 
-	enable_delayed_refill(vi);
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
 			__virtnet_rx_resume(vi, &vi->rq[i], true);
@@ -3497,7 +3420,6 @@ static void virtnet_rx_resume_all(struct virtnet_info *vi)
 
 static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
 {
-	enable_delayed_refill(vi);
 	__virtnet_rx_resume(vi, rq, true);
 }
 
@@ -3848,10 +3770,6 @@ static int virtnet_close(struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i;
 
-	/* Make sure NAPI doesn't schedule refill work */
-	disable_delayed_refill(vi);
-	/* Make sure refill_work doesn't re-enable napi! */
-	cancel_delayed_work_sync(&vi->refill);
 	/* Prevent the config change callback from changing carrier
 	 * after close
 	 */
@@ -5807,7 +5725,6 @@ static int virtnet_restore_up(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
-	enable_delayed_refill(vi);
 	enable_rx_mode_work(vi);
 
 	if (netif_running(vi->dev)) {
@@ -6564,7 +6481,6 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
 	if (!vi->rq)
 		goto err_rq;
 
-	INIT_DELAYED_WORK(&vi->refill, refill_work);
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		vi->rq[i].pages = NULL;
 		netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
@@ -6906,7 +6822,6 @@ static int virtnet_probe(struct virtio_device *vdev)
 
 	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
 	INIT_WORK(&vi->rx_mode_work, virtnet_rx_mode_work);
-	spin_lock_init(&vi->refill_lock);
 
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) {
 		vi->mergeable_rx_bufs = true;
@@ -7170,7 +7085,6 @@ static int virtnet_probe(struct virtio_device *vdev)
 	net_failover_destroy(vi->failover);
 free_vqs:
 	virtio_reset_device(vdev);
-	cancel_delayed_work_sync(&vi->refill);
 	free_receive_page_frags(vi);
 	virtnet_del_vqs(vi);
 free:
-- 
2.43.0
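
The removed code followed the usual delayed_work lifecycle, which every
pause/close path had to gate and flush to avoid racing with napi_disable().
A condensed sketch of that lifecycle, using hypothetical example_* names
rather than the real driver structures, shows the boilerplate the driver no
longer needs.

#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct example_dev {
	struct delayed_work refill;
	spinlock_t refill_lock;	/* protects refill_enabled */
	bool refill_enabled;
};

static void example_refill_work(struct work_struct *work)
{
	struct example_dev *dev =
		container_of(work, struct example_dev, refill.work);

	/* Retry the allocation here; on failure the removed worker used
	 * to reschedule itself with schedule_delayed_work(..., HZ/2).
	 */
}

static void example_init(struct example_dev *dev)
{
	spin_lock_init(&dev->refill_lock);
	INIT_DELAYED_WORK(&dev->refill, example_refill_work);
}

static void example_quiesce(struct example_dev *dev)
{
	/* Every pause/close path had to stop new scheduling and then
	 * flush the worker so it could not race with napi_disable().
	 */
	spin_lock_bh(&dev->refill_lock);
	dev->refill_enabled = false;
	spin_unlock_bh(&dev->refill_lock);
	cancel_delayed_work_sync(&dev->refill);
}
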
From: Bui Quang Minh
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
 Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
 virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Bui Quang Minh
Subject: [PATCH net v2 3/3] virtio-net: clean up __virtnet_rx_pause/resume
Date: Fri, 2 Jan 2026 22:20:23 +0700
Message-ID: <20260102152023.10773-4-minhquangbui99@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260102152023.10773-1-minhquangbui99@gmail.com>
References: <20260102152023.10773-1-minhquangbui99@gmail.com>

Now that the delayed refill worker is gone, virtnet_rx_pause/resume are
just thin wrappers around __virtnet_rx_pause/resume. Remove
__virtnet_rx_pause/resume and move their code into
virtnet_rx_pause/resume.

Signed-off-by: Bui Quang Minh
Acked-by: Michael S. Tsirkin
---
 drivers/net/virtio_net.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7e77a05b5662..95c80f55fa9a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3366,8 +3366,8 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	return NETDEV_TX_OK;
 }
 
-static void __virtnet_rx_pause(struct virtnet_info *vi,
-			       struct receive_queue *rq)
+static void virtnet_rx_pause(struct virtnet_info *vi,
+			     struct receive_queue *rq)
 {
 	bool running = netif_running(vi->dev);
 
@@ -3382,17 +3382,12 @@ static void virtnet_rx_pause_all(struct virtnet_info *vi)
 	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++)
-		__virtnet_rx_pause(vi, &vi->rq[i]);
+		virtnet_rx_pause(vi, &vi->rq[i]);
 }
 
-static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
-{
-	__virtnet_rx_pause(vi, rq);
-}
-
-static void __virtnet_rx_resume(struct virtnet_info *vi,
-				struct receive_queue *rq,
-				bool refill)
+static void virtnet_rx_resume(struct virtnet_info *vi,
+			      struct receive_queue *rq,
+			      bool refill)
 {
 	bool running = netif_running(vi->dev);
 
@@ -3412,17 +3407,12 @@ static void virtnet_rx_resume_all(struct virtnet_info *vi)
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
-			__virtnet_rx_resume(vi, &vi->rq[i], true);
+			virtnet_rx_resume(vi, &vi->rq[i], true);
 		else
-			__virtnet_rx_resume(vi, &vi->rq[i], false);
+			virtnet_rx_resume(vi, &vi->rq[i], false);
 	}
 }
 
-static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
-{
-	__virtnet_rx_resume(vi, rq, true);
-}
-
 static int virtnet_rx_resize(struct virtnet_info *vi,
 			     struct receive_queue *rq, u32 ring_num)
 {
@@ -3436,7 +3426,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
 	if (err)
 		netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
 
-	virtnet_rx_resume(vi, rq);
+	virtnet_rx_resume(vi, rq, true);
 	return err;
 }
 
@@ -5814,7 +5804,7 @@ static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queu
 
 	rq->xsk_pool = pool;
 
-	virtnet_rx_resume(vi, rq);
+	virtnet_rx_resume(vi, rq, true);
 
 	if (pool)
 		return 0;
-- 
2.43.0
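
After this cleanup, callers pause a queue, reconfigure it, and resume it
with an explicit refill flag. The sketch below is a hypothetical caller
modelled on virtnet_rx_resize() from the hunk above, as if it lived in
drivers/net/virtio_net.c; the reconfiguration step is elided and error
handling is trimmed.

static int example_reconfigure_rx(struct virtnet_info *vi,
				  struct receive_queue *rq)
{
	int err = 0;

	/* Quiesce RX on this queue (NAPI is disabled while the
	 * interface is running).
	 */
	virtnet_rx_pause(vi, rq);

	/* ... reconfigure the queue here, e.g. resize the vring ... */

	/* Re-enable NAPI. refill=true also repopulates the ring with
	 * GFP_KERNEL buffers; if that fails, the refill is retried from
	 * the NAPI poll (see patch 1/3).
	 */
	virtnet_rx_resume(vi, rq, true);

	return err;
}
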