From: Stefano Garzarella
To: netdev@vger.kernel.org
Cc: Xuan Zhuo, bpf@vger.kernel.org, linux-kernel@vger.kernel.org,
    Luigi Leonardi, "David S. Miller", Wongi Lee, Stefano Garzarella,
    Eugenio Pérez, "Michael S. Tsirkin", Eric Dumazet, kvm@vger.kernel.org,
    Paolo Abeni, Stefan Hajnoczi, Jason Wang, Simon Horman, Hyunwoo Kim,
    Jakub Kicinski, Michal Luczaj, virtualization@lists.linux.dev,
    Bobby Eshleman, stable@vger.kernel.org
Subject: [PATCH net v2 3/5] vsock/virtio: cancel close work in the destructor
Date: Fri, 10 Jan 2025 09:35:09 +0100
Message-ID: <20250110083511.30419-4-sgarzare@redhat.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250110083511.30419-1-sgarzare@redhat.com>
References: <20250110083511.30419-1-sgarzare@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

During virtio_transport_release() we can schedule a delayed work to
perform the closing of the socket before destruction.
The destructor is called either when the socket is actually destroyed
(reference count drops to zero), or when we are de-assigning the
transport.

In the former case, we are sure the delayed work has completed, because
it holds a reference until it completes, so the destructor will
definitely be called after the delayed work has finished.

But in the latter case, the destructor is called by the AF_VSOCK core
just after release(), so there may still be delayed work scheduled.

Refactor the code, moving the close-work cancellation already done in
do_close() into a new function. Invoke it during destruction to make
sure we don't leave any pending work.

Fixes: c0cfa2d8a788 ("vsock: add multi-transports support")
Cc: stable@vger.kernel.org
Reported-by: Hyunwoo Kim
Closes: https://lore.kernel.org/netdev/Z37Sh+utS+iV3+eb@v4bel-B760M-AORUS-ELITE-AX/
Signed-off-by: Stefano Garzarella
Reviewed-by: Luigi Leonardi
Tested-by: Hyunwoo Kim
---
 net/vmw_vsock/virtio_transport_common.c | 29 ++++++++++++++++++-------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 51a494b69be8..7f7de6d88096 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -26,6 +26,9 @@
 /* Threshold for detecting small packets to copy */
 #define GOOD_COPY_LEN  128
 
+static void virtio_transport_cancel_close_work(struct vsock_sock *vsk,
+					       bool cancel_timeout);
+
 static const struct virtio_transport *
 virtio_transport_get_ops(struct vsock_sock *vsk)
 {
@@ -1109,6 +1112,8 @@ void virtio_transport_destruct(struct vsock_sock *vsk)
 {
 	struct virtio_vsock_sock *vvs = vsk->trans;
 
+	virtio_transport_cancel_close_work(vsk, true);
+
 	kfree(vvs);
 	vsk->trans = NULL;
 }
@@ -1204,17 +1209,11 @@ static void virtio_transport_wait_close(struct sock *sk, long timeout)
 	}
 }
 
-static void virtio_transport_do_close(struct vsock_sock *vsk,
-				      bool cancel_timeout)
+static void virtio_transport_cancel_close_work(struct vsock_sock *vsk,
+					       bool cancel_timeout)
 {
 	struct sock *sk = sk_vsock(vsk);
 
-	sock_set_flag(sk, SOCK_DONE);
-	vsk->peer_shutdown = SHUTDOWN_MASK;
-	if (vsock_stream_has_data(vsk) <= 0)
-		sk->sk_state = TCP_CLOSING;
-	sk->sk_state_change(sk);
-
 	if (vsk->close_work_scheduled &&
 	    (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
 		vsk->close_work_scheduled = false;
@@ -1226,6 +1225,20 @@ static void virtio_transport_do_close(struct vsock_sock *vsk,
 	}
 }
 
+static void virtio_transport_do_close(struct vsock_sock *vsk,
+				      bool cancel_timeout)
+{
+	struct sock *sk = sk_vsock(vsk);
+
+	sock_set_flag(sk, SOCK_DONE);
+	vsk->peer_shutdown = SHUTDOWN_MASK;
+	if (vsock_stream_has_data(vsk) <= 0)
+		sk->sk_state = TCP_CLOSING;
+	sk->sk_state_change(sk);
+
+	virtio_transport_cancel_close_work(vsk, cancel_timeout);
+}
+
 static void virtio_transport_close_timeout(struct work_struct *work)
 {
 	struct vsock_sock *vsk =
-- 
2.47.1