From: Prasad Pandit <ppandit@redhat.com>
To: qemu-devel@nongnu.org
Cc: mst@redhat.com, farosas@suse.de, jasowang@redhat.com,
    mcoqueli@redhat.com, peterx@redhat.com, Prasad Pandit
Subject: [PATCH v2 2/2] vhost-user: add a request-reply lock
Date: Wed, 28 Aug 2024 15:39:14 +0530
Message-ID: <20240828100914.105728-3-ppandit@redhat.com>
In-Reply-To: <20240828100914.105728-1-ppandit@redhat.com>
References: <20240828100914.105728-1-ppandit@redhat.com>

From: Prasad Pandit <ppandit@redhat.com>

QEMU threads use vhost_user_write/read calls to send and receive
request/reply messages from a vhost-user device. When multiple threads
communicate with the same vhost-user device, they can receive each
other's messages, resulting in an erroneous state.

When the fault_thread exits upon completion of postcopy migration, it
sends a 'postcopy_end' message to the vhost-user device.
But sometimes the 'postcopy_end' message is sent while the vhost device
is being set up via vhost_dev_start():

     Thread-1                              Thread-2

 vhost_dev_start                       postcopy_ram_incoming_cleanup
  vhost_device_iotlb_miss               postcopy_notify
   vhost_backend_update_device_iotlb     vhost_user_postcopy_notifier
    vhost_user_send_device_iotlb_msg      vhost_user_postcopy_end
     process_message_reply                 process_message_reply
      vhost_user_read                       vhost_user_read
       vhost_user_read_header                vhost_user_read_header
  "Fail to update device iotlb"          "Failed to receive reply to
                                          postcopy_end"

This creates confusion when the vhost-user device receives the
'postcopy_end' message while it is trying to update IOTLB entries:

 vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
 vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
 vhost_user_postcopy_end: 700871,700900: Failed to receive reply to postcopy_end
 vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.

Here the fault thread seems to end postcopy migration while another
thread is starting the vhost-user device. Add a mutex lock that is held
for one request-reply cycle, to avoid such race conditions.

Fixes: 46343570c06e ("vhost+postcopy: Wire up POSTCOPY_END notify")
Suggested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Prasad Pandit <ppandit@redhat.com>
---
 hw/virtio/vhost-user.c         | 74 ++++++++++++++++++++++++++++++++++
 include/hw/virtio/vhost-user.h |  3 ++
 2 files changed, 77 insertions(+)

v2:
 - Place QEMU_LOCK_GUARD near the vhost_user_write() calls; holding the
   lock for longer fails some tests during rpmbuild(8).
 - rpmbuild(8) fails for some SRPMs, not all. The RHEL-9 SRPM builds
   with this patch, whereas the Fedora SRPM does not.
 - The host OS also seems to affect rpmbuild(8). Some SRPMs build well
   on RHEL-9, but not on a Fedora-40 machine.
 - koji builds are successful with this patch
   https://koji.fedoraproject.org/koji/taskinfo?taskID=122254011
   https://koji.fedoraproject.org/koji/taskinfo?taskID=122252369

v1: Use QEMU_LOCK_GUARD(), rename lock variable
 - https://lore.kernel.org/qemu-devel/20240808095147.291626-3-ppandit@redhat.com/

v0:
 - https://lore.kernel.org/all/Zo_9OlX0pV0paFj7@x1n/
 - https://lore.kernel.org/all/20240720153808-mutt-send-email-mst@kernel.org/

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 00561daa06..7b030ae2cd 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -24,6 +24,7 @@
 #include "qemu/main-loop.h"
 #include "qemu/uuid.h"
 #include "qemu/sockets.h"
+#include "qemu/lockable.h"
 #include "sysemu/runstate.h"
 #include "sysemu/cryptodev.h"
 #include "migration/postcopy-ram.h"
@@ -446,6 +447,10 @@ static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
         .hdr.size = sizeof(msg.payload.log),
     };
 
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     /* Send only once with first queue pair */
     if (dev->vq_index != 0) {
         return 0;
@@ -664,6 +669,7 @@ static int send_remove_regions(struct vhost_dev *dev,
                                bool reply_supported)
 {
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     struct vhost_memory_region *shadow_reg;
     int i, fd, shadow_reg_idx, ret;
     ram_addr_t offset;
@@ -685,6 +691,8 @@ static int send_remove_regions(struct vhost_dev *dev,
         vhost_user_fill_msg_region(&region_buffer, shadow_reg, 0);
         msg->payload.mem_reg.region = region_buffer;
 
+        QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
         ret = vhost_user_write(dev, msg, NULL, 0);
         if (ret < 0) {
             return ret;
@@ -718,6 +726,7 @@ static int send_add_regions(struct vhost_dev *dev,
                            bool reply_supported, bool track_ramblocks)
 {
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     int i, fd, ret, reg_idx, reg_fd_idx;
     struct vhost_memory_region *reg;
     MemoryRegion *mr;
@@ -746,6 +755,8 @@ static int send_add_regions(struct vhost_dev *dev,
         vhost_user_fill_msg_region(&region_buffer, reg, offset);
         msg->payload.mem_reg.region = region_buffer;
 
+        QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
         ret = vhost_user_write(dev, msg, &fd, 1);
         if (ret < 0) {
             return ret;
@@ -893,6 +904,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
                                              bool config_mem_slots)
 {
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     size_t fd_num = 0;
     VhostUserMsg msg_reply;
@@ -926,6 +938,8 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
         return ret;
     }
 
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, fds, fd_num);
     if (ret < 0) {
         return ret;
@@ -1005,6 +1019,7 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     size_t fd_num = 0;
     bool do_postcopy = u->postcopy_listen && u->postcopy_fd.handler;
@@ -1044,6 +1059,8 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev,
         return ret;
     }
 
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, fds, fd_num);
     if (ret < 0) {
         return ret;
@@ -1089,6 +1106,10 @@ static int vhost_user_get_u64(struct vhost_dev *dev, int request, uint64_t *u64)
         return 0;
     }
 
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         return ret;
@@ -1138,6 +1159,10 @@ static int vhost_user_write_sync(struct vhost_dev *dev, VhostUserMsg *msg,
         }
     }
 
+/* struct vhost_user *u = dev->opaque;
+ * struct VhostUserState *us = u->user;
+ * QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+ */
     ret = vhost_user_write(dev, msg, NULL, 0);
     if (ret < 0) {
         return ret;
@@ -1277,6 +1302,8 @@ static int vhost_user_get_vring_base(struct vhost_dev *dev,
         .hdr.size = sizeof(msg.payload.state),
     };
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
 
     VhostUserHostNotifier *n = fetch_notifier(u->user, ring->index);
     if (n) {
@@ -1669,6 +1696,9 @@ int vhost_user_get_shared_object(struct vhost_dev *dev, unsigned char *uuid,
     };
     memcpy(msg.payload.object.uuid, uuid, sizeof(msg.payload.object.uuid));
 
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         return ret;
@@ -1889,6 +1919,9 @@ static int vhost_setup_backend_channel(struct vhost_dev *dev)
         msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
     }
 
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, &sv[1], 1);
     if (ret) {
         goto out;
@@ -1993,6 +2026,9 @@ static int vhost_user_postcopy_advise(struct vhost_dev *dev, Error **errp)
         .hdr.flags = VHOST_USER_VERSION,
     };
 
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         error_setg(errp, "Failed to send postcopy_advise to vhost");
@@ -2051,6 +2087,9 @@ static int vhost_user_postcopy_listen(struct vhost_dev *dev, Error **errp)
     trace_vhost_user_postcopy_listen();
 
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         error_setg(errp, "Failed to send postcopy_listen to vhost");
@@ -2080,6 +2119,9 @@ static int vhost_user_postcopy_end(struct vhost_dev *dev, Error **errp)
     trace_vhost_user_postcopy_end_entry();
 
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         error_setg(errp, "Failed to send postcopy_end to vhost");
@@ -2372,6 +2414,10 @@ static int vhost_user_net_set_mtu(struct vhost_dev *dev, uint16_t mtu)
         msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
     }
 
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         return ret;
@@ -2396,6 +2442,10 @@ static int vhost_user_send_device_iotlb_msg(struct vhost_dev *dev,
         .payload.iotlb = *imsg,
     };
 
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         return ret;
@@ -2428,6 +2478,10 @@ static int vhost_user_get_config(struct vhost_dev *dev, uint8_t *config,
     assert(config_len <= VHOST_USER_MAX_CONFIG_SIZE);
 
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     msg.payload.config.offset = 0;
     msg.payload.config.size = config_len;
     ret = vhost_user_write(dev, &msg, NULL, 0);
@@ -2492,6 +2546,10 @@ static int vhost_user_set_config(struct vhost_dev *dev, const uint8_t *data,
     p = msg.payload.config.region;
     memcpy(p, data, size);
 
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         return ret;
@@ -2570,6 +2628,10 @@ static int vhost_user_crypto_create_session(struct vhost_dev *dev,
         }
     }
 
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     msg.payload.session.op_code = backend_info->op_code;
     msg.payload.session.session_id = backend_info->session_id;
     ret = vhost_user_write(dev, &msg, NULL, 0);
@@ -2662,6 +2724,9 @@ static int vhost_user_get_inflight_fd(struct vhost_dev *dev,
         return 0;
     }
 
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         return ret;
@@ -2757,6 +2822,7 @@ bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp)
     user->memory_slots = 0;
     user->notifiers = g_ptr_array_new_full(VIRTIO_QUEUE_MAX / 4,
                                            &vhost_user_state_destroy);
+    qemu_mutex_init(&user->vhost_user_request_reply_lock);
 
     return true;
 }
@@ -2769,6 +2835,7 @@ void vhost_user_cleanup(VhostUserState *user)
     user->notifiers = (GPtrArray *) g_ptr_array_free(user->notifiers, true);
     memory_region_transaction_commit();
     user->chr = NULL;
+    qemu_mutex_destroy(&user->vhost_user_request_reply_lock);
 }
 
@@ -2902,6 +2969,9 @@ static int vhost_user_set_device_state_fd(struct vhost_dev *dev,
         return -ENOTSUP;
     }
 
+    struct VhostUserState *us = vu->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, &fd, 1);
     close(fd);
     if (ret < 0) {
@@ -2965,6 +3035,10 @@ static int vhost_user_check_device_state(struct vhost_dev *dev, Error **errp)
         return -ENOTSUP;
     }
 
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_request_reply_lock);
+
     ret = vhost_user_write(dev, &msg, NULL, 0);
     if (ret < 0) {
         error_setg_errno(errp, -ret,
diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index 324cd8663a..e96f12d449 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -67,6 +67,9 @@ typedef struct VhostUserState {
     GPtrArray *notifiers;
     int memory_slots;
     bool supports_config;
+
+    /* Hold lock for a request-reply cycle */
+    QemuMutex vhost_user_request_reply_lock;
 } VhostUserState;
 
 /**
-- 
2.46.0