From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang, si-wei.liu@oracle.com, Laurent Vivier, Lei Yang
Subject: [PATCH v4 01/13] vdpa: add VhostVDPAShared
Date: Thu, 21 Dec 2023 18:43:10 +0100
Message-Id: <20231221174322.3130442-2-eperezma@redhat.com>
In-Reply-To: <20231221174322.3130442-1-eperezma@redhat.com>

It will hold properties shared among all vhost_vdpa instances associated
with the same device. For example, we just need one iova_tree or one
memory listener for the entire device.

Next patches will register the vhost_vdpa memory listener at the
beginning of the VM migration at the destination.
This enables QEMU to map the memory to the device before stopping the VM
at the source, instead of doing so while both source and destination are
stopped, thus minimizing the downtime.

However, the destination QEMU is unaware of which vhost_vdpa struct will
register its memory_listener. If the source guest has CVQ enabled, it
will be the one associated with the CVQ. Otherwise, it will be the first
one.

Save the memory-operation-related members in a common place rather than
always in the first / last vhost_vdpa.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
Tested-by: Lei Yang
---
 include/hw/virtio/vhost-vdpa.h |  5 +++++
 net/vhost-vdpa.c               | 24 ++++++++++++++++++++++--
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 5407d54fd7..eb1a56d75a 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -30,6 +30,10 @@ typedef struct VhostVDPAHostNotifier {
     void *addr;
 } VhostVDPAHostNotifier;
 
+/* Info shared by all vhost_vdpa device models */
+typedef struct vhost_vdpa_shared {
+} VhostVDPAShared;
+
 typedef struct vhost_vdpa {
     int device_fd;
     int index;
@@ -46,6 +50,7 @@ typedef struct vhost_vdpa {
     bool suspended;
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
+    VhostVDPAShared *shared;
     GPtrArray *shadow_vqs;
     const VhostShadowVirtqueueOps *shadow_vq_ops;
     void *shadow_vq_ops_opaque;
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index d0614d7954..8b661b9e6d 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -240,6 +240,10 @@ static void vhost_vdpa_cleanup(NetClientState *nc)
         qemu_close(s->vhost_vdpa.device_fd);
         s->vhost_vdpa.device_fd = -1;
     }
+    if (s->vhost_vdpa.index != 0) {
+        return;
+    }
+    g_free(s->vhost_vdpa.shared);
 }
 
 /** Dummy SetSteeringEBPF to support RSS for vhost-vdpa backend */
@@ -1661,6 +1665,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                            bool svq,
                                            struct vhost_vdpa_iova_range iova_range,
                                            uint64_t features,
+                                           VhostVDPAShared *shared,
                                            Error **errp)
 {
     NetClientState *nc = NULL;
@@ -1696,6 +1701,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     if (queue_pair_index == 0) {
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
+        s->vhost_vdpa.shared = g_new0(VhostVDPAShared, 1);
     } else if (!is_datapath) {
         s->cvq_cmd_out_buffer = mmap(NULL, vhost_vdpa_net_cvq_cmd_page_len(),
                                      PROT_READ | PROT_WRITE,
@@ -1708,11 +1714,16 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
         s->cvq_isolated = cvq_isolated;
     }
+    if (queue_pair_index != 0) {
+        s->vhost_vdpa.shared = shared;
+    }
+
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
         qemu_del_net_client(nc);
         return NULL;
     }
+
     return nc;
 }
 
@@ -1824,17 +1835,26 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
 
     for (i = 0; i < queue_pairs; i++) {
+        VhostVDPAShared *shared = NULL;
+
+        if (i) {
+            shared = DO_UPCAST(VhostVDPAState, nc, ncs[0])->vhost_vdpa.shared;
+        }
         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                      vdpa_device_fd, i, 2, true, opts->x_svq,
-                                     iova_range, features, errp);
+                                     iova_range, features, shared, errp);
         if (!ncs[i])
             goto err;
     }
 
     if (has_cvq) {
+        VhostVDPAState *s0 = DO_UPCAST(VhostVDPAState, nc, ncs[0]);
+        VhostVDPAShared *shared = s0->vhost_vdpa.shared;
+
         nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                  vdpa_device_fd, i, 1, false,
-                                 opts->x_svq, iova_range, features, errp);
+                                 opts->x_svq, iova_range, features, shared,
+                                 errp);
         if (!nc)
             goto err;
     }
-- 
2.39.3
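The ownership rule this patch introduces can be sketched as follows. This is a minimal, hypothetical model — not QEMU code — assuming the patch's convention that the vhost_vdpa with index 0 allocates the shared struct, every other instance of the same device borrows the pointer, and only index 0 frees it on cleanup (the stub names mirror the patch for readability):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the (currently empty) shared struct. */
typedef struct {
    int placeholder; /* empty in this patch; members are added later */
} VhostVDPAShared;

typedef struct {
    int index;
    VhostVDPAShared *shared;
} vhost_vdpa;

/* The first instance (index 0) allocates the shared state; the rest
 * reuse the pointer handed down by the caller. */
static void vdpa_init(vhost_vdpa *v, int index, VhostVDPAShared *shared)
{
    v->index = index;
    v->shared = (index == 0) ? calloc(1, sizeof(VhostVDPAShared)) : shared;
}

/* Mirrors vhost_vdpa_cleanup(): only index 0 releases the shared state,
 * so the struct is freed exactly once per device. */
static void vdpa_cleanup(vhost_vdpa *v)
{
    if (v->index != 0) {
        return;
    }
    free(v->shared);
    v->shared = NULL;
}
```

The point of the asymmetry is that a struct shared by N queue-pair instances must have exactly one owner; tying ownership to index 0 avoids reference counting.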
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang, si-wei.liu@oracle.com, Laurent Vivier, Lei Yang
Subject: [PATCH v4 02/13] vdpa: move iova tree to the shared struct
Date: Thu, 21 Dec 2023 18:43:11 +0100
Message-Id: <20231221174322.3130442-3-eperezma@redhat.com>
In-Reply-To: <20231221174322.3130442-1-eperezma@redhat.com>

Next patches will register the vhost_vdpa memory listener while the VM
is migrating at the destination, so we can map the memory to the device
before stopping the VM at the source. The main goal is to reduce the
downtime.

However, the destination QEMU is unaware of which vhost_vdpa device
will register its memory_listener. If the source guest has CVQ enabled,
it will be the CVQ device.
Otherwise, it will be the first one.

Move the iova tree to VhostVDPAShared so all vhost_vdpa can use it,
rather than always in the first or last vhost_vdpa.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 include/hw/virtio/vhost-vdpa.h |  4 +--
 hw/virtio/vhost-vdpa.c         | 19 ++++++------
 net/vhost-vdpa.c               | 54 +++++++++++++++-------------------
 3 files changed, 35 insertions(+), 42 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index eb1a56d75a..ac036055d3 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -32,6 +32,8 @@ typedef struct VhostVDPAHostNotifier {
 
 /* Info shared by all vhost_vdpa device models */
 typedef struct vhost_vdpa_shared {
+    /* IOVA mapping used by the Shadow Virtqueue */
+    VhostIOVATree *iova_tree;
 } VhostVDPAShared;
 
 typedef struct vhost_vdpa {
@@ -48,8 +50,6 @@ typedef struct vhost_vdpa {
     bool shadow_data;
     /* Device suspended successfully */
     bool suspended;
-    /* IOVA mapping used by the Shadow Virtqueue */
-    VhostIOVATree *iova_tree;
     VhostVDPAShared *shared;
     GPtrArray *shadow_vqs;
     const VhostShadowVirtqueueOps *shadow_vq_ops;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 819b2d811a..9cee38cb6d 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -358,7 +358,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
         mem_region.size = int128_get64(llsize) - 1,
         mem_region.perm = IOMMU_ACCESS_FLAG(true, section->readonly),
 
-        r = vhost_iova_tree_map_alloc(v->iova_tree, &mem_region);
+        r = vhost_iova_tree_map_alloc(v->shared->iova_tree, &mem_region);
         if (unlikely(r != IOVA_OK)) {
             error_report("Can't allocate a mapping (%d)", r);
             goto fail;
@@ -379,7 +379,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
 
 fail_map:
     if (v->shadow_data) {
-        vhost_iova_tree_remove(v->iova_tree, mem_region);
+        vhost_iova_tree_remove(v->shared->iova_tree, mem_region);
     }
 
 fail:
@@ -441,13 +441,13 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
             .size = int128_get64(llsize) - 1,
         };
 
-        result = vhost_iova_tree_find_iova(v->iova_tree, &mem_region);
+        result = vhost_iova_tree_find_iova(v->shared->iova_tree, &mem_region);
         if (!result) {
             /* The memory listener map wasn't mapped */
             return;
         }
         iova = result->iova;
-        vhost_iova_tree_remove(v->iova_tree, *result);
+        vhost_iova_tree_remove(v->shared->iova_tree, *result);
     }
     vhost_vdpa_iotlb_batch_begin_once(v);
     /*
@@ -1059,7 +1059,8 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
     const DMAMap needle = {
         .translated_addr = addr,
     };
-    const DMAMap *result = vhost_iova_tree_find_iova(v->iova_tree, &needle);
+    const DMAMap *result = vhost_iova_tree_find_iova(v->shared->iova_tree,
+                                                     &needle);
     hwaddr size;
     int r;
 
@@ -1075,7 +1076,7 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
         return;
     }
 
-    vhost_iova_tree_remove(v->iova_tree, *result);
+    vhost_iova_tree_remove(v->shared->iova_tree, *result);
 }
 
 static void vhost_vdpa_svq_unmap_rings(struct vhost_dev *dev,
@@ -1103,7 +1104,7 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
 {
     int r;
 
-    r = vhost_iova_tree_map_alloc(v->iova_tree, needle);
+    r = vhost_iova_tree_map_alloc(v->shared->iova_tree, needle);
     if (unlikely(r != IOVA_OK)) {
         error_setg(errp, "Cannot allocate iova (%d)", r);
         return false;
@@ -1115,7 +1116,7 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
                                    needle->perm == IOMMU_RO);
     if (unlikely(r != 0)) {
         error_setg_errno(errp, -r, "Cannot map region to device");
-        vhost_iova_tree_remove(v->iova_tree, *needle);
+        vhost_iova_tree_remove(v->shared->iova_tree, *needle);
     }
 
     return r == 0;
@@ -1216,7 +1217,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
             goto err;
         }
 
-        vhost_svq_start(svq, dev->vdev, vq, v->iova_tree);
+        vhost_svq_start(svq, dev->vdev, vq, v->shared->iova_tree);
         ok = vhost_vdpa_svq_map_rings(dev, svq, &addr, &err);
         if (unlikely(!ok)) {
             goto err_map;
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 8b661b9e6d..10703e5833 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -354,8 +354,8 @@ static void vhost_vdpa_net_data_start_first(VhostVDPAState *s)
     migration_add_notifier(&s->migration_state,
                            vdpa_net_migration_state_notifier);
     if (v->shadow_vqs_enabled) {
-        v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
-                                           v->iova_range.last);
+        v->shared->iova_tree = vhost_iova_tree_new(v->iova_range.first,
+                                                   v->iova_range.last);
     }
 }
 
@@ -380,11 +380,6 @@ static int vhost_vdpa_net_data_start(NetClientState *nc)
         return 0;
     }
 
-    if (v->shadow_vqs_enabled) {
-        VhostVDPAState *s0 = vhost_vdpa_net_first_nc_vdpa(s);
-        v->iova_tree = s0->vhost_vdpa.iova_tree;
-    }
-
     return 0;
 }
 
@@ -417,9 +412,8 @@ static void vhost_vdpa_net_client_stop(NetClientState *nc)
 
     dev = s->vhost_vdpa.dev;
     if (dev->vq_index + dev->nvqs == dev->vq_index_end) {
-        g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
-    } else {
-        s->vhost_vdpa.iova_tree = NULL;
+        g_clear_pointer(&s->vhost_vdpa.shared->iova_tree,
+                        vhost_iova_tree_delete);
     }
 }
 
@@ -474,7 +468,7 @@ static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
 
 static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
 {
-    VhostIOVATree *tree = v->iova_tree;
+    VhostIOVATree *tree = v->shared->iova_tree;
     DMAMap needle = {
         /*
          * No need to specify size or to look for more translations since
@@ -508,7 +502,7 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
     map.translated_addr = (hwaddr)(uintptr_t)buf;
     map.size = size - 1;
     map.perm = write ? IOMMU_RW : IOMMU_RO,
-    r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
+    r = vhost_iova_tree_map_alloc(v->shared->iova_tree, &map);
     if (unlikely(r != IOVA_OK)) {
         error_report("Cannot map injected element");
         return r;
@@ -523,7 +517,7 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
     return 0;
 
 dma_map_err:
-    vhost_iova_tree_remove(v->iova_tree, map);
+    vhost_iova_tree_remove(v->shared->iova_tree, map);
     return r;
 }
 
@@ -583,24 +577,22 @@ out:
         return 0;
     }
 
-    if (s0->vhost_vdpa.iova_tree) {
-        /*
-         * SVQ is already configured for all virtqueues. Reuse IOVA tree for
-         * simplicity, whether CVQ shares ASID with guest or not, because:
-         * - Memory listener need access to guest's memory addresses allocated
-         *   in the IOVA tree.
-         * - There should be plenty of IOVA address space for both ASID not to
-         *   worry about collisions between them. Guest's translations are
-         *   still validated with virtio virtqueue_pop so there is no risk for
-         *   the guest to access memory that it shouldn't.
-         *
-         * To allocate a iova tree per ASID is doable but it complicates the
-         * code and it is not worth it for the moment.
-         */
-        v->iova_tree = s0->vhost_vdpa.iova_tree;
-    } else {
-        v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
-                                           v->iova_range.last);
+    /*
+     * If other vhost_vdpa already have an iova_tree, reuse it for simplicity,
+     * whether CVQ shares ASID with guest or not, because:
+     * - Memory listener need access to guest's memory addresses allocated in
+     *   the IOVA tree.
+     * - There should be plenty of IOVA address space for both ASID not to
+     *   worry about collisions between them. Guest's translations are still
+     *   validated with virtio virtqueue_pop so there is no risk for the guest
+     *   to access memory that it shouldn't.
+     *
+     * To allocate a iova tree per ASID is doable but it complicates the code
+     * and it is not worth it for the moment.
+     */
+    if (!v->shared->iova_tree) {
+        v->shared->iova_tree = vhost_iova_tree_new(v->iova_range.first,
+                                                   v->iova_range.last);
     }
 
     r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
-- 
2.39.3
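The "create once, reuse everywhere" shape this patch gives the IOVA tree can be sketched as below. This is a hypothetical, self-contained model, not QEMU code: VhostIOVATree is stubbed to a plain range holder, and only the lazy-initialization logic around VhostVDPAShared is shown.

```c
#include <assert.h>
#include <stdlib.h>

/* Stubbed-out IOVA tree: just records the usable range. */
typedef struct {
    long first, last;
} VhostIOVATree;

/* The tree now lives in the per-device shared struct. */
typedef struct {
    VhostIOVATree *iova_tree;
} VhostVDPAShared;

static VhostIOVATree *iova_tree_new(long first, long last)
{
    VhostIOVATree *t = calloc(1, sizeof(*t));
    t->first = first;
    t->last = last;
    return t;
}

/* Any vhost_vdpa of the device may call this; only the first call
 * actually allocates, so every caller ends up with the same tree. */
static VhostIOVATree *shared_iova_tree_get(VhostVDPAShared *s,
                                           long first, long last)
{
    if (!s->iova_tree) {
        s->iova_tree = iova_tree_new(first, last);
    }
    return s->iova_tree;
}
```

This is why the patch can delete the "copy the tree from the first net client" branch in vhost_vdpa_net_data_start: once the tree hangs off the shared struct, there is nothing to copy.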
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang, si-wei.liu@oracle.com, Laurent Vivier, Lei Yang
Subject: [PATCH v4 03/13] vdpa: move iova_range to vhost_vdpa_shared
Date: Thu, 21 Dec 2023 18:43:12 +0100
Message-Id: <20231221174322.3130442-4-eperezma@redhat.com>
In-Reply-To: <20231221174322.3130442-1-eperezma@redhat.com>

Next patches will register the vhost_vdpa memory listener while the VM
is migrating at the destination, so we can map the memory to the device
before stopping the VM at the source. The main goal is to reduce the
downtime.

However, the destination QEMU is unaware of which vhost_vdpa device
will register its memory_listener. If the source guest has CVQ enabled,
it will be the CVQ device.
Otherwise, it will be the first one. Move the iova range to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first or last vhost_vdpa. Signed-off-by: Eugenio P=C3=A9rez Acked-by: Jason Wang --- include/hw/virtio/vhost-vdpa.h | 3 ++- hw/virtio/vdpa-dev.c | 5 ++++- hw/virtio/vhost-vdpa.c | 16 ++++++++++------ net/vhost-vdpa.c | 10 +++++----- 4 files changed, 21 insertions(+), 13 deletions(-) diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h index ac036055d3..8d52a7e498 100644 --- a/include/hw/virtio/vhost-vdpa.h +++ b/include/hw/virtio/vhost-vdpa.h @@ -32,6 +32,8 @@ typedef struct VhostVDPAHostNotifier { =20 /* Info shared by all vhost_vdpa device models */ typedef struct vhost_vdpa_shared { + struct vhost_vdpa_iova_range iova_range; + /* IOVA mapping used by the Shadow Virtqueue */ VhostIOVATree *iova_tree; } VhostVDPAShared; @@ -43,7 +45,6 @@ typedef struct vhost_vdpa { bool iotlb_batch_begin_sent; uint32_t address_space_id; MemoryListener listener; - struct vhost_vdpa_iova_range iova_range; uint64_t acked_features; bool shadow_vqs_enabled; /* Vdpa must send shadow addresses as IOTLB key for data queues, not G= PA */ diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c index f22d5d5bc0..457960d28a 100644 --- a/hw/virtio/vdpa-dev.c +++ b/hw/virtio/vdpa-dev.c @@ -114,7 +114,8 @@ static void vhost_vdpa_device_realize(DeviceState *dev,= Error **errp) strerror(-ret)); goto free_vqs; } - v->vdpa.iova_range =3D iova_range; + v->vdpa.shared =3D g_new0(VhostVDPAShared, 1); + v->vdpa.shared->iova_range =3D iova_range; =20 ret =3D vhost_dev_init(&v->dev, &v->vdpa, VHOST_BACKEND_TYPE_VDPA, 0, = NULL); if (ret < 0) { @@ -162,6 +163,7 @@ vhost_cleanup: vhost_dev_cleanup(&v->dev); free_vqs: g_free(vqs); + g_free(v->vdpa.shared); out: qemu_close(v->vhostfd); v->vhostfd =3D -1; @@ -184,6 +186,7 @@ static void vhost_vdpa_device_unrealize(DeviceState *de= v) g_free(s->config); g_free(s->dev.vqs); vhost_dev_cleanup(&s->dev); + 
g_free(s->vdpa.shared); qemu_close(s->vhostfd); s->vhostfd =3D -1; } diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c index 9cee38cb6d..2bceadd118 100644 --- a/hw/virtio/vhost-vdpa.c +++ b/hw/virtio/vhost-vdpa.c @@ -213,10 +213,10 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier= *n, IOMMUTLBEntry *iotlb) RCU_READ_LOCK_GUARD(); /* check if RAM section out of device range */ llend =3D int128_add(int128_makes64(iotlb->addr_mask), int128_makes64(= iova)); - if (int128_gt(llend, int128_make64(v->iova_range.last))) { + if (int128_gt(llend, int128_make64(v->shared->iova_range.last))) { error_report("RAM section out of device range (max=3D0x%" PRIx64 ", end addr=3D0x%" PRIx64 ")", - v->iova_range.last, int128_get64(llend)); + v->shared->iova_range.last, int128_get64(llend)); return; } =20 @@ -316,8 +316,10 @@ static void vhost_vdpa_listener_region_add(MemoryListe= ner *listener, int page_size =3D qemu_target_page_size(); int page_mask =3D -page_size; =20 - if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first, - v->iova_range.last, page_mask)= ) { + if (vhost_vdpa_listener_skipped_section(section, + v->shared->iova_range.first, + v->shared->iova_range.last, + page_mask)) { return; } if (memory_region_is_iommu(section->mr)) { @@ -403,8 +405,10 @@ static void vhost_vdpa_listener_region_del(MemoryListe= ner *listener, int page_size =3D qemu_target_page_size(); int page_mask =3D -page_size; =20 - if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first, - v->iova_range.last, page_mask)= ) { + if (vhost_vdpa_listener_skipped_section(section, + v->shared->iova_range.first, + v->shared->iova_range.last, + page_mask)) { return; } if (memory_region_is_iommu(section->mr)) { diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c index 10703e5833..7be2c30ad3 100644 --- a/net/vhost-vdpa.c +++ b/net/vhost-vdpa.c @@ -354,8 +354,8 @@ static void vhost_vdpa_net_data_start_first(VhostVDPASt= ate *s) migration_add_notifier(&s->migration_state, 
                           vdpa_net_migration_state_notifier);
     if (v->shadow_vqs_enabled) {
-        v->shared->iova_tree = vhost_iova_tree_new(v->iova_range.first,
-                                                   v->iova_range.last);
+        v->shared->iova_tree = vhost_iova_tree_new(v->shared->iova_range.first,
+                                                   v->shared->iova_range.last);
     }
 }

@@ -591,8 +591,8 @@ out:
      * and it is not worth it for the moment.
      */
     if (!v->shared->iova_tree) {
-        v->shared->iova_tree = vhost_iova_tree_new(v->iova_range.first,
-                                                   v->iova_range.last);
+        v->shared->iova_tree = vhost_iova_tree_new(v->shared->iova_range.first,
+                                                   v->shared->iova_range.last);
     }

     r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
@@ -1688,12 +1688,12 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->always_svq = svq;
     s->migration_state.notify = NULL;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
-    s->vhost_vdpa.iova_range = iova_range;
     s->vhost_vdpa.shadow_data = svq;
     if (queue_pair_index == 0) {
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
         s->vhost_vdpa.shared = g_new0(VhostVDPAShared, 1);
+        s->vhost_vdpa.shared->iova_range = iova_range;
     } else if (!is_datapath) {
         s->cvq_cmd_out_buffer = mmap(NULL, vhost_vdpa_net_cvq_cmd_page_len(),
                                      PROT_READ | PROT_WRITE,
-- 
2.39.3

From nobody Tue Nov 26 22:10:59 2024
From: Eugenio Pérez
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella, "Michael S.
  Tsirkin", Jason Wang, si-wei.liu@oracle.com, Laurent Vivier, Lei Yang
Subject: [PATCH v4 04/13] vdpa: move shadow_data to vhost_vdpa_shared
Date: Thu, 21 Dec 2023 18:43:13 +0100
Message-Id: <20231221174322.3130442-5-eperezma@redhat.com>
In-Reply-To: <20231221174322.3130442-1-eperezma@redhat.com>
References: <20231221174322.3130442-1-eperezma@redhat.com>

Next patches will register the vhost_vdpa memory listener while the VM
is migrating at the destination, so we can map the memory to the device
before stopping the VM at the source.  The main goal is to reduce the
downtime.

However, the destination QEMU is unaware of which vhost_vdpa device will
register its memory_listener.  If the source guest has CVQ enabled, it
will be the CVQ device.
Otherwise, it will be the first one.

Move the shadow_data member to VhostVDPAShared so all vhost_vdpa can use
it, rather than always in the first or last vhost_vdpa.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
v1 from RFC:
* Fix vhost_vdpa_net_cvq_start checking for always_svq instead of
  shadow_data.  This could cause CVQ not being shadowed if
  vhost_vdpa_net_cvq_start was called in the middle of a migration.
v2:
* Avoid repeatedly setting shared->shadow_data by squashing Si-Wei's
  patch [1]

[1] https://patchwork.kernel.org/project/qemu-devel/patch/1701970793-6865-10-git-send-email-si-wei.liu@oracle.com/
---
 include/hw/virtio/vhost-vdpa.h |  5 +++--
 hw/virtio/vhost-vdpa.c         |  6 +++---
 net/vhost-vdpa.c               | 22 +++++-----------------
 3 files changed, 11 insertions(+), 22 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 8d52a7e498..01e0f25e27 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -36,6 +36,9 @@ typedef struct vhost_vdpa_shared {

     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
+
+    /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
+    bool shadow_data;
 } VhostVDPAShared;

 typedef struct vhost_vdpa {
@@ -47,8 +50,6 @@ typedef struct vhost_vdpa {
     MemoryListener listener;
     uint64_t acked_features;
     bool shadow_vqs_enabled;
-    /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
-    bool shadow_data;
     /* Device suspended successfully */
     bool suspended;
     VhostVDPAShared *shared;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 2bceadd118..ec028e4c56 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -353,7 +353,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                          vaddr, section->readonly);

     llsize = int128_sub(llend, int128_make64(iova));
-    if (v->shadow_data) {
+    if (v->shared->shadow_data) {
         int r;

         mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
@@ -380,7 +380,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     return;

 fail_map:
-    if (v->shadow_data) {
+    if (v->shared->shadow_data) {
         vhost_iova_tree_remove(v->shared->iova_tree, mem_region);
     }

@@ -435,7 +435,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,

     llsize = int128_sub(llend, int128_make64(iova));

-    if (v->shadow_data) {
+    if (v->shared->shadow_data) {
         const DMAMap *result;
         const void *vaddr = memory_region_get_ram_ptr(section->mr) +
                             section->offset_within_region +
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 7be2c30ad3..bf8e8327da 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -290,15 +290,6 @@ static ssize_t vhost_vdpa_receive(NetClientState *nc, const uint8_t *buf,
     return size;
 }

-/** From any vdpa net client, get the netclient of the first queue pair */
-static VhostVDPAState *vhost_vdpa_net_first_nc_vdpa(VhostVDPAState *s)
-{
-    NICState *nic = qemu_get_nic(s->nc.peer);
-    NetClientState *nc0 = qemu_get_peer(nic->ncs, 0);
-
-    return DO_UPCAST(VhostVDPAState, nc, nc0);
-}
-
 static void vhost_vdpa_net_log_global_enable(VhostVDPAState *s, bool enable)
 {
     struct vhost_vdpa *v = &s->vhost_vdpa;
@@ -369,13 +360,12 @@ static int vhost_vdpa_net_data_start(NetClientState *nc)
     if (s->always_svq ||
         migration_is_setup_or_active(migrate_get_current()->state)) {
         v->shadow_vqs_enabled = true;
-        v->shadow_data = true;
     } else {
         v->shadow_vqs_enabled = false;
-        v->shadow_data = false;
     }

     if (v->index == 0) {
+        v->shared->shadow_data = v->shadow_vqs_enabled;
         vhost_vdpa_net_data_start_first(s);
         return 0;
     }
@@ -523,7 +513,7 @@ dma_map_err:

 static int vhost_vdpa_net_cvq_start(NetClientState *nc)
 {
-    VhostVDPAState *s, *s0;
+    VhostVDPAState *s;
     struct vhost_vdpa *v;
     int64_t cvq_group;
     int r;
@@ -534,12 +524,10 @@ static int vhost_vdpa_net_cvq_start(NetClientState *nc)
     s = DO_UPCAST(VhostVDPAState, nc, nc);
     v = &s->vhost_vdpa;

-    s0 = vhost_vdpa_net_first_nc_vdpa(s);
-    v->shadow_data = s0->vhost_vdpa.shadow_vqs_enabled;
-    v->shadow_vqs_enabled = s0->vhost_vdpa.shadow_vqs_enabled;
+    v->shadow_vqs_enabled = v->shared->shadow_data;
     s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;

-    if (s->vhost_vdpa.shadow_data) {
+    if (v->shared->shadow_data) {
         /* SVQ is already configured for all virtqueues */
         goto out;
     }
@@ -1688,12 +1676,12 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->always_svq = svq;
     s->migration_state.notify = NULL;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
-    s->vhost_vdpa.shadow_data = svq;
     if (queue_pair_index == 0) {
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
         s->vhost_vdpa.shared = g_new0(VhostVDPAShared, 1);
         s->vhost_vdpa.shared->iova_range = iova_range;
+        s->vhost_vdpa.shared->shadow_data = svq;
     } else if (!is_datapath) {
         s->cvq_cmd_out_buffer = mmap(NULL, vhost_vdpa_net_cvq_cmd_page_len(),
                                      PROT_READ | PROT_WRITE,
-- 
2.39.3

From nobody Tue Nov 26 22:10:59 2024
From: Eugenio Pérez
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang, si-wei.liu@oracle.com, Laurent Vivier, Lei Yang
Subject: [PATCH v4 05/13] vdpa: use vdpa shared for tracing
Date: Thu, 21 Dec 2023 18:43:14 +0100
Message-Id: <20231221174322.3130442-6-eperezma@redhat.com>
In-Reply-To: <20231221174322.3130442-1-eperezma@redhat.com>
References: <20231221174322.3130442-1-eperezma@redhat.com>
By the end of this series, the dma_map and dma_unmap functions no longer
have the vdpa device available for tracing.  Move the trace functions to
the shared member instead.  Print it also in the vdpa initialization so
the log reader can relate them.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 hw/virtio/vhost-vdpa.c | 26 ++++++++++++++------------
 hw/virtio/trace-events | 14 +++++++-------
 2 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index ec028e4c56..85de60b184 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -101,7 +101,7 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
     msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
     msg.iotlb.type = VHOST_IOTLB_UPDATE;

-    trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.asid, msg.iotlb.iova,
+    trace_vhost_vdpa_dma_map(v->shared, fd, msg.type, msg.asid, msg.iotlb.iova,
                              msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
                              msg.iotlb.type);

@@ -131,8 +131,8 @@ int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
     msg.iotlb.size = size;
     msg.iotlb.type = VHOST_IOTLB_INVALIDATE;

-    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.asid, msg.iotlb.iova,
-                               msg.iotlb.size, msg.iotlb.type);
+    trace_vhost_vdpa_dma_unmap(v->shared, fd, msg.type, msg.asid,
+                               msg.iotlb.iova, msg.iotlb.size, msg.iotlb.type);

     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
@@ -151,7 +151,8 @@ static void vhost_vdpa_listener_begin_batch(struct vhost_vdpa *v)
         .iotlb.type = VHOST_IOTLB_BATCH_BEGIN,
     };

-    trace_vhost_vdpa_listener_begin_batch(v, fd, msg.type, msg.iotlb.type);
+    trace_vhost_vdpa_listener_begin_batch(v->shared, fd, msg.type,
+                                          msg.iotlb.type);
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
                      fd, errno, strerror(errno));
@@ -186,7 +187,7 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
     msg.type = v->msg_type;
     msg.iotlb.type = VHOST_IOTLB_BATCH_END;

-    trace_vhost_vdpa_listener_commit(v, fd, msg.type, msg.iotlb.type);
+    trace_vhost_vdpa_listener_commit(v->shared, fd, msg.type, msg.iotlb.type);
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
                      fd, errno, strerror(errno));
@@ -329,7 +330,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,

     if (unlikely((section->offset_within_address_space & ~page_mask) !=
                  (section->offset_within_region & ~page_mask))) {
-        trace_vhost_vdpa_listener_region_add_unaligned(v, section->mr->name,
+        trace_vhost_vdpa_listener_region_add_unaligned(v->shared,
+                                                       section->mr->name,
                        section->offset_within_address_space & ~page_mask,
                        section->offset_within_region & ~page_mask);
         return;
@@ -349,7 +351,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                     section->offset_within_region +
                                     (iova - section->offset_within_address_space);

-    trace_vhost_vdpa_listener_region_add(v, iova, int128_get64(llend),
+    trace_vhost_vdpa_listener_region_add(v->shared, iova, int128_get64(llend),
                                          vaddr, section->readonly);

     llsize = int128_sub(llend, int128_make64(iova));
@@ -417,7 +419,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,

     if (unlikely((section->offset_within_address_space & ~page_mask) !=
                  (section->offset_within_region & ~page_mask))) {
-        trace_vhost_vdpa_listener_region_del_unaligned(v, section->mr->name,
+        trace_vhost_vdpa_listener_region_del_unaligned(v->shared,
+                                                       section->mr->name,
                        section->offset_within_address_space & ~page_mask,
                        section->offset_within_region & ~page_mask);
         return;
@@ -426,7 +429,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
     iova = ROUND_UP(section->offset_within_address_space, page_size);
     llend = vhost_vdpa_section_end(section, page_mask);

-    trace_vhost_vdpa_listener_region_del(v, iova,
+    trace_vhost_vdpa_listener_region_del(v->shared, iova,
                            int128_get64(int128_sub(llend, int128_one())));

     if (int128_ge(int128_make64(iova), llend)) {
@@ -583,12 +586,11 @@ static void vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v)

 static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
 {
-    struct vhost_vdpa *v;
+    struct vhost_vdpa *v = opaque;
     assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
-    trace_vhost_vdpa_init(dev, opaque);
+    trace_vhost_vdpa_init(dev, v->shared, opaque);
     int ret;

-    v = opaque;
     v->dev = dev;
     dev->opaque = opaque ;
     v->listener = vhost_vdpa_memory_listener;
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 637cac4edf..77905d1994 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -30,16 +30,16 @@ vhost_user_write(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
 vhost_user_create_notifier(int idx, void *n) "idx:%d n:%p"

 # vhost-vdpa.c
-vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
-vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
-vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
-vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
-vhost_vdpa_listener_region_add_unaligned(void *v, const char *name, uint64_t offset_as, uint64_t offset_page) "vdpa: %p region %s offset_within_address_space %"PRIu64" offset_within_region %"PRIu64
+vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa_shared:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
+vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa_shared:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
+vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa_shared:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
+vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa_shared:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
+vhost_vdpa_listener_region_add_unaligned(void *v, const char *name, uint64_t offset_as, uint64_t offset_page) "vdpa_shared: %p region %s offset_within_address_space %"PRIu64" offset_within_region %"PRIu64
 vhost_vdpa_listener_region_add(void *vdpa, uint64_t iova, uint64_t llend, void *vaddr, bool readonly) "vdpa: %p iova 0x%"PRIx64" llend 0x%"PRIx64" vaddr: %p read-only: %d"
-vhost_vdpa_listener_region_del_unaligned(void *v, const char *name, uint64_t offset_as, uint64_t offset_page) "vdpa: %p region %s offset_within_address_space %"PRIu64" offset_within_region %"PRIu64
+vhost_vdpa_listener_region_del_unaligned(void *v, const char *name, uint64_t offset_as, uint64_t offset_page) "vdpa_shared: %p region %s offset_within_address_space %"PRIu64" offset_within_region %"PRIu64
 vhost_vdpa_listener_region_del(void *vdpa, uint64_t iova, uint64_t llend) "vdpa: %p iova 0x%"PRIx64" llend 0x%"PRIx64
 vhost_vdpa_add_status(void *dev, uint8_t status) "dev: %p status: 0x%"PRIx8
-vhost_vdpa_init(void *dev, void *vdpa) "dev: %p vdpa: %p"
+vhost_vdpa_init(void *dev, void *s, void *vdpa) "dev: %p, common dev: %p vdpa: %p"
 vhost_vdpa_cleanup(void *dev, void *vdpa) "dev: %p vdpa: %p"
 vhost_vdpa_memslots_limit(void *dev, int ret) "dev: %p = 0x%x"
 vhost_vdpa_set_mem_table(void *dev, uint32_t nregions, uint32_t padding) "dev: %p nregions: %"PRIu32" padding: 0x%"PRIx32
-- 
2.39.3

From nobody Tue Nov 26 22:10:59 2024
From: Eugenio Pérez
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella, "Michael S.
  Tsirkin", Jason Wang, si-wei.liu@oracle.com, Laurent Vivier, Lei Yang
Subject: [PATCH v4 06/13] vdpa: move file descriptor to vhost_vdpa_shared
Date: Thu, 21 Dec 2023 18:43:15 +0100
Message-Id: <20231221174322.3130442-7-eperezma@redhat.com>
In-Reply-To: <20231221174322.3130442-1-eperezma@redhat.com>
References: <20231221174322.3130442-1-eperezma@redhat.com>

Next patches will register the vhost_vdpa memory listener while the VM
is migrating at the destination, so we can map the memory to the device
before stopping the VM at the source.  The main goal is to reduce the
downtime.

However, the destination QEMU is unaware of which vhost_vdpa device will
register its memory_listener.  If the source guest has CVQ enabled, it
will be the CVQ device.
Otherwise, it will be the first one.

Move the file descriptor to VhostVDPAShared so all vhost_vdpa can use
it, rather than always in the first or last vhost_vdpa.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 include/hw/virtio/vhost-vdpa.h |  2 +-
 hw/virtio/vdpa-dev.c           |  2 +-
 hw/virtio/vhost-vdpa.c         | 14 +++++++-------
 net/vhost-vdpa.c               | 11 ++++-------
 4 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 01e0f25e27..796a180afa 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -32,6 +32,7 @@ typedef struct VhostVDPAHostNotifier {

 /* Info shared by all vhost_vdpa device models */
 typedef struct vhost_vdpa_shared {
+    int device_fd;
     struct vhost_vdpa_iova_range iova_range;

     /* IOVA mapping used by the Shadow Virtqueue */
@@ -42,7 +43,6 @@ typedef struct vhost_vdpa_shared {
 } VhostVDPAShared;

 typedef struct vhost_vdpa {
-    int device_fd;
     int index;
     uint32_t msg_type;
     bool iotlb_batch_begin_sent;
diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
index 457960d28a..8774986571 100644
--- a/hw/virtio/vdpa-dev.c
+++ b/hw/virtio/vdpa-dev.c
@@ -66,7 +66,6 @@ static void vhost_vdpa_device_realize(DeviceState *dev, Error **errp)
     if (*errp) {
         return;
     }
-    v->vdpa.device_fd = v->vhostfd;

     v->vdev_id = vhost_vdpa_device_get_u32(v->vhostfd,
                                            VHOST_VDPA_GET_DEVICE_ID, errp);
@@ -115,6 +114,7 @@ static void vhost_vdpa_device_realize(DeviceState *dev, Error **errp)
         goto free_vqs;
     }
     v->vdpa.shared = g_new0(VhostVDPAShared, 1);
+    v->vdpa.shared->device_fd = v->vhostfd;
     v->vdpa.shared->iova_range = iova_range;

     ret = vhost_dev_init(&v->dev, &v->vdpa, VHOST_BACKEND_TYPE_VDPA, 0, NULL);
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 85de60b184..095543395b 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -90,7 +90,7 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
                        hwaddr size, void *vaddr, bool readonly)
 {
     struct vhost_msg_v2 msg = {};
-    int fd = v->device_fd;
+    int fd = v->shared->device_fd;
     int ret = 0;
 
     msg.type = v->msg_type;
@@ -122,7 +122,7 @@ int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
                          hwaddr size)
 {
     struct vhost_msg_v2 msg = {};
-    int fd = v->device_fd;
+    int fd = v->shared->device_fd;
     int ret = 0;
 
     msg.type = v->msg_type;
@@ -145,7 +145,7 @@ int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
 
 static void vhost_vdpa_listener_begin_batch(struct vhost_vdpa *v)
 {
-    int fd = v->device_fd;
+    int fd = v->shared->device_fd;
     struct vhost_msg_v2 msg = {
         .type = v->msg_type,
         .iotlb.type = VHOST_IOTLB_BATCH_BEGIN,
@@ -174,7 +174,7 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
     struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
     struct vhost_dev *dev = v->dev;
     struct vhost_msg_v2 msg = {};
-    int fd = v->device_fd;
+    int fd = v->shared->device_fd;
 
     if (!(dev->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH))) {
         return;
@@ -499,7 +499,7 @@ static int vhost_vdpa_call(struct vhost_dev *dev, unsigned long int request,
                            void *arg)
 {
     struct vhost_vdpa *v = dev->opaque;
-    int fd = v->device_fd;
+    int fd = v->shared->device_fd;
     int ret;
 
     assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
@@ -657,7 +657,7 @@ static int vhost_vdpa_host_notifier_init(struct vhost_dev *dev, int queue_index)
     struct vhost_vdpa *v = dev->opaque;
     VirtIODevice *vdev = dev->vdev;
     VhostVDPAHostNotifier *n;
-    int fd = v->device_fd;
+    int fd = v->shared->device_fd;
     void *addr;
     char *name;
 
@@ -1286,7 +1286,7 @@ static void vhost_vdpa_suspend(struct vhost_dev *dev)
 
     if (dev->backend_cap & BIT_ULL(VHOST_BACKEND_F_SUSPEND)) {
         trace_vhost_vdpa_suspend(dev);
-        r = ioctl(v->device_fd, VHOST_VDPA_SUSPEND);
+        r = ioctl(v->shared->device_fd, VHOST_VDPA_SUSPEND);
         if (unlikely(r)) {
             error_report("Cannot suspend: %s(%d)", g_strerror(errno), errno);
         } else {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index bf8e8327da..10cf0027de 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -235,14 +235,11 @@ static void vhost_vdpa_cleanup(NetClientState *nc)
         vhost_net_cleanup(s->vhost_net);
         g_free(s->vhost_net);
         s->vhost_net = NULL;
-    }
-    if (s->vhost_vdpa.device_fd >= 0) {
-        qemu_close(s->vhost_vdpa.device_fd);
-        s->vhost_vdpa.device_fd = -1;
     }
     if (s->vhost_vdpa.index != 0) {
         return;
     }
+    qemu_close(s->vhost_vdpa.shared->device_fd);
     g_free(s->vhost_vdpa.shared);
 }
 
@@ -448,7 +445,7 @@ static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
     };
     int r;
 
-    r = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
+    r = ioctl(v->shared->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
     if (unlikely(r < 0)) {
         error_report("Can't set vq group %u asid %u, errno=%d (%s)",
                      asid.index, asid.num, errno, g_strerror(errno));
@@ -544,7 +541,7 @@ static int vhost_vdpa_net_cvq_start(NetClientState *nc)
         return 0;
     }
 
-    cvq_group = vhost_vdpa_get_vring_group(v->device_fd,
+    cvq_group = vhost_vdpa_get_vring_group(v->shared->device_fd,
                                            v->dev->vq_index_end - 1,
                                            &err);
     if (unlikely(cvq_group < 0)) {
@@ -1671,7 +1668,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     qemu_set_info_str(nc, TYPE_VHOST_VDPA);
     s = DO_UPCAST(VhostVDPAState, nc, nc);
 
-    s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
     s->always_svq = svq;
     s->migration_state.notify = NULL;
@@ -1680,6 +1676,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
         s->vhost_vdpa.shared = g_new0(VhostVDPAShared, 1);
+        s->vhost_vdpa.shared->device_fd = vdpa_device_fd;
        s->vhost_vdpa.shared->iova_range = iova_range;
         s->vhost_vdpa.shared->shadow_data = svq;
     } else if (!is_datapath) {
-- 
2.39.3
From nobody Tue Nov 26 22:10:59 2024
From: Eugenio Pérez
To: qemu-devel@nongnu.org
Subject: [PATCH v4 07/13] vdpa: move iotlb_batch_begin_sent to vhost_vdpa_shared
Date: Thu, 21 Dec 2023 18:43:16 +0100
Message-Id: <20231221174322.3130442-8-eperezma@redhat.com>

Next patches will register the vhost_vdpa memory listener while the VM is
migrating at the destination, so we can map the memory to the device before
stopping the VM at the source.  The main goal is to reduce the downtime.

However, the destination QEMU is unaware of which vhost_vdpa device will
register its memory_listener.
If the source guest has CVQ enabled, it will be the CVQ device.  Otherwise,
it will be the first one.

Move the iotlb_batch_begin_sent member to VhostVDPAShared so all vhost_vdpa
can use it, rather than always in the first / last vhost_vdpa.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 include/hw/virtio/vhost-vdpa.h | 3 ++-
 hw/virtio/vhost-vdpa.c         | 8 ++++----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 796a180afa..05219bbcf7 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -38,6 +38,8 @@ typedef struct vhost_vdpa_shared {
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
 
+    bool iotlb_batch_begin_sent;
+
     /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
     bool shadow_data;
 } VhostVDPAShared;
@@ -45,7 +47,6 @@ typedef struct vhost_vdpa_shared {
 
 typedef struct vhost_vdpa {
     int index;
     uint32_t msg_type;
-    bool iotlb_batch_begin_sent;
     uint32_t address_space_id;
     MemoryListener listener;
     uint64_t acked_features;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 095543395b..85b13e09f4 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -162,11 +162,11 @@ static void vhost_vdpa_listener_begin_batch(struct vhost_vdpa *v)
 static void vhost_vdpa_iotlb_batch_begin_once(struct vhost_vdpa *v)
 {
     if (v->dev->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH) &&
-        !v->iotlb_batch_begin_sent) {
+        !v->shared->iotlb_batch_begin_sent) {
         vhost_vdpa_listener_begin_batch(v);
     }
 
-    v->iotlb_batch_begin_sent = true;
+    v->shared->iotlb_batch_begin_sent = true;
 }
 
 static void vhost_vdpa_listener_commit(MemoryListener *listener)
@@ -180,7 +180,7 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
         return;
     }
 
-    if (!v->iotlb_batch_begin_sent) {
+    if (!v->shared->iotlb_batch_begin_sent) {
         return;
     }
 
@@ -193,7 +193,7 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
                      fd, errno, strerror(errno));
     }
 
-    v->iotlb_batch_begin_sent = false;
+    v->shared->iotlb_batch_begin_sent = false;
 }
 
 static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
-- 
2.39.3

From nobody Tue Nov 26 22:10:59 2024
From: Eugenio Pérez
To: qemu-devel@nongnu.org
Subject: [PATCH v4 08/13] vdpa: move backend_cap to vhost_vdpa_shared
Date: Thu, 21 Dec 2023 18:43:17 +0100
Message-Id: <20231221174322.3130442-9-eperezma@redhat.com>

Next patches will register the vhost_vdpa memory listener while the VM is
migrating at the destination, so we can map the memory to the device before
stopping the VM at the source.  The main goal is to reduce the downtime.

However, the destination QEMU is unaware of which vhost_vdpa device will
register its memory_listener.
If the source guest has CVQ enabled, it will be the CVQ device.  Otherwise,
it will be the first one.

Move the backend_cap member to VhostVDPAShared so all vhost_vdpa can use it,
rather than always in the first / last vhost_vdpa.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 include/hw/virtio/vhost-vdpa.h | 3 +++
 hw/virtio/vhost-vdpa.c         | 8 +++++---
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 05219bbcf7..11ac14085a 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -38,6 +38,9 @@ typedef struct vhost_vdpa_shared {
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
 
+    /* Copy of backend features */
+    uint64_t backend_cap;
+
     bool iotlb_batch_begin_sent;
 
     /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 85b13e09f4..458e46befd 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -161,7 +161,7 @@ static void vhost_vdpa_listener_begin_batch(struct vhost_vdpa *v)
 
 static void vhost_vdpa_iotlb_batch_begin_once(struct vhost_vdpa *v)
 {
-    if (v->dev->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH) &&
+    if (v->shared->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH) &&
         !v->shared->iotlb_batch_begin_sent) {
         vhost_vdpa_listener_begin_batch(v);
     }
@@ -172,11 +172,10 @@ static void vhost_vdpa_iotlb_batch_begin_once(struct vhost_vdpa *v)
 
 static void vhost_vdpa_listener_commit(MemoryListener *listener)
 {
     struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
-    struct vhost_dev *dev = v->dev;
     struct vhost_msg_v2 msg = {};
     int fd = v->shared->device_fd;
 
-    if (!(dev->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH))) {
+    if (!(v->shared->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH))) {
         return;
     }
 
@@ -834,6 +833,8 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
 
 static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
 {
+    struct vhost_vdpa *v = dev->opaque;
+
     uint64_t features;
     uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
                  0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
@@ -855,6 +856,7 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
     }
 
     dev->backend_cap = features;
+    v->shared->backend_cap = features;
 
     return 0;
 }
-- 
2.39.3

From nobody Tue Nov 26 22:10:59 2024
From: Eugenio Pérez
To: qemu-devel@nongnu.org
Subject: [PATCH v4 09/13] vdpa: remove msg type of vhost_vdpa
Date: Thu, 21 Dec 2023 18:43:18 +0100
Message-Id: <20231221174322.3130442-10-eperezma@redhat.com>

It is always VHOST_IOTLB_MSG_V2.  We can always make it back per vhost_dev
if needed.
This change makes it easier for vhost_vdpa_map and unmap to depend not on
vhost_vdpa but only on VhostVDPAShared.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 include/hw/virtio/vhost-vdpa.h | 1 -
 hw/virtio/vhost-vdpa.c         | 9 ++++-----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 11ac14085a..5bd964dac5 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -49,7 +49,6 @@ typedef struct vhost_vdpa_shared {
 
 typedef struct vhost_vdpa {
     int index;
-    uint32_t msg_type;
     uint32_t address_space_id;
     MemoryListener listener;
     uint64_t acked_features;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 458e46befd..38afcbf1c9 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -93,7 +93,7 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
     int fd = v->shared->device_fd;
     int ret = 0;
 
-    msg.type = v->msg_type;
+    msg.type = VHOST_IOTLB_MSG_V2;
     msg.asid = asid;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
@@ -125,7 +125,7 @@ int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
     int fd = v->shared->device_fd;
     int ret = 0;
 
-    msg.type = v->msg_type;
+    msg.type = VHOST_IOTLB_MSG_V2;
     msg.asid = asid;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
@@ -147,7 +147,7 @@ static void vhost_vdpa_listener_begin_batch(struct vhost_vdpa *v)
 {
     int fd = v->shared->device_fd;
     struct vhost_msg_v2 msg = {
-        .type = v->msg_type,
+        .type = VHOST_IOTLB_MSG_V2,
         .iotlb.type = VHOST_IOTLB_BATCH_BEGIN,
     };
 
@@ -183,7 +183,7 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
         return;
     }
 
-    msg.type = v->msg_type;
+    msg.type = VHOST_IOTLB_MSG_V2;
     msg.iotlb.type = VHOST_IOTLB_BATCH_END;
 
     trace_vhost_vdpa_listener_commit(v->shared, fd, msg.type, msg.iotlb.type);
@@ -593,7 +593,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
     v->dev = dev;
     dev->opaque =  opaque ;
     v->listener = vhost_vdpa_memory_listener;
-    v->msg_type = VHOST_IOTLB_MSG_V2;
     vhost_vdpa_init_svq(dev, v);
 
     error_propagate(&dev->migration_blocker, v->migration_blocker);
-- 
2.39.3

From nobody Tue Nov 26 22:10:59 2024
From: Eugenio Pérez
To: qemu-devel@nongnu.org
Subject: [PATCH v4 10/13] vdpa: move iommu_list to vhost_vdpa_shared
Date: Thu, 21 Dec 2023 18:43:19 +0100
Message-Id: <20231221174322.3130442-11-eperezma@redhat.com>

Next patches will register the vhost_vdpa memory listener while the VM is
migrating at the destination, so we can map the memory to the device before
stopping the VM at the source.  The main goal is to reduce the downtime.

However, the destination QEMU is unaware of which vhost_vdpa device will
register its memory_listener.
If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one. Move the iommu_list member to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first / last vhost_vdpa. Signed-off-by: Eugenio P=C3=A9rez Acked-by: Jason Wang --- include/hw/virtio/vhost-vdpa.h | 2 +- hw/virtio/vhost-vdpa.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h index 5bd964dac5..3880b9e7f2 100644 --- a/include/hw/virtio/vhost-vdpa.h +++ b/include/hw/virtio/vhost-vdpa.h @@ -34,6 +34,7 @@ typedef struct VhostVDPAHostNotifier { typedef struct vhost_vdpa_shared { int device_fd; struct vhost_vdpa_iova_range iova_range; + QLIST_HEAD(, vdpa_iommu) iommu_list; =20 /* IOVA mapping used by the Shadow Virtqueue */ VhostIOVATree *iova_tree; @@ -62,7 +63,6 @@ typedef struct vhost_vdpa { struct vhost_dev *dev; Error *migration_blocker; VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX]; - QLIST_HEAD(, vdpa_iommu) iommu_list; IOMMUNotifier n; } VhostVDPA; =20 diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c index 38afcbf1c9..a07cd85081 100644 --- a/hw/virtio/vhost-vdpa.c +++ b/hw/virtio/vhost-vdpa.c @@ -279,7 +279,7 @@ static void vhost_vdpa_iommu_region_add(MemoryListener = *listener, return; } =20 - QLIST_INSERT_HEAD(&v->iommu_list, iommu, iommu_next); + QLIST_INSERT_HEAD(&v->shared->iommu_list, iommu, iommu_next); memory_region_iommu_replay(iommu->iommu_mr, &iommu->n); =20 return; @@ -292,7 +292,7 @@ static void vhost_vdpa_iommu_region_del(MemoryListener = *listener, =20 struct vdpa_iommu *iommu; =20 - QLIST_FOREACH(iommu, &v->iommu_list, iommu_next) + QLIST_FOREACH(iommu, &v->shared->iommu_list, iommu_next) { if (MEMORY_REGION(iommu->iommu_mr) =3D=3D section->mr && iommu->n.start =3D=3D section->offset_within_region) { --=20 2.39.3 From nobody Tue Nov 26 22:10:59 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; 
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella,
    "Michael S. Tsirkin", Jason Wang, si-wei.liu@oracle.com,
    Laurent Vivier, Lei Yang
Subject: [PATCH v4 11/13] vdpa: use VhostVDPAShared in vdpa_dma_map and unmap
Date: Thu, 21 Dec 2023 18:43:20 +0100
Message-Id: <20231221174322.3130442-12-eperezma@redhat.com>

The callers only have the shared information by the end of this series.
Start converting these functions.
Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 include/hw/virtio/vhost-vdpa.h |  4 +--
 hw/virtio/vhost-vdpa.c         | 50 +++++++++++++++++-----------------
 net/vhost-vdpa.c               |  5 ++--
 3 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 3880b9e7f2..705c754776 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -69,9 +69,9 @@ typedef struct vhost_vdpa {
 int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
 int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx);
 
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
                        hwaddr size, void *vaddr, bool readonly);
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
                          hwaddr size);
 
 typedef struct vdpa_iommu {
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index a07cd85081..0ed6550aad 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -86,11 +86,11 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
  * The caller must set asid = 0 if the device does not support asid.
  * This is not an ABI break since it is set to 0 by the initializer anyway.
  */
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
                        hwaddr size, void *vaddr, bool readonly)
 {
     struct vhost_msg_v2 msg = {};
-    int fd = v->shared->device_fd;
+    int fd = s->device_fd;
     int ret = 0;
 
     msg.type = VHOST_IOTLB_MSG_V2;
@@ -101,7 +101,7 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
     msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
     msg.iotlb.type = VHOST_IOTLB_UPDATE;
 
-    trace_vhost_vdpa_dma_map(v->shared, fd, msg.type, msg.asid, msg.iotlb.iova,
+    trace_vhost_vdpa_dma_map(s, fd, msg.type, msg.asid, msg.iotlb.iova,
                              msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
                              msg.iotlb.type);
 
@@ -118,11 +118,11 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
  * The caller must set asid = 0 if the device does not support asid.
  * This is not an ABI break since it is set to 0 by the initializer anyway.
  */
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
                          hwaddr size)
 {
     struct vhost_msg_v2 msg = {};
-    int fd = v->shared->device_fd;
+    int fd = s->device_fd;
     int ret = 0;
 
     msg.type = VHOST_IOTLB_MSG_V2;
@@ -131,8 +131,8 @@ int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
     msg.iotlb.size = size;
     msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
 
-    trace_vhost_vdpa_dma_unmap(v->shared, fd, msg.type, msg.asid,
-                               msg.iotlb.iova, msg.iotlb.size, msg.iotlb.type);
+    trace_vhost_vdpa_dma_unmap(s, fd, msg.type, msg.asid, msg.iotlb.iova,
+                               msg.iotlb.size, msg.iotlb.type);
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
@@ -143,30 +143,29 @@ int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
     return ret;
 }
 
-static void vhost_vdpa_listener_begin_batch(struct vhost_vdpa *v)
+static void vhost_vdpa_listener_begin_batch(VhostVDPAShared *s)
 {
-    int fd = v->shared->device_fd;
+    int fd = s->device_fd;
     struct vhost_msg_v2 msg = {
         .type = VHOST_IOTLB_MSG_V2,
         .iotlb.type = VHOST_IOTLB_BATCH_BEGIN,
     };
 
-    trace_vhost_vdpa_listener_begin_batch(v->shared, fd, msg.type,
-                                          msg.iotlb.type);
+    trace_vhost_vdpa_listener_begin_batch(s, fd, msg.type, msg.iotlb.type);
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
                      fd, errno, strerror(errno));
     }
 }
 
-static void vhost_vdpa_iotlb_batch_begin_once(struct vhost_vdpa *v)
+static void vhost_vdpa_iotlb_batch_begin_once(VhostVDPAShared *s)
 {
-    if (v->shared->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH) &&
-        !v->shared->iotlb_batch_begin_sent) {
-        vhost_vdpa_listener_begin_batch(v);
+    if (s->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH) &&
+        !s->iotlb_batch_begin_sent) {
+        vhost_vdpa_listener_begin_batch(s);
     }
 
-    v->shared->iotlb_batch_begin_sent = true;
+    s->iotlb_batch_begin_sent = true;
 }
 
 static void vhost_vdpa_listener_commit(MemoryListener *listener)
@@ -226,7 +225,7 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
         if (!memory_get_xlat_addr(iotlb, &vaddr, NULL, &read_only, NULL)) {
             return;
         }
-        ret = vhost_vdpa_dma_map(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+        ret = vhost_vdpa_dma_map(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova,
                                  iotlb->addr_mask + 1, vaddr, read_only);
         if (ret) {
             error_report("vhost_vdpa_dma_map(%p, 0x%" HWADDR_PRIx ", "
@@ -234,7 +233,7 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
                          v, iova, iotlb->addr_mask + 1, vaddr, ret);
         }
     } else {
-        ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+        ret = vhost_vdpa_dma_unmap(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova,
                                    iotlb->addr_mask + 1);
         if (ret) {
             error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", "
@@ -370,8 +369,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
         iova = mem_region.iova;
     }
 
-    vhost_vdpa_iotlb_batch_begin_once(v);
-    ret = vhost_vdpa_dma_map(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+    vhost_vdpa_iotlb_batch_begin_once(v->shared);
+    ret = vhost_vdpa_dma_map(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova,
                              int128_get64(llsize), vaddr, section->readonly);
     if (ret) {
         error_report("vhost vdpa map fail!");
@@ -455,13 +454,13 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
         iova = result->iova;
         vhost_iova_tree_remove(v->shared->iova_tree, *result);
     }
-    vhost_vdpa_iotlb_batch_begin_once(v);
+    vhost_vdpa_iotlb_batch_begin_once(v->shared);
     /*
      * The unmap ioctl doesn't accept a full 64-bit. need to check it
      */
     if (int128_eq(llsize, int128_2_64())) {
         llsize = int128_rshift(llsize, 1);
-        ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+        ret = vhost_vdpa_dma_unmap(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova,
                                    int128_get64(llsize));
 
         if (ret) {
@@ -471,7 +470,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
         }
         iova += int128_get64(llsize);
     }
-    ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+    ret = vhost_vdpa_dma_unmap(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova,
                                int128_get64(llsize));
 
     if (ret) {
@@ -1077,7 +1076,8 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
     }
 
     size = ROUND_UP(result->size, qemu_real_host_page_size());
-    r = vhost_vdpa_dma_unmap(v, v->address_space_id, result->iova, size);
+    r = vhost_vdpa_dma_unmap(v->shared, v->address_space_id, result->iova,
+                             size);
     if (unlikely(r < 0)) {
         error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
         return;
@@ -1117,7 +1117,7 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
         return false;
     }
 
-    r = vhost_vdpa_dma_map(v, v->address_space_id, needle->iova,
+    r = vhost_vdpa_dma_map(v->shared, v->address_space_id, needle->iova,
                            needle->size + 1,
                            (void *)(uintptr_t)needle->translated_addr,
                            needle->perm == IOMMU_RO);
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 10cf0027de..3726ee5d67 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -471,7 +471,8 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
         return;
     }
 
-    r = vhost_vdpa_dma_unmap(v, v->address_space_id, map->iova, map->size + 1);
+    r = vhost_vdpa_dma_unmap(v->shared, v->address_space_id, map->iova,
+                             map->size + 1);
     if (unlikely(r != 0)) {
         error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
     }
@@ -495,7 +496,7 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
         return r;
     }
 
-    r = vhost_vdpa_dma_map(v, v->address_space_id, map.iova,
+    r = vhost_vdpa_dma_map(v->shared, v->address_space_id, map.iova,
                            vhost_vdpa_net_cvq_cmd_page_len(), buf, !write);
     if (unlikely(r < 0)) {
         goto dma_map_err;
-- 
2.39.3
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella,
    "Michael S. Tsirkin", Jason Wang, si-wei.liu@oracle.com,
    Laurent Vivier, Lei Yang
Subject: [PATCH v4 12/13] vdpa: use dev_shared in vdpa_iommu
Date: Thu, 21 Dec 2023 18:43:21 +0100
Message-Id: <20231221174322.3130442-13-eperezma@redhat.com>

The memory listener functions can call these too.  Make vdpa_iommu work
with VhostVDPAShared.
Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 include/hw/virtio/vhost-vdpa.h |  2 +-
 hw/virtio/vhost-vdpa.c         | 16 ++++++++--------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 705c754776..2abee2164a 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -75,7 +75,7 @@ int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
                          hwaddr size);
 
 typedef struct vdpa_iommu {
-    struct vhost_vdpa *dev;
+    VhostVDPAShared *dev_shared;
     IOMMUMemoryRegion *iommu_mr;
     hwaddr iommu_offset;
     IOMMUNotifier n;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 0ed6550aad..61553ad196 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -199,7 +199,7 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
     struct vdpa_iommu *iommu = container_of(n, struct vdpa_iommu, n);
 
     hwaddr iova = iotlb->iova + iommu->iommu_offset;
-    struct vhost_vdpa *v = iommu->dev;
+    VhostVDPAShared *s = iommu->dev_shared;
     void *vaddr;
     int ret;
     Int128 llend;
@@ -212,10 +212,10 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
     RCU_READ_LOCK_GUARD();
     /* check if RAM section out of device range */
     llend = int128_add(int128_makes64(iotlb->addr_mask), int128_makes64(iova));
-    if (int128_gt(llend, int128_make64(v->shared->iova_range.last))) {
+    if (int128_gt(llend, int128_make64(s->iova_range.last))) {
         error_report("RAM section out of device range (max=0x%" PRIx64
                      ", end addr=0x%" PRIx64 ")",
-                     v->shared->iova_range.last, int128_get64(llend));
+                     s->iova_range.last, int128_get64(llend));
         return;
     }
 
@@ -225,20 +225,20 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
         if (!memory_get_xlat_addr(iotlb, &vaddr, NULL, &read_only, NULL)) {
             return;
         }
-        ret = vhost_vdpa_dma_map(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova,
+        ret = vhost_vdpa_dma_map(s, VHOST_VDPA_GUEST_PA_ASID, iova,
                                  iotlb->addr_mask + 1, vaddr, read_only);
         if (ret) {
             error_report("vhost_vdpa_dma_map(%p, 0x%" HWADDR_PRIx ", "
                          "0x%" HWADDR_PRIx ", %p) = %d (%m)",
-                         v, iova, iotlb->addr_mask + 1, vaddr, ret);
+                         s, iova, iotlb->addr_mask + 1, vaddr, ret);
         }
     } else {
-        ret = vhost_vdpa_dma_unmap(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova,
+        ret = vhost_vdpa_dma_unmap(s, VHOST_VDPA_GUEST_PA_ASID, iova,
                                    iotlb->addr_mask + 1);
         if (ret) {
             error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", "
                          "0x%" HWADDR_PRIx ") = %d (%m)",
-                         v, iova, iotlb->addr_mask + 1, ret);
+                         s, iova, iotlb->addr_mask + 1, ret);
         }
     }
 }
@@ -270,7 +270,7 @@ static void vhost_vdpa_iommu_region_add(MemoryListener *listener,
                                              iommu_idx);
     iommu->iommu_offset = section->offset_within_address_space -
                           section->offset_within_region;
-    iommu->dev = v;
+    iommu->dev_shared = v->shared;
 
     ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL);
     if (ret) {
-- 
2.39.3
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Dragos Tatulea, Zhu Lingshan, Parav Pandit, Stefano Garzarella,
    "Michael S. Tsirkin", Jason Wang, si-wei.liu@oracle.com,
    Laurent Vivier, Lei Yang
Subject: [PATCH v4 13/13] vdpa: move memory listener to vhost_vdpa_shared
Date: Thu, 21 Dec 2023 18:43:22 +0100
Message-Id: <20231221174322.3130442-14-eperezma@redhat.com>

Next patches will register the vhost_vdpa memory listener while the VM
is migrating at the destination, so we can map the memory to the device
before stopping the VM at the source.  The main goal is to reduce the
downtime.

However, the destination QEMU is unaware of which vhost_vdpa device
will register its memory_listener.  If the source guest has CVQ
enabled, it will be the CVQ device.  Otherwise, it will be the first
one.

Move the memory listener to a common place rather than always in the
first / last vhost_vdpa.

Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
v4:
* Actually fix the issue: shared->listener was being unregistered in
  devices with vq_index > 1 after shared had already been freed by
  net/vhost-vdpa.c when freeing the device with vq_index == 0.  Keep
  the free at index 0, as the fd is closed at index 0.
v3:
* Only memory_listener_unregister at vhost_vdpa_cleanup in the last
  dev.  SIGSEGV detected by both Si-Wei and Lei Yang [1].
* Move ram_block_discard_disable at vhost_vdpa_cleanup to the last dev

[1] https://patchwork.kernel.org/comment/25614601/
---
 include/hw/virtio/vhost-vdpa.h |  2 +-
 hw/virtio/vhost-vdpa.c         | 84 ++++++++++++++++------------------
 2 files changed, 40 insertions(+), 46 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 2abee2164a..8f54e5edd4 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -33,6 +33,7 @@ typedef struct VhostVDPAHostNotifier {
 /* Info shared by all vhost_vdpa device models */
 typedef struct vhost_vdpa_shared {
     int device_fd;
+    MemoryListener listener;
     struct vhost_vdpa_iova_range iova_range;
     QLIST_HEAD(, vdpa_iommu) iommu_list;
 
@@ -51,7 +52,6 @@ typedef struct vhost_vdpa_shared {
 typedef struct vhost_vdpa {
     int index;
     uint32_t address_space_id;
-    MemoryListener listener;
     uint64_t acked_features;
     bool shadow_vqs_enabled;
     /* Device suspended successfully */
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 61553ad196..5801980301 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -170,28 +170,28 @@ static void vhost_vdpa_iotlb_batch_begin_once(VhostVDPAShared *s)
 
 static void vhost_vdpa_listener_commit(MemoryListener *listener)
 {
-    struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
+    VhostVDPAShared *s = container_of(listener, VhostVDPAShared, listener);
     struct vhost_msg_v2 msg = {};
-    int fd = v->shared->device_fd;
+    int fd = s->device_fd;
 
-    if (!(v->shared->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH))) {
+    if (!(s->backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH))) {
         return;
     }
 
-    if (!v->shared->iotlb_batch_begin_sent) {
+    if (!s->iotlb_batch_begin_sent) {
         return;
     }
 
     msg.type = VHOST_IOTLB_MSG_V2;
     msg.iotlb.type = VHOST_IOTLB_BATCH_END;
 
-    trace_vhost_vdpa_listener_commit(v->shared, fd, msg.type, msg.iotlb.type);
+    trace_vhost_vdpa_listener_commit(s, fd, msg.type, msg.iotlb.type);
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
                      fd, errno, strerror(errno));
     }
 
-    v->shared->iotlb_batch_begin_sent = false;
+    s->iotlb_batch_begin_sent = false;
 }
 
 static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
@@ -246,7 +246,7 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
 static void vhost_vdpa_iommu_region_add(MemoryListener *listener,
                                         MemoryRegionSection *section)
 {
-    struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
+    VhostVDPAShared *s = container_of(listener, VhostVDPAShared, listener);
 
     struct vdpa_iommu *iommu;
     Int128 end;
@@ -270,7 +270,7 @@ static void vhost_vdpa_iommu_region_add(MemoryListener *listener,
                                              iommu_idx);
     iommu->iommu_offset = section->offset_within_address_space -
                           section->offset_within_region;
-    iommu->dev_shared = v->shared;
+    iommu->dev_shared = s;
 
     ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL);
     if (ret) {
@@ -278,7 +278,7 @@ static void vhost_vdpa_iommu_region_add(MemoryListener *listener,
         return;
     }
 
-    QLIST_INSERT_HEAD(&v->shared->iommu_list, iommu, iommu_next);
+    QLIST_INSERT_HEAD(&s->iommu_list, iommu, iommu_next);
     memory_region_iommu_replay(iommu->iommu_mr, &iommu->n);
 
     return;
@@ -287,11 +287,11 @@ static void vhost_vdpa_iommu_region_add(MemoryListener *listener,
 static void vhost_vdpa_iommu_region_del(MemoryListener *listener,
                                         MemoryRegionSection *section)
 {
-    struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
+    VhostVDPAShared *s = container_of(listener, VhostVDPAShared, listener);
 
     struct vdpa_iommu *iommu;
 
-    QLIST_FOREACH(iommu, &v->shared->iommu_list, iommu_next)
+    QLIST_FOREACH(iommu, &s->iommu_list, iommu_next)
     {
         if (MEMORY_REGION(iommu->iommu_mr) == section->mr &&
             iommu->n.start == section->offset_within_region) {
@@ -307,7 +307,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                            MemoryRegionSection *section)
 {
     DMAMap mem_region = {};
-    struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
+    VhostVDPAShared *s = container_of(listener, VhostVDPAShared, listener);
     hwaddr iova;
     Int128 llend, llsize;
     void *vaddr;
@@ -315,10 +315,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     int page_size = qemu_target_page_size();
     int page_mask = -page_size;
 
-    if (vhost_vdpa_listener_skipped_section(section,
-                                            v->shared->iova_range.first,
-                                            v->shared->iova_range.last,
-                                            page_mask)) {
+    if (vhost_vdpa_listener_skipped_section(section, s->iova_range.first,
+                                            s->iova_range.last, page_mask)) {
         return;
     }
     if (memory_region_is_iommu(section->mr)) {
@@ -328,8 +326,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
 
     if (unlikely((section->offset_within_address_space & ~page_mask) !=
                  (section->offset_within_region & ~page_mask))) {
-        trace_vhost_vdpa_listener_region_add_unaligned(v->shared,
-                                                       section->mr->name,
+        trace_vhost_vdpa_listener_region_add_unaligned(s, section->mr->name,
                        section->offset_within_address_space & ~page_mask,
                        section->offset_within_region & ~page_mask);
         return;
@@ -349,18 +346,18 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
             section->offset_within_region +
             (iova - section->offset_within_address_space);
 
-    trace_vhost_vdpa_listener_region_add(v->shared, iova, int128_get64(llend),
+    trace_vhost_vdpa_listener_region_add(s, iova, int128_get64(llend),
                                          vaddr, section->readonly);
 
     llsize = int128_sub(llend, int128_make64(iova));
-    if (v->shared->shadow_data) {
+    if (s->shadow_data) {
         int r;
 
         mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
         mem_region.size = int128_get64(llsize) - 1,
         mem_region.perm = IOMMU_ACCESS_FLAG(true, section->readonly),
 
-        r = vhost_iova_tree_map_alloc(v->shared->iova_tree, &mem_region);
+ r =3D vhost_iova_tree_map_alloc(s->iova_tree, &mem_region); if (unlikely(r !=3D IOVA_OK)) { error_report("Can't allocate a mapping (%d)", r); goto fail; @@ -369,8 +366,8 @@ static void vhost_vdpa_listener_region_add(MemoryListen= er *listener, iova =3D mem_region.iova; } =20 - vhost_vdpa_iotlb_batch_begin_once(v->shared); - ret =3D vhost_vdpa_dma_map(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova, + vhost_vdpa_iotlb_batch_begin_once(s); + ret =3D vhost_vdpa_dma_map(s, VHOST_VDPA_GUEST_PA_ASID, iova, int128_get64(llsize), vaddr, section->readonl= y); if (ret) { error_report("vhost vdpa map fail!"); @@ -380,8 +377,8 @@ static void vhost_vdpa_listener_region_add(MemoryListen= er *listener, return; =20 fail_map: - if (v->shared->shadow_data) { - vhost_iova_tree_remove(v->shared->iova_tree, mem_region); + if (s->shadow_data) { + vhost_iova_tree_remove(s->iova_tree, mem_region); } =20 fail: @@ -398,17 +395,15 @@ fail: static void vhost_vdpa_listener_region_del(MemoryListener *listener, MemoryRegionSection *section) { - struct vhost_vdpa *v =3D container_of(listener, struct vhost_vdpa, lis= tener); + VhostVDPAShared *s =3D container_of(listener, VhostVDPAShared, listene= r); hwaddr iova; Int128 llend, llsize; int ret; int page_size =3D qemu_target_page_size(); int page_mask =3D -page_size; =20 - if (vhost_vdpa_listener_skipped_section(section, - v->shared->iova_range.first, - v->shared->iova_range.last, - page_mask)) { + if (vhost_vdpa_listener_skipped_section(section, s->iova_range.first, + s->iova_range.last, page_mask)= ) { return; } if (memory_region_is_iommu(section->mr)) { @@ -417,8 +412,7 @@ static void vhost_vdpa_listener_region_del(MemoryListen= er *listener, =20 if (unlikely((section->offset_within_address_space & ~page_mask) !=3D (section->offset_within_region & ~page_mask))) { - trace_vhost_vdpa_listener_region_del_unaligned(v->shared, - section->mr->name, + trace_vhost_vdpa_listener_region_del_unaligned(s, section->mr->nam= e, 
section->offset_within_address_space & ~page_mask, section->offset_within_region & ~page_mask); return; @@ -427,7 +421,7 @@ static void vhost_vdpa_listener_region_del(MemoryListen= er *listener, iova =3D ROUND_UP(section->offset_within_address_space, page_size); llend =3D vhost_vdpa_section_end(section, page_mask); =20 - trace_vhost_vdpa_listener_region_del(v->shared, iova, + trace_vhost_vdpa_listener_region_del(s, iova, int128_get64(int128_sub(llend, int128_one()))); =20 if (int128_ge(int128_make64(iova), llend)) { @@ -436,7 +430,7 @@ static void vhost_vdpa_listener_region_del(MemoryListen= er *listener, =20 llsize =3D int128_sub(llend, int128_make64(iova)); =20 - if (v->shared->shadow_data) { + if (s->shadow_data) { const DMAMap *result; const void *vaddr =3D memory_region_get_ram_ptr(section->mr) + section->offset_within_region + @@ -446,37 +440,37 @@ static void vhost_vdpa_listener_region_del(MemoryList= ener *listener, .size =3D int128_get64(llsize) - 1, }; =20 - result =3D vhost_iova_tree_find_iova(v->shared->iova_tree, &mem_re= gion); + result =3D vhost_iova_tree_find_iova(s->iova_tree, &mem_region); if (!result) { /* The memory listener map wasn't mapped */ return; } iova =3D result->iova; - vhost_iova_tree_remove(v->shared->iova_tree, *result); + vhost_iova_tree_remove(s->iova_tree, *result); } - vhost_vdpa_iotlb_batch_begin_once(v->shared); + vhost_vdpa_iotlb_batch_begin_once(s); /* * The unmap ioctl doesn't accept a full 64-bit. 
need to check it */ if (int128_eq(llsize, int128_2_64())) { llsize =3D int128_rshift(llsize, 1); - ret =3D vhost_vdpa_dma_unmap(v->shared, VHOST_VDPA_GUEST_PA_ASID, = iova, + ret =3D vhost_vdpa_dma_unmap(s, VHOST_VDPA_GUEST_PA_ASID, iova, int128_get64(llsize)); =20 if (ret) { error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", " "0x%" HWADDR_PRIx ") =3D %d (%m)", - v, iova, int128_get64(llsize), ret); + s, iova, int128_get64(llsize), ret); } iova +=3D int128_get64(llsize); } - ret =3D vhost_vdpa_dma_unmap(v->shared, VHOST_VDPA_GUEST_PA_ASID, iova, + ret =3D vhost_vdpa_dma_unmap(s, VHOST_VDPA_GUEST_PA_ASID, iova, int128_get64(llsize)); =20 if (ret) { error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", " "0x%" HWADDR_PRIx ") =3D %d (%m)", - v, iova, int128_get64(llsize), ret); + s, iova, int128_get64(llsize), ret); } =20 memory_region_unref(section->mr); @@ -591,7 +585,7 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void = *opaque, Error **errp) =20 v->dev =3D dev; dev->opaque =3D opaque ; - v->listener =3D vhost_vdpa_memory_listener; + v->shared->listener =3D vhost_vdpa_memory_listener; vhost_vdpa_init_svq(dev, v); =20 error_propagate(&dev->migration_blocker, v->migration_blocker); @@ -751,10 +745,10 @@ static int vhost_vdpa_cleanup(struct vhost_dev *dev) trace_vhost_vdpa_cleanup(dev, v); if (vhost_vdpa_first_dev(dev)) { ram_block_discard_disable(false); + memory_listener_unregister(&v->shared->listener); } =20 vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs); - memory_listener_unregister(&v->listener); vhost_vdpa_svq_cleanup(dev); =20 dev->opaque =3D NULL; @@ -1327,7 +1321,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev= , bool started) "IOMMU and try again"); return -1; } - memory_listener_register(&v->listener, dev->vdev->dma_as); + memory_listener_register(&v->shared->listener, dev->vdev->dma_as); =20 return vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK); } @@ -1346,7 +1340,7 @@ static void vhost_vdpa_reset_status(struct 
vhost_dev = *dev) vhost_vdpa_reset_device(dev); vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE | VIRTIO_CONFIG_S_DRIVER); - memory_listener_unregister(&v->listener); + memory_listener_unregister(&v->shared->listener); } =20 static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base, --=20 2.39.3