From: Kunkun Jiang <jiangkunkun@huawei.com>
To: Alex Williamson, Kirti Wankhede, qemu-devel@nongnu.org
Cc: Zenghui Yu, Keqian Zhu, wanghaibin.wang@huawei.com
Subject: [RFC PATCH 1/3] linux-headers: update against 5.12 and "manual clear vfio dirty log" series
Date: Sat, 8 May 2021 17:31:03 +0800
Message-ID: <20210508093105.2558-2-jiangkunkun@huawei.com>
In-Reply-To: <20210508093105.2558-1-jiangkunkun@huawei.com>
From: Zenghui Yu

The new capability VFIO_DIRTY_LOG_MANUAL_CLEAR and the new ioctl flags
VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR and
VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP have been introduced in the
kernel; update the header to add them.

Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
 linux-headers/linux/vfio.h | 61 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 60 insertions(+), 1 deletion(-)

diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
index 609099e455..7e918c697f 100644
--- a/linux-headers/linux/vfio.h
+++ b/linux-headers/linux/vfio.h
@@ -46,6 +46,20 @@
  */
 #define VFIO_NOIOMMU_IOMMU		8
 
+/* Supports VFIO_DMA_UNMAP_FLAG_ALL */
+#define VFIO_UNMAP_ALL			9
+
+/* Supports the vaddr flag for DMA map and unmap */
+#define VFIO_UPDATE_VADDR		10
+
+/*
+ * The vfio_iommu driver may support manual clearing of the dirty log, which
+ * means the dirty log is not cleared automatically after it is copied to
+ * userspace; it is the user's duty to clear it. Note: once the user queries
+ * this extension and the vfio_iommu driver supports it, it is enabled.
+ */
+#define VFIO_DIRTY_LOG_MANUAL_CLEAR	11
+
 /*
  * The IOCTL interface is designed for extensibility by embedding the
  * structure length (argsz) and flags into structures passed between
@@ -1074,12 +1088,22 @@ struct vfio_iommu_type1_info_dma_avail {
  *
  * Map process virtual addresses to IO virtual addresses using the
  * provided struct vfio_dma_map. Caller sets argsz. READ &/ WRITE required.
+ *
+ * If flags & VFIO_DMA_MAP_FLAG_VADDR, update the base vaddr for iova, and
+ * unblock translation of host virtual addresses in the iova range. The vaddr
+ * must have previously been invalidated with VFIO_DMA_UNMAP_FLAG_VADDR. To
+ * maintain memory consistency within the user application, the updated vaddr
+ * must address the same memory object as originally mapped. Failure to do so
+ * will result in user memory corruption and/or device misbehavior. iova and
+ * size must match those in the original MAP_DMA call. Protection is not
+ * changed, and the READ & WRITE flags must be 0.
  */
 struct vfio_iommu_type1_dma_map {
 	__u32	argsz;
 	__u32	flags;
 #define VFIO_DMA_MAP_FLAG_READ (1 << 0)		/* readable from device */
 #define VFIO_DMA_MAP_FLAG_WRITE (1 << 1)	/* writable from device */
+#define VFIO_DMA_MAP_FLAG_VADDR (1 << 2)
 	__u64	vaddr;				/* Process virtual address */
 	__u64	iova;				/* IO virtual address */
 	__u64	size;				/* Size of mapping (bytes) */
@@ -1102,6 +1126,7 @@ struct vfio_bitmap {
  * field.  No guarantee is made to the user that arbitrary unmaps of iova
  * or size different from those used in the original mapping call will
  * succeed.
+ *
  * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty bitmap
  * before unmapping IO virtual addresses. When this flag is set, the user must
  * provide a struct vfio_bitmap in data[]. User must provide zero-allocated
@@ -1111,11 +1136,21 @@ struct vfio_bitmap {
  * indicates that the page at that offset from iova is dirty. A Bitmap of the
  * pages in the range of unmapped size is returned in the user-provided
  * vfio_bitmap.data.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_ALL, unmap all addresses. iova and size
+ * must be 0. This cannot be combined with the get-dirty-bitmap flag.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_VADDR, do not unmap, but invalidate host
+ * virtual addresses in the iova range. Tasks that attempt to translate an
+ * iova's vaddr will block. DMA to already-mapped pages continues. This
+ * cannot be combined with the get-dirty-bitmap flag.
  */
 struct vfio_iommu_type1_dma_unmap {
 	__u32	argsz;
 	__u32	flags;
 #define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
+#define VFIO_DMA_UNMAP_FLAG_ALL		     (1 << 1)
+#define VFIO_DMA_UNMAP_FLAG_VADDR	     (1 << 2)
 	__u64	iova;				/* IO virtual address */
 	__u64	size;				/* Size of mapping (bytes) */
 	__u8    data[];
@@ -1161,7 +1196,29 @@ struct vfio_iommu_type1_dma_unmap {
  * actual bitmap. If dirty pages logging is not enabled, an error will be
  * returned.
  *
- * Only one of the flags _START, _STOP and _GET may be specified at a time.
+ * The VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR flag is almost the same
+ * as VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP, except that the underlying dirty
+ * bitmap is not cleared automatically. The user can clear it manually by
+ * calling the IOCTL with the VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP flag
+ * set.
+ *
+ * Calling the IOCTL with the VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP flag
+ * set instructs the IOMMU driver to clear the dirty status of pages in a
+ * bitmap for the IOMMU container for a given IOVA range. The user must
+ * specify the IOVA range, the bitmap and the pgsize through the structure
+ * vfio_iommu_type1_dirty_bitmap_get in the data[] portion. This interface
+ * supports clearing a bitmap of the smallest supported pgsize only and can be
+ * modified in future to clear a bitmap of any specified supported pgsize. The
+ * user must provide a memory area for the bitmap memory and specify its size
+ * in bitmap.size. One bit is used to represent one page consecutively
+ * starting from iova offset. The user should provide the page size in the
+ * bitmap.pgsize field. A bit set in the bitmap indicates that the dirty
+ * status of the page at that offset from iova is cleared, and dirty tracking
+ * is re-enabled for that page. The caller must set argsz to a value including
+ * the size of structure vfio_iommu_dirty_bitmap_get, but excluding the size
+ * of the actual bitmap.
+ * If dirty pages logging is not enabled, an error will be returned.
+ *
+ * Only one of the flags _START, _STOP, _GET, _GET_NOCLEAR, and _CLEAR may be
+ * specified at a time.
  *
  */
 struct vfio_iommu_type1_dirty_bitmap {
 	__u32        argsz;
 	__u32        flags;
 #define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
 #define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
 #define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR	(1 << 3)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP	(1 << 4)
 	__u8         data[];
 };
 
-- 
2.23.0
From: Kunkun Jiang <jiangkunkun@huawei.com>
To: Alex Williamson, Kirti Wankhede, qemu-devel@nongnu.org
Cc: Zenghui Yu, Keqian Zhu, wanghaibin.wang@huawei.com
Subject: [RFC PATCH 2/3] vfio: Maintain DMA mapping range for the container
Date: Sat, 8 May 2021 17:31:04 +0800
Message-ID: <20210508093105.2558-3-jiangkunkun@huawei.com>
In-Reply-To: <20210508093105.2558-1-jiangkunkun@huawei.com>

From: Zenghui Yu

When synchronizing the dirty bitmap from kernel VFIO we do it in a
per-iova-range fashion and allocate a userspace bitmap for each ioctl.

This patch introduces `struct VFIODMARange` to describe a range of the
given DMA mapping with respect to a VFIO_IOMMU_MAP_DMA operation, and
makes the bitmap cache of this range persistent so that we don't need
to g_try_malloc0() every time. Note that the new structure is almost a
copy of `struct vfio_iommu_type1_dma_map`, but it is only used
internally by QEMU.

More importantly, the cached per-iova-range dirty bitmap will be used
when we add support for CLEAR_BITMAP: it guarantees that we never
clear dirty bits that have not yet been reported, which would
otherwise be a severe data-loss issue for the migration code.

It's pretty intuitive to maintain a bitmap per container since we
perform log_sync at this granule. But I don't know how to deal with
things like memory hot-{un}plug, sparse DMA mappings, etc. Suggestions
welcome.

* yet something to-do:
  - can't work with guest viommu
  - no locks
  - etc

[ The idea and even the commit message are largely inherited from the
  kvm side. See commit 9f4bf4baa8b820c7930e23c9566c9493db7e1d25. ]
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
 hw/vfio/common.c              | 62 +++++++++++++++++++++++++++++++----
 include/hw/vfio/vfio-common.h |  9 +++++
 2 files changed, 65 insertions(+), 6 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index ae5654fcdb..b8b6418e69 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -421,6 +421,29 @@ unmap_exit:
     return ret;
 }
 
+static VFIODMARange *vfio_lookup_match_range(VFIOContainer *container,
+                                             hwaddr start_addr, hwaddr size)
+{
+    VFIODMARange *qrange;
+
+    QLIST_FOREACH(qrange, &container->dma_list, next) {
+        if (qrange->iova == start_addr && qrange->size == size) {
+            return qrange;
+        }
+    }
+    return NULL;
+}
+
+static void vfio_dma_range_init_dirty_bitmap(VFIODMARange *qrange)
+{
+    uint64_t pages, size;
+
+    pages = REAL_HOST_PAGE_ALIGN(qrange->size) / qemu_real_host_page_size;
+    size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) / BITS_PER_BYTE;
+
+    qrange->bitmap = g_malloc0(size);
+}
+
 /*
  * DMA - Mapping and unmapping for the "type1" IOMMU interface used on x86
  */
@@ -434,12 +457,29 @@ static int vfio_dma_unmap(VFIOContainer *container,
         .iova = iova,
         .size = size,
     };
+    VFIODMARange *qrange;
 
     if (iotlb && container->dirty_pages_supported &&
         vfio_devices_all_running_and_saving(container)) {
         return vfio_dma_unmap_bitmap(container, iova, size, iotlb);
     }
 
+    /*
+     * unregister the DMA range
+     *
+     * It seems that the memory layer will give us the same section as the one
+     * used in region_add(). Otherwise it'll be complicated to manipulate the
+     * bitmap across region_{add,del}. Is there any guarantee?
+     *
+     * But there is really not such a restriction on the kernel interface
+     * (VFIO_IOMMU_DIRTY_PAGES_FLAG_{UN}MAP_DMA, etc).
+     */
+    qrange = vfio_lookup_match_range(container, iova, size);
+    assert(qrange);
+    g_free(qrange->bitmap);
+    QLIST_REMOVE(qrange, next);
+    g_free(qrange);
+
     while (ioctl(container->fd, VFIO_IOMMU_UNMAP_DMA, &unmap)) {
         /*
          * The type1 backend has an off-by-one bug in the kernel (71a7d3d78e3c
@@ -476,6 +516,14 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
         .iova = iova,
         .size = size,
     };
+    VFIODMARange *qrange;
+
+    qrange = g_malloc0(sizeof(*qrange));
+    qrange->iova = iova;
+    qrange->size = size;
+    QLIST_INSERT_HEAD(&container->dma_list, qrange, next);
+    /* XXX allocate the dirty bitmap on demand */
+    vfio_dma_range_init_dirty_bitmap(qrange);
 
     if (!readonly) {
         map.flags |= VFIO_DMA_MAP_FLAG_WRITE;
@@ -1023,9 +1071,14 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
 {
     struct vfio_iommu_type1_dirty_bitmap *dbitmap;
     struct vfio_iommu_type1_dirty_bitmap_get *range;
+    VFIODMARange *qrange;
     uint64_t pages;
     int ret;
 
+    qrange = vfio_lookup_match_range(container, iova, size);
+    /* the same as vfio_dma_unmap() */
+    assert(qrange);
+
     dbitmap = g_malloc0(sizeof(*dbitmap) + sizeof(*range));
 
     dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
@@ -1044,11 +1097,8 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
     pages = REAL_HOST_PAGE_ALIGN(range->size) / qemu_real_host_page_size;
     range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
                                                          BITS_PER_BYTE;
-    range->bitmap.data = g_try_malloc0(range->bitmap.size);
-    if (!range->bitmap.data) {
-        ret = -ENOMEM;
-        goto err_out;
-    }
+
+    range->bitmap.data = (__u64 *)qrange->bitmap;
 
     ret = ioctl(container->fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
     if (ret) {
@@ -1064,7 +1114,6 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
     trace_vfio_get_dirty_bitmap(container->fd, range->iova, range->size,
                                 range->bitmap.size, ram_addr);
 err_out:
-    g_free(range->bitmap.data);
     g_free(dbitmap);
 
     return ret;
@@ -1770,6 +1819,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
     container->dirty_pages_supported = false;
     QLIST_INIT(&container->giommu_list);
     QLIST_INIT(&container->hostwin_list);
+    QLIST_INIT(&container->dma_list);
 
     ret = vfio_init_container(container, group->fd, errp);
     if (ret) {
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 6141162d7a..bd6eca9332 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -76,6 +76,14 @@ typedef struct VFIOAddressSpace {
 
 struct VFIOGroup;
 
+typedef struct VFIODMARange {
+    QLIST_ENTRY(VFIODMARange) next;
+    hwaddr iova;
+    size_t size;
+    void *vaddr; /* unused */
+    unsigned long *bitmap; /* dirty bitmap cache for this range */
+} VFIODMARange;
+
 typedef struct VFIOContainer {
     VFIOAddressSpace *space;
     int fd; /* /dev/vfio/vfio, empowered by the attached groups */
@@ -91,6 +99,7 @@ typedef struct VFIOContainer {
     QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
     QLIST_HEAD(, VFIOHostDMAWindow) hostwin_list;
     QLIST_HEAD(, VFIOGroup) group_list;
+    QLIST_HEAD(, VFIODMARange) dma_list;
     QLIST_ENTRY(VFIOContainer) next;
 } VFIOContainer;
 
-- 
2.23.0
From: Kunkun Jiang <jiangkunkun@huawei.com>
To: Alex Williamson, Kirti Wankhede, qemu-devel@nongnu.org
Cc: Zenghui Yu, Keqian Zhu, wanghaibin.wang@huawei.com
Subject: [RFC PATCH 3/3] vfio/migration: Add support for manual clear vfio dirty log
Date: Sat, 8 May 2021 17:31:05 +0800
Message-ID: <20210508093105.2558-4-jiangkunkun@huawei.com>
In-Reply-To: <20210508093105.2558-1-jiangkunkun@huawei.com>

From: Zenghui Yu

The new capability VFIO_DIRTY_LOG_MANUAL_CLEAR and the new ioctl flags
VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR and
VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP have been introduced in the
kernel; tweak the userspace side to use them.

Check if the kernel supports VFIO_DIRTY_LOG_MANUAL_CLEAR and provide
the log_clear() hook for vfio_memory_listener. If the kernel supports
it, deliver the clear message to the kernel.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
 hw/vfio/common.c              | 149 +++++++++++++++++++++++++++++++++-
 include/hw/vfio/vfio-common.h |   1 +
 2 files changed, 148 insertions(+), 2 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index b8b6418e69..9c41a36a61 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -1082,7 +1082,9 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
     dbitmap = g_malloc0(sizeof(*dbitmap) + sizeof(*range));
 
     dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
-    dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
+    dbitmap->flags = container->dirty_log_manual_clear ?
+                     VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR :
+                     VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
     range = (struct vfio_iommu_type1_dirty_bitmap_get *)&dbitmap->data;
     range->iova = iova;
     range->size = size;
@@ -1213,12 +1215,148 @@ static void vfio_listener_log_sync(MemoryListener *listener,
     }
 }
 
+/*
+ * I'm not sure if there's any alignment requirement for the CLEAR_BITMAP
+ * ioctl. But copy from the kvm side and align {start, size} with 64 pages.
+ *
+ * I think the code can be simplified a lot if there is no alignment
+ * requirement.
+ */
+#define VFIO_CLEAR_LOG_SHIFT  6
+#define VFIO_CLEAR_LOG_ALIGN  (qemu_real_host_page_size << VFIO_CLEAR_LOG_SHIFT)
+#define VFIO_CLEAR_LOG_MASK   (-VFIO_CLEAR_LOG_ALIGN)
+
+static int vfio_log_clear_one_range(VFIOContainer *container,
+        VFIODMARange *qrange, uint64_t start, uint64_t size)
+{
+    struct vfio_iommu_type1_dirty_bitmap *dbitmap;
+    struct vfio_iommu_type1_dirty_bitmap_get *range;
+
+    dbitmap = g_malloc0(sizeof(*dbitmap) + sizeof(*range));
+
+    dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
+    dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP;
+    range = (struct vfio_iommu_type1_dirty_bitmap_get *)&dbitmap->data;
+
+    /*
+     * Now let's deal with the actual bitmap, which is almost the same
+     * as the kvm side.
+     */
+    uint64_t end, bmap_start, start_delta, bmap_npages;
+    unsigned long *bmap_clear = NULL, psize = qemu_real_host_page_size;
+    int ret;
+
+    bmap_start = start & VFIO_CLEAR_LOG_MASK;
+    start_delta = start - bmap_start;
+    bmap_start /= psize;
+
+    bmap_npages = DIV_ROUND_UP(size + start_delta, VFIO_CLEAR_LOG_ALIGN)
+                  << VFIO_CLEAR_LOG_SHIFT;
+    end = qrange->size / psize;
+    if (bmap_npages > end - bmap_start) {
+        bmap_npages = end - bmap_start;
+    }
+    start_delta /= psize;
+
+    if (start_delta) {
+        bmap_clear = bitmap_new(bmap_npages);
+        bitmap_copy_with_src_offset(bmap_clear, qrange->bitmap,
+                                    bmap_start, start_delta + size / psize);
+        bitmap_clear(bmap_clear, 0, start_delta);
+        range->bitmap.data = (__u64 *)bmap_clear;
+    } else {
+        range->bitmap.data = (__u64 *)(qrange->bitmap + BIT_WORD(bmap_start));
+    }
+
+    range->iova = qrange->iova + bmap_start * psize;
+    range->size = bmap_npages * psize;
+    range->bitmap.size = ROUND_UP(bmap_npages, sizeof(__u64) * BITS_PER_BYTE) /
+                                               BITS_PER_BYTE;
+    range->bitmap.pgsize = qemu_real_host_page_size;
+
+    ret = ioctl(container->fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
+    if (ret) {
+        error_report("Failed to clear dirty log for iova: 0x%"PRIx64
+                     " size: 0x%"PRIx64" err: %d", (uint64_t)range->iova,
+                     (uint64_t)range->size, errno);
+        goto err_out;
+    }
+
+    bitmap_clear(qrange->bitmap, bmap_start + start_delta, size / psize);
+err_out:
+    g_free(bmap_clear);
+    g_free(dbitmap);
+    return 0;
+}
+
+static int vfio_physical_log_clear(VFIOContainer *container,
+                                   MemoryRegionSection *section)
+{
+    uint64_t start, size, offset, count;
+    VFIODMARange *qrange;
+    int ret = 0;
+
+    if (!container->dirty_log_manual_clear) {
+        /* No need to do explicit clear */
+        return ret;
+    }
+
+    start = section->offset_within_address_space;
+    size = int128_get64(section->size);
+
+    if (!size) {
+        return ret;
+    }
+
+    QLIST_FOREACH(qrange, &container->dma_list, next) {
+        /*
+         * Discard ranges that do not overlap the section (e.g., the
+         * Memory BAR regions of the device)
+         */
+        if (qrange->iova > start + size - 1 ||
+            start > qrange->iova + qrange->size - 1) {
+            continue;
+        }
+
+        if (start >= qrange->iova) {
+            /* The range starts before section or is aligned to it. */
+            offset = start - qrange->iova;
+            count = MIN(qrange->size - offset, size);
+        } else {
+            /* The range starts after section. */
+            offset = 0;
+            count = MIN(qrange->size, size - (qrange->iova - start));
+        }
+        ret = vfio_log_clear_one_range(container, qrange, offset, count);
+        if (ret < 0) {
+            break;
+        }
+    }
+
+    return ret;
+}
+
+static void vfio_listener_log_clear(MemoryListener *listener,
+                                    MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer, listener);
+
+    if (vfio_listener_skipped_section(section) ||
+        !container->dirty_pages_supported) {
+        return;
+    }
+
+    if (vfio_devices_all_dirty_tracking(container)) {
+        vfio_physical_log_clear(container, section);
+    }
+}
+
 static const MemoryListener vfio_memory_listener = {
     .region_add = vfio_listener_region_add,
     .region_del = vfio_listener_region_del,
     .log_global_start = vfio_listener_log_global_start,
     .log_global_stop = vfio_listener_log_global_stop,
     .log_sync = vfio_listener_log_sync,
+    .log_clear = vfio_listener_log_clear,
 };
 
 static void vfio_listener_release(VFIOContainer *container)
@@ -1646,7 +1784,7 @@ static int vfio_get_iommu_type(VFIOContainer *container,
 static int vfio_init_container(VFIOContainer *container, int group_fd,
                                Error **errp)
 {
-    int iommu_type, ret;
+    int iommu_type, dirty_log_manual_clear, ret;
 
     iommu_type = vfio_get_iommu_type(container, errp);
     if (iommu_type < 0) {
@@ -1675,6 +1813,13 @@ static int vfio_init_container(VFIOContainer *container, int group_fd,
     }
 
     container->iommu_type = iommu_type;
+
+    dirty_log_manual_clear = ioctl(container->fd, VFIO_CHECK_EXTENSION,
+                                   VFIO_DIRTY_LOG_MANUAL_CLEAR);
+    if (dirty_log_manual_clear) {
+        container->dirty_log_manual_clear = dirty_log_manual_clear;
+    }
+
     return 0;
 }
 
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index bd6eca9332..bcd6a0c440 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -93,6 +93,7 @@ typedef struct VFIOContainer {
     Error *error;
     bool initialized;
     bool dirty_pages_supported;
+    bool dirty_log_manual_clear;
     uint64_t dirty_pgsizes;
     uint64_t max_dirty_bitmap_size;
     unsigned long pgsizes;
-- 
2.23.0