From: Kirti Wankhede <kwankhede@nvidia.com>
Subject: [PATCH Kernel v20 6/8] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
Date: Fri, 15 May 2020 02:07:45 +0530
Message-ID: <1589488667-9683-7-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1589488667-9683-1-git-send-email-kwankhede@nvidia.com>
References: <1589488667-9683-1-git-send-email-kwankhede@nvidia.com>
Cc: Zhengxiao.zx@Alibaba-inc.com, kevin.tian@intel.com, yi.l.liu@intel.com,
    yan.y.zhao@intel.com, kvm@vger.kernel.org, eskultet@redhat.com,
    ziye.yang@intel.com, qemu-devel@nongnu.org, cohuck@redhat.com,
    shuangtai.tst@alibaba-inc.com, dgilbert@redhat.com, zhi.a.wang@intel.com,
    mlevitsk@redhat.com, pasic@linux.ibm.com, aik@ozlabs.ru,
    eauger@redhat.com, felipe@nutanix.com, jonathan.davies@nutanix.com,
    changpeng.liu@intel.com, Ken.Xue@amd.com

DMA mapped pages, including those pinned by mdev vendor drivers, might
get unpinned and unmapped while migration is active and the device is
still running. For example, during the pre-copy phase the guest driver
can still access those pages, so the host device or vendor driver may
dirty them. Such pages should be marked dirty so as to maintain memory
consistency for a user making use of dirty page tracking.

To get the bitmap during unmap, the user should allocate memory for the
bitmap, zero it, set the size of the allocated memory, set the page size
to be considered for the bitmap, and set the
VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag, as in the sketch below.
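Below is a minimal userspace sketch of the resulting calling
convention. It is illustrative only and not part of the patch: the
container fd, the iova/size arguments, and the 4K page size are
assumptions, dirty page tracking is assumed to have already been
started (via VFIO_IOMMU_DIRTY_PAGES, added earlier in this series),
and <linux/vfio.h> is assumed to carry the definitions this patch
introduces.

  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /* Unmap [iova, iova + size) and collect its dirty bitmap in one call. */
  static int unmap_and_get_dirty(int container, __u64 iova, __u64 size)
  {
          __u64 pgsize = 4096;  /* assumed; must match IOMMU min page size */
          /* one bit per page, rounded up to a multiple of u64 */
          __u64 bitmap_bytes = ((size / pgsize + 63) / 64) * sizeof(__u64);
          struct vfio_iommu_type1_dma_unmap *unmap;
          struct vfio_bitmap *bitmap;
          int ret;

          /* struct vfio_bitmap travels in the flexible data[] member */
          unmap = calloc(1, sizeof(*unmap) + sizeof(*bitmap));
          if (!unmap)
                  return -1;

          unmap->argsz = sizeof(*unmap) + sizeof(*bitmap);
          unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
          unmap->iova = iova;
          unmap->size = size;

          bitmap = (struct vfio_bitmap *)unmap->data;
          bitmap->pgsize = pgsize;
          bitmap->size = bitmap_bytes;
          bitmap->data = calloc(1, bitmap_bytes);  /* zeroed, as required */
          if (!bitmap->data) {
                  free(unmap);
                  return -1;
          }

          ret = ioctl(container, VFIO_IOMMU_UNMAP_DMA, unmap);
          /* on success, set bits in bitmap->data mark pages dirtied
           * while the range was mapped */

          free(bitmap->data);
          free(unmap);
          return ret;
  }

Note that vfio_bitmap.data must point at zeroed memory of at least
vfio_bitmap.size bytes; the kernel only sets bits for pages found
dirty, one bit per pgsize page starting at the unmapped iova.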
Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio_iommu_type1.c | 77 ++++++++++++++++++++++++++++++++++--------
 include/uapi/linux/vfio.h       | 10 ++++++
 2 files changed, 75 insertions(+), 12 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index b76d3b14abfd..a1dc57bcece5 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -195,11 +195,15 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
 {
 	uint64_t npages = dma->size / pgsize;
+	size_t bitmap_size;
 
 	if (npages > DIRTY_BITMAP_PAGES_MAX)
 		return -EINVAL;
 
-	dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
+	/* Allocate extra 64 bits which are used for bitmap manipulation */
+	bitmap_size = DIRTY_BITMAP_BYTES(npages) + sizeof(u64);
+
+	dma->bitmap = kvzalloc(bitmap_size, GFP_KERNEL);
 	if (!dma->bitmap)
 		return -ENOMEM;
 
@@ -999,23 +1003,25 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
-			     struct vfio_iommu_type1_dma_unmap *unmap)
+			     struct vfio_iommu_type1_dma_unmap *unmap,
+			     struct vfio_bitmap *bitmap)
 {
-	uint64_t mask;
 	struct vfio_dma *dma, *dma_last = NULL;
-	size_t unmapped = 0;
+	size_t unmapped = 0, pgsize;
 	int ret = 0, retries = 0;
+	unsigned long pgshift;
 
 	mutex_lock(&iommu->lock);
 
-	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
+	pgshift = __ffs(iommu->pgsize_bitmap);
+	pgsize = (size_t)1 << pgshift;
 
-	if (unmap->iova & mask) {
+	if (unmap->iova & (pgsize - 1)) {
 		ret = -EINVAL;
 		goto unlock;
 	}
 
-	if (!unmap->size || unmap->size & mask) {
+	if (!unmap->size || unmap->size & (pgsize - 1)) {
 		ret = -EINVAL;
 		goto unlock;
 	}
@@ -1026,9 +1032,15 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		goto unlock;
 	}
 
-	WARN_ON(mask & PAGE_MASK);
-again:
+	/* When dirty tracking is enabled, allow only min supported pgsize */
+	if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+	    (!iommu->dirty_page_tracking || (bitmap->pgsize != pgsize))) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
+	WARN_ON((pgsize - 1) & PAGE_MASK);
+again:
 	/*
 	 * vfio-iommu-type1 (v1) - User mappings were coalesced together to
 	 * avoid tracking individual mappings.  This means that the granularity
@@ -1066,6 +1078,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			ret = -EINVAL;
 			goto unlock;
 		}
+
 		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
 		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
 			ret = -EINVAL;
@@ -1083,6 +1096,23 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		if (dma->task->mm != current->mm)
 			break;
 
+		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+		    (dma_last != dma)) {
+
+			/*
+			 * mark all pages dirty if all pages are pinned and
+			 * mapped
+			 */
+			if (dma->iommu_mapped)
+				bitmap_set(dma->bitmap, 0,
+					   dma->size >> pgshift);
+
+			ret = update_user_bitmap(bitmap->data, dma,
+						 unmap->iova, pgsize);
+			if (ret)
+				break;
+		}
+
 		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
 			struct vfio_iommu_type1_dma_unmap nb_unmap;
 
@@ -2447,17 +2477,40 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
 		struct vfio_iommu_type1_dma_unmap unmap;
-		long ret;
+		struct vfio_bitmap bitmap = { 0 };
+		int ret;
 
 		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
 
 		if (copy_from_user(&unmap, (void __user *)arg, minsz))
 			return -EFAULT;
 
-		if (unmap.argsz < minsz || unmap.flags)
+		if (unmap.argsz < minsz ||
+		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
 			return -EINVAL;
 
-		ret = vfio_dma_do_unmap(iommu, &unmap);
+		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+			unsigned long pgshift;
+
+			if (unmap.argsz < (minsz + sizeof(bitmap)))
+				return -EINVAL;
+
+			if (copy_from_user(&bitmap,
+					   (void __user *)(arg + minsz),
+					   sizeof(bitmap)))
+				return -EFAULT;
+
+			if (!access_ok((void __user *)bitmap.data, bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(bitmap.pgsize);
+			ret = verify_bitmap_size(unmap.size >> pgshift,
+						 bitmap.size);
+			if (ret)
+				return ret;
+		}
+
+		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
 		if (ret)
 			return ret;
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 123de3bc2dce..0a0c7315ddd6 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1048,12 +1048,22 @@ struct vfio_bitmap {
  * field.  No guarantee is made to the user that arbitrary unmaps of iova
  * or size different from those used in the original mapping call will
  * succeed.
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty bitmap
+ * before unmapping IO virtual addresses. When this flag is set, the user must
+ * provide data[] as a struct vfio_bitmap: allocate memory for the bitmap,
+ * zero the bitmap memory, and set the size of the allocated memory in
+ * vfio_bitmap.size. A bit in the bitmap represents one page of the user
+ * provided page size in 'pgsize', consecutively starting from the iova
+ * offset; a set bit indicates the page at that offset from iova is dirty.
+ * The bitmap for the pages in the unmapped range is returned in vfio_bitmap.data.
  */
 struct vfio_iommu_type1_dma_unmap {
 	__u32	argsz;
 	__u32	flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
 	__u64	iova;				/* IO virtual address */
 	__u64	size;				/* Size of mapping (bytes) */
+	__u8    data[];
 };
 
 #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
-- 
2.7.0