From: Kirti Wankhede
To: ,
Subject: [PATCH Kernel v21 3/8] vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
Date: Sat, 16 May 2020 02:43:18 +0530
Message-ID: <1589577203-20640-4-git-send-email-kwankhede@nvidia.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1589577203-20640-1-git-send-email-kwankhede@nvidia.com>
References: <1589577203-20640-1-git-send-email-kwankhede@nvidia.com>
MIME-Version: 1.0
Cc: Zhengxiao.zx@Alibaba-inc.com, kevin.tian@intel.com, yi.l.liu@intel.com, yan.y.zhao@intel.com, kvm@vger.kernel.org, eskultet@redhat.com, ziye.yang@intel.com, qemu-devel@nongnu.org, cohuck@redhat.com, shuangtai.tst@alibaba-inc.com, dgilbert@redhat.com, zhi.a.wang@intel.com, mlevitsk@redhat.com, pasic@linux.ibm.com, aik@ozlabs.ru, Kirti Wankhede, eauger@redhat.com, felipe@nutanix.com, jonathan.davies@nutanix.com, changpeng.liu@intel.com, Ken.Xue@amd.com
Content-Type: text/plain; charset="utf-8"

Calculate and cache pgsize_bitmap when iommu->domain_list is updated, or
when iommu->external_domain is set for an mdev device. Hold iommu->lock
whenever the cached pgsize_bitmap is accessed.
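As a side note for reviewers, the cached value is simply the intersection of every domain's supported page sizes, with any sub-PAGE_SIZE bits collapsed into a single PAGE_SIZE bit (pinning is done at PAGE_SIZE granularity regardless). A minimal user-space sketch of that computation, not the kernel code itself — the array-based interface and the hard-coded 4096-byte SKETCH_PAGE_SIZE are assumptions standing in for the kernel's domain list and PAGE_SIZE:

```c
/* Standalone sketch of the pgsize_bitmap calculation described above.
 * SKETCH_PAGE_SIZE and the plain-array input are illustrative stand-ins
 * for the kernel's PAGE_SIZE and iommu->domain_list. */
#include <assert.h>
#include <stdint.h>

#define SKETCH_PAGE_SIZE 4096UL
#define SKETCH_PAGE_MASK (~(SKETCH_PAGE_SIZE - 1))

static unsigned long compute_pgsize_bitmap(const unsigned long *domain_bitmaps,
					   int ndomains)
{
	/* Start from "all sizes supported"; each domain can only narrow it. */
	unsigned long bitmap = ~0UL;
	int i;

	for (i = 0; i < ndomains; i++)
		bitmap &= domain_bitmaps[i];

	/* If the IOMMU supports pages smaller than PAGE_SIZE, pinning still
	 * happens at PAGE_SIZE granularity, so advertise PAGE_SIZE instead
	 * of the sub-PAGE_SIZE bits. */
	if (bitmap & ~SKETCH_PAGE_MASK) {
		bitmap &= SKETCH_PAGE_MASK;
		bitmap |= SKETCH_PAGE_SIZE;
	}
	return bitmap;
}
```

For example, intersecting a domain supporting 4K+2M with one supporting only 4K leaves just the 4K bit set, and a lone domain advertising a 1K size has that bit replaced by the 4K bit.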
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio_iommu_type1.c | 88 +++++++++++++++++++++++----------------
 1 file changed, 49 insertions(+), 39 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index fa735047b04d..de17787ffece 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -69,6 +69,7 @@ struct vfio_iommu {
 	struct rb_root		dma_list;
 	struct blocking_notifier_head notifier;
 	unsigned int		dma_avail;
+	uint64_t		pgsize_bitmap;
 	bool			v2;
 	bool			nesting;
 };
@@ -805,15 +806,14 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 	iommu->dma_avail++;
 }
 
-static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
+static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
-	unsigned long bitmap = ULONG_MAX;
 
-	mutex_lock(&iommu->lock);
+	iommu->pgsize_bitmap = ULONG_MAX;
+
 	list_for_each_entry(domain, &iommu->domain_list, next)
-		bitmap &= domain->domain->pgsize_bitmap;
-	mutex_unlock(&iommu->lock);
+		iommu->pgsize_bitmap &= domain->domain->pgsize_bitmap;
 
 	/*
 	 * In case the IOMMU supports page sizes smaller than PAGE_SIZE
@@ -823,12 +823,10 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	 * granularity while iommu driver can use the sub-PAGE_SIZE size
 	 * to map the buffer.
 	 */
-	if (bitmap & ~PAGE_MASK) {
-		bitmap &= PAGE_MASK;
-		bitmap |= PAGE_SIZE;
+	if (iommu->pgsize_bitmap & ~PAGE_MASK) {
+		iommu->pgsize_bitmap &= PAGE_MASK;
+		iommu->pgsize_bitmap |= PAGE_SIZE;
 	}
-
-	return bitmap;
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
@@ -839,19 +837,28 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 	size_t unmapped = 0;
 	int ret = 0, retries = 0;
 
-	mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
+	mutex_lock(&iommu->lock);
+
+	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
+
+	if (unmap->iova & mask) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	if (!unmap->size || unmap->size & mask) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
-	if (unmap->iova & mask)
-		return -EINVAL;
-	if (!unmap->size || unmap->size & mask)
-		return -EINVAL;
 	if (unmap->iova + unmap->size - 1 < unmap->iova ||
-	    unmap->size > SIZE_MAX)
-		return -EINVAL;
+	    unmap->size > SIZE_MAX) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
 	WARN_ON(mask & PAGE_MASK);
 again:
-	mutex_lock(&iommu->lock);
 
 	/*
 	 * vfio-iommu-type1 (v1) - User mappings were coalesced together to
@@ -930,6 +937,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			blocking_notifier_call_chain(&iommu->notifier,
 						    VFIO_IOMMU_NOTIFY_DMA_UNMAP,
 						    &nb_unmap);
+			mutex_lock(&iommu->lock);
 			goto again;
 		}
 		unmapped += dma->size;
@@ -1045,24 +1053,28 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	if (map->size != size || map->vaddr != vaddr || map->iova != iova)
 		return -EINVAL;
 
-	mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
-
-	WARN_ON(mask & PAGE_MASK);
-
 	/* READ/WRITE from device perspective */
 	if (map->flags & VFIO_DMA_MAP_FLAG_WRITE)
 		prot |= IOMMU_WRITE;
 	if (map->flags & VFIO_DMA_MAP_FLAG_READ)
 		prot |= IOMMU_READ;
 
-	if (!prot || !size || (size | iova | vaddr) & mask)
-		return -EINVAL;
+	mutex_lock(&iommu->lock);
 
-	/* Don't allow IOVA or virtual address wrap */
-	if (iova + size - 1 < iova || vaddr + size - 1 < vaddr)
-		return -EINVAL;
+	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
 
-	mutex_lock(&iommu->lock);
+	WARN_ON(mask & PAGE_MASK);
+
+	if (!prot || !size || (size | iova | vaddr) & mask) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	/* Don't allow IOVA or virtual address wrap */
+	if (iova + size - 1 < iova || vaddr + size - 1 < vaddr) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
 
 	if (vfio_find_dma(iommu, iova, size)) {
 		ret = -EEXIST;
@@ -1668,6 +1680,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		if (!iommu->external_domain) {
 			INIT_LIST_HEAD(&domain->group_list);
 			iommu->external_domain = domain;
+			vfio_pgsize_bitmap(iommu);
 		} else {
 			kfree(domain);
 		}
@@ -1793,6 +1806,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	list_add(&domain->next, &iommu->domain_list);
+	vfio_pgsize_bitmap(iommu);
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
@@ -2004,6 +2018,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			list_del(&domain->next);
 			kfree(domain);
 			vfio_iommu_aper_expand(iommu, &iova_copy);
+			vfio_pgsize_bitmap(iommu);
 		}
 		break;
 	}
@@ -2136,8 +2151,6 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 	size_t size;
 	int iovas = 0, i = 0, ret;
 
-	mutex_lock(&iommu->lock);
-
 	list_for_each_entry(iova, &iommu->iova_list, list)
 		iovas++;
 
@@ -2146,17 +2159,14 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 		 * Return 0 as a container with a single mdev device
 		 * will have an empty list
 		 */
-		ret = 0;
-		goto out_unlock;
+		return 0;
 	}
 
 	size = sizeof(*cap_iovas) + (iovas * sizeof(*cap_iovas->iova_ranges));
 
 	cap_iovas = kzalloc(size, GFP_KERNEL);
-	if (!cap_iovas) {
-		ret = -ENOMEM;
-		goto out_unlock;
-	}
+	if (!cap_iovas)
+		return -ENOMEM;
 
 	cap_iovas->nr_iovas = iovas;
 
@@ -2169,8 +2179,6 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 	ret = vfio_iommu_iova_add_cap(caps, cap_iovas, size);
 
 	kfree(cap_iovas);
-out_unlock:
-	mutex_unlock(&iommu->lock);
 	return ret;
 }
 
@@ -2215,11 +2223,13 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 			info.cap_offset = 0; /* output, no-recopy necessary */
 		}
 
+		mutex_lock(&iommu->lock);
 		info.flags = VFIO_IOMMU_INFO_PGSIZES;
 
-		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
+		info.iova_pgsizes = iommu->pgsize_bitmap;
 
 		ret = vfio_iommu_iova_build_caps(iommu, &caps);
+		mutex_unlock(&iommu->lock);
 		if (ret)
 			return ret;
 
-- 
2.7.0