From: Kirti Wankhede <kwankhede@nvidia.com>
Subject: [PATCH Kernel v19 3/8] vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
Date: Thu, 14 May 2020 01:34:34 +0530
Message-ID: <1589400279-28522-4-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1589400279-28522-1-git-send-email-kwankhede@nvidia.com>
References: <1589400279-28522-1-git-send-email-kwankhede@nvidia.com>

Calculate and cache pgsize_bitmap when iommu->domain_list is updated.

Add iommu->lock protection when cached pgsize_bitmap is accessed.
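As a minimal userspace sketch of the caching idea (hypothetical `sketch_*` names; the real code lives in drivers/vfio/vfio_iommu_type1.c and uses the kernel's list and locking primitives), the cached bitmap is the intersection of every attached domain's page-size bitmap, with sub-PAGE_SIZE support promoted to PAGE_SIZE granularity:

```c
/*
 * Userspace sketch of the caching pattern this patch introduces:
 * recompute the intersection of per-domain page-size bitmaps once,
 * when the domain list changes, instead of on every lookup.
 * Names and types are simplified stand-ins, not kernel code.
 */
#include <stdint.h>
#include <limits.h>

#define SKETCH_PAGE_SIZE 4096UL
#define SKETCH_PAGE_MASK (~(SKETCH_PAGE_SIZE - 1))

struct sketch_domain {
	unsigned long pgsize_bitmap;	/* page sizes this domain supports */
	struct sketch_domain *next;
};

struct sketch_iommu {
	struct sketch_domain *domain_list;
	unsigned long pgsize_bitmap;	/* cached intersection */
};

/* Mirrors what vfio_pgsize_bitmap() computes: AND all domains' bitmaps. */
static void sketch_update_pgsize_bitmap(struct sketch_iommu *iommu)
{
	struct sketch_domain *d;

	iommu->pgsize_bitmap = ULONG_MAX;
	for (d = iommu->domain_list; d; d = d->next)
		iommu->pgsize_bitmap &= d->pgsize_bitmap;

	/* Promote sub-PAGE_SIZE support to PAGE_SIZE granularity. */
	if (iommu->pgsize_bitmap & ~SKETCH_PAGE_MASK) {
		iommu->pgsize_bitmap &= SKETCH_PAGE_MASK;
		iommu->pgsize_bitmap |= SKETCH_PAGE_SIZE;
	}
}
```

Calling `sketch_update_pgsize_bitmap()` at each attach/detach, as the patch does with `vfio_pgsize_bitmap()`, keeps the cached value valid so readers never walk the domain list.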
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio_iommu_type1.c | 87 +++++++++++++++++++++++-------------
 1 file changed, 48 insertions(+), 39 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index fa735047b04d..6f09fbabed12 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -69,6 +69,7 @@ struct vfio_iommu {
 	struct rb_root		dma_list;
 	struct blocking_notifier_head notifier;
 	unsigned int		dma_avail;
+	uint64_t		pgsize_bitmap;
 	bool			v2;
 	bool			nesting;
 };
@@ -805,15 +806,14 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 	iommu->dma_avail++;
 }
 
-static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
+static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
-	unsigned long bitmap = ULONG_MAX;
 
-	mutex_lock(&iommu->lock);
+	iommu->pgsize_bitmap = ULONG_MAX;
+
 	list_for_each_entry(domain, &iommu->domain_list, next)
-		bitmap &= domain->domain->pgsize_bitmap;
-	mutex_unlock(&iommu->lock);
+		iommu->pgsize_bitmap &= domain->domain->pgsize_bitmap;
 
 	/*
 	 * In case the IOMMU supports page sizes smaller than PAGE_SIZE
@@ -823,12 +823,10 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	 * granularity while iommu driver can use the sub-PAGE_SIZE size
 	 * to map the buffer.
 	 */
-	if (bitmap & ~PAGE_MASK) {
-		bitmap &= PAGE_MASK;
-		bitmap |= PAGE_SIZE;
+	if (iommu->pgsize_bitmap & ~PAGE_MASK) {
+		iommu->pgsize_bitmap &= PAGE_MASK;
+		iommu->pgsize_bitmap |= PAGE_SIZE;
 	}
-
-	return bitmap;
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
@@ -839,19 +837,28 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 	size_t unmapped = 0;
 	int ret = 0, retries = 0;
 
-	mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
+	mutex_lock(&iommu->lock);
+
+	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
+
+	if (unmap->iova & mask) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	if (!unmap->size || unmap->size & mask) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
-	if (unmap->iova & mask)
-		return -EINVAL;
-	if (!unmap->size || unmap->size & mask)
-		return -EINVAL;
 	if (unmap->iova + unmap->size - 1 < unmap->iova ||
-	    unmap->size > SIZE_MAX)
-		return -EINVAL;
+	    unmap->size > SIZE_MAX) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
 	WARN_ON(mask & PAGE_MASK);
 again:
-	mutex_lock(&iommu->lock);
 
 	/*
 	 * vfio-iommu-type1 (v1) - User mappings were coalesced together to
@@ -930,6 +937,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			blocking_notifier_call_chain(&iommu->notifier,
 						    VFIO_IOMMU_NOTIFY_DMA_UNMAP,
 						    &nb_unmap);
+			mutex_lock(&iommu->lock);
 			goto again;
 		}
 		unmapped += dma->size;
@@ -1045,24 +1053,28 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	if (map->size != size || map->vaddr != vaddr || map->iova != iova)
 		return -EINVAL;
 
-	mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
-
-	WARN_ON(mask & PAGE_MASK);
-
 	/* READ/WRITE from device perspective */
 	if (map->flags & VFIO_DMA_MAP_FLAG_WRITE)
 		prot |= IOMMU_WRITE;
 	if (map->flags & VFIO_DMA_MAP_FLAG_READ)
 		prot |= IOMMU_READ;
 
-	if (!prot || !size || (size | iova | vaddr) & mask)
-		return -EINVAL;
+	mutex_lock(&iommu->lock);
 
-	/* Don't allow IOVA or virtual address wrap */
-	if (iova + size - 1 < iova || vaddr + size - 1 < vaddr)
-		return -EINVAL;
+	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
 
-	mutex_lock(&iommu->lock);
+	WARN_ON(mask & PAGE_MASK);
+
+	if (!prot || !size || (size | iova | vaddr) & mask) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	/* Don't allow IOVA or virtual address wrap */
+	if (iova + size - 1 < iova || vaddr + size - 1 < vaddr) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
 
 	if (vfio_find_dma(iommu, iova, size)) {
 		ret = -EEXIST;
@@ -1793,6 +1805,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	list_add(&domain->next, &iommu->domain_list);
+	vfio_pgsize_bitmap(iommu);
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
@@ -2004,6 +2017,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			list_del(&domain->next);
 			kfree(domain);
 			vfio_iommu_aper_expand(iommu, &iova_copy);
+			vfio_pgsize_bitmap(iommu);
 		}
 		break;
 	}
@@ -2136,8 +2150,6 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 	size_t size;
 	int iovas = 0, i = 0, ret;
 
-	mutex_lock(&iommu->lock);
-
 	list_for_each_entry(iova, &iommu->iova_list, list)
 		iovas++;
 
@@ -2146,17 +2158,14 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 		 * Return 0 as a container with a single mdev device
 		 * will have an empty list
 		 */
-		ret = 0;
-		goto out_unlock;
+		return 0;
 	}
 
 	size = sizeof(*cap_iovas) + (iovas * sizeof(*cap_iovas->iova_ranges));
 
 	cap_iovas = kzalloc(size, GFP_KERNEL);
-	if (!cap_iovas) {
-		ret = -ENOMEM;
-		goto out_unlock;
-	}
+	if (!cap_iovas)
+		return -ENOMEM;
 
 	cap_iovas->nr_iovas = iovas;
 
@@ -2169,8 +2178,6 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 	ret = vfio_iommu_iova_add_cap(caps, cap_iovas, size);
 
 	kfree(cap_iovas);
-out_unlock:
-	mutex_unlock(&iommu->lock);
 	return ret;
 }
 
@@ -2215,11 +2222,13 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 			info.cap_offset = 0; /* output, no-recopy necessary */
 		}
 
+		mutex_lock(&iommu->lock);
 		info.flags = VFIO_IOMMU_INFO_PGSIZES;
 
-		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
+		info.iova_pgsizes = iommu->pgsize_bitmap;
 
 		ret = vfio_iommu_iova_build_caps(iommu, &caps);
+		mutex_unlock(&iommu->lock);
 		if (ret)
 			return ret;
 
-- 
2.7.0
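For reference, the locking rule the commit message describes (readers of the cached pgsize_bitmap must hold iommu->lock, since a concurrent attach/detach updates the domain list and the cache together) can be sketched in userspace with pthreads. The `sketch_*` names are hypothetical stand-ins, not the kernel API:

```c
/*
 * Userspace sketch (pthreads) of the locking discipline in the patch:
 * the cached pgsize_bitmap is only read or written under iommu->lock.
 * A plain mutex stands in for the kernel's struct mutex.
 */
#include <pthread.h>
#include <stdint.h>

struct sketch_iommu {
	pthread_mutex_t lock;
	uint64_t pgsize_bitmap;		/* cached; protected by lock */
};

/* Reader side, as in the VFIO_IOMMU_GET_INFO path of the patch. */
static uint64_t sketch_read_pgsizes(struct sketch_iommu *iommu)
{
	uint64_t v;

	pthread_mutex_lock(&iommu->lock);
	v = iommu->pgsize_bitmap;
	pthread_mutex_unlock(&iommu->lock);
	return v;
}

/* Writer side, as after list_add()/list_del() on the domain list. */
static void sketch_set_pgsizes(struct sketch_iommu *iommu, uint64_t bitmap)
{
	pthread_mutex_lock(&iommu->lock);
	iommu->pgsize_bitmap = bitmap;
	pthread_mutex_unlock(&iommu->lock);
}
```

This is also why the patch drops locking from vfio_iommu_iova_build_caps(): its single caller now takes iommu->lock around both the cached-bitmap read and the capability build, so locking inside would deadlock.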