From nobody Fri Dec 19 12:21:33 2025
From: Alex Mastro
Date: Tue, 28 Oct 2025 09:15:02 -0700
Subject: [PATCH v6 3/5] vfio/type1: handle DMA map/unmap up to the addressable limit
Message-ID: <20251028-fix-unmap-v6-3-2542b96bcc8e@fb.com>
References: <20251028-fix-unmap-v6-0-2542b96bcc8e@fb.com>
In-Reply-To: <20251028-fix-unmap-v6-0-2542b96bcc8e@fb.com>
To: Alex Williamson
CC: Jason Gunthorpe, Alejandro Jimenez, David Matlack, Alex Mastro, Jason Gunthorpe
X-Mailer: b4 0.13.0

Before this commit, it was possible to create end-of-address-space
mappings, but unmapping them via VFIO_IOMMU_UNMAP_DMA, replaying them
for newly added iommu domains, and querying their dirty pages via
VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP were all broken by comparisons
against (iova + size) expressions, which overflow to zero.

Additionally, there appears to be a page pinning leak in the
vfio_iommu_type1_release() path: the loop body in vfio_unmap_unpin()
where the unmap_unpin_*() helpers are called is never entered, because
(iova + size) overflows to zero.

This commit handles DMA map/unmap operations up to the addressable
limit by comparing against inclusive end-of-range limits, and by
iterating over relative offsets within a range's size rather than over
absolute addresses.

vfio_link_dma() inserts a zero-sized vfio_dma into the rb-tree, and is
only used for that purpose, so discard the size from consideration for
the insertion point.
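For illustration only (not part of the patch): a minimal, self-contained
userspace C sketch of the failure mode, assuming a hypothetical 64-bit
dma_addr_t stand-in and a 4 KiB PAGE_SIZE. It shows how the exclusive
end (iova + size) wraps to zero for a mapping whose last byte is the top
of the address space, while the inclusive end (iova + size - 1) and the
offset-based traversal used below stay well defined:

  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t dma_addr_t;	/* stand-in for the kernel type */
  #define PAGE_SIZE 4096ULL

  int main(void)
  {
  	/* A one-page mapping whose last byte is the top of the address space. */
  	dma_addr_t iova = ~(dma_addr_t)0 - PAGE_SIZE + 1;
  	uint64_t size = PAGE_SIZE;

  	/* Exclusive end: wraps to 0, so checks like 'iova < iova + size'
  	 * are never true for this mapping. */
  	printf("iova + size     = 0x%llx\n",
  	       (unsigned long long)(iova + size));		/* prints 0x0 */

  	/* Inclusive end: still representable, comparisons behave. */
  	printf("iova + size - 1 = 0x%llx\n",
  	       (unsigned long long)(iova + size - 1));

  	/* Relative traversal: iterate offsets within size, never computing
  	 * an address past the inclusive end, so nothing can wrap. */
  	for (uint64_t pos = 0; pos < size; pos += PAGE_SIZE) {
  		dma_addr_t cur = iova + pos;
  		printf("visiting iova 0x%llx\n", (unsigned long long)cur);
  	}
  	return 0;
  }

Compiled with any C99 compiler, this prints 0x0 for the exclusive end
and 0xffffffffffffffff for the inclusive one, which is the same pattern
the hunks below apply to vfio_find_dma() and friends.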
Tested-by: Alejandro Jimenez
Fixes: 73fa0d10d077 ("vfio: Type1 IOMMU implementation")
Reviewed-by: Jason Gunthorpe
Reviewed-by: Alejandro Jimenez
Signed-off-by: Alex Mastro
---
 drivers/vfio/vfio_iommu_type1.c | 77 ++++++++++++++++++++++--------------
 1 file changed, 42 insertions(+), 35 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 48bcc0633d44..5167bec14e36 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -168,12 +168,14 @@ static struct vfio_dma *vfio_find_dma(struct vfio_iommu *iommu,
 {
 	struct rb_node *node = iommu->dma_list.rb_node;
 
+	WARN_ON(!size);
+
 	while (node) {
 		struct vfio_dma *dma = rb_entry(node, struct vfio_dma, node);
 
-		if (start + size <= dma->iova)
+		if (start + size - 1 < dma->iova)
 			node = node->rb_left;
-		else if (start >= dma->iova + dma->size)
+		else if (start > dma->iova + dma->size - 1)
 			node = node->rb_right;
 		else
 			return dma;
@@ -183,16 +185,19 @@ static struct vfio_dma *vfio_find_dma(struct vfio_iommu *iommu,
 }
 
 static struct rb_node *vfio_find_dma_first_node(struct vfio_iommu *iommu,
-						dma_addr_t start, size_t size)
+						dma_addr_t start,
+						dma_addr_t end)
 {
 	struct rb_node *res = NULL;
 	struct rb_node *node = iommu->dma_list.rb_node;
 	struct vfio_dma *dma_res = NULL;
 
+	WARN_ON(end < start);
+
 	while (node) {
 		struct vfio_dma *dma = rb_entry(node, struct vfio_dma, node);
 
-		if (start < dma->iova + dma->size) {
+		if (start <= dma->iova + dma->size - 1) {
 			res = node;
 			dma_res = dma;
 			if (start >= dma->iova)
@@ -202,7 +207,7 @@ static struct rb_node *vfio_find_dma_first_node(struct vfio_iommu *iommu,
 			node = node->rb_right;
 		}
 	}
-	if (res && size && dma_res->iova >= start + size)
+	if (res && dma_res->iova > end)
 		res = NULL;
 	return res;
 }
@@ -212,11 +217,13 @@ static void vfio_link_dma(struct vfio_iommu *iommu, struct vfio_dma *new)
 	struct rb_node **link = &iommu->dma_list.rb_node, *parent = NULL;
 	struct vfio_dma *dma;
 
+	WARN_ON(new->size != 0);
+
 	while (*link) {
 		parent = *link;
 		dma = rb_entry(parent, struct vfio_dma, node);
 
-		if (new->iova + new->size <= dma->iova)
+		if (new->iova <= dma->iova)
 			link = &(*link)->rb_left;
 		else
 			link = &(*link)->rb_right;
@@ -1141,12 +1148,12 @@ static size_t unmap_unpin_slow(struct vfio_domain *domain,
 static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
 			     bool do_accounting)
 {
-	dma_addr_t iova = dma->iova, end = dma->iova + dma->size;
 	struct vfio_domain *domain, *d;
 	LIST_HEAD(unmapped_region_list);
 	struct iommu_iotlb_gather iotlb_gather;
 	int unmapped_region_cnt = 0;
 	long unlocked = 0;
+	size_t pos = 0;
 
 	if (!dma->size)
 		return 0;
@@ -1170,13 +1177,14 @@ static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
 	}
 
 	iommu_iotlb_gather_init(&iotlb_gather);
-	while (iova < end) {
+	while (pos < dma->size) {
 		size_t unmapped, len;
 		phys_addr_t phys, next;
+		dma_addr_t iova = dma->iova + pos;
 
 		phys = iommu_iova_to_phys(domain->domain, iova);
 		if (WARN_ON(!phys)) {
-			iova += PAGE_SIZE;
+			pos += PAGE_SIZE;
 			continue;
 		}
 
@@ -1185,7 +1193,7 @@ static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
 		 * may require hardware cache flushing, try to find the
 		 * largest contiguous physical memory chunk to unmap.
 		 */
-		for (len = PAGE_SIZE; iova + len < end; len += PAGE_SIZE) {
+		for (len = PAGE_SIZE; pos + len < dma->size; len += PAGE_SIZE) {
 			next = iommu_iova_to_phys(domain->domain, iova + len);
 			if (next != phys + len)
 				break;
@@ -1206,7 +1214,7 @@ static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
 			break;
 		}
 
-		iova += unmapped;
+		pos += unmapped;
 	}
 
 	dma->iommu_mapped = false;
@@ -1298,7 +1306,7 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 }
 
 static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
-				  dma_addr_t iova, size_t size, size_t pgsize)
+				  dma_addr_t iova, dma_addr_t iova_end, size_t pgsize)
 {
 	struct vfio_dma *dma;
 	struct rb_node *n;
@@ -1315,8 +1323,8 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 	if (dma && dma->iova != iova)
 		return -EINVAL;
 
-	dma = vfio_find_dma(iommu, iova + size - 1, 0);
-	if (dma && dma->iova + dma->size != iova + size)
+	dma = vfio_find_dma(iommu, iova_end, 1);
+	if (dma && dma->iova + dma->size - 1 != iova_end)
 		return -EINVAL;
 
 	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
@@ -1325,7 +1333,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 		if (dma->iova < iova)
 			continue;
 
-		if (dma->iova > iova + size - 1)
+		if (dma->iova > iova_end)
 			break;
 
 		ret = update_user_bitmap(bitmap, iommu, dma, iova, pgsize);
@@ -1418,7 +1426,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 	if (unmap_all) {
 		if (iova || size)
 			goto unlock;
-		size = SIZE_MAX;
+		iova_end = ~(dma_addr_t)0;
 	} else {
 		if (!size || size & (pgsize - 1))
 			goto unlock;
@@ -1473,17 +1481,17 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		if (dma && dma->iova != iova)
 			goto unlock;
 
-		dma = vfio_find_dma(iommu, iova_end, 0);
-		if (dma && dma->iova + dma->size != iova + size)
+		dma = vfio_find_dma(iommu, iova_end, 1);
+		if (dma && dma->iova + dma->size - 1 != iova_end)
 			goto unlock;
 	}
 
 	ret = 0;
-	n = first_n = vfio_find_dma_first_node(iommu, iova, size);
+	n = first_n = vfio_find_dma_first_node(iommu, iova, iova_end);
 
 	while (n) {
 		dma = rb_entry(n, struct vfio_dma, node);
-		if (dma->iova >= iova + size)
+		if (dma->iova > iova_end)
 			break;
 
 		if (!iommu->v2 && iova > dma->iova)
@@ -1813,12 +1821,12 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 
 	for (; n; n = rb_next(n)) {
 		struct vfio_dma *dma;
-		dma_addr_t iova;
+		size_t pos = 0;
 
 		dma = rb_entry(n, struct vfio_dma, node);
-		iova = dma->iova;
 
-		while (iova < dma->iova + dma->size) {
+		while (pos < dma->size) {
+			dma_addr_t iova = dma->iova + pos;
 			phys_addr_t phys;
 			size_t size;
 
@@ -1834,14 +1842,14 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 				phys = iommu_iova_to_phys(d->domain, iova);
 
 				if (WARN_ON(!phys)) {
-					iova += PAGE_SIZE;
+					pos += PAGE_SIZE;
 					continue;
 				}
 
 				size = PAGE_SIZE;
 				p = phys + size;
 				i = iova + size;
-				while (i < dma->iova + dma->size &&
+				while (pos + size < dma->size &&
 				       p == iommu_iova_to_phys(d->domain, i)) {
 					size += PAGE_SIZE;
 					p += PAGE_SIZE;
@@ -1849,9 +1857,8 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 				}
 			} else {
 				unsigned long pfn;
-				unsigned long vaddr = dma->vaddr +
-						(iova - dma->iova);
-				size_t n = dma->iova + dma->size - iova;
+				unsigned long vaddr = dma->vaddr + pos;
+				size_t n = dma->size - pos;
 				long npage;
 
 				npage = vfio_pin_pages_remote(dma, vaddr,
@@ -1882,7 +1889,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 				goto unwind;
 			}
 
-			iova += size;
+			pos += size;
 		}
 	}
 
@@ -1899,29 +1906,29 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 unwind:
 	for (; n; n = rb_prev(n)) {
 		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
-		dma_addr_t iova;
+		size_t pos = 0;
 
 		if (dma->iommu_mapped) {
 			iommu_unmap(domain->domain, dma->iova, dma->size);
 			continue;
 		}
 
-		iova = dma->iova;
-		while (iova < dma->iova + dma->size) {
+		while (pos < dma->size) {
+			dma_addr_t iova = dma->iova + pos;
 			phys_addr_t phys, p;
 			size_t size;
 			dma_addr_t i;
 
 			phys = iommu_iova_to_phys(domain->domain, iova);
 			if (!phys) {
-				iova += PAGE_SIZE;
+				pos += PAGE_SIZE;
 				continue;
 			}
 
 			size = PAGE_SIZE;
 			p = phys + size;
 			i = iova + size;
-			while (i < dma->iova + dma->size &&
+			while (pos + size < dma->size &&
 			       p == iommu_iova_to_phys(domain->domain, i)) {
 				size += PAGE_SIZE;
 				p += PAGE_SIZE;
@@ -3059,7 +3066,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 
 	if (iommu->dirty_page_tracking)
 		ret = vfio_iova_dirty_bitmap(range.bitmap.data,
-					     iommu, iova, size,
+					     iommu, iova, iova_end,
 					     range.bitmap.pgsize);
 	else
 		ret = -EINVAL;
-- 
2.47.3