From: Adrián Larumbe <adrian.larumbe@collabora.com>
To: linux-kernel@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org, Steven Price, Boris Brezillon,
 kernel@collabora.com, Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Subject: [PATCH v2 1/1] drm/panthor: Support partial unmaps of huge pages
Date: Thu, 27 Nov 2025 03:50:13 +0000
Message-ID: <20251127035021.624045-2-adrian.larumbe@collabora.com>
In-Reply-To: <20251127035021.624045-1-adrian.larumbe@collabora.com>
References: <20251127035021.624045-1-adrian.larumbe@collabora.com>

Commit 33729a5fc0ca ("iommu/io-pgtable-arm: Remove split on unmap behavior")
removed the handling of partial unmaps of huge IOPTEs. For Panthor, this means
that a VM_BIND unmap operation on a memory region whose start address and size
are not 2MiB aligned will, if it intersects a huge page, cause the ARM IOMMU
page table management code to fail and raise a warning.

For lack of a better alternative, it is currently best to have Panthor handle
partial unmaps at the driver level, by unmapping the entire huge pages involved
and then remapping the difference between them and the requested unmap region.
This could change in the future, when the VM_BIND uAPI is expanded to enforce
huge page alignment and map/unmap operational constraints that would render
this code unnecessary.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_mmu.c | 76 +++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 183da30fa500..41d7974c95ea 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2110,6 +2110,57 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
 	return 0;
 }
 
+static bool
+is_huge_page(const struct panthor_vma *unmap_vma, u64 addr)
+{
+	const struct page *pg;
+	pgoff_t bo_offset;
+
+	bo_offset = addr - unmap_vma->base.va.addr + unmap_vma->base.gem.offset;
+	pg = to_panthor_bo(unmap_vma->base.gem.obj)->base.pages[bo_offset >> PAGE_SHIFT];
+
+	return (folio_order(page_folio(pg)) >= PMD_ORDER);
+}
+
+struct remap_params {
+	u64 prev_remap_start, prev_remap_range;
+	u64 next_remap_start, next_remap_range;
+};
+
+static struct remap_params
+get_map_unmap_intervals(const struct drm_gpuva_op_remap *op,
+			const struct panthor_vma *unmap_vma,
+			u64 *unmap_start, u64 *unmap_range)
+{
+	u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
+	struct remap_params params = {0};
+
+	drm_gpuva_op_remap_to_unmap_range(op, unmap_start, unmap_range);
+	unmap_end = *unmap_start + *unmap_range;
+
+	aligned_unmap_start = ALIGN_DOWN(*unmap_start, SZ_2M);
+
+	if (aligned_unmap_start < *unmap_start &&
+	    unmap_vma->base.va.addr <= aligned_unmap_start &&
+	    is_huge_page(unmap_vma, *unmap_start)) {
+		params.prev_remap_start = aligned_unmap_start;
+		params.prev_remap_range = *unmap_start & (SZ_2M - 1);
+		*unmap_range += *unmap_start - aligned_unmap_start;
+		*unmap_start = aligned_unmap_start;
+	}
+
+	aligned_unmap_end = ALIGN(unmap_end, SZ_2M);
+
+	if (aligned_unmap_end > unmap_end &&
+	    (unmap_vma->base.va.addr + unmap_vma->base.va.range >= aligned_unmap_end) &&
+	    is_huge_page(unmap_vma, unmap_end - 1)) {
+		*unmap_range += params.next_remap_range = aligned_unmap_end - unmap_end;
+		params.next_remap_start = unmap_end;
+	}
+
+	return params;
+}
+
 static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
 				       void *priv)
 {
@@ -2118,19 +2169,44 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
 	struct panthor_vm_op_ctx *op_ctx = vm->op_ctx;
 	struct panthor_vma *prev_vma = NULL, *next_vma = NULL;
 	u64 unmap_start, unmap_range;
+	struct remap_params params;
 	int ret;
 
 	drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
+
+	/*
+	 * ARM IOMMU page table management code disallows partial unmaps of huge
+	 * pages, so when a partial unmap is requested, we must first unmap the
+	 * entire huge page and then remap the difference between the huge page
+	 * and the requested unmap region. Calculating the right offsets and
+	 * ranges for the different unmap and map operations is the
+	 * responsibility of the following function.
+	 */
+	params = get_map_unmap_intervals(&op->remap, unmap_vma, &unmap_start, &unmap_range);
+
 	ret = panthor_vm_unmap_pages(vm, unmap_start, unmap_range);
 	if (ret)
 		return ret;
 
 	if (op->remap.prev) {
+		ret = panthor_vm_map_pages(vm, params.prev_remap_start,
+					   flags_to_prot(unmap_vma->flags),
+					   to_drm_gem_shmem_obj(op->remap.prev->gem.obj)->sgt,
+					   op->remap.prev->gem.offset, params.prev_remap_range);
+		if (ret)
+			return ret;
+
 		prev_vma = panthor_vm_op_ctx_get_vma(op_ctx);
 		panthor_vma_init(prev_vma, unmap_vma->flags);
 	}
 
 	if (op->remap.next) {
+		ret = panthor_vm_map_pages(vm, params.next_remap_start,
+					   flags_to_prot(unmap_vma->flags),
+					   to_drm_gem_shmem_obj(op->remap.next->gem.obj)->sgt,
+					   op->remap.next->gem.offset, params.next_remap_range);
+		if (ret)
+			return ret;
+
 		next_vma = panthor_vm_op_ctx_get_vma(op_ctx);
 		panthor_vma_init(next_vma, unmap_vma->flags);
 	}
-- 
2.51.2