From: Boris Brezillon
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, dri-devel@lists.freedesktop.org
Cc: David Airlie, Simona Vetter, linux-kernel@vger.kernel.org, Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lance Yang, linux-mm@kvack.org, Boris Brezillon, kernel@collabora.com, Biju Das, Tommaso Merciai
Subject: [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
Date: Fri, 20 Mar 2026 16:19:13 +0100
Message-ID: <20260320151914.586945-1-boris.brezillon@collabora.com>

Unlike PTEs, which are automatically upgraded to writeable entries if
.pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
and we currently pretend to have handled the make-writeable request
even though we only ever map things read-only. Make sure we pass the
proper "write" info to vmf_insert_pfn_pmd() in that case. This also
means we have to record the mkwrite event in the .huge_fault() path
now.
Move the dirty tracking logic to a drm_gem_shmem_record_mkwrite()
helper so it can also be called from drm_gem_shmem_pfn_mkwrite().

Note that this wasn't a problem before commit 28e3918179aa
("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
the pgprot was not lowered to read-only before that commit (see the
vma_wants_writenotify() call in vma_set_page_prot()).

Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
Signed-off-by: Boris Brezillon
Cc: Biju Das
Cc: Thomas Zimmermann
Cc: Tommaso Merciai
Acked-by: Thomas Zimmermann
Reviewed-by: Loïc Molinari
Tested-by: Tommaso Merciai
---
This patch is based on drm-tip [2], because that's the only branch
that has both [1] and the dirty tracking changes that live in
drm-misc-next.

Also added the THP maintainers in Cc, so I can hopefully get some
feedback on the fix. For instance, I'm still unsure
drm_gem_shmem_pfn_mkwrite() is race-free (do we need some locking
there? should we call folio_mark_dirty_lock()? should we call the
fault handler directly from there and have all the dirty tracking in
this .[huge_]fault path?).
[1] https://yhbt.net/lore/dri-devel/20260319015224.46896-1-pedrodemargomes@gmail.com/
[2] https://gitlab.freedesktop.org/drm/tip
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 46 ++++++++++++++++++--------
 1 file changed, 32 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2062ca607833..545933c7f712 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -554,6 +554,21 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
+static void drm_gem_shmem_record_mkwrite(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct drm_gem_object *obj = vma->vm_private_data;
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	loff_t num_pages = obj->size >> PAGE_SHIFT;
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+
+	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
+		return;
+
+	file_update_time(vma->vm_file);
+	folio_mark_dirty(page_folio(shmem->pages[page_offset]));
+}
+
 static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
 				 unsigned long pfn)
 {
@@ -566,8 +581,23 @@ static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
 
 	if (aligned &&
 	    folio_test_pmd_mappable(page_folio(pfn_to_page(pfn)))) {
+		vm_fault_t ret;
+
 		pfn &= PMD_MASK >> PAGE_SHIFT;
-		return vmf_insert_pfn_pmd(vmf, pfn, false);
+
+		/* Unlike PTEs which are automatically upgraded to
+		 * writeable entries, the PMD upgrades go through
+		 * .huge_fault(). Make sure we pass the "write" info
+		 * along in that case.
+		 * This also means we have to record the write fault
+		 * here, instead of in .pfn_mkwrite().
+		 */
+		ret = vmf_insert_pfn_pmd(vmf, pfn,
+					 vmf->flags & FAULT_FLAG_WRITE);
+		if (ret == VM_FAULT_NOPAGE && (vmf->flags & FAULT_FLAG_WRITE))
+			drm_gem_shmem_record_mkwrite(vmf);
+
+		return ret;
 	}
 #endif
 }
@@ -655,19 +685,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 
 static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
-	struct drm_gem_object *obj = vma->vm_private_data;
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	loff_t num_pages = obj->size >> PAGE_SHIFT;
-	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
-
-	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
-		return VM_FAULT_SIGBUS;
-
-	file_update_time(vma->vm_file);
-
-	folio_mark_dirty(page_folio(shmem->pages[page_offset]));
-
+	drm_gem_shmem_record_mkwrite(vmf);
 	return 0;
 }
 
-- 
2.53.0