From: Pedro Demarchi Gomes
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Boris Brezillon, Loic Molinari
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Pedro Demarchi Gomes
Subject: [PATCH v4] drm/shmem-helper: Fix huge page mapping in fault handler
Date: Wed, 18 Mar 2026 22:52:24 -0300
Message-ID: <20260319015224.46896-1-pedrodemargomes@gmail.com>
X-Mailer: git-send-email 2.47.3

When running ./tools/testing/selftests/mm/split_huge_page_test multiple
times with /sys/kernel/mm/transparent_hugepage/shmem_enabled and
/sys/kernel/mm/transparent_hugepage/enabled both set to "always", the
following BUG occurs:

[ 232.728858] ------------[ cut here ]------------
[ 232.729458] kernel BUG at mm/memory.c:2276!
[ 232.729726] Oops: invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI
[ 232.730217] CPU: 19 UID: 60578 PID: 1497 Comm: llvmpipe-9 Not tainted 7.0.0-rc1mm-new+ #19 PREEMPT(lazy)
[ 232.730855] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-9.fc43 06/10/2025
[ 232.731360] RIP: 0010:walk_to_pmd+0x29e/0x3c0
[ 232.731569] Code: d8 5b 5d 41 5c 41 5d 41 5e 41 5f c3 cc cc cc cc 48 89 ea 48 89 de 4c 89 f7 e8 ae 85 ff ff 85 c0 0f 84 1f fe ff ff 31 db eb d0 <0f> 0b 48 89 ea 48 89 de 4c 89 f7 e8 92 8b ff ff 85 c0 75 e8 48 b8
[ 232.732614] RSP: 0000:ffff8881aa6ff9a8 EFLAGS: 00010282
[ 232.732991] RAX: 8000000142e002e7 RBX: ffff8881433cae10 RCX: dffffc0000000000
[ 232.733362] RDX: 0000000000000000 RSI: 00007fb47840b000 RDI: 8000000142e002e7
[ 232.733801] RBP: 00007fb47840b000 R08: 0000000000000000 R09: 1ffff110354dff46
[ 232.734168] R10: fffffbfff0cb921d R11: 00000000910da5ce R12: 1ffffffff0c1fcdd
[ 232.734459] R13: 1ffffffff0c23f36 R14: ffff888171628040 R15: 0000000000000000
[ 232.734861] FS:  00007fb4907f86c0(0000) GS:ffff888791f2c000(0000) knlGS:0000000000000000
[ 232.735265] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 232.735548] CR2: 00007fb47840be00 CR3: 000000015e6dc000 CR4: 00000000000006f0
[ 232.736031] Call Trace:
[ 232.736273]  <TASK>
[ 232.736500]  get_locked_pte+0x1f/0xa0
[ 232.736878]  insert_pfn+0x9f/0x350
[ 232.737190]  ? __pfx_pat_pagerange_is_ram+0x10/0x10
[ 232.737614]  ? __pfx_insert_pfn+0x10/0x10
[ 232.737990]  ? __pfx_css_rstat_updated+0x10/0x10
[ 232.738281]  ? __pfx_pfn_modify_allowed+0x10/0x10
[ 232.738552]  ? lookup_memtype+0x62/0x180
[ 232.738761]  vmf_insert_pfn_prot+0x14b/0x340
[ 232.739012]  ? __pfx_vmf_insert_pfn_prot+0x10/0x10
[ 232.739247]  ? __pfx___might_resched+0x10/0x10
[ 232.739475]  drm_gem_shmem_fault.cold+0x18/0x39
[ 232.739677]  ? rcu_read_unlock+0x20/0x70
[ 232.739882]  __do_fault+0x251/0x7b0
[ 232.740028]  do_fault+0x6e1/0xc00
[ 232.740167]  ? __lock_acquire+0x590/0xc40
[ 232.740335]  handle_pte_fault+0x439/0x760
[ 232.740498]  ? mtree_range_walk+0x252/0xae0
[ 232.740669]  ? __pfx_handle_pte_fault+0x10/0x10
[ 232.740899]  __handle_mm_fault+0xa02/0xf30
[ 232.741066]  ? __pfx___handle_mm_fault+0x10/0x10
[ 232.741255]  ? find_vma+0xa1/0x120
[ 232.741403]  handle_mm_fault+0x2bf/0x8f0
[ 232.741564]  do_user_addr_fault+0x2d3/0xed0
[ 232.741736]  ? trace_page_fault_user+0x1bf/0x240
[ 232.741969]  exc_page_fault+0x87/0x120
[ 232.742124]  asm_exc_page_fault+0x26/0x30
[ 232.742288] RIP: 0033:0x7fb4d73ed546
[ 232.742441] Code: 66 41 0f 6f fb 66 44 0f 6d dc 66 44 0f 6f c6 66 41 0f 6d f1 66 0f 6c fc 66 45 0f 6c c1 66 44 0f 6f c9 66 0f 6d ca 66 0f db f0 <66> 0f df 04 08 66 44 0f 6c ca 66 45 0f db c2 66 44 0f df 10 66 44
[ 232.743193] RSP: 002b:00007fb4907f68a0 EFLAGS: 00010206
[ 232.743565] RAX: 00007fb47840aa00 RBX: 00007fb4d73ec070 RCX: 0000000000001400
[ 232.743871] RDX: 0000000000002800 RSI: 0000000000003c00 RDI: 0000000000000001
[ 232.744150] RBP: 0000000000000004 R08: 0000000000001400 R09: 00007fb4d73ec060
[ 232.744433] R10: 000055f0261a4288 R11: 00007fb4c013da40 R12: 0000000000000008
[ 232.744712] R13: 0000000000000000 R14: 4332322132212110 R15: 0000000000000004
[ 232.746616]  </TASK>
[ 232.746711] Modules linked in: nft_nat nft_masq veth bridge stp llc snd_seq_dummy snd_hrtimer snd_seq snd_seq_device snd_timer snd soundcore overlay rfkill nf_conntrack_netbios_ns nf_conntrack_broadcast nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr ppdev 9pnet_virtio 9pnet parport_pc i2c_piix4 netfs pcspkr parport i2c_smbus joydev sunrpc vfat fat loop dm_multipath nfnetlink vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport zram lz4hc_compress vmw_vmci lz4_compress vsock e1000 bochs serio_raw ata_generic pata_acpi scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev fuse qemu_fw_cfg
[ 232.749308] ---[ end trace 0000000000000000 ]---
[ 232.749507] RIP: 0010:walk_to_pmd+0x29e/0x3c0
[ 232.749692] Code: d8 5b 5d 41 5c 41 5d 41 5e 41 5f c3 cc cc cc cc 48 89 ea 48 89 de 4c 89 f7 e8 ae 85 ff ff 85 c0 0f 84 1f fe ff ff 31 db eb d0 <0f> 0b 48 89 ea 48 89 de 4c 89 f7 e8 92 8b ff ff 85 c0 75 e8 48 b8
[ 232.750428] RSP: 0000:ffff8881aa6ff9a8 EFLAGS: 00010282
[ 232.750645] RAX: 8000000142e002e7 RBX: ffff8881433cae10 RCX: dffffc0000000000
[ 232.750954] RDX: 0000000000000000 RSI: 00007fb47840b000 RDI: 8000000142e002e7
[ 232.751232] RBP: 00007fb47840b000 R08: 0000000000000000 R09: 1ffff110354dff46
[ 232.751514] R10: fffffbfff0cb921d R11: 00000000910da5ce R12: 1ffffffff0c1fcdd
[ 232.751837] R13: 1ffffffff0c23f36 R14: ffff888171628040 R15: 0000000000000000
[ 232.752124] FS:  00007fb4907f86c0(0000) GS:ffff888791f2c000(0000) knlGS:0000000000000000
[ 232.752441] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 232.752674] CR2: 00007fb47840be00 CR3: 000000015e6dc000 CR4: 00000000000006f0
[ 232.752983] Kernel panic - not syncing: Fatal exception
[ 232.753510] Kernel Offset: disabled
[ 232.754643] ---[ end Kernel panic - not syncing: Fatal exception ]---

This happens when two concurrent page faults occur within the same PMD
range: one fault installs a PMD mapping through vmf_insert_pfn_pmd(),
while the other attempts to install a PTE mapping via vmf_insert_pfn().
The BUG triggers because the page-table walk inside vmf_insert_pfn()
does not expect to find a huge (pmd_trans_huge) PMD already in place.

Avoid this race by adding a huge_fault callback to drm_gem_shmem_vm_ops,
so that PMD-sized mappings are handled through the dedicated huge page
fault path.
Fixes: 211b9a39f261 ("drm/shmem-helper: Map huge pages in fault handler")
Signed-off-by: Pedro Demarchi Gomes
Reviewed-by: Boris Brezillon
---
Changes in v4:
- Use try_insert_pfn() to insert pte or pmd mapping

Changes in v3: https://lore.kernel.org/all/20260316002649.211819-1-pedrodemargomes@gmail.com/
- Pass a try_pmd boolean parameter to drm_gem_shmem_any_fault
- Compile drm_gem_shmem_huge_fault only if CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
  is defined to avoid a build warning

Changes in v2: https://lore.kernel.org/dri-devel/20260313141719.3949700-1-pedrodemargomes@gmail.com/
- Keep the #ifdef unindented
- Create drm_gem_shmem_any_fault to handle faults of any order and use
  drm_gem_shmem_[huge_]fault() as wrappers

v1: https://lore.kernel.org/all/20260312155027.1682606-1-pedrodemargomes@gmail.com/
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 50 ++++++++++++++------------
 1 file changed, 28 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 7b5a49935ae4..c549293b5bb6 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -550,27 +550,27 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
-static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
-				      struct page *page)
+static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
+				 unsigned long pfn)
 {
+	if (!order) {
+		return vmf_insert_pfn(vmf->vma, vmf->address, pfn);
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
-	unsigned long pfn = page_to_pfn(page);
-	unsigned long paddr = pfn << PAGE_SHIFT;
-	bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
-
-	if (aligned &&
-	    pmd_none(*vmf->pmd) &&
-	    folio_test_pmd_mappable(page_folio(page))) {
-		pfn &= PMD_MASK >> PAGE_SHIFT;
-		if (vmf_insert_pfn_pmd(vmf, pfn, false) == VM_FAULT_NOPAGE)
-			return true;
-	}
+	} else if (order == PMD_ORDER) {
+		unsigned long paddr = pfn << PAGE_SHIFT;
+		bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
+
+		if (aligned &&
+		    folio_test_pmd_mappable(page_folio(pfn_to_page(pfn)))) {
+			pfn &= PMD_MASK >> PAGE_SHIFT;
+			return vmf_insert_pfn_pmd(vmf, pfn, false);
+		}
 #endif
-
-	return false;
+	}
+	return VM_FAULT_FALLBACK;
 }
 
-static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
+static vm_fault_t drm_gem_shmem_any_fault(struct vm_fault *vmf, unsigned int order)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
@@ -581,6 +581,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	pgoff_t page_offset;
 	unsigned long pfn;
 
+	if (order && order != PMD_ORDER)
+		return VM_FAULT_FALLBACK;
+
 	/* Offset to faulty address in the VMA. */
 	page_offset = vmf->pgoff - vma->vm_pgoff;
 
@@ -593,13 +596,8 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		goto out;
 	}
 
-	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset])) {
-		ret = VM_FAULT_NOPAGE;
-		goto out;
-	}
-
 	pfn = page_to_pfn(pages[page_offset]);
-	ret = vmf_insert_pfn(vma, vmf->address, pfn);
+	ret = try_insert_pfn(vmf, order, pfn);
 
 out:
 	dma_resv_unlock(shmem->base.resv);
@@ -607,6 +605,11 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
+static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
+{
+	return drm_gem_shmem_any_fault(vmf, 0);
+}
+
 static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
@@ -643,6 +646,9 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 
 const struct vm_operations_struct drm_gem_shmem_vm_ops = {
 	.fault = drm_gem_shmem_fault,
+#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+	.huge_fault = drm_gem_shmem_any_fault,
+#endif
 	.open = drm_gem_shmem_vm_open,
 	.close = drm_gem_shmem_vm_close,
 };
-- 
2.47.3