From nobody Mon Feb 9 04:31:24 2026
From: Kiryl Shutsemau
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox,
	Alexander Viro, Christian Brauner
Cc: Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
	Johannes Weiner, Shakeel Butt, Baolin Wang, "Darrick J. Wong",
	Dave Chinner, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCHv3 1/2] mm/memory: Do not populate page table entries beyond i_size
Date: Mon, 27 Oct 2025 11:56:35 +0000
Message-ID: <20251027115636.82382-2-kirill@shutemov.name>
In-Reply-To: <20251027115636.82382-1-kirill@shutemov.name>
References: <20251027115636.82382-1-kirill@shutemov.name>

Accesses within a VMA, but beyond i_size rounded up to PAGE_SIZE, are
supposed to generate SIGBUS.

Recent changes attempted to fault in the full folio where possible. They
did not respect i_size, which led to populating PTEs beyond i_size and
breaking SIGBUS semantics.

Darrick reported generic/749 breakage because of this.

However, the problem existed before the recent changes. With huge=always
tmpfs, any write to a file leads to a PMD-size allocation. The subsequent
fault-in of the folio will install a PMD mapping regardless of i_size.

Fix filemap_map_pages() and finish_fault() to not install:
  - PTEs beyond i_size;
  - PMD mappings across i_size.

Make an exception for shmem/tmpfs, which has long been intentionally
mapped with PMDs across i_size.

Signed-off-by: Kiryl Shutsemau
Fixes: 19773df031bc ("mm/fault: try to map the entire file folio in finish_fault()")
Fixes: 357b92761d94 ("mm/filemap: map entire large folio faultaround")
Fixes: 01c70267053d ("fs: add a filesystem flag for THPs")
Reported-by: "Darrick J. Wong"
---
 mm/filemap.c | 28 ++++++++++++++++++++--------
 mm/memory.c  | 20 +++++++++++++++++++-
 2 files changed, 39 insertions(+), 9 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index b7b297c1ad4f..ff75bd89b68c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3690,7 +3690,8 @@ static struct folio *next_uptodate_folio(struct xa_state *xas,
 static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			struct folio *folio, unsigned long start,
 			unsigned long addr, unsigned int nr_pages,
-			unsigned long *rss, unsigned short *mmap_miss)
+			unsigned long *rss, unsigned short *mmap_miss,
+			bool can_map_large)
 {
 	unsigned int ref_from_caller = 1;
 	vm_fault_t ret = 0;
@@ -3705,7 +3706,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	 * The folio must not cross VMA or page table boundary.
 	 */
 	addr0 = addr - start * PAGE_SIZE;
-	if (folio_within_vma(folio, vmf->vma) &&
+	if (can_map_large && folio_within_vma(folio, vmf->vma) &&
 	    (addr0 & PMD_MASK) == ((addr0 + folio_size(folio) - 1) & PMD_MASK)) {
 		vmf->pte -= start;
 		page -= start;
@@ -3820,13 +3821,27 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	unsigned long rss = 0;
 	unsigned int nr_pages = 0, folio_type;
 	unsigned short mmap_miss = 0, mmap_miss_saved;
+	bool can_map_large;
 
 	rcu_read_lock();
 	folio = next_uptodate_folio(&xas, mapping, end_pgoff);
 	if (!folio)
 		goto out;
 
-	if (filemap_map_pmd(vmf, folio, start_pgoff)) {
+	file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
+	end_pgoff = min(end_pgoff, file_end);
+
+	/*
+	 * Do not allow to map with PTEs beyond i_size and with PMD
+	 * across i_size to preserve SIGBUS semantics.
+	 *
+	 * Make an exception for shmem/tmpfs that for long time
+	 * intentionally mapped with PMDs across i_size.
+	 */
+	can_map_large = shmem_mapping(mapping) ||
+		file_end >= folio_next_index(folio);
+
+	if (can_map_large && filemap_map_pmd(vmf, folio, start_pgoff)) {
 		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
@@ -3839,10 +3854,6 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		goto out;
 	}
 
-	file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
-	if (end_pgoff > file_end)
-		end_pgoff = file_end;
-
 	folio_type = mm_counter_file(folio);
 	do {
 		unsigned long end;
@@ -3859,7 +3870,8 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		else
 			ret |= filemap_map_folio_range(vmf, folio,
 					xas.xa_index - folio->index, addr,
-					nr_pages, &rss, &mmap_miss);
+					nr_pages, &rss, &mmap_miss,
+					can_map_large);
 
 		folio_unlock(folio);
 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
diff --git a/mm/memory.c b/mm/memory.c
index 39e21688e74b..1a3eb070f8df 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -77,6 +77,7 @@
 #include
 #include
 #include
+#include <linux/shmem_fs.h>
 
 #include
 
@@ -5545,8 +5546,25 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 			return ret;
 	}
 
+	if (!needs_fallback && vma->vm_file) {
+		struct address_space *mapping = vma->vm_file->f_mapping;
+		pgoff_t file_end;
+
+		file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
+
+		/*
+		 * Do not allow to map with PTEs beyond i_size and with PMD
+		 * across i_size to preserve SIGBUS semantics.
+		 *
+		 * Make an exception for shmem/tmpfs that for long time
+		 * intentionally mapped with PMDs across i_size.
+		 */
+		needs_fallback = !shmem_mapping(mapping) &&
+			file_end < folio_next_index(folio);
+	}
+
 	if (pmd_none(*vmf->pmd)) {
-		if (folio_test_pmd_mappable(folio)) {
+		if (!needs_fallback && folio_test_pmd_mappable(folio)) {
 			ret = do_set_pmd(vmf, folio, page);
 			if (ret != VM_FAULT_FALLBACK)
 				return ret;
-- 
2.50.1

From nobody Mon Feb 9 04:31:24 2026
From: Kiryl Shutsemau
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox,
	Alexander Viro, Christian Brauner
Cc: Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
	Johannes Weiner, Shakeel Butt, Baolin Wang, "Darrick J. Wong",
	Dave Chinner, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCHv3 2/2] mm/truncate: Unmap large folio on split failure
Date: Mon, 27 Oct 2025 11:56:36 +0000
Message-ID: <20251027115636.82382-3-kirill@shutemov.name>
In-Reply-To: <20251027115636.82382-1-kirill@shutemov.name>
References: <20251027115636.82382-1-kirill@shutemov.name>

Accesses within a VMA, but beyond i_size rounded up to PAGE_SIZE, are
supposed to generate SIGBUS. This behavior might not be respected on
truncation.

During truncation, the kernel splits a large folio in order to reclaim
memory. As a side effect, it unmaps the folio and destroys its PMD
mappings. The folio will be refaulted as PTEs and SIGBUS semantics are
preserved.

However, if the split fails, PMD mappings are preserved and the user
will not receive SIGBUS on any accesses within the PMD.

Unmap the folio on split failure. It will lead to refault as PTEs and
preserve SIGBUS semantics.

Make an exception for shmem/tmpfs, which has long been intentionally
mapped with PMDs across i_size.

Signed-off-by: Kiryl Shutsemau
---
 mm/truncate.c | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 9210cf808f5c..3c5a50ae3274 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -177,6 +177,32 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
 	return 0;
 }
 
+static int try_folio_split_or_unmap(struct folio *folio, struct page *split_at,
+				    unsigned long min_order)
+{
+	enum ttu_flags ttu_flags =
+		TTU_SYNC |
+		TTU_SPLIT_HUGE_PMD |
+		TTU_IGNORE_MLOCK;
+	int ret;
+
+	ret = try_folio_split_to_order(folio, split_at, min_order);
+
+	/*
+	 * If the split fails, unmap the folio, so it will be refaulted
+	 * with PTEs to respect SIGBUS semantics.
+	 *
+	 * Make an exception for shmem/tmpfs that for long time
+	 * intentionally mapped with PMDs across i_size.
+	 */
+	if (ret && !shmem_mapping(folio->mapping)) {
+		try_to_unmap(folio, ttu_flags);
+		WARN_ON(folio_mapped(folio));
+	}
+
+	return ret;
+}
+
 /*
  * Handle partial folios. The folio may be entirely within the
  * range if a split has raced with us. If not, we zero the part of the
@@ -226,7 +252,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 
 	min_order = mapping_min_folio_order(folio->mapping);
 	split_at = folio_page(folio, PAGE_ALIGN_DOWN(offset) / PAGE_SIZE);
-	if (!try_folio_split_to_order(folio, split_at, min_order)) {
+	if (!try_folio_split_or_unmap(folio, split_at, min_order)) {
 		/*
 		 * try to split at offset + length to make sure folios within
 		 * the range can be dropped, especially to avoid memory waste
@@ -250,13 +276,10 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 		if (!folio_trylock(folio2))
 			goto out;
 
-		/*
-		 * make sure folio2 is large and does not change its mapping.
-		 * Its split result does not matter here.
-		 */
+		/* make sure folio2 is large and does not change its mapping */
 		if (folio_test_large(folio2) &&
 		    folio2->mapping == folio->mapping)
-			try_folio_split_to_order(folio2, split_at2, min_order);
+			try_folio_split_or_unmap(folio2, split_at2, min_order);
 
 		folio_unlock(folio2);
 out:
-- 
2.50.1