Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realized that the concepts of compound page and
folio are mixed together and confusing: people assume a compound page is
the same thing as a folio. This is not true. A compound page only means
that a group of pages is managed as a whole, and it can be something
other than a folio, for example a slab page. To avoid further confusion,
this patchset separates compound pages from folios by moving all
folio-related code out of the compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable, so the code
setting them is moved to page_rmappable_folio(). To make this move
possible, the patchset also changes several places where folio and
compound page are used interchangeably, or where folios are used in
unusual ways:
1. in io_mem_alloc_compound(), a compound page is allocated, but it is
later mapped via vm_insert_pages() like an rmappable folio;
2. __split_folio_to_order() sets the large_rmappable flag directly
instead of using page_rmappable_folio() for after-split folios;
3. hugetlb clears large_rmappable to skip the deferred_list unqueue
operation.
Finally, the page freeing path is also changed to apply different checks
to compound pages and folios.
One thing to note: for a plain compound page, I do not store the compound
order in folio->_nr_pages, since that field overlaps with
page[1].memcg_data; instead the page count is derived as
1 << compound_order(). I did not want to add a new union to struct page,
and compound_nr() is not as widely used as folio_nr_pages(). Let me know
if there is a performance concern with this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
On 1/30/26 14:48, Zi Yan wrote:
> Hi all,
>
> Based on my discussion with Jason about device private folio
> reinitialization[1], I realize that the concepts of compound page and folio
> are mixed together and confusing [...]
>
> At last, the page freeing path is also changed to have different checks
> for compound page and folio.

Thanks for doing this!

> One thing to note is that for compound page, I do not store compound
> order in folio->_nr_pages, which overlaps with page[1].memcg_data and
> use 1 << compound_order() instead [...]

What does this mean for treating compound pages as folios, does this break
code that makes any assumptions about their interop?
On 2 Feb 2026, at 23:30, Balbir Singh wrote:
> On 1/30/26 14:48, Zi Yan wrote:
>> [...]
>
> Thanks for doing this!
>
> What does this mean for treating compound pages as folios, does this break
> code that makes any assumptions about their interop?

Yes. All folio initialization code is moved from prep_compound_page() to
page_rmappable_folio(), so such users will see warnings that some folio
fields are not set properly. They should call page_rmappable_folio() on
compound pages they are planning to use as folios. A common use case is
calling vm_insert_page(s)() on subpages of a folio.

For in-tree users, I am converting them all in this series.

Best Regards,
Yan, Zi
On 4 Feb 2026, at 11:21, Zi Yan wrote:
> On 2 Feb 2026, at 23:30, Balbir Singh wrote:
>> What does this mean for treating compound pages as folios, does this break
>> code that makes any assumptions about their interop?
>
> Yes. All folio initialization code is moved from prep_compound_page() to
> page_rmappable_folio() [...]
>
> For in-tree users, I am converting them all in this series.

Considering a recent report[1], where drivers/scsi/sg.c allocates compound
pages with __GFP_COMP and maps them into userspace via sg_vma_fault(), I
guess almost all __GFP_COMP users are really using folios instead of
compound pages.

[1] https://lore.kernel.org/all/PS1PPF7E1D7501F1E4F4441E7ECD056DEADAB98A@PS1PPF7E1D7501F.apcprd02.prod.outlook.com/

Best Regards,
Yan, Zi
On Wed, Feb 04, 2026 at 01:29:45PM -0500, Zi Yan wrote:
>> For in-tree users, I am converting them all in this series.
>
> Considering a recent report[1], where drivers/scsi/sg.c allocates compound
> pages with __GFP_COMP and maps them into userspace via sg_vma_fault(),
> I guess almost all __GFP_COMP users are really using folios instead of
> compound pages.

This would be my guess, and it seems like a good cleanup to make them
actually create fully proper folios if they are being mmaped.

I suspect the only places not using "folios" are frozen page users (or
places yet to be converted to frozen pages).

Which is back to my previous remarks that having a good definition for
what struct page memory is allowed to be used by a frozen page user
would be helpful. If the mm can retain some of the tail page memory for
itself you don't need to make several of the changes here.

Jason
syzbot ci has tested the following series

[v1] Separate compound page from folio
https://lore.kernel.org/all/20260130034818.472804-1-ziy@nvidia.com
* [RFC PATCH 1/5] io_uring: allocate folio in io_mem_alloc_compound() and function rename
* [RFC PATCH 2/5] mm/huge_memory: use page_rmappable_folio() to convert after-split folios
* [RFC PATCH 3/5] mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling
* [RFC PATCH 4/5] mm: only use struct page in compound_nr() and compound_order()
* [RFC PATCH 5/5] mm: code separation for compound page and folio

and found the following issue:
WARNING in __folio_large_mapcount_sanity_checks

Full report is available here:
https://ci.syzbot.org/series/f64f0297-d388-4cfa-b3be-f05819d0ce34

***

WARNING in __folio_large_mapcount_sanity_checks

tree:      mm-new
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm.git
base:      0241748f8b68fc2bf637f4901b9d7ca660d177ca
arch:      amd64
compiler:  Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config:    https://ci.syzbot.org/builds/76dc5ea6-0ff5-410b-8b1f-72e5607a704e/config
C repro:   https://ci.syzbot.org/findings/a308f1d6-69e2-4ebc-80a9-b51d9dc02851/c_repro
syz repro: https://ci.syzbot.org/findings/a308f1d6-69e2-4ebc-80a9-b51d9dc02851/syz_repro

------------[ cut here ]------------
diff > folio_large_nr_pages(folio)
WARNING: ./include/linux/rmap.h:148 at __folio_large_mapcount_sanity_checks+0x499/0x6b0 include/linux/rmap.h:148, CPU#1: syz.0.17/5988
Modules linked in:
CPU: 1 UID: 0 PID: 5988 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:__folio_large_mapcount_sanity_checks+0x499/0x6b0 include/linux/rmap.h:148
Code: 5f 5d e9 4a 4e 64 09 cc e8 84 d8 aa ff 90 0f 0b 90 e9 82 fc ff ff e8 76 d8 aa ff 90 0f 0b 90 e9 8f fc ff ff e8 68 d8 aa ff 90 <0f> 0b 90 e9 b8 fc ff ff e8 5a d8 aa ff 90 0f 0b 90 e9 f2 fc ff ff
RSP: 0018:ffffc900040e72f8 EFLAGS: 00010293
RAX: ffffffff8217c0f8 RBX: ffffea0006ef5c00 RCX: ffff888105fdba80
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 0000000000000001 R08: ffffea0006ef5c07 R09: 1ffffd4000ddeb80
R10: dffffc0000000000 R11: fffff94000ddeb81 R12: 0000000000000001
R13: 0000000000000000 R14: 1ffffd4000ddeb8f R15: ffffea0006ef5c78
FS:  00005555867b3500(0000) GS:ffff8882a9923000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00002000000000c0 CR3: 0000000103ab0000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 folio_add_return_large_mapcount include/linux/rmap.h:184 [inline]
 __folio_add_rmap mm/rmap.c:1377 [inline]
 __folio_add_file_rmap mm/rmap.c:1696 [inline]
 folio_add_file_rmap_ptes+0x4c2/0xe60 mm/rmap.c:1722
 insert_page_into_pte_locked+0x5ab/0x910 mm/memory.c:2378
 insert_page+0x186/0x2d0 mm/memory.c:2398
 packet_mmap+0x360/0x530 net/packet/af_packet.c:4622
 vfs_mmap include/linux/fs.h:2053 [inline]
 mmap_file mm/internal.h:167 [inline]
 __mmap_new_file_vma mm/vma.c:2468 [inline]
 __mmap_new_vma mm/vma.c:2532 [inline]
 __mmap_region mm/vma.c:2759 [inline]
 mmap_region+0x18fe/0x2240 mm/vma.c:2837
 do_mmap+0xc39/0x10c0 mm/mmap.c:559
 vm_mmap_pgoff+0x2c9/0x4f0 mm/util.c:581
 ksys_mmap_pgoff+0x51e/0x760 mm/mmap.c:605
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f5d7399acb9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe9f3eea78 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f5d73c15fa0 RCX: 00007f5d7399acb9
RDX: 0000000000000002 RSI: 0000000000030000 RDI: 0000200000000000
RBP: 00007f5d73a08bf7 R08: 0000000000000003 R09: 0000000000000000
R10: 0000000000000011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f5d73c15fac R14: 00007f5d73c15fa0 R15: 00007f5d73c15fa0
 </TASK>

***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
On 30 Jan 2026, at 3:15, syzbot ci wrote:
> syzbot ci has tested the following series
>
> [v1] Separate compound page from folio
> https://lore.kernel.org/all/20260130034818.472804-1-ziy@nvidia.com
> [...]
> and found the following issue:
> WARNING in __folio_large_mapcount_sanity_checks
>
> Full report is available here:
> https://ci.syzbot.org/series/f64f0297-d388-4cfa-b3be-f05819d0ce34
> [...]
> Call Trace:
>  <TASK>
>  folio_add_return_large_mapcount include/linux/rmap.h:184 [inline]
>  __folio_add_rmap mm/rmap.c:1377 [inline]
>  __folio_add_file_rmap mm/rmap.c:1696 [inline]
>  folio_add_file_rmap_ptes+0x4c2/0xe60 mm/rmap.c:1722
>  insert_page_into_pte_locked+0x5ab/0x910 mm/memory.c:2378
>  insert_page+0x186/0x2d0 mm/memory.c:2398
>  packet_mmap+0x360/0x530 net/packet/af_packet.c:4622
> [...]
The issue comes from alloc_one_pg_vec_page() in net/packet/af_packet.c.
It allocates a compound page with __GFP_COMP, but packet_mmap() later
maps it via vm_insert_page(), using it as a folio.
The fix below is a hack. We will need a get_free_folios() instead.
I will check all __GFP_COMP callers to find out which ones are using it
as a folio and which ones are using it as a compound page. I suspect
most are using it as a folio.
#syz test
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2194a6b3a062..90858d20dfbe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5311,6 +5311,8 @@ unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order)
page = alloc_pages_noprof(gfp_mask & ~__GFP_HIGHMEM, order);
if (!page)
return 0;
+ if (gfp_mask & __GFP_COMP)
+ return (unsigned long)folio_address(page_rmappable_folio(page));
return (unsigned long) page_address(page);
}
EXPORT_SYMBOL(get_free_pages_noprof);
Best Regards,
Yan, Zi