It was observed in [1] and [2] that the current kernel behaviour of
shattering a hugezeropage is inconsistent and suboptimal. For a VMA with
a THP-allowable order, a write fault installs a PMD-mapped THP. On the
other hand, if we first get a read fault, we get a PMD pointing to the
hugezeropage; a subsequent write then triggers a write-protection fault
that shatters the hugezeropage into one writable page, with all the
other PTEs write-protected. The conclusion is that, compared to the
single write-fault case, applications suffer 512 extra page faults if
they use the VMA this way, plus we get the overhead of khugepaged trying
to replace that area with a THP anyway. Instead, replace the
hugezeropage with a THP on wp-fault.
v2->v3:
- Drop foliop and order parameters, prefix the thp functions with pmd_
- First allocate THP, then pgtable, not vice-versa
- Move pgtable_trans_huge_deposit() from map_pmd_thp() to caller
- Drop exposing functions in include/linux/huge_mm.h
- Open code do_huge_zero_wp_pmd_locked()
- Release folio in case of pmd change after taking the lock, or
check_stable_address_space() returning VM_FAULT_SIGBUS
- Drop uffd-wp preservation. Looking at page_table_check_pmd_flags(),
preserving uffd-wp on a writable entry is invalid. Looking at
mfill_atomic(), uffd_copy() is a null operation when pmd is marked
uffd-wp.
v1->v2:
- Wrap do_huge_zero_wp_pmd_locked() around lock and unlock
- Call thp_fault_alloc() before do_huge_zero_wp_pmd_locked() to avoid
  calling a sleeping function from spinlock context
[1]: https://lore.kernel.org/all/3743d7e1-0b79-4eaf-82d5-d1ca29fe347d@arm.com/
[2]: https://lore.kernel.org/all/1cfae0c0-96a2-4308-9c62-f7a640520242@arm.com/
The patchset applies on top of the latest mm-unstable branch.
Dev Jain (2):
mm: Abstract THP allocation
mm: Allocate THP on hugezeropage wp-fault
mm/huge_memory.c | 158 ++++++++++++++++++++++++++++++++++-------------
1 file changed, 114 insertions(+), 44 deletions(-)
--
2.30.2