[PATCH v7 0/2] Do not shatter hugezeropage on wp-fault

It was observed at [1] and [2] that the current kernel behaviour of
shattering a hugezeropage is inconsistent and suboptimal. For a VMA with
an allowable THP order, a write fault installs a PMD-mapped THP. If we
instead get a read fault first, the kernel installs a PMD pointing to
the hugezeropage; a subsequent write then triggers a write-protection
fault, which shatters the hugezeropage: the faulting PTE gets a single
writable page while all the other PTEs remain write-protected. So,
compared to the single write-fault case, an application using the VMA
this way has to suffer 512 extra page faults, plus the overhead of
khugepaged trying to replace that area with a THP anyway.

Instead, replace the hugezeropage with a THP on wp-fault.

[1]: https://lore.kernel.org/all/3743d7e1-0b79-4eaf-82d5-d1ca29fe347d@arm.com/
[2]: https://lore.kernel.org/all/1cfae0c0-96a2-4308-9c62-f7a640520242@arm.com/
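
For illustration, below is a minimal userspace sketch (not part of the
series) of the read-then-write pattern described above; the alignment
trick and fault counting are only illustrative and assume a 2M PMD size:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

#define SZ_2M	(2UL << 20)

static long minor_faults(void)
{
	struct rusage ru;

	getrusage(RUSAGE_SELF, &ru);
	return ru.ru_minflt;
}

int main(void)
{
	/* Over-allocate so we can pick a 2M-aligned, THP-eligible range */
	char *map = mmap(NULL, 2 * SZ_2M, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *buf = (char *)(((unsigned long)map + SZ_2M - 1) & ~(SZ_2M - 1));
	long before;

	madvise(buf, SZ_2M, MADV_HUGEPAGE);

	/* Read fault first: the kernel maps the hugezeropage at PMD level */
	*(volatile char *)buf;

	/*
	 * Now write the whole region: without this series the zeropage PMD
	 * is shattered and we take ~512 write-protection faults.
	 */
	before = minor_faults();
	memset(buf, 1, SZ_2M);
	printf("minor faults during write: %ld\n", minor_faults() - before);

	return 0;
}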

The patchset has been rebased on the mm-unstable branch.

v6->v7:
 - Align function parameters on the second line by two tabs

v5->v6:
 - More goto omissions; remove a build warning for !CONFIG_NUMA

v4->v5:
 - Directly return VM_FAULT_FALLBACK in case of !folio

v3->v4:
 - Renames: pmd_thp_fault_alloc -> vma_alloc_anon_folio_pmd,
   map_pmd_thp -> map_anon_folio_pmd
 - Instead of passing it around, compute haddr at the various places it
   is needed; similarly for the gfp flags
 - Pass haddr to update_mmu_cache_pmd() instead of unaligned address
 - Do not pass vmf to map_anon_folio_pmd
 - Do declarations in reverse xmas tree order
 - Drop a new line which was introduced accidentally
 - Call __pmd_thp_fault_success_stats from map_anon_folio_pmd
 - Correctly return NULL from vma_alloc_anon_folio_pmd
 - Initialize pgtable to NULL in __do_huge_pmd_anonymous_page, to
   prevent freeing pgtable when not even allocated
 - Drop if conditions from map_anon_folio_pmd, let the caller handle that

v2->v3:
 - Drop foliop and order parameters, prefix the thp functions with pmd_
 - First allocate THP, then pgtable, not vice-versa
 - Move pgtable_trans_huge_deposit() from map_pmd_thp() to caller
 - Drop exposing functions in include/linux/huge_mm.h
 - Open code do_huge_zero_wp_pmd_locked()
 - Release the folio if the pmd changed after taking the lock, or if
   check_stable_address_space() returns VM_FAULT_SIGBUS
 - Drop uffd-wp preservation. Looking at page_table_check_pmd_flags(),
   preserving uffd-wp on a writable entry is invalid. Looking at
   mfill_atomic(), uffd_copy() is a no-op when the pmd is marked
   uffd-wp.

v1->v2:
 - Wrap do_huge_zero_wp_pmd_locked() around lock and unlock
 - Call thp_fault_alloc() before do_huge_zero_wp_pmd_locked() to avoid
   calling a sleeping function from spinlock context (see the sketch
   below)
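
For context, here is a rough sketch of the wp-fault path the notes
above describe (illustrative only, not the actual patch; see patch 2
for the real code in mm/huge_memory.c): allocate the folio before
taking the PMD lock, then recheck the PMD and the stability of the
address space before mapping, releasing the folio on failure. MMU
notifier invalidation is omitted for brevity.

static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
{
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	struct vm_area_struct *vma = vmf->vma;
	struct folio *folio;
	vm_fault_t ret = 0;

	/* Allocation may sleep, so do it before taking the PMD lock */
	folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
	if (unlikely(!folio))
		return VM_FAULT_FALLBACK;

	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
	if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd)))
		goto release;
	ret = check_stable_address_space(vma->vm_mm);
	if (ret)
		goto release;
	/* Flush the zeropage mapping and install the new THP */
	(void)pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
	map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
	goto unlock;
release:
	folio_put(folio);
unlock:
	spin_unlock(vmf->ptl);
	return ret;
}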

Dev Jain (2):
  mm: Abstract THP allocation
  mm: Allocate THP on hugezeropage wp-fault

 mm/huge_memory.c | 139 +++++++++++++++++++++++++++++++++--------------
 1 file changed, 97 insertions(+), 42 deletions(-)

-- 
2.30.2