During memory hotplug operations, the memmap (i.e. the struct page array)
for the affected range must be updated. On arm64 with 4K page size, the
typical hotplug granularity is a 128M section, whose memmap occupies a
2M buffer.

Commit 2045a3b8911b ("mm/sparse-vmemmap: generalise vmemmap_populate_hugepages()")
optimized this 2M buffer to be mapped with PMD huge pages. However,
commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug"),
which allows hotplug at 2M sub-section granularity, causes issues when
combined with that optimization (refer to the change log of patch #1 for
details). This series adjusts the logic to populate the vmemmap with
huge pages only if the hotplugged address/size is section-aligned, and
to fall back to page-level mappings otherwise.
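
Roughly, the intent is a check along the lines of the sketch below in
vmemmap_populate() (arch/arm64/mm/mmu.c). This is only an illustration:
the exact condition and the section_memmap local are assumptions made
for the sketch, not the series itself; see patch #1 for the real change.

int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
			       struct vmem_altmap *altmap)
{
	/* Size of one section's memmap: 2M for 4K pages with 128M sections. */
	unsigned long section_memmap = PAGES_PER_SECTION * sizeof(struct page);

	/*
	 * Sketch: use base pages unless [start, end) covers the memmap of
	 * whole, section-aligned ranges, so a later sub-section hot-remove
	 * never has to split or partially free a PMD mapping.
	 */
	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
	    !IS_ALIGNED(start, section_memmap) ||
	    !IS_ALIGNED(end, section_memmap))
		return vmemmap_populate_basepages(start, end, node, altmap);

	return vmemmap_populate_hugepages(start, end, node, altmap);
}
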
Zhenhua Huang (2):
arm64: mm: vmemmap populate to page level if not section aligned
arm64: mm: implement vmemmap_check_pmd for arm64
arch/arm64/mm/mmu.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
--
2.25.1