From: SeongJae Park <sj@kernel.org>
To:
Cc: SeongJae Park, "Liam R. Howlett", Andrew Morton, David Hildenbrand,
	Jann Horn, Lorenzo Stoakes, Michal Hocko, Mike Rapoport, Pedro Falcato,
	Suren Baghdasaryan, Vlastimil Babka, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [RFC PATCH v3 05/37] mm/{mprotect,memory}: (no upstream-aimed hack) implement MM_CP_DAMON
Date: Sun, 7 Dec 2025 22:29:09 -0800
Message-ID: <20251208062943.68824-6-sj@kernel.org>
In-Reply-To: <20251208062943.68824-1-sj@kernel.org>
References: <20251208062943.68824-1-sj@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Note that this is not upstreamable as-is.  It is included only to help
discussion of the other changes in this series.

DAMON uses the Accessed bits of page table entries as its major source of
access information.  That source lacks some additional context, such as
which CPU made the access.  Page faults could provide such additional
information.

Implement another change_protection() flag for such use cases, namely
MM_CP_DAMON.  DAMON will install PAGE_NONE protections using the flag.  To
avoid interfering with NUMA balancing, which also uses PAGE_NONE
protection, pass the faults to DAMON only when NUMA balancing is disabled.
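
For illustration only, below is a minimal sketch of how a DAMON-side caller
could drive the new flag, mirroring how NUMA balancing drives
MM_CP_PROT_NUMA through change_prot_numa() in mm/mempolicy.c.  The helper
name damon_protect_for_sampling() and its call site are assumptions, not
part of this patch; the real MM_CP_DAMON users live elsewhere in this
series.  The caller is assumed to hold mmap_read_lock() on vma->vm_mm.

#include <linux/mm.h>
#include <asm/tlb.h>

/*
 * Hypothetical sketch (not part of this patch): install the fake
 * PAGE_NONE protection on [start, end) of @vma for DAMON sampling.
 */
static long damon_protect_for_sampling(struct vm_area_struct *vma,
				       unsigned long start, unsigned long end)
{
	struct mmu_gather tlb;
	long nr_updated;

	tlb_gather_mmu(&tlb, vma->vm_mm);

	/*
	 * With MM_CP_DAMON, change_protection() uses PAGE_NONE internally,
	 * so later accesses fault and get reported to DAMON.
	 */
	nr_updated = change_protection(&tlb, vma, start, end, MM_CP_DAMON);

	tlb_finish_mmu(&tlb);
	return nr_updated;
}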

Again, this is not upstreamable as-is.  There were comments about this on
the previous version, and I was unable to take the time to address them, so
this version does not address any of those previous comments.  I am sending
it anyway to help discussion of the other patches in this series.  Please
forgive me for adding this to your inbox without addressing your comments,
and feel free to ignore it; I will start a separate discussion for this
part later.

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 include/linux/mm.h |  1 +
 mm/memory.c        | 60 ++++++++++++++++++++++++++++++++++++++++++++--
 mm/mprotect.c      |  5 ++++
 3 files changed, 64 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 553cf9f438f1..2cba5a0196da 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2848,6 +2848,7 @@ int get_cmdline(struct task_struct *task, char *buffer, int buflen);
 #define MM_CP_UFFD_WP_RESOLVE              (1UL << 3) /* Resolve wp */
 #define MM_CP_UFFD_WP_ALL                  (MM_CP_UFFD_WP | \
                                             MM_CP_UFFD_WP_RESOLVE)
+#define MM_CP_DAMON                        (1UL << 4)

 bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
			     pte_t pte);
diff --git a/mm/memory.c b/mm/memory.c
index 6675e87eb7dd..5dc85adb1e59 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -78,6 +78,7 @@
 #include
 #include
 #include
+#include <linux/damon.h>

 #include

@@ -6172,6 +6173,54 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
 	return VM_FAULT_FALLBACK;
 }

+/*
+ * NOTE: This is only a PoC-purpose "hack" that will not be upstreamed as is.
+ * More discussion between all stakeholders, including the maintainers of MM
+ * core, NUMA balancing, and DAMON, is needed to make this upstreamable.
+ * (https://lore.kernel.org/20251128193947.80866-1-sj@kernel.org)
+ *
+ * This function is called from the page fault handler, for page faults on
+ * P{TE,MD}-protected but vma-accessible pages.  DAMON installs the fake
+ * protection for access sampling purposes.  This function simply clears the
+ * protection and reports the access to DAMON, by calling
+ * damon_report_page_fault().
+ *
+ * The protection clearing code is copied from the NUMA fault handling code
+ * for PTEs.  Again, this is only a PoC-purpose "hack" to show what
+ * information DAMON wants from page fault events, rather than an
+ * upstream-aimed version.
+ */
+static vm_fault_t do_damon_page(struct vm_fault *vmf, bool huge_pmd)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
+	pte_t pte, old_pte;
+	bool writable = false, ignore_writable = false;
+	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
+
+	spin_lock(vmf->ptl);
+	old_pte = ptep_get(vmf->pte);
+	if (unlikely(!pte_same(old_pte, vmf->orig_pte))) {
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		return 0;
+	}
+	pte = pte_modify(old_pte, vma->vm_page_prot);
+	writable = pte_write(pte);
+	if (!writable && pte_write_upgrade &&
+	    can_change_pte_writable(vma, vmf->address, pte))
+		writable = true;
+	folio = vm_normal_folio(vma, vmf->address, pte);
+	if (folio && folio_test_large(folio))
+		numa_rebuild_large_mapping(vmf, vma, folio, pte,
+				ignore_writable, pte_write_upgrade);
+	else
+		numa_rebuild_single_mapping(vmf, vma, vmf->address, vmf->pte,
+				writable);
+	pte_unmap_unlock(vmf->pte, vmf->ptl);
+
+	damon_report_page_fault(vmf, huge_pmd);
+	return 0;
+}
+
 /*
  * These routines also need to handle stuff like marking pages dirty
  * and/or accessed for architectures that don't do it in hardware (most
@@ -6236,8 +6285,11 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 	if (!pte_present(vmf->orig_pte))
 		return do_swap_page(vmf);

-	if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
+	if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma)) {
+		if (sysctl_numa_balancing_mode == NUMA_BALANCING_DISABLED)
+			return do_damon_page(vmf, false);
 		return do_numa_page(vmf);
+	}

 	spin_lock(vmf->ptl);
 	entry = vmf->orig_pte;
@@ -6363,8 +6415,12 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 			return 0;
 		}
 		if (pmd_trans_huge(vmf.orig_pmd)) {
-			if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
+			if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma)) {
+				if (sysctl_numa_balancing_mode ==
+						NUMA_BALANCING_DISABLED)
+					return do_damon_page(&vmf, true);
 				return do_huge_pmd_numa_page(&vmf);
+			}

 			if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) &&
 			    !pmd_write(vmf.orig_pmd)) {
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 5c330e817129..d2c14162f93d 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -651,6 +651,11 @@ long change_protection(struct mmu_gather *tlb,
 	WARN_ON_ONCE(cp_flags & MM_CP_PROT_NUMA);
 #endif

+#ifdef CONFIG_ARCH_SUPPORTS_NUMA_BALANCING
+	if (cp_flags & MM_CP_DAMON)
+		newprot = PAGE_NONE;
+#endif
+
 	if (is_vm_hugetlb_page(vma))
 		pages = hugetlb_change_protection(tlb, vma, start, end,
 				newprot, cp_flags);
-- 
2.47.3