From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, willy@infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, catalin.marinas@arm.com,
	will@kernel.org, Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
	vbabka@suse.cz, jannh@google.com, anshuman.khandual@arm.com,
	peterx@redhat.com, joey.gouly@arm.com, ioworker0@gmail.com,
	baohua@kernel.org, kevin.brodsky@arm.com, quic_zhenhuah@quicinc.com,
	christophe.leroy@csgroup.eu, yangyicong@hisilicon.com,
	linux-arm-kernel@lists.infradead.org, hughd@google.com,
	yang@os.amperecomputing.com, ziy@nvidia.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH v4 3/4] mm: Optimize mprotect() by PTE-batching
Date: Sat, 28 Jun 2025 17:04:34 +0530
Message-Id: <20250628113435.46678-4-dev.jain@arm.com>
In-Reply-To: <20250628113435.46678-1-dev.jain@arm.com>
References: <20250628113435.46678-1-dev.jain@arm.com>

Use folio_pte_batch() to batch-process a large folio. Reuse the folio
from the prot_numa case if possible.

For all cases other than the PageAnonExclusive case, if a condition
holds true for one pte in the batch, it holds true for all other ptes
in the batch too; for pte_needs_soft_dirty_wp() this is guaranteed by
not passing FPB_IGNORE_SOFT_DIRTY, so a batch never mixes soft-dirty
states. modify_prot_start_ptes() collects the dirty and access bits
across the batch, which lets us batch across pte_dirty(): this is
correct because the dirty bit on a PTE is only an indication that the
folio got written to, so even if a given PTE is not actually dirty
(but another PTE in the batch is), the wp-fault optimization can still
be made.
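The dirty-bit argument can be seen in a tiny userspace model (illustration
only, not kernel code; "model_pte" and "collect_batch_bits" are made-up
names): merging the dirty/access bits of a batch, as the description of
modify_prot_start_ptes() above says happens for the real PTEs, means one
dirty pte is enough for the whole batch to take the write-upgrade path.

/* Userspace model only: not kernel code, names are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct model_pte {
	bool dirty;
	bool young;
};

/* OR the dirty and access bits of every pte in the batch, mirroring what
 * the commit message says modify_prot_start_ptes() does for real PTEs. */
static struct model_pte collect_batch_bits(const struct model_pte *ptes, int nr)
{
	struct model_pte merged = { false, false };

	for (int i = 0; i < nr; i++) {
		merged.dirty |= ptes[i].dirty;
		merged.young |= ptes[i].young;
	}
	return merged;
}

int main(void)
{
	struct model_pte batch[4] = { {0, 0}, {1, 1}, {0, 0}, {0, 1} };
	struct model_pte merged = collect_batch_bits(batch, 4);

	/* One dirty pte means the folio was written to, so the whole batch
	 * may take the wp-fault optimization. */
	printf("batch dirty: %d, batch young: %d\n", merged.dirty, merged.young);
	return 0;
}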
The crux now is how to batch around the PageAnonExclusive case; we must
check the corresponding condition for every single page. Therefore, from
the large folio batch, we process sub-batches of ptes mapping pages with
the same PageAnonExclusive value, then determine and process the next
sub-batch, and so on. Note that this does not cause any extra overhead:
if the folio batch spans, say, 512 ptes, the sub-batch processing still
takes 512 iterations in total, the same as before. (A small userspace
sketch of this sub-batch walk is appended after the patch.)

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/mprotect.c | 143 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 117 insertions(+), 26 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 627b0d67cc4a..28c7ce7728ff 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -40,35 +40,47 @@
 
 #include "internal.h"
 
-bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
-			     pte_t pte)
-{
-	struct page *page;
+enum tristate {
+	TRI_FALSE = 0,
+	TRI_TRUE = 1,
+	TRI_MAYBE = -1,
+};
 
+/*
+ * Returns enum tristate indicating whether the pte can be changed to writable.
+ * If TRI_MAYBE is returned, then the folio is anonymous and the user must
+ * additionally check PageAnonExclusive() for every page in the desired range.
+ */
+static int maybe_change_pte_writable(struct vm_area_struct *vma,
+				     unsigned long addr, pte_t pte,
+				     struct folio *folio)
+{
 	if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE)))
-		return false;
+		return TRI_FALSE;
 
 	/* Don't touch entries that are not even readable. */
 	if (pte_protnone(pte))
-		return false;
+		return TRI_FALSE;
 
 	/* Do we need write faults for softdirty tracking? */
 	if (pte_needs_soft_dirty_wp(vma, pte))
-		return false;
+		return TRI_FALSE;
 
 	/* Do we need write faults for uffd-wp tracking? */
 	if (userfaultfd_pte_wp(vma, pte))
-		return false;
+		return TRI_FALSE;
 
 	if (!(vma->vm_flags & VM_SHARED)) {
 		/*
 		 * Writable MAP_PRIVATE mapping: We can only special-case on
 		 * exclusive anonymous pages, because we know that our
 		 * write-fault handler similarly would map them writable without
-		 * any additional checks while holding the PT lock.
+		 * any additional checks while holding the PT lock. So if the
+		 * folio is not anonymous, we know we cannot change pte to
+		 * writable. If it is anonymous then the caller must further
+		 * check that the page is AnonExclusive().
 		 */
-		page = vm_normal_page(vma, addr, pte);
-		return page && PageAnon(page) && PageAnonExclusive(page);
+		return (!folio || folio_test_anon(folio)) ? TRI_MAYBE : TRI_FALSE;
 	}
 
 	VM_WARN_ON_ONCE(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte));
@@ -80,15 +92,61 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
 	 * FS was already notified and we can simply mark the PTE writable
 	 * just like the write-fault handler would do.
 	 */
-	return pte_dirty(pte);
+	return pte_dirty(pte) ? TRI_TRUE : TRI_FALSE;
+}
+
+/*
+ * Returns the number of pages within the folio, starting from the page
+ * indicated by pgidx and up to pgidx + max_nr, that have the same value of
+ * PageAnonExclusive(). Must only be called for anonymous folios. Value of
+ * PageAnonExclusive() is returned in *exclusive.
+ */
+static int anon_exclusive_batch(struct folio *folio, int pgidx, int max_nr,
+				bool *exclusive)
+{
+	struct page *page;
+	int nr = 1;
+
+	if (!folio) {
+		*exclusive = false;
+		return nr;
+	}
+
+	page = folio_page(folio, pgidx++);
+	*exclusive = PageAnonExclusive(page);
+	while (nr < max_nr) {
+		page = folio_page(folio, pgidx++);
+		if ((*exclusive) != PageAnonExclusive(page))
+			break;
+		nr++;
+	}
+
+	return nr;
+}
+
+bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
+			     pte_t pte)
+{
+	struct page *page;
+	int ret;
+
+	ret = maybe_change_pte_writable(vma, addr, pte, NULL);
+	if (ret == TRI_MAYBE) {
+		page = vm_normal_page(vma, addr, pte);
+		ret = page && PageAnon(page) && PageAnonExclusive(page);
+	}
+
+	return ret;
 }
 
 static int mprotect_folio_pte_batch(struct folio *folio, unsigned long addr,
-		pte_t *ptep, pte_t pte, int max_nr_ptes)
+		pte_t *ptep, pte_t pte, int max_nr_ptes, fpb_t switch_off_flags)
 {
-	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+	fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+
+	flags &= ~switch_off_flags;
 
-	if (!folio || !folio_test_large(folio) || (max_nr_ptes == 1))
+	if (!folio || !folio_test_large(folio))
 		return 1;
 
 	return folio_pte_batch(folio, addr, ptep, pte, max_nr_ptes, flags,
@@ -154,7 +212,8 @@ static int prot_numa_skip_ptes(struct folio **foliop, struct vm_area_struct *vma
 	}
 
 skip_batch:
-	nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte, max_nr_ptes);
+	nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte,
+					   max_nr_ptes, 0);
 out:
 	*foliop = folio;
 	return nr_ptes;
@@ -191,7 +250,10 @@ static long change_pte_range(struct mmu_gather *tlb,
 		if (pte_present(oldpte)) {
 			int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
 			struct folio *folio = NULL;
-			pte_t ptent;
+			int sub_nr_ptes, pgidx = 0;
+			pte_t ptent, newpte;
+			bool sub_set_write;
+			int set_write;
 
 			/*
 			 * Avoid trapping faults against the zero or KSM
@@ -206,6 +268,11 @@ static long change_pte_range(struct mmu_gather *tlb,
 				continue;
 			}
 
+			if (!folio)
+				folio = vm_normal_folio(vma, addr, oldpte);
+
+			nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte,
+					max_nr_ptes, FPB_IGNORE_SOFT_DIRTY);
 			oldpte = modify_prot_start_ptes(vma, addr, pte, nr_ptes);
 			ptent = pte_modify(oldpte, newprot);
 
@@ -227,15 +294,39 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 * example, if a PTE is already dirty and no other
 			 * COW or special handling is required.
 			 */
-			if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
-			    !pte_write(ptent) &&
-			    can_change_pte_writable(vma, addr, ptent))
-				ptent = pte_mkwrite(ptent, vma);
-
-			modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr_ptes);
-			if (pte_needs_flush(oldpte, ptent))
-				tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
-			pages++;
+			set_write = (cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
+				    !pte_write(ptent);
+			if (set_write)
+				set_write = maybe_change_pte_writable(vma, addr, ptent, folio);
+
+			while (nr_ptes) {
+				if (set_write == TRI_MAYBE) {
+					sub_nr_ptes = anon_exclusive_batch(folio,
+							pgidx, nr_ptes, &sub_set_write);
+				} else {
+					sub_nr_ptes = nr_ptes;
+					sub_set_write = (set_write == TRI_TRUE);
+				}
+
+				if (sub_set_write)
+					newpte = pte_mkwrite(ptent, vma);
+				else
+					newpte = ptent;
+
+				modify_prot_commit_ptes(vma, addr, pte, oldpte,
+							newpte, sub_nr_ptes);
+				if (pte_needs_flush(oldpte, newpte))
+					tlb_flush_pte_range(tlb, addr,
+							sub_nr_ptes * PAGE_SIZE);
+
+				addr += sub_nr_ptes * PAGE_SIZE;
+				pte += sub_nr_ptes;
+				oldpte = pte_advance_pfn(oldpte, sub_nr_ptes);
+				ptent = pte_advance_pfn(ptent, sub_nr_ptes);
+				nr_ptes -= sub_nr_ptes;
+				pages += sub_nr_ptes;
+				pgidx += sub_nr_ptes;
+			}
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
 			pte_t newpte;
-- 
2.30.2
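As referenced in the commit message, here is a minimal userspace sketch of
the PageAnonExclusive sub-batch walk (illustration only: the flags array
stands in for per-page PageAnonExclusive() state and exclusive_run() is a
made-up stand-in that mirrors the shape of anon_exclusive_batch(); it is
not kernel code). Splitting the batch into runs of equal exclusivity still
visits each page exactly once, so a 512-pte batch costs 512 iterations in
total.

/* Userspace model only: names and data are illustrative, not kernel APIs. */
#include <stdbool.h>
#include <stdio.h>

/* Count how many consecutive pages, starting at pgidx, share the same
 * exclusivity value; report that value through *exclusive. */
static int exclusive_run(const bool *flags, int pgidx, int max_nr, bool *exclusive)
{
	int nr = 1;

	*exclusive = flags[pgidx++];
	while (nr < max_nr) {
		if (flags[pgidx++] != *exclusive)
			break;
		nr++;
	}
	return nr;
}

int main(void)
{
	/* Stand-in for PageAnonExclusive() of each page in the folio batch. */
	bool flags[8] = { 1, 1, 0, 0, 0, 1, 0, 0 };
	int nr_ptes = 8, pgidx = 0, visited = 0;

	while (nr_ptes) {
		bool excl;
		int sub = exclusive_run(flags, pgidx, nr_ptes, &excl);

		printf("sub-batch: %d pages, exclusive=%d\n", sub, excl);
		pgidx += sub;
		nr_ptes -= sub;
		visited += sub;
	}
	/* Total pages visited equals the original batch size. */
	printf("pages visited: %d\n", visited);
	return 0;
}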