From: Gabriel Krisman Bertazi
To: linux-mm@kvack.org
Cc: Gabriel Krisman Bertazi, linux-kernel@vger.kernel.org, jack@suse.cz,
	Mateusz Guzik, Shakeel Butt, Michal Hocko, Mathieu Desnoyers,
	Dennis Zhou, Tejun Heo, Christoph Lameter, Andrew Morton,
	David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan
Subject: [RFC PATCH 4/4] mm: Split a slow path for updating mm counters
Date: Thu, 27 Nov 2025 18:36:31 -0500
Message-ID: <20251127233635.4170047-5-krisman@suse.de>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20251127233635.4170047-1-krisman@suse.de>
References: <20251127233635.4170047-1-krisman@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

For cases where we know we are not coming from a local context, there is
no point in touching current when incrementing or decrementing the
counters. Split this path into a separate helper to avoid that cost.
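
To make the intent of the split concrete, here is a minimal caller-side
sketch. It is illustrative only and not part of the change itself; the
two wrapper functions are hypothetical and exist purely to show which
variant is meant for which kind of context:

	#include <linux/mm.h>

	/*
	 * Fault path: we are updating our own mm, so the per-CPU fast
	 * path taken by add_mm_counter_local() can apply.
	 */
	static void fault_account_anon_page(struct vm_area_struct *vma)
	{
		inc_mm_counter_local(vma->vm_mm, MM_ANONPAGES);
	}

	/*
	 * Reclaim/rmap path: 'mm' usually belongs to another task, so
	 * checking current->mm buys nothing; take the atomic path.
	 */
	static void unmap_account_anon_page(struct mm_struct *mm)
	{
		dec_mm_counter_other(mm, MM_ANONPAGES);
	}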

Signed-off-by: Gabriel Krisman Bertazi
---
 arch/s390/mm/gmap_helpers.c |  4 ++--
 arch/s390/mm/pgtable.c      |  4 ++--
 fs/exec.c                   |  2 +-
 include/linux/mm.h          | 14 +++++++++++---
 kernel/events/uprobes.c     |  2 +-
 mm/filemap.c                |  2 +-
 mm/huge_memory.c            | 22 +++++++++++-----------
 mm/khugepaged.c             |  6 +++---
 mm/ksm.c                    |  2 +-
 mm/madvise.c                |  2 +-
 mm/memory.c                 | 20 ++++++++++----------
 mm/migrate.c                |  2 +-
 mm/migrate_device.c         |  2 +-
 mm/rmap.c                   | 16 ++++++++--------
 mm/swapfile.c               |  6 +++---
 mm/userfaultfd.c            |  2 +-
 16 files changed, 58 insertions(+), 50 deletions(-)

diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
index d4c3c36855e2..6d8498c56d08 100644
--- a/arch/s390/mm/gmap_helpers.c
+++ b/arch/s390/mm/gmap_helpers.c
@@ -29,9 +29,9 @@
 static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)
 {
 	if (!non_swap_entry(entry))
-		dec_mm_counter(mm, MM_SWAPENTS);
+		dec_mm_counter_other(mm, MM_SWAPENTS);
 	else if (is_migration_entry(entry))
-		dec_mm_counter(mm, mm_counter(pfn_swap_entry_folio(entry)));
+		dec_mm_counter_other(mm, mm_counter(pfn_swap_entry_folio(entry)));
 	free_swap_and_cache(entry);
 }
 
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 0fde20bbc50b..021a04f958e5 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -686,11 +686,11 @@ void ptep_unshadow_pte(struct mm_struct *mm, unsigned long saddr, pte_t *ptep)
 static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)
 {
 	if (!non_swap_entry(entry))
-		dec_mm_counter(mm, MM_SWAPENTS);
+		dec_mm_counter_other(mm, MM_SWAPENTS);
 	else if (is_migration_entry(entry)) {
 		struct folio *folio = pfn_swap_entry_folio(entry);
 
-		dec_mm_counter(mm, mm_counter(folio));
+		dec_mm_counter_other(mm, mm_counter(folio));
 	}
 	free_swap_and_cache(entry);
 }
diff --git a/fs/exec.c b/fs/exec.c
index 4298e7e08d5d..33d0eb00d315 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -137,7 +137,7 @@ static void acct_arg_size(struct linux_binprm *bprm, unsigned long pages)
 		return;
 
 	bprm->vma_pages = pages;
-	add_mm_counter(mm, MM_ANONPAGES, diff);
+	add_mm_counter_local(mm, MM_ANONPAGES, diff);
 }
 
 static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 29de4c60ac6c..2db12280e938 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2689,7 +2689,7 @@ static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
 
 void mm_trace_rss_stat(struct mm_struct *mm, int member);
 
-static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
+static inline void add_mm_counter_local(struct mm_struct *mm, int member, long value)
 {
 	if (READ_ONCE(current->mm) == mm)
 		lazy_percpu_counter_add_fast(&mm->rss_stat[member], value);
@@ -2698,9 +2698,17 @@ static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
 
 	mm_trace_rss_stat(mm, member);
 }
+static inline void add_mm_counter_other(struct mm_struct *mm, int member, long value)
+{
+	lazy_percpu_counter_add_atomic(&mm->rss_stat[member], value);
+
+	mm_trace_rss_stat(mm, member);
+}
 
-#define inc_mm_counter(mm, member) add_mm_counter(mm, member, 1)
-#define dec_mm_counter(mm, member) add_mm_counter(mm, member, -1)
+#define inc_mm_counter_local(mm, member) add_mm_counter_local(mm, member, 1)
+#define dec_mm_counter_local(mm, member) add_mm_counter_local(mm, member, -1)
+#define inc_mm_counter_other(mm, member) add_mm_counter_other(mm, member, 1)
+#define dec_mm_counter_other(mm, member) add_mm_counter_other(mm, member, -1)
 
 /* Optimized variant when folio is already known not to be anon */
 static inline int mm_counter_file(struct folio *folio)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 8709c69118b5..9c0e73dd2948 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -447,7 +447,7 @@ static int __uprobe_write(struct vm_area_struct *vma,
 	if (!orig_page_is_identical(vma, vaddr, fw->page, &pmd_mappable))
 		goto remap;
 
-	dec_mm_counter(vma->vm_mm, MM_ANONPAGES);
+	dec_mm_counter_other(vma->vm_mm, MM_ANONPAGES);
 	folio_remove_rmap_pte(folio, fw->page, vma);
 	if (!folio_mapped(folio) && folio_test_swapcache(folio) &&
 	    folio_trylock(folio)) {
diff --git a/mm/filemap.c b/mm/filemap.c
index 13f0259d993c..5d1656e63602 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3854,7 +3854,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 
 		folio_unlock(folio);
 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
-	add_mm_counter(vma->vm_mm, folio_type, rss);
+	add_mm_counter_other(vma->vm_mm, folio_type, rss);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	trace_mm_filemap_map_pages(mapping, start_pgoff, end_pgoff);
 out:
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1b81680b4225..614b0a8e168b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1228,7 +1228,7 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
 	folio_add_lru_vma(folio, vma);
 	set_pmd_at(vma->vm_mm, haddr, pmd, entry);
 	update_mmu_cache_pmd(vma, haddr, pmd);
-	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+	add_mm_counter_local(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 	count_vm_event(THP_FAULT_ALLOC);
 	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
 	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
@@ -1444,7 +1444,7 @@ static vm_fault_t insert_pmd(struct vm_area_struct *vma, unsigned long addr,
 		} else {
 			folio_get(fop.folio);
 			folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
-			add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
+			add_mm_counter_local(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
 		}
 	} else {
 		entry = pmd_mkhuge(pfn_pmd(fop.pfn, prot));
@@ -1563,7 +1563,7 @@ static vm_fault_t insert_pud(struct vm_area_struct *vma, unsigned long addr,
 
 		folio_get(fop.folio);
 		folio_add_file_rmap_pud(fop.folio, &fop.folio->page, vma);
-		add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PUD_NR);
+		add_mm_counter_local(mm, mm_counter_file(fop.folio), HPAGE_PUD_NR);
 	} else {
 		entry = pud_mkhuge(pfn_pud(fop.pfn, prot));
 		entry = pud_mkspecial(entry);
@@ -1714,7 +1714,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 				pmd = pmd_swp_mkuffd_wp(pmd);
 			set_pmd_at(src_mm, addr, src_pmd, pmd);
 		}
-		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+		add_mm_counter_local(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(dst_mm);
 		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
 		if (!userfaultfd_wp(dst_vma))
@@ -1758,7 +1758,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		__split_huge_pmd(src_vma, src_pmd, addr, false);
 		return -EAGAIN;
 	}
-	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+	add_mm_counter_local(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 out_zero_page:
 	mm_inc_nr_ptes(dst_mm);
 	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
@@ -2223,11 +2223,11 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		if (folio_test_anon(folio)) {
 			zap_deposited_table(tlb->mm, pmd);
-			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+			add_mm_counter_other(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 		} else {
 			if (arch_needs_pgtable_deposit())
 				zap_deposited_table(tlb->mm, pmd);
-			add_mm_counter(tlb->mm, mm_counter_file(folio),
+			add_mm_counter_other(tlb->mm, mm_counter_file(folio),
 				       -HPAGE_PMD_NR);
 
 			/*
@@ -2719,7 +2719,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	page = pud_page(orig_pud);
 	folio = page_folio(page);
 	folio_remove_rmap_pud(folio, page, vma);
-	add_mm_counter(tlb->mm, mm_counter_file(folio), -HPAGE_PUD_NR);
+	add_mm_counter_other(tlb->mm, mm_counter_file(folio), -HPAGE_PUD_NR);
 
 	spin_unlock(ptl);
 	tlb_remove_page_size(tlb, page, HPAGE_PUD_SIZE);
@@ -2755,7 +2755,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
 	folio_set_referenced(folio);
 	folio_remove_rmap_pud(folio, page, vma);
 	folio_put(folio);
-	add_mm_counter(vma->vm_mm, mm_counter_file(folio),
+	add_mm_counter_local(vma->vm_mm, mm_counter_file(folio),
 		       -HPAGE_PUD_NR);
 }
 
@@ -2874,7 +2874,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			folio_remove_rmap_pmd(folio, page, vma);
 			folio_put(folio);
 		}
-		add_mm_counter(mm, mm_counter_file(folio), -HPAGE_PMD_NR);
+		add_mm_counter_local(mm, mm_counter_file(folio), -HPAGE_PMD_NR);
 		return;
 	}
 
@@ -3188,7 +3188,7 @@ static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
 
 	folio_remove_rmap_pmd(folio, pmd_page(orig_pmd), vma);
 	zap_deposited_table(mm, pmdp);
-	add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	add_mm_counter_local(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 	if (vma->vm_flags & VM_LOCKED)
 		mlock_drain_local();
 	folio_put(folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index abe54f0043c7..a6634ca0667d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -691,7 +691,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
 		nr_ptes = 1;
 		pteval = ptep_get(_pte);
 		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
-			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
+			add_mm_counter_other(vma->vm_mm, MM_ANONPAGES, 1);
 			if (is_zero_pfn(pte_pfn(pteval))) {
 				/*
 				 * ptl mostly unnecessary.
@@ -1664,7 +1664,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	/* step 3: set proper refcount and mm_counters. */
 	if (nr_mapped_ptes) {
 		folio_ref_sub(folio, nr_mapped_ptes);
-		add_mm_counter(mm, mm_counter_file(folio), -nr_mapped_ptes);
+		add_mm_counter_other(mm, mm_counter_file(folio), -nr_mapped_ptes);
 	}
 
 	/* step 4: remove empty page table */
@@ -1700,7 +1700,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	if (nr_mapped_ptes) {
 		flush_tlb_mm(mm);
 		folio_ref_sub(folio, nr_mapped_ptes);
-		add_mm_counter(mm, mm_counter_file(folio), -nr_mapped_ptes);
+		add_mm_counter_other(mm, mm_counter_file(folio), -nr_mapped_ptes);
 	}
 unlock:
 	if (start_pte)
diff --git a/mm/ksm.c b/mm/ksm.c
index 7bc726b50b2f..7434cf1f4925 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1410,7 +1410,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 		 * will get wrong values in /proc, and a BUG message in dmesg
 		 * when tearing down the mm.
 		 */
-		dec_mm_counter(mm, MM_ANONPAGES);
+		dec_mm_counter_other(mm, MM_ANONPAGES);
 	}
 
 	flush_cache_page(vma, addr, pte_pfn(ptep_get(ptep)));
diff --git a/mm/madvise.c b/mm/madvise.c
index fb1c86e630b6..ba7ea134f5ad 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -776,7 +776,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	}
 
 	if (nr_swap)
-		add_mm_counter(mm, MM_SWAPENTS, nr_swap);
+		add_mm_counter_local(mm, MM_SWAPENTS, nr_swap);
 	if (start_pte) {
 		arch_leave_lazy_mmu_mode();
 		pte_unmap_unlock(start_pte, ptl);
diff --git a/mm/memory.c b/mm/memory.c
index 74b45e258323..9a18ac25955c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -488,7 +488,7 @@ static inline void add_mm_rss_vec(struct mm_struct *mm, int *rss)
 
 	for (i = 0; i < NR_MM_COUNTERS; i++)
 		if (rss[i])
-			add_mm_counter(mm, i, rss[i]);
+			add_mm_counter_other(mm, i, rss[i]);
 }
 
 static bool is_bad_page_map_ratelimited(void)
@@ -2306,7 +2306,7 @@ static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
 			pteval = pte_mkyoung(pteval);
 			pteval = maybe_mkwrite(pte_mkdirty(pteval), vma);
 		}
-		inc_mm_counter(vma->vm_mm, mm_counter_file(folio));
+		inc_mm_counter_local(vma->vm_mm, mm_counter_file(folio));
 		folio_add_file_rmap_pte(folio, page, vma);
 	}
 	set_pte_at(vma->vm_mm, addr, pte, pteval);
@@ -3716,12 +3716,12 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
 		if (old_folio) {
 			if (!folio_test_anon(old_folio)) {
-				dec_mm_counter(mm, mm_counter_file(old_folio));
-				inc_mm_counter(mm, MM_ANONPAGES);
+				dec_mm_counter_other(mm, mm_counter_file(old_folio));
+				inc_mm_counter_other(mm, MM_ANONPAGES);
 			}
 		} else {
 			ksm_might_unmap_zero_page(mm, vmf->orig_pte);
-			inc_mm_counter(mm, MM_ANONPAGES);
+			inc_mm_counter_other(mm, MM_ANONPAGES);
 		}
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 		entry = folio_mk_pte(new_folio, vma->vm_page_prot);
@@ -4916,8 +4916,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (should_try_to_free_swap(folio, vma, vmf->flags))
 		folio_free_swap(folio);
 
-	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
-	add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
+	add_mm_counter_other(vma->vm_mm, MM_ANONPAGES, nr_pages);
+	add_mm_counter_other(vma->vm_mm, MM_SWAPENTS, -nr_pages);
 	pte = mk_pte(page, vma->vm_page_prot);
 	if (pte_swp_soft_dirty(vmf->orig_pte))
 		pte = pte_mksoft_dirty(pte);
@@ -5223,7 +5223,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	}
 
 	folio_ref_add(folio, nr_pages - 1);
-	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
+	add_mm_counter_other(vma->vm_mm, MM_ANONPAGES, nr_pages);
 	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
 	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
 	folio_add_lru_vma(folio, vma);
@@ -5375,7 +5375,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *page)
 	if (write)
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
-	add_mm_counter(vma->vm_mm, mm_counter_file(folio), HPAGE_PMD_NR);
+	add_mm_counter_other(vma->vm_mm, mm_counter_file(folio), HPAGE_PMD_NR);
 	folio_add_file_rmap_pmd(folio, page, vma);
 
 	/*
@@ -5561,7 +5561,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 	folio_ref_add(folio, nr_pages - 1);
 	set_pte_range(vmf, folio, page, nr_pages, addr);
 	type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
-	add_mm_counter(vma->vm_mm, type, nr_pages);
+	add_mm_counter_other(vma->vm_mm, type, nr_pages);
 	ret = 0;
 
 unlock:
diff --git a/mm/migrate.c b/mm/migrate.c
index e3065c9edb55..dd8c6e6224f9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -329,7 +329,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 
 	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
 
-	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
+	dec_mm_counter_other(pvmw->vma->vm_mm, mm_counter(folio));
 	return true;
 }
 
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index abd9f6850db6..7f3e5d7b3109 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -676,7 +676,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	if (userfaultfd_missing(vma))
 		goto unlock_abort;
 
-	inc_mm_counter(mm, MM_ANONPAGES);
+	inc_mm_counter_other(mm, MM_ANONPAGES);
 	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
 	if (!folio_is_zone_device(folio))
 		folio_add_lru_vma(folio, vma);
diff --git a/mm/rmap.c b/mm/rmap.c
index ac4f783d6ec2..0f6023ffb65d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2085,7 +2085,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				set_huge_pte_at(mm, address, pvmw.pte, pteval,
 						hsz);
 			} else {
-				dec_mm_counter(mm, mm_counter(folio));
+				dec_mm_counter_other(mm, mm_counter(folio));
 				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 		} else if (likely(pte_present(pteval)) && pte_unused(pteval) &&
@@ -2100,7 +2100,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 * migration) will not expect userfaults on already
 			 * copied pages.
 			 */
-			dec_mm_counter(mm, mm_counter(folio));
+			dec_mm_counter_other(mm, mm_counter(folio));
 		} else if (folio_test_anon(folio)) {
 			swp_entry_t entry = page_swap_entry(subpage);
 			pte_t swp_pte;
@@ -2155,7 +2155,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
 				goto walk_abort;
 			}
-			add_mm_counter(mm, MM_ANONPAGES, -nr_pages);
+			add_mm_counter_other(mm, MM_ANONPAGES, -nr_pages);
 			goto discard;
 		}
 
@@ -2188,8 +2188,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					list_add(&mm->mmlist, &init_mm.mmlist);
 				spin_unlock(&mmlist_lock);
 			}
-			dec_mm_counter(mm, MM_ANONPAGES);
-			inc_mm_counter(mm, MM_SWAPENTS);
+			dec_mm_counter_other(mm, MM_ANONPAGES);
+			inc_mm_counter_other(mm, MM_SWAPENTS);
 			swp_pte = swp_entry_to_pte(entry);
 			if (anon_exclusive)
 				swp_pte = pte_swp_mkexclusive(swp_pte);
@@ -2217,7 +2217,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 *
 			 * See Documentation/mm/mmu_notifier.rst
 			 */
-			dec_mm_counter(mm, mm_counter_file(folio));
+			dec_mm_counter_other(mm, mm_counter_file(folio));
 		}
 discard:
 		if (unlikely(folio_test_hugetlb(folio))) {
@@ -2476,7 +2476,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				set_huge_pte_at(mm, address, pvmw.pte, pteval,
 						hsz);
 			} else {
-				dec_mm_counter(mm, mm_counter(folio));
+				dec_mm_counter_other(mm, mm_counter(folio));
 				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 		} else if (likely(pte_present(pteval)) && pte_unused(pteval) &&
@@ -2491,7 +2491,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 * migration) will not expect userfaults on already
 			 * copied pages.
 			 */
-			dec_mm_counter(mm, mm_counter(folio));
+			dec_mm_counter_other(mm, mm_counter(folio));
 		} else {
 			swp_entry_t entry;
 			pte_t swp_pte;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 10760240a3a2..70f7d31c0854 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2163,7 +2163,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	if (unlikely(hwpoisoned || !folio_test_uptodate(folio))) {
 		swp_entry_t swp_entry;
 
-		dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
+		dec_mm_counter_other(vma->vm_mm, MM_SWAPENTS);
 		if (hwpoisoned) {
 			swp_entry = make_hwpoison_entry(page);
 		} else {
@@ -2181,8 +2181,8 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	 */
 	arch_swap_restore(folio_swap(entry, folio), folio);
 
-	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
-	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
+	dec_mm_counter_other(vma->vm_mm, MM_SWAPENTS);
+	inc_mm_counter_other(vma->vm_mm, MM_ANONPAGES);
 	folio_get(folio);
 	if (folio == swapcache) {
 		rmap_t rmap_flags = RMAP_NONE;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index af61b95c89e4..34e760c37b7b 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -221,7 +221,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
 	 * Must happen after rmap, as mm_counter() checks mapping (via
 	 * PageAnon()), which is set by __page_set_anon_rmap().
 	 */
-	inc_mm_counter(dst_mm, mm_counter(folio));
+	inc_mm_counter_other(dst_mm, mm_counter(folio));
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
 

-- 
2.51.0