From nobody Thu Oct 2 15:34:43 2025
Date: Mon, 15 Sep 2025 09:36:35 -0700
In-Reply-To:
 <20250915163838.631445-1-kaleshsingh@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250915163838.631445-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.51.0.384.g4c02a37b29-goog
Message-ID: <20250915163838.631445-5-kaleshsingh@google.com>
Subject: [PATCH v2 4/7] mm: rename mm_struct::map_count to vma_count
From: Kalesh Singh
To: akpm@linux-foundation.org, minchan@kernel.org, lorenzo.stoakes@oracle.com,
 david@redhat.com, Liam.Howlett@oracle.com, rppt@kernel.org, pfalcato@suse.de
Cc: kernel-team@android.com, android-mm@google.com, Kalesh Singh,
 Alexander Viro, Christian Brauner, Jan Kara, Kees Cook, Vlastimil Babka,
 Suren Baghdasaryan, Michal Hocko, Steven Rostedt, Masami Hiramatsu,
 Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Juri Lelli,
 Vincent Guittot, Dietmar Eggemann, Ben Segall, Mel Gorman,
 Valentin Schneider, Jann Horn, Shuah Khan, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

A mechanical rename of the mm_struct->map_count field to vma_count; no
functional change is intended.

The name "map_count" is ambiguous within the memory management subsystem,
as it can be confused with the folio/page->_mapcount field, which tracks
PTE references. The new name, vma_count, is more precise, as this field
has always counted the number of vm_area_structs associated with an
mm_struct.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: "Liam R. Howlett"
Cc: Lorenzo Stoakes
Cc: Mike Rapoport
Cc: Minchan Kim
Cc: Pedro Falcato
Signed-off-by: Kalesh Singh
Acked-by: David Hildenbrand
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Pedro Falcato
---

Changes in v2:
  - map_count is easily confused with _mapcount; rename to vma_count, per
    David

 fs/binfmt_elf.c                  |  2 +-
 fs/coredump.c                    |  2 +-
 include/linux/mm_types.h         |  2 +-
 kernel/fork.c                    |  2 +-
 mm/debug.c                       |  2 +-
 mm/mmap.c                        |  6 +++---
 mm/nommu.c                       |  6 +++---
 mm/vma.c                         | 24 ++++++++++++------------
 tools/testing/vma/vma.c          | 32 ++++++++++++++++----------------
 tools/testing/vma/vma_internal.h |  6 +++---
 10 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 264fba0d44bd..52449dec12cb 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1643,7 +1643,7 @@ static int fill_files_note(struct memelfnote *note, struct coredump_params *cprm
 	data[0] = count;
 	data[1] = PAGE_SIZE;
 	/*
-	 * Count usually is less than mm->map_count,
+	 * Count usually is less than mm->vma_count,
 	 * we need to move filenames down.
 	 */
 	n = cprm->vma_count - count;
diff --git a/fs/coredump.c b/fs/coredump.c
index 60bc9685e149..8881459c53d9 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -1731,7 +1731,7 @@ static bool dump_vma_snapshot(struct coredump_params *cprm)
 	cprm->vma_data_size = 0;
 	gate_vma = get_gate_vma(mm);
-	cprm->vma_count = mm->map_count + (gate_vma ? 1 : 0);
+	cprm->vma_count = mm->vma_count + (gate_vma ? 1 : 0);
 	cprm->vma_meta = kvmalloc_array(cprm->vma_count, sizeof(*cprm->vma_meta),
 				       GFP_KERNEL);
 	if (!cprm->vma_meta) {
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 08bc2442db93..4343be2f9e85 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1020,7 +1020,7 @@ struct mm_struct {
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* size of all page tables */
 #endif
-		int map_count;			/* number of VMAs */
+		int vma_count;			/* number of VMAs */
 		spinlock_t page_table_lock; /* Protects page tables and some
					     * counters
diff --git a/kernel/fork.c b/kernel/fork.c
index c4ada32598bd..8fcbbf947579 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1037,7 +1037,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	mmap_init_lock(mm);
 	INIT_LIST_HEAD(&mm->mmlist);
 	mm_pgtables_bytes_init(mm);
-	mm->map_count = 0;
+	mm->vma_count = 0;
 	mm->locked_vm = 0;
 	atomic64_set(&mm->pinned_vm, 0);
 	memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
diff --git a/mm/debug.c b/mm/debug.c
index b4388f4dcd4d..40fc9425a84a 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -204,7 +204,7 @@ void dump_mm(const struct mm_struct *mm)
 		mm->pgd, atomic_read(&mm->mm_users),
 		atomic_read(&mm->mm_count),
 		mm_pgtables_bytes(mm),
-		mm->map_count,
+		mm->vma_count,
 		mm->hiwater_rss, mm->hiwater_vm, mm->total_vm, mm->locked_vm,
 		(u64)atomic64_read(&mm->pinned_vm),
 		mm->data_vm, mm->exec_vm, mm->stack_vm,
diff --git a/mm/mmap.c b/mm/mmap.c
index af88ce1fbb5f..c6769394a174 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1308,7 +1308,7 @@ void exit_mmap(struct mm_struct *mm)
 		vma = vma_next(&vmi);
 	} while (vma && likely(!xa_is_zero(vma)));
 
-	BUG_ON(count != mm->map_count);
+	BUG_ON(count != mm->vma_count);
 
 	trace_exit_mmap(mm);
 destroy:
@@ -1517,7 +1517,7 @@ static int sysctl_max_map_count __read_mostly = DEFAULT_MAX_MAP_COUNT;
  */
 int vma_count_remaining(const struct mm_struct *mm)
 {
-	const int map_count = mm->map_count;
+	const int map_count = mm->vma_count;
 	const int max_count = sysctl_max_map_count;
 
 	return (max_count > map_count) ? (max_count - map_count) : 0;
@@ -1828,7 +1828,7 @@ __latent_entropy int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
 		 */
 		vma_iter_bulk_store(&vmi, tmp);
 
-		mm->map_count++;
+		mm->vma_count++;
 
 		if (tmp->vm_ops && tmp->vm_ops->open)
 			tmp->vm_ops->open(tmp);
diff --git a/mm/nommu.c b/mm/nommu.c
index dd75f2334812..9ab2e5ca736d 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -576,7 +576,7 @@ static void setup_vma_to_mm(struct vm_area_struct *vma, struct mm_struct *mm)
 
 static void cleanup_vma_from_mm(struct vm_area_struct *vma)
 {
-	vma->vm_mm->map_count--;
+	vma->vm_mm->vma_count--;
 	/* remove the VMA from the mapping */
 	if (vma->vm_file) {
 		struct address_space *mapping;
@@ -1198,7 +1198,7 @@ unsigned long do_mmap(struct file *file,
 		goto error_just_free;
 
 	setup_vma_to_mm(vma, current->mm);
-	current->mm->map_count++;
+	current->mm->vma_count++;
 	/* add the VMA to the tree */
 	vma_iter_store_new(&vmi, vma);
 
@@ -1366,7 +1366,7 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	setup_vma_to_mm(vma, mm);
 	setup_vma_to_mm(new, mm);
 	vma_iter_store_new(vmi, new);
-	mm->map_count++;
+	mm->vma_count++;
 	return 0;
 
 err_vmi_preallocate:
diff --git a/mm/vma.c b/mm/vma.c
index df0e8409f63d..64f4e7c867c3 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -352,7 +352,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
 		 * (it may either follow vma or precede it).
 		 */
 		vma_iter_store_new(vmi, vp->insert);
-		mm->map_count++;
+		mm->vma_count++;
 	}
 
 	if (vp->anon_vma) {
@@ -383,7 +383,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
 		}
 		if (vp->remove->anon_vma)
 			anon_vma_merge(vp->vma, vp->remove);
-		mm->map_count--;
+		mm->vma_count--;
 		mpol_put(vma_policy(vp->remove));
 		if (!vp->remove2)
 			WARN_ON_ONCE(vp->vma->vm_end < vp->remove->vm_end);
@@ -683,13 +683,13 @@ void validate_mm(struct mm_struct *mm)
 		}
 #endif
 		/* Check for a infinite loop */
-		if (++i > mm->map_count + 10) {
+		if (++i > mm->vma_count + 10) {
 			i = -1;
 			break;
 		}
 	}
-	if (i != mm->map_count) {
-		pr_emerg("map_count %d vma iterator %d\n", mm->map_count, i);
+	if (i != mm->vma_count) {
+		pr_emerg("vma_count %d vma iterator %d\n", mm->vma_count, i);
 		bug = 1;
 	}
 	VM_BUG_ON_MM(bug, mm);
@@ -1266,7 +1266,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	struct mm_struct *mm;
 
 	mm = current->mm;
-	mm->map_count -= vms->vma_count;
+	mm->vma_count -= vms->vma_count;
 	mm->locked_vm -= vms->locked_vm;
 	if (vms->unlock)
 		mmap_write_downgrade(mm);
@@ -1340,14 +1340,14 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 
 	if (vms->start > vms->vma->vm_start) {
 		/*
-		 * Make sure that map_count on return from munmap() will
+		 * Make sure that vma_count on return from munmap() will
 		 * not exceed its limit; but let map_count go just above
 		 * its limit temporarily, to help free resources as expected.
 		 */
 		if (vms->end < vms->vma->vm_end &&
		    !vma_count_remaining(vms->vma->vm_mm)) {
 			error = -ENOMEM;
-			goto map_count_exceeded;
+			goto vma_count_exceeded;
 		}
 
 		/* Don't bother splitting the VMA if we can't unmap it anyway */
@@ -1461,7 +1461,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 modify_vma_failed:
 	reattach_vmas(mas_detach);
 start_split_failed:
-map_count_exceeded:
+vma_count_exceeded:
 	return error;
 }
 
@@ -1795,7 +1795,7 @@ int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
 	vma_start_write(vma);
 	vma_iter_store_new(&vmi, vma);
 	vma_link_file(vma);
-	mm->map_count++;
+	mm->vma_count++;
 	validate_mm(mm);
 	return 0;
 }
@@ -2495,7 +2495,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 	/* Lock the VMA since it is modified after insertion into VMA tree */
 	vma_start_write(vma);
 	vma_iter_store_new(vmi, vma);
-	map->mm->map_count++;
+	map->mm->vma_count++;
 	vma_link_file(vma);
 
 	/*
@@ -2810,7 +2810,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
 		goto mas_store_fail;
-	mm->map_count++;
+	mm->vma_count++;
 	validate_mm(mm);
 out:
 	perf_event_mmap(vma);
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index 656e1c75b711..69fa7d14a6c2 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -261,7 +261,7 @@ static int cleanup_mm(struct mm_struct *mm, struct vma_iterator *vmi)
 	}
 
 	mtree_destroy(&mm->mm_mt);
-	mm->map_count = 0;
+	mm->vma_count = 0;
 	return count;
 }
 
@@ -500,7 +500,7 @@ static bool test_merge_new(void)
 	INIT_LIST_HEAD(&vma_d->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_d.same_vma, &vma_d->anon_vma_chain);
 	ASSERT_FALSE(merged);
-	ASSERT_EQ(mm.map_count, 4);
+	ASSERT_EQ(mm.vma_count, 4);
 
 	/*
 	 * Merge BOTH sides.
@@ -519,7 +519,7 @@ static bool test_merge_new(void)
 	ASSERT_EQ(vma->vm_pgoff, 0);
 	ASSERT_EQ(vma->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 3);
+	ASSERT_EQ(mm.vma_count, 3);
 
 	/*
 	 * Merge to PREVIOUS VMA.
@@ -536,7 +536,7 @@ static bool test_merge_new(void)
 	ASSERT_EQ(vma->vm_pgoff, 0);
 	ASSERT_EQ(vma->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 3);
+	ASSERT_EQ(mm.vma_count, 3);
 
 	/*
 	 * Merge to NEXT VMA.
@@ -555,7 +555,7 @@ static bool test_merge_new(void)
 	ASSERT_EQ(vma->vm_pgoff, 6);
 	ASSERT_EQ(vma->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 3);
+	ASSERT_EQ(mm.vma_count, 3);
 
 	/*
 	 * Merge BOTH sides.
@@ -573,7 +573,7 @@ static bool test_merge_new(void)
 	ASSERT_EQ(vma->vm_pgoff, 0);
 	ASSERT_EQ(vma->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 2);
+	ASSERT_EQ(mm.vma_count, 2);
 
 	/*
 	 * Merge to NEXT VMA.
@@ -591,7 +591,7 @@ static bool test_merge_new(void)
 	ASSERT_EQ(vma->vm_pgoff, 0xa);
 	ASSERT_EQ(vma->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 2);
+	ASSERT_EQ(mm.vma_count, 2);
 
 	/*
 	 * Merge BOTH sides.
@@ -608,7 +608,7 @@ static bool test_merge_new(void)
 	ASSERT_EQ(vma->vm_pgoff, 0);
 	ASSERT_EQ(vma->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 1);
+	ASSERT_EQ(mm.vma_count, 1);
 
 	/*
 	 * Final state.
@@ -967,7 +967,7 @@ static bool test_vma_merge_new_with_close(void)
 	ASSERT_EQ(vma->vm_pgoff, 0);
 	ASSERT_EQ(vma->vm_ops, &vm_ops);
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 2);
+	ASSERT_EQ(mm.vma_count, 2);
 
 	cleanup_mm(&mm, &vmi);
 	return true;
@@ -1017,7 +1017,7 @@ static bool test_merge_existing(void)
 	ASSERT_EQ(vma->vm_pgoff, 2);
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_TRUE(vma_write_started(vma_next));
-	ASSERT_EQ(mm.map_count, 2);
+	ASSERT_EQ(mm.vma_count, 2);
 
 	/* Clear down and reset. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 2);
@@ -1045,7 +1045,7 @@ static bool test_merge_existing(void)
 	ASSERT_EQ(vma_next->vm_pgoff, 2);
 	ASSERT_EQ(vma_next->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma_next));
-	ASSERT_EQ(mm.map_count, 1);
+	ASSERT_EQ(mm.vma_count, 1);
 
 	/* Clear down and reset. We should have deleted vma. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -1079,7 +1079,7 @@ static bool test_merge_existing(void)
 	ASSERT_EQ(vma->vm_pgoff, 6);
 	ASSERT_TRUE(vma_write_started(vma_prev));
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 2);
+	ASSERT_EQ(mm.vma_count, 2);
 
 	/* Clear down and reset. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 2);
@@ -1108,7 +1108,7 @@ static bool test_merge_existing(void)
 	ASSERT_EQ(vma_prev->vm_pgoff, 0);
 	ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma_prev));
-	ASSERT_EQ(mm.map_count, 1);
+	ASSERT_EQ(mm.vma_count, 1);
 
 	/* Clear down and reset. We should have deleted vma. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -1138,7 +1138,7 @@ static bool test_merge_existing(void)
 	ASSERT_EQ(vma_prev->vm_pgoff, 0);
 	ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma);
 	ASSERT_TRUE(vma_write_started(vma_prev));
-	ASSERT_EQ(mm.map_count, 1);
+	ASSERT_EQ(mm.vma_count, 1);
 
 	/* Clear down and reset. We should have deleted prev and next. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -1540,7 +1540,7 @@ static bool test_merge_extend(void)
 	ASSERT_EQ(vma->vm_end, 0x4000);
 	ASSERT_EQ(vma->vm_pgoff, 0);
 	ASSERT_TRUE(vma_write_started(vma));
-	ASSERT_EQ(mm.map_count, 1);
+	ASSERT_EQ(mm.vma_count, 1);
 
 	cleanup_mm(&mm, &vmi);
 	return true;
@@ -1652,7 +1652,7 @@ static bool test_mmap_region_basic(void)
 			0x24d, NULL);
 
 	ASSERT_EQ(addr, 0x24d000);
-	ASSERT_EQ(mm.map_count, 2);
+	ASSERT_EQ(mm.vma_count, 2);
 
 	for_each_vma(vmi, vma) {
 		if (vma->vm_start == 0x300000) {
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 52cd7ddc73f4..15525b86145d 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -251,7 +251,7 @@ struct mutex {};
 
 struct mm_struct {
 	struct maple_tree mm_mt;
-	int map_count;			/* number of VMAs */
+	int vma_count;			/* number of VMAs */
 	unsigned long total_vm;	   /* Total pages mapped */
 	unsigned long locked_vm;   /* Pages that have PG_mlocked set */
 	unsigned long data_vm;	   /* VM_WRITE & ~VM_SHARED & ~VM_STACK */
@@ -1520,10 +1520,10 @@ static inline vm_flags_t ksm_vma_flags(const struct mm_struct *, const struct fi
 /* Helper to get VMA count capacity */
 static int vma_count_remaining(const struct mm_struct *mm)
 {
-	const int map_count = mm->map_count;
+	const int vma_count = mm->vma_count;
 	const int max_count = sysctl_max_map_count;
 
-	return (max_count > map_count) ? (max_count - map_count) : 0;
+	return (max_count > vma_count) ? (max_count - vma_count) : 0;
 }
 
 #endif	/* __MM_VMA_INTERNAL_H */
-- 
2.51.0.384.g4c02a37b29-goog