X-Mailing-List: linux-kernel@vger.kernel.org
From: "Li Zhe"
Date: Wed, 7 Jan 2026 19:31:24 +0800
Subject: [PATCH v2 2/8] mm/hugetlb: convert to prep_account_new_hugetlb_folio()
References: <20260107113130.37231-1-lizhe.67@bytedance.com>
In-Reply-To: <20260107113130.37231-1-lizhe.67@bytedance.com>
Message-Id: <20260107113130.37231-3-lizhe.67@bytedance.com>

After a huge folio is instantiated, it is always initialized through
back-to-back calls to prep_new_hugetlb_folio() and
account_new_hugetlb_folio(). To eliminate the risk that a future change
updates one routine but overlooks the other, consolidate the two
functions into a single entry point, prep_account_new_hugetlb_folio().
Signed-off-by: Li Zhe
---
 mm/hugetlb.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 001fc0ed4c48..a7e582abe9f9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1874,18 +1874,14 @@ void free_huge_folio(struct folio *folio)
 /*
  * Must be called with the hugetlb lock held
  */
-static void account_new_hugetlb_folio(struct hstate *h, struct folio *folio)
-{
-	lockdep_assert_held(&hugetlb_lock);
-	h->nr_huge_pages++;
-	h->nr_huge_pages_node[folio_nid(folio)]++;
-}
-
-static void prep_new_hugetlb_folio(struct folio *folio)
+static void prep_account_new_hugetlb_folio(struct hstate *h,
+					   struct folio *folio)
 {
 	lockdep_assert_held(&hugetlb_lock);
 	folio_clear_hugetlb_freed(folio);
 	prep_clear_zeroed(folio);
+	h->nr_huge_pages++;
+	h->nr_huge_pages_node[folio_nid(folio)]++;
 }
 
 void init_new_hugetlb_folio(struct folio *folio)
@@ -2012,8 +2008,7 @@ void prep_and_add_allocated_folios(struct hstate *h,
 	/* Add all new pool pages to free lists in one lock cycle */
 	spin_lock_irqsave(&hugetlb_lock, flags);
 	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
-		prep_new_hugetlb_folio(folio);
-		account_new_hugetlb_folio(h, folio);
+		prep_account_new_hugetlb_folio(h, folio);
 		enqueue_hugetlb_folio(h, folio);
 	}
 	spin_unlock_irqrestore(&hugetlb_lock, flags);
@@ -2220,13 +2215,12 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
 		return NULL;
 
 	spin_lock_irq(&hugetlb_lock);
-	prep_new_hugetlb_folio(folio);
 	/*
 	 * nr_huge_pages needs to be adjusted within the same lock cycle
 	 * as surplus_pages, otherwise it might confuse
 	 * persistent_huge_pages() momentarily.
 	 */
-	account_new_hugetlb_folio(h, folio);
+	prep_account_new_hugetlb_folio(h, folio);
 
 	/*
 	 * We could have raced with the pool size change.
@@ -2264,8 +2258,7 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
 		return NULL;
 
 	spin_lock_irq(&hugetlb_lock);
-	prep_new_hugetlb_folio(folio);
-	account_new_hugetlb_folio(h, folio);
+	prep_account_new_hugetlb_folio(h, folio);
 	spin_unlock_irq(&hugetlb_lock);
 
 	/* fresh huge pages are frozen */
@@ -2831,18 +2824,17 @@ static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,
 	/*
 	 * Ok, old_folio is still a genuine free hugepage. Remove it from
 	 * the freelist and decrease the counters. These will be
-	 * incremented again when calling account_new_hugetlb_folio()
+	 * incremented again when calling prep_account_new_hugetlb_folio()
 	 * and enqueue_hugetlb_folio() for new_folio. The counters will
 	 * remain stable since this happens under the lock.
 	 */
 	remove_hugetlb_folio(h, old_folio, false);
 
-	prep_new_hugetlb_folio(new_folio);
 	/*
 	 * Ref count on new_folio is already zero as it was dropped
 	 * earlier. It can be directly added to the pool free list.
 	 */
-	account_new_hugetlb_folio(h, new_folio);
+	prep_account_new_hugetlb_folio(h, new_folio);
 	enqueue_hugetlb_folio(h, new_folio);
 
 	/*
@@ -3318,8 +3310,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 		hugetlb_bootmem_init_migratetype(folio, h);
 		/* Subdivide locks to achieve better parallel performance */
 		spin_lock_irqsave(&hugetlb_lock, flags);
-		prep_new_hugetlb_folio(folio);
-		account_new_hugetlb_folio(h, folio);
+		prep_account_new_hugetlb_folio(h, folio);
 		enqueue_hugetlb_folio(h, folio);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 	}
-- 
2.20.1