From: Li Zhe <lizhe.67@bytedance.com>
Subject: [PATCH 2/8] mm/hugetlb: convert to prep_account_new_hugetlb_folio()
Date: Thu, 25 Dec 2025 16:20:53 +0800
Message-Id: <20251225082059.1632-3-lizhe.67@bytedance.com>
In-Reply-To: <20251225082059.1632-1-lizhe.67@bytedance.com>
References: <20251225082059.1632-1-lizhe.67@bytedance.com>
X-Mailer: git-send-email 2.45.2

From: Li Zhe <lizhe.67@bytedance.com>

After a huge folio is instantiated, it is always initialized by calling
prep_new_hugetlb_folio() followed by account_new_hugetlb_folio(). To
eliminate the risk that a future change updates one routine but
overlooks the other, consolidate the two functions into a single entry
point, prep_account_new_hugetlb_folio().
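As an illustration of the conversion (a sketch only, mirroring the
converted call sites in the diff below, not additional code), a caller
that previously had to perform both steps while holding hugetlb_lock
now makes a single call:

	/* Before: two helpers, both required under hugetlb_lock. */
	spin_lock_irq(&hugetlb_lock);
	prep_new_hugetlb_folio(folio);
	account_new_hugetlb_folio(h, folio);
	spin_unlock_irq(&hugetlb_lock);

	/* After: one helper prepares the folio and updates the counters. */
	spin_lock_irq(&hugetlb_lock);
	prep_account_new_hugetlb_folio(h, folio);
	spin_unlock_irq(&hugetlb_lock);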
Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
---
 mm/hugetlb.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d20614b1c927..63f9369789b5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1874,18 +1874,14 @@ void free_huge_folio(struct folio *folio)
 /*
  * Must be called with the hugetlb lock held
  */
-static void account_new_hugetlb_folio(struct hstate *h, struct folio *folio)
-{
-	lockdep_assert_held(&hugetlb_lock);
-	h->nr_huge_pages++;
-	h->nr_huge_pages_node[folio_nid(folio)]++;
-}
-
-static void prep_new_hugetlb_folio(struct folio *folio)
+static void prep_account_new_hugetlb_folio(struct hstate *h,
+					   struct folio *folio)
 {
 	lockdep_assert_held(&hugetlb_lock);
 	folio_clear_hugetlb_freed(folio);
 	prep_clear_zeroed(folio);
+	h->nr_huge_pages++;
+	h->nr_huge_pages_node[folio_nid(folio)]++;
 }
 
 void init_new_hugetlb_folio(struct folio *folio)
@@ -2012,8 +2008,7 @@ void prep_and_add_allocated_folios(struct hstate *h,
 	/* Add all new pool pages to free lists in one lock cycle */
 	spin_lock_irqsave(&hugetlb_lock, flags);
 	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
-		prep_new_hugetlb_folio(folio);
-		account_new_hugetlb_folio(h, folio);
+		prep_account_new_hugetlb_folio(h, folio);
 		enqueue_hugetlb_folio(h, folio);
 	}
 	spin_unlock_irqrestore(&hugetlb_lock, flags);
@@ -2220,13 +2215,12 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
 		return NULL;
 
 	spin_lock_irq(&hugetlb_lock);
-	prep_new_hugetlb_folio(folio);
 	/*
 	 * nr_huge_pages needs to be adjusted within the same lock cycle
 	 * as surplus_pages, otherwise it might confuse
 	 * persistent_huge_pages() momentarily.
 	 */
-	account_new_hugetlb_folio(h, folio);
+	prep_account_new_hugetlb_folio(h, folio);
 
 	/*
 	 * We could have raced with the pool size change.
@@ -2264,8 +2258,7 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mask,
 		return NULL;
 
 	spin_lock_irq(&hugetlb_lock);
-	prep_new_hugetlb_folio(folio);
-	account_new_hugetlb_folio(h, folio);
+	prep_account_new_hugetlb_folio(h, folio);
 	spin_unlock_irq(&hugetlb_lock);
 
 	/* fresh huge pages are frozen */
@@ -2831,18 +2824,17 @@ static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,
 	/*
 	 * Ok, old_folio is still a genuine free hugepage. Remove it from
 	 * the freelist and decrease the counters. These will be
-	 * incremented again when calling account_new_hugetlb_folio()
+	 * incremented again when calling prep_account_new_hugetlb_folio()
 	 * and enqueue_hugetlb_folio() for new_folio. The counters will
 	 * remain stable since this happens under the lock.
 	 */
 	remove_hugetlb_folio(h, old_folio, false);
 
-	prep_new_hugetlb_folio(new_folio);
 	/*
 	 * Ref count on new_folio is already zero as it was dropped
 	 * earlier. It can be directly added to the pool free list.
 	 */
-	account_new_hugetlb_folio(h, new_folio);
+	prep_account_new_hugetlb_folio(h, new_folio);
 	enqueue_hugetlb_folio(h, new_folio);
 
 	/*
@@ -3318,8 +3310,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 		hugetlb_bootmem_init_migratetype(folio, h);
 		/* Subdivide locks to achieve better parallel performance */
 		spin_lock_irqsave(&hugetlb_lock, flags);
-		prep_new_hugetlb_folio(folio);
-		account_new_hugetlb_folio(h, folio);
+		prep_account_new_hugetlb_folio(h, folio);
 		enqueue_hugetlb_folio(h, folio);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 	}
-- 
2.20.1