From: Li Zhe <lizhe.67@bytedance.com>
Date: Thu, 25 Dec 2025 16:20:58 +0800
Subject: [PATCH 7/8] mm/hugetlb: add epoll support for interface "zeroable_hugepages"
Message-Id: <20251225082059.1632-8-lizhe.67@bytedance.com>
In-Reply-To: <20251225082059.1632-1-lizhe.67@bytedance.com>
References: <20251225082059.1632-1-lizhe.67@bytedance.com>

Add epoll support for the "zeroable_hugepages" interface. When no huge
folios are currently available for pre-zeroing, user space can block on
the zeroable_hugepages file with epoll and will be woken as soon as one
or more huge folios become eligible for pre-zeroing.
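For illustration, a minimal user-space sketch (not part of this patch)
of the intended usage: it blocks on a per-node zeroable_hugepages
attribute with epoll. The exact sysfs path is an assumption based on
the per-node hstate layout used by this series; sysfs notifications
surface as EPOLLPRI | EPOLLERR, so that is what the watcher registers
for.

/*
 * Sketch: block until huge folios become eligible for pre-zeroing.
 * The path below is hypothetical; pick the node/hstate of interest.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void)
{
	const char *path = "/sys/devices/system/node/node0/hugepages/"
			   "hugepages-2048kB/zeroable_hugepages";
	struct epoll_event ev = { .events = EPOLLPRI | EPOLLERR };
	struct epoll_event out;
	char buf[64];
	int fd, epfd;

	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	epfd = epoll_create1(0);
	if (epfd < 0) {
		perror("epoll_create1");
		return 1;
	}
	ev.data.fd = fd;
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
		perror("epoll_ctl");
		return 1;
	}

	for (;;) {
		/* Reading the attribute also arms the next notification. */
		ssize_t n = pread(fd, buf, sizeof(buf) - 1, 0);

		if (n < 0) {
			perror("pread");
			return 1;
		}
		buf[n] = '\0';
		printf("zeroable_hugepages: %s", buf);

		/* Blocks until sysfs_notify() fires for this attribute. */
		if (epoll_wait(epfd, &out, 1, -1) < 0) {
			perror("epoll_wait");
			return 1;
		}
	}
}

Note that waiting on EPOLLPRI rather than EPOLLIN matters here: a sysfs
attribute always reports as readable, so an EPOLLIN wait would return
immediately instead of blocking until sysfs_notify() fires.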
Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
---
 mm/hugetlb.c          | 13 +++++++++++++
 mm/hugetlb_internal.h |  6 ++++++
 mm/hugetlb_sysfs.c    | 22 +++++++++++++++++++++-
 3 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8d36487659f8..c2df0317fe15 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1868,6 +1868,7 @@ void free_huge_folio(struct folio *folio)
 		arch_clear_hugetlb_flags(folio);
 		enqueue_hugetlb_folio(h, folio);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
+		do_zero_free_notify(h, folio_nid(folio));
 	}
 }
 
@@ -1999,8 +2000,10 @@ static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
 void prep_and_add_allocated_folios(struct hstate *h,
 				   struct list_head *folio_list)
 {
+	nodemask_t allocated_mask = NODE_MASK_NONE;
 	unsigned long flags;
 	struct folio *folio, *tmp_f;
+	int nid;
 
 	/* Send list for bulk vmemmap optimization processing */
 	hugetlb_vmemmap_optimize_folios(h, folio_list);
@@ -2010,8 +2013,12 @@ void prep_and_add_allocated_folios(struct hstate *h,
 	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
 		prep_account_new_hugetlb_folio(h, folio);
 		enqueue_hugetlb_folio(h, folio);
+		node_set(folio_nid(folio), allocated_mask);
 	}
 	spin_unlock_irqrestore(&hugetlb_lock, flags);
+
+	for_each_node_mask(nid, allocated_mask)
+		do_zero_free_notify(h, nid);
 }
 
 /*
@@ -2383,6 +2390,8 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 	long needed, allocated;
 	bool alloc_ok = true;
 	nodemask_t *mbind_nodemask, alloc_nodemask;
+	nodemask_t allocated_mask = NODE_MASK_NONE;
+	int nid;
 
 	mbind_nodemask = policy_mbind_nodemask(htlb_alloc_mask(h));
 	if (mbind_nodemask)
@@ -2455,9 +2464,12 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 			break;
 		/* Add the page to the hugetlb allocator */
 		enqueue_hugetlb_folio(h, folio);
+		node_set(folio_nid(folio), allocated_mask);
 	}
 free:
 	spin_unlock_irq(&hugetlb_lock);
+	for_each_node_mask(nid, allocated_mask)
+		do_zero_free_notify(h, nid);
 
 	/*
 	 * Free unnecessary surplus pages to the buddy allocator.
@@ -2841,6 +2853,7 @@ static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,
 	 * Folio has been replaced, we can safely free the old one.
 	 */
 	spin_unlock_irq(&hugetlb_lock);
+	do_zero_free_notify(h, folio_nid(new_folio));
 	update_and_free_hugetlb_folio(h, old_folio, false);
 }
 
diff --git a/mm/hugetlb_internal.h b/mm/hugetlb_internal.h
index 1d2f870deccf..9c60661283c7 100644
--- a/mm/hugetlb_internal.h
+++ b/mm/hugetlb_internal.h
@@ -106,6 +106,12 @@ extern ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
 					   struct hstate *h, int nid,
 					   unsigned long count, size_t len);
 
+#ifdef CONFIG_NUMA
+extern void do_zero_free_notify(struct hstate *h, int nid);
+#else
+static inline void do_zero_free_notify(struct hstate *h, int nid) {}
+#endif
+
 extern void hugetlb_sysfs_init(void) __init;
 
 #ifdef CONFIG_SYSCTL
diff --git a/mm/hugetlb_sysfs.c b/mm/hugetlb_sysfs.c
index 08ad39d3e022..c063237249f6 100644
--- a/mm/hugetlb_sysfs.c
+++ b/mm/hugetlb_sysfs.c
@@ -340,6 +340,7 @@ static bool hugetlb_sysfs_initialized __ro_after_init;
 
 struct node_hstate_item {
 	struct kobject *hstate_kobj;
+	struct work_struct notify_work;
 };
 
 /*
@@ -355,6 +356,21 @@ struct node_hstate {
 };
 static struct node_hstate node_hstates[MAX_NUMNODES];
 
+static void pre_zero_notify_fun(struct work_struct *work)
+{
+	struct node_hstate_item *item =
+		container_of(work, struct node_hstate_item, notify_work);
+
+	sysfs_notify(item->hstate_kobj, NULL, "zeroable_hugepages");
+}
+
+void do_zero_free_notify(struct hstate *h, int nid)
+{
+	struct node_hstate *nhs = &node_hstates[nid];
+
+	schedule_work(&nhs->items[hstate_index(h)].notify_work);
+}
+
 static ssize_t zeroable_hugepages_show(struct kobject *kobj,
 		struct kobj_attribute *attr, char *buf)
 {
@@ -564,8 +580,11 @@ void hugetlb_register_node(struct node *node)
 		return;
 
 	for_each_hstate(h) {
+		int index = hstate_index(h);
+		struct node_hstate_item *item = &nhs->items[index];
+
 		err = hugetlb_sysfs_add_hstate(h, nhs->hugepages_kobj,
-					       &nhs->items[hstate_index(h)].hstate_kobj,
+					       &item->hstate_kobj,
 					       &per_node_hstate_attr_group);
 		if (err) {
 			pr_err("HugeTLB: Unable to add hstate %s for node %d\n",
@@ -573,6 +592,7 @@ void hugetlb_register_node(struct node *node)
 			hugetlb_unregister_node(node);
 			break;
 		}
+		INIT_WORK(&item->notify_work, pre_zero_notify_fun);
 	}
 }
 
-- 
2.20.1