From: "Pankaj Raghav (Samsung)"
To: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Vlastimil Babka, Zi Yan, Mike Rapoport, Dave Hansen, Michal Hocko, David Hildenbrand, Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache, Dev Jain, Liam R. Howlett, Jens Axboe
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, willy@infradead.org, Ritesh Harjani, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org, Darrick J. Wong, mcgrof@kernel.org, gost.dev@samsung.com, kernel@pankajraghav.com, hch@lst.de, Pankaj Raghav
Subject: [PATCH v3 1/5] mm: rename huge_zero_page to huge_zero_folio
Date: Mon, 11 Aug 2025 10:41:09 +0200
Message-ID: <20250811084113.647267-2-kernel@pankajraghav.com>
In-Reply-To: <20250811084113.647267-1-kernel@pankajraghav.com>
References: <20250811084113.647267-1-kernel@pankajraghav.com>

From: Pankaj Raghav

As the transition already happened from exposing huge_zero_page to
huge_zero_folio, change the names of the shrinker and the other helper
functions to reflect that.

No functional changes.

Reviewed-by: Lorenzo Stoakes
Reviewed-by: Zi Yan
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Signed-off-by: Pankaj Raghav
---
 mm/huge_memory.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4ea5a2ce7d..6625514f622b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -207,7 +207,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	return orders;
 }
 
-static bool get_huge_zero_page(void)
+static bool get_huge_zero_folio(void)
 {
 	struct folio *zero_folio;
 retry:
@@ -237,7 +237,7 @@ static bool get_huge_zero_page(void)
 	return true;
 }
 
-static void put_huge_zero_page(void)
+static void put_huge_zero_folio(void)
 {
 	/*
 	 * Counter should never go to zero here. Only shrinker can put
@@ -251,11 +251,11 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
 		return READ_ONCE(huge_zero_folio);
 
-	if (!get_huge_zero_page())
+	if (!get_huge_zero_folio())
 		return NULL;
 
 	if (test_and_set_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
-		put_huge_zero_page();
+		put_huge_zero_folio();
 
 	return READ_ONCE(huge_zero_folio);
 }
@@ -263,18 +263,18 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 void mm_put_huge_zero_folio(struct mm_struct *mm)
 {
 	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
-		put_huge_zero_page();
+		put_huge_zero_folio();
 }
 
-static unsigned long shrink_huge_zero_page_count(struct shrinker *shrink,
-					struct shrink_control *sc)
+static unsigned long shrink_huge_zero_folio_count(struct shrinker *shrink,
+					struct shrink_control *sc)
 {
 	/* we can free zero page only if last reference remains */
 	return atomic_read(&huge_zero_refcount) == 1 ? HPAGE_PMD_NR : 0;
 }
 
-static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
-					struct shrink_control *sc)
+static unsigned long shrink_huge_zero_folio_scan(struct shrinker *shrink,
+					struct shrink_control *sc)
 {
 	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
 		struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
@@ -287,7 +287,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
 	return 0;
 }
 
-static struct shrinker *huge_zero_page_shrinker;
+static struct shrinker *huge_zero_folio_shrinker;
 
 #ifdef CONFIG_SYSFS
 static ssize_t enabled_show(struct kobject *kobj,
@@ -849,8 +849,8 @@ static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
 
 static int __init thp_shrinker_init(void)
 {
-	huge_zero_page_shrinker = shrinker_alloc(0, "thp-zero");
-	if (!huge_zero_page_shrinker)
+	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
+	if (!huge_zero_folio_shrinker)
 		return -ENOMEM;
 
 	deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
@@ -858,13 +858,13 @@ static int __init thp_shrinker_init(void)
 					 SHRINKER_NONSLAB,
 					 "thp-deferred_split");
 	if (!deferred_split_shrinker) {
-		shrinker_free(huge_zero_page_shrinker);
+		shrinker_free(huge_zero_folio_shrinker);
 		return -ENOMEM;
 	}
 
-	huge_zero_page_shrinker->count_objects = shrink_huge_zero_page_count;
-	huge_zero_page_shrinker->scan_objects = shrink_huge_zero_page_scan;
-	shrinker_register(huge_zero_page_shrinker);
+	huge_zero_folio_shrinker->count_objects = shrink_huge_zero_folio_count;
+	huge_zero_folio_shrinker->scan_objects = shrink_huge_zero_folio_scan;
+	shrinker_register(huge_zero_folio_shrinker);
 
 	deferred_split_shrinker->count_objects = deferred_split_count;
 	deferred_split_shrinker->scan_objects = deferred_split_scan;
@@ -875,7 +875,7 @@ static int __init thp_shrinker_init(void)
 
 static void __init thp_shrinker_exit(void)
 {
-	shrinker_free(huge_zero_page_shrinker);
+	shrinker_free(huge_zero_folio_shrinker);
 	shrinker_free(deferred_split_shrinker);
 }
 
-- 
2.49.0
From: "Pankaj Raghav (Samsung)"
Subject: [PATCH v3 2/5] mm: rename MMF_HUGE_ZERO_PAGE to MMF_HUGE_ZERO_FOLIO
Date: Mon, 11 Aug 2025 10:41:10 +0200
Message-ID: <20250811084113.647267-3-kernel@pankajraghav.com>
In-Reply-To: <20250811084113.647267-1-kernel@pankajraghav.com>
References: <20250811084113.647267-1-kernel@pankajraghav.com>

From: Pankaj Raghav

As all the helper functions have been renamed from *_page to *_folio,
rename the MM flag from MMF_HUGE_ZERO_PAGE to MMF_HUGE_ZERO_FOLIO.

No functional changes.

Reviewed-by: Lorenzo Stoakes
Reviewed-by: Zi Yan
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Signed-off-by: Pankaj Raghav
---
 include/linux/mm_types.h | 2 +-
 mm/huge_memory.c         | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3ed763e7ec6f..cf94df4955c7 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1758,7 +1758,7 @@ enum {
 #define MMF_RECALC_UPROBES	20	/* MMF_HAS_UPROBES can be wrong */
 #define MMF_OOM_SKIP		21	/* mm is of no interest for the OOM killer */
 #define MMF_UNSTABLE		22	/* mm is unstable for copy_from_user */
-#define MMF_HUGE_ZERO_PAGE	23	/* mm has ever used the global huge zero page */
+#define MMF_HUGE_ZERO_FOLIO	23	/* mm has ever used the global huge zero folio */
 #define MMF_DISABLE_THP		24	/* disable THP for all VMAs */
 #define MMF_DISABLE_THP_MASK	(1 << MMF_DISABLE_THP)
 #define MMF_OOM_REAP_QUEUED	25	/* mm was queued for oom_reaper */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6625514f622b..ff06dee213eb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -248,13 +248,13 @@ static void put_huge_zero_folio(void)
 
 struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 {
-	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
+	if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
 		return READ_ONCE(huge_zero_folio);
 
 	if (!get_huge_zero_folio())
 		return NULL;
 
-	if (test_and_set_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
+	if (test_and_set_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
 		put_huge_zero_folio();
 
 	return READ_ONCE(huge_zero_folio);
@@ -262,7 +262,7 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 
 void mm_put_huge_zero_folio(struct mm_struct *mm)
 {
-	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
+	if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
 		put_huge_zero_folio();
 }
 
-- 
2.49.0
From: "Pankaj Raghav (Samsung)"
Subject: [PATCH v3 3/5] mm: add persistent huge zero folio
Date: Mon, 11 Aug 2025 10:41:11 +0200
Message-ID: <20250811084113.647267-4-kernel@pankajraghav.com>
In-Reply-To: <20250811084113.647267-1-kernel@pankajraghav.com>
References: <20250811084113.647267-1-kernel@pankajraghav.com>

From: Pankaj Raghav

Many places in the kernel need to zero out larger chunks, but the
maximum segment that can be zeroed out at a time by ZERO_PAGE is
limited by PAGE_SIZE.

This is especially annoying in block devices and filesystems, where
multiple ZERO_PAGEs are attached to the bio in different bvecs. With
multipage bvec support in the block layer, it is much more efficient to
send out a larger zero page as part of a single bvec. This concern was
raised during the review of adding Large Block Size support to
XFS[1][2].

Usually huge_zero_folio is allocated on demand, and it is deallocated
by the shrinker once no users are left. At the moment, the
huge_zero_folio refcount is tied to the lifetime of the process that
created it. This does not work for the bio layer, where completions can
be asynchronous and the process that created the huge_zero_folio might
no longer be alive. One of the main points that came up during the
discussion was to have something bigger than ZERO_PAGE as a drop-in
replacement.

Add a config option PERSISTENT_HUGE_ZERO_FOLIO that allocates the huge
zero folio during early init and never frees the memory, with the
shrinker disabled. This makes it possible to use huge_zero_folio
without passing any mm struct, and does not tie the lifetime of the
zero folio to anything, making it a drop-in replacement for ZERO_PAGE.

If the PERSISTENT_HUGE_ZERO_FOLIO config option is enabled,
mm_get_huge_zero_folio() will simply return the pre-allocated folio
instead of dynamically allocating a new PMD-sized page.

Use this option carefully on resource-constrained systems, as it pins
one full PMD-sized page for zeroing purposes.
[1] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
[2] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/

Reviewed-by: Lorenzo Stoakes
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Pankaj Raghav
---
 include/linux/huge_mm.h | 16 ++++++++++++++++
 mm/Kconfig              | 16 ++++++++++++++++
 mm/huge_memory.c        | 40 ++++++++++++++++++++++++++++++----------
 3 files changed, 62 insertions(+), 10 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7748489fde1b..bd547857c6c1 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -495,6 +495,17 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
 struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
 void mm_put_huge_zero_folio(struct mm_struct *mm);
 
+static inline struct folio *get_persistent_huge_zero_folio(void)
+{
+	if (!IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
+		return NULL;
+
+	if (unlikely(!huge_zero_folio))
+		return NULL;
+
+	return huge_zero_folio;
+}
+
 static inline bool thp_migration_supported(void)
 {
 	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
@@ -685,6 +696,11 @@ static inline int change_huge_pud(struct mmu_gather *tlb,
 {
 	return 0;
 }
+
+static inline struct folio *get_persistent_huge_zero_folio(void)
+{
+	return NULL;
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline int split_folio_to_list_to_order(struct folio *folio,
diff --git a/mm/Kconfig b/mm/Kconfig
index e443fe8cd6cf..d81726f112b9 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -823,6 +823,22 @@ config ARCH_WANT_GENERAL_HUGETLB
 config ARCH_WANTS_THP_SWAP
 	def_bool n
 
+config PERSISTENT_HUGE_ZERO_FOLIO
+	bool "Allocate a PMD sized folio for zeroing"
+	depends on TRANSPARENT_HUGEPAGE
+	help
+	  Enable this option to reduce the runtime refcounting overhead
+	  of the huge zero folio and expand the places in the kernel
+	  that can use huge zero folios. For instance, block I/O benefits
+	  from access to large folios for zeroing memory.
+
+	  With this option enabled, the huge zero folio is allocated
+	  once and never freed. One full huge page's worth of memory shall
+	  be used.
+
+	  Say Y if your system has lots of memory. Say N if you are
+	  memory constrained.
+
 config MM_ID
 	def_bool n
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ff06dee213eb..5c00e59ca5da 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -248,6 +248,9 @@ static void put_huge_zero_folio(void)
 
 struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 {
+	if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
+		return huge_zero_folio;
+
 	if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
 		return READ_ONCE(huge_zero_folio);
 
@@ -262,6 +265,9 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 
 void mm_put_huge_zero_folio(struct mm_struct *mm)
 {
+	if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
+		return;
+
 	if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
 		put_huge_zero_folio();
 }
@@ -849,16 +855,34 @@ static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
 
 static int __init thp_shrinker_init(void)
 {
-	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
-	if (!huge_zero_folio_shrinker)
-		return -ENOMEM;
-
 	deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
 						 SHRINKER_MEMCG_AWARE |
 						 SHRINKER_NONSLAB,
 						 "thp-deferred_split");
-	if (!deferred_split_shrinker) {
-		shrinker_free(huge_zero_folio_shrinker);
+	if (!deferred_split_shrinker)
+		return -ENOMEM;
+
+	deferred_split_shrinker->count_objects = deferred_split_count;
+	deferred_split_shrinker->scan_objects = deferred_split_scan;
+	shrinker_register(deferred_split_shrinker);
+
+	if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO)) {
+		/*
+		 * Bump the reference of the huge_zero_folio and do not
+		 * initialize the shrinker.
+		 *
+		 * huge_zero_folio will always be NULL on failure. We assume
+		 * that get_huge_zero_folio() will most likely not fail as
+		 * thp_shrinker_init() is invoked early on during boot.
+		 */
+		if (!get_huge_zero_folio())
+			pr_warn("Allocating persistent huge zero folio failed\n");
+		return 0;
+	}
+
+	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
+	if (!huge_zero_folio_shrinker) {
+		shrinker_free(deferred_split_shrinker);
 		return -ENOMEM;
 	}
 
@@ -866,10 +890,6 @@ static int __init thp_shrinker_init(void)
 	huge_zero_folio_shrinker->scan_objects = shrink_huge_zero_folio_scan;
 	shrinker_register(huge_zero_folio_shrinker);
 
-	deferred_split_shrinker->count_objects = deferred_split_count;
-	deferred_split_shrinker->scan_objects = deferred_split_scan;
-	shrinker_register(deferred_split_shrinker);
-
 	return 0;
 }
 
-- 
2.49.0
From: "Pankaj Raghav (Samsung)"
Subject: [PATCH v3 4/5] mm: add largest_zero_folio() routine
Date: Mon, 11 Aug 2025 10:41:12 +0200
Message-ID: <20250811084113.647267-5-kernel@pankajraghav.com>
In-Reply-To: <20250811084113.647267-1-kernel@pankajraghav.com>
References: <20250811084113.647267-1-kernel@pankajraghav.com>

From: Pankaj Raghav

The callers of mm_get_huge_zero_folio() have access to an mm struct,
and the lifetime of the huge_zero_folio is tied to the lifetime of that
mm struct. largest_zero_folio() gives access to the huge_zero_folio
when the PERSISTENT_HUGE_ZERO_FOLIO config option is enabled, for
callers that do not want to tie the lifetime to an mm struct. This is
very useful for the filesystem and block layers, where request
completions can be asynchronous and there is no guarantee on the mm
struct lifetime.

This function returns a ZERO_PAGE folio if PERSISTENT_HUGE_ZERO_FOLIO
is disabled, or if the huge_zero_folio allocation failed during early
init.
Reviewed-by: Lorenzo Stoakes
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Pankaj Raghav
---
 include/linux/huge_mm.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index bd547857c6c1..14d424830fa8 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -714,4 +714,26 @@ static inline int split_folio_to_order(struct folio *folio, int new_order)
 	return split_folio_to_list_to_order(folio, NULL, new_order);
 }
 
+/**
+ * largest_zero_folio - Get the largest zero size folio available
+ *
+ * This function shall be used when mm_get_huge_zero_folio() cannot be
+ * used as there is no appropriate mm lifetime to tie the huge zero folio
+ * from the caller.
+ *
+ * Deduce the size of the folio with folio_size() instead of assuming the
+ * folio size.
+ *
+ * Return: pointer to a PMD-sized zero folio if CONFIG_PERSISTENT_HUGE_ZERO_FOLIO
+ * is enabled, or a single-page-sized zero folio otherwise.
+ */
+static inline struct folio *largest_zero_folio(void)
+{
+	struct folio *folio = get_persistent_huge_zero_folio();
+
+	if (folio)
+		return folio;
+
+	return page_folio(ZERO_PAGE(0));
+}
 #endif /* _LINUX_HUGE_MM_H */
-- 
2.49.0
From: "Pankaj Raghav (Samsung)"
Subject: [PATCH v3 5/5] block: use largest_zero_folio in __blkdev_issue_zero_pages()
Date: Mon, 11 Aug 2025 10:41:13 +0200
Message-ID: <20250811084113.647267-6-kernel@pankajraghav.com>
In-Reply-To: <20250811084113.647267-1-kernel@pankajraghav.com>
References: <20250811084113.647267-1-kernel@pankajraghav.com>

From: Pankaj Raghav

Use largest_zero_folio() in __blkdev_issue_zero_pages(). On systems
with CONFIG_PERSISTENT_HUGE_ZERO_FOLIO enabled, we end up sending
larger bvecs instead of multiple small ones.

We noticed a 4% increase in performance on a commercial NVMe SSD that
does not support OP_WRITE_ZEROES. The device's MDTS was 128K. The
performance gains might be bigger if the device supports a bigger MDTS.
Acked-by: Lorenzo Stoakes
Signed-off-by: Pankaj Raghav
Acked-by: David Hildenbrand
---
 block/blk-lib.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 4c9f20a689f7..3030a772d3aa 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -196,6 +196,8 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
 		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
 		struct bio **biop, unsigned int flags)
 {
+	struct folio *zero_folio = largest_zero_folio();
+
 	while (nr_sects) {
 		unsigned int nr_vecs = __blkdev_sectors_to_bio_pages(nr_sects);
 		struct bio *bio;
@@ -208,15 +210,14 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
 			break;
 
 		do {
-			unsigned int len, added;
+			unsigned int len;
 
-			len = min_t(sector_t,
-				PAGE_SIZE, nr_sects << SECTOR_SHIFT);
-			added = bio_add_page(bio, ZERO_PAGE(0), len, 0);
-			if (added < len)
+			len = min_t(sector_t, folio_size(zero_folio),
+				    nr_sects << SECTOR_SHIFT);
+			if (!bio_add_folio(bio, zero_folio, len, 0))
 				break;
-			nr_sects -= added >> SECTOR_SHIFT;
-			sector += added >> SECTOR_SHIFT;
+			nr_sects -= len >> SECTOR_SHIFT;
+			sector += len >> SECTOR_SHIFT;
 		} while (nr_sects);
 
 		*biop = bio_chain_and_submit(*biop, bio);
-- 
2.49.0