From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
To: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
    Ingo Molnar, H. Peter Anvin, Vlastimil Babka, Zi Yan, Mike Rapoport,
    Dave Hansen, Michal Hocko, David Hildenbrand, Lorenzo Stoakes,
    Andrew Morton, Thomas Gleixner, Nico Pache, Dev Jain,
    Liam R. Howlett, Jens Axboe
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, willy@infradead.org,
    x86@kernel.org, linux-block@vger.kernel.org, Ritesh Harjani,
    linux-fsdevel@vger.kernel.org, Darrick J. Wong, mcgrof@kernel.org,
    gost.dev@samsung.com, kernel@pankajraghav.com, hch@lst.de,
    Pankaj Raghav
Subject: [PATCH 1/5] mm: rename huge_zero_page to huge_zero_folio
Date: Mon, 4 Aug 2025 14:13:52 +0200
Message-ID: <20250804121356.572917-2-kernel@pankajraghav.com>
In-Reply-To: <20250804121356.572917-1-kernel@pankajraghav.com>
References: <20250804121356.572917-1-kernel@pankajraghav.com>

From: Pankaj Raghav

As we have already moved from exposing huge_zero_page to
huge_zero_folio, rename the shrinker and the remaining helper
functions to reflect that.

No functional changes.

Reviewed-by: Lorenzo Stoakes
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Signed-off-by: Pankaj Raghav
---
 mm/huge_memory.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4ea5a2ce7d..6625514f622b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -207,7 +207,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	return orders;
 }
 
-static bool get_huge_zero_page(void)
+static bool get_huge_zero_folio(void)
 {
 	struct folio *zero_folio;
 retry:
@@ -237,7 +237,7 @@ static bool get_huge_zero_page(void)
 	return true;
 }
 
-static void put_huge_zero_page(void)
+static void put_huge_zero_folio(void)
 {
 	/*
 	 * Counter should never go to zero here. Only shrinker can put
@@ -251,11 +251,11 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
 		return READ_ONCE(huge_zero_folio);
 
-	if (!get_huge_zero_page())
+	if (!get_huge_zero_folio())
 		return NULL;
 
 	if (test_and_set_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
-		put_huge_zero_page();
+		put_huge_zero_folio();
 
 	return READ_ONCE(huge_zero_folio);
 }
@@ -263,18 +263,18 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 void mm_put_huge_zero_folio(struct mm_struct *mm)
 {
 	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
-		put_huge_zero_page();
+		put_huge_zero_folio();
 }
 
-static unsigned long shrink_huge_zero_page_count(struct shrinker *shrink,
-					struct shrink_control *sc)
+static unsigned long shrink_huge_zero_folio_count(struct shrinker *shrink,
+					struct shrink_control *sc)
 {
 	/* we can free zero page only if last reference remains */
 	return atomic_read(&huge_zero_refcount) == 1 ? HPAGE_PMD_NR : 0;
 }
 
-static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
-				       struct shrink_control *sc)
+static unsigned long shrink_huge_zero_folio_scan(struct shrinker *shrink,
+				       struct shrink_control *sc)
 {
 	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
 		struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
@@ -287,7 +287,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
 	return 0;
 }
 
-static struct shrinker *huge_zero_page_shrinker;
+static struct shrinker *huge_zero_folio_shrinker;
 
 #ifdef CONFIG_SYSFS
 static ssize_t enabled_show(struct kobject *kobj,
@@ -849,8 +849,8 @@ static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
 
 static int __init thp_shrinker_init(void)
 {
-	huge_zero_page_shrinker = shrinker_alloc(0, "thp-zero");
-	if (!huge_zero_page_shrinker)
+	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
+	if (!huge_zero_folio_shrinker)
 		return -ENOMEM;
 
 	deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
@@ -858,13 +858,13 @@ static int __init thp_shrinker_init(void)
 						 SHRINKER_NONSLAB,
 						 "thp-deferred_split");
 	if (!deferred_split_shrinker) {
-		shrinker_free(huge_zero_page_shrinker);
+		shrinker_free(huge_zero_folio_shrinker);
 		return -ENOMEM;
 	}
 
-	huge_zero_page_shrinker->count_objects = shrink_huge_zero_page_count;
-	huge_zero_page_shrinker->scan_objects = shrink_huge_zero_page_scan;
-	shrinker_register(huge_zero_page_shrinker);
+	huge_zero_folio_shrinker->count_objects = shrink_huge_zero_folio_count;
+	huge_zero_folio_shrinker->scan_objects = shrink_huge_zero_folio_scan;
+	shrinker_register(huge_zero_folio_shrinker);
 
 	deferred_split_shrinker->count_objects = deferred_split_count;
 	deferred_split_shrinker->scan_objects = deferred_split_scan;
@@ -875,7 +875,7 @@ static int __init thp_shrinker_init(void)
 
 static void __init thp_shrinker_exit(void)
 {
-	shrinker_free(huge_zero_page_shrinker);
+	shrinker_free(huge_zero_folio_shrinker);
 	shrinker_free(deferred_split_shrinker);
 }
 
-- 
2.49.0
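For readers following the rename: the caller-facing contract is
unchanged. Below is a minimal sketch of how the mm-scoped API is
intended to be used; fault_path()/exit_path() are hypothetical call
sites, not part of this patch, and the MMF flag keeps its old name
until the next patch.

/*
 * Sketch only, using the naming introduced above: at most one
 * reference is taken per mm, and it is dropped once when the mm
 * goes away.
 */
static vm_fault_t fault_path(struct mm_struct *mm)
{
	struct folio *folio = mm_get_huge_zero_folio(mm);

	if (!folio)
		return VM_FAULT_FALLBACK;	/* allocation failed */
	/* ... map the zero folio into the page tables ... */
	return 0;
}

static void exit_path(struct mm_struct *mm)
{
	/* Drops this mm's reference iff MMF_HUGE_ZERO_PAGE was set. */
	mm_put_huge_zero_folio(mm);
}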
From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
Subject: [PATCH 2/5] mm: rename MMF_HUGE_ZERO_PAGE to MMF_HUGE_ZERO_FOLIO
Date: Mon, 4 Aug 2025 14:13:53 +0200
Message-ID: <20250804121356.572917-3-kernel@pankajraghav.com>
In-Reply-To: <20250804121356.572917-1-kernel@pankajraghav.com>
References: <20250804121356.572917-1-kernel@pankajraghav.com>

From: Pankaj Raghav

As all the helper functions have been renamed from *_page to *_folio,
rename the MM flag from MMF_HUGE_ZERO_PAGE to MMF_HUGE_ZERO_FOLIO as
well.

No functional changes.
Suggested-by: David Hildenbrand
Signed-off-by: Pankaj Raghav
---
 include/linux/mm_types.h | 2 +-
 mm/huge_memory.c         | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 1ec273b06691..2ad5eaddfcce 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1753,7 +1753,7 @@ enum {
 #define MMF_RECALC_UPROBES	20	/* MMF_HAS_UPROBES can be wrong */
 #define MMF_OOM_SKIP		21	/* mm is of no interest for the OOM killer */
 #define MMF_UNSTABLE		22	/* mm is unstable for copy_from_user */
-#define MMF_HUGE_ZERO_PAGE	23	/* mm has ever used the global huge zero page */
+#define MMF_HUGE_ZERO_FOLIO	23	/* mm has ever used the global huge zero folio */
 #define MMF_DISABLE_THP		24	/* disable THP for all VMAs */
 #define MMF_DISABLE_THP_MASK	(1 << MMF_DISABLE_THP)
 #define MMF_OOM_REAP_QUEUED	25	/* mm was queued for oom_reaper */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6625514f622b..ff06dee213eb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -248,13 +248,13 @@ static void put_huge_zero_folio(void)
 
 struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 {
-	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
+	if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
 		return READ_ONCE(huge_zero_folio);
 
 	if (!get_huge_zero_folio())
 		return NULL;
 
-	if (test_and_set_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
+	if (test_and_set_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
 		put_huge_zero_folio();
 
 	return READ_ONCE(huge_zero_folio);
@@ -262,7 +262,7 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 
 void mm_put_huge_zero_folio(struct mm_struct *mm)
 {
-	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
+	if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
 		put_huge_zero_folio();
 }
 
-- 
2.49.0
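It is worth spelling out why mm_get_huge_zero_folio() pairs
test_and_set_bit() with a conditional put: two threads of the same mm
can race past the initial test_bit() and both take a global reference.
A sketch of the invariant, using the same names as the diff above
(illustration only):

/*
 * test_and_set_bit() is atomic, so exactly one racing thread observes
 * the flag as previously clear and keeps its reference; every loser
 * sees the flag already set and must drop the extra reference it took.
 */
if (test_and_set_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
	put_huge_zero_folio();	/* lost the race: undo our extra ref */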
From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
Subject: [PATCH 3/5] mm: add static huge zero folio
Date: Mon, 4 Aug 2025 14:13:54 +0200
Message-ID: <20250804121356.572917-4-kernel@pankajraghav.com>
In-Reply-To: <20250804121356.572917-1-kernel@pankajraghav.com>
References: <20250804121356.572917-1-kernel@pankajraghav.com>

From: Pankaj Raghav

There are many places in the kernel where we need to zero out larger
chunks, but the maximum segment we can zero out at a time with
ZERO_PAGE is limited to PAGE_SIZE. This is especially annoying in
block devices and filesystems, where we attach multiple ZERO_PAGEs to
the bio in different bvecs. With multipage bvec support in the block
layer, it is much more efficient to send out larger zero pages as part
of a single bvec. This concern was raised during the review of adding
LBS support to XFS[1][2].

Usually the huge_zero_folio is allocated on demand and deallocated by
the shrinker once no users are left. At the moment, the refcount of
the huge_zero_folio infrastructure is tied to the lifetime of the
process that created it. This might not work for the bio layer, as
completions can be asynchronous and the process that created the
huge_zero_folio might no longer be alive. One of the main points
raised during the discussion was to have something bigger than the
zero page as a drop-in replacement.

Add a config option, STATIC_HUGE_ZERO_FOLIO, that allocates the huge
zero folio on first request (if not already allocated) and turns it
static so that it can never be freed. This allows the huge_zero_folio
to be used without passing any mm struct and does not tie the lifetime
of the zero folio to anything, making it a drop-in replacement for
ZERO_PAGE.
If the STATIC_HUGE_ZERO_FOLIO config option is enabled,
mm_get_huge_zero_folio() will simply return this folio instead of
dynamically allocating a new PMD page.

This option can waste memory on small systems or on systems with a
64k base page size, so make it opt-in and add a per-architecture
option so that the feature is not enabled on systems with larger base
page sizes. Only x86 is enabled as part of this series; other
architectures shall be enabled as a follow-up.

[1] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
[2] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/

Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Pankaj Raghav
---
 arch/x86/Kconfig        |  1 +
 include/linux/huge_mm.h | 18 ++++++++++++++++
 mm/Kconfig              | 21 +++++++++++++++++++
 mm/huge_memory.c        | 46 ++++++++++++++++++++++++++++++++++++++++-
 4 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0ce86e14ab5e..8e2aa1887309 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -153,6 +153,7 @@ config X86
 	select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP	if X86_64
 	select ARCH_WANT_HUGETLB_VMEMMAP_PREINIT	if X86_64
 	select ARCH_WANTS_THP_SWAP			if X86_64
+	select ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO	if X86_64
 	select ARCH_HAS_PARANOID_L1D_FLUSH
 	select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
 	select BUILDTIME_TABLE_SORT
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7748489fde1b..78ebceb61d0e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -476,6 +476,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
 extern struct folio *huge_zero_folio;
 extern unsigned long huge_zero_pfn;
+extern atomic_t huge_zero_folio_is_static;
 
 static inline bool is_huge_zero_folio(const struct folio *folio)
 {
@@ -494,6 +495,18 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
 
 struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
 void mm_put_huge_zero_folio(struct mm_struct *mm);
+struct folio *__get_static_huge_zero_folio(void);
+
+static inline struct folio *get_static_huge_zero_folio(void)
+{
+	if (!IS_ENABLED(CONFIG_STATIC_HUGE_ZERO_FOLIO))
+		return NULL;
+
+	if (likely(atomic_read(&huge_zero_folio_is_static)))
+		return huge_zero_folio;
+
+	return __get_static_huge_zero_folio();
+}
 
 static inline bool thp_migration_supported(void)
 {
@@ -685,6 +698,11 @@ static inline int change_huge_pud(struct mmu_gather *tlb,
 {
 	return 0;
 }
+
+static inline struct folio *get_static_huge_zero_folio(void)
+{
+	return NULL;
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline int split_folio_to_list_to_order(struct folio *folio,
diff --git a/mm/Kconfig b/mm/Kconfig
index e443fe8cd6cf..366a6d2d771e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -823,6 +823,27 @@ config ARCH_WANT_GENERAL_HUGETLB
 config ARCH_WANTS_THP_SWAP
 	def_bool n
 
+config ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO
+	def_bool n
+
+config STATIC_HUGE_ZERO_FOLIO
+	bool "Allocate a PMD sized folio for zeroing"
+	depends on ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO && TRANSPARENT_HUGEPAGE
+	help
+	  Without this config enabled, the huge zero folio is allocated on
+	  demand and freed under memory pressure once no longer in use.
+	  To detect remaining users reliably, references to the huge zero folio
+	  must be tracked precisely, so it is commonly only available for mapping
+	  it into user page tables.
+
+	  With this config enabled, the huge zero folio can also be used
+	  for other purposes that do not implement precise reference counting:
+	  it is still allocated on demand, but never freed, allowing for more
+	  wide-spread use, for example, when performing I/O similar to the
+	  traditional shared zeropage.
+
+	  Not suitable for memory constrained systems.
+
 config MM_ID
 	def_bool n
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ff06dee213eb..e117b280b38d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -75,6 +75,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 static bool split_underused_thp = true;
 
 static atomic_t huge_zero_refcount;
+atomic_t huge_zero_folio_is_static __read_mostly;
 struct folio *huge_zero_folio __read_mostly;
 unsigned long huge_zero_pfn __read_mostly = ~0UL;
 unsigned long huge_anon_orders_always __read_mostly;
@@ -266,6 +267,45 @@ void mm_put_huge_zero_folio(struct mm_struct *mm)
 		put_huge_zero_folio();
 }
 
+#ifdef CONFIG_STATIC_HUGE_ZERO_FOLIO
+
+struct folio *__get_static_huge_zero_folio(void)
+{
+	static unsigned long fail_count_clear_timer;
+	static atomic_t huge_zero_static_fail_count __read_mostly;
+
+	if (unlikely(!slab_is_available()))
+		return NULL;
+
+	/*
+	 * If we failed to allocate a huge zero folio, just refrain from
+	 * trying for one minute before retrying to get a reference again.
+	 */
+	if (atomic_read(&huge_zero_static_fail_count) > 1) {
+		if (time_before(jiffies, fail_count_clear_timer))
+			return NULL;
+		atomic_set(&huge_zero_static_fail_count, 0);
+	}
+	/*
+	 * Our raised reference will prevent the shrinker from ever having
+	 * success.
+	 */
+	if (!get_huge_zero_folio()) {
+		int count = atomic_inc_return(&huge_zero_static_fail_count);
+
+		if (count > 1)
+			fail_count_clear_timer = get_jiffies_64() + 60 * HZ;
+
+		return NULL;
+	}
+
+	if (atomic_cmpxchg(&huge_zero_folio_is_static, 0, 1) != 0)
+		put_huge_zero_folio();
+
+	return huge_zero_folio;
+}
+#endif /* CONFIG_STATIC_HUGE_ZERO_FOLIO */
+
 static unsigned long shrink_huge_zero_folio_count(struct shrinker *shrink,
 					struct shrink_control *sc)
 {
@@ -277,7 +317,11 @@ static unsigned long shrink_huge_zero_folio_scan(struct shrinker *shrink,
 				       struct shrink_control *sc)
 {
 	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
-		struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
+		struct folio *zero_folio;
+
+		if (WARN_ON_ONCE(atomic_read(&huge_zero_folio_is_static)))
+			return 0;
+		zero_folio = xchg(&huge_zero_folio, NULL);
 		BUG_ON(zero_folio == NULL);
 		WRITE_ONCE(huge_zero_pfn, ~0UL);
 		folio_put(zero_folio);
-- 
2.49.0
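The practical difference from the mm-scoped API is that no matching
put is ever required once the folio has gone static. A minimal sketch
of the calling convention under this config; grab_zero_source() is a
hypothetical caller, and the next patch wraps essentially this
fallback pattern into a helper:

/*
 * Hypothetical caller, illustration only. Once the static huge zero
 * folio has been allocated it is never freed, so the returned folio
 * can safely outlive the calling process, e.g. across asynchronous
 * bio completions.
 */
static struct folio *grab_zero_source(void)
{
	struct folio *folio = get_static_huge_zero_folio();

	/* NULL if the config is off or the one-time allocation failed. */
	if (!folio)
		folio = page_folio(ZERO_PAGE(0));

	return folio;	/* no put required on any path */
}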
From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
Subject: [PATCH 4/5] mm: add largest_zero_folio() routine
Date: Mon, 4 Aug 2025 14:13:55 +0200
Message-ID: <20250804121356.572917-5-kernel@pankajraghav.com>
In-Reply-To: <20250804121356.572917-1-kernel@pankajraghav.com>
References: <20250804121356.572917-1-kernel@pankajraghav.com>

From: Pankaj Raghav

Add a largest_zero_folio() routine so that the huge_zero_folio can be
used directly when CONFIG_STATIC_HUGE_ZERO_FOLIO is enabled.

It returns the ZERO_PAGE folio if CONFIG_STATIC_HUGE_ZERO_FOLIO is
disabled or if allocating a huge_zero_folio failed.
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Pankaj Raghav
---
 include/linux/huge_mm.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 78ebceb61d0e..c44a6736704b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -716,4 +716,21 @@ static inline int split_folio_to_order(struct folio *folio, int new_order)
 	return split_folio_to_list_to_order(folio, NULL, new_order);
 }
 
+/*
+ * largest_zero_folio - Get the largest zero size folio available
+ *
+ * This function will return huge_zero_folio if CONFIG_STATIC_HUGE_ZERO_FOLIO
+ * is enabled. Otherwise, a ZERO_PAGE folio is returned.
+ *
+ * Deduce the size of the folio with folio_size instead of assuming the
+ * folio size.
+ */
+static inline struct folio *largest_zero_folio(void)
+{
+	struct folio *folio = get_static_huge_zero_folio();
+
+	if (folio)
+		return folio;
+	return page_folio(ZERO_PAGE(0));
+}
 #endif /* _LINUX_HUGE_MM_H */
-- 
2.49.0
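As the kernel-doc above says, consumers must derive the usable length
from folio_size() rather than assume PMD_SIZE, since the helper may
fall back to the single-page ZERO_PAGE folio. A small sketch;
nr_zero_segments() is a hypothetical example, not part of this series:

/*
 * Hypothetical example: how many bvec segments would zeroing 'bytes'
 * take? Zeroing 1 MiB needs a single segment with a 2M huge zero
 * folio, but 256 segments with the 4K ZERO_PAGE fallback.
 */
static unsigned int nr_zero_segments(size_t bytes)
{
	struct folio *folio = largest_zero_folio();

	return DIV_ROUND_UP(bytes, folio_size(folio));
}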
From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
Subject: [PATCH 5/5] block: use largest_zero_folio in __blkdev_issue_zero_pages()
Date: Mon, 4 Aug 2025 14:13:56 +0200
Message-ID: <20250804121356.572917-6-kernel@pankajraghav.com>
In-Reply-To: <20250804121356.572917-1-kernel@pankajraghav.com>
References: <20250804121356.572917-1-kernel@pankajraghav.com>

From: Pankaj Raghav

Use largest_zero_folio() in __blkdev_issue_zero_pages(). On systems
with CONFIG_STATIC_HUGE_ZERO_FOLIO enabled, we end up sending larger
bvecs instead of multiple small ones.

This yielded a 4% increase in performance on a commercial NVMe SSD
which does not support REQ_OP_WRITE_ZEROES and whose MDTS is 128K. The
performance gains might be bigger on devices that support a larger
MDTS.

Signed-off-by: Pankaj Raghav
---
 block/blk-lib.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 4c9f20a689f7..3030a772d3aa 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -196,6 +196,8 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
 		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
 		struct bio **biop, unsigned int flags)
 {
+	struct folio *zero_folio = largest_zero_folio();
+
 	while (nr_sects) {
 		unsigned int nr_vecs = __blkdev_sectors_to_bio_pages(nr_sects);
 		struct bio *bio;
@@ -208,15 +210,14 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
 			break;
 
 		do {
-			unsigned int len, added;
+			unsigned int len;
 
-			len = min_t(sector_t,
-				PAGE_SIZE, nr_sects << SECTOR_SHIFT);
-			added = bio_add_page(bio, ZERO_PAGE(0), len, 0);
-			if (added < len)
+			len = min_t(sector_t, folio_size(zero_folio),
+				    nr_sects << SECTOR_SHIFT);
+			if (!bio_add_folio(bio, zero_folio, len, 0))
 				break;
-			nr_sects -= added >> SECTOR_SHIFT;
-			sector += added >> SECTOR_SHIFT;
+			nr_sects -= len >> SECTOR_SHIFT;
+			sector += len >> SECTOR_SHIFT;
 		} while (nr_sects);
 
 		*biop = bio_chain_and_submit(*biop, bio);
-- 
2.49.0
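End to end, the win is visible to any caller of blkdev_issue_zeroout()
on a device without WRITE_ZEROES support, since that path falls back
to __blkdev_issue_zero_pages(). A hedged sketch of such a call site;
zero_first_mib() is made up for illustration:

/*
 * Hypothetical call site, illustration only: zero the first 1 MiB of
 * a block device. On devices without REQ_OP_WRITE_ZEROES support this
 * falls back to __blkdev_issue_zero_pages(), which now packs the
 * PMD-sized zero folio into each bvec when
 * CONFIG_STATIC_HUGE_ZERO_FOLIO is enabled.
 */
static int zero_first_mib(struct block_device *bdev)
{
	sector_t nr_sects = SZ_1M >> SECTOR_SHIFT;

	return blkdev_issue_zeroout(bdev, 0, nr_sects, GFP_KERNEL, 0);
}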