From: Pratyush Yadav
To: Pasha Tatashin, Mike Rapoport, Pratyush Yadav, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, stable@vger.kernel.org
Subject: [PATCH 1/2] mm: memfd_luo: always make all folios uptodate
Date: Mon, 23 Feb 2026 18:39:28 +0100
Message-ID: <20260223173931.2221759-2-pratyush@kernel.org>
In-Reply-To: <20260223173931.2221759-1-pratyush@kernel.org>
References: <20260223173931.2221759-1-pratyush@kernel.org>

From: "Pratyush Yadav (Google)"

When a folio is added to a shmem file via fallocate(), it is not zeroed
on allocation. This is a performance optimization, since the folio may
never end up being used at all. When the folio is used, shmem checks the
uptodate flag and, if it is absent, zeroes the folio (and sets the flag)
before returning it to the user.

With LUO, the flags of each folio are saved at preserve time. It is
possible to have a memfd with some folios fallocated but not uptodate.
For those, the uptodate flag does not get saved. The folios might later
end up being used and become uptodate. They get passed to the next
kernel via KHO correctly, since they were preserved, but without the
MEMFD_LUO_FOLIO_UPTODATE flag. This means that when the memfd is
retrieved, the folios are added to the shmem file without the uptodate
flag. They will be zeroed before first use, losing the data in those
folios.

Since we already take a big performance hit allocating, zeroing, and
pinning all folios at prepare time, take some more and zero all
non-uptodate folios too. Later, when there is a stronger need to make
prepare faster, this can be optimized.
To avoid racing with another uptodate operation, take the folio lock.

Fixes: b3749f174d68 ("mm: memfd_luo: allow preserving memfd")
Cc: stable@vger.kernel.org
Signed-off-by: Pratyush Yadav (Google)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/memfd_luo.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index a34fccc23b6a..ccbf1337f650 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -152,10 +152,31 @@ static int memfd_luo_preserve_folios(struct file *file,
 		if (err)
 			goto err_unpreserve;
 
+		folio_lock(folio);
+
 		if (folio_test_dirty(folio))
 			flags |= MEMFD_LUO_FOLIO_DIRTY;
-		if (folio_test_uptodate(folio))
-			flags |= MEMFD_LUO_FOLIO_UPTODATE;
+
+		/*
+		 * If the folio is not uptodate, it was fallocated but never
+		 * used. Saving this flag at prepare() doesn't work since it
+		 * might change later when someone uses the folio.
+		 *
+		 * Since we have taken the performance penalty of allocating,
+		 * zeroing, and pinning all the folios in the holes, take a bit
+		 * more and zero all non-uptodate folios too.
+		 *
+		 * NOTE: For someone looking to improve preserve performance,
+		 * this is a good place to look.
+		 */
+		if (!folio_test_uptodate(folio)) {
+			folio_zero_range(folio, 0, folio_size(folio));
+			flush_dcache_folio(folio);
+			folio_mark_uptodate(folio);
+		}
+		flags |= MEMFD_LUO_FOLIO_UPTODATE;
+
+		folio_unlock(folio);
 
 		pfolio->pfn = folio_pfn(folio);
 		pfolio->flags = flags;
-- 
2.53.0.371.g1d285c8824-goog
From: Pratyush Yadav
To: Pasha Tatashin, Mike Rapoport, Pratyush Yadav, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, stable@vger.kernel.org
Subject: [PATCH 2/2] mm: memfd_luo: always dirty all folios
Date: Mon, 23 Feb 2026 18:39:29 +0100
Message-ID: <20260223173931.2221759-3-pratyush@kernel.org>
In-Reply-To: <20260223173931.2221759-1-pratyush@kernel.org>
References: <20260223173931.2221759-1-pratyush@kernel.org>

From: "Pratyush Yadav (Google)"

A dirty folio is one that has been written to; a clean folio is its
opposite. Since a clean folio holds no user data, it can be freed under
memory pressure.

memfd preservation with LUO saves the dirty flag at preserve(). This is
problematic: the folio might get dirtied later. Saving it at freeze()
doesn't work either, since the dirty bit from the PTE is normally synced
at unmap, and there might still be mappings of the file at freeze().

To see why this is a problem, say a folio is clean at preserve but gets
dirtied later. The serialized state will mark it as clean. After
retrieve, the next kernel will see the folio as clean and might reclaim
it under memory pressure, losing user data.

Mark all folios of the file as dirty, and always set the
MEMFD_LUO_FOLIO_DIRTY flag.
This comes with the side effect of making all clean folios
un-reclaimable. That is a cost that has to be paid by participants of
live update. Preserving a lot of clean folios is not expected to be a
common use case anyway.

Since the value of pfolio->flags is now a constant, drop the flags
variable and set the field directly.

Fixes: b3749f174d68 ("mm: memfd_luo: allow preserving memfd")
Cc: stable@vger.kernel.org
Signed-off-by: Pratyush Yadav (Google)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/memfd_luo.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index ccbf1337f650..9eac02d06b5a 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -146,7 +146,6 @@ static int memfd_luo_preserve_folios(struct file *file,
 	for (i = 0; i < nr_folios; i++) {
 		struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
 		struct folio *folio = folios[i];
-		unsigned int flags = 0;
 
 		err = kho_preserve_folio(folio);
 		if (err)
@@ -154,8 +153,26 @@ static int memfd_luo_preserve_folios(struct file *file,
 
 		folio_lock(folio);
 
-		if (folio_test_dirty(folio))
-			flags |= MEMFD_LUO_FOLIO_DIRTY;
+		/*
+		 * A dirty folio is one which has been written to. A clean folio
+		 * is its opposite. Since a clean folio does not carry user
+		 * data, it can be freed by page reclaim under memory pressure.
+		 *
+		 * Saving the dirty flag at prepare() time doesn't work since it
+		 * can change later. Saving it at freeze() also won't work
+		 * because the dirty bit is normally synced at unmap and there
+		 * might still be a mapping of the file at freeze().
+		 *
+		 * To see why this is a problem, say a folio is clean at
+		 * preserve, but gets dirtied later. The pfolio flags will mark
+		 * it as clean. After retrieve, the next kernel might try to
+		 * reclaim this folio under memory pressure, losing user data.
+		 *
+		 * Unconditionally mark it dirty to avoid this problem. This
+		 * comes at the cost of making clean folios un-reclaimable after
+		 * live update.
+		 */
+		folio_mark_dirty(folio);
 
 		/*
 		 * If the folio is not uptodate, it was fallocated but never
@@ -174,12 +191,11 @@ static int memfd_luo_preserve_folios(struct file *file,
 			flush_dcache_folio(folio);
 			folio_mark_uptodate(folio);
 		}
-		flags |= MEMFD_LUO_FOLIO_UPTODATE;
 
 		folio_unlock(folio);
 
 		pfolio->pfn = folio_pfn(folio);
-		pfolio->flags = flags;
+		pfolio->flags = MEMFD_LUO_FOLIO_DIRTY | MEMFD_LUO_FOLIO_UPTODATE;
 		pfolio->index = folio->index;
 	}
 
-- 
2.53.0.371.g1d285c8824-goog
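Taken together, the two fixes leave the body of the per-folio loop in memfd_luo_preserve_folios() looking roughly as below. This is a sketch reconstructed from the diffs in this series, with the long comments, error paths, and the surrounding function elided; it is not a verbatim copy of the resulting file:

```c
for (i = 0; i < nr_folios; i++) {
	struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
	struct folio *folio = folios[i];

	err = kho_preserve_folio(folio);
	if (err)
		goto err_unpreserve;

	folio_lock(folio);

	/* Patch 2: unconditionally dirty every folio so the next
	 * kernel never reclaims one that was dirtied after preserve. */
	folio_mark_dirty(folio);

	/* Patch 1: zero fallocated-but-unused folios and mark them
	 * uptodate so their contents survive the live update. */
	if (!folio_test_uptodate(folio)) {
		folio_zero_range(folio, 0, folio_size(folio));
		flush_dcache_folio(folio);
		folio_mark_uptodate(folio);
	}

	folio_unlock(folio);

	pfolio->pfn = folio_pfn(folio);
	pfolio->flags = MEMFD_LUO_FOLIO_DIRTY | MEMFD_LUO_FOLIO_UPTODATE;
	pfolio->index = folio->index;
}
```

With both flags now constant, the serialized per-folio state no longer depends on when preserve() ran relative to userspace activity, which is the race both patches close.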