From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache,
 Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
 Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/9] mm/huge_memory: add a common exit path to zap_huge_pmd()
Date: Thu, 19 Mar 2026 13:00:11 +0000
X-Mailer: git-send-email 2.53.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Other than when we fail to acquire the PTL, every exit path from
zap_huge_pmd() must unlock the PTL, and some paths must additionally
flush the TLB. The code currently duplicates this logic at each return,
so default flush_needed to false, set it to true in the one case that
requires a flush, and share the same exit logic across all paths.

This also makes flush_needed more sensible as a function-scope value (we
do not need to flush for the PFN map/mixed map, huge zero page, or error
cases, for instance).
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Baolin Wang
---
 mm/huge_memory.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a2f87315195d..c84b30461cc5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2431,7 +2431,7 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pmd_t *pmd, unsigned long addr)
 {
 	struct folio *folio = NULL;
-	bool flush_needed = true;
+	bool flush_needed = false;
 	spinlock_t *ptl;
 	pmd_t orig_pmd;
 
@@ -2453,19 +2453,18 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	if (vma_is_special_huge(vma)) {
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
-		spin_unlock(ptl);
-		return true;
+		goto out;
 	}
 	if (is_huge_zero_pmd(orig_pmd)) {
 		if (!vma_is_dax(vma) || arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
-		spin_unlock(ptl);
-		return true;
+		goto out;
 	}
 
 	if (pmd_present(orig_pmd)) {
 		struct page *page = pmd_page(orig_pmd);
 
+		flush_needed = true;
 		folio = page_folio(page);
 		folio_remove_rmap_pmd(folio, page, vma);
 		WARN_ON_ONCE(folio_mapcount(folio) < 0);
@@ -2474,14 +2473,12 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
 
 		folio = softleaf_to_folio(entry);
-		flush_needed = false;
 
 		if (!thp_migration_supported())
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
 	} else {
 		WARN_ON_ONCE(true);
-		spin_unlock(ptl);
-		return true;
+		goto out;
 	}
 
 	if (folio_test_anon(folio)) {
@@ -2508,10 +2505,10 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		folio_put(folio);
 	}
 
+out:
 	spin_unlock(ptl);
 	if (flush_needed)
 		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
-
 	return true;
 }

-- 
2.53.0