From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
    Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller,
    Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
    Christophe Leroy, Naveen N Rao, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
    Christian Borntraeger, Sven Schnelle, Gerald Schaefer,
    "David S. Miller", Andreas Larsson, Arnd Bergmann, Muchun Song,
    Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
    David Hildenbrand, "Matthew Wilcox (Oracle)", Mark Rutland,
    Anshuman Khandual, Dev Jain, Kevin Brodsky, Alexandre Ghiti
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    stable@vger.kernel.org
Subject: [PATCH v3 3/3] arm64: hugetlb: Fix flush_hugetlb_tlb_range() invalidation level
Date: Wed, 26 Feb 2025 12:06:53 +0000
Message-ID: <20250226120656.2400136-4-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250226120656.2400136-1-ryan.roberts@arm.com>
References: <20250226120656.2400136-1-ryan.roberts@arm.com>

commit c910f2b65518 ("arm64/mm: Update tlb invalidation routines for
FEAT_LPA2") changed the "invalidation level unknown" hint from 0 to
TLBI_TTL_UNKNOWN (INT_MAX). But the fallback "unknown level" path in
flush_hugetlb_tlb_range() was not updated. So as it stands, when trying
to invalidate CONT_PMD_SIZE or CONT_PTE_SIZE hugetlb mappings, we will
spuriously try to invalidate at level 0 on LPA2-enabled systems.
Fix this so that the fallback passes TLBI_TTL_UNKNOWN, and while we are
at it, explicitly use the correct stride and level for CONT_PMD_SIZE and
CONT_PTE_SIZE, which should provide a minor optimization.

Cc: stable@vger.kernel.org
Fixes: c910f2b65518 ("arm64/mm: Update tlb invalidation routines for FEAT_LPA2")
Reviewed-by: Anshuman Khandual
Reviewed-by: Catalin Marinas
Signed-off-by: Ryan Roberts
---
 arch/arm64/include/asm/hugetlb.h | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 03db9cb21ace..07fbf5bf85a7 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -76,12 +76,22 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
 {
 	unsigned long stride = huge_page_size(hstate_vma(vma));
 
-	if (stride == PMD_SIZE)
-		__flush_tlb_range(vma, start, end, stride, false, 2);
-	else if (stride == PUD_SIZE)
-		__flush_tlb_range(vma, start, end, stride, false, 1);
-	else
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
+	switch (stride) {
+#ifndef __PAGETABLE_PMD_FOLDED
+	case PUD_SIZE:
+		__flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
+		break;
+#endif
+	case CONT_PMD_SIZE:
+	case PMD_SIZE:
+		__flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
+		break;
+	case CONT_PTE_SIZE:
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
+		break;
+	default:
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
+	}
 }
 
 #endif /* __ASM_HUGETLB_H */
-- 
2.43.0
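
For illustration only, here is a minimal, self-contained sketch of the
stride/level selection that the new switch statement implements. It is
not kernel code: the size constants assume a 4K translation granule
(CONT_PTE_SIZE = 16 pages, CONT_PMD_SIZE = 16 PMDs) and the helper name
pick_stride_level() is made up for the example. The point it shows is
that a size the code does not recognise now gets the explicit
TLBI_TTL_UNKNOWN hint instead of a hard-coded level 0.

/*
 * Illustrative model of the stride -> (stride, level) mapping, not the
 * kernel implementation. Constants assume a 4K translation granule.
 */
#include <limits.h>
#include <stdio.h>

#undef PAGE_SIZE			/* avoid clashing with any system macro */
#define PAGE_SIZE		(1UL << 12)		/* 4K  */
#define CONT_PTE_SIZE		(16 * PAGE_SIZE)	/* 64K */
#define PMD_SIZE		(1UL << 21)		/* 2M  */
#define CONT_PMD_SIZE		(16 * PMD_SIZE)		/* 32M */
#define PUD_SIZE		(1UL << 30)		/* 1G  */
#define TLBI_TTL_UNKNOWN	INT_MAX			/* "level unknown" hint */

/* Pick the invalidation stride and lookup-level hint for a hugetlb size. */
static void pick_stride_level(unsigned long size,
			      unsigned long *stride, int *level)
{
	switch (size) {
	case PUD_SIZE:
		*stride = PUD_SIZE;  *level = 1;
		break;
	case CONT_PMD_SIZE:
	case PMD_SIZE:
		*stride = PMD_SIZE;  *level = 2;
		break;
	case CONT_PTE_SIZE:
		*stride = PAGE_SIZE; *level = 3;
		break;
	default:
		/* Unknown size: say so explicitly rather than claiming level 0. */
		*stride = PAGE_SIZE; *level = TLBI_TTL_UNKNOWN;
		break;
	}
}

int main(void)
{
	unsigned long sizes[] = { PUD_SIZE, CONT_PMD_SIZE, PMD_SIZE,
				  CONT_PTE_SIZE, PAGE_SIZE };

	for (unsigned long i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned long stride;
		int level;

		pick_stride_level(sizes[i], &stride, &level);
		printf("size %10lu -> stride %7lu, level %d\n",
		       sizes[i], stride, level);
	}
	return 0;
}

Built with a plain cc, this prints one row per size; the CONT_PMD_SIZE
and CONT_PTE_SIZE rows correspond to the sizes that previously fell
through to the level-0 fallback on LPA2-enabled systems.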