From: Ryan Roberts <ryan.roberts@arm.com>
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
	Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain, Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v1 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess()
Date: Tue, 16 Dec 2025 14:45:53 +0000
Message-ID: <20251216144601.2106412-9-ryan.roberts@arm.com>
In-Reply-To: <20251216144601.2106412-1-ryan.roberts@arm.com>
References: <20251216144601.2106412-1-ryan.roberts@arm.com>

From: Will Deacon <will@kernel.org>

__flush_tlb_range_limit_excess() is unnecessarily complicated:

  - It takes 'start', 'end' and 'pages' arguments, whereas it only
    needs 'pages' (which the caller has computed from the other two
    arguments!).

  - It erroneously compares 'pages' with MAX_TLBI_RANGE_PAGES when the
    system doesn't support range-based invalidation but the range to be
    invalidated would result in fewer than MAX_DVM_OPS invalidations.

Simplify the function so that it no longer takes the 'start' and 'end'
arguments and only considers the MAX_TLBI_RANGE_PAGES threshold on
systems that implement range-based invalidation.
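[Illustration only, not part of the patch: since 'stride' is a whole
multiple of the page size, the old byte-based comparison
'(end - start) >= MAX_DVM_OPS * stride' and the new page-based one
'pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT' should agree exactly on
stride-aligned ranges. The standalone userspace sketch below checks
that; PAGE_SHIFT and MAX_DVM_OPS are assumed stand-in values here, not
the kernel's definitions.]

/*
 * Sanity-check sketch: old vs new MAX_DVM_OPS threshold.
 * PAGE_SHIFT and MAX_DVM_OPS are illustrative assumptions.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12			/* assume 4KiB base pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define MAX_DVM_OPS	512UL			/* hypothetical stand-in */

/* Old byte-based check, as used on the !system_supports_tlb_range() path. */
static bool limit_excess_old(unsigned long start, unsigned long end,
			     unsigned long stride)
{
	return (end - start) >= (MAX_DVM_OPS * stride);
}

/* New page-based check: the same comparison, rescaled by PAGE_SHIFT. */
static bool limit_excess_new(unsigned long pages, unsigned long stride)
{
	return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
}

int main(void)
{
	/* Strides a caller might pass with a 4KiB granule: PTE and PMD. */
	const unsigned long strides[] = { PAGE_SIZE, 2UL << 20 };

	for (int i = 0; i < 2; i++) {
		unsigned long stride = strides[i];

		for (unsigned long n = 0; n < 2 * MAX_DVM_OPS; n++) {
			/* 'start'/'end' are stride-aligned, as in the callers. */
			unsigned long start = 0, end = n * stride;
			unsigned long pages = (end - start) >> PAGE_SHIFT;

			assert(limit_excess_old(start, end, stride) ==
			       limit_excess_new(pages, stride));
		}
	}

	printf("old and new MAX_DVM_OPS checks agree\n");
	return 0;
}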
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/tlbflush.h | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 0e1902f66e01..3b72a71feac0 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -527,21 +527,13 @@ static __always_inline void __flush_tlb_range_op(tlbi_op lop, tlbi_op rop,
 #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
 	__flush_tlb_range_op(op, r##op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled())
 
-static inline bool __flush_tlb_range_limit_excess(unsigned long start,
-		unsigned long end, unsigned long pages, unsigned long stride)
+static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
+						  unsigned long stride)
 {
-	/*
-	 * When the system does not support TLB range based flush
-	 * operation, (MAX_DVM_OPS - 1) pages can be handled. But
-	 * with TLB range based operation, MAX_TLBI_RANGE_PAGES
-	 * pages can be handled.
-	 */
-	if ((!system_supports_tlb_range() &&
-	     (end - start) >= (MAX_DVM_OPS * stride)) ||
-	    pages > MAX_TLBI_RANGE_PAGES)
+	if (system_supports_tlb_range() && pages > MAX_TLBI_RANGE_PAGES)
 		return true;
 
-	return false;
+	return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
 }
 
 static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
@@ -555,7 +547,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
 	end = round_up(end, stride);
 	pages = (end - start) >> PAGE_SHIFT;
 
-	if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
+	if (__flush_tlb_range_limit_excess(pages, stride)) {
 		flush_tlb_mm(mm);
 		return;
 	}
@@ -619,7 +611,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 	end = round_up(end, stride);
 	pages = (end - start) >> PAGE_SHIFT;
 
-	if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
+	if (__flush_tlb_range_limit_excess(pages, stride)) {
 		flush_tlb_all();
 		return;
 	}
-- 
2.43.0