From: Ryan Roberts
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
	Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
	Linu Cherian, Jonathan Cameron
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess()
Date: Mon, 19 Jan 2026 17:21:55 +0000
Message-ID: <20260119172202.1681510-9-ryan.roberts@arm.com>
In-Reply-To: <20260119172202.1681510-1-ryan.roberts@arm.com>
References: <20260119172202.1681510-1-ryan.roberts@arm.com>

From: Will Deacon

__flush_tlb_range_limit_excess() is unnecessarily complicated:

  - It takes 'start', 'end' and 'pages' arguments, whereas it only
    needs 'pages' (which the caller has computed from the other two
    arguments!).

  - It erroneously compares 'pages' with MAX_TLBI_RANGE_PAGES when the
    system doesn't support range-based invalidation but the range to be
    invalidated would result in fewer than MAX_DVM_OPS invalidations.

Simplify the function so that it no longer takes the 'start' and 'end'
arguments and only considers the MAX_TLBI_RANGE_PAGES threshold on
systems that implement range-based invalidation.
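To make the second point concrete, a worked example (the arithmetic
below assumes MAX_DVM_OPS == PTRS_PER_PTE and MAX_TLBI_RANGE_PAGES ==
__TLBI_RANGE_PAGES(31, 3) == 2,097,152, as in mainline
<asm/tlbflush.h>; the 64K-granule numbers are illustrative, not taken
from this patch):

  Non-range-capable system, 64K pages (PTRS_PER_PTE == 8192), flushing
  160GB at PMD stride (512MB):

    invalidations needed = 160GB / 512MB = 320   (far below MAX_DVM_OPS)
    pages                = 160GB / 64K   = 2,621,440

  The old code sees pages > MAX_TLBI_RANGE_PAGES and returns true,
  forcing a full flush even though only 320 invalidations are needed.
  The new code instead compares pages against
  (MAX_DVM_OPS * stride) >> PAGE_SHIFT == 8192 * 8192 == 67,108,864,
  so the 320 per-entry invalidations proceed as intended.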
Signed-off-by: Will Deacon
Signed-off-by: Ryan Roberts
Reviewed-by: Jonathan Cameron
---
 arch/arm64/include/asm/tlbflush.h | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index cac7768f3483..26e468d86afb 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -526,21 +526,19 @@ static __always_inline void __flush_tlb_range_op(tlbi_op lop, tlbi_op rop,
 #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
 	__flush_tlb_range_op(op, r##op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled())
 
-static inline bool __flush_tlb_range_limit_excess(unsigned long start,
-		unsigned long end, unsigned long pages, unsigned long stride)
+static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
+						  unsigned long stride)
 {
 	/*
-	 * When the system does not support TLB range based flush
-	 * operation, (MAX_DVM_OPS - 1) pages can be handled. But
-	 * with TLB range based operation, MAX_TLBI_RANGE_PAGES
-	 * pages can be handled.
+	 * Assume that the worst case number of DVM ops required to flush a
+	 * given range on a system that supports tlb-range is 20 (4 scales, 1
+	 * final page, 15 for alignment on LPA2 systems), which is much smaller
+	 * than MAX_DVM_OPS.
 	 */
-	if ((!system_supports_tlb_range() &&
-	     (end - start) >= (MAX_DVM_OPS * stride)) ||
-	    pages > MAX_TLBI_RANGE_PAGES)
-		return true;
+	if (system_supports_tlb_range())
+		return pages > MAX_TLBI_RANGE_PAGES;
 
-	return false;
+	return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
 }
 
 static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
@@ -554,7 +552,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
 	end = round_up(end, stride);
 	pages = (end - start) >> PAGE_SHIFT;
 
-	if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
+	if (__flush_tlb_range_limit_excess(pages, stride)) {
 		flush_tlb_mm(mm);
 		return;
 	}
@@ -618,7 +616,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 	end = round_up(end, stride);
 	pages = (end - start) >> PAGE_SHIFT;
 
-	if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
+	if (__flush_tlb_range_limit_excess(pages, stride)) {
 		flush_tlb_all();
 		return;
 	}
-- 
2.43.0
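A note on the "20" in the new comment, for anyone reconstructing it
from the __flush_tlb_range_op() loop earlier in this file: on a
range-capable system the worst case decomposes as up to 15 single-page
invalidations to reach a 64KB-aligned base when LPA2 is in use (15
pages at 4K granule), at most one range invalidation per scale (scales
3 down to 0, i.e. 4 ops), and at most 1 final page, giving
15 + 4 + 1 = 20.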