From nobody Mon Feb 9 11:04:56 2026
From: Ryan Roberts
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
    Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
    Linu Cherian, Jonathan Cameron
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/13]
 arm64: mm: Simplify __TLBI_RANGE_NUM() macro
Date: Mon, 19 Jan 2026 17:21:54 +0000
Message-ID: <20260119172202.1681510-8-ryan.roberts@arm.com>
In-Reply-To: <20260119172202.1681510-1-ryan.roberts@arm.com>
References: <20260119172202.1681510-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Will Deacon

Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
decrement scale"), we don't need to clamp the 'pages' argument to fit
the range for the specified 'scale', as we know that the upper bits
will have been processed in a prior iteration.

Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.

Signed-off-by: Will Deacon
Reviewed-by: Ryan Roberts
Reviewed-by: Dev Jain
Signed-off-by: Ryan Roberts
Reviewed-by: Jonathan Cameron
---
 arch/arm64/include/asm/tlbflush.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a8513b649fe5..cac7768f3483 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -207,11 +207,7 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
  * range.
  */
 #define __TLBI_RANGE_NUM(pages, scale)					\
-	({								\
-		int __pages = min((pages),				\
-				  __TLBI_RANGE_PAGES(31, (scale)));	\
-		(__pages >> (5 * (scale) + 1)) - 1;			\
-	})
+	(((pages) >> (5 * (scale) + 1)) - 1)
 
 /*
  * TLB Invalidation
-- 
2.43.0
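[Editorial note, not part of the patch email.] To illustrate why the clamping is safe to drop, the sketch below simulates the decrementing-scale walk described in the commit message in Python. It is an assumption-laden model, not kernel code: `range_pages` mirrors the known `__TLBI_RANGE_PAGES(num, scale)` formula `((num + 1) << (5 * scale + 1))`, `range_num` is the simplified unclamped macro from this patch, and `flush` is a simplified stand-in for the loop in `__flush_tlb_range_op()` (the real loop also handles the odd trailing page with a level-based TLBI, which is omitted here). The point being demonstrated: because each higher scale already consumed the upper bits of `pages`, the unclamped `num` can never exceed the 5-bit field maximum of 31.

```python
# Sketch (NOT kernel code): model the decrementing-scale loop to show
# that the unclamped __TLBI_RANGE_NUM() never yields num > 31.

TLBI_RANGE_MASK = 0x1f  # the 'num' field of a range TLBI is 5 bits (0..31)

def range_pages(num, scale):
    """__TLBI_RANGE_PAGES(num, scale): pages covered by one range op."""
    return (num + 1) << (5 * scale + 1)

def range_num(pages, scale):
    """The simplified __TLBI_RANGE_NUM(pages, scale), with no clamping."""
    return (pages >> (5 * scale + 1)) - 1

def flush(pages):
    """Walk scale from 3 down to 0, issuing one range op per scale.

    Returns (pages_flushed, remainder). The remainder is always 0 or 1
    because scale 0 covers multiples of 2 pages; the kernel handles the
    leftover page separately with a level TLBI.
    """
    flushed = 0
    for scale in range(3, -1, -1):
        num = range_num(pages, scale)
        if num >= 0:
            # The prior (higher-scale) iterations already stripped the
            # upper bits of 'pages', so num is guaranteed to fit.
            assert num <= TLBI_RANGE_MASK, "num overflowed the 5-bit field"
            flushed += range_pages(num, scale)
            pages -= range_pages(num, scale)
    return flushed, pages
```

Running `flush()` over the full range of inputs up to `__TLBI_RANGE_PAGES(31, 3)` pages (the maximum a single pass can cover) never trips the assertion, which is the invariant the dropped `min()` used to enforce explicitly.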