From nobody Fri Dec 19 07:24:13 2025
From: Ryan Roberts
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
	Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain, Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v1 07/13] arm64: mm: Simplify __TLBI_RANGE_NUM() macro
Date: Tue, 16 Dec 2025 14:45:52 +0000
Message-ID: <20251216144601.2106412-8-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20251216144601.2106412-1-ryan.roberts@arm.com>
References: <20251216144601.2106412-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Will Deacon

Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
decrement scale"), we don't need to clamp the 'pages' argument to fit
the range for the specified 'scale' as we know that the upper bits will
have been processed in a prior iteration. Drop the clamping and simplify
the __TLBI_RANGE_NUM() macro.

Signed-off-by: Will Deacon
Reviewed-by: Ryan Roberts
Reviewed-by: Dev Jain
Signed-off-by: Ryan Roberts
---
 arch/arm64/include/asm/tlbflush.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index d2a144a09a8f..0e1902f66e01 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -208,11 +208,7 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
  * range.
  */
 #define __TLBI_RANGE_NUM(pages, scale)	\
-	({						\
-		int __pages = min((pages),		\
-				  __TLBI_RANGE_PAGES(31, (scale)));	\
-		(__pages >> (5 * (scale) + 1)) - 1;	\
-	})
+	(((pages) >> (5 * (scale) + 1)) - 1)
 
 /*
  * TLB Invalidation
-- 
2.43.0