From: Ye Xingchen <ye.xingchen@zte.com.cn>
bitmap_zero() is faster than bitmap_clear(), so use bitmap_zero()
instead of bitmap_clear().
Signed-off-by: Ye Xingchen <ye.xingchen@zte.com.cn>
---
arch/arm/mm/context.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 4204ffa2d104..2e95a707eb93 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -139,7 +139,7 @@ static void flush_context(unsigned int cpu)
u64 asid;
/* Update the list of reserved ASIDs and the ASID bitmap. */
- bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+ bitmap_zero(asid_map, NUM_USER_ASIDS);
for_each_possible_cpu(i) {
asid = atomic64_xchg(&per_cpu(active_asids, i), 0);
/*
--
2.25.1
On Sat, May 06, 2023 at 04:35:22PM +0800, ye.xingchen@zte.com.cn wrote:
> From: Ye Xingchen <ye.xingchen@zte.com.cn>
>
> bitmap_zero() is faster than bitmap_clear(), so use bitmap_zero()
> instead of bitmap_clear().

Maybe in theory, but as NUM_USER_ASIDS is a power of two (256), both
start and nbits are aligned to BITMAP_MEM_ALIGNMENT, so bitmap_clear()
will call memset(). The only difference between the two is that
bitmap_zero() doesn't rely on the compiler working out that it can call
memset() (which is worked out at compile time, not run time). So I doubt
this change makes any difference whatsoever to the generated code, and
thus it is change for change's sake. In other words, it's just useless
churn.

Thanks anyway.

--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!