From: Ye Xingchen <ye.xingchen@zte.com.cn>

bitmap_zero() is faster than bitmap_clear(), so use bitmap_zero()
instead of bitmap_clear().

Signed-off-by: Ye Xingchen <ye.xingchen@zte.com.cn>
---
arch/arm64/mm/context.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index e1e0dca01839..ed0bf7f8e8ce 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -95,7 +95,7 @@ static void set_reserved_asid_bits(void)
 	else if (arm64_kernel_unmapped_at_el0())
 		set_kpti_asid_bits(asid_map);
 	else
-		bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+		bitmap_zero(asid_map, NUM_USER_ASIDS);
 }
 
 #define asid_gen_match(asid) \
--
2.25.1
On Sat, May 06, 2023 at 04:36:31PM +0800, ye.xingchen@zte.com.cn wrote:
> From: Ye Xingchen <ye.xingchen@zte.com.cn>
>
> bitmap_zero() is faster than bitmap_clear(), so use bitmap_zero()
> instead of bitmap_clear().

Is it? Don't these both boil down to:

	memset(asid_map, 0, NUM_USER_ASIDS / 8)

?

Will

> ---
>  arch/arm64/mm/context.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> index e1e0dca01839..ed0bf7f8e8ce 100644
> --- a/arch/arm64/mm/context.c
> +++ b/arch/arm64/mm/context.c
> @@ -95,7 +95,7 @@ static void set_reserved_asid_bits(void)
>  	else if (arm64_kernel_unmapped_at_el0())
>  		set_kpti_asid_bits(asid_map);
>  	else
> -		bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
> +		bitmap_zero(asid_map, NUM_USER_ASIDS);
>  }
>
>  #define asid_gen_match(asid) \
> --
> 2.25.1