flush_tlb_kernel_range() is invoked when kernel memory mapping changes.
On x86 platforms without the INVLPGB feature enabled, we need to send IPIs
to every online CPU and synchronously wait for them to complete
do_kernel_range_flush(). This process can be time-consuming due to factors
such as a large number of CPUs or interrupts being disabled on some of
them. Since flush_tlb_kernel_range() always disables preemption, it may
hurt the scheduling latency of other tasks on the current CPU.
The previous patch converted flush_tlb_info from a per-CPU variable to an
on-stack variable. It is also no longer necessary to explicitly disable
preemption before calling smp_call*(), since those helpers handle
preemption internally. It is now safe to enable preemption during
flush_tlb_kernel_range(). In addition, use raw_smp_processor_id() in
get_flush_tlb_info() to avoid warnings from check_preemption_disabled().
Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
arch/x86/mm/tlb.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 58c6f3d2f993..c37cc9845abc 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1394,7 +1394,7 @@ static void get_flush_tlb_info(struct flush_tlb_info *info,
info->stride_shift = stride_shift;
info->freed_tables = freed_tables;
info->new_tlb_gen = new_tlb_gen;
- info->initiating_cpu = smp_processor_id();
+ info->initiating_cpu = raw_smp_processor_id();
info->trim_cpumask = 0;
}
@@ -1461,6 +1461,8 @@ static void invlpgb_kernel_range_flush(struct flush_tlb_info *info)
{
unsigned long addr, nr;
+ guard(preempt)();
+
for (addr = info->start; addr < info->end; addr += nr << PAGE_SHIFT) {
nr = (info->end - addr) >> PAGE_SHIFT;
@@ -1505,7 +1507,6 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
struct flush_tlb_info info;
- guard(preempt)();
get_flush_tlb_info(&info, NULL, start, end, PAGE_SHIFT, false,
TLB_GENERATION_INVALID);
--
2.20.1