From nobody Wed Dec 17 21:28:07 2025
Date: Fri, 06 Dec 2024 09:40:00 -0000
From: "tip-bot2 for Rik van Riel"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: kernel test robot, Rik van Riel, Ingo Molnar, Dave Hansen,
 Andy Lutomirski, Mathieu Desnoyers, Peter Zijlstra, Linus Torvalds,
 x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: x86/mm] x86/mm/tlb: Only trim the mm_cpumask once a second
In-Reply-To: <20241204210316.612ee573@fangorn>
References: <20241204210316.612ee573@fangorn>
Message-ID: <173347800031.412.15841167548338325511.tip-bot2@tip-bot2>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     6db2526c1d694c91c6e05e2f186c085e9460f202
Gitweb:        https://git.kernel.org/tip/6db2526c1d694c91c6e05e2f186c085e9460f202
Author:        Rik van Riel
AuthorDate:    Wed, 04 Dec 2024 21:03:16 -05:00
Committer:     Ingo Molnar
CommitterDate: Fri, 06 Dec 2024 10:26:20 +01:00

x86/mm/tlb: Only trim the mm_cpumask once a second

Setting and clearing CPU bits in the mm_cpumask is only ever done
by the CPU itself, from the context switch code or the TLB flush
code. Synchronization is handled by switch_mm_irqs_off() blocking
interrupts.

Sending TLB flush IPIs to CPUs that are in the mm_cpumask, but no
longer running the program, causes a regression in the will-it-scale
tlbflush2 test. This test is contrived, but a large regression here
might cause a small regression in some real world workload.

Instead of always sending IPIs to CPUs that are in the mm_cpumask
but no longer running the program, send these IPIs only once a second.
The rest of the time we can skip over CPUs where the loaded_mm is
different from the target mm.

Reported-by: kernel test robot
Signed-off-by: Rik van Riel
Signed-off-by: Ingo Molnar
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Mathieu Desnoyers
Cc: Peter Zijlstra
Cc: Linus Torvalds
Link: https://lore.kernel.org/r/20241204210316.612ee573@fangorn
Closes: https://lore.kernel.org/oe-lkp/202411282207.6bd28eae-lkp@intel.com/
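
[ Editor's note: the sketch below is illustrative and not part of the
  commit. It reproduces the once-a-second gate of should_trim_cpumask()
  in userspace, with CLOCK_MONOTONIC milliseconds standing in for
  jiffies and a toy struct standing in for mm_struct; all names are
  hypothetical. Because the clock here is monotonic, a plain comparison
  replaces the kernel's wraparound-safe time_after(). ]

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct toy_mm {
	long long next_trim_ms;	/* deadline before the next trim may fire */
};

static long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

/* True at most once per second; pushes the deadline out when it fires. */
static bool should_trim(struct toy_mm *mm)
{
	long long now = now_ms();

	if (now > mm->next_trim_ms) {
		mm->next_trim_ms = now + 1000;
		return true;
	}
	return false;
}

int main(void)
{
	struct toy_mm mm = { .next_trim_ms = now_ms() + 1000 };
	struct timespec delay = { 0, 400 * 1000 * 1000 };	/* 400 ms */

	/* Only the iterations that cross a one-second boundary trim. */
	for (int i = 0; i < 8; i++) {
		printf("flush %d: trim=%s\n", i, should_trim(&mm) ? "yes" : "no");
		nanosleep(&delay, NULL);
	}
	return 0;
}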
---
 arch/x86/include/asm/mmu.h         |  2 ++
 arch/x86/include/asm/mmu_context.h |  1 +
 arch/x86/include/asm/tlbflush.h    |  1 +
 arch/x86/mm/tlb.c                  | 35 ++++++++++++++++++++++++++++---
 4 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index ce4677b..3b496cd 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -37,6 +37,8 @@ typedef struct {
 	 */
 	atomic64_t tlb_gen;
 
+	unsigned long next_trim_cpumask;
+
 #ifdef CONFIG_MODIFY_LDT_SYSCALL
 	struct rw_semaphore ldt_usr_sem;
 	struct ldt_struct *ldt;
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 2886cb6..795fdd5 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -151,6 +151,7 @@ static inline int init_new_context(struct task_struct *tsk,
 
 	mm->context.ctx_id = atomic64_inc_return(&last_mm_ctx_id);
 	atomic64_set(&mm->context.tlb_gen, 0);
+	mm->context.next_trim_cpumask = jiffies + HZ;
 
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
 	if (cpu_feature_enabled(X86_FEATURE_OSPKE)) {
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 69e79ff..02fc2aa 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -222,6 +222,7 @@ struct flush_tlb_info {
 	unsigned int		initiating_cpu;
 	u8			stride_shift;
 	u8			freed_tables;
+	u8			trim_cpumask;
 };
 
 void flush_tlb_local(void);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 3c30817..458a5d5 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -892,9 +892,36 @@ done:
 			nr_invalidate);
 }
 
-static bool tlb_is_not_lazy(int cpu, void *data)
+static bool should_flush_tlb(int cpu, void *data)
 {
-	return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
+	struct flush_tlb_info *info = data;
+
+	/* Lazy TLB will get flushed at the next context switch. */
+	if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
+		return false;
+
+	/* No mm means kernel memory flush. */
+	if (!info->mm)
+		return true;
+
+	/* The target mm is loaded, and the CPU is not lazy. */
+	if (per_cpu(cpu_tlbstate.loaded_mm, cpu) == info->mm)
+		return true;
+
+	/* In cpumask, but not the loaded mm? Periodically remove by flushing. */
+	if (info->trim_cpumask)
+		return true;
+
+	return false;
+}
+
+static bool should_trim_cpumask(struct mm_struct *mm)
+{
+	if (time_after(jiffies, READ_ONCE(mm->context.next_trim_cpumask))) {
+		WRITE_ONCE(mm->context.next_trim_cpumask, jiffies + HZ);
+		return true;
+	}
+	return false;
+}
 
 DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
 
@@ -928,7 +955,7 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
 	if (info->freed_tables)
 		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
 	else
-		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
+		on_each_cpu_cond_mask(should_flush_tlb, flush_tlb_func,
 				      (void *)info, 1, cpumask);
 }
 
@@ -979,6 +1006,7 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 	info->freed_tables	= freed_tables;
 	info->new_tlb_gen	= new_tlb_gen;
 	info->initiating_cpu	= smp_processor_id();
+	info->trim_cpumask	= 0;
 
 	return info;
 }
@@ -1021,6 +1049,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	 * flush_tlb_func_local() directly in this case.
 	 */
 	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+		info->trim_cpumask = should_trim_cpumask(mm);
 		flush_tlb_multi(mm_cpumask(mm), info);
 	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
 		lockdep_assert_irqs_enabled();
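
[ Editor's note: another illustrative sketch, not part of the commit.
  should_flush_tlb() above is the condition callback passed to
  on_each_cpu_cond_mask(): the predicate is evaluated for each candidate
  CPU and the flush IPI is sent only where it returns true. The toy
  per-CPU tables and all names below are hypothetical stand-ins. ]

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

struct flush_info {
	int target_mm;	/* mm being flushed; 0 stands in for a kernel flush */
	bool trim;	/* set at most once a second to trim stale bits */
};

/* Toy per-CPU state: the loaded mm and lazy-TLB flag of each CPU. */
static int loaded_mm[NR_CPUS] = { 1, 1, 2, 1, 3, 1, 2, 1 };
static bool is_lazy[NR_CPUS]  = { false, true, false, false,
				  false, false, true, false };

static bool should_flush(int cpu, const struct flush_info *info)
{
	if (is_lazy[cpu])
		return false;		/* flushed at next context switch */
	if (!info->target_mm)
		return true;		/* kernel flush hits every CPU */
	if (loaded_mm[cpu] == info->target_mm)
		return true;		/* target mm is live on this CPU */
	return info->trim;		/* stale cpumask bit: only when trimming */
}

/* Walk the mask and "send" an IPI only where the predicate says so. */
static void send_flush_ipis(unsigned int mask, const struct flush_info *info)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if ((mask & (1u << cpu)) && should_flush(cpu, info))
			printf("  IPI -> CPU %d\n", cpu);
}

int main(void)
{
	struct flush_info info = { .target_mm = 1, .trim = false };

	puts("normal flush (stale CPUs skipped):");
	send_flush_ipis(0xff, &info);

	info.trim = true;
	puts("trimming flush (stale CPUs included once a second):");
	send_flush_ipis(0xff, &info);
	return 0;
}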