Subject: [PATCH 11/11] x86/mm: Enable preemption during flush_tlb_kernel_range
From: "Chuyi Zhou"
Date: Tue, 3 Feb 2026 19:24:01 +0800
Message-Id: <20260203112401.3889029-12-zhouchuyi@bytedance.com>
In-Reply-To: <20260203112401.3889029-1-zhouchuyi@bytedance.com>
References: <20260203112401.3889029-1-zhouchuyi@bytedance.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"

flush_tlb_kernel_range() is invoked when kernel memory mappings change. On
x86 platforms without the INVLPGB feature, we must send IPIs to every
online CPU and synchronously wait for all of them to complete
do_kernel_range_flush(). This can be time-consuming when there are many
CPUs, or when some CPUs are slow to respond (for example because they are
running with interrupts disabled). Since flush_tlb_kernel_range() always
disables preemption, this waiting can hurt the scheduling latency of other
tasks on the current CPU.

The previous patch converted flush_tlb_info from a per-cpu variable to an
on-stack variable. In addition, it is no longer necessary to explicitly
disable preemption before calling smp_call*(), since those helpers handle
preemption internally. It is therefore now safe to enable preemption during
flush_tlb_kernel_range().
Signed-off-by: Chuyi Zhou
---
 arch/x86/mm/tlb.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 4162d7ff024f..f0de6c1e387f 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1467,6 +1467,8 @@ static void invlpgb_kernel_range_flush(struct flush_tlb_info *info)
 {
 	unsigned long addr, nr;
 
+	guard(preempt)();
+
 	for (addr = info->start; addr < info->end; addr += nr << PAGE_SHIFT) {
 		nr = (info->end - addr) >> PAGE_SHIFT;
 
@@ -1517,7 +1519,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		.new_tlb_gen = TLB_GENERATION_INVALID
 	};
 
-	guard(preempt)();
+	guard(migrate)();
 
 	if ((end - start) >> PAGE_SHIFT > tlb_single_page_flush_ceiling) {
 		start = 0;
-- 
2.20.1