From nobody Sat Apr 25 11:54:50 2026
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:11 +0800
Subject: [PATCH v5 01/12] smp: Disable preemption explicitly in __csd_lock_wait
Message-Id: <20260422064022.3791652-2-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

The following patches will enable preemption before csd_lock_wait(),
which could break csdlock_debug: if the waiter is preempted, the time
slices of other tasks on the CPU get accounted between the
ktime_get_mono_fast_ns() calls and inflate the reported wait time.
Disable preemption explicitly in __csd_lock_wait() so the measurement
covers only the waiting task. This is a preparation for the following
patches.
Signed-off-by: Chuyi Zhou
Acked-by: Muchun Song
Reviewed-by: Steven Rostedt (Google)
---
 kernel/smp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/smp.c b/kernel/smp.c
index f349960f79ca..fc1f7a964616 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -323,6 +323,8 @@ static void __csd_lock_wait(call_single_data_t *csd)
 	int bug_id = 0;
 	u64 ts0, ts1;
 
+	guard(preempt)();
+
 	ts1 = ts0 = ktime_get_mono_fast_ns();
 	for (;;) {
 		if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id, &nmessages))
-- 
2.20.1
From nobody Sat Apr 25 11:54:50 2026
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:12 +0800
Subject: [PATCH v5 02/12] smp: Enable preemption early in smp_call_function_single
Message-Id: <20260422064022.3791652-3-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

smp_call_function_single() currently disables preemption for two main
reasons:

- To protect the per-cpu csd_data from concurrent modification by other
  tasks on the current CPU in the !wait case. In the wait case,
  synchronization is not a concern because an on-stack csd is used.

- To prevent the remote online CPU from being offlined. Specifically, we
  want to ensure that no new IPIs are queued after smpcfd_dying_cpu()
  has finished.
Disabling preemption for the entire execution is unnecessary; in
particular, the csd_lock_wait() part does not require preemption
protection. Enable preemption before csd_lock_wait() to shrink the
preemption-disabled critical section.

Signed-off-by: Chuyi Zhou
Reviewed-by: Muchun Song
Reviewed-by: Steven Rostedt (Google)
---
 kernel/smp.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index fc1f7a964616..b603d4229f95 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -685,11 +685,16 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 
 	err = generic_exec_single(cpu, csd);
 
+	/*
+	 * @csd is stack-allocated when @wait is true. No concurrent access
+	 * except from the IPI completion path, so we can re-enable preemption
+	 * early to reduce latency.
+	 */
+	put_cpu();
+
 	if (wait)
 		csd_lock_wait(csd);
 
-	put_cpu();
-
 	return err;
 }
 EXPORT_SYMBOL(smp_call_function_single);
-- 
2.20.1
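[Editorial note: why put_cpu() may move ahead of the wait can be sketched
in userspace. The completion cell lives on the waiter's stack and only the
"remote" side completes it, so no exclusion is needed while blocking. This
is illustrative only; `fake_csd` and friends are invented names, and a
pthread stands in for the remote CPU:]

```c
#include <pthread.h>

/* On-stack completion cell, mimicking the csd_stack in the wait case. */
struct fake_csd {
	pthread_mutex_t lock;
	pthread_cond_t done_cv;
	int done;
};

static void *worker(void *arg)
{
	struct fake_csd *csd = arg;

	pthread_mutex_lock(&csd->lock);
	csd->done = 1;                   /* analogue of csd_unlock() */
	pthread_cond_signal(&csd->done_cv);
	pthread_mutex_unlock(&csd->lock);
	return NULL;
}

/* Submit work, end the "exclusive" submission step, then block. The
 * cell is on our stack and only the worker writes it, so blocking
 * needs no further protection. Returns 1 once the work completed. */
int call_and_wait(void)
{
	struct fake_csd csd = {
		PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 };
	pthread_t t;

	pthread_create(&t, NULL, worker, &csd);  /* "send the IPI" */
	/* submission done: the critical section could end right here */
	pthread_mutex_lock(&csd.lock);
	while (!csd.done)                /* analogue of csd_lock_wait() */
		pthread_cond_wait(&csd.done_cv, &csd.lock);
	pthread_mutex_unlock(&csd.lock);
	pthread_join(t, NULL);
	return csd.done;
}
```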
From nobody Sat Apr 25 11:54:50 2026
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:13 +0800
Subject: [PATCH v5 03/12] smp: Refactor remote CPU selection in smp_call_function_any()
Message-Id: <20260422064022.3791652-4-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

Currently, smp_call_function_any() disables preemption across the entire
process of picking a target CPU, enqueueing the IPI, and synchronously
waiting for the remote CPU. Since smp_call_function_single() has already
been optimized to re-enable preemption before the synchronous
csd_lock_wait(), callers of smp_call_function_any() should also benefit
from this optimization and see a smaller preemption-disabled critical
section.

A naive approach would be to simply remove get_cpu() and put_cpu() from
smp_call_function_any(), leaving the preemption disablement entirely to
smp_call_function_single(). However, doing so opens a dangerous
preemption window between picking the remote CPU (e.g., via
sched_numa_find_nth_cpu()) and dispatching the IPI inside
smp_call_function_single(). If the selected remote CPU is fully offlined
during this window, smp_call_function_single() will fail its
cpu_online() check and return -ENXIO directly to the caller, violating
the guarantee to execute on *any* online CPU in the mask.

To enable this optimization safely, refactor smp_call_function_any() and
smp_call_function_single() by moving the remote CPU selection into a
common __smp_call_function_single(), keeping the entire selection and
IPI dispatch within a single preemption-disabled region.

Signed-off-by: Chuyi Zhou
---
 kernel/smp.c | 46 +++++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index b603d4229f95..f5bd648d6ae4 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -627,16 +627,8 @@ void flush_smp_call_function_queue(void)
 	local_irq_restore(flags);
 }
 
-/*
- * smp_call_function_single - Run a function on a specific CPU
- * @func: The function to run. This must be fast and non-blocking.
- * @info: An arbitrary pointer to pass to the function.
- * @wait: If true, wait until function has completed on other CPUs.
- *
- * Returns 0 on success, else a negative status code.
- */
-int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
-			     int wait)
+static int __smp_call_function_single(int cpu, smp_call_func_t func,
+				      void *info, const struct cpumask *mask, int wait)
 {
 	call_single_data_t *csd;
 	call_single_data_t csd_stack = {
@@ -653,6 +645,14 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 	 */
 	this_cpu = get_cpu();
 
+	if (mask) {
+		/* Try for same CPU (cheapest) */
+		if (!cpumask_test_cpu(this_cpu, mask))
+			cpu = sched_numa_find_nth_cpu(mask, 0, cpu_to_node(this_cpu));
+		else
+			cpu = this_cpu;
+	}
+
 	/*
 	 * Can deadlock when called with interrupts disabled.
 	 * We allow cpu's that are not yet online though, as no one else can
@@ -697,6 +697,20 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 
 	return err;
 }
+
+/*
+ * smp_call_function_single - Run a function on a specific CPU
+ * @func: The function to run. This must be fast and non-blocking.
+ * @info: An arbitrary pointer to pass to the function.
+ * @wait: If true, wait until function has completed on other CPUs.
+ *
+ * Returns 0 on success, else a negative status code.
+ */
+int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
+			     int wait)
+{
+	return __smp_call_function_single(cpu, func, info, NULL, wait);
+}
 EXPORT_SYMBOL(smp_call_function_single);
 
 /**
@@ -761,17 +775,7 @@ EXPORT_SYMBOL_GPL(smp_call_function_single_async);
 int smp_call_function_any(const struct cpumask *mask,
 			  smp_call_func_t func, void *info, int wait)
 {
-	unsigned int cpu;
-	int ret;
-
-	/* Try for same CPU (cheapest) */
-	cpu = get_cpu();
-	if (!cpumask_test_cpu(cpu, mask))
-		cpu = sched_numa_find_nth_cpu(mask, 0, cpu_to_node(cpu));
-
-	ret = smp_call_function_single(cpu, func, info, wait);
-	put_cpu();
-	return ret;
+	return __smp_call_function_single(-1, func, info, mask, wait);
 }
 EXPORT_SYMBOL_GPL(smp_call_function_any);
 
-- 
2.20.1
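[Editorial note: the shape of the refactor, one internal helper serving
both the "specific CPU" and "any CPU in mask" entry points with the
selection performed inside the helper, can be sketched like this. A 64-bit
word stands in for a cpumask and all names are invented:]

```c
#include <stddef.h>

typedef unsigned long fake_mask_t;

/* Stand-in for sched_numa_find_nth_cpu(): pick the lowest set bit. */
static int pick_first(fake_mask_t mask)
{
	for (int cpu = 0; cpu < 64; cpu++)
		if (mask & (1ul << cpu))
			return cpu;
	return -1;
}

/* Common helper: with mask == NULL the caller names the CPU directly
 * (the smp_call_function_single() case); with a mask, the target is
 * chosen here, inside the same critical section that would dispatch
 * the IPI (the smp_call_function_any() case), so the choice cannot go
 * stale before dispatch. */
int helper_select(int cpu, const fake_mask_t *mask)
{
	if (mask)
		cpu = pick_first(*mask);
	return cpu;
}
```

[The key property is positional: because selection and dispatch share one
critical section, no offline transition can slip between them.]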
From nobody Sat Apr 25 11:54:50 2026
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:14 +0800
Subject: [PATCH v5 04/12] smp: Use task-local IPI cpumask in smp_call_function_many_cond()
Message-Id: <20260422064022.3791652-5-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

Prepare a task-local IPI cpumask during thread creation and use it in
place of the percpu cfd cpumask in smp_call_function_many_cond(). We
will enable preemption during csd_lock_wait() later, and the task-local
mask prevents concurrent access to cfd->cpumask from other tasks on the
current CPU.

Where cpumask_size() is smaller than or equal to the pointer size, the
cpumask is stashed in the pointer itself to avoid extra memory
allocations.

Signed-off-by: Chuyi Zhou
---
 include/linux/sched.h |  6 +++++
 include/linux/smp.h   | 20 +++++++++++++++
 kernel/fork.c         |  9 ++++++-
 kernel/smp.c          | 59 ++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 87 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8ec3b6d7d718..022df8b9c62f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1347,6 +1347,12 @@ struct task_struct {
 	struct list_head perf_event_list;
 	struct perf_ctx_data __rcu *perf_ctx_data;
 #endif
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPTION)
+	union {
+		cpumask_t *ipi_mask_ptr;
+		unsigned long ipi_mask_val;
+	};
+#endif
 #ifdef CONFIG_DEBUG_PREEMPT
 	unsigned long preempt_disable_ip;
 #endif
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 1ebd88026119..c7b8cc82ad3c 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -167,6 +167,12 @@ void smp_call_function_many(const struct cpumask *mask,
 int smp_call_function_any(const struct cpumask *mask,
 			  smp_call_func_t func, void *info, int wait);
 
+#ifdef CONFIG_PREEMPTION
+int smp_task_ipi_mask_alloc(struct task_struct *task);
+void smp_task_ipi_mask_free(struct task_struct *task);
+cpumask_t *smp_task_ipi_mask(struct task_struct *cur);
+#endif
+
 void kick_all_cpus_sync(void);
 void wake_up_all_idle_cpus(void);
 bool cpus_peek_for_pending_ipi(const struct cpumask *mask);
@@ -306,4 +312,18 @@ bool csd_lock_is_stuck(void);
 static inline bool csd_lock_is_stuck(void) { return false; }
 #endif
 
+#if !defined(CONFIG_SMP) || !defined(CONFIG_PREEMPTION)
+static inline int smp_task_ipi_mask_alloc(struct task_struct *task)
+{
+	return 0;
+}
+static inline void smp_task_ipi_mask_free(struct task_struct *task)
+{
+}
+static inline cpumask_t *smp_task_ipi_mask(struct task_struct *cur)
+{
+	return NULL;
+}
+#endif
+
 #endif /* __LINUX_SMP_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index 079802cb6100..206dda0d5254 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -533,6 +533,7 @@ void free_task(struct task_struct *tsk)
 #endif
 	release_user_cpus_ptr(tsk);
 	scs_release(tsk);
+	smp_task_ipi_mask_free(tsk);
 
 #ifndef CONFIG_THREAD_INFO_IN_TASK
 	/*
@@ -930,10 +931,14 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 #endif
 	account_kernel_stack(tsk, 1);
 
-	err = scs_prepare(tsk, node);
+	err = smp_task_ipi_mask_alloc(tsk);
 	if (err)
 		goto free_stack;
 
+	err = scs_prepare(tsk, node);
+	if (err)
+		goto free_ipi_mask;
+
 #ifdef CONFIG_SECCOMP
 	/*
 	 * We must handle setting up seccomp filters once we're under
@@ -1004,6 +1009,8 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 #endif
 	return tsk;
 
+free_ipi_mask:
+	smp_task_ipi_mask_free(tsk);
 free_stack:
 	exit_task_stack_account(tsk);
 	free_thread_stack(tsk);
diff --git a/kernel/smp.c b/kernel/smp.c
index f5bd648d6ae4..488ffeec5cd1 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -779,6 +779,44 @@ int smp_call_function_any(const struct cpumask *mask,
 }
 EXPORT_SYMBOL_GPL(smp_call_function_any);
 
+static DEFINE_STATIC_KEY_FALSE(ipi_mask_inlined);
+
+#ifdef CONFIG_PREEMPTION
+
+int smp_task_ipi_mask_alloc(struct task_struct *task)
+{
+	if (static_branch_unlikely(&ipi_mask_inlined))
+		return 0;
+
+	task->ipi_mask_ptr = kmalloc(cpumask_size(), GFP_KERNEL);
+	if (!task->ipi_mask_ptr)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void smp_task_ipi_mask_free(struct task_struct *task)
+{
+	if (static_branch_unlikely(&ipi_mask_inlined))
+		return;
+
+	kfree(task->ipi_mask_ptr);
+}
+
+cpumask_t *smp_task_ipi_mask(struct task_struct *cur)
+{
+	/*
+	 * If cpumask_size() is smaller than or equal to the pointer
+	 * size, the cpumask is stashed in the pointer itself to
+	 * avoid extra memory allocations.
+	 */
+	if (static_branch_unlikely(&ipi_mask_inlined))
+		return (cpumask_t *)&cur->ipi_mask_val;
+
+	return cur->ipi_mask_ptr;
+}
+#endif
+
 /*
  * Flags to be used as scf_flags argument of smp_call_function_many_cond().
  *
@@ -796,11 +834,18 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	int cpu, last_cpu, this_cpu = smp_processor_id();
 	struct call_function_data *cfd;
 	bool wait = scf_flags & SCF_WAIT;
+	struct cpumask *cpumask, *task_mask;
+	bool preemptible_wait;
 	int nr_cpus = 0;
 	bool run_remote = false;
 
 	lockdep_assert_preemption_disabled();
 
+	task_mask = smp_task_ipi_mask(current);
+	preemptible_wait = task_mask;
+	cfd = this_cpu_ptr(&cfd_data);
+	cpumask = preemptible_wait ? task_mask : cfd->cpumask;
+
 	/*
 	 * Can deadlock when called with interrupts disabled.
 	 * We allow cpu's that are not yet online though, as no one else can
@@ -821,16 +866,15 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 
 	/* Check if we need remote execution, i.e., any CPU excluding this one. */
 	if (cpumask_any_and_but(mask, cpu_online_mask, this_cpu) < nr_cpu_ids) {
-		cfd = this_cpu_ptr(&cfd_data);
-		cpumask_and(cfd->cpumask, mask, cpu_online_mask);
-		__cpumask_clear_cpu(this_cpu, cfd->cpumask);
+		cpumask_and(cpumask, mask, cpu_online_mask);
+		__cpumask_clear_cpu(this_cpu, cpumask);
 
 		cpumask_clear(cfd->cpumask_ipi);
-		for_each_cpu(cpu, cfd->cpumask) {
+		for_each_cpu(cpu, cpumask) {
 			call_single_data_t *csd = per_cpu_ptr(cfd->csd, cpu);
 
 			if (cond_func && !cond_func(cpu, info)) {
-				__cpumask_clear_cpu(cpu, cfd->cpumask);
+				__cpumask_clear_cpu(cpu, cpumask);
 				continue;
 			}
 
@@ -881,7 +925,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	}
 
 	if (run_remote && wait) {
-		for_each_cpu(cpu, cfd->cpumask) {
+		for_each_cpu(cpu, cpumask) {
 			call_single_data_t *csd;
 
 			csd = per_cpu_ptr(cfd->csd, cpu);
@@ -997,6 +1041,9 @@ EXPORT_SYMBOL(nr_cpu_ids);
 void __init setup_nr_cpu_ids(void)
 {
 	set_nr_cpu_ids(find_last_bit(cpumask_bits(cpu_possible_mask), NR_CPUS) + 1);
+
+	if (IS_ENABLED(CONFIG_PREEMPTION) && cpumask_size() <= sizeof(unsigned long))
+		static_branch_enable(&ipi_mask_inlined);
 }
 
 /* Called by boot processor to activate the rest. */
-- 
2.20.1
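[Editorial note: the small-mask optimization above is an instance of a
general trick: when an object fits in a pointer-sized word, store it in
the pointer field itself and skip the allocation. A userspace sketch of
the union (names invented; the patch's real fields are `ipi_mask_ptr` and
`ipi_mask_val` in task_struct):]

```c
/* Mirrors the ipi_mask_ptr/ipi_mask_val union: the same word either
 * points to out-of-line storage or *is* the storage. */
union mask_slot {
	unsigned long *ptr;  /* used when the mask is wider than a word */
	unsigned long  val;  /* used when the mask fits in one word     */
};

/* Inline variant: treat the pointer field itself as mask storage. */
unsigned long *slot_inline_mask(union mask_slot *s)
{
	return &s->val;
}

/* Round-trip check: bits written through the inline view are simply
 * the word's value; no heap allocation is involved. Returns 1 on
 * success. */
int slot_roundtrip(unsigned long bits)
{
	union mask_slot s;

	*slot_inline_mask(&s) = bits;
	return s.val == bits;
}
```

[The static key in the patch makes the inline-vs-allocated decision once
at boot, since cpumask_size() is fixed after setup_nr_cpu_ids().]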
From nobody Sat Apr 25 11:54:50 2026
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:15 +0800
Subject: [PATCH v5 05/12] smp: Alloc percpu csd data in smpcfd_prepare_cpu() only once
Message-Id: <20260422064022.3791652-6-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

A later patch will enable preemption during csd_lock_wait() in
smp_call_function_many_cond(), which may lead to accessing cfd->csd data
that has already been freed by smpcfd_dead_cpu().

One way to fix this is to use RCU to protect the csd data and wait for
all read-side critical sections to exit before freeing the memory in
smpcfd_dead_cpu(), but that could delay CPU shutdown. This patch takes a
simpler approach: allocate the percpu csd in the CPU bring-up path only
once and skip freeing the csd memory in smpcfd_dead_cpu().
Suggested-by: Sebastian Andrzej Siewior
Signed-off-by: Chuyi Zhou
Acked-by: Muchun Song
---
 kernel/smp.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 488ffeec5cd1..134b181fb593 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -63,7 +63,15 @@ int smpcfd_prepare_cpu(unsigned int cpu)
 		free_cpumask_var(cfd->cpumask);
 		return -ENOMEM;
 	}
-	cfd->csd = alloc_percpu(call_single_data_t);
+
+	/*
+	 * The percpu csd is allocated only once and never freed.
+	 * This ensures that smp_call_function_many_cond() can safely
+	 * access the csd of an offlined CPU if it gets preempted
+	 * during csd_lock_wait().
+	 */
+	if (!cfd->csd)
+		cfd->csd = alloc_percpu(call_single_data_t);
 	if (!cfd->csd) {
 		free_cpumask_var(cfd->cpumask);
 		free_cpumask_var(cfd->cpumask_ipi);
@@ -79,7 +87,6 @@ int smpcfd_dead_cpu(unsigned int cpu)
 
 	free_cpumask_var(cfd->cpumask);
 	free_cpumask_var(cfd->cpumask_ipi);
-	free_percpu(cfd->csd);
 	return 0;
 }
 
-- 
2.20.1
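[Editorial note: the allocate-once discipline adopted above can be
sketched in userspace. The "prepare" step allocates only on first call and
the "dead" step leaves the buffer in place, so a late reader can never see
freed memory and repeated hotplug cycles reuse one allocation. Sketch with
invented names:]

```c
#include <stdlib.h>

static int *csd_buf;   /* stands in for cfd->csd; never freed */

/* "prepare" step: allocate on first use only, like the patched
 * smpcfd_prepare_cpu(). Returns 0 on success, -1 on allocation
 * failure. */
int fake_prepare(void)
{
	if (!csd_buf)
		csd_buf = calloc(64, sizeof(*csd_buf));
	return csd_buf ? 0 : -1;
}

/* The "dead" step deliberately does NOT free csd_buf, so a second
 * prepare (the CPU coming back online) reuses the same memory.
 * Returns 1 if the buffer was indeed reused. */
int fake_prepare_reuses_buffer(void)
{
	int *first;

	if (fake_prepare() != 0)
		return 0;
	first = csd_buf;
	return fake_prepare() == 0 && csd_buf == first;
}
```

[The cost is one永久 percpu allocation per CPU kept across offline; the
benefit is that no reader synchronization is needed at teardown.]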
From nobody Sat Apr 25 11:54:50 2026
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:16 +0800
Subject: [PATCH v5 06/12] smp: Enable preemption early in smp_call_function_many_cond
Message-Id: <20260422064022.3791652-7-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

Disabling preemption across the whole of smp_call_function_many_cond()
was primarily for the following reasons:

- To prevent a remote online CPU from going offline. Specifically, we
  want to ensure that no new csds are queued after smpcfd_dying_cpu()
  has finished, so preemption must stay disabled until all necessary
  IPIs have been sent.

- To prevent the current CPU from going offline. Being migrated to
  another CPU and then calling csd_lock_wait() could cause a
  use-after-free when smpcfd_dead_cpu() runs as part of the original
  CPU's offline process.

- To protect the per-cpu cfd_data from concurrent modification by other
  tasks on the current CPU. cfd_data contains cpumasks and per-cpu csds.

Before enqueueing a csd, we block on csd_lock() to ensure the previous
async csd->func() has completed, and only then initialize csd->func and
csd->info. After sending the IPI, we spin-wait for the remote CPU to
call csd_unlock(). The csd_lock mechanism therefore already guarantees
csd serialization. If preemption occurs during csd_lock_wait(), other
concurrent smp_call_function_many_cond() calls will simply block until
the previous csd->func() completes:

  task A                                task B

  csd->func = func_a
  send ipis
  preempted by B        --------------->
                                        csd_lock(csd);
                                        // block until last
                                        // func_a finished
                                        csd->func = func_b;
                                        csd->info = info;
                                        ...
                                        send ipis
  switch back to A      <---------------
  csd_lock_wait(csd);
  // block until remote finish func_*

Previous patches replaced the per-cpu cfd->cpumask with a task-local
cpumask, and the percpu csd is now allocated only once and never freed,
so the csd can always be accessed safely. We can therefore enable
preemption before csd_lock_wait(), which makes the potentially
long-running csd_lock_wait() preemptible and migratable.
Signed-off-by: Chuyi Zhou
---
 kernel/smp.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 134b181fb593..e0983d5f41a2 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -838,7 +838,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 				       unsigned int scf_flags,
 				       smp_cond_func_t cond_func)
 {
-	int cpu, last_cpu, this_cpu = smp_processor_id();
+	int cpu, last_cpu, this_cpu;
 	struct call_function_data *cfd;
 	bool wait = scf_flags & SCF_WAIT;
 	struct cpumask *cpumask, *task_mask;
@@ -846,10 +846,10 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	int nr_cpus = 0;
 	bool run_remote = false;
 
-	lockdep_assert_preemption_disabled();
-
 	task_mask = smp_task_ipi_mask(current);
-	preemptible_wait = task_mask;
+	preemptible_wait = task_mask && preemptible();
+
+	this_cpu = get_cpu();
 	cfd = this_cpu_ptr(&cfd_data);
 	cpumask = preemptible_wait ? task_mask : cfd->cpumask;
 
@@ -931,6 +931,19 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 		local_irq_restore(flags);
 	}
 
+	/*
+	 * We may block in csd_lock_wait() for a significant amount of time,
+	 * especially when interrupts are disabled or with a large number of
+	 * remote CPUs. Try to enable preemption before csd_lock_wait().
+	 *
+	 * Use the task_mask instead of cfd->cpumask to avoid concurrent
+	 * modification by other tasks on the same CPU. If preemption occurs
+	 * during csd_lock_wait(), other concurrent
+	 * smp_call_function_many_cond() calls will simply block until the
+	 * previous csd->func() completes.
+	 */
+	if (preemptible_wait)
+		put_cpu();
+
 	if (run_remote && wait) {
 		for_each_cpu(cpu, cpumask) {
 			call_single_data_t *csd;
@@ -939,6 +952,9 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 			csd_lock_wait(csd);
 		}
 	}
+
+	if (!preemptible_wait)
+		put_cpu();
 }
 
 /**
@@ -950,8 +966,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
  * on other CPUs.
  *
  * You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler. Preemption
- * must be disabled when calling this function.
+ * hardware interrupt handler or from a bottom half handler.
  *
  * @func is not called on the local CPU even if @mask contains it. Consider
  * using on_each_cpu_cond_mask() instead if this is not desirable.
-- 
2.20.1

From nobody Sat Apr 25 11:54:50 2026
Subject: [PATCH v5 07/12] smp: Remove preempt_disable from smp_call_function
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:17 +0800
Message-Id: <20260422064022.3791652-8-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

Now that smp_call_function_many_cond() handles the preemption logic
internally, smp_call_function() no longer needs to disable preemption
explicitly. Remove the preempt_disable()/preempt_enable() pair from
smp_call_function().

Signed-off-by: Chuyi Zhou
Reviewed-by: Muchun Song
---
 kernel/smp.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index e0983d5f41a2..7200ce6043bc 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -995,9 +995,8 @@ EXPORT_SYMBOL(smp_call_function_many);
  */
 void smp_call_function(smp_call_func_t func, void *info, int wait)
 {
-	preempt_disable();
-	smp_call_function_many(cpu_online_mask, func, info, wait);
-	preempt_enable();
+	smp_call_function_many_cond(cpu_online_mask, func, info,
+				    wait ? SCF_WAIT : 0, NULL);
 }
 EXPORT_SYMBOL(smp_call_function);
 
-- 
2.20.1

From nobody Sat Apr 25 11:54:50 2026
Subject: [PATCH v5 08/12] smp: Remove preempt_disable from on_each_cpu_cond_mask
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:18 +0800
Message-Id: <20260422064022.3791652-9-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

Now that smp_call_function_many_cond() handles the preemption logic
internally, on_each_cpu_cond_mask() no longer needs to disable preemption
explicitly. Remove the preempt_disable()/preempt_enable() pair from
on_each_cpu_cond_mask().

Signed-off-by: Chuyi Zhou
Reviewed-by: Muchun Song
---
 kernel/smp.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 7200ce6043bc..8e28baa42bcf 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -1118,9 +1118,7 @@ void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
 	if (wait)
 		scf_flags |= SCF_WAIT;
 
-	preempt_disable();
 	smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
-	preempt_enable();
 }
 EXPORT_SYMBOL(on_each_cpu_cond_mask);
 
-- 
2.20.1

From nobody Sat Apr 25 11:54:50 2026
Subject: [PATCH v5 09/12] scftorture: Remove preempt_disable in scftorture_invoke_one
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:19 +0800
Message-Id: <20260422064022.3791652-10-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

Previous patches made the smp_call*() functions handle the preemption
logic internally, so the explicit preempt_disable() surrounding these
calls is unnecessary. Moreover, keeping the external preempt_disable()
would prevent scftorture from exercising the newly narrowed internal
preemption-disabled regions during IPI dispatch. Remove the
preempt_disable()/preempt_enable() pairs from scftorture_invoke_one().

Removing this preemption protection could expose a race with CPU hotplug
when use_cpus_read_lock is false. Specifically, for multi-cast operations
(SCF_PRIM_MANY or SCF_PRIM_ALL), if only one CPU is online,
smp_call_function_many() correctly skips sending IPIs and leaves scfc_out
false. Without preemption disabled, a CPU-hotplug thread could preempt the
test thread, bring a second CPU online, and increase num_online_cpus().
When the test thread resumes, the validation check would see
num_online_cpus() > 1 and falsely trigger the memory-ordering warning,
leaking the scfcp structure. To avoid this false positive, apply the
num_online_cpus() > 1 condition only when use_cpus_read_lock is true,
which guarantees the CPU count stays stable during the check.

Signed-off-by: Chuyi Zhou
---
 kernel/scftorture.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/kernel/scftorture.c b/kernel/scftorture.c
index 327c315f411c..2082f9b44370 100644
--- a/kernel/scftorture.c
+++ b/kernel/scftorture.c
@@ -348,6 +348,8 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 	int ret = 0;
 	struct scf_check *scfcp = NULL;
 	struct scf_selector *scfsp = scf_sel_rand(trsp);
+	bool is_single = (scfsp->scfs_prim == SCF_PRIM_SINGLE ||
+			  scfsp->scfs_prim == SCF_PRIM_SINGLE_RPC);
 
 	if (scfsp->scfs_prim == SCF_PRIM_SINGLE || scfsp->scfs_wait) {
 		scfcp = kmalloc_obj(*scfcp, GFP_ATOMIC);
@@ -364,8 +366,6 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 	}
 	if (use_cpus_read_lock)
 		cpus_read_lock();
-	else
-		preempt_disable();
 	switch (scfsp->scfs_prim) {
 	case SCF_PRIM_RESCHED:
 		if (IS_BUILTIN(CONFIG_SCF_TORTURE_TEST)) {
@@ -411,13 +411,10 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 			if (!ret) {
 				if (use_cpus_read_lock)
 					cpus_read_unlock();
-				else
-					preempt_enable();
+
 				wait_for_completion(&scfcp->scfc_completion);
 				if (use_cpus_read_lock)
 					cpus_read_lock();
-				else
-					preempt_disable();
 			} else {
 				scfp->n_single_rpc_ofl++;
 				scf_add_to_free_list(scfcp);
@@ -452,7 +449,7 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 		scfcp->scfc_out = true;
 	}
 	if (scfcp && scfsp->scfs_wait) {
-		if (WARN_ON_ONCE((num_online_cpus() > 1 || scfsp->scfs_prim == SCF_PRIM_SINGLE) &&
+		if (WARN_ON_ONCE(((use_cpus_read_lock && num_online_cpus() > 1) || is_single) &&
 				 !scfcp->scfc_out)) {
 			pr_warn("%s: Memory-ordering failure, scfs_prim: %d.\n", __func__, scfsp->scfs_prim);
 			atomic_inc(&n_mb_out_errs); // Leak rather than trash!
@@ -463,8 +460,6 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 	}
 	if (use_cpus_read_lock)
 		cpus_read_unlock();
-	else
-		preempt_enable();
 	if (allocfail)
 		schedule_timeout_idle((1 + longwait) * HZ); // Let no-wait handlers complete.
 	else if (!(torture_random(trsp) & 0xfff))
-- 
2.20.1

From nobody Sat Apr 25 11:54:50 2026
Subject: [PATCH v5 10/12] x86/mm: Move flush_tlb_info back to the stack
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:20 +0800
Message-Id: <20260422064022.3791652-11-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>

Commit 3db6d5a5ecaf ("x86/mm/tlb: Remove 'struct flush_tlb_info' from the
stack") converted flush_tlb_info from a stack variable to a per-CPU
variable, which brought a performance improvement of around 3% in an
extreme microbenchmark. However, it also required that all flush_tlb*
operations keep preemption disabled throughout, to prevent concurrent
modification of flush_tlb_info.

flush_tlb* needs to send IPIs to remote CPUs and synchronously wait for
all of them to complete their local TLB flushes. This can take tens of
milliseconds when interrupts are disabled or when many remote CPUs are
involved. To improve kernel real-time behavior, revert flush_tlb_info to
a stack variable and align it to SMP_CACHE_BYTES. In some configurations
SMP_CACHE_BYTES can be large, so the alignment is capped at 64 bytes.
This is a preparation for enabling preemption during TLB flush in the
next patch.

To evaluate the performance impact of this patch, the following program
reproduces the microbenchmark mentioned in commit 3db6d5a5ecaf
("x86/mm/tlb: Remove 'struct flush_tlb_info' from the stack"). The test
environment is an Ice Lake system (Intel(R) Xeon(R) Platinum 8336C) with
128 CPUs and 2 NUMA nodes.
During the test, the threads were bound to specific CPUs, and both pti and
mitigations were disabled:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define NUM_OPS 1000000
#define NUM_THREADS 3
#define NUM_RUNS 5
#define PAGE_SIZE 4096

volatile int stop_threads = 0;

void *busy_wait_thread(void *arg)
{
	while (!stop_threads) {
		__asm__ volatile ("nop");
	}
	return NULL;
}

long long get_usec()
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1000000LL + tv.tv_usec;
}

int main()
{
	pthread_t threads[NUM_THREADS];
	char *addr;
	int i, r;

	addr = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	for (i = 0; i < NUM_THREADS; i++) {
		if (pthread_create(&threads[i], NULL, busy_wait_thread, NULL))
			exit(1);
	}
	printf("Running benchmark: %d runs, %d ops each, %d background threads\n",
	       NUM_RUNS, NUM_OPS, NUM_THREADS);
	for (r = 0; r < NUM_RUNS; r++) {
		long long start, end;

		start = get_usec();
		for (i = 0; i < NUM_OPS; i++) {
			addr[0] = 1;
			if (madvise(addr, PAGE_SIZE, MADV_DONTNEED)) {
				perror("madvise");
				exit(1);
			}
		}
		end = get_usec();
		double duration = (double)(end - start);
		double avg_lat = duration / NUM_OPS;
		printf("Run %d: Total time %.2f us, Avg latency %.4f us/op\n",
		       r + 1, duration, avg_lat);
	}
	stop_threads = 1;
	for (i = 0; i < NUM_THREADS; i++)
		pthread_join(threads[i], NULL);
	munmap(addr, PAGE_SIZE);
	return 0;
}

               base     on-stack-aligned   on-stack-not-aligned
               ----     ----------------   --------------------
avg (usec/op)  2.5278   2.5261             2.5508
stddev         0.0007   0.0027             0.0023

The average latency difference between the baseline (base) and the
properly aligned stack variable (on-stack-aligned) is within the standard
deviation, indicating that the variation is testing noise: reverting to a
properly aligned stack variable causes no performance regression compared
to the per-CPU implementation.
The unaligned version (on-stack-not-aligned) shows a minor performance
drop. This demonstrates that we can improve real-time behavior without
sacrificing throughput.

Suggested-by: Sebastian Andrzej Siewior
Suggested-by: Nadav Amit
Signed-off-by: Chuyi Zhou
---
 arch/x86/include/asm/tlbflush.h |  8 +++-
 arch/x86/mm/tlb.c               | 72 +++++++++------------------------
 2 files changed, 27 insertions(+), 53 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 0545fe75c3fa..f4e4505d4ece 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -211,6 +211,12 @@ extern u16 invlpgb_count_max;
 
 extern void initialize_tlbstate_and_flush(void);
 
+#if SMP_CACHE_BYTES > 64
+#define FLUSH_TLB_INFO_ALIGN	64
+#else
+#define FLUSH_TLB_INFO_ALIGN	SMP_CACHE_BYTES
+#endif
+
 /*
  * TLB flushing:
  *
@@ -249,7 +255,7 @@ struct flush_tlb_info {
 	u8 stride_shift;
 	u8 freed_tables;
 	u8 trim_cpumask;
-};
+} __aligned(FLUSH_TLB_INFO_ALIGN);
 
 void flush_tlb_local(void);
 void flush_tlb_one_user(unsigned long addr);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index af43d177087e..cfc3a72477f5 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1373,28 +1373,12 @@ void flush_tlb_multi(const struct cpumask *cpumask,
  */
 unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
 
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct flush_tlb_info, flush_tlb_info);
-
-#ifdef CONFIG_DEBUG_VM
-static DEFINE_PER_CPU(unsigned int, flush_tlb_info_idx);
-#endif
-
-static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
-			unsigned long start, unsigned long end,
-			unsigned int stride_shift, bool freed_tables,
-			u64 new_tlb_gen)
+static void get_flush_tlb_info(struct flush_tlb_info *info,
+			struct mm_struct *mm,
+			unsigned long start, unsigned long end,
+			unsigned int stride_shift, bool freed_tables,
+			u64 new_tlb_gen)
 {
-	struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);
-
-#ifdef CONFIG_DEBUG_VM
-	/*
-	 * Ensure that the following code is non-reentrant and flush_tlb_info
-	 * is not overwritten. This means no TLB flushing is initiated by
-	 * interrupt handlers and machine-check exception handlers.
-	 */
-	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
-#endif
-
 	/*
 	 * If the number of flushes is so large that a full flush
 	 * would be faster, do a full flush.
@@ -1412,32 +1396,22 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 	info->new_tlb_gen = new_tlb_gen;
 	info->initiating_cpu = smp_processor_id();
 	info->trim_cpumask = 0;
-
-	return info;
-}
-
-static void put_flush_tlb_info(void)
-{
-#ifdef CONFIG_DEBUG_VM
-	/* Complete reentrancy prevention checks */
-	barrier();
-	this_cpu_dec(flush_tlb_info_idx);
-#endif
 }
 
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned int stride_shift,
 				bool freed_tables)
 {
-	struct flush_tlb_info *info;
+	struct flush_tlb_info _info;
+	struct flush_tlb_info *info = &_info;
 	int cpu = get_cpu();
 	u64 new_tlb_gen;
 
 	/* This is also a barrier that synchronizes with switch_mm(). */
 	new_tlb_gen = inc_mm_tlb_gen(mm);
 
-	info = get_flush_tlb_info(mm, start, end, stride_shift, freed_tables,
-				  new_tlb_gen);
+	get_flush_tlb_info(&_info, mm, start, end, stride_shift, freed_tables,
+			   new_tlb_gen);
 
 	/*
	 * flush_tlb_multi() is not optimized for the common case in which only
@@ -1457,7 +1431,6 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 		local_irq_enable();
 	}
 
-	put_flush_tlb_info();
 	put_cpu();
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
@@ -1527,19 +1500,16 @@ static void kernel_tlb_flush_range(struct flush_tlb_info *info)
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	struct flush_tlb_info *info;
+	struct flush_tlb_info info;
 
 	guard(preempt)();
+	get_flush_tlb_info(&info, NULL, start, end, PAGE_SHIFT, false,
+			   TLB_GENERATION_INVALID);
 
-	info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
-				  TLB_GENERATION_INVALID);
-
-	if (info->end == TLB_FLUSH_ALL)
-		kernel_tlb_flush_all(info);
+	if (info.end == TLB_FLUSH_ALL)
+		kernel_tlb_flush_all(&info);
 	else
-		kernel_tlb_flush_range(info);
-
-	put_flush_tlb_info();
+		kernel_tlb_flush_range(&info);
 }
 
 /*
@@ -1707,12 +1677,11 @@ EXPORT_SYMBOL_FOR_KVM(__flush_tlb_all);
 
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
-	struct flush_tlb_info *info;
+	struct flush_tlb_info info;
 
 	int cpu = get_cpu();
-
-	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
-				  TLB_GENERATION_INVALID);
+	get_flush_tlb_info(&info, NULL, 0, TLB_FLUSH_ALL, 0, false,
+			   TLB_GENERATION_INVALID);
 	/*
	 * flush_tlb_multi() is not optimized for the common case in which only
	 * a local TLB flush is needed. Optimize this use-case by calling
@@ -1722,17 +1691,16 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		invlpgb_flush_all_nonglobals();
 		batch->unmapped_pages = false;
 	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
-		flush_tlb_multi(&batch->cpumask, info);
+		flush_tlb_multi(&batch->cpumask, &info);
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
-		flush_tlb_func(info);
+		flush_tlb_func(&info);
 		local_irq_enable();
 	}
 
 	cpumask_clear(&batch->cpumask);
 
-	put_flush_tlb_info();
 	put_cpu();
 }
 
-- 
2.20.1

From nobody Sat Apr 25 11:54:50 2026
smtp.client-ip=209.127.230.114 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="h7FIn0z9" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=2212171451; d=bytedance.com; t=1776840191; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=hJBc7tJV1GDZ0XkoFVNF8pqXPsHEQjYxsWQJKGn58QQ=; b=h7FIn0z906kZENvr8r23vu9zooFNhUwc7B+YZ7t29UlH94vEt2bjEiM8vJlSA32cwh43++ czARziCEvOnxODDOkNswJG3p/2OA6PhzlY3BVdslwqKQIkAw7CSgCkDyqgqoob2xYuupxc sOuo1bQ0Xbygg3LJh1r9sQ7EQDSN7A/s2nyOp8O9gcAYqEoQ9lmtUbzyUWWZEr1mafvoDX aPT64YvAns2raKqI3jUmCv+yP6mYh2tG6j/ZRXgLizxc3dEgC9Iv/AIN+NWmllI7byLq2e ktRNYW010pP5zaNHLTFdDE+opRRFyvxgzX0wrqTmpIT6YsRkUeoTtprqH+saKQ== Subject: [PATCH v5 11/12] x86/mm: Enable preemption during native_flush_tlb_multi Message-Id: <20260422064022.3791652-12-zhouchuyi@bytedance.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 X-Lms-Return-Path: To: , , , , , , , , , , , , Date: Wed, 22 Apr 2026 14:40:21 +0800 Content-Transfer-Encoding: quoted-printable X-Mailer: git-send-email 2.20.1 In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com> Cc: , "Chuyi Zhou" X-Original-From: Chuyi Zhou References: <20260422064022.3791652-1-zhouchuyi@bytedance.com> From: "Chuyi Zhou" Content-Type: text/plain; charset="utf-8" native_flush_tlb_multi() may be frequently called by flush_tlb_mm_range() and arch_tlbbatch_flush() in production environments. When pages are reclaimed or process exit, native_flush_tlb_multi() sends IPIs to remote CPUs and waits for all remote CPUs to complete their local TLB flushes. 
The overall latency can reach tens of milliseconds when there are many
remote CPUs or when other factors intervene (such as interrupts being
disabled on a target CPU). Because flush_tlb_mm_range() and
arch_tlbbatch_flush() always disable preemption across this wait, they
may increase the scheduling latency of other threads on the current CPU.

The previous patch converted flush_tlb_info from a per-cpu variable to an
on-stack variable. Additionally, it is no longer necessary to explicitly
disable preemption before calling smp_call*(), since those functions
handle the preemption logic internally. It is therefore now safe to
enable preemption during native_flush_tlb_multi().

Signed-off-by: Chuyi Zhou
---
 arch/x86/kernel/kvm.c | 4 +++-
 arch/x86/mm/tlb.c     | 9 +++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 3bc062363814..4f7f4c1149b9 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -668,8 +668,10 @@ static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
 	u8 state;
 	int cpu;
 	struct kvm_steal_time *src;
-	struct cpumask *flushmask = this_cpu_cpumask_var_ptr(__pv_cpu_mask);
+	struct cpumask *flushmask;

+	guard(preempt)();
+	flushmask = this_cpu_cpumask_var_ptr(__pv_cpu_mask);
 	cpumask_copy(flushmask, cpumask);
 	/*
 	 * We have to call flush only on online vCPUs. And
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index cfc3a72477f5..58c6f3d2f993 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1421,9 +1421,11 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	if (mm_global_asid(mm)) {
 		broadcast_tlb_flush(info);
 	} else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+		put_cpu();
 		info->trim_cpumask = should_trim_cpumask(mm);
 		flush_tlb_multi(mm_cpumask(mm), info);
 		consider_global_asid(mm);
+		goto invalidate;
 	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
@@ -1432,6 +1434,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	}

 	put_cpu();
+invalidate:
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }

@@ -1691,7 +1694,9 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		invlpgb_flush_all_nonglobals();
 		batch->unmapped_pages = false;
 	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+		put_cpu();
 		flush_tlb_multi(&batch->cpumask, &info);
+		goto clear;
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
@@ -1699,9 +1704,9 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}

-	cpumask_clear(&batch->cpumask);
-
 	put_cpu();
+clear:
+	cpumask_clear(&batch->cpumask);
 }

 /*
-- 
2.20.1

From nobody Sat Apr 25 11:54:50 2026
Subject: [PATCH v5 12/12] x86/mm: Enable preemption during flush_tlb_kernel_range
Message-Id: <20260422064022.3791652-13-zhouchuyi@bytedance.com>
In-Reply-To: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
References: <20260422064022.3791652-1-zhouchuyi@bytedance.com>
From: "Chuyi Zhou"
Date: Wed, 22 Apr 2026 14:40:22 +0800

flush_tlb_kernel_range() is invoked when a kernel memory mapping changes.
On x86 platforms without the INVLPGB feature, we need to send IPIs to
every online CPU and synchronously wait for them to complete
do_kernel_range_flush(). This can be time-consuming when there are many
CPUs or when other factors intervene (such as interrupts being disabled
on a target CPU). Since flush_tlb_kernel_range() always disables
preemption across this wait, it may increase the scheduling latency of
other tasks on the current CPU.

The previous patch converted flush_tlb_info from a per-cpu variable to an
on-stack variable. Additionally, it is no longer necessary to explicitly
disable preemption before calling smp_call*(), since those functions
handle the preemption logic internally. It is therefore now safe to
enable preemption during flush_tlb_kernel_range().

Additionally, make get_flush_tlb_info() use raw_smp_processor_id() to
avoid warnings from check_preemption_disabled().
Signed-off-by: Chuyi Zhou
---
 arch/x86/mm/tlb.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 58c6f3d2f993..c37cc9845abc 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1394,7 +1394,7 @@ static void get_flush_tlb_info(struct flush_tlb_info *info,
 	info->stride_shift	= stride_shift;
 	info->freed_tables	= freed_tables;
 	info->new_tlb_gen	= new_tlb_gen;
-	info->initiating_cpu	= smp_processor_id();
+	info->initiating_cpu	= raw_smp_processor_id();
 	info->trim_cpumask	= 0;
 }

@@ -1461,6 +1461,8 @@ static void invlpgb_kernel_range_flush(struct flush_tlb_info *info)
 {
 	unsigned long addr, nr;

+	guard(preempt)();
+
 	for (addr = info->start; addr < info->end; addr += nr << PAGE_SHIFT) {
 		nr = (info->end - addr) >> PAGE_SHIFT;

@@ -1505,7 +1507,6 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
 	struct flush_tlb_info info;

-	guard(preempt)();
 	get_flush_tlb_info(&info, NULL, start, end, PAGE_SHIFT, false,
 			   TLB_GENERATION_INVALID);

-- 
2.20.1