From nobody Tue Feb 10 00:57:48 2026
From: "Chuyi Zhou"
To: linux-kernel@vger.kernel.org
Subject: [PATCH 04/11] smp: Use on-stack cpumask in smp_call_function_many_cond
Date: Tue, 3 Feb 2026 19:23:54 +0800
Message-Id: <20260203112401.3889029-5-zhouchuyi@bytedance.com>
In-Reply-To: <20260203112401.3889029-1-zhouchuyi@bytedance.com>
References: <20260203112401.3889029-1-zhouchuyi@bytedance.com>
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: git-send-email 2.20.1
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This patch uses an on-stack cpumask in place of the percpu cfd cpumask in
smp_call_function_many_cond(). alloc_cpumask_var() may fail when
CONFIG_CPUMASK_OFFSTACK is enabled; in that extreme case, fall back to
cfd->cpumask.

This is preparation for the next patch.
Signed-off-by: Chuyi Zhou
---
 kernel/smp.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index f572716c3c7d..35948afced2e 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -805,11 +805,17 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	int cpu, last_cpu, this_cpu = smp_processor_id();
 	struct call_function_data *cfd;
 	bool wait = scf_flags & SCF_WAIT;
+	bool preemptible_wait = true;
+	cpumask_var_t cpumask_stack;
+	struct cpumask *cpumask;
 	int nr_cpus = 0;
 	bool run_remote = false;
 
 	lockdep_assert_preemption_disabled();
 
+	if (!alloc_cpumask_var(&cpumask_stack, GFP_ATOMIC))
+		preemptible_wait = false;
+
 	/*
 	 * Can deadlock when called with interrupts disabled.
 	 * We allow cpu's that are not yet online though, as no one else can
@@ -831,15 +837,18 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	/* Check if we need remote execution, i.e., any CPU excluding this one. */
 	if (cpumask_any_and_but(mask, cpu_online_mask, this_cpu) < nr_cpu_ids) {
 		cfd = this_cpu_ptr(&cfd_data);
-		cpumask_and(cfd->cpumask, mask, cpu_online_mask);
-		__cpumask_clear_cpu(this_cpu, cfd->cpumask);
+
+		cpumask = preemptible_wait ? cpumask_stack : cfd->cpumask;
+
+		cpumask_and(cpumask, mask, cpu_online_mask);
+		__cpumask_clear_cpu(this_cpu, cpumask);
 
 		cpumask_clear(cfd->cpumask_ipi);
-		for_each_cpu(cpu, cfd->cpumask) {
+		for_each_cpu(cpu, cpumask) {
 			call_single_data_t *csd = per_cpu_ptr(cfd->csd, cpu);
 
 			if (cond_func && !cond_func(cpu, info)) {
-				__cpumask_clear_cpu(cpu, cfd->cpumask);
+				__cpumask_clear_cpu(cpu, cpumask);
 				continue;
 			}
 
@@ -890,13 +899,16 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	}
 
 	if (run_remote && wait) {
-		for_each_cpu(cpu, cfd->cpumask) {
+		for_each_cpu(cpu, cpumask) {
 			call_single_data_t *csd;
 
 			csd = per_cpu_ptr(cfd->csd, cpu);
 			csd_lock_wait(csd);
 		}
 	}
+
+	if (preemptible_wait)
+		free_cpumask_var(cpumask_stack);
 }
 
 /**
-- 
2.20.1