From: Qiliang Yuan
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Tejun Heo,
	Andrea Righi, Emil Tsalapatis, Qiliang Yuan, Dan Schatzberg,
	Jake Hillion, zhidao su, David Dai
Cc: Qiliang Yuan, David Vernet, Changwoo Min, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
	Douglas Anderson, Ryan Newton, sched-ext@lists.linux.dev,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2] sched/ext: Add cpumask to skip unsuitable dispatch queues
Date: Wed, 4 Feb 2026 04:34:18 -0500
Message-ID: <20260204093435.3915393-1-realwujing@gmail.com>
X-Mailer: git-send-email 2.51.0

Add a cpus_allowed cpumask to struct scx_dispatch_q to track the union
of the affinity masks of all tasks enqueued in a user-defined DSQ. This
allows a CPU to quickly skip DSQs that contain no tasks runnable on it,
avoiding wasteful O(N) scans.

- Allocate and free cpus_allowed only for user-defined DSQs.
- Use free_dsq_rcu_callback() to safely free the DSQ together with its
  nested mask.
- Update the mask in dispatch_enqueue() using cpumask_copy() for the
  first task and cpumask_or() for subsequent ones; skip the update if
  the mask is already full.
- Update the DSQ mask in set_cpus_allowed_scx() when a task's affinity
  changes while it is enqueued.
- Handle allocation failures in scx_bpf_create_dsq() to prevent memory
  leaks.

This optimization improves performance with many DSQs and tight
affinity constraints: the per-enqueue bitwise overhead is significantly
lower than the cache misses incurred by iterating over tasks that
cannot run on the current CPU.

Signed-off-by: Qiliang Yuan
Signed-off-by: Qiliang Yuan
---
v2:
- Fix memory leak by adding an RCU callback to free dsq->cpus_allowed.
- Handle affinity changes while a task is in a DSQ via
  set_cpus_allowed_scx().
- Ensure dsq->cpus_allowed is only allocated for user DSQs.
- Handle allocation failures in scx_bpf_create_dsq().
- Optimize the enqueue path by using cpumask_copy() for the first task
  and skipping the OR if the mask is already full.
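To make the scheme easier to follow in isolation, here is a minimal,
self-contained userspace sketch of the union-mask idea. The fake_task
and fake_dsq types, the 8-bit masks, NR_CPUS and the helper names are
simplified stand-ins for illustration only, not the kernel's
task_struct, struct scx_dispatch_q, struct cpumask or the scx API:
enqueueing copies the first task's mask, ORs in later ones, and the
consume-side check rejects a whole queue with a single bit test.

/* Userspace sketch of the union-mask skip, not kernel code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8

struct fake_task {
	uint8_t cpus_allowed;	/* bit i set => task may run on CPU i */
};

struct fake_dsq {
	unsigned int nr;	/* number of queued tasks */
	uint8_t cpus_allowed;	/* union of all queued tasks' masks */
};

static void dsq_enqueue(struct fake_dsq *dsq, const struct fake_task *p)
{
	dsq->nr++;
	if (dsq->nr == 1)
		dsq->cpus_allowed = p->cpus_allowed;	/* first task: copy */
	else if (dsq->cpus_allowed != (uint8_t)~0u)
		dsq->cpus_allowed |= p->cpus_allowed;	/* later tasks: OR in */
}

/* Mirrors the consume-side check: skip the queue without scanning it. */
static bool dsq_may_have_task_for(const struct fake_dsq *dsq, int cpu)
{
	return dsq->nr && (dsq->cpus_allowed & (1u << cpu));
}

int main(void)
{
	struct fake_dsq dsq = { 0 };
	struct fake_task a = { .cpus_allowed = 0x03 };	/* CPUs 0-1 */
	struct fake_task b = { .cpus_allowed = 0x0c };	/* CPUs 2-3 */

	dsq_enqueue(&dsq, &a);
	dsq_enqueue(&dsq, &b);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: %s\n", cpu,
		       dsq_may_have_task_for(&dsq, cpu) ? "scan queue" : "skip");
	return 0;
}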
 include/linux/sched/ext.h |  1 +
 kernel/sched/ext.c        | 68 ++++++++++++++++++++++++++++++++++++---
 2 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/include/linux/sched/ext.h b/include/linux/sched/ext.h
index bcb962d5ee7d..f20e57cf53a3 100644
--- a/include/linux/sched/ext.h
+++ b/include/linux/sched/ext.h
@@ -79,6 +79,7 @@ struct scx_dispatch_q {
 	struct rhash_head	hash_node;
 	struct llist_node	free_node;
 	struct rcu_head		rcu;
+	struct cpumask		*cpus_allowed;	/* union of all tasks' allowed cpus */
 };
 
 /* scx_entity.flags */
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index afe28c04d5aa..0ae3728e08b8 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1120,8 +1120,16 @@ static void dispatch_enqueue(struct scx_sched *sch, struct scx_dispatch_q *dsq,
 
 	if (is_local)
 		local_dsq_post_enq(dsq, p, enq_flags);
-	else
+	else {
+		/* Update cpumask to track union of all tasks' allowed CPUs */
+		if (dsq->cpus_allowed) {
+			if (dsq->nr == 1)
+				cpumask_copy(dsq->cpus_allowed, p->cpus_ptr);
+			else if (!cpumask_full(dsq->cpus_allowed))
+				cpumask_or(dsq->cpus_allowed, dsq->cpus_allowed, p->cpus_ptr);
+		}
 		raw_spin_unlock(&dsq->lock);
+	}
 }
 
 static void task_unlink_from_dsq(struct task_struct *p,
@@ -1138,6 +1146,10 @@ static void task_unlink_from_dsq(struct task_struct *p,
 	list_del_init(&p->scx.dsq_list.node);
 	dsq_mod_nr(dsq, -1);
 
+	/* Clear cpumask when queue becomes empty to prevent saturation */
+	if (dsq->nr == 0 && dsq->cpus_allowed)
+		cpumask_clear(dsq->cpus_allowed);
+
 	if (!(dsq->id & SCX_DSQ_FLAG_BUILTIN) && dsq->first_task == p) {
 		struct task_struct *first_task;
 
@@ -1897,6 +1909,14 @@ static bool consume_dispatch_q(struct scx_sched *sch, struct rq *rq,
 	if (list_empty(&dsq->list))
 		return false;
 
+	/*
+	 * O(1) optimization: Check if any task in the queue can run on this CPU.
+	 * If the cpumask is allocated and this CPU is not in the allowed set,
+	 * we can skip the entire queue without scanning.
+	 */
+	if (dsq->cpus_allowed && !cpumask_test_cpu(cpu_of(rq), dsq->cpus_allowed))
+		return false;
+
 	raw_spin_lock(&dsq->lock);
 
 	nldsq_for_each_task(p, dsq) {
@@ -2616,9 +2636,25 @@ static void set_cpus_allowed_scx(struct task_struct *p,
 				 struct affinity_context *ac)
 {
 	struct scx_sched *sch = scx_root;
+	struct scx_dispatch_q *dsq;
 
 	set_cpus_allowed_common(p, ac);
 
+	/*
+	 * If the task is currently in a DSQ, update the DSQ's allowed mask.
+	 * As the task's affinity has changed, the DSQ's union mask must
+	 * be updated to reflect the new allowed CPUs.
+	 */
+	dsq = p->scx.dsq;
+	if (dsq && dsq->cpus_allowed) {
+		unsigned long flags;
+
+		raw_spin_lock_irqsave(&dsq->lock, flags);
+		if (p->scx.dsq == dsq)
+			cpumask_or(dsq->cpus_allowed, dsq->cpus_allowed, p->cpus_ptr);
+		raw_spin_unlock_irqrestore(&dsq->lock, flags);
+	}
+
 	/*
 	 * The effective cpumask is stored in @p->cpus_ptr which may temporarily
 	 * differ from the configured one in @p->cpus_mask. Always tell the bpf
@@ -3390,13 +3426,29 @@ DEFINE_SCHED_CLASS(ext) = {
 #endif
 };
 
-static void init_dsq(struct scx_dispatch_q *dsq, u64 dsq_id)
+static int init_dsq(struct scx_dispatch_q *dsq, u64 dsq_id)
 {
 	memset(dsq, 0, sizeof(*dsq));
 
 	raw_spin_lock_init(&dsq->lock);
 	INIT_LIST_HEAD(&dsq->list);
 	dsq->id = dsq_id;
+
+	/* Allocate cpumask for tracking allowed CPUs only for user DSQs */
+	if (!(dsq_id & SCX_DSQ_FLAG_BUILTIN)) {
+		dsq->cpus_allowed = kzalloc(cpumask_size(), GFP_KERNEL);
+		if (!dsq->cpus_allowed)
+			return -ENOMEM;
+	}
+	return 0;
+}
+
+static void free_dsq_rcu_callback(struct rcu_head *rcu)
+{
+	struct scx_dispatch_q *dsq = container_of(rcu, struct scx_dispatch_q, rcu);
+
+	kfree(dsq->cpus_allowed);
+	kfree(dsq);
 }
 
 static void free_dsq_irq_workfn(struct irq_work *irq_work)
@@ -3405,7 +3457,7 @@ static void free_dsq_irq_workfn(struct irq_work *irq_work)
 	struct scx_dispatch_q *dsq, *tmp_dsq;
 
 	llist_for_each_entry_safe(dsq, tmp_dsq, to_free, free_node)
-		kfree_rcu(dsq, rcu);
+		call_rcu(&dsq->rcu, free_dsq_rcu_callback);
 }
 
 static DEFINE_IRQ_WORK(free_dsq_irq_work, free_dsq_irq_workfn);
@@ -6298,7 +6350,11 @@ __bpf_kfunc s32 scx_bpf_create_dsq(u64 dsq_id, s32 node)
 	if (!dsq)
 		return -ENOMEM;
 
-	init_dsq(dsq, dsq_id);
+	ret = init_dsq(dsq, dsq_id);
+	if (ret) {
+		kfree(dsq);
+		return ret;
+	}
 
 	rcu_read_lock();
 
@@ -6310,8 +6366,10 @@ __bpf_kfunc s32 scx_bpf_create_dsq(u64 dsq_id, s32 node)
 		ret = -ENODEV;
 
 	rcu_read_unlock();
-	if (ret)
+	if (ret) {
+		kfree(dsq->cpus_allowed);
 		kfree(dsq);
+	}
 	return ret;
 }
 
-- 
2.51.0