From: zhidao su
To: sched-ext@lists.linux.dev
Cc: linux-kernel@vger.kernel.org, tj@kernel.org, void@manifault.com, arighi@nvidia.com, changwoo@igalia.com, peterz@infradead.org, mingo@redhat.com, zhidao su
Subject: [PATCH 4/4] selftests/sched_ext: consume_immed: fix reliability with CPU affinity
Date: Fri, 27 Mar 2026 04:42:38 +0800
Message-ID: <20260326204238.3755737-5-suzhidao@xiaomi.com>
In-Reply-To: <20260326204238.3755737-1-suzhidao@xiaomi.com>
References: <20260326204238.3755737-1-suzhidao@xiaomi.com>

Two bugs prevented the IMMED slow path from triggering reliably:

1. Dispatch consumed at most one task per ops.dispatch() call. The IMMED
   slow path requires dsq->nr > 1 in the local DSQ after a
   scx_bpf_dsq_move_to_local() call. With only one call per dispatch,
   the local DSQ never accumulated two tasks.

   Fix: loop up to 8 times in dispatch, draining USER_DSQ into the
   local DSQ. On the second iteration the local DSQ already has one
   task; inserting a second raises dsq->nr to 2, triggering
   schedule_reenq_local and calling ops.enqueue() with
   SCX_TASK_REENQ_IMMED.

2. Workers were not pinned to CPU 0, so the scheduler spread them
   across all CPUs. USER_DSQ typically had only 0-1 tasks when CPU 0
   dispatched, so the loop's second iteration rarely found a second
   task.
   Fix: pin all NUM_WORKERS threads to CPU 0 via
   pthread_attr_setaffinity_np(). With all workers competing for CPU 0,
   USER_DSQ always has a backlog, and every dispatch loop reliably
   accumulates 2+ tasks.

Also move the nr_consume_immed_reenq read to after bpf_link__destroy()
so the kernel has fully flushed all counter updates before we check the
assertion.

Fixes: c50dcf533149 ("selftests/sched_ext: Add tests for SCX_ENQ_IMMED and scx_bpf_dsq_reenq()")
Signed-off-by: zhidao su
---
 .../selftests/sched_ext/consume_immed.bpf.c | 19 +++++++++++++++----
 .../selftests/sched_ext/consume_immed.c     | 15 ++++++++++++---
 2 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/sched_ext/consume_immed.bpf.c b/tools/testing/selftests/sched_ext/consume_immed.bpf.c
index 9c7808f5abe1..a2a4ee2ee95b 100644
--- a/tools/testing/selftests/sched_ext/consume_immed.bpf.c
+++ b/tools/testing/selftests/sched_ext/consume_immed.bpf.c
@@ -55,10 +55,21 @@ void BPF_STRUCT_OPS(consume_immed_enqueue, struct task_struct *p,
 
 void BPF_STRUCT_OPS(consume_immed_dispatch, s32 cpu, struct task_struct *prev)
 {
-	if (cpu == 0)
-		scx_bpf_dsq_move_to_local(USER_DSQ, SCX_ENQ_IMMED);
-	else
-		scx_bpf_dsq_move_to_local(SCX_DSQ_GLOBAL, 0);
+	int i;
+
+	if (cpu != 0)
+		return;
+
+	/*
+	 * Drain USER_DSQ into the local DSQ with SCX_ENQ_IMMED. Once two or
+	 * more tasks accumulate in the local DSQ, dsq->nr > 1 triggers the
+	 * IMMED slow path (schedule_reenq_local), re-enqueuing IMMED tasks
+	 * through ops.enqueue() with SCX_TASK_REENQ_IMMED.
+	 */
+	for (i = 0; i < 8; i++) {
+		if (!scx_bpf_dsq_move_to_local(USER_DSQ, SCX_ENQ_IMMED))
+			break;
+	}
 }
 
 s32 BPF_STRUCT_OPS_SLEEPABLE(consume_immed_init)
diff --git a/tools/testing/selftests/sched_ext/consume_immed.c b/tools/testing/selftests/sched_ext/consume_immed.c
index 7f9594cfa9cb..41e66cd5e879 100644
--- a/tools/testing/selftests/sched_ext/consume_immed.c
+++ b/tools/testing/selftests/sched_ext/consume_immed.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "consume_immed.bpf.skel.h"
@@ -77,20 +78,28 @@ static enum scx_test_status run(void *ctx)
 
 	stop_workers = false;
 	for (i = 0; i < NUM_WORKERS; i++) {
-		SCX_FAIL_IF(pthread_create(&workers[i], NULL, worker_fn, NULL),
+		pthread_attr_t attr;
+		cpu_set_t cpuset;
+
+		pthread_attr_init(&attr);
+		CPU_ZERO(&cpuset);
+		CPU_SET(0, &cpuset);
+		pthread_attr_setaffinity_np(&attr, sizeof(cpuset), &cpuset);
+		SCX_FAIL_IF(pthread_create(&workers[i], &attr, worker_fn, NULL),
 			    "Failed to create worker %d", i);
+		pthread_attr_destroy(&attr);
 	}
 
 	sleep(TEST_DURATION_SEC);
 
-	reenq = skel->bss->nr_consume_immed_reenq;
-
 	stop_workers = true;
 	for (i = 0; i < NUM_WORKERS; i++)
 		pthread_join(workers[i], NULL);
 
 	bpf_link__destroy(link);
 
+	reenq = skel->bss->nr_consume_immed_reenq;
+
 	SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
 	SCX_GT(reenq, 0);
 
-- 
2.43.0