From: zhidao su
To: sched-ext@lists.linux.dev
Cc: linux-kernel@vger.kernel.org, tj@kernel.org, void@manifault.com,
	arighi@nvidia.com, changwoo@igalia.com, peterz@infradead.org,
	mingo@redhat.com, Su Zhidao, zhidao su
Subject: [PATCH v2] selftests/sched_ext: Add tests for SCX_ENQ_IMMED and
 scx_bpf_dsq_reenq()
Date: Sun, 22 Mar 2026 15:35:33 +0800
Message-ID: <20260322073533.1022768-1-suzhidao@xiaomi.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260322072051.993347-1-suzhidao@xiaomi.com>
References: <20260322072051.993347-1-suzhidao@xiaomi.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Su Zhidao

Add three selftests covering features introduced in v7.1:

- dsq_reenq: Verify that scx_bpf_dsq_reenq() on user DSQs triggers
  ops.enqueue() with SCX_ENQ_REENQ and SCX_TASK_REENQ_KFUNC in
  p->scx.flags.

- enq_immed: Verify the SCX_OPS_ALWAYS_ENQ_IMMED slow path, where tasks
  dispatched to a busy CPU's local DSQ are re-enqueued through
  ops.enqueue() with SCX_TASK_REENQ_IMMED.

- consume_immed: Verify SCX_ENQ_IMMED via the consume path, using
  scx_bpf_dsq_move_to_local___v2() with an explicit SCX_ENQ_IMMED.

All three tests skip gracefully on kernels that predate the required
features by checking availability via __COMPAT_has_ksym() /
__COMPAT_read_enum() before loading.
Signed-off-by: zhidao su
---
 tools/testing/selftests/sched_ext/Makefile    |   3 +
 .../selftests/sched_ext/consume_immed.bpf.c   |  88 +++++++++++++
 .../selftests/sched_ext/consume_immed.c       | 115 +++++++++++++++++
 .../selftests/sched_ext/dsq_reenq.bpf.c       | 120 ++++++++++++++++++
 tools/testing/selftests/sched_ext/dsq_reenq.c |  95 ++++++++++++++
 .../selftests/sched_ext/enq_immed.bpf.c       |  63 +++++++++
 tools/testing/selftests/sched_ext/enq_immed.c | 117 +++++++++++++++++
 7 files changed, 601 insertions(+)
 create mode 100644 tools/testing/selftests/sched_ext/consume_immed.bpf.c
 create mode 100644 tools/testing/selftests/sched_ext/consume_immed.c
 create mode 100644 tools/testing/selftests/sched_ext/dsq_reenq.bpf.c
 create mode 100644 tools/testing/selftests/sched_ext/dsq_reenq.c
 create mode 100644 tools/testing/selftests/sched_ext/enq_immed.bpf.c
 create mode 100644 tools/testing/selftests/sched_ext/enq_immed.c

diff --git a/tools/testing/selftests/sched_ext/Makefile b/tools/testing/selftests/sched_ext/Makefile
index a3bbe2c7911b..84e4f69b8833 100644
--- a/tools/testing/selftests/sched_ext/Makefile
+++ b/tools/testing/selftests/sched_ext/Makefile
@@ -162,8 +162,11 @@ endef
 all_test_bpfprogs := $(foreach prog,$(wildcard *.bpf.c),$(INCLUDE_DIR)/$(patsubst %.c,%.skel.h,$(prog)))
 
 auto-test-targets := \
+	consume_immed \
 	create_dsq \
 	dequeue \
+	dsq_reenq \
+	enq_immed \
 	enq_last_no_enq_fails \
 	ddsp_bogus_dsq_fail \
 	ddsp_vtimelocal_fail \
diff --git a/tools/testing/selftests/sched_ext/consume_immed.bpf.c b/tools/testing/selftests/sched_ext/consume_immed.bpf.c
new file mode 100644
index 000000000000..9c7808f5abe1
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/consume_immed.bpf.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Validate SCX_ENQ_IMMED semantics through the consume path.
+ *
+ * This is the orthogonal counterpart to enq_immed:
+ *
+ * enq_immed:		SCX_ENQ_IMMED via scx_bpf_dsq_insert() to local DSQ
+ *			with SCX_OPS_ALWAYS_ENQ_IMMED
+ *
+ * consume_immed:	SCX_ENQ_IMMED via scx_bpf_dsq_move_to_local() with
+ *			explicit SCX_ENQ_IMMED in enq_flags (requires v2 kfunc)
+ *
+ * Worker threads belonging to test_tgid are inserted into USER_DSQ.
+ * ops.dispatch() on CPU 0 consumes from USER_DSQ with SCX_ENQ_IMMED.
+ * With multiple workers competing for CPU 0, dsq->nr > 1 triggers the
+ * IMMED slow path (reenqueue with SCX_TASK_REENQ_IMMED).
+ *
+ * Requires scx_bpf_dsq_move_to_local___v2() (v7.1+) for enq_flags support.
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+#define USER_DSQ 0
+
+/* Set by userspace to identify the test process group. */
+const volatile u32 test_tgid;
+
+/*
+ * SCX_TASK_REENQ_REASON_MASK and SCX_TASK_REENQ_IMMED are exported via
+ * vmlinux BTF as part of enum scx_ent_flags.
+ */
+
+u64 nr_consume_immed_reenq;
+
+void BPF_STRUCT_OPS(consume_immed_enqueue, struct task_struct *p,
+		    u64 enq_flags)
+{
+	if (enq_flags & SCX_ENQ_REENQ) {
+		u32 reason = p->scx.flags & SCX_TASK_REENQ_REASON_MASK;
+
+		if (reason == SCX_TASK_REENQ_IMMED)
+			__sync_fetch_and_add(&nr_consume_immed_reenq, 1);
+	}
+
+	if (p->tgid == (pid_t)test_tgid)
+		scx_bpf_dsq_insert(p, USER_DSQ, SCX_SLICE_DFL, enq_flags);
+	else
+		scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL,
+				   enq_flags);
+}
+
+void BPF_STRUCT_OPS(consume_immed_dispatch, s32 cpu, struct task_struct *prev)
+{
+	if (cpu == 0)
+		scx_bpf_dsq_move_to_local(USER_DSQ, SCX_ENQ_IMMED);
+	else
+		scx_bpf_dsq_move_to_local(SCX_DSQ_GLOBAL, 0);
+}
+
+s32 BPF_STRUCT_OPS_SLEEPABLE(consume_immed_init)
+{
+	/*
+	 * scx_bpf_dsq_move_to_local___v2() adds the enq_flags parameter.
+	 * On older kernels the consume path cannot pass SCX_ENQ_IMMED.
+	 */
+	if (!bpf_ksym_exists(scx_bpf_dsq_move_to_local___v2)) {
+		scx_bpf_error("scx_bpf_dsq_move_to_local v2 not available");
+		return -EOPNOTSUPP;
+	}
+
+	return scx_bpf_create_dsq(USER_DSQ, -1);
+}
+
+void BPF_STRUCT_OPS(consume_immed_exit, struct scx_exit_info *ei)
+{
+	UEI_RECORD(uei, ei);
+}
+
+SCX_OPS_DEFINE(consume_immed_ops,
+	       .enqueue		= (void *)consume_immed_enqueue,
+	       .dispatch	= (void *)consume_immed_dispatch,
+	       .init		= (void *)consume_immed_init,
+	       .exit		= (void *)consume_immed_exit,
+	       .name		= "consume_immed")
diff --git a/tools/testing/selftests/sched_ext/consume_immed.c b/tools/testing/selftests/sched_ext/consume_immed.c
new file mode 100644
index 000000000000..7f9594cfa9cb
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/consume_immed.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Userspace test for SCX_ENQ_IMMED via the consume path.
+ *
+ * Validates that scx_bpf_dsq_move_to_local(USER_DSQ, SCX_ENQ_IMMED) on
+ * a busy CPU triggers the IMMED slow path, re-enqueuing tasks through
+ * ops.enqueue() with SCX_TASK_REENQ_IMMED.
+ *
+ * Skipped on single-CPU systems where local DSQ contention cannot occur.
+ */
+#define _GNU_SOURCE
+#include <bpf/bpf.h>
+#include <pthread.h>
+#include <sched.h>
+#include <stdio.h>
+#include <unistd.h>
+#include "consume_immed.bpf.skel.h"
+#include "scx_test.h"
+
+#define NUM_WORKERS 4
+#define TEST_DURATION_SEC 3
+
+static volatile bool stop_workers;
+
+static void *worker_fn(void *arg)
+{
+	while (!stop_workers) {
+		volatile unsigned long i;
+
+		for (i = 0; i < 100000UL; i++)
+			;
+		usleep(100);
+	}
+	return NULL;
+}
+
+static enum scx_test_status setup(void **ctx)
+{
+	struct consume_immed *skel;
+
+	if (!__COMPAT_has_ksym("scx_bpf_dsq_move_to_local___v2")) {
+		fprintf(stderr,
+			"SKIP: scx_bpf_dsq_move_to_local v2 not available\n");
+		return SCX_TEST_SKIP;
+	}
+
+	skel = consume_immed__open();
+	SCX_FAIL_IF(!skel, "Failed to open");
+	SCX_ENUM_INIT(skel);
+
+	skel->rodata->test_tgid = (u32)getpid();
+
+	SCX_FAIL_IF(consume_immed__load(skel), "Failed to load skel");
+
+	*ctx = skel;
+	return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+	struct consume_immed *skel = ctx;
+	struct bpf_link *link;
+	pthread_t workers[NUM_WORKERS];
+	long nproc;
+	int i;
+	u64 reenq;
+
+	nproc = sysconf(_SC_NPROCESSORS_ONLN);
+	if (nproc <= 1) {
+		fprintf(stderr,
+			"SKIP: single CPU, consume IMMED slow path may not trigger\n");
+		return SCX_TEST_SKIP;
+	}
+
+	link = bpf_map__attach_struct_ops(skel->maps.consume_immed_ops);
+	SCX_FAIL_IF(!link, "Failed to attach scheduler");
+
+	stop_workers = false;
+	for (i = 0; i < NUM_WORKERS; i++) {
+		SCX_FAIL_IF(pthread_create(&workers[i], NULL, worker_fn, NULL),
+			    "Failed to create worker %d", i);
+	}
+
+	sleep(TEST_DURATION_SEC);
+
+	reenq = skel->bss->nr_consume_immed_reenq;
+
+	stop_workers = true;
+	for (i = 0; i < NUM_WORKERS; i++)
+		pthread_join(workers[i], NULL);
+
+	bpf_link__destroy(link);
+
+	SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
+	SCX_GT(reenq, 0);
+
+	return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+	struct consume_immed *skel = ctx;
+
+	consume_immed__destroy(skel);
+}
+
+struct scx_test consume_immed = {
+	.name = "consume_immed",
+	.description = "Verify SCX_ENQ_IMMED slow path via "
+		       "scx_bpf_dsq_move_to_local() consume path",
+	.setup = setup,
+	.run = run,
+	.cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&consume_immed)
diff --git a/tools/testing/selftests/sched_ext/dsq_reenq.bpf.c b/tools/testing/selftests/sched_ext/dsq_reenq.bpf.c
new file mode 100644
index 000000000000..750bb10508df
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/dsq_reenq.bpf.c
@@ -0,0 +1,120 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Validate scx_bpf_dsq_reenq() semantics on user DSQs.
+ *
+ * A BPF timer periodically calls scx_bpf_dsq_reenq() on a user DSQ,
+ * causing tasks to be re-enqueued through ops.enqueue() with SCX_ENQ_REENQ
+ * set and SCX_TASK_REENQ_KFUNC recorded in p->scx.flags.
+ *
+ * The test verifies:
+ * - scx_bpf_dsq_reenq() triggers ops.enqueue() with SCX_ENQ_REENQ
+ * - The reenqueue reason is SCX_TASK_REENQ_KFUNC (bit 12 set)
+ * - Tasks are correctly re-dispatched after reenqueue
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+#define USER_DSQ 0
+
+/*
+ * SCX_TASK_REENQ_REASON_MASK and SCX_TASK_REENQ_KFUNC are exported via
+ * vmlinux BTF as part of enum scx_ent_flags.
+ */
+
+/* 5ms timer interval */
+#define REENQ_TIMER_NS (5 * 1000 * 1000ULL)
+
+/*
+ * Number of times ops.enqueue() was called with SCX_ENQ_REENQ set and
+ * SCX_TASK_REENQ_KFUNC recorded in p->scx.flags.
+ */
+u64 nr_reenq_kfunc;
+
+struct reenq_timer_val {
+	struct bpf_timer timer;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, u32);
+	__type(value, struct reenq_timer_val);
+} reenq_timer SEC(".maps");
+
+/*
+ * Timer callback: reenqueue all tasks currently sitting on USER_DSQ back
+ * through ops.enqueue() with SCX_ENQ_REENQ | SCX_TASK_REENQ_KFUNC.
+ */
+static int reenq_timerfn(void *map, int *key, struct bpf_timer *timer)
+{
+	scx_bpf_dsq_reenq(USER_DSQ, 0);
+	bpf_timer_start(timer, REENQ_TIMER_NS, 0);
+	return 0;
+}
+
+void BPF_STRUCT_OPS(dsq_reenq_enqueue, struct task_struct *p, u64 enq_flags)
+{
+	/*
+	 * If this is a kfunc-triggered reenqueue, verify that
+	 * SCX_TASK_REENQ_KFUNC is recorded in p->scx.flags.
+	 */
+	if (enq_flags & SCX_ENQ_REENQ) {
+		u32 reason = p->scx.flags & SCX_TASK_REENQ_REASON_MASK;
+
+		if (reason == SCX_TASK_REENQ_KFUNC)
+			__sync_fetch_and_add(&nr_reenq_kfunc, 1);
+	}
+
+	/*
+	 * Always dispatch to USER_DSQ so the timer can reenqueue tasks again
+	 * on the next tick.
+	 */
+	scx_bpf_dsq_insert(p, USER_DSQ, SCX_SLICE_DFL, enq_flags);
+}
+
+void BPF_STRUCT_OPS(dsq_reenq_dispatch, s32 cpu, struct task_struct *prev)
+{
+	scx_bpf_dsq_move_to_local(USER_DSQ, 0);
+}
+
+s32 BPF_STRUCT_OPS_SLEEPABLE(dsq_reenq_init)
+{
+	struct reenq_timer_val *tval;
+	u32 key = 0;
+	s32 ret;
+
+	ret = scx_bpf_create_dsq(USER_DSQ, -1);
+	if (ret)
+		return ret;
+
+	if (!__COMPAT_has_generic_reenq()) {
+		scx_bpf_error("scx_bpf_dsq_reenq() not available");
+		return -EOPNOTSUPP;
+	}
+
+	tval = bpf_map_lookup_elem(&reenq_timer, &key);
+	if (!tval)
+		return -ESRCH;
+
+	bpf_timer_init(&tval->timer, &reenq_timer, CLOCK_MONOTONIC);
+	bpf_timer_set_callback(&tval->timer, reenq_timerfn);
+
+	return bpf_timer_start(&tval->timer, REENQ_TIMER_NS, 0);
+}
+
+void BPF_STRUCT_OPS(dsq_reenq_exit, struct scx_exit_info *ei)
+{
+	UEI_RECORD(uei, ei);
+}
+
+SCX_OPS_DEFINE(dsq_reenq_ops,
+	       .enqueue		= (void *)dsq_reenq_enqueue,
+	       .dispatch	= (void *)dsq_reenq_dispatch,
+	       .init		= (void *)dsq_reenq_init,
+	       .exit		= (void *)dsq_reenq_exit,
+	       .timeout_ms	= 10000,
+	       .name		= "dsq_reenq")
diff --git a/tools/testing/selftests/sched_ext/dsq_reenq.c b/tools/testing/selftests/sched_ext/dsq_reenq.c
new file mode 100644
index 000000000000..b0d99f9c9a9a
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/dsq_reenq.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Userspace test for scx_bpf_dsq_reenq() semantics.
+ *
+ * Attaches the dsq_reenq BPF scheduler, runs workload threads that
+ * sleep and yield to keep tasks on USER_DSQ, waits for the BPF timer
+ * to fire several times, then verifies that at least one kfunc-triggered
+ * reenqueue was observed (ops.enqueue() called with SCX_ENQ_REENQ and
+ * SCX_TASK_REENQ_KFUNC in p->scx.flags).
+ */
+#include <bpf/bpf.h>
+#include <pthread.h>
+#include <sched.h>
+#include <unistd.h>
+#include "dsq_reenq.bpf.skel.h"
+#include "scx_test.h"
+
+#define NUM_WORKERS 4
+#define TEST_DURATION_SEC 3
+
+static volatile bool stop_workers;
+static pthread_t workers[NUM_WORKERS];
+
+static void *worker_fn(void *arg)
+{
+	while (!stop_workers) {
+		usleep(500);
+		sched_yield();
+	}
+	return NULL;
+}
+
+static enum scx_test_status setup(void **ctx)
+{
+	struct dsq_reenq *skel;
+
+	if (!__COMPAT_has_ksym("scx_bpf_dsq_reenq")) {
+		fprintf(stderr, "SKIP: scx_bpf_dsq_reenq() not available\n");
+		return SCX_TEST_SKIP;
+	}
+
+	skel = dsq_reenq__open();
+	SCX_FAIL_IF(!skel, "Failed to open");
+	SCX_ENUM_INIT(skel);
+	SCX_FAIL_IF(dsq_reenq__load(skel), "Failed to load skel");
+
+	*ctx = skel;
+	return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+	struct dsq_reenq *skel = ctx;
+	struct bpf_link *link;
+	int i;
+
+	link = bpf_map__attach_struct_ops(skel->maps.dsq_reenq_ops);
+	SCX_FAIL_IF(!link, "Failed to attach scheduler");
+
+	stop_workers = false;
+	for (i = 0; i < NUM_WORKERS; i++) {
+		SCX_FAIL_IF(pthread_create(&workers[i], NULL, worker_fn, NULL),
+			    "Failed to create worker %d", i);
+	}
+
+	sleep(TEST_DURATION_SEC);
+
+	stop_workers = true;
+	for (i = 0; i < NUM_WORKERS; i++)
+		pthread_join(workers[i], NULL);
+
+	bpf_link__destroy(link);
+
+	SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
+	SCX_GT(skel->bss->nr_reenq_kfunc, 0);
+
+	return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+	struct dsq_reenq *skel = ctx;
+
+	dsq_reenq__destroy(skel);
+}
+
+struct scx_test dsq_reenq = {
+	.name = "dsq_reenq",
+	.description = "Verify scx_bpf_dsq_reenq() triggers enqueue with "
+		       "SCX_ENQ_REENQ and SCX_TASK_REENQ_KFUNC reason",
+	.setup = setup,
+	.run = run,
+	.cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&dsq_reenq)
diff --git a/tools/testing/selftests/sched_ext/enq_immed.bpf.c b/tools/testing/selftests/sched_ext/enq_immed.bpf.c
new file mode 100644
index 000000000000..805dd0256218
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/enq_immed.bpf.c
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Validate SCX_ENQ_IMMED fast/slow path semantics via the direct insert path.
+ *
+ * With SCX_OPS_ALWAYS_ENQ_IMMED set, the kernel automatically adds
+ * SCX_ENQ_IMMED to every local DSQ dispatch. When the target CPU's local
+ * DSQ already has tasks queued (dsq->nr > 1), the kernel re-enqueues the
+ * task through ops.enqueue() with SCX_ENQ_REENQ and SCX_TASK_REENQ_IMMED
+ * recorded in p->scx.flags (the "slow path").
+ *
+ * Worker threads are pinned to CPU 0 via SCX_DSQ_LOCAL_ON to guarantee
+ * local DSQ contention.
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+/* Set by userspace to identify the test process group. */
+const volatile u32 test_tgid;
+
+/*
+ * SCX_TASK_REENQ_REASON_MASK and SCX_TASK_REENQ_IMMED are exported via
+ * vmlinux BTF as part of enum scx_ent_flags.
+ */
+
+u64 nr_immed_reenq;
+
+void BPF_STRUCT_OPS(enq_immed_enqueue, struct task_struct *p, u64 enq_flags)
+{
+	if (enq_flags & SCX_ENQ_REENQ) {
+		u32 reason = p->scx.flags & SCX_TASK_REENQ_REASON_MASK;
+
+		if (reason == SCX_TASK_REENQ_IMMED)
+			__sync_fetch_and_add(&nr_immed_reenq, 1);
+	}
+
+	if (p->tgid == (pid_t)test_tgid)
+		scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | 0, SCX_SLICE_DFL,
+				   enq_flags);
+	else
+		scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL,
+				   enq_flags);
+}
+
+void BPF_STRUCT_OPS(enq_immed_dispatch, s32 cpu, struct task_struct *prev)
+{
+	scx_bpf_dsq_move_to_local(SCX_DSQ_GLOBAL, 0);
+}
+
+void BPF_STRUCT_OPS(enq_immed_exit, struct scx_exit_info *ei)
+{
+	UEI_RECORD(uei, ei);
+}
+
+SCX_OPS_DEFINE(enq_immed_ops,
+	       .enqueue		= (void *)enq_immed_enqueue,
+	       .dispatch	= (void *)enq_immed_dispatch,
+	       .exit		= (void *)enq_immed_exit,
+	       .flags		= SCX_OPS_ALWAYS_ENQ_IMMED,
+	       .name		= "enq_immed")
diff --git a/tools/testing/selftests/sched_ext/enq_immed.c b/tools/testing/selftests/sched_ext/enq_immed.c
new file mode 100644
index 000000000000..44681e41975d
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/enq_immed.c
@@ -0,0 +1,117 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Userspace test for SCX_ENQ_IMMED via the direct insert path.
+ *
+ * Validates that dispatching tasks to a busy CPU's local DSQ with
+ * SCX_OPS_ALWAYS_ENQ_IMMED triggers the IMMED slow path: the kernel
+ * re-enqueues the task through ops.enqueue() with SCX_TASK_REENQ_IMMED.
+ *
+ * Skipped on single-CPU systems where local DSQ contention cannot occur.
+ */
+#define _GNU_SOURCE
+#include <bpf/bpf.h>
+#include <pthread.h>
+#include <sched.h>
+#include <stdio.h>
+#include <unistd.h>
+#include "enq_immed.bpf.skel.h"
+#include "scx_test.h"
+
+#define NUM_WORKERS 4
+#define TEST_DURATION_SEC 3
+
+static volatile bool stop_workers;
+
+static void *worker_fn(void *arg)
+{
+	while (!stop_workers) {
+		volatile unsigned long i;
+
+		for (i = 0; i < 100000UL; i++)
+			;
+		usleep(100);
+	}
+	return NULL;
+}
+
+static enum scx_test_status setup(void **ctx)
+{
+	struct enq_immed *skel;
+	u64 v;
+
+	if (!__COMPAT_read_enum("scx_ops_flags",
+				"SCX_OPS_ALWAYS_ENQ_IMMED", &v)) {
+		fprintf(stderr,
+			"SKIP: SCX_OPS_ALWAYS_ENQ_IMMED not available\n");
+		return SCX_TEST_SKIP;
+	}
+
+	skel = enq_immed__open();
+	SCX_FAIL_IF(!skel, "Failed to open");
+	SCX_ENUM_INIT(skel);
+
+	skel->rodata->test_tgid = (u32)getpid();
+
+	SCX_FAIL_IF(enq_immed__load(skel), "Failed to load skel");
+
+	*ctx = skel;
+	return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+	struct enq_immed *skel = ctx;
+	struct bpf_link *link;
+	pthread_t workers[NUM_WORKERS];
+	long nproc;
+	int i;
+	u64 reenq;
+
+	nproc = sysconf(_SC_NPROCESSORS_ONLN);
+	if (nproc <= 1) {
+		fprintf(stderr,
+			"SKIP: single CPU, IMMED slow path may not trigger\n");
+		return SCX_TEST_SKIP;
+	}
+
+	link = bpf_map__attach_struct_ops(skel->maps.enq_immed_ops);
+	SCX_FAIL_IF(!link, "Failed to attach scheduler");
+
+	stop_workers = false;
+	for (i = 0; i < NUM_WORKERS; i++) {
+		SCX_FAIL_IF(pthread_create(&workers[i], NULL, worker_fn, NULL),
+			    "Failed to create worker %d", i);
+	}
+
+	sleep(TEST_DURATION_SEC);
+
+	reenq = skel->bss->nr_immed_reenq;
+
+	stop_workers = true;
+	for (i = 0; i < NUM_WORKERS; i++)
+		pthread_join(workers[i], NULL);
+
+	bpf_link__destroy(link);
+
+	SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
+	SCX_GT(reenq, 0);
+
+	return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+	struct enq_immed *skel = ctx;
+
+	enq_immed__destroy(skel);
+}
+
+struct scx_test enq_immed = {
+	.name = "enq_immed",
+	.description = "Verify SCX_ENQ_IMMED slow path via direct insert "
+		       "with SCX_OPS_ALWAYS_ENQ_IMMED",
+	.setup = setup,
+	.run = run,
+	.cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&enq_immed)
-- 
2.43.0