From: zhidao su
To: sched-ext@lists.linux.dev
Cc: linux-kernel@vger.kernel.org, tj@kernel.org, void@manifault.com,
    arighi@nvidia.com, changwoo@igalia.com, peterz@infradead.org,
    mingo@redhat.com, Su Zhidao
Subject: [PATCH] selftests/sched_ext: Add tests for SCX_ENQ_IMMED and scx_bpf_dsq_reenq()
Date: Sun, 22 Mar 2026 15:20:51 +0800
Message-ID: <20260322072051.993347-1-suzhidao@xiaomi.com>
X-Mailer: git-send-email 2.43.0

From: Su Zhidao

Add three selftests covering features introduced in v7.1:

- dsq_reenq: Verify that scx_bpf_dsq_reenq() on user DSQs triggers
  ops.enqueue() with SCX_ENQ_REENQ and SCX_TASK_REENQ_KFUNC recorded in
  p->scx.flags.

- enq_immed: Verify the SCX_OPS_ALWAYS_ENQ_IMMED slow path, where tasks
  dispatched to a busy CPU's local DSQ are re-enqueued through
  ops.enqueue() with SCX_TASK_REENQ_IMMED.

- consume_immed: Verify SCX_ENQ_IMMED via the consume path, using
  scx_bpf_dsq_move_to_local___v2() with an explicit SCX_ENQ_IMMED.

All three tests skip gracefully on kernels that predate the required
features by checking availability via __COMPAT_has_ksym() /
__COMPAT_read_enum() before loading.
Signed-off-by: Su Zhidao <suzhidao@xiaomi.com>
---
 tools/testing/selftests/sched_ext/Makefile    |   3 +
 .../selftests/sched_ext/consume_immed.bpf.c   |  88 +++++++++++++
 .../selftests/sched_ext/consume_immed.c       | 115 +++++++++++++++++
 .../selftests/sched_ext/dsq_reenq.bpf.c       | 120 ++++++++++++++++++
 tools/testing/selftests/sched_ext/dsq_reenq.c |  95 ++++++++++++++
 .../selftests/sched_ext/enq_immed.bpf.c       |  63 +++++++++
 tools/testing/selftests/sched_ext/enq_immed.c | 117 +++++++++++++++++
 7 files changed, 601 insertions(+)
 create mode 100644 tools/testing/selftests/sched_ext/consume_immed.bpf.c
 create mode 100644 tools/testing/selftests/sched_ext/consume_immed.c
 create mode 100644 tools/testing/selftests/sched_ext/dsq_reenq.bpf.c
 create mode 100644 tools/testing/selftests/sched_ext/dsq_reenq.c
 create mode 100644 tools/testing/selftests/sched_ext/enq_immed.bpf.c
 create mode 100644 tools/testing/selftests/sched_ext/enq_immed.c

diff --git a/tools/testing/selftests/sched_ext/Makefile b/tools/testing/selftests/sched_ext/Makefile
index a3bbe2c7911b..84e4f69b8833 100644
--- a/tools/testing/selftests/sched_ext/Makefile
+++ b/tools/testing/selftests/sched_ext/Makefile
@@ -162,8 +162,11 @@ endef
 all_test_bpfprogs := $(foreach prog,$(wildcard *.bpf.c),$(INCLUDE_DIR)/$(patsubst %.c,%.skel.h,$(prog)))
 
 auto-test-targets := \
+	consume_immed \
 	create_dsq \
 	dequeue \
+	dsq_reenq \
+	enq_immed \
 	enq_last_no_enq_fails \
 	ddsp_bogus_dsq_fail \
 	ddsp_vtimelocal_fail \
diff --git a/tools/testing/selftests/sched_ext/consume_immed.bpf.c b/tools/testing/selftests/sched_ext/consume_immed.bpf.c
new file mode 100644
index 000000000000..9c7808f5abe1
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/consume_immed.bpf.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Validate SCX_ENQ_IMMED semantics through the consume path.
+ *
+ * This is the orthogonal counterpart to enq_immed:
+ *
+ *   enq_immed:     SCX_ENQ_IMMED via scx_bpf_dsq_insert() to local DSQ
+ *                  with SCX_OPS_ALWAYS_ENQ_IMMED
+ *
+ *   consume_immed: SCX_ENQ_IMMED via scx_bpf_dsq_move_to_local() with
+ *                  explicit SCX_ENQ_IMMED in enq_flags (requires v2 kfunc)
+ *
+ * Worker threads belonging to test_tgid are inserted into USER_DSQ.
+ * ops.dispatch() on CPU 0 consumes from USER_DSQ with SCX_ENQ_IMMED.
+ * With multiple workers competing for CPU 0, dsq->nr > 1 triggers the
+ * IMMED slow path (reenqueue with SCX_TASK_REENQ_IMMED).
+ *
+ * Requires scx_bpf_dsq_move_to_local___v2() (v7.1+) for enq_flags support.
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+#define USER_DSQ 0
+
+/* Set by userspace to identify the test process group. */
+const volatile u32 test_tgid;
+
+/*
+ * SCX_TASK_REENQ_REASON_MASK and SCX_TASK_REENQ_IMMED are exported via
+ * vmlinux BTF as part of enum scx_ent_flags.
+ */
+
+u64 nr_consume_immed_reenq;
+
+void BPF_STRUCT_OPS(consume_immed_enqueue, struct task_struct *p,
+		    u64 enq_flags)
+{
+	if (enq_flags & SCX_ENQ_REENQ) {
+		u32 reason = p->scx.flags & SCX_TASK_REENQ_REASON_MASK;
+
+		if (reason == SCX_TASK_REENQ_IMMED)
+			__sync_fetch_and_add(&nr_consume_immed_reenq, 1);
+	}
+
+	if (p->tgid == (pid_t)test_tgid)
+		scx_bpf_dsq_insert(p, USER_DSQ, SCX_SLICE_DFL, enq_flags);
+	else
+		scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL,
+				   enq_flags);
+}
+
+void BPF_STRUCT_OPS(consume_immed_dispatch, s32 cpu, struct task_struct *prev)
+{
+	if (cpu == 0)
+		scx_bpf_dsq_move_to_local(USER_DSQ, SCX_ENQ_IMMED);
+	else
+		scx_bpf_dsq_move_to_local(SCX_DSQ_GLOBAL, 0);
+}
+
+s32 BPF_STRUCT_OPS_SLEEPABLE(consume_immed_init)
+{
+	/*
+	 * scx_bpf_dsq_move_to_local___v2() adds the enq_flags parameter.
+	 * On older kernels the consume path cannot pass SCX_ENQ_IMMED.
+	 */
+	if (!bpf_ksym_exists(scx_bpf_dsq_move_to_local___v2)) {
+		scx_bpf_error("scx_bpf_dsq_move_to_local v2 not available");
+		return -EOPNOTSUPP;
+	}
+
+	return scx_bpf_create_dsq(USER_DSQ, -1);
+}
+
+void BPF_STRUCT_OPS(consume_immed_exit, struct scx_exit_info *ei)
+{
+	UEI_RECORD(uei, ei);
+}
+
+SCX_OPS_DEFINE(consume_immed_ops,
+	       .enqueue		= (void *)consume_immed_enqueue,
+	       .dispatch	= (void *)consume_immed_dispatch,
+	       .init		= (void *)consume_immed_init,
+	       .exit		= (void *)consume_immed_exit,
+	       .name		= "consume_immed")
diff --git a/tools/testing/selftests/sched_ext/consume_immed.c b/tools/testing/selftests/sched_ext/consume_immed.c
new file mode 100644
index 000000000000..7f9594cfa9cb
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/consume_immed.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Userspace test for SCX_ENQ_IMMED via the consume path.
+ *
+ * Validates that scx_bpf_dsq_move_to_local(USER_DSQ, SCX_ENQ_IMMED) on
+ * a busy CPU triggers the IMMED slow path, re-enqueuing tasks through
+ * ops.enqueue() with SCX_TASK_REENQ_IMMED.
+ *
+ * Skipped on single-CPU systems where local DSQ contention cannot occur.
+ */
+#define _GNU_SOURCE
+#include <bpf/bpf.h>
+#include <pthread.h>
+#include <scx/common.h>
+#include <stdio.h>
+#include <unistd.h>
+#include "consume_immed.bpf.skel.h"
+#include "scx_test.h"
+
+#define NUM_WORKERS 4
+#define TEST_DURATION_SEC 3
+
+static volatile bool stop_workers;
+
+static void *worker_fn(void *arg)
+{
+	while (!stop_workers) {
+		volatile unsigned long i;
+
+		for (i = 0; i < 100000UL; i++)
+			;
+		usleep(100);
+	}
+	return NULL;
+}
+
+static enum scx_test_status setup(void **ctx)
+{
+	struct consume_immed *skel;
+
+	if (!__COMPAT_has_ksym("scx_bpf_dsq_move_to_local___v2")) {
+		fprintf(stderr,
+			"SKIP: scx_bpf_dsq_move_to_local v2 not available\n");
+		return SCX_TEST_SKIP;
+	}
+
+	skel = consume_immed__open();
+	SCX_FAIL_IF(!skel, "Failed to open");
+	SCX_ENUM_INIT(skel);
+
+	skel->rodata->test_tgid = (u32)getpid();
+
+	SCX_FAIL_IF(consume_immed__load(skel), "Failed to load skel");
+
+	*ctx = skel;
+	return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+	struct consume_immed *skel = ctx;
+	struct bpf_link *link;
+	pthread_t workers[NUM_WORKERS];
+	long nproc;
+	int i;
+	u64 reenq;
+
+	nproc = sysconf(_SC_NPROCESSORS_ONLN);
+	if (nproc <= 1) {
+		fprintf(stderr,
+			"SKIP: single CPU, consume IMMED slow path may not trigger\n");
+		return SCX_TEST_SKIP;
+	}
+
+	link = bpf_map__attach_struct_ops(skel->maps.consume_immed_ops);
+	SCX_FAIL_IF(!link, "Failed to attach scheduler");
+
+	stop_workers = false;
+	for (i = 0; i < NUM_WORKERS; i++) {
+		SCX_FAIL_IF(pthread_create(&workers[i], NULL, worker_fn, NULL),
+			    "Failed to create worker %d", i);
+	}
+
+	sleep(TEST_DURATION_SEC);
+
+	reenq = skel->bss->nr_consume_immed_reenq;
+
+	stop_workers = true;
+	for (i = 0; i < NUM_WORKERS; i++)
+		pthread_join(workers[i], NULL);
+
+	bpf_link__destroy(link);
+
+	SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
+	SCX_GT(reenq, 0);
+
+	return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+	struct consume_immed *skel = ctx;
+
+	consume_immed__destroy(skel);
+}
+
+struct scx_test consume_immed = {
+	.name = "consume_immed",
+	.description = "Verify SCX_ENQ_IMMED slow path via "
+		       "scx_bpf_dsq_move_to_local() consume path",
+	.setup = setup,
+	.run = run,
+	.cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&consume_immed)
diff --git a/tools/testing/selftests/sched_ext/dsq_reenq.bpf.c b/tools/testing/selftests/sched_ext/dsq_reenq.bpf.c
new file mode 100644
index 000000000000..750bb10508df
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/dsq_reenq.bpf.c
@@ -0,0 +1,120 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Validate scx_bpf_dsq_reenq() semantics on user DSQs.
+ *
+ * A BPF timer periodically calls scx_bpf_dsq_reenq() on a user DSQ,
+ * causing tasks to be re-enqueued through ops.enqueue() with SCX_ENQ_REENQ
+ * set and SCX_TASK_REENQ_KFUNC recorded in p->scx.flags.
+ *
+ * The test verifies:
+ * - scx_bpf_dsq_reenq() triggers ops.enqueue() with SCX_ENQ_REENQ
+ * - The reenqueue reason is SCX_TASK_REENQ_KFUNC (bit 12 set)
+ * - Tasks are correctly re-dispatched after reenqueue
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+#define USER_DSQ 0
+
+/*
+ * SCX_TASK_REENQ_REASON_MASK and SCX_TASK_REENQ_KFUNC are exported via
+ * vmlinux BTF as part of enum scx_ent_flags.
+ */
+
+/* 5ms timer interval */
+#define REENQ_TIMER_NS (5 * 1000 * 1000ULL)
+
+/*
+ * Number of times ops.enqueue() was called with SCX_ENQ_REENQ set and
+ * SCX_TASK_REENQ_KFUNC recorded in p->scx.flags.
+ */
+u64 nr_reenq_kfunc;
+
+struct reenq_timer_val {
+	struct bpf_timer timer;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, u32);
+	__type(value, struct reenq_timer_val);
+} reenq_timer SEC(".maps");
+
+/*
+ * Timer callback: reenqueue all tasks currently sitting on USER_DSQ back
+ * through ops.enqueue() with SCX_ENQ_REENQ | SCX_TASK_REENQ_KFUNC.
+ */
+static int reenq_timerfn(void *map, int *key, struct bpf_timer *timer)
+{
+	scx_bpf_dsq_reenq(USER_DSQ, 0);
+	bpf_timer_start(timer, REENQ_TIMER_NS, 0);
+	return 0;
+}
+
+void BPF_STRUCT_OPS(dsq_reenq_enqueue, struct task_struct *p, u64 enq_flags)
+{
+	/*
+	 * If this is a kfunc-triggered reenqueue, verify that
+	 * SCX_TASK_REENQ_KFUNC is recorded in p->scx.flags.
+	 */
+	if (enq_flags & SCX_ENQ_REENQ) {
+		u32 reason = p->scx.flags & SCX_TASK_REENQ_REASON_MASK;
+
+		if (reason == SCX_TASK_REENQ_KFUNC)
+			__sync_fetch_and_add(&nr_reenq_kfunc, 1);
+	}
+
+	/*
+	 * Always dispatch to USER_DSQ so the timer can reenqueue tasks again
+	 * on the next tick.
+	 */
+	scx_bpf_dsq_insert(p, USER_DSQ, SCX_SLICE_DFL, enq_flags);
+}
+
+void BPF_STRUCT_OPS(dsq_reenq_dispatch, s32 cpu, struct task_struct *prev)
+{
+	scx_bpf_dsq_move_to_local(USER_DSQ, 0);
+}
+
+s32 BPF_STRUCT_OPS_SLEEPABLE(dsq_reenq_init)
+{
+	struct reenq_timer_val *tval;
+	u32 key = 0;
+	s32 ret;
+
+	ret = scx_bpf_create_dsq(USER_DSQ, -1);
+	if (ret)
+		return ret;
+
+	if (!__COMPAT_has_generic_reenq()) {
+		scx_bpf_error("scx_bpf_dsq_reenq() not available");
+		return -EOPNOTSUPP;
+	}
+
+	tval = bpf_map_lookup_elem(&reenq_timer, &key);
+	if (!tval)
+		return -ESRCH;
+
+	bpf_timer_init(&tval->timer, &reenq_timer, CLOCK_MONOTONIC);
+	bpf_timer_set_callback(&tval->timer, reenq_timerfn);
+
+	return bpf_timer_start(&tval->timer, REENQ_TIMER_NS, 0);
+}
+
+void BPF_STRUCT_OPS(dsq_reenq_exit, struct scx_exit_info *ei)
+{
+	UEI_RECORD(uei, ei);
+}
+
+SCX_OPS_DEFINE(dsq_reenq_ops,
+	       .enqueue		= (void *)dsq_reenq_enqueue,
+	       .dispatch	= (void *)dsq_reenq_dispatch,
+	       .init		= (void *)dsq_reenq_init,
+	       .exit		= (void *)dsq_reenq_exit,
+	       .timeout_ms	= 10000,
+	       .name		= "dsq_reenq")
diff --git a/tools/testing/selftests/sched_ext/dsq_reenq.c b/tools/testing/selftests/sched_ext/dsq_reenq.c
new file mode 100644
index 000000000000..b0d99f9c9a9a
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/dsq_reenq.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Userspace test for scx_bpf_dsq_reenq() semantics.
+ *
+ * Attaches the dsq_reenq BPF scheduler, runs workload threads that
+ * sleep and yield to keep tasks on USER_DSQ, waits for the BPF timer
+ * to fire several times, then verifies that at least one kfunc-triggered
+ * reenqueue was observed (ops.enqueue() called with SCX_ENQ_REENQ and
+ * SCX_TASK_REENQ_KFUNC in p->scx.flags).
+ */
+#include <bpf/bpf.h>
+#include <pthread.h>
+#include <sched.h>
+#include <unistd.h>
+#include "dsq_reenq.bpf.skel.h"
+#include "scx_test.h"
+
+#define NUM_WORKERS 4
+#define TEST_DURATION_SEC 3
+
+static volatile bool stop_workers;
+static pthread_t workers[NUM_WORKERS];
+
+static void *worker_fn(void *arg)
+{
+	while (!stop_workers) {
+		usleep(500);
+		sched_yield();
+	}
+	return NULL;
+}
+
+static enum scx_test_status setup(void **ctx)
+{
+	struct dsq_reenq *skel;
+
+	if (!__COMPAT_has_ksym("scx_bpf_dsq_reenq")) {
+		fprintf(stderr, "SKIP: scx_bpf_dsq_reenq() not available\n");
+		return SCX_TEST_SKIP;
+	}
+
+	skel = dsq_reenq__open();
+	SCX_FAIL_IF(!skel, "Failed to open");
+	SCX_ENUM_INIT(skel);
+	SCX_FAIL_IF(dsq_reenq__load(skel), "Failed to load skel");
+
+	*ctx = skel;
+	return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+	struct dsq_reenq *skel = ctx;
+	struct bpf_link *link;
+	int i;
+
+	link = bpf_map__attach_struct_ops(skel->maps.dsq_reenq_ops);
+	SCX_FAIL_IF(!link, "Failed to attach scheduler");
+
+	stop_workers = false;
+	for (i = 0; i < NUM_WORKERS; i++) {
+		SCX_FAIL_IF(pthread_create(&workers[i], NULL, worker_fn, NULL),
+			    "Failed to create worker %d", i);
+	}
+
+	sleep(TEST_DURATION_SEC);
+
+	stop_workers = true;
+	for (i = 0; i < NUM_WORKERS; i++)
+		pthread_join(workers[i], NULL);
+
+	bpf_link__destroy(link);
+
+	SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
+	SCX_GT(skel->bss->nr_reenq_kfunc, 0);
+
+	return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+	struct dsq_reenq *skel = ctx;
+
+	dsq_reenq__destroy(skel);
+}
+
+struct scx_test dsq_reenq = {
+	.name = "dsq_reenq",
+	.description = "Verify scx_bpf_dsq_reenq() triggers enqueue with "
+		       "SCX_ENQ_REENQ and SCX_TASK_REENQ_KFUNC reason",
+	.setup = setup,
+	.run = run,
+	.cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&dsq_reenq)
diff --git a/tools/testing/selftests/sched_ext/enq_immed.bpf.c b/tools/testing/selftests/sched_ext/enq_immed.bpf.c
new file mode 100644
index 000000000000..805dd0256218
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/enq_immed.bpf.c
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Validate SCX_ENQ_IMMED fast/slow path semantics via the direct insert path.
+ *
+ * With SCX_OPS_ALWAYS_ENQ_IMMED set, the kernel automatically adds
+ * SCX_ENQ_IMMED to every local DSQ dispatch. When the target CPU's local
+ * DSQ already has tasks queued (dsq->nr > 1), the kernel re-enqueues the
+ * task through ops.enqueue() with SCX_ENQ_REENQ and SCX_TASK_REENQ_IMMED
+ * recorded in p->scx.flags (the "slow path").
+ *
+ * Worker threads are pinned to CPU 0 via SCX_DSQ_LOCAL_ON to guarantee
+ * local DSQ contention.
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+/* Set by userspace to identify the test process group. */
+const volatile u32 test_tgid;
+
+/*
+ * SCX_TASK_REENQ_REASON_MASK and SCX_TASK_REENQ_IMMED are exported via
+ * vmlinux BTF as part of enum scx_ent_flags.
+ */
+
+u64 nr_immed_reenq;
+
+void BPF_STRUCT_OPS(enq_immed_enqueue, struct task_struct *p, u64 enq_flags)
+{
+	if (enq_flags & SCX_ENQ_REENQ) {
+		u32 reason = p->scx.flags & SCX_TASK_REENQ_REASON_MASK;
+
+		if (reason == SCX_TASK_REENQ_IMMED)
+			__sync_fetch_and_add(&nr_immed_reenq, 1);
+	}
+
+	if (p->tgid == (pid_t)test_tgid)
+		scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | 0, SCX_SLICE_DFL,
+				   enq_flags);
+	else
+		scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL,
+				   enq_flags);
+}
+
+void BPF_STRUCT_OPS(enq_immed_dispatch, s32 cpu, struct task_struct *prev)
+{
+	scx_bpf_dsq_move_to_local(SCX_DSQ_GLOBAL, 0);
+}
+
+void BPF_STRUCT_OPS(enq_immed_exit, struct scx_exit_info *ei)
+{
+	UEI_RECORD(uei, ei);
+}
+
+SCX_OPS_DEFINE(enq_immed_ops,
+	       .enqueue		= (void *)enq_immed_enqueue,
+	       .dispatch	= (void *)enq_immed_dispatch,
+	       .exit		= (void *)enq_immed_exit,
+	       .flags		= SCX_OPS_ALWAYS_ENQ_IMMED,
+	       .name		= "enq_immed")
diff --git a/tools/testing/selftests/sched_ext/enq_immed.c b/tools/testing/selftests/sched_ext/enq_immed.c
new file mode 100644
index 000000000000..44681e41975d
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/enq_immed.c
@@ -0,0 +1,117 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Userspace test for SCX_ENQ_IMMED via the direct insert path.
+ *
+ * Validates that dispatching tasks to a busy CPU's local DSQ with
+ * SCX_OPS_ALWAYS_ENQ_IMMED triggers the IMMED slow path: the kernel
+ * re-enqueues the task through ops.enqueue() with SCX_TASK_REENQ_IMMED.
+ *
+ * Skipped on single-CPU systems where local DSQ contention cannot occur.
+ */
+#define _GNU_SOURCE
+#include <bpf/bpf.h>
+#include <pthread.h>
+#include <scx/common.h>
+#include <stdio.h>
+#include <unistd.h>
+#include "enq_immed.bpf.skel.h"
+#include "scx_test.h"
+
+#define NUM_WORKERS 4
+#define TEST_DURATION_SEC 3
+
+static volatile bool stop_workers;
+
+static void *worker_fn(void *arg)
+{
+	while (!stop_workers) {
+		volatile unsigned long i;
+
+		for (i = 0; i < 100000UL; i++)
+			;
+		usleep(100);
+	}
+	return NULL;
+}
+
+static enum scx_test_status setup(void **ctx)
+{
+	struct enq_immed *skel;
+	u64 v;
+
+	if (!__COMPAT_read_enum("scx_ops_flags",
+				"SCX_OPS_ALWAYS_ENQ_IMMED", &v)) {
+		fprintf(stderr,
+			"SKIP: SCX_OPS_ALWAYS_ENQ_IMMED not available\n");
+		return SCX_TEST_SKIP;
+	}
+
+	skel = enq_immed__open();
+	SCX_FAIL_IF(!skel, "Failed to open");
+	SCX_ENUM_INIT(skel);
+
+	skel->rodata->test_tgid = (u32)getpid();
+
+	SCX_FAIL_IF(enq_immed__load(skel), "Failed to load skel");
+
+	*ctx = skel;
+	return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+	struct enq_immed *skel = ctx;
+	struct bpf_link *link;
+	pthread_t workers[NUM_WORKERS];
+	long nproc;
+	int i;
+	u64 reenq;
+
+	nproc = sysconf(_SC_NPROCESSORS_ONLN);
+	if (nproc <= 1) {
+		fprintf(stderr,
+			"SKIP: single CPU, IMMED slow path may not trigger\n");
+		return SCX_TEST_SKIP;
+	}
+
+	link = bpf_map__attach_struct_ops(skel->maps.enq_immed_ops);
+	SCX_FAIL_IF(!link, "Failed to attach scheduler");
+
+	stop_workers = false;
+	for (i = 0; i < NUM_WORKERS; i++) {
+		SCX_FAIL_IF(pthread_create(&workers[i], NULL, worker_fn, NULL),
+			    "Failed to create worker %d", i);
+	}
+
+	sleep(TEST_DURATION_SEC);
+
+	reenq = skel->bss->nr_immed_reenq;
+
+	stop_workers = true;
+	for (i = 0; i < NUM_WORKERS; i++)
+		pthread_join(workers[i], NULL);
+
+	bpf_link__destroy(link);
+
+	SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
+	SCX_GT(reenq, 0);
+
+	return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+	struct enq_immed *skel = ctx;
+
+	enq_immed__destroy(skel);
+}
+
+struct scx_test enq_immed = {
+	.name = "enq_immed",
+	.description = "Verify SCX_ENQ_IMMED slow path via direct insert "
+		       "with SCX_OPS_ALWAYS_ENQ_IMMED",
+	.setup = setup,
+	.run = run,
+	.cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&enq_immed)
-- 
2.43.0
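P.S. For anyone wanting to exercise the new tests locally: they register
with the existing sched_ext selftest runner, so (assuming the standard
kselftest build flow and the runner's `-t` name filter as found in
current trees; adjust paths for your checkout) something like the
following should work:

```shell
# Build the sched_ext selftests from the top of the kernel tree
# (standard kselftest flow; assumed, adjust as needed).
make -C tools/testing/selftests/sched_ext

# Run only the three new tests via the runner's name filter.
# Root is required to attach a struct_ops scheduler.
cd tools/testing/selftests/sched_ext
sudo ./runner -t dsq_reenq
sudo ./runner -t enq_immed
sudo ./runner -t consume_immed
```

On kernels that predate the v7.1 features each test should report SKIP
rather than FAIL, via the __COMPAT_* checks done before load.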