From: Changwoo Min
To: tj@kernel.org, void@manifault.com, arighi@nvidia.com
Cc: kernel-dev@igalia.com, linux-kernel@vger.kernel.org, Changwoo Min
Subject: [PATCH 5/7] sched_ext: Add an event, SCX_EVENT_RQ_BYPASSING_OPS
Date: Fri, 17 Jan 2025 00:15:41 +0900
Message-ID: <20250116151543.80163-6-changwoo@igalia.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250116151543.80163-1-changwoo@igalia.com>
References: <20250116151543.80163-1-changwoo@igalia.com>

Add a core event, SCX_EVENT_RQ_BYPASSING_OPS, which counts how many
operations are bypassed while bypass mode is set.

Signed-off-by: Changwoo Min
---
 kernel/sched/ext.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 094e19f5fb78..44b44d963a0c 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1485,6 +1485,11 @@ struct scx_event_stat {
	 * is dispatched to a local DSQ when exiting.
	 */
	u64 ENQ_LOCAL_EXITING;
+
+	/*
+	 * When the bypassing mode is set, the number of bypassed operations.
+	 */
+	u64 RQ_BYPASSING_OPS;
 };
 
 #define SCX_EVENT_IDX(e) (offsetof(struct scx_event_stat, e)/sizeof(u64))
@@ -1500,6 +1505,7 @@ enum scx_event_kind {
	SCX_EVENT_DEFINE(OFFLINE_LOCAL_DSQ),
	SCX_EVENT_DEFINE(CNTD_RUN_WO_ENQ),
	SCX_EVENT_DEFINE(ENQ_LOCAL_EXITING),
+	SCX_EVENT_DEFINE(RQ_BYPASSING_OPS),
	SCX_EVENT_END = SCX_EVENT_END_IDX(),
 };
 
@@ -1508,6 +1514,7 @@ static const char *scx_event_stat_str[] = {
	[SCX_EVENT_OFFLINE_LOCAL_DSQ]	= "offline_local_dsq",
	[SCX_EVENT_CNTD_RUN_WO_ENQ]	= "cntd_run_wo_enq",
	[SCX_EVENT_ENQ_LOCAL_EXITING]	= "enq_local_exiting",
+	[SCX_EVENT_RQ_BYPASSING_OPS]	= "rq_bypassing_ops",
 };
 
 /*
@@ -2087,8 +2094,10 @@ static void do_enqueue_task(struct rq *rq, struct task_struct *p, u64 enq_flags,
	if (!scx_rq_online(rq))
		goto local;
 
-	if (scx_rq_bypassing(rq))
+	if (scx_rq_bypassing(rq)) {
+		scx_add_event(RQ_BYPASSING_OPS, 1);
		goto global;
+	}
 
	if (p->scx.ddsp_dsq_id != SCX_DSQ_INVALID)
		goto direct;
@@ -2933,6 +2942,8 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
		    scx_rq_bypassing(rq))) {
		rq->scx.flags |= SCX_RQ_BAL_KEEP;
		scx_add_event(CNTD_RUN_WO_ENQ, 1);
+		if (scx_rq_bypassing(rq))
+			scx_add_event(RQ_BYPASSING_OPS, 1);
		goto has_tasks;
	}
	rq->scx.flags &= ~SCX_RQ_IN_BALANCE;
@@ -3708,6 +3719,9 @@ static int select_task_rq_scx(struct task_struct *p, int prev_cpu, int wake_flag
			p->scx.slice = SCX_SLICE_DFL;
			p->scx.ddsp_dsq_id = SCX_DSQ_LOCAL;
		}
+
+		if (scx_rq_bypassing(task_rq(p)))
+			scx_add_event(RQ_BYPASSING_OPS, 1);
		return cpu;
	}
}
@@ -3799,6 +3813,8 @@ void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
	 */
	if (SCX_HAS_OP(update_idle) && do_notify && !scx_rq_bypassing(rq))
		SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle);
+	else if (scx_rq_bypassing(rq))
+		scx_add_event(RQ_BYPASSING_OPS, 1);
 
	/*
	 * Update the idle masks:
@@ -3940,6 +3956,7 @@ static void task_tick_scx(struct rq *rq, struct task_struct *curr, int queued)
	if (scx_rq_bypassing(rq)) {
		curr->scx.slice = 0;
		touch_core_sched(rq, curr);
+		scx_add_event(RQ_BYPASSING_OPS, 1);
	} else if (SCX_HAS_OP(tick)) {
		SCX_CALL_OP(SCX_KF_REST, tick, curr);
	}
@@ -7131,8 +7148,10 @@ __bpf_kfunc void scx_bpf_kick_cpu(s32 cpu, u64 flags)
	 * lead to irq_work_queue() malfunction such as infinite busy wait for
	 * IRQ status update. Suppress kicking.
	 */
-	if (scx_rq_bypassing(this_rq))
+	if (scx_rq_bypassing(this_rq)) {
+		scx_add_event(RQ_BYPASSING_OPS, 1);
		goto out;
+	}
 
	/*
	 * Actual kicking is bounced to kick_cpus_irq_workfn() to avoid nesting
-- 
2.48.1