From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo,
 Johannes Weiner, Zefan Li, Dave Airlie, Daniel Vetter, Rob Clark,
 Stéphane Marchesin, "T . J . Mercier", Kenny.Ho@amd.com,
 Christian König, Brian Welty, Tvrtko Ursulin
Subject: [PATCH 14/17] drm/i915: Implement cgroup controller over budget throttling
Date: Wed, 12 Jul 2023 12:46:02 +0100
Message-Id: <20230712114605.519432-15-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230712114605.519432-1-tvrtko.ursulin@linux.intel.com>
References: <20230712114605.519432-1-tvrtko.ursulin@linux.intel.com>

From: Tvrtko Ursulin

When notified by the drm core that we are over our allotted time budget,
the i915 instance will check whether any of the GPU engines it is
responsible for is fully saturated. If one is, and the client in question
is using that engine, the client will be throttled. For now throttling is
done simplistically, by lowering the scheduling priority of the client's
requests while it remains marked as throttled.
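To make the policy concrete, here is a standalone sketch of the mapping
from over-budget ratio to scheduling priority (illustrative only, not
code from the patch; budget_to_priority and MIN_USER_PRIO are made-up
stand-ins for the throttle computation and
I915_CONTEXT_MIN_USER_PRIORITY used below):

  #include <stdint.h>

  #define MIN_USER_PRIO (-1023)	/* stand-in for I915_CONTEXT_MIN_USER_PRIORITY */

  /* Map an over-budget ratio to a (negative) scheduling priority. */
  static int budget_to_priority(uint64_t usage, uint64_t budget)
  {
  	int64_t permille, prio;

  	if (usage <= budget)
  		return 0;	/* not over budget: leave priority alone */

  	if (budget == 0)
  		return MIN_USER_PRIO;	/* sketch-level guard against div by zero */

  	/* Overshoot in parts per thousand, rounded to closest. */
  	permille = (int64_t)(((usage - budget) * 1000 + budget / 2) / budget);

  	/* 1000 permille over (double the budget) maps to the floor. */
  	prio = permille * MIN_USER_PRIO / 1000;
  	if (prio < MIN_USER_PRIO)
  		prio = MIN_USER_PRIO;

  	return (int)prio;
  }

For example, 30% over budget gives permille = 300 and a priority of
about -306 here (the kernel's DIV_ROUND_CLOSEST rounds to -307); double
the budget or worse clamps to -1023.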
Signed-off-by: Tvrtko Ursulin
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  38 ++++-
 drivers/gpu/drm/i915/i915_driver.c            |   1 +
 drivers/gpu/drm/i915/i915_drm_client.c        | 133 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |   9 ++
 4 files changed, 180 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index d3208a325614..047628769aa0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -3086,6 +3086,42 @@ static void retire_requests(struct intel_timeline *tl, struct i915_request *end)
 			break;
 }
 
+#ifdef CONFIG_CGROUP_DRM
+static unsigned int
+__get_class(struct drm_i915_file_private *fpriv, const struct i915_request *rq)
+{
+	unsigned int class;
+
+	class = rq->context->engine->uabi_class;
+
+	if (WARN_ON_ONCE(class >= ARRAY_SIZE(fpriv->client->throttle)))
+		class = 0;
+
+	return class;
+}
+
+static void copy_priority(struct i915_sched_attr *attr,
+			  const struct i915_execbuffer *eb,
+			  const struct i915_request *rq)
+{
+	struct drm_i915_file_private *file_priv = eb->file->driver_priv;
+	int prio;
+
+	*attr = eb->gem_context->sched;
+
+	prio = file_priv->client->throttle[__get_class(file_priv, rq)];
+	if (prio)
+		attr->priority = prio;
+}
+#else
+static void copy_priority(struct i915_sched_attr *attr,
+			  const struct i915_execbuffer *eb,
+			  const struct i915_request *rq)
+{
+	*attr = eb->gem_context->sched;
+}
+#endif
+
 static int eb_request_add(struct i915_execbuffer *eb, struct i915_request *rq,
 			  int err, bool last_parallel)
 {
@@ -3102,7 +3138,7 @@ static int eb_request_add(struct i915_execbuffer *eb, struct i915_request *rq,
 
 	/* Check that the context wasn't destroyed before submission */
 	if (likely(!intel_context_is_closed(eb->context))) {
-		attr = eb->gem_context->sched;
+		copy_priority(&attr, eb, rq);
 	} else {
 		/* Serialise with context_close via the add_to_timeline */
 		i915_request_set_error_once(rq, -ENOENT);
diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
index 62a544d17659..3b9d47c2097b 100644
--- a/drivers/gpu/drm/i915/i915_driver.c
+++ b/drivers/gpu/drm/i915/i915_driver.c
@@ -1794,6 +1794,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
 #ifdef CONFIG_CGROUP_DRM
 static const struct drm_cgroup_ops i915_drm_cgroup_ops = {
 	.active_time_us = i915_drm_cgroup_get_active_time_us,
+	.signal_budget = i915_drm_cgroup_signal_budget,
 };
 #endif
 
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index c3298beb094a..9be007b10523 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -4,6 +4,7 @@
  */
 
 #include <linux/kernel.h>
+#include <linux/ktime.h>
 #include <linux/slab.h>
 #include <linux/types.h>
 
@@ -122,6 +123,138 @@ u64 i915_drm_cgroup_get_active_time_us(struct drm_file *file)
 
 	return busy;
 }
+
+int i915_drm_cgroup_signal_budget(struct drm_file *file, u64 usage, u64 budget)
+{
+	struct drm_i915_file_private *fpriv = file->driver_priv;
+	u64 class_usage[I915_LAST_UABI_ENGINE_CLASS + 1];
+	u64 class_last[I915_LAST_UABI_ENGINE_CLASS + 1];
+	struct i915_drm_client *client = fpriv->client;
+	struct drm_i915_private *i915 = fpriv->i915;
+	struct intel_engine_cs *engine;
+	bool over = usage > budget;
+	struct task_struct *task;
+	struct pid *pid;
+	unsigned int i;
+	ktime_t unused;
+	int ret = 0;
+	u64 t;
+
+	if (!supports_stats(i915))
+		return -EINVAL;
+
+	if (usage == 0 && budget == 0)
+		return 0;
+
+	rcu_read_lock();
+	pid = rcu_dereference(file->pid);
+	task = pid_task(pid, PIDTYPE_TGID);
+	if (over) {
+		client->over_budget++;
+		if (!client->over_budget)
+			client->over_budget = 2;
+
+		drm_dbg(&i915->drm, "%s[%u] over budget (%llu/%llu)\n",
+			task ? task->comm : "", pid_vnr(pid),
+			usage, budget);
+	} else {
+		client->over_budget = 0;
+		memset(client->class_last, 0, sizeof(client->class_last));
+		memset(client->throttle, 0, sizeof(client->throttle));
+
+		drm_dbg(&i915->drm, "%s[%u] un-throttled; under budget\n",
+			task ? task->comm : "", pid_vnr(pid));
+
+		rcu_read_unlock();
+		return 0;
+	}
+	rcu_read_unlock();
+
+	memset(class_usage, 0, sizeof(class_usage));
+	for_each_uabi_engine(engine, i915)
+		class_usage[engine->uabi_class] +=
+			ktime_to_ns(intel_engine_get_busy_time(engine, &unused));
+
+	memcpy(class_last, client->class_last, sizeof(class_last));
+	memcpy(client->class_last, class_usage, sizeof(class_last));
+
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
+		class_usage[i] -= class_last[i];
+
+	t = client->last;
+	client->last = ktime_get_raw_ns();
+	t = client->last - t;
+
+	if (client->over_budget == 1)
+		return 0;
+
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) {
+		u64 client_class_usage[I915_LAST_UABI_ENGINE_CLASS + 1];
+		unsigned int capacity, rel_usage;
+
+		if (!i915->engine_uabi_class_count[i])
+			continue;
+
+		t = DIV_ROUND_UP_ULL(t, 1000);
+		class_usage[i] = DIV_ROUND_CLOSEST_ULL(class_usage[i], 1000);
+		rel_usage = DIV_ROUND_CLOSEST_ULL(class_usage[i] * 100ULL,
+						  t *
+						  i915->engine_uabi_class_count[i]);
+		if (rel_usage < 95) {
+			/* Physical class not oversubscribed. */
+			if (client->throttle[i]) {
+				client->throttle[i] = 0;
+
+				rcu_read_lock();
+				pid = rcu_dereference(file->pid);
+				task = pid_task(pid, PIDTYPE_TGID);
+				drm_dbg(&i915->drm,
+					"%s[%u] un-throttled; physical class %s utilisation %u%%\n",
+					task ? task->comm : "",
+					pid_vnr(pid),
+					uabi_class_names[i],
+					rel_usage);
+				rcu_read_unlock();
+			}
+			continue;
+		}
+
+		client_class_usage[i] =
+			get_class_active_ns(client, i915, i, &capacity);
+		if (client_class_usage[i]) {
+			int permille;
+
+			ret |= 1;
+
+			permille = DIV_ROUND_CLOSEST_ULL((usage - budget) *
+							 1000,
+							 budget);
+			client->throttle[i] =
+				DIV_ROUND_CLOSEST(permille *
						  I915_CONTEXT_MIN_USER_PRIORITY,
+						  1000);
+			if (client->throttle[i] <
+			    I915_CONTEXT_MIN_USER_PRIORITY)
+				client->throttle[i] =
+					I915_CONTEXT_MIN_USER_PRIORITY;
+
+			rcu_read_lock();
+			pid = rcu_dereference(file->pid);
+			task = pid_task(pid, PIDTYPE_TGID);
+			drm_dbg(&i915->drm,
+				"%s[%u] %d‰ over budget, throttled to priority %d; physical class %s utilisation %u%%\n",
+				task ? task->comm : "",
+				pid_vnr(pid),
+				permille,
+				client->throttle[i],
+				uabi_class_names[i],
+				rel_usage);
+			rcu_read_unlock();
+		}
+	}
+
+	return ret;
+}
 #endif
 
 #ifdef CONFIG_PROC_FS
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index e0b143890e69..6eadc9596b8f 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -47,6 +47,13 @@ struct i915_drm_client {
 	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
 	 */
 	atomic64_t past_runtime[I915_LAST_UABI_ENGINE_CLASS + 1];
+
+#ifdef CONFIG_CGROUP_DRM
+	int throttle[I915_LAST_UABI_ENGINE_CLASS + 1];
+	unsigned int over_budget;
+	u64 last;
+	u64 class_last[I915_LAST_UABI_ENGINE_CLASS + 1];
+#endif
 };
 
 static inline struct i915_drm_client *
@@ -91,5 +98,7 @@ i915_drm_client_add_context_objects(struct i915_drm_client *client,
 #endif
 
 u64 i915_drm_cgroup_get_active_time_us(struct drm_file *file);
+int i915_drm_cgroup_signal_budget(struct drm_file *file,
+				  u64 usage, u64 budget);
 
 #endif /* !__I915_DRM_CLIENT_H__ */
-- 
2.39.2
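Note that the first over-budget signal only primes client->last and
client->class_last and returns before throttling anything (the
client->over_budget == 1 early exit above), so utilisation is always
computed over a well-defined sampling window; throttling can therefore
only begin from the second consecutive over-budget notification onwards.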
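For reference, the 95% saturation test in i915_drm_cgroup_signal_budget
can be modelled in isolation as below (a sketch with hypothetical names,
not driver code; busy_ns is the summed busy time of all engines of one
class over the sampling period t_ns):

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * A physical engine class counts as oversubscribed when its aggregate
   * busy time reaches 95% of wall time multiplied by engine count.
   */
  static bool class_saturated(uint64_t busy_ns, uint64_t t_ns,
  			    unsigned int num_engines)
  {
  	/* Percentage utilisation relative to total class capacity. */
  	uint64_t rel_usage = busy_ns * 100 / (t_ns * num_engines);

  	return rel_usage >= 95;
  }

For example, two video engines sampled over one second with 1.9 s of
combined busy time sit at 95% and count as saturated, so clients of that
class become eligible for throttling; at 1.8 s (90%) the class is
considered idle enough and any existing throttle is dropped.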