From: "T.J. Mercier"
Date: Tue, 10 May 2022 23:56:45 +0000
Subject: [PATCH v7 1/6] gpu: rfc: Proposal for a GPU cgroup controller
Message-Id: <20220510235653.933868-2-tjmercier@google.com>
In-Reply-To: <20220510235653.933868-1-tjmercier@google.com>
To: tjmercier@google.com, Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet
Cc: daniel@ffwll.ch, hridya@google.com, christian.koenig@amd.com,
 jstultz@google.com, tkjos@android.com, cmllamas@google.com,
 surenb@google.com, kaleshsingh@google.com, Kenny.Ho@amd.com,
 mkoutny@suse.com, skhan@linuxfoundation.org, kernel-team@android.com,
 cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org

From: Hridya Valsaraju

This patch adds a proposal for a new GPU cgroup controller for
accounting/limiting GPU and GPU-related memory allocations.
The proposed controller is based on the DRM cgroup controller[1] and
follows the design of the RDMA cgroup controller.

The new cgroup controller would:
* Allow setting per-device limits on the total size of buffers allocated
  by device within a cgroup.
* Expose a per-device/allocator breakdown of the buffers charged to a
  cgroup.

The prototype in the following patches is only for memory accounting
using the GPU cgroup controller and does not implement limit setting.

[1]: https://lore.kernel.org/amd-gfx/20210126214626.16260-1-brian.welty@intel.com/

Signed-off-by: Hridya Valsaraju
Signed-off-by: T.J. Mercier
---
v7 changes
Remove comment about duplicate name rejection which is not relevant to
cgroups users per Michal Koutný.

v6 changes
Move documentation into cgroup-v2.rst per Tejun Heo.

v5 changes
Drop the global GPU cgroup "total" (sum of all device totals) portion
of the design since there is no currently known use for this per
Tejun Heo.
Update for renamed functions/variables.

v3 changes
Remove Upstreaming Plan from gpu-cgroup.rst per John Stultz.
Use more common dual author commit message format per John Stultz.
---
 Documentation/admin-guide/cgroup-v2.rst | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 69d7a6983f78..2e1d26e327c7 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -2352,6 +2352,29 @@ first, and stays charged to that cgroup until that resource is freed.
 Migrating a process to a different cgroup does not move the charge to the
 destination cgroup where the process has moved.
 
+
+GPU
+---
+
+The GPU controller accounts for device and system memory allocated by the GPU
+and related subsystems for graphics use. Resource limits are not currently
+supported.
+
+GPU Interface Files
+~~~~~~~~~~~~~~~~~~~~
+
+  gpu.memory.current
+	A read-only file containing memory allocations in flat-keyed format. The key
+	is a string representing the device name.
+	The value is the size of the memory
+	charged to the device in bytes. The device names are globally unique.::
+
+	  $ cat /sys/fs/cgroup/gpu.memory.current
+	  dev1 4194304
+	  dev2 104857600
+
+	The device name string is set by a device driver when it registers with the
+	GPU cgroup controller to participate in resource accounting.
+
 Others
 ------
 
-- 
2.36.0.512.ge40c2bad7a-goog
From: "T.J. Mercier"
Date: Tue, 10 May 2022 23:56:46 +0000
Subject: [PATCH v7 2/6] cgroup: gpu: Add a cgroup controller for allocator
 attribution of GPU memory
Message-Id: <20220510235653.933868-3-tjmercier@google.com>
In-Reply-To: <20220510235653.933868-1-tjmercier@google.com>
To: tjmercier@google.com, Tejun Heo, Zefan Li, Johannes Weiner
Cc: daniel@ffwll.ch, hridya@google.com, christian.koenig@amd.com,
 jstultz@google.com, tkjos@android.com, cmllamas@google.com,
 surenb@google.com, kaleshsingh@google.com, Kenny.Ho@amd.com,
 mkoutny@suse.com, skhan@linuxfoundation.org, kernel-team@android.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org

From: Hridya Valsaraju

The cgroup controller provides accounting for GPU and GPU-related
memory allocations. The memory being accounted can be device memory or
memory allocated from pools dedicated to serve GPU-related tasks.

This patch adds APIs to:
- allow a device to register for memory accounting using the GPU cgroup
  controller.
- charge and uncharge allocated memory to a cgroup.

When the cgroup controller is enabled, it would expose information about
the memory allocated by each device (registered for GPU cgroup memory
accounting) for each cgroup.

The API/UAPI can be extended to set per-device/total allocation limits
in the future.

The cgroup controller has been named following the discussion in [1].

[1]: https://lore.kernel.org/amd-gfx/YCJp%2F%2FkMC7YjVMXv@phenom.ffwll.local/

Signed-off-by: Hridya Valsaraju
Signed-off-by: T.J. Mercier
---
v7 changes
Hide gpucg and gpucg_bucket struct definitions per Michal Koutný.
This means gpucg_register_bucket now returns an internally allocated
struct gpucg_bucket.

v5 changes
Support all strings for gpucg_register_device instead of just string
literals.
Enforce globally unique gpucg_bucket names.
Constrain gpucg_bucket name lengths to 64 bytes.
Obtain just a single css refcount instead of nr_pages for each charge.
Rename:
gpucg_try_charge -> gpucg_charge
find_cg_rpool_locked -> cg_rpool_find_locked
init_cg_rpool -> cg_rpool_init
get_cg_rpool_locked -> cg_rpool_get_locked
"gpu cgroup controller" -> "GPU controller"
gpucg_device -> gpucg_bucket
usage -> size

v4 changes
Adjust gpucg_try_charge critical section for future charge transfer
functionality.

v3 changes
Use more common dual author commit message format per John Stultz.

v2 changes
Fix incorrect Kconfig help section indentation per Randy Dunlap.
---
 include/linux/cgroup_gpu.h    | 122 ++++++++++++
 include/linux/cgroup_subsys.h |   4 +
 init/Kconfig                  |   7 +
 kernel/cgroup/Makefile        |   1 +
 kernel/cgroup/gpu.c           | 339 ++++++++++++++++++++++++++++++++++
 5 files changed, 473 insertions(+)
 create mode 100644 include/linux/cgroup_gpu.h
 create mode 100644 kernel/cgroup/gpu.c

diff --git a/include/linux/cgroup_gpu.h b/include/linux/cgroup_gpu.h
new file mode 100644
index 000000000000..cb228a16aa1f
--- /dev/null
+++ b/include/linux/cgroup_gpu.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: MIT
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ * Copyright (C) 2022 Google LLC.
+ */
+#ifndef _CGROUP_GPU_H
+#define _CGROUP_GPU_H
+
+#include <linux/cgroup.h>
+
+#define GPUCG_BUCKET_NAME_MAX_LEN 64
+
+struct gpucg;
+struct gpucg_bucket;
+
+#ifdef CONFIG_CGROUP_GPU
+
+/**
+ * css_to_gpucg - get the corresponding gpucg ref from a cgroup_subsys_state
+ * @css: the target cgroup_subsys_state
+ *
+ * Returns: gpu cgroup that contains the @css
+ */
+struct gpucg *css_to_gpucg(struct cgroup_subsys_state *css);
+
+/**
+ * gpucg_get - get the gpucg reference that a task belongs to
+ * @task: the target task
+ *
+ * This increases the reference count of the css that the @task belongs to.
+ *
+ * Returns: reference to the gpu cgroup the task belongs to.
+ */
+struct gpucg *gpucg_get(struct task_struct *task);
+
+/**
+ * gpucg_put - put a gpucg reference
+ * @gpucg: the target gpucg
+ *
+ * Put a reference obtained via gpucg_get
+ */
+void gpucg_put(struct gpucg *gpucg);
+
+/**
+ * gpucg_parent - find the parent of a gpu cgroup
+ * @cg: the target gpucg
+ *
+ * This does not increase the reference count of the parent cgroup
+ *
+ * Returns: parent gpu cgroup of @cg
+ */
+struct gpucg *gpucg_parent(struct gpucg *cg);
+
+/**
+ * gpucg_charge - charge memory to the specified gpucg and gpucg_bucket.
+ * Caller must hold a reference to @gpucg obtained through gpucg_get(). The
+ * size of the memory is rounded up to be a multiple of the page size.
+ *
+ * @gpucg: The gpu cgroup to charge the memory to.
+ * @bucket: The bucket to charge the memory to.
+ * @size: The size of memory to charge in bytes.
+ *        This size will be rounded up to the nearest page size.
+ *
+ * Return: returns 0 if the charging is successful and otherwise returns an
+ * error code.
+ */
+int gpucg_charge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size);
+
+/**
+ * gpucg_uncharge - uncharge memory from the specified gpucg and gpucg_bucket.
+ * The caller must hold a reference to @gpucg obtained through gpucg_get().
+ *
+ * @gpucg: The gpu cgroup to uncharge the memory from.
+ * @bucket: The bucket to uncharge the memory from.
+ * @size: The size of memory to uncharge in bytes.
+ *        This size will be rounded up to the nearest page size.
+ */
+void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size);
+
+/**
+ * gpucg_register_bucket - Registers a bucket for memory accounting using the
+ * GPU cgroup controller.
+ *
+ * @name: Pointer to a null-terminated string to denote the name of the
+ *        bucket. This name should be globally unique, and should not exceed
+ *        @GPUCG_BUCKET_NAME_MAX_LEN bytes.
+ *
+ * The returned bucket must remain valid; @name will be copied.
+ *
+ * Returns a pointer to a newly allocated bucket on success, or an errno code
+ * otherwise. As buckets cannot be unregistered, this can never be freed.
+ */
+struct gpucg_bucket *gpucg_register_bucket(const char *name);
+#else /* CONFIG_CGROUP_GPU */
+
+static inline struct gpucg *css_to_gpucg(struct cgroup_subsys_state *css)
+{
+        return NULL;
+}
+
+static inline struct gpucg *gpucg_get(struct task_struct *task)
+{
+        return NULL;
+}
+
+static inline void gpucg_put(struct gpucg *gpucg) {}
+
+static inline struct gpucg *gpucg_parent(struct gpucg *cg)
+{
+        return NULL;
+}
+
+static inline int gpucg_charge(struct gpucg *gpucg,
+                               struct gpucg_bucket *bucket,
+                               u64 size)
+{
+        return 0;
+}
+
+static inline void gpucg_uncharge(struct gpucg *gpucg,
+                                  struct gpucg_bucket *bucket,
+                                  u64 size) {}
+
+static inline struct gpucg_bucket *gpucg_register_bucket(const char *name)
+{
+        return NULL;
+}
+#endif /* CONFIG_CGROUP_GPU */
+#endif /* _CGROUP_GPU_H */
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 445235487230..46a2a7b93c41 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -65,6 +65,10 @@ SUBSYS(rdma)
 SUBSYS(misc)
 #endif
 
+#if IS_ENABLED(CONFIG_CGROUP_GPU)
+SUBSYS(gpu)
+#endif
+
 /*
  * The following subsystems are not supported on the default hierarchy.
  */
diff --git a/init/Kconfig b/init/Kconfig
index ddcbefe535e9..2e00a190e170 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -984,6 +984,13 @@ config BLK_CGROUP
 
 	  See Documentation/admin-guide/cgroup-v1/blkio-controller.rst for more information.
 
+config CGROUP_GPU
+	bool "GPU controller (EXPERIMENTAL)"
+	select PAGE_COUNTER
+	help
+	  Provides accounting and limit setting for memory allocations by the GPU and
+	  GPU-related subsystems.
+
 config CGROUP_WRITEBACK
 	bool
 	depends on MEMCG && BLK_CGROUP
diff --git a/kernel/cgroup/Makefile b/kernel/cgroup/Makefile
index 12f8457ad1f9..be95a5a532fc 100644
--- a/kernel/cgroup/Makefile
+++ b/kernel/cgroup/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_CGROUP_RDMA) += rdma.o
 obj-$(CONFIG_CPUSETS) += cpuset.o
 obj-$(CONFIG_CGROUP_MISC) += misc.o
 obj-$(CONFIG_CGROUP_DEBUG) += debug.o
+obj-$(CONFIG_CGROUP_GPU) += gpu.o
diff --git a/kernel/cgroup/gpu.c b/kernel/cgroup/gpu.c
new file mode 100644
index 000000000000..ad16ea15d427
--- /dev/null
+++ b/kernel/cgroup/gpu.c
@@ -0,0 +1,339 @@
+// SPDX-License-Identifier: MIT
+// Copyright 2019 Advanced Micro Devices, Inc.
+// Copyright (C) 2022 Google LLC.
+
+#include <linux/cgroup.h>
+#include <linux/cgroup_gpu.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/page_counter.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+
+static struct gpucg *root_gpucg __read_mostly;
+
+/*
+ * Protects list of resource pools maintained on per cgroup basis and list
+ * of buckets registered for memory accounting using the GPU cgroup
+ * controller.
+ */
+static DEFINE_MUTEX(gpucg_mutex);
+static LIST_HEAD(gpucg_buckets);
+
+/* The GPU cgroup controller data structure */
+struct gpucg {
+        struct cgroup_subsys_state css;
+
+        /* list of all resource pools that belong to this cgroup */
+        struct list_head rpools;
+};
+
+/* A named entity representing bucket of tracked memory.
 */
+struct gpucg_bucket {
+        /* list of various resource pools in various cgroups that the bucket is part of */
+        struct list_head rpools;
+
+        /* list of all buckets registered for GPU cgroup accounting */
+        struct list_head bucket_node;
+
+        /* string to be used as identifier for accounting and limit setting */
+        const char *name;
+};
+
+struct gpucg_resource_pool {
+        /* The bucket whose resource usage is tracked by this resource pool */
+        struct gpucg_bucket *bucket;
+
+        /* list of all resource pools for the cgroup */
+        struct list_head cg_node;
+
+        /* list maintained by the gpucg_bucket to keep track of its resource pools */
+        struct list_head bucket_node;
+
+        /* tracks memory usage of the resource pool */
+        struct page_counter total;
+};
+
+static void free_cg_rpool_locked(struct gpucg_resource_pool *rpool)
+{
+        lockdep_assert_held(&gpucg_mutex);
+
+        list_del(&rpool->cg_node);
+        list_del(&rpool->bucket_node);
+        kfree(rpool);
+}
+
+static void gpucg_css_free(struct cgroup_subsys_state *css)
+{
+        struct gpucg_resource_pool *rpool, *tmp;
+        struct gpucg *gpucg = css_to_gpucg(css);
+
+        // delete all resource pools
+        mutex_lock(&gpucg_mutex);
+        list_for_each_entry_safe(rpool, tmp, &gpucg->rpools, cg_node)
+                free_cg_rpool_locked(rpool);
+        mutex_unlock(&gpucg_mutex);
+
+        kfree(gpucg);
+}
+
+static struct cgroup_subsys_state *
+gpucg_css_alloc(struct cgroup_subsys_state *parent_css)
+{
+        struct gpucg *gpucg, *parent;
+
+        gpucg = kzalloc(sizeof(struct gpucg), GFP_KERNEL);
+        if (!gpucg)
+                return ERR_PTR(-ENOMEM);
+
+        parent = css_to_gpucg(parent_css);
+        if (!parent)
+                root_gpucg = gpucg;
+
+        INIT_LIST_HEAD(&gpucg->rpools);
+
+        return &gpucg->css;
+}
+
+static struct gpucg_resource_pool *cg_rpool_find_locked(
+        struct gpucg *cg,
+        struct gpucg_bucket *bucket)
+{
+        struct gpucg_resource_pool *rpool;
+
+        lockdep_assert_held(&gpucg_mutex);
+
+        list_for_each_entry(rpool, &cg->rpools, cg_node)
+                if (rpool->bucket == bucket)
+                        return rpool;
+
+        return NULL;
+}
+
+static struct gpucg_resource_pool *cg_rpool_init(struct gpucg *cg,
+                                                 struct gpucg_bucket *bucket)
+{
+        struct gpucg_resource_pool *rpool = kzalloc(sizeof(*rpool),
+                                                    GFP_KERNEL);
+        if (!rpool)
+                return ERR_PTR(-ENOMEM);
+
+        rpool->bucket = bucket;
+
+        page_counter_init(&rpool->total, NULL);
+        INIT_LIST_HEAD(&rpool->cg_node);
+        INIT_LIST_HEAD(&rpool->bucket_node);
+        list_add_tail(&rpool->cg_node, &cg->rpools);
+        list_add_tail(&rpool->bucket_node, &bucket->rpools);
+
+        return rpool;
+}
+
+/**
+ * cg_rpool_get_locked - find the resource pool for the specified bucket and
+ * specified cgroup. If the resource pool does not exist for the cg, it is
+ * created in a hierarchical manner in the cgroup and its ancestor cgroups
+ * which do not already have a resource pool entry for the bucket.
+ *
+ * @cg: The cgroup to find the resource pool for.
+ * @bucket: The bucket associated with the returned resource pool.
+ *
+ * Return: return resource pool entry corresponding to the specified bucket
+ * in the specified cgroup (hierarchically creating them if not existing
+ * already).
+ */
+static struct gpucg_resource_pool *
+cg_rpool_get_locked(struct gpucg *cg, struct gpucg_bucket *bucket)
+{
+        struct gpucg *parent_cg, *p, *stop_cg;
+        struct gpucg_resource_pool *rpool, *tmp_rpool;
+        struct gpucg_resource_pool *parent_rpool = NULL, *leaf_rpool = NULL;
+
+        rpool = cg_rpool_find_locked(cg, bucket);
+        if (rpool)
+                return rpool;
+
+        stop_cg = cg;
+        do {
+                rpool = cg_rpool_init(stop_cg, bucket);
+                if (IS_ERR(rpool))
+                        goto err;
+
+                if (!leaf_rpool)
+                        leaf_rpool = rpool;
+
+                stop_cg = gpucg_parent(stop_cg);
+                if (!stop_cg)
+                        break;
+
+                rpool = cg_rpool_find_locked(stop_cg, bucket);
+        } while (!rpool);
+
+        /*
+         * Re-initialize page counters of all rpools created in this invocation
+         * to enable hierarchical charging.
+         * stop_cg is the first ancestor cg who already had a resource pool for
+         * the bucket.
         * It can also be NULL if no ancestors had a pre-existing
+         * resource pool for the bucket before this invocation.
+         */
+        rpool = leaf_rpool;
+        for (p = cg; p != stop_cg; p = parent_cg) {
+                parent_cg = gpucg_parent(p);
+                if (!parent_cg)
+                        break;
+                parent_rpool = cg_rpool_find_locked(parent_cg, bucket);
+                page_counter_init(&rpool->total, &parent_rpool->total);
+
+                rpool = parent_rpool;
+        }
+
+        return leaf_rpool;
+err:
+        for (p = cg; p != stop_cg; p = gpucg_parent(p)) {
+                tmp_rpool = cg_rpool_find_locked(p, bucket);
+                free_cg_rpool_locked(tmp_rpool);
+        }
+        return rpool;
+}
+
+struct gpucg *css_to_gpucg(struct cgroup_subsys_state *css)
+{
+        return css ? container_of(css, struct gpucg, css) : NULL;
+}
+
+struct gpucg *gpucg_get(struct task_struct *task)
+{
+        if (!cgroup_subsys_enabled(gpu_cgrp_subsys))
+                return NULL;
+        return css_to_gpucg(task_get_css(task, gpu_cgrp_id));
+}
+
+void gpucg_put(struct gpucg *gpucg)
+{
+        if (gpucg)
+                css_put(&gpucg->css);
+}
+
+struct gpucg *gpucg_parent(struct gpucg *cg)
+{
+        return css_to_gpucg(cg->css.parent);
+}
+
+int gpucg_charge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size)
+{
+        struct page_counter *counter;
+        u64 nr_pages;
+        struct gpucg_resource_pool *rp;
+        int ret = 0;
+
+        nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+        mutex_lock(&gpucg_mutex);
+        rp = cg_rpool_get_locked(gpucg, bucket);
+        /*
+         * Continue to hold gpucg_mutex because we use it to block charges
+         * while transfers are in progress to avoid potentially exceeding a
+         * limit.
+         */
+        if (IS_ERR(rp)) {
+                mutex_unlock(&gpucg_mutex);
+                return PTR_ERR(rp);
+        }
+
+        if (page_counter_try_charge(&rp->total, nr_pages, &counter))
+                css_get(&gpucg->css);
+        else
+                ret = -ENOMEM;
+        mutex_unlock(&gpucg_mutex);
+
+        return ret;
+}
+
+void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size)
+{
+        u64 nr_pages;
+        struct gpucg_resource_pool *rp;
+
+        mutex_lock(&gpucg_mutex);
+        rp = cg_rpool_find_locked(gpucg, bucket);
+        /*
+         * gpucg_mutex can be unlocked here, rp will stay valid until gpucg is
+         * freed and there are active refs on gpucg. Uncharges are fine while
+         * transfers are in progress since there is no potential to exceed a
+         * limit while uncharging and transferring.
+         */
+        mutex_unlock(&gpucg_mutex);
+
+        if (unlikely(!rp)) {
+                pr_err("Resource pool not found, incorrect charge/uncharge ordering?\n");
+                return;
+        }
+
+        nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+        page_counter_uncharge(&rp->total, nr_pages);
+        css_put(&gpucg->css);
+}
+
+struct gpucg_bucket *gpucg_register_bucket(const char *name)
+{
+        struct gpucg_bucket *bucket, *b;
+
+        if (!name)
+                return ERR_PTR(-EINVAL);
+
+        if (strlen(name) >= GPUCG_BUCKET_NAME_MAX_LEN)
+                return ERR_PTR(-ENAMETOOLONG);
+
+        bucket = kzalloc(sizeof(struct gpucg_bucket), GFP_KERNEL);
+        if (!bucket)
+                return ERR_PTR(-ENOMEM);
+
+        INIT_LIST_HEAD(&bucket->bucket_node);
+        INIT_LIST_HEAD(&bucket->rpools);
+        bucket->name = kstrdup_const(name, GFP_KERNEL);
+        if (!bucket->name) {
+                kfree(bucket);
+                return ERR_PTR(-ENOMEM);
+        }
+
+        mutex_lock(&gpucg_mutex);
+        list_for_each_entry(b, &gpucg_buckets, bucket_node) {
+                if (strncmp(b->name, bucket->name, GPUCG_BUCKET_NAME_MAX_LEN) == 0) {
+                        mutex_unlock(&gpucg_mutex);
+                        kfree_const(bucket->name);
+                        kfree(bucket);
+                        return ERR_PTR(-EEXIST);
+                }
+        }
+        list_add_tail(&bucket->bucket_node, &gpucg_buckets);
+        mutex_unlock(&gpucg_mutex);
+
+        return bucket;
+}
+
+static int gpucg_resource_show(struct seq_file *sf, void *v)
+{
+        struct gpucg_resource_pool *rpool;
+        struct gpucg *cg = css_to_gpucg(seq_css(sf));
+
+        mutex_lock(&gpucg_mutex);
+        list_for_each_entry(rpool, &cg->rpools, cg_node) {
+                seq_printf(sf, "%s %lu\n", rpool->bucket->name,
+                           page_counter_read(&rpool->total) * PAGE_SIZE);
+        }
+        mutex_unlock(&gpucg_mutex);
+
+        return 0;
+}
+
+static struct cftype files[] = {
+        {
+                .name = "memory.current",
+                .seq_show = gpucg_resource_show,
+        },
+        { }     /* terminate */
+};
+
+struct cgroup_subsys gpu_cgrp_subsys = {
+        .css_alloc      = gpucg_css_alloc,
+        .css_free       = gpucg_css_free,
+        .early_init     = false,
+        .legacy_cftypes = files,
+        .dfl_cftypes    = files,
+};
-- 
2.36.0.512.ge40c2bad7a-goog
From: "T.J. Mercier"
Date: Tue, 10 May 2022 23:56:47 +0000
Subject: [PATCH v7 3/6] dmabuf: heaps: export system_heap buffers with GPU
 cgroup charging
Message-Id: <20220510235653.933868-4-tjmercier@google.com>
In-Reply-To: <20220510235653.933868-1-tjmercier@google.com>
To: tjmercier@google.com, Sumit Semwal, Christian König, Benjamin Gaignard,
 Liam Mark, Laura Abbott, Brian Starkey, John Stultz
Cc: daniel@ffwll.ch, tj@kernel.org, hridya@google.com, jstultz@google.com,
 tkjos@android.com, cmllamas@google.com, surenb@google.com,
 kaleshsingh@google.com, Kenny.Ho@amd.com, mkoutny@suse.com,
 skhan@linuxfoundation.org, kernel-team@android.com,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org

All DMA heaps now register a new GPU cgroup bucket upon creation, and the
system_heap now exports buffers associated with its GPU cgroup bucket for
tracking purposes.

In order to support GPU cgroup charge transfer on a dma-buf, the current
GPU cgroup information must be stored inside the dma-buf struct. For
tracked buffers, exporters include the struct gpucg and struct
gpucg_bucket pointers in the export info, which can later be modified if
the charge is migrated to another cgroup.

Signed-off-by: Hridya Valsaraju
Signed-off-by: T.J. Mercier
---
v7 changes
Adapt to new gpucg_register_bucket API.

v5 changes
Merge "dmabuf: Use the GPU cgroup charge/uncharge APIs" into this patch.
Remove all GPU cgroup code from dma-buf except what's necessary to
support charge transfer. Previously charging was done in export, but for
non-Android graphics use-cases this is not ideal since there may be a
delay between allocation and export, during which time there is no
accounting.
Append "-heap" to gpucg_bucket names.
Charge on allocation instead of export. This should more closely mirror
non-Android use-cases where there is potentially a delay between
allocation and export.
Put the charge and uncharge code in the same file (system_heap_allocate,
system_heap_dma_buf_release) instead of splitting them between the heap
and the dma_buf_release.
Move no-op code to header file to match other files in the series.

v3 changes
Use more common dual author commit message format per John Stultz.

v2 changes
Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König.
---
 drivers/dma-buf/dma-buf.c           | 19 +++++++++++++
 drivers/dma-buf/dma-heap.c          | 38 ++++++++++++++++++++++++++
 drivers/dma-buf/heaps/system_heap.c | 28 +++++++++++++++++---
 include/linux/dma-buf.h             | 41 +++++++++++++++++++++++------
 include/linux/dma-heap.h            | 15 +++++++++++
 5 files changed, 129 insertions(+), 12 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index df23239b04fc..bc89c44bd9b9 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -462,6 +462,24 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
  * &dma_buf_ops.
  */
 
+#ifdef CONFIG_CGROUP_GPU
+static void dma_buf_set_gpucg(struct dma_buf *dmabuf,
+                              const struct dma_buf_export_info *exp)
+{
+        dmabuf->gpucg = exp->gpucg;
+        dmabuf->gpucg_bucket = exp->gpucg_bucket;
+}
+
+void dma_buf_exp_info_set_gpucg(struct dma_buf_export_info *exp_info,
+                                struct gpucg *gpucg,
+                                struct gpucg_bucket *gpucg_bucket)
+{
+        exp_info->gpucg = gpucg;
+        exp_info->gpucg_bucket = gpucg_bucket;
+}
+#else
+static void dma_buf_set_gpucg(struct dma_buf *dmabuf,
+                              const struct dma_buf_export_info *exp) {}
+#endif
+
 /**
  * dma_buf_export - Creates a new dma_buf, and associates an anon file
  * with this buffer, so it can be exported.
@@ -527,6 +545,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 	init_waitqueue_head(&dmabuf->poll);
 	dmabuf->cb_in.poll = dmabuf->cb_out.poll = &dmabuf->poll;
 	dmabuf->cb_in.active = dmabuf->cb_out.active = 0;
+	dma_buf_set_gpucg(dmabuf, exp_info);

 	if (!resv) {
 		resv = (struct dma_resv *)&dmabuf[1];
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index 8f5848aa144f..48173a66d70d 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -7,10 +7,12 @@
  */

 #include
+#include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -21,6 +23,7 @@
 #include

 #define DEVNAME "dma_heap"
+#define HEAP_NAME_SUFFIX "-heap"

 #define NUM_HEAP_MINORS 128

@@ -31,6 +34,7 @@
  * @heap_devt		heap device node
  * @list		list head connecting to list of heaps
  * @heap_cdev		heap char device
+ * @gpucg_bucket	gpu cgroup bucket for memory accounting
  *
  * Represents a heap of memory from which buffers can be made.
  */
@@ -41,6 +45,9 @@ struct dma_heap {
 	dev_t heap_devt;
 	struct list_head list;
 	struct cdev heap_cdev;
+#ifdef CONFIG_CGROUP_GPU
+	struct gpucg_bucket *gpucg_bucket;
+#endif
 };

 static LIST_HEAD(heap_list);
@@ -216,6 +223,18 @@ const char *dma_heap_get_name(struct dma_heap *heap)
 	return heap->name;
 }

+/**
+ * dma_heap_get_gpucg_bucket() - get struct gpucg_bucket pointer for the heap.
+ * @heap: DMA-Heap to get the gpucg_bucket struct for.
+ *
+ * Returns:
+ * The gpucg_bucket struct pointer for the heap. NULL if the GPU cgroup controller is not enabled.
+ */
+struct gpucg_bucket *dma_heap_get_gpucg_bucket(struct dma_heap *heap)
+{
+	return heap->gpucg_bucket;
+}
+
 struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 {
 	struct dma_heap *heap, *h, *err_ret;
@@ -228,6 +247,12 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 		return ERR_PTR(-EINVAL);
 	}

+	if (IS_ENABLED(CONFIG_CGROUP_GPU) && strlen(exp_info->name) + strlen(HEAP_NAME_SUFFIX) >=
+	    GPUCG_BUCKET_NAME_MAX_LEN) {
+		pr_err("dma_heap: Name is too long for GPU cgroup\n");
+		return ERR_PTR(-ENAMETOOLONG);
+	}
+
 	if (!exp_info->ops || !exp_info->ops->allocate) {
 		pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
 		return ERR_PTR(-EINVAL);
@@ -253,6 +278,19 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 	heap->ops = exp_info->ops;
 	heap->priv = exp_info->priv;

+	if (IS_ENABLED(CONFIG_CGROUP_GPU)) {
+		char gpucg_bucket_name[GPUCG_BUCKET_NAME_MAX_LEN];
+
+		snprintf(gpucg_bucket_name, sizeof(gpucg_bucket_name), "%s%s",
+			 exp_info->name, HEAP_NAME_SUFFIX);
+
+		heap->gpucg_bucket = gpucg_register_bucket(gpucg_bucket_name);
+		if (IS_ERR(heap->gpucg_bucket)) {
+			err_ret = ERR_CAST(heap->gpucg_bucket);
+			goto err0;
+		}
+	}
+
 	/* Find unused minor number */
 	ret = xa_alloc(&dma_heap_minors, &minor, heap,
 		       XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index fcf836ba9c1f..27f686faef00 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -297,6 +297,11 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 	}
 	sg_free_table(table);
 	kfree(buffer);
+
+	if (dmabuf->gpucg && dmabuf->gpucg_bucket) {
+		gpucg_uncharge(dmabuf->gpucg, dmabuf->gpucg_bucket, dmabuf->size);
+		gpucg_put(dmabuf->gpucg);
+	}
 }

 static const struct dma_buf_ops system_heap_buf_ops = {
@@ -346,11 +351,21 @@ static struct dma_buf *system_heap_allocate(struct
dma_heap *heap,
 	struct scatterlist *sg;
 	struct list_head pages;
 	struct page *page, *tmp_page;
-	int i, ret = -ENOMEM;
+	struct gpucg *gpucg;
+	struct gpucg_bucket *gpucg_bucket;
+	int i, ret;
+
+	gpucg = gpucg_get(current);
+	gpucg_bucket = dma_heap_get_gpucg_bucket(heap);
+	ret = gpucg_charge(gpucg, gpucg_bucket, len);
+	if (ret)
+		goto put_gpucg;

 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
-	if (!buffer)
-		return ERR_PTR(-ENOMEM);
+	if (!buffer) {
+		ret = -ENOMEM;
+		goto uncharge_gpucg;
+	}

 	INIT_LIST_HEAD(&buffer->attachments);
 	mutex_init(&buffer->lock);
@@ -396,6 +411,8 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 	exp_info.size = buffer->len;
 	exp_info.flags = fd_flags;
 	exp_info.priv = buffer;
+	dma_buf_exp_info_set_gpucg(&exp_info, gpucg, gpucg_bucket);
+
 	dmabuf = dma_buf_export(&exp_info);
 	if (IS_ERR(dmabuf)) {
 		ret = PTR_ERR(dmabuf);
@@ -414,7 +431,10 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 	list_for_each_entry_safe(page, tmp_page, &pages, lru)
 		__free_pages(page, compound_order(page));
 	kfree(buffer);
-
+uncharge_gpucg:
+	gpucg_uncharge(gpucg, gpucg_bucket, len);
+put_gpucg:
+	gpucg_put(gpucg);
 	return ERR_PTR(ret);
 }

diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 2097760e8e95..8e7c55c830b3 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -13,6 +13,7 @@
 #ifndef __DMA_BUF_H__
 #define __DMA_BUF_H__

+#include
 #include
 #include
 #include
@@ -303,7 +304,7 @@ struct dma_buf {
 	/**
 	 * @size:
 	 *
-	 * Size of the buffer; invariant over the lifetime of the buffer.
+	 * Size of the buffer in bytes; invariant over the lifetime of the buffer.
 	 */
 	size_t size;

@@ -453,6 +454,14 @@ struct dma_buf {
 		struct dma_buf *dmabuf;
 	} *sysfs_entry;
 #endif
+
+#ifdef CONFIG_CGROUP_GPU
+	/** @gpucg: Pointer to the GPU cgroup this buffer currently belongs to.
 */
+	struct gpucg *gpucg;
+
+	/* @gpucg_bucket: Pointer to the GPU cgroup bucket whence this buffer originates. */
+	struct gpucg_bucket *gpucg_bucket;
+#endif
 };

 /**
@@ -526,13 +535,15 @@ struct dma_buf_attachment {

 /**
  * struct dma_buf_export_info - holds information needed to export a dma_buf
- * @exp_name:	name of the exporter - useful for debugging.
- * @owner:	pointer to exporter module - used for refcounting kernel module
- * @ops:	Attach allocator-defined dma buf ops to the new buffer
- * @size:	Size of the buffer - invariant over the lifetime of the buffer
- * @flags:	mode flags for the file
- * @resv:	reservation-object, NULL to allocate default one
- * @priv:	Attach private data of allocator to this buffer
+ * @exp_name:	name of the exporter - useful for debugging.
+ * @owner:	pointer to exporter module - used for refcounting kernel module
+ * @ops:	Attach allocator-defined dma buf ops to the new buffer
+ * @size:	Size of the buffer in bytes - invariant over the lifetime of the buffer
+ * @flags:	mode flags for the file
+ * @resv:	reservation-object, NULL to allocate default one
+ * @priv:	Attach private data of allocator to this buffer
+ * @gpucg:	Pointer to GPU cgroup this buffer is charged to, or NULL if not charged
+ * @gpucg_bucket: Pointer to GPU cgroup bucket this buffer comes from, or NULL if not charged
  *
  * This structure holds the information required to export the buffer. Used
  * with dma_buf_export() only.
@@ -545,6 +556,10 @@ struct dma_buf_export_info {
 	int flags;
 	struct dma_resv *resv;
 	void *priv;
+#ifdef CONFIG_CGROUP_GPU
+	struct gpucg *gpucg;
+	struct gpucg_bucket *gpucg_bucket;
+#endif
 };

 /**
@@ -630,4 +645,14 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long);
 int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map);
 void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
+
+#ifdef CONFIG_CGROUP_GPU
+void dma_buf_exp_info_set_gpucg(struct dma_buf_export_info *exp_info,
+				struct gpucg *gpucg,
+				struct gpucg_bucket *gpucg_bucket);
+#else /* CONFIG_CGROUP_GPU */
+static inline void dma_buf_exp_info_set_gpucg(struct dma_buf_export_info *exp_info,
+					      struct gpucg *gpucg,
+					      struct gpucg_bucket *gpucg_bucket) {}
+#endif /* CONFIG_CGROUP_GPU */
 #endif /* __DMA_BUF_H__ */
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 0c05561cad6e..6321e7636538 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -10,6 +10,7 @@
 #define _DMA_HEAPS_H

 #include
+#include
 #include

 struct dma_heap;
@@ -59,6 +60,20 @@ void *dma_heap_get_drvdata(struct dma_heap *heap);
  */
 const char *dma_heap_get_name(struct dma_heap *heap);

+#ifdef CONFIG_CGROUP_GPU
+/**
+ * dma_heap_get_gpucg_bucket() - get a pointer to the struct gpucg_bucket for the heap.
+ * @heap: DMA-Heap to retrieve gpucg_bucket for
+ *
+ * Returns:
+ * The gpucg_bucket struct for the heap.
+ */
+struct gpucg_bucket *dma_heap_get_gpucg_bucket(struct dma_heap *heap);
+#else /* CONFIG_CGROUP_GPU */
+static inline struct gpucg_bucket *dma_heap_get_gpucg_bucket(struct dma_heap *heap)
+{ return NULL; }
+#endif /* CONFIG_CGROUP_GPU */
+
 /**
  * dma_heap_add - adds a heap to dmabuf heaps
  * @exp_info: information needed to register this heap
-- 
2.36.0.512.ge40c2bad7a-goog

From nobody Sun May 10 09:54:41 2026
Date: Tue, 10 May 2022 23:56:48 +0000
In-Reply-To: <20220510235653.933868-1-tjmercier@google.com>
Message-Id: <20220510235653.933868-5-tjmercier@google.com>
References: <20220510235653.933868-1-tjmercier@google.com>
Subject: [PATCH v7 4/6] dmabuf: Add gpu cgroup charge transfer function
From: "T.J. Mercier"
To: tjmercier@google.com, Sumit Semwal , "Christian König" , Tejun Heo ,
 Zefan Li , Johannes Weiner
Cc: daniel@ffwll.ch, hridya@google.com, jstultz@google.com,
 tkjos@android.com, cmllamas@google.com, surenb@google.com,
 kaleshsingh@google.com, Kenny.Ho@amd.com, mkoutny@suse.com,
 skhan@linuxfoundation.org, kernel-team@android.com,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

The dma_buf_transfer_charge function provides a way for processes to
transfer charge of a buffer to a different process. This is essential
for the cases where a central allocator process does allocations for
various subsystems, hands over the fd to the client who requested the
memory, and drops all references to the allocated memory.

Originally-by: Hridya Valsaraju
Signed-off-by: T.J. Mercier
---
v5 changes
Fix commit message, which still contained the old name for
dma_buf_transfer_charge, per Michal Koutný.

Modify the dma_buf_transfer_charge API to accept a task_struct instead
of a gpucg. This avoids requiring the caller to manage the refcount of
the gpucg upon failure and confusing ownership transfer logic.

v4 changes
Adjust ordering of charge/uncharge during transfer to avoid potentially
hitting cgroup limit per Michal Koutný.

v3 changes
Use more common dual author commit message format per John Stultz.

v2 changes
Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König.
---
 drivers/dma-buf/dma-buf.c  | 57 ++++++++++++++++++++++++++++++++++++++
 include/linux/cgroup_gpu.h | 24 ++++++++++++++++
 include/linux/dma-buf.h    |  6 ++++
 kernel/cgroup/gpu.c        | 51 ++++++++++++++++++++++++++++++++++
 4 files changed, 138 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index bc89c44bd9b9..f3fb844925e2 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1341,6 +1341,63 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
 }
 EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF);

+/**
+ * dma_buf_transfer_charge - Change the GPU cgroup to which the provided dma_buf is charged.
+ * @dmabuf:	[in]	buffer whose charge will be migrated to a different GPU cgroup
+ * @target:	[in]	the task_struct of the destination process for the GPU cgroup charge
+ *
+ * Only tasks that belong to the same cgroup the buffer is currently charged to
+ * may call this function, otherwise it will return -EPERM.
+ *
+ * Returns 0 on success, or a negative errno code otherwise.
+ */
+int dma_buf_transfer_charge(struct dma_buf *dmabuf, struct task_struct *target)
+{
+	struct gpucg *current_gpucg, *target_gpucg, *to_release;
+	int ret;
+
+	if (!dmabuf->gpucg || !dmabuf->gpucg_bucket) {
+		/* This dmabuf is not tracked under GPU cgroup accounting */
+		return 0;
+	}
+
+	current_gpucg = gpucg_get(current);
+	target_gpucg = gpucg_get(target);
+	to_release = target_gpucg;
+
+	/* If the source and destination cgroups are the same, don't do anything. */
+	if (current_gpucg == target_gpucg) {
+		ret = 0;
+		goto skip_transfer;
+	}
+
+	/*
+	 * Verify that the cgroup of the process requesting the transfer
+	 * is the same as the one the buffer is currently charged to.
+	 */
+	mutex_lock(&dmabuf->lock);
+	if (current_gpucg != dmabuf->gpucg) {
+		ret = -EPERM;
+		goto err;
+	}
+
+	ret = gpucg_transfer_charge(
+		dmabuf->gpucg, target_gpucg, dmabuf->gpucg_bucket, dmabuf->size);
+	if (ret)
+		goto err;
+
+	to_release = dmabuf->gpucg;
+	dmabuf->gpucg = target_gpucg;
+
+err:
+	mutex_unlock(&dmabuf->lock);
+skip_transfer:
+	gpucg_put(current_gpucg);
+	gpucg_put(to_release);
+	return ret;
+}
+EXPORT_SYMBOL_NS_GPL(dma_buf_transfer_charge, DMA_BUF);
+
 #ifdef CONFIG_DEBUG_FS
 static int dma_buf_debug_show(struct seq_file *s, void *unused)
 {
diff --git a/include/linux/cgroup_gpu.h b/include/linux/cgroup_gpu.h
index cb228a16aa1f..7eb68f1507fb 100644
--- a/include/linux/cgroup_gpu.h
+++ b/include/linux/cgroup_gpu.h
@@ -75,6 +75,22 @@ int gpucg_charge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size);
  */
 void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size);

+/**
+ * gpucg_transfer_charge - Transfer a GPU charge from one cgroup to another.
+ *
+ * @source:	[in]	The GPU cgroup the charge will be transferred from.
+ * @dest:	[in]	The GPU cgroup the charge will be transferred to.
+ * @bucket:	[in]	The GPU cgroup bucket corresponding to the charge.
+ * @size:	[in]	The size of the memory in bytes.
+ *			This size will be rounded up to the nearest page size.
+ *
+ * Returns 0 on success, or a negative errno code otherwise.
+ */
+int gpucg_transfer_charge(struct gpucg *source,
+			  struct gpucg *dest,
+			  struct gpucg_bucket *bucket,
+			  u64 size);
+
 /**
  * gpucg_register_bucket - Registers a bucket for memory accounting using the GPU cgroup controller.
  *
@@ -117,6 +133,14 @@ static inline void gpucg_uncharge(struct gpucg *gpucg,
 					  struct gpucg_bucket *bucket,
 					  u64 size) {}

+static inline int gpucg_transfer_charge(struct gpucg *source,
+					struct gpucg *dest,
+					struct gpucg_bucket *bucket,
+					u64 size)
+{
+	return 0;
+}
+
 static inline struct gpucg_bucket *gpucg_register_bucket(const char *name) {}
 #endif /* CONFIG_CGROUP_GPU */
 #endif /* _CGROUP_GPU_H */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 8e7c55c830b3..438ad8577b76 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -650,9 +651,14 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
 void dma_buf_exp_info_set_gpucg(struct dma_buf_export_info *exp_info,
 				struct gpucg *gpucg,
 				struct gpucg_bucket *gpucg_bucket);
+
+int dma_buf_transfer_charge(struct dma_buf *dmabuf, struct task_struct *target);
 #else /* CONFIG_CGROUP_GPU */
 static inline void dma_buf_exp_info_set_gpucg(struct dma_buf_export_info *exp_info,
 					      struct gpucg *gpucg,
 					      struct gpucg_bucket *gpucg_bucket) {}
+
+static inline int dma_buf_transfer_charge(struct dma_buf *dmabuf, struct task_struct *target)
+{ return 0; }
 #endif /* CONFIG_CGROUP_GPU */
 #endif /* __DMA_BUF_H__ */
diff --git a/kernel/cgroup/gpu.c b/kernel/cgroup/gpu.c
index ad16ea15d427..038ea873a9d3 100644
--- a/kernel/cgroup/gpu.c
+++ b/kernel/cgroup/gpu.c
@@ -274,6 +274,57 @@ void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size)
 	css_put(&gpucg->css);
 }

+int gpucg_transfer_charge(struct gpucg *source,
+			  struct gpucg *dest,
+			  struct gpucg_bucket *bucket,
+			  u64 size)
+{
+	struct page_counter *counter;
+	u64 nr_pages;
+	struct gpucg_resource_pool *rp_source, *rp_dest;
+	int ret = 0;
+
+	nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	mutex_lock(&gpucg_mutex);
+	rp_source = cg_rpool_find_locked(source, bucket);
+	if (unlikely(!rp_source)) {
+		ret = -ENOENT;
+		goto exit_early;
+	}
+
+	rp_dest = cg_rpool_get_locked(dest, bucket);
+	if (IS_ERR(rp_dest)) {
+		ret = PTR_ERR(rp_dest);
+		goto exit_early;
+	}
+
+	/*
+	 * First uncharge from the pool it's currently charged to. This ordering avoids double
+	 * charging while the transfer is in progress, which could cause us to hit a limit.
+	 * If the try_charge fails for this transfer, we need to be able to reverse this uncharge,
+	 * so we continue to hold the gpucg_mutex here.
+	 */
+	page_counter_uncharge(&rp_source->total, nr_pages);
+	css_put(&source->css);
+
+	/* Now attempt the new charge */
+	if (page_counter_try_charge(&rp_dest->total, nr_pages, &counter)) {
+		css_get(&dest->css);
+	} else {
+		/*
+		 * The new charge failed, so reverse the uncharge from above. This should always
+		 * succeed since charges on source are blocked by gpucg_mutex.
+		 */
+		WARN_ON(!page_counter_try_charge(&rp_source->total, nr_pages, &counter));
+		css_get(&source->css);
+		ret = -ENOMEM;
+	}
+exit_early:
+	mutex_unlock(&gpucg_mutex);
+	return ret;
+}
+
 struct gpucg_bucket *gpucg_register_bucket(const char *name)
 {
 	struct gpucg_bucket *bucket, *b;
-- 
2.36.0.512.ge40c2bad7a-goog

From nobody Sun May 10 09:54:41 2026
Date: Tue, 10 May 2022 23:56:49 +0000
In-Reply-To: <20220510235653.933868-1-tjmercier@google.com>
Message-Id: <20220510235653.933868-6-tjmercier@google.com>
References: <20220510235653.933868-1-tjmercier@google.com>
Subject: [PATCH v7 5/6] binder: Add flags to relinquish ownership of fds
From: "T.J. Mercier"
To: tjmercier@google.com, Greg Kroah-Hartman , "Arve Hjønnevåg" ,
 Todd Kjos , Martijn Coenen , Joel Fernandes , Christian Brauner ,
 Hridya Valsaraju , Suren Baghdasaryan , Sumit Semwal ,
 "Christian König"
Cc: daniel@ffwll.ch, tj@kernel.org, jstultz@google.com,
 cmllamas@google.com, kaleshsingh@google.com, Kenny.Ho@amd.com,
 mkoutny@suse.com, skhan@linuxfoundation.org, kernel-team@android.com,
 linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Content-Type: text/plain; charset="utf-8"

From: Hridya Valsaraju

This patch introduces the flags BINDER_FD_FLAG_XFER_CHARGE and
BINDER_FDA_FLAG_XFER_CHARGE, which a process sending an individual fd or
fd array to another process over binder IPC can set to relinquish
ownership of the fds being sent for memory accounting purposes. If the
flag is found to be set during the fd or fd array translation and the
fd is for a DMA-BUF, the buffer is uncharged from the sender's cgroup
and charged to the receiving process's cgroup instead.

It is up to the sending process to ensure that it closes the fds
regardless of whether the transfer failed or succeeded.

Most graphics shared memory allocations in Android are done by the
graphics allocator HAL process. On requests from clients, the HAL
process allocates memory and sends the fds to the clients over binder
IPC. The graphics allocator HAL will not retain any references to the
buffers.
When the HAL sets *_FLAG_XFER_CHARGE for fd arrays holding DMA-BUF fds,
or for individual fd objects, the gpu cgroup controller will be able to
correctly charge the buffers to the client processes instead of the
graphics allocator HAL.

Since this is a new feature exposed to userspace, the kernel and
userspace must be compatible for the accounting to work for transfers.
In all cases the allocation and transport of DMA buffers via binder
will succeed, but the transfer accounting works only when the kernel
supports this feature and userspace depends on it. The possible
scenarios are detailed below:

1. new kernel + old userspace
The kernel supports the feature but userspace does not use it. The old
userspace won't mount the new cgroup controller, accounting is not
performed, charge is not transferred.

2. old kernel + new userspace
The new cgroup controller is not supported by the kernel, accounting is
not performed, charge is not transferred.

3. old kernel + old userspace
Same as #2

4. new kernel + new userspace
Cgroup is mounted, feature is supported and used.

Signed-off-by: Hridya Valsaraju
Signed-off-by: T.J. Mercier
---
v6 changes
Rename BINDER_FD{A}_FLAG_SENDER_NO_NEED -> BINDER_FD{A}_FLAG_XFER_CHARGE
per Carlos Llamas.

Return error on transfer failure per Carlos Llamas.

v5 changes
Support both binder_fd_array_object and binder_fd_object. This is
necessary because new versions of Android will use binder_fd_object
instead of binder_fd_array_object, and we need to support both.

Use the new, simpler dma_buf_transfer_charge API.

v3 changes
Remove android from title per Todd Kjos.

Use more common dual author commit message format per John Stultz.

Include details on behavior for all combinations of kernel/userspace
versions in changelog (thanks Suren Baghdasaryan) per Greg Kroah-Hartman.

v2 changes
Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König.
---
 drivers/android/binder.c            | 31 +++++++++++++++++++++++++----
 drivers/dma-buf/dma-buf.c           |  4 ++--
 include/linux/dma-buf.h             |  2 +-
 include/uapi/linux/android/binder.h | 23 +++++++++++++++++----
 4 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 8351c5638880..1f39b24498f1 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -42,6 +42,7 @@

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

+#include
 #include
 #include
 #include
@@ -2170,7 +2171,7 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 	return ret;
 }

-static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
+static int binder_translate_fd(u32 fd, binder_size_t fd_offset, __u32 flags,
 			       struct binder_transaction *t,
 			       struct binder_thread *thread,
 			       struct binder_transaction *in_reply_to)
@@ -2208,6 +2209,26 @@ static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
 		goto err_security;
 	}

+	if (IS_ENABLED(CONFIG_CGROUP_GPU) && (flags & BINDER_FD_FLAG_XFER_CHARGE)) {
+		struct dma_buf *dmabuf;
+
+		if (!is_dma_buf_file(file)) {
+			binder_user_error(
+				"%d:%d got transaction with XFER_CHARGE for non-dmabuf fd, %d\n",
+				proc->pid, thread->pid, fd);
+			ret = -EINVAL;
+			goto err_dmabuf;
+		}
+
+		dmabuf = file->private_data;
+		ret = dma_buf_transfer_charge(dmabuf, target_proc->tsk);
+		if (ret) {
+			pr_warn("%d:%d Unable to transfer DMA-BUF fd charge to %d\n",
+				proc->pid, thread->pid, target_proc->pid);
+			goto err_xfer;
+		}
+	}
+
 	/*
	 * Add fixup record for this transaction. The allocation
	 * of the fd in the target needs to be done from a
@@ -2226,6 +2247,8 @@ static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
 	return ret;

 err_alloc:
+err_xfer:
+err_dmabuf:
 err_security:
 	fput(file);
 err_fget:
@@ -2528,7 +2551,7 @@ static int binder_translate_fd_array(struct list_head *pf_head,

 	ret = copy_from_user(&fd, sender_ufda_base + sender_uoffset, sizeof(fd));
 	if (!ret)
-		ret = binder_translate_fd(fd, offset, t, thread,
+		ret = binder_translate_fd(fd, offset, fda->flags, t, thread,
 					  in_reply_to);
 	if (ret)
 		return ret > 0 ? -EINVAL : ret;
@@ -3179,8 +3202,8 @@ static void binder_transaction(struct binder_proc *proc,
 		struct binder_fd_object *fp = to_binder_fd_object(hdr);
 		binder_size_t fd_offset = object_offset +
 			(uintptr_t)&fp->fd - (uintptr_t)fp;
-		int ret = binder_translate_fd(fp->fd, fd_offset, t,
-					      thread, in_reply_to);
+		int ret = binder_translate_fd(fp->fd, fd_offset, fp->flags,
+					      t, thread, in_reply_to);

 		fp->pad_binder = 0;
 		if (ret < 0 ||
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f3fb844925e2..36ed6cd4ddcc 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -31,7 +31,6 @@

 #include "dma-buf-sysfs-stats.h"

-static inline int is_dma_buf_file(struct file *);

 struct dma_buf_list {
 	struct list_head head;
@@ -400,10 +399,11 @@ static const struct file_operations dma_buf_fops = {
 /*
  * is_dma_buf_file - Check if struct file* is associated with dma_buf
  */
-static inline int is_dma_buf_file(struct file *file)
+int is_dma_buf_file(struct file *file)
 {
 	return file->f_op == &dma_buf_fops;
 }
+EXPORT_SYMBOL_NS_GPL(is_dma_buf_file, DMA_BUF);

 static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 438ad8577b76..2b9812758fee 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -614,7 +614,7 @@ dma_buf_attachment_is_dynamic(struct dma_buf_attachment *attach)
 {
 	return !!attach->importer_ops;
 }
-
+int is_dma_buf_file(struct file *file);
 struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
 					  struct device *dev);
 struct dma_buf_attachment *
diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
index 11157fae8a8e..d17e791b38ab 100644
--- a/include/uapi/linux/android/binder.h
+++ b/include/uapi/linux/android/binder.h
@@ -91,14 +91,14 @@ struct flat_binder_object {
 /**
  * struct binder_fd_object - describes a filedescriptor to be fixed up.
  * @hdr:	common header structure
- * @pad_flags:	padding to remain compatible with old userspace code
+ * @flags:	One or more BINDER_FD_FLAG_* flags
  * @pad_binder:	padding to remain compatible with old userspace code
  * @fd:		file descriptor
  * @cookie:	opaque data, used by user-space
  */
 struct binder_fd_object {
 	struct binder_object_header hdr;
-	__u32 pad_flags;
+	__u32 flags;
 	union {
 		binder_uintptr_t pad_binder;
 		__u32 fd;
@@ -107,6 +107,17 @@ struct binder_fd_object {
 	binder_uintptr_t cookie;
 };

+enum {
+	/**
+	 * @BINDER_FD_FLAG_XFER_CHARGE
+	 *
+	 * When set, the sender of a binder_fd_object wishes to relinquish ownership of the fd for
+	 * memory accounting purposes. If the fd is for a DMA-BUF, the buffer is uncharged from the
+	 * sender's cgroup and charged to the receiving process's cgroup instead.
+	 */
+	BINDER_FD_FLAG_XFER_CHARGE = 0x2000,
+};
+
 /* struct binder_buffer_object - object describing a userspace buffer
  * @hdr:	common header structure
  * @flags:	one or more BINDER_BUFFER_* flags
@@ -141,7 +152,7 @@ enum {

 /* struct binder_fd_array_object - object describing an array of fds in a buffer
  * @hdr:		common header structure
- * @pad:		padding to ensure correct alignment
+ * @flags:		One or more BINDER_FDA_FLAG_* flags
  * @num_fds:		number of file descriptors in the buffer
  * @parent:		index in offset array to buffer holding the fd array
  * @parent_offset:	start offset of fd array in the buffer
@@ -162,12 +173,16 @@ enum {
  */
 struct binder_fd_array_object {
 	struct binder_object_header hdr;
-	__u32 pad;
+	__u32 flags;
 	binder_size_t num_fds;
 	binder_size_t parent;
 	binder_size_t parent_offset;
 };

+enum {
+	BINDER_FDA_FLAG_XFER_CHARGE = BINDER_FD_FLAG_XFER_CHARGE,
+};
+
 /*
  * On 64-bit platforms where user code may run in 32-bits the driver must
  * translate the buffer (and local binder) addresses appropriately.
-- 
2.36.0.512.ge40c2bad7a-goog

Date: Tue, 10 May 2022 23:56:50 +0000
In-Reply-To: <20220510235653.933868-1-tjmercier@google.com>
Message-Id: <20220510235653.933868-7-tjmercier@google.com>
Subject: [PATCH v7 6/6] selftests: Add binder cgroup gpu memory transfer tests
From: "T.J. Mercier" <tjmercier@google.com>

These tests verify that the cgroup GPU memory charge is transferred
correctly when a dmabuf is passed between processes in two different
cgroups and the sender specifies BINDER_FD_FLAG_XFER_CHARGE or
BINDER_FDA_FLAG_XFER_CHARGE in the binder transaction data containing
the dmabuf file descriptor.

Signed-off-by: T.J. Mercier
---
v6 changes
Rename BINDER_FD{A}_FLAG_SENDER_NO_NEED -> BINDER_FD{A}_FLAG_XFER_CHARGE per Carlos Llamas.

v5 changes
Tests for both binder_fd_array_object and binder_fd_object.
Return error code instead of struct binder{fs}_ctx.
Use ifdef __ANDROID__ to choose platform-dependent temp path instead of a runtime fallback.
Ensure binderfs_mntpt ends with a trailing '/' character instead of prepending it where used.

v4 changes
Skip test if not run as root per Shuah Khan.
Add better logging for abnormal child termination per Shuah Khan.
---
 .../selftests/drivers/android/binder/Makefile |   8 +
 .../drivers/android/binder/binder_util.c      | 250 +++++++++
 .../drivers/android/binder/binder_util.h      |  32 ++
 .../selftests/drivers/android/binder/config   |   4 +
 .../binder/test_dmabuf_cgroup_transfer.c      | 526 ++++++++++++++++++
 5 files changed, 820 insertions(+)
 create mode 100644 tools/testing/selftests/drivers/android/binder/Makefile
 create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.c
 create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.h
 create mode 100644 tools/testing/selftests/drivers/android/binder/config
 create mode 100644 tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c

diff --git a/tools/testing/selftests/drivers/android/binder/Makefile b/tools/testing/selftests/drivers/android/binder/Makefile
new file mode 100644
index 000000000000..726439d10675
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -Wall
+
+TEST_GEN_PROGS = test_dmabuf_cgroup_transfer
+
+include ../../../lib.mk
+
+$(OUTPUT)/test_dmabuf_cgroup_transfer: ../../../cgroup/cgroup_util.c binder_util.c
diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.c b/tools/testing/selftests/drivers/android/binder/binder_util.c
new file mode 100644
index 000000000000..cdd97cb0bb60
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/binder_util.c
@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "binder_util.h"
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+static const size_t BINDER_MMAP_SIZE = 64 * 1024;
+
+static void binderfs_unmount(const char *mountpoint)
+{
+	if (umount2(mountpoint, MNT_DETACH))
+		fprintf(stderr, "Failed to unmount binderfs at %s: %s\n",
+			mountpoint, strerror(errno));
+	else
+		fprintf(stderr, "Binderfs unmounted: %s\n", mountpoint);
+
+	if (rmdir(mountpoint))
+		fprintf(stderr, "Failed to remove binderfs mount %s: %s\n",
+			mountpoint, strerror(errno));
+	else
+		fprintf(stderr, "Binderfs mountpoint destroyed: %s\n", mountpoint);
+}
+
+int create_binderfs(struct binderfs_ctx *ctx, const char *name)
+{
+	int fd, ret, saved_errno;
+	struct binderfs_device device = { 0 };
+
+	/*
+	 * P_tmpdir is set to "/tmp/" on Android platforms where Binder is most commonly used, but
+	 * this path does not actually exist on Android. For Android we'll try using
+	 * "/data/local/tmp" and P_tmpdir for non-Android platforms.
+	 *
+	 * This mount point should have a trailing '/' character, but mkdtemp requires that the last
+	 * six characters (before the first null terminator) must be "XXXXXX". Manually append an
+	 * additional null character in the string literal to allocate a character array of the
+	 * correct final size, which we will replace with a '/' after successful completion of the
+	 * mkdtemp call.
+	 */
+#ifdef __ANDROID__
+	char binderfs_mntpt[] = "/data/local/tmp/binderfs_XXXXXX\0";
+#else
+	/*
+	 * P_tmpdir may or may not contain a trailing '/' separator. We always append one here.
+	 */
+	char binderfs_mntpt[] = P_tmpdir "/binderfs_XXXXXX\0";
+#endif
+	static const char BINDER_CONTROL_NAME[] = "binder-control";
+	char device_path[strlen(binderfs_mntpt) + 1 + strlen(BINDER_CONTROL_NAME) + 1];
+
+	if (mkdtemp(binderfs_mntpt) == NULL) {
+		fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s.\n",
+			binderfs_mntpt, strerror(errno));
+		return -1;
+	}
+	binderfs_mntpt[strlen(binderfs_mntpt)] = '/';
+	fprintf(stderr, "Binderfs mountpoint created at %s\n", binderfs_mntpt);
+
+	if (mount(NULL, binderfs_mntpt, "binder", 0, 0)) {
+		perror("Could not mount binderfs");
+		rmdir(binderfs_mntpt);
+		return -1;
+	}
+	fprintf(stderr, "Binderfs mounted at %s\n", binderfs_mntpt);
+
+	strncpy(device.name, name, sizeof(device.name));
+	snprintf(device_path, sizeof(device_path), "%s%s", binderfs_mntpt, BINDER_CONTROL_NAME);
+	fd = open(device_path, O_RDONLY | O_CLOEXEC);
+	if (fd < 0) {
+		fprintf(stderr, "Failed to open %s device", BINDER_CONTROL_NAME);
+		binderfs_unmount(binderfs_mntpt);
+		return -1;
+	}
+
+	ret = ioctl(fd, BINDER_CTL_ADD, &device);
+	saved_errno = errno;
+	close(fd);
+	errno = saved_errno;
+	if (ret) {
+		perror("Failed to allocate new binder device");
+		binderfs_unmount(binderfs_mntpt);
+		return -1;
+	}
+
+	fprintf(stderr, "Allocated new binder device with major %d, minor %d, and name %s at %s\n",
+		device.major, device.minor, device.name, binderfs_mntpt);
+
+	ctx->name = strdup(name);
+	ctx->mountpoint = strdup(binderfs_mntpt);
+
+	return 0;
+}
+
+void destroy_binderfs(struct binderfs_ctx *ctx)
+{
+	char path[PATH_MAX];
+
+	snprintf(path, sizeof(path), "%s%s", ctx->mountpoint, ctx->name);
+
+	if (unlink(path))
+		fprintf(stderr, "Failed to unlink binder device %s: %s\n", path, strerror(errno));
+	else
+		fprintf(stderr, "Destroyed binder %s at %s\n", ctx->name, ctx->mountpoint);
+
+	binderfs_unmount(ctx->mountpoint);
+
+	free(ctx->name);
+	free(ctx->mountpoint);
+}
+
+int open_binder(const struct binderfs_ctx *bfs_ctx, struct binder_ctx *ctx)
+{
+	char path[PATH_MAX];
+
+	snprintf(path, sizeof(path), "%s%s", bfs_ctx->mountpoint, bfs_ctx->name);
+	ctx->fd = open(path, O_RDWR | O_NONBLOCK | O_CLOEXEC);
+	if (ctx->fd < 0) {
+		fprintf(stderr, "Error opening binder device %s: %s\n", path, strerror(errno));
+		return -1;
+	}
+
+	ctx->memory = mmap(NULL, BINDER_MMAP_SIZE, PROT_READ, MAP_SHARED, ctx->fd, 0);
+	if (ctx->memory == MAP_FAILED) {
+		perror("Error mapping binder memory");
+		close(ctx->fd);
+		ctx->fd = -1;
+		return -1;
+	}
+
+	return 0;
+}
+
+void close_binder(struct binder_ctx *ctx)
+{
+	if (munmap(ctx->memory, BINDER_MMAP_SIZE))
+		perror("Failed to unmap binder memory");
+	ctx->memory = NULL;
+
+	if (close(ctx->fd))
+		perror("Failed to close binder");
+	ctx->fd = -1;
+}
+
+int become_binder_context_manager(int binder_fd)
+{
+	return ioctl(binder_fd, BINDER_SET_CONTEXT_MGR, 0);
+}
+
+int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
+			 void *readbuf, binder_size_t readsize)
+{
+	int err;
+	struct binder_write_read bwr = {
+		.write_buffer = (binder_uintptr_t)writebuf,
+		.write_size = writesize,
+		.read_buffer = (binder_uintptr_t)readbuf,
+		.read_size = readsize
+	};
+
+	do {
+		if (ioctl(binder_fd, BINDER_WRITE_READ, &bwr) >= 0)
+			err = 0;
+		else
+			err = -errno;
+	} while (err == -EINTR);
+
+	if (err < 0) {
+		perror("BINDER_WRITE_READ");
+		return -1;
+	}
+
+	if (bwr.write_consumed < writesize) {
+		fprintf(stderr, "Binder did not consume full write buffer %llu %llu\n",
+			bwr.write_consumed, writesize);
+		return -1;
+	}
+
+	return bwr.read_consumed;
+}
+
+static const char *reply_string(int cmd)
+{
+	switch (cmd) {
+	case BR_ERROR:
+		return "BR_ERROR";
+	case BR_OK:
+		return "BR_OK";
+	case BR_TRANSACTION_SEC_CTX:
+		return "BR_TRANSACTION_SEC_CTX";
+	case BR_TRANSACTION:
+		return "BR_TRANSACTION";
+	case BR_REPLY:
+		return "BR_REPLY";
+	case BR_ACQUIRE_RESULT:
+		return "BR_ACQUIRE_RESULT";
+	case BR_DEAD_REPLY:
+		return "BR_DEAD_REPLY";
+	case BR_TRANSACTION_COMPLETE:
+		return "BR_TRANSACTION_COMPLETE";
+	case BR_INCREFS:
+		return "BR_INCREFS";
+	case BR_ACQUIRE:
+		return "BR_ACQUIRE";
+	case BR_RELEASE:
+		return "BR_RELEASE";
+	case BR_DECREFS:
+		return "BR_DECREFS";
+	case BR_ATTEMPT_ACQUIRE:
+		return "BR_ATTEMPT_ACQUIRE";
+	case BR_NOOP:
+		return "BR_NOOP";
+	case BR_SPAWN_LOOPER:
+		return "BR_SPAWN_LOOPER";
+	case BR_FINISHED:
+		return "BR_FINISHED";
+	case BR_DEAD_BINDER:
+		return "BR_DEAD_BINDER";
+	case BR_CLEAR_DEATH_NOTIFICATION_DONE:
+		return "BR_CLEAR_DEATH_NOTIFICATION_DONE";
+	case BR_FAILED_REPLY:
+		return "BR_FAILED_REPLY";
+	case BR_FROZEN_REPLY:
+		return "BR_FROZEN_REPLY";
+	case BR_ONEWAY_SPAM_SUSPECT:
+		return "BR_ONEWAY_SPAM_SUSPECT";
+	default:
+		return "Unknown";
+	};
+}
+
+int expect_binder_reply(int32_t actual, int32_t expected)
+{
+	if (actual != expected) {
+		fprintf(stderr, "Expected %s but received %s\n",
+			reply_string(expected), reply_string(actual));
+		return -1;
+	}
+	return 0;
+}
+
diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.h b/tools/testing/selftests/drivers/android/binder/binder_util.h
new file mode 100644
index 000000000000..adc2b20e8d0a
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/binder_util.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef SELFTEST_BINDER_UTIL_H
+#define SELFTEST_BINDER_UTIL_H
+
+#include
+
+#include
+
+struct binderfs_ctx {
+	char *name;
+	char *mountpoint;
+};
+
+struct binder_ctx {
+	int fd;
+	void *memory;
+};
+
+int create_binderfs(struct binderfs_ctx *ctx, const char *name);
+void destroy_binderfs(struct binderfs_ctx *ctx);
+
+int open_binder(const struct binderfs_ctx *bfs_ctx, struct binder_ctx *ctx);
+void close_binder(struct binder_ctx *ctx);
+
+int become_binder_context_manager(int binder_fd);
+
+int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
+			 void *readbuf, binder_size_t readsize);
+
+int expect_binder_reply(int32_t actual, int32_t expected);
+#endif
diff --git a/tools/testing/selftests/drivers/android/binder/config b/tools/testing/selftests/drivers/android/binder/config
new file mode 100644
index 000000000000..fcc5f8f693b3
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/config
@@ -0,0 +1,4 @@
+CONFIG_CGROUP_GPU=y
+CONFIG_ANDROID=y
+CONFIG_ANDROID_BINDERFS=y
+CONFIG_ANDROID_BINDER_IPC=y
diff --git a/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
new file mode 100644
index 000000000000..4d468c1dc4e3
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
@@ -0,0 +1,526 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * These tests verify that the cgroup GPU memory charge is transferred correctly when a dmabuf is
+ * passed between processes in two different cgroups and the sender specifies
+ * BINDER_FD_FLAG_XFER_CHARGE or BINDER_FDA_FLAG_XFER_CHARGE in the binder transaction data
+ * containing the dmabuf file descriptor.
+ *
+ * The parent test process becomes the binder context manager, then forks a child who initiates a
+ * transaction with the context manager by specifying a target of 0. The context manager reply
+ * contains a dmabuf file descriptor (or an array of one file descriptor) which was allocated by the
+ * parent, but should be charged to the child cgroup after the binder transaction.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "binder_util.h"
+#include "../../../cgroup/cgroup_util.h"
+#include "../../../kselftest.h"
+#include "../../../kselftest_harness.h"
+
+#include
+#include
+#include
+
+#define UNUSED(x) ((void)(x))
+
+static const unsigned int BINDER_CODE = 8675309; /* Any number will work here */
+
+struct cgroup_ctx {
+	char *root;
+	char *source;
+	char *dest;
+};
+
+void destroy_cgroups(struct __test_metadata *_metadata, struct cgroup_ctx *ctx)
+{
+	if (ctx->source != NULL) {
+		TH_LOG("Destroying cgroup: %s", ctx->source);
+		rmdir(ctx->source);
+		free(ctx->source);
+	}
+
+	if (ctx->dest != NULL) {
+		TH_LOG("Destroying cgroup: %s", ctx->dest);
+		rmdir(ctx->dest);
+		free(ctx->dest);
+	}
+
+	free(ctx->root);
+	ctx->root = ctx->source = ctx->dest = NULL;
+}
+
+struct cgroup_ctx create_cgroups(struct __test_metadata *_metadata)
+{
+	struct cgroup_ctx ctx = {0};
+	char root[PATH_MAX], *tmp;
+	static const char template[] = "/gpucg_XXXXXX";
+
+	if (cg_find_unified_root(root, sizeof(root))) {
+		TH_LOG("Could not find cgroups root");
+		return ctx;
+	}
+
+	if (cg_read_strstr(root, "cgroup.controllers", "gpu")) {
+		TH_LOG("Could not find GPU controller");
+		return ctx;
+	}
+
+	if (cg_write(root, "cgroup.subtree_control", "+gpu")) {
+		TH_LOG("Could not enable GPU controller");
+		return ctx;
+	}
+
+	ctx.root = strdup(root);
+
+	snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
+	tmp = mkdtemp(root);
+	if (tmp == NULL) {
+		TH_LOG("%s - Could not create source cgroup", strerror(errno));
+		destroy_cgroups(_metadata, &ctx);
+		return ctx;
+	}
+	ctx.source = strdup(tmp);
+
+	snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
+	tmp = mkdtemp(root);
+	if (tmp == NULL) {
+		TH_LOG("%s - Could not create destination cgroup", strerror(errno));
+		destroy_cgroups(_metadata, &ctx);
+		return ctx;
+	}
+	ctx.dest = strdup(tmp);
+
+	TH_LOG("Created cgroups: %s %s", ctx.source, ctx.dest);
+
+	return ctx;
+}
+
+int dmabuf_heap_alloc(int fd, size_t len, int *dmabuf_fd)
+{
+	struct dma_heap_allocation_data data = {
+		.len = len,
+		.fd = 0,
+		.fd_flags = O_RDONLY | O_CLOEXEC,
+		.heap_flags = 0,
+	};
+	int ret;
+
+	if (!dmabuf_fd)
+		return -EINVAL;
+
+	ret = ioctl(fd, DMA_HEAP_IOCTL_ALLOC, &data);
+	if (ret < 0)
+		return ret;
+	*dmabuf_fd = (int)data.fd;
+	return ret;
+}
+
+/* The system heap is known to export dmabufs with support for cgroup tracking */
+int alloc_dmabuf_from_system_heap(struct __test_metadata *_metadata, size_t bytes)
+{
+	int heap_fd = -1, dmabuf_fd = -1;
+	static const char * const heap_path = "/dev/dma_heap/system";
+
+	heap_fd = open(heap_path, O_RDONLY);
+	if (heap_fd < 0) {
+		TH_LOG("%s - open %s failed!\n", strerror(errno), heap_path);
+		return -1;
+	}
+
+	if (dmabuf_heap_alloc(heap_fd, bytes, &dmabuf_fd))
+		TH_LOG("dmabuf allocation failed! - %s", strerror(errno));
+	close(heap_fd);
+
+	return dmabuf_fd;
+}
+
+int binder_request_dmabuf(int binder_fd)
+{
+	int ret;
+
+	/*
+	 * We just send an empty binder_buffer_object to initiate a transaction
+	 * with the context manager, who should respond with a single dmabuf
+	 * inside a binder_fd_array_object or a binder_fd_object.
+	 */
+
+	struct binder_buffer_object bbo = {
+		.hdr.type = BINDER_TYPE_PTR,
+		.flags = 0,
+		.buffer = 0,
+		.length = 0,
+		.parent = 0, /* No parent */
+		.parent_offset = 0 /* No parent */
+	};
+
+	binder_size_t offsets[] = {0};
+
+	struct {
+		int32_t cmd;
+		struct binder_transaction_data btd;
+	} __attribute__((packed)) bc = {
+		.cmd = BC_TRANSACTION,
+		.btd = {
+			.target = { 0 },
+			.cookie = 0,
+			.code = BINDER_CODE,
+			.flags = TF_ACCEPT_FDS, /* We expect a FD/FDA in the reply */
+			.data_size = sizeof(bbo),
+			.offsets_size = sizeof(offsets),
+			.data.ptr = {
+				(binder_uintptr_t)&bbo,
+				(binder_uintptr_t)offsets
+			}
+		},
+	};
+
+	struct {
+		int32_t reply_noop;
+	} __attribute__((packed)) br;
+
+	ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
+	if (ret >= sizeof(br) && expect_binder_reply(br.reply_noop, BR_NOOP)) {
+		return -1;
+	} else if (ret < sizeof(br)) {
+		fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
+		return -1;
+	}
+	return 0;
+}
+
+int send_dmabuf_reply_fda(int binder_fd, struct binder_transaction_data *tr, int dmabuf_fd)
+{
+	int ret;
+	/*
+	 * The trailing 0 is to achieve the necessary alignment for the binder
+	 * buffer_size.
+	 */
+	int fdarray[] = { dmabuf_fd, 0 };
+
+	struct binder_buffer_object bbo = {
+		.hdr.type = BINDER_TYPE_PTR,
+		.flags = 0,
+		.buffer = (binder_uintptr_t)fdarray,
+		.length = sizeof(fdarray),
+		.parent = 0, /* No parent */
+		.parent_offset = 0 /* No parent */
+	};
+
+	struct binder_fd_array_object bfdao = {
+		.hdr.type = BINDER_TYPE_FDA,
+		.flags = BINDER_FDA_FLAG_XFER_CHARGE,
+		.num_fds = 1,
+		.parent = 0, /* The binder_buffer_object */
+		.parent_offset = 0 /* FDs follow immediately */
+	};
+
+	uint64_t sz = sizeof(fdarray);
+	uint8_t data[sizeof(sz) + sizeof(bbo) + sizeof(bfdao)];
+	binder_size_t offsets[] = {sizeof(sz), sizeof(sz) + sizeof(bbo)};
+
+	memcpy(data, &sz, sizeof(sz));
+	memcpy(data + sizeof(sz), &bbo, sizeof(bbo));
+	memcpy(data + sizeof(sz) + sizeof(bbo), &bfdao, sizeof(bfdao));
+
+	struct {
+		int32_t cmd;
+		struct binder_transaction_data_sg btd;
+	} __attribute__((packed)) bc = {
+		.cmd = BC_REPLY_SG,
+		.btd.transaction_data = {
+			.target = { tr->target.handle },
+			.cookie = tr->cookie,
+			.code = BINDER_CODE,
+			.flags = 0,
+			.data_size = sizeof(data),
+			.offsets_size = sizeof(offsets),
+			.data.ptr = {
+				(binder_uintptr_t)data,
+				(binder_uintptr_t)offsets
+			}
+		},
+		.btd.buffers_size = sizeof(fdarray)
+	};
+
+	struct {
+		int32_t reply_noop;
+	} __attribute__((packed)) br;
+
+	ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
+	if (ret >= sizeof(br) && expect_binder_reply(br.reply_noop, BR_NOOP)) {
+		return -1;
+	} else if (ret < sizeof(br)) {
+		fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
+		return -1;
+	}
+	return 0;
+}
+
+int send_dmabuf_reply_fd(int binder_fd, struct binder_transaction_data *tr, int dmabuf_fd)
+{
+	int ret;
+
+	struct binder_fd_object bfdo = {
+		.hdr.type = BINDER_TYPE_FD,
+		.flags = BINDER_FD_FLAG_XFER_CHARGE,
+		.fd = dmabuf_fd
+	};
+
+	binder_size_t offset = 0;
+
+	struct {
+		int32_t cmd;
+		struct binder_transaction_data btd;
+	} __attribute__((packed)) bc = {
+		.cmd = BC_REPLY,
+		.btd = {
+			.target = { tr->target.handle },
+			.cookie = tr->cookie,
+			.code = BINDER_CODE,
+			.flags = 0,
+			.data_size = sizeof(bfdo),
+			.offsets_size = sizeof(offset),
+			.data.ptr = {
+				(binder_uintptr_t)&bfdo,
+				(binder_uintptr_t)&offset
+			}
+		}
+	};
+
+	struct {
+		int32_t reply_noop;
+	} __attribute__((packed)) br;
+
+	ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
+	if (ret >= sizeof(br) && expect_binder_reply(br.reply_noop, BR_NOOP)) {
+		return -1;
+	} else if (ret < sizeof(br)) {
+		fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
+		return -1;
+	}
+	return 0;
+}
+
+struct binder_transaction_data *binder_wait_for_transaction(int binder_fd,
+							    uint32_t *readbuf,
+							    size_t readsize)
+{
+	static const int MAX_EVENTS = 1, EPOLL_WAIT_TIME_MS = 3 * 1000;
+	struct binder_reply {
+		int32_t reply0;
+		int32_t reply1;
+		struct binder_transaction_data btd;
+	} *br;
+	struct binder_transaction_data *ret = NULL;
+	struct epoll_event events[MAX_EVENTS];
+	int epoll_fd, num_events, readcount;
+	uint32_t bc[] = { BC_ENTER_LOOPER };
+
+	do_binder_write_read(binder_fd, &bc, sizeof(bc), NULL, 0);
+
+	epoll_fd = epoll_create1(EPOLL_CLOEXEC);
+	if (epoll_fd == -1) {
+		perror("epoll_create");
+		return NULL;
+	}
+
+	events[0].events = EPOLLIN;
+	if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, binder_fd, &events[0])) {
+		perror("epoll_ctl add");
+		goto err_close;
+	}
+
+	num_events = epoll_wait(epoll_fd, events, MAX_EVENTS, EPOLL_WAIT_TIME_MS);
+	if (num_events < 0) {
+		perror("epoll_wait");
+		goto err_ctl;
+	} else if (num_events == 0) {
+		fprintf(stderr, "No events\n");
+		goto err_ctl;
+	}
+
+	readcount = do_binder_write_read(binder_fd, NULL, 0, readbuf, readsize);
+	fprintf(stderr, "Read %d bytes from binder\n", readcount);
+
+	if (readcount < (int)sizeof(struct binder_reply)) {
+		fprintf(stderr, "read_consumed not large enough\n");
+		goto err_ctl;
+	}
+
+	br = (struct binder_reply *)readbuf;
+	if (expect_binder_reply(br->reply0, BR_NOOP))
+		goto err_ctl;
+
+	if (br->reply1 == BR_TRANSACTION) {
+		if (br->btd.code == BINDER_CODE)
+			ret = &br->btd;
+		else
+			fprintf(stderr, "Received transaction with unexpected code: %u\n",
+				br->btd.code);
+	} else {
+		expect_binder_reply(br->reply1, BR_TRANSACTION_COMPLETE);
+	}
+
+err_ctl:
+	if (epoll_ctl(epoll_fd, EPOLL_CTL_DEL, binder_fd, NULL))
+		perror("epoll_ctl del");
+err_close:
+	close(epoll_fd);
+	return ret;
+}
+
+static int child_request_dmabuf_transfer(const char *cgroup, void *arg)
+{
+	UNUSED(cgroup);
+	int ret = -1;
+	uint32_t readbuf[32];
+	struct binderfs_ctx bfs_ctx = *(struct binderfs_ctx *)arg;
+	struct binder_ctx b_ctx;
+
+	fprintf(stderr, "Child PID: %d\n", getpid());
+
+	if (open_binder(&bfs_ctx, &b_ctx)) {
+		fprintf(stderr, "Child unable to open binder\n");
+		return -1;
+	}
+
+	if (binder_request_dmabuf(b_ctx.fd))
+		goto err;
+
+	/* The child must stay alive until the binder reply is received */
+	if (binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf)) == NULL)
+		ret = 0;
+
+	/*
+	 * We don't close the received dmabuf here so that the parent can
+	 * inspect the cgroup gpu memory charges to verify the charge transfer
+	 * completed successfully.
+	 */
+err:
+	close_binder(&b_ctx);
+	fprintf(stderr, "Child done\n");
+	return ret;
+}
+
+static const char * const GPUMEM_FILENAME = "gpu.memory.current";
+static const size_t ONE_MiB = 1024 * 1024;
+
+FIXTURE(fix) {
+	int dmabuf_fd;
+	struct binderfs_ctx bfs_ctx;
+	struct binder_ctx b_ctx;
+	struct cgroup_ctx cg_ctx;
+	struct binder_transaction_data *tr;
+	pid_t child_pid;
+};
+
+FIXTURE_SETUP(fix)
+{
+	long memsize;
+	uint32_t readbuf[32];
+	struct flat_binder_object *fbo;
+	struct binder_buffer_object *bbo;
+
+	if (geteuid() != 0)
+		ksft_exit_skip("Need to be root to mount binderfs\n");
+
+	if (create_binderfs(&self->bfs_ctx, "testbinder"))
+		ksft_exit_skip("The Android binderfs filesystem is not available\n");
+
+	self->cg_ctx = create_cgroups(_metadata);
+	if (self->cg_ctx.root == NULL) {
+		destroy_binderfs(&self->bfs_ctx);
+		ksft_exit_skip("cgroup v2 isn't mounted\n");
+	}
+
+	ASSERT_EQ(cg_enter_current(self->cg_ctx.source), 0) {
+		TH_LOG("Could not move parent to cgroup: %s", self->cg_ctx.source);
+	}
+
+	self->dmabuf_fd = alloc_dmabuf_from_system_heap(_metadata, ONE_MiB);
+	ASSERT_GE(self->dmabuf_fd, 0);
+	TH_LOG("Allocated dmabuf");
+
+	memsize = cg_read_key_long(self->cg_ctx.source, GPUMEM_FILENAME, "system-heap");
+	ASSERT_EQ(memsize, ONE_MiB) {
+		TH_LOG("GPU memory used after allocation: %ld but it should be %lu",
+		       memsize, (unsigned long)ONE_MiB);
+	}
+
+	ASSERT_EQ(open_binder(&self->bfs_ctx, &self->b_ctx), 0) {
+		TH_LOG("Parent unable to open binder");
+	}
+	TH_LOG("Opened binder at %s/%s", self->bfs_ctx.mountpoint, self->bfs_ctx.name);
+
+	ASSERT_EQ(become_binder_context_manager(self->b_ctx.fd), 0) {
+		TH_LOG("Cannot become context manager: %s", strerror(errno));
+	}
+
+	self->child_pid = cg_run_nowait(
+		self->cg_ctx.dest, child_request_dmabuf_transfer, &self->bfs_ctx);
+	ASSERT_GT(self->child_pid, 0) {
+		TH_LOG("Error forking: %s", strerror(errno));
+	}
+
+	self->tr = binder_wait_for_transaction(self->b_ctx.fd, readbuf, sizeof(readbuf));
+	ASSERT_NE(self->tr, NULL) {
+		TH_LOG("Error receiving transaction request from child");
+	}
+	fbo = (struct flat_binder_object *)self->tr->data.ptr.buffer;
+	ASSERT_EQ(fbo->hdr.type, BINDER_TYPE_PTR) {
+		TH_LOG("Did not receive a buffer object from child");
+	}
+	bbo = (struct binder_buffer_object *)fbo;
+	ASSERT_EQ(bbo->length, 0) {
+		TH_LOG("Did not receive an empty buffer object from child");
+	}
+
+	TH_LOG("Received transaction from child");
+}
+
+FIXTURE_TEARDOWN(fix)
+{
+	close_binder(&self->b_ctx);
+	close(self->dmabuf_fd);
+	destroy_cgroups(_metadata, &self->cg_ctx);
+	destroy_binderfs(&self->bfs_ctx);
+}
+
+void verify_transfer_success(struct _test_data_fix *self, struct __test_metadata *_metadata)
+{
+	ASSERT_EQ(cg_read_key_long(self->cg_ctx.dest, GPUMEM_FILENAME, "system-heap"), ONE_MiB) {
+		TH_LOG("Destination cgroup does not have system-heap charge!");
+	}
+	ASSERT_EQ(cg_read_key_long(self->cg_ctx.source, GPUMEM_FILENAME, "system-heap"), 0) {
+		TH_LOG("Source cgroup still has system-heap charge!");
+	}
+	TH_LOG("Charge transfer succeeded!");
+}
+
+TEST_F(fix, individual_fd)
+{
+	send_dmabuf_reply_fd(self->b_ctx.fd, self->tr, self->dmabuf_fd);
+	verify_transfer_success(self, _metadata);
+}
+
+TEST_F(fix, fd_array)
+{
+	send_dmabuf_reply_fda(self->b_ctx.fd, self->tr, self->dmabuf_fd);
+	verify_transfer_success(self, _metadata);
+}
+
+TEST_HARNESS_MAIN
-- 
2.36.0.512.ge40c2bad7a-goog