From nobody Sun Feb 8 14:37:49 2026
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v16 01/20] drm/shmem-helper: Fix UAF in error path when freeing SGT of imported GEM
Date: Sun, 3 Sep 2023 20:07:17 +0300
Message-ID: <20230903170736.513347-2-dmitry.osipenko@collabora.com>
In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com>

Freeing a drm-shmem GEM right after creating it with
drm_gem_shmem_prime_import_sg_table() frees the SGT of the imported
dma-buf, and then the dma-buf core frees the same SGT a second time.
v3d_prime_import_sg_table() is an example of an error code path where
the dma-buf's SGT is first freed by drm-shmem and then freed again by
dma_buf_unmap_attachment() in drm_gem_prime_import_dev(). Add a
drm-shmem GEM flag that marks the SGT as imported, so that it isn't
treated as the GEM's own SGT, fixing the use-after-free bug.

Cc: stable@vger.kernel.org
Fixes: 2194a63a818d ("drm: Add library for shmem backed GEM objects")
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 13 ++++++++++++-
 include/drm/drm_gem_shmem_helper.h     |  7 +++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index e435f986cd13..6693d4061ca1 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -141,7 +141,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)

 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
-	} else {
+	} else if (!shmem->imported_sgt) {
 		dma_resv_lock(shmem->base.resv, NULL);

 		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
@@ -765,6 +765,17 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,

 	shmem->sgt = sgt;

+	/*
+	 * drm_gem_shmem_prime_import_sg_table() can be called from a
+	 * driver-specific ->import_sg_table() implementation that may
+	 * fail; in that case drm_gem_shmem_free() will be invoked
+	 * without an assigned drm_gem_object::import_attach.
+	 *
+	 * This flag lets drm_gem_shmem_free() differentiate whether the
+	 * SGT belongs to the dma-buf and shall not be freed by drm-shmem.
+	 */
+	shmem->imported_sgt = true;
+
 	drm_dbg_prime(dev, "size = %zu\n", size);

 	return &shmem->base;
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index bf0c31aa8fbe..ec70a98a8fe1 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -73,6 +73,13 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int vmap_use_count;

+	/**
+	 * @imported_sgt:
+	 *
+	 * True if the SG table belongs to an imported dma-buf.
+	 */
+	bool imported_sgt : 1;
+
 	/**
	 * @pages_mark_dirty_on_put:
	 *
-- 
2.41.0
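
For orientation, the double-free path this patch closes, sketched as an
annotated call flow (the v3d functions are the example named in the
commit message; the flow annotations are editorial, not literal kernel
code):

	/*
	 * drm_gem_prime_import_dev():
	 *     sgt = dma_buf_map_attachment(attach, ...);
	 *     obj = driver->gem_prime_import_sg_table(dev, attach, sgt);
	 *       -> v3d_prime_import_sg_table()
	 *            -> drm_gem_shmem_prime_import_sg_table()  // shmem->sgt = sgt
	 *            -> v3d_bo_create_finish() fails
	 *            -> drm_gem_shmem_free()
	 *               // obj->import_attach is not assigned yet, so the
	 *               // helper frees shmem->sgt as if it owned it (1st free)
	 *     if (IS_ERR(obj))
	 *             goto fail_unmap;
	 * fail_unmap:
	 *     dma_buf_unmap_attachment(attach, sgt, ...);      // 2nd free
	 *
	 * With shmem->imported_sgt set, drm_gem_shmem_free() skips the SGT
	 * and only the dma-buf unmap path releases it.
	 */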
From nobody Sun Feb 8 14:37:49 2026
From: Dmitry Osipenko
Subject: [PATCH v16 02/20] drm/shmem-helper: Use flag for tracking page count bumped by get_pages_sgt()
Date: Sun, 3 Sep 2023 20:07:18 +0300
Message-ID: <20230903170736.513347-3-dmitry.osipenko@collabora.com>
In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com>

Use a separate flag for tracking the page count bumped by shmem->sgt,
to avoid an imbalanced page counter at drm_gem_shmem_free() time. It
is fragile to assume that populated shmem->pages at freeing time means
the count was bumped by drm_gem_shmem_get_pages_sgt(); using a flag
removes the ambiguity.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 11 ++++++++++-
 drivers/gpu/drm/lima/lima_gem.c        |  1 +
 include/drm/drm_gem_shmem_helper.h     |  7 +++++++
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 6693d4061ca1..848435e08eb2 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -152,8 +152,10 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
-		if (shmem->pages)
+		if (shmem->pages) {
 			drm_gem_shmem_put_pages(shmem);
+			drm_WARN_ON(obj->dev, !shmem->got_pages_sgt);
+		}

 		drm_WARN_ON(obj->dev, shmem->pages_use_count);

@@ -693,6 +695,13 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	if (ret)
 		goto err_free_sgt;

+	/*
+	 * This flag prevents an imbalanced pages_use_count during
+	 * drm_gem_shmem_free(), where pages_use_count=1 only if
+	 * drm_gem_shmem_get_pages_sgt() was used by a driver.
+	 */
+	shmem->got_pages_sgt = true;
+
 	shmem->sgt = sgt;

 	return sgt;
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 4f9736e5f929..67c39b95e30e 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -48,6 +48,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)

 		bo->base.pages = pages;
 		bo->base.pages_use_count = 1;
+		bo->base.got_pages_sgt = true;

 		mapping_set_unevictable(mapping);
 	}
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index ec70a98a8fe1..a53c0874b3c4 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -73,6 +73,13 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int vmap_use_count;

+	/**
+	 * @got_pages_sgt:
+	 *
+	 * True if the SG table was retrieved using drm_gem_shmem_get_pages_sgt().
+	 */
+	bool got_pages_sgt : 1;
+
 	/**
	 * @imported_sgt:
	 *
-- 
2.41.0
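
The lima hunk shows the rule for drivers that populate pages by hand; a
minimal hedged sketch of that obligation (the function below is
hypothetical, the field names are from the diff):

	/* A driver that installs its own pages instead of calling
	 * drm_gem_shmem_get_pages_sgt() must also set got_pages_sgt,
	 * mirroring lima_heap_alloc(), so that drm_gem_shmem_free()
	 * finds the page counter in the state it expects. */
	static void example_adopt_pages(struct drm_gem_shmem_object *shmem,
					struct page **pages)
	{
		shmem->pages = pages;
		shmem->pages_use_count = 1;
		shmem->got_pages_sgt = true;
	}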
From nobody Sun Feb 8 14:37:49 2026
From: Dmitry Osipenko
Subject: [PATCH v16 03/20] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names
Date: Sun, 3 Sep 2023 20:07:19 +0300
Message-ID: <20230903170736.513347-4-dmitry.osipenko@collabora.com>
In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com>

Make the drm/gem API function names consistent by having the locked
functions use a _locked postfix, while the unlocked variants carry no
postfix. Rename the drm_gem_v/unmap() functions accordingly to match
the rest of the API.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_client.c                 |  6 +++---
 drivers/gpu/drm/drm_gem.c                    | 20 ++++++++++----------
 drivers/gpu/drm/drm_gem_framebuffer_helper.c |  6 +++---
 drivers/gpu/drm/drm_internal.h               |  4 ++--
 drivers/gpu/drm/drm_prime.c                  |  4 ++--
 drivers/gpu/drm/lima/lima_sched.c            |  4 ++--
 drivers/gpu/drm/panfrost/panfrost_dump.c     |  4 ++--
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c  |  6 +++---
 include/drm/drm_gem.h                        |  4 ++--
 9 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 037e36f2049c..29306657117a 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -265,7 +265,7 @@ void drm_client_dev_restore(struct drm_device *dev)
 static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	if (buffer->gem) {
-		drm_gem_vunmap_unlocked(buffer->gem, &buffer->map);
+		drm_gem_vunmap(buffer->gem, &buffer->map);
 		drm_gem_object_put(buffer->gem);
 	}

@@ -349,7 +349,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer,
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	ret = drm_gem_vmap_unlocked(buffer->gem, map);
+	ret = drm_gem_vmap(buffer->gem, map);
 	if (ret)
 		return ret;

@@ -371,7 +371,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
 	struct iosys_map *map = &buffer->map;

-	drm_gem_vunmap_unlocked(buffer->gem, map);
+	drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 6129b89bb366..fae5832bb0bd 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1173,7 +1173,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->funcs->unpin(obj);
 }

-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	int ret;

@@ -1190,9 +1190,9 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)

 	return 0;
 }
-EXPORT_SYMBOL(drm_gem_vmap);
+EXPORT_SYMBOL(drm_gem_vmap_locked);

-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	dma_resv_assert_held(obj->resv);

@@ -1205,27 +1205,27 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 	/* Always set the mapping to NULL. Callers may rely on this. */
 	iosys_map_clear(map);
 }
-EXPORT_SYMBOL(drm_gem_vunmap);
+EXPORT_SYMBOL(drm_gem_vunmap_locked);

-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	int ret;

 	dma_resv_lock(obj->resv, NULL);
-	ret = drm_gem_vmap(obj, map);
+	ret = drm_gem_vmap_locked(obj, map);
 	dma_resv_unlock(obj->resv);

 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_vmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vmap);

-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	dma_resv_lock(obj->resv, NULL);
-	drm_gem_vunmap(obj, map);
+	drm_gem_vunmap_locked(obj, map);
 	dma_resv_unlock(obj->resv);
 }
-EXPORT_SYMBOL(drm_gem_vunmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vunmap);

 /**
  * drm_gem_lock_reservations - Sets up the ww context and acquires
diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 3bdb6ba37ff4..3808f47310bf 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -362,7 +362,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
 			ret = -EINVAL;
 			goto err_drm_gem_vunmap;
 		}
-		ret = drm_gem_vmap_unlocked(obj, &map[i]);
+		ret = drm_gem_vmap(obj, &map[i]);
 		if (ret)
 			goto err_drm_gem_vunmap;
 	}
@@ -384,7 +384,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
 		obj = drm_gem_fb_get_obj(fb, i);
 		if (!obj)
 			continue;
-		drm_gem_vunmap_unlocked(obj, &map[i]);
+		drm_gem_vunmap(obj, &map[i]);
 	}
 	return ret;
 }
@@ -411,7 +411,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, struct iosys_map *map)
 			continue;
 		if (iosys_map_is_null(&map[i]))
 			continue;
-		drm_gem_vunmap_unlocked(obj, &map[i]);
+		drm_gem_vunmap(obj, &map[i]);
 	}
 }
 EXPORT_SYMBOL(drm_gem_fb_vunmap);
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index ba12acd55139..243d9aa52881 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -175,8 +175,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,

 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map);

 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 63b709a67471..57ac5623f09a 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -682,7 +682,7 @@ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;

-	return drm_gem_vmap(obj, map);
+	return drm_gem_vmap_locked(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);

@@ -698,7 +698,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;

-	drm_gem_vunmap(obj, map);
+	drm_gem_vunmap_locked(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);

diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index ffd91a5ee299..843487128544 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -371,7 +371,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);

-			ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
+			ret = drm_gem_vmap(&bo->base.base, &map);
 			if (ret) {
 				kvfree(et);
 				goto out;
@@ -379,7 +379,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)

 			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);

-			drm_gem_vunmap_unlocked(&bo->base.base, &map);
+			drm_gem_vunmap(&bo->base.base, &map);
 		}

 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/panfrost/panfrost_dump.c b/drivers/gpu/drm/panfrost/panfrost_dump.c
index e7942ac449c6..0f30bbea9895 100644
--- a/drivers/gpu/drm/panfrost/panfrost_dump.c
+++ b/drivers/gpu/drm/panfrost/panfrost_dump.c
@@ -209,7 +209,7 @@ void panfrost_core_dump(struct panfrost_job *job)
 			goto dump_header;
 		}

-		ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
+		ret = drm_gem_vmap(&bo->base.base, &map);
 		if (ret) {
 			dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n");
 			iter.hdr->bomap.valid = 0;
@@ -236,7 +236,7 @@ void panfrost_core_dump(struct panfrost_job *job)
 		vaddr = map.vaddr;
 		memcpy(iter.data, vaddr, bo->base.base.size);

-		drm_gem_vunmap_unlocked(&bo->base.base, &map);
+		drm_gem_vunmap(&bo->base.base, &map);

 		iter.hdr->bomap.valid = 1;

diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index ba9b6e2b2636..52befead08c6 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -106,7 +106,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}

-	ret = drm_gem_vmap_unlocked(&bo->base, &map);
+	ret = drm_gem_vmap(&bo->base, &map);
 	if (ret)
 		goto err_put_mapping;
 	perfcnt->buf = map.vaddr;
@@ -165,7 +165,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;

 err_vunmap:
-	drm_gem_vunmap_unlocked(&bo->base, &map);
+	drm_gem_vunmap(&bo->base, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -195,7 +195,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 			  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));

 	perfcnt->user = NULL;
-	drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map);
+	drm_gem_vunmap(&perfcnt->mapping->obj->base.base, &map);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index bc9f6aa2f3fe..110a9c0ea42b 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -518,8 +518,8 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		       bool dirty, bool accessed);

-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);

 int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
 			   int count, struct drm_gem_object ***objs_out);
-- 
2.41.0
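
The resulting convention, as a hedged usage sketch (the two wrappers
below are hypothetical; note that after this patch the _locked variants
are declared in drm_internal.h, i.e. they are for DRM core code, not
for drivers):

	#include <linux/dma-resv.h>
	#include <drm/drm_gem.h>

	static int map_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
	{
		/* Plain name: takes and drops obj->resv internally. */
		return drm_gem_vmap(obj, map);
	}

	static int map_in_locked_section(struct drm_gem_object *obj,
					 struct iosys_map *map)
	{
		int ret;

		dma_resv_lock(obj->resv, NULL);
		/* _locked name: caller must satisfy dma_resv_assert_held(). */
		ret = drm_gem_vmap_locked(obj, map);
		dma_resv_unlock(obj->resv);

		return ret;
	}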
From nobody Sun Feb 8 14:37:49 2026
From: Dmitry Osipenko
Subject: [PATCH v16 04/20] drm/gem: Add _locked postfix to functions that have unlocked counterpart
Date: Sun, 3 Sep 2023 20:07:20 +0300
Message-ID: <20230903170736.513347-5-dmitry.osipenko@collabora.com>
In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com>

Add a _locked postfix to the drm_gem functions that have unlocked
counterparts, making GEM function naming more consistent and intuitive
with regard to locking requirements.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem.c | 6 +++---
 include/drm/drm_gem.h     | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index fae5832bb0bd..8c0268944199 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1488,10 +1488,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 EXPORT_SYMBOL(drm_gem_lru_scan);

 /**
- * drm_gem_evict - helper to evict backing pages for a GEM object
+ * drm_gem_evict_locked - helper to evict backing pages for a GEM object
  * @obj: obj in question
  */
-int drm_gem_evict(struct drm_gem_object *obj)
+int drm_gem_evict_locked(struct drm_gem_object *obj)
 {
 	dma_resv_assert_held(obj->resv);

@@ -1503,4 +1503,4 @@ int drm_gem_evict(struct drm_gem_object *obj)

 	return 0;
 }
-EXPORT_SYMBOL(drm_gem_evict);
+EXPORT_SYMBOL(drm_gem_evict_locked);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 110a9c0ea42b..d8dedb1f0968 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -542,7 +542,7 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
 			       unsigned long *remaining,
 			       bool (*shrink)(struct drm_gem_object *obj));

-int drm_gem_evict(struct drm_gem_object *obj);
+int drm_gem_evict_locked(struct drm_gem_object *obj);

 #ifdef CONFIG_LOCKDEP
 /**
-- 
2.41.0
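
A hedged caller sketch for the renamed helper (the callback is
hypothetical; drm_gem_lru_scan() is assumed to hold obj->resv when
invoking its shrink callback, which is what the _locked name now
signals):

	/* Shrinker scan callback: the LRU walker already holds the
	 * reservation lock, so the _locked eviction helper applies. */
	static bool example_shrink(struct drm_gem_object *obj)
	{
		return drm_gem_evict_locked(obj) == 0;
	}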
From nobody Sun Feb 8 14:37:49 2026
From: Dmitry Osipenko
Subject: [PATCH v16 05/20] drm/v3d: Replace open-coded drm_gem_shmem_free() with drm_gem_object_put()
Date: Sun, 3 Sep 2023 20:07:21 +0300
Message-ID: <20230903170736.513347-6-dmitry.osipenko@collabora.com>
In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com>

drm_gem_shmem_free() doesn't drop the GEM's kref to zero. This becomes
important with the addition of shrinker support to drm-shmem, which
will use kref=0 to avoid taking the reservation lock during GEM
freeing, preventing a spurious lockdep warning about lock ordering vs
the fs_reclaim code paths. Replace the open-coded drm_gem_shmem_free()
with drm_gem_object_put(), which drops the kref to zero before freeing
the GEM.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/v3d/v3d_bo.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index 8b3229a37c6d..70c1095d6eec 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -33,16 +33,18 @@ void v3d_free_object(struct drm_gem_object *obj)
 	struct v3d_dev *v3d = to_v3d_dev(obj->dev);
 	struct v3d_bo *bo = to_v3d_bo(obj);

-	v3d_mmu_remove_ptes(bo);
+	if (drm_mm_node_allocated(&bo->node)) {
+		v3d_mmu_remove_ptes(bo);

-	mutex_lock(&v3d->bo_lock);
-	v3d->bo_stats.num_allocated--;
-	v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT;
-	mutex_unlock(&v3d->bo_lock);
+		mutex_lock(&v3d->bo_lock);
+		v3d->bo_stats.num_allocated--;
+		v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT;
+		mutex_unlock(&v3d->bo_lock);

-	spin_lock(&v3d->mm_lock);
-	drm_mm_remove_node(&bo->node);
-	spin_unlock(&v3d->mm_lock);
+		spin_lock(&v3d->mm_lock);
+		drm_mm_remove_node(&bo->node);
+		spin_unlock(&v3d->mm_lock);
+	}

 	/* GPU execution may have dirtied any pages in the BO.
 	 */
 	bo->base.pages_mark_dirty_on_put = true;
@@ -142,7 +144,7 @@ struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv,
 	return bo;

 free_obj:
-	drm_gem_shmem_free(shmem_obj);
+	drm_gem_object_put(&shmem_obj->base);
 	return ERR_PTR(ret);
 }

@@ -160,7 +162,7 @@ v3d_prime_import_sg_table(struct drm_device *dev,

 	ret = v3d_bo_create_finish(obj);
 	if (ret) {
-		drm_gem_shmem_free(&to_v3d_bo(obj)->base);
+		drm_gem_object_put(obj);
 		return ERR_PTR(ret);
 	}

-- 
2.41.0
From nobody Sun Feb 8 14:37:49 2026
From: Dmitry Osipenko
Subject: [PATCH v16 06/20] drm/virtio: Replace drm_gem_shmem_free() with drm_gem_object_put()
Date: Sun, 3 Sep 2023 20:07:22 +0300
Message-ID: <20230903170736.513347-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com>

Prepare virtio_gpu_object_create() for the addition of memory-shrinker
support by replacing the open-coded drm_gem_shmem_free() with
drm_gem_object_put(), which decrements the GEM refcount to 0. This
becomes important for drm-shmem because it will start using the GEM
refcount during shmem BO freeing in order to prevent a spurious
lockdep warning about resv lock ordering vs the fs_reclaim code paths.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index c7e74cf13022..343b13428125 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -244,6 +244,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 err_put_id:
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 err_free_gem:
-	drm_gem_shmem_free(shmem_obj);
+	drm_gem_object_put(&bo->base.base);
 	return ret;
 }
-- 
2.41.0
From nobody Sun Feb 8 14:37:49 2026
From: Dmitry Osipenko
Subject: [PATCH v16 07/20] drm/shmem-helper: Make all exported symbols GPL
Date: Sun, 3 Sep 2023 20:07:23 +0300
Message-ID: <20230903170736.513347-8-dmitry.osipenko@collabora.com>
In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com>

Make all drm-shmem exported symbols GPL to make them consistent with
the rest of the drm-shmem symbols.

Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 848435e08eb2..5c777adf1bcb 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -228,7 +228,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 			  shmem->pages_mark_accessed_on_put);
 	shmem->pages = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);

 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -273,7 +273,7 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)

 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_pin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_pin);

 /**
  * drm_gem_shmem_unpin - Unpin backing pages for a shmem GEM object
@@ -292,7 +292,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 	drm_gem_shmem_unpin_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
 }
-EXPORT_SYMBOL(drm_gem_shmem_unpin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);

 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
@@ -362,7 +362,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,

 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);

 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
@@ -398,7 +398,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,

 	shmem->vaddr = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_vunmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);

 static int
 drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
@@ -437,7 +437,7 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)

 	return (madv >= 0);
 }
-EXPORT_SYMBOL(drm_gem_shmem_madvise);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);

 void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
@@ -469,7 +469,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)

 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);

 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -644,7 +644,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
-EXPORT_SYMBOL(drm_gem_shmem_print_info);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);

 /**
  * drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned
-- 
2.41.0
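
The two driver conversions above (patches 05 and 06) share one idea; a
hedged before/after sketch of the error-path change (the function is
hypothetical, the two free calls are from the diffs):

	/* Hypothetical creation error path, contrasting the two styles. */
	static struct drm_gem_object *
	example_create_finish(struct drm_gem_shmem_object *shmem_obj, int ret)
	{
		if (ret) {
			/* Old: drm_gem_shmem_free(shmem_obj) released the
			 * backing storage directly, so the GEM kref never
			 * passed through zero.
			 *
			 * New: drop the last reference; the kref=0 path then
			 * runs the free callback, which the upcoming shrinker
			 * support relies on to avoid resv-vs-fs_reclaim
			 * lockdep reports. */
			drm_gem_object_put(&shmem_obj->base);
			return ERR_PTR(ret);
		}

		return &shmem_obj->base;
	}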
From nobody Sun Feb 8 14:37:49 2026
From: Dmitry Osipenko
Subject: [PATCH v16 08/20] drm/shmem-helper: Refactor locked/unlocked functions
Date: Sun, 3 Sep 2023 20:07:24 +0300
Message-ID: <20230903170736.513347-9-dmitry.osipenko@collabora.com>
In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com>

Add the locked postfix to, and remove the unlocked postfix from,
drm-shmem function names, making the names consistent with the drm/gem
core code.
Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 64 +++++++++----------
 drivers/gpu/drm/lima/lima_gem.c               |  8 +--
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  2 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  6 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  2 +-
 drivers/gpu/drm/v3d/v3d_bo.c                  |  4 +-
 drivers/gpu/drm/virtio/virtgpu_object.c       |  4 +-
 include/drm/drm_gem_shmem_helper.h            | 36 +++++------
 9 files changed, 64 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5c777adf1bcb..2b50d1a7f718 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -43,8 +43,8 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
 	.pin = drm_gem_shmem_object_pin,
 	.unpin = drm_gem_shmem_object_unpin,
 	.get_sg_table = drm_gem_shmem_object_get_sg_table,
-	.vmap = drm_gem_shmem_object_vmap,
-	.vunmap = drm_gem_shmem_object_vunmap,
+	.vmap = drm_gem_shmem_object_vmap_locked,
+	.vunmap = drm_gem_shmem_object_vunmap_locked,
 	.mmap = drm_gem_shmem_object_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
 };
@@ -153,7 +153,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			kfree(shmem->sgt);
 		}
 		if (shmem->pages) {
-			drm_gem_shmem_put_pages(shmem);
+			drm_gem_shmem_put_pages_locked(shmem);
 			drm_WARN_ON(obj->dev, !shmem->got_pages_sgt);
 		}

@@ -167,7 +167,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);

-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
@@ -201,12 +201,12 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 }

 /*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
+ * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
  *
  * This function decreases the use count and puts the backing pages when use drops to zero.
  */
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;

@@ -228,7 +228,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 			  shmem->pages_mark_accessed_on_put);
 	shmem->pages = NULL;
 }
-EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);

 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -236,7 +236,7 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)

 	dma_resv_assert_held(shmem->base.resv);

-	ret = drm_gem_shmem_get_pages(shmem);
+	ret = drm_gem_shmem_get_pages_locked(shmem);

 	return ret;
 }
@@ -245,7 +245,7 @@ static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
 {
 	dma_resv_assert_held(shmem->base.resv);

-	drm_gem_shmem_put_pages(shmem);
+	drm_gem_shmem_put_pages_locked(shmem);
 }

 /**
@@ -295,7 +295,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);

 /*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
  * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
  *       store.
@@ -304,13 +304,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
  * exists for the buffer backing the shmem GEM object. It hides the differences
  * between dma-buf imported and natively allocated objects.
  *
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_locked().
  *
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
-		       struct iosys_map *map)
+int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+			      struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret = 0;
@@ -333,7 +333,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 		return 0;
 	}

-	ret = drm_gem_shmem_get_pages(shmem);
+	ret = drm_gem_shmem_get_pages_locked(shmem);
 	if (ret)
 		goto err_zero_use;

@@ -356,28 +356,28 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,

 err_put_pages:
 	if (!obj->import_attach)
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_put_pages_locked(shmem);
 err_zero_use:
 	shmem->vmap_use_count = 0;

 	return ret;
 }
-EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);

 /*
- * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vunmap_locked - Unmap a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
  * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
  * This function cleans up a kernel virtual address mapping acquired by
- * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- * zero.
+ * drm_gem_shmem_vmap_locked(). The mapping is only removed when the use count
+ * drops to zero.
  *
  * This function hides the differences between dma-buf imported and natively
  * allocated objects.
 */
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
-			  struct iosys_map *map)
+void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+				 struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;

@@ -393,12 +393,12 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			return;

 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_put_pages_locked(shmem);
 	}

 	shmem->vaddr = NULL;
 }
-EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap_locked);

 static int
 drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
@@ -426,7 +426,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 /* Update madvise status, returns true if not purged, else
  * false or -errno.
  */
-int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
+int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv)
 {
 	dma_resv_assert_held(shmem->base.resv);

@@ -437,9 +437,9 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)

 	return (madv >= 0);
 }
-EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked);

-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
@@ -453,7 +453,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;

-	drm_gem_shmem_put_pages(shmem);
+	drm_gem_shmem_put_pages_locked(shmem);

 	shmem->madv = -1;

@@ -469,7 +469,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)

 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
 }
-EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked);

 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -566,7 +566,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

 	dma_resv_lock(shmem->base.resv, NULL);
-	drm_gem_shmem_put_pages(shmem);
+	drm_gem_shmem_put_pages_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);

 	drm_gem_vm_close(vma);
@@ -613,7 +613,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 	}

 	dma_resv_lock(shmem->base.resv, NULL);
-	ret = drm_gem_shmem_get_pages(shmem);
+	ret = drm_gem_shmem_get_pages_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);

 	if (ret)
@@ -681,7 +681,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_

 	drm_WARN_ON(obj->dev, obj->import_attach);

-	ret = drm_gem_shmem_get_pages(shmem);
+	ret = drm_gem_shmem_get_pages_locked(shmem);
 	if (ret)
 		return ERR_PTR(ret);

@@ -710,7 +710,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	sg_free_table(sgt);
 	kfree(sgt);
 err_put_pages:
-	drm_gem_shmem_put_pages(shmem);
+	drm_gem_shmem_put_pages_locked(shmem);
 	return ERR_PTR(ret);
 }

diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 67c39b95e30e..ec8f718aa539 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -181,7 +181,7 @@ static int lima_gem_pin(struct drm_gem_object *obj)
 	if (bo->heap_size)
 		return -EINVAL;

-	return drm_gem_shmem_pin(&bo->base);
+	return drm_gem_shmem_object_pin(obj);
 }

 static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
@@ -191,7 +191,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 	if (bo->heap_size)
 		return -EINVAL;

-	return drm_gem_shmem_vmap(&bo->base, map);
+	return drm_gem_shmem_object_vmap_locked(obj, map);
 }

 static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
@@ -201,7 +201,7 @@ static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	if (bo->heap_size)
 		return -EINVAL;

-	return drm_gem_shmem_mmap(&bo->base, vma);
+	return drm_gem_shmem_object_mmap(obj, vma);
 }

 static const struct drm_gem_object_funcs lima_gem_funcs = {
@@ -213,7 +213,7 @@ static const struct drm_gem_object_funcs lima_gem_funcs = {
 	.unpin = drm_gem_shmem_object_unpin,
 	.get_sg_table = drm_gem_shmem_object_get_sg_table,
 	.vmap = lima_gem_vmap,
-	.vunmap = drm_gem_shmem_object_vunmap,
+	.vunmap = drm_gem_shmem_object_vunmap_locked,
 	.mmap = lima_gem_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
 };
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index a2ab99698ca8..175443eacead 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -436,7 +436,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 		}
 	}

-	args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
+	args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv);

 	if (args->retained) {
 		if (args->madv == PANFROST_MADV_DONTNEED)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 3c812fbd126f..59c8c73c6a59 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -192,7 +192,7 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
 	if (bo->is_heap)
 		return -EINVAL;

-	return drm_gem_shmem_pin(&bo->base);
+	return drm_gem_shmem_object_pin(obj);
 }

 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
@@ -203,8 +203,8 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.pin = panfrost_gem_pin,
 	.unpin = drm_gem_shmem_object_unpin,
 	.get_sg_table = drm_gem_shmem_object_get_sg_table,
-	.vmap = drm_gem_shmem_object_vmap,
-	.vunmap = drm_gem_shmem_object_vunmap,
+	.vmap = drm_gem_shmem_object_vmap_locked,
+	.vunmap = drm_gem_shmem_object_vunmap_locked,
 	.mmap = drm_gem_shmem_object_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
 };
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 6a71a2555f85..72193bd734e1 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -52,7 +52,7 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 		goto unlock_mappings;

 	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge(&bo->base);
+	drm_gem_shmem_purge_locked(&bo->base);
 	ret = true;

 	dma_resv_unlock(shmem->base.resv);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index c0123d09f699..7771769f0ce0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -535,7 +535,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 err_map:
 	sg_free_table(sgt);
 err_pages:
-	drm_gem_shmem_put_pages(&bo->base);
+	drm_gem_shmem_put_pages_locked(&bo->base);
 err_unlock:
 	dma_resv_unlock(obj->resv);
 err_bo:
diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index 70c1095d6eec..6a0e7b6177d5 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -58,8 +58,8 @@ static const struct drm_gem_object_funcs v3d_gem_funcs = {
 	.pin = drm_gem_shmem_object_pin,
 	.unpin = drm_gem_shmem_object_unpin,
 	.get_sg_table = drm_gem_shmem_object_get_sg_table,
-	.vmap = drm_gem_shmem_object_vmap,
-	.vunmap = drm_gem_shmem_object_vunmap,
+	.vmap = drm_gem_shmem_object_vmap_locked,
+	.vunmap = drm_gem_shmem_object_vunmap_locked,
 	.mmap = drm_gem_shmem_object_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
 };
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 343b13428125..97020ed56b81 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -106,8 +106,8 @@ static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
 	.pin = drm_gem_shmem_object_pin,
 	.unpin = drm_gem_shmem_object_unpin,
 	.get_sg_table = drm_gem_shmem_object_get_sg_table,
-	.vmap = drm_gem_shmem_object_vmap,
-	.vunmap = drm_gem_shmem_object_vunmap,
+	.vmap = drm_gem_shmem_object_vmap_locked,
+	.vunmap = drm_gem_shmem_object_vunmap_locked,
 	.mmap = drm_gem_shmem_object_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
 };
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index a53c0874b3c4..808083279fd5 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,16 +113,16 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);

-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
-		       struct iosys_map *map);
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
-			  struct iosys_map *map);
+int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+			      struct iosys_map *map);
+void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+				 struct iosys_map *map);
 int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);

-int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
+int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);

 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
@@ -131,7 +131,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
 		!shmem->base.dma_buf && !shmem->base.import_attach;
 }

-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);

 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
@@ -222,22 +222,22 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_
 }

 /*
- * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap()
+ * drm_gem_shmem_object_vmap_locked - GEM object function for drm_gem_shmem_vmap_locked()
  * @obj: GEM object
  * @map: Returns the kernel virtual address of the SHMEM GEM object's backing store.
  *
- * This function wraps drm_gem_shmem_vmap(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.vmap handler.
+ * This function wraps drm_gem_shmem_vmap_locked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.vmap handler.
  *
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
-					    struct iosys_map *map)
+static inline int drm_gem_shmem_object_vmap_locked(struct drm_gem_object *obj,
+						   struct iosys_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

-	return drm_gem_shmem_vmap(shmem, map);
+	return drm_gem_shmem_vmap_locked(shmem, map);
 }

 /*
@@ -245,15 +245,15 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
  * @obj: GEM object
  * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
- * This function wraps drm_gem_shmem_vunmap(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.vunmap handler.
+ * This function wraps drm_gem_shmem_vunmap_locked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.vunmap handler.
 */
-static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj,
-					       struct iosys_map *map)
+static inline void drm_gem_shmem_object_vunmap_locked(struct drm_gem_object *obj,
+						      struct iosys_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

-	drm_gem_shmem_vunmap(shmem, map);
+	drm_gem_shmem_vunmap_locked(shmem, map);
 }

 /**
-- 
2.41.0
(109-252-153-31.dynamic.spd-mgts.ru [109.252.153.31]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id C696266071E6; Sun, 3 Sep 2023 18:08:48 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1693760930; bh=i6QDpNWVz0jKAQf6/ciammmRahmRQ60i8wi2JYzFNWc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BKKReQTws4xxbVo0RarOX+HiexwSjqJcyoQ8W8j1AfvzGKH98XWUDh969MgmkBKLH X8Q9LDNUGE5S0suLC7DRl9wkcrZtfJ8WTwLF6x7sq2wDFztb60tYs/GT3lgrAHAZSE CKAUJUjDZsu+nUoCaaCsMxRsaSfOAIwi2Iqm7sQO5MCis6GOqXMdv5esiNwoTgUOoZ oM7/DLO1ld/yl6VlLRAAnTQEcoHMI+szRmEvw6BcfcWme/Gpn1nM3a22+okBehdXqb S3ZSpsJa7btaPC5spAGIizGLMrswazW5mHc0D2dWfvaqZFB5qHtVlqCchzCyhHo7il yMTpohJvjst0Q== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?UTF-8?q?Christian=20K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v16 09/20] drm/shmem-helper: Remove obsoleted is_iomem test Date: Sun, 3 Sep 2023 20:07:25 +0300 Message-ID: <20230903170736.513347-10-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com> References: <20230903170736.513347-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Everything that uses the mapped buffer should be agnostic to is_iomem. The only reason for the is_iomem test is that we're setting shmem->vaddr to the returned map->vaddr. Now that the shmem->vaddr code is gone, remove the obsoleted is_iomem test to clean up the code. 
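As an illustration of why callers can stay agnostic: a minimal, hypothetical driver-side sketch (example_fill_bo() and its error handling are assumptions, not code from this series) that touches the mapping only through the iosys_map accessors, which dispatch to memcpy() or memcpy_toio() depending on map->is_iomem:

#include <linux/dma-resv.h>
#include <linux/iosys-map.h>
#include <drm/drm_gem_shmem_helper.h>

static int example_fill_bo(struct drm_gem_shmem_object *shmem,
			   const void *src, size_t len)
{
	struct iosys_map map;
	int ret;

	dma_resv_lock(shmem->base.resv, NULL);

	ret = drm_gem_shmem_vmap_locked(shmem, &map);
	if (!ret) {
		/* Works for both system and I/O memory mappings, so the
		 * caller never needs to inspect map->is_iomem itself. */
		iosys_map_memcpy_to(&map, 0, src, len);
		drm_gem_shmem_vunmap_locked(shmem, &map);
	}

	dma_resv_unlock(shmem->base.resv);

	return ret;
}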
Suggested-by: Thomas Zimmermann Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index 2b50d1a7f718..25e99468ced2 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -317,12 +317,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_obj= ect *shmem, =20 if (obj->import_attach) { ret =3D dma_buf_vmap(obj->import_attach->dmabuf, map); - if (!ret) { - if (drm_WARN_ON(obj->dev, map->is_iomem)) { - dma_buf_vunmap(obj->import_attach->dmabuf, map); - return -EIO; - } - } } else { pgprot_t prot =3D PAGE_KERNEL; =20 --=20 2.41.0 From nobody Sun Feb 8 14:37:49 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 513EBCA0FE9 for ; Sun, 3 Sep 2023 17:09:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344962AbjICRJL (ORCPT ); Sun, 3 Sep 2023 13:09:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32976 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344661AbjICRJG (ORCPT ); Sun, 3 Sep 2023 13:09:06 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DD22B10C for ; Sun, 3 Sep 2023 10:08:52 -0700 (PDT) Received: from workpc.. (109-252-153-31.dynamic.spd-mgts.ru [109.252.153.31]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 4BDC366072B0; Sun, 3 Sep 2023 18:08:50 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1693760931; bh=7rk0oOgFvKXBqty2fz2L0StvvcIwJhpwP0k+0Avtm2g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QIfZEEyKjzA7iShhP4zkRA5JknETlKVn6+nz+MtfDEKHBfFCpRB7p3zb2IepRk0NF 89r0VrsTGcvp6b6HSABx6eJvQX3wyCRKvP5wUUhTTAJXRCNCPInNLYah3o6g6C+77e OHi4adamaGznLlZIUsJtzr6zgkJ7wvA45RD4AQ+en7K0ewaIh/jtAC4tKDax8XNhUf m14UkNtixI6eWr8puflP2BbPn6KqGs/mVIMB6Cq7RI51lDcKCZfP36SBS+N8nXlNi3 +PORZl54UM0A3eReaSu5GZiSzXB2Rvdc9mX3shWhf9me7BxLGcI6/+nOK6nzko+rYo ObtGwVniFG8/Q== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?UTF-8?q?Christian=20K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v16 10/20] drm/shmem-helper: Add and use pages_pin_count Date: Sun, 3 Sep 2023 20:07:26 +0300 Message-ID: <20230903170736.513347-11-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com> References: <20230903170736.513347-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add separate pages_pin_count for tracking of whether drm-shmem pages are moveable or not. 
With the addition of memory shrinker support to drm-shmem, the pages_use_count will no longer determine whether pages are hard-pinned in memory, but whether pages exist and are soft-pinned (and could be swapped out). A non-zero pages_pin_count will hard-pin pages in memory. Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 24 ++++++++++++++++-------- include/drm/drm_gem_shmem_helper.h | 10 ++++++++++ 2 files changed, 26 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index 25e99468ced2..7e1e674e2c9f 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -236,18 +236,16 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_sh= mem_object *shmem) =20 dma_resv_assert_held(shmem->base.resv); =20 + if (refcount_inc_not_zero(&shmem->pages_pin_count)) + return 0; + ret =3D drm_gem_shmem_get_pages_locked(shmem); + if (!ret) + refcount_set(&shmem->pages_pin_count, 1); =20 return ret; } =20 -static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem) -{ - dma_resv_assert_held(shmem->base.resv); - - drm_gem_shmem_put_pages_locked(shmem); -} - /** * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -265,6 +263,9 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shme= m) =20 drm_WARN_ON(obj->dev, obj->import_attach); =20 + if (refcount_inc_not_zero(&shmem->pages_pin_count)) + return 0; + ret =3D dma_resv_lock_interruptible(shmem->base.resv, NULL); if (ret) return ret; @@ -288,8 +289,14 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *= shmem) =20 drm_WARN_ON(obj->dev, obj->import_attach); =20 + if (refcount_dec_not_one(&shmem->pages_pin_count)) + return; + dma_resv_lock(shmem->base.resv, NULL); - drm_gem_shmem_unpin_locked(shmem); + + if (refcount_dec_and_test(&shmem->pages_pin_count)) + drm_gem_shmem_put_pages_locked(shmem); + dma_resv_unlock(shmem->base.resv); } EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); @@ -634,6 +641,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shme= m_object *shmem, if (shmem->base.import_attach) return; =20 + drm_printf_indent(p, indent, "pages_pin_count=3D%u\n", refcount_read(&shm= em->pages_pin_count)); drm_printf_indent(p, indent, "pages_use_count=3D%u\n", shmem->pages_use_c= ount); drm_printf_indent(p, indent, "vmap_use_count=3D%u\n", shmem->vmap_use_cou= nt); drm_printf_indent(p, indent, "vaddr=3D%p\n", shmem->vaddr); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem= _helper.h index 808083279fd5..1cd74ae5761a 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -39,6 +39,16 @@ struct drm_gem_shmem_object { */ unsigned int pages_use_count; =20 + /** + * @pages_pin_count: + * + * Reference count on the pinned pages table. + * The pages allowed to be evicted and purged by memory + * shrinker only when the count is zero, otherwise pages + * are hard-pinned in memory.
+ */ + refcount_t pages_pin_count; + /** * @madv: State for madvise * --=20 2.41.0 From nobody Sun Feb 8 14:37:49 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2B37C71153 for ; Sun, 3 Sep 2023 17:09:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344958AbjICRJO (ORCPT ); Sun, 3 Sep 2023 13:09:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33048 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345059AbjICRJL (ORCPT ); Sun, 3 Sep 2023 13:09:11 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CA57210DE for ; Sun, 3 Sep 2023 10:08:54 -0700 (PDT) Received: from workpc.. (109-252-153-31.dynamic.spd-mgts.ru [109.252.153.31]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id D8886660729F; Sun, 3 Sep 2023 18:08:51 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1693760933; bh=5WiiQ8wTCL/OUKI03grnn5Q6a9/8HBXj83/5G8VHb4Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HYEbkoG6ChpgC8NAe5R5gf6vXsSAEnTNE2Z3p11LgusJpwLxmd9IX+7J+Vrc5bHQB sqnDiaFxGhq2nXmrMMthlFi1qjLRPnOGZ33++6xlwkbnG6YViNEU8A62qAJ8VF8RIY AWsohCpuqQGnEgwGKdcDGiq+lAnzonSq+JvLvLxgknTsNe9GD0Zn1LIKx7ft08VJ+o u1k/c4p8/LamC5QXfMJU6VbdN6WViiabHTt8IsRw1uk5ahA8wVUyF7ML3A2KG0+5+Y 2EpajoSlBrlWKVdNODMQ2J5bOK7TyZTzb8ZWXWjHqF24Xl+Ef/+qaS594YNVThBOWx QXnh/W5liNETg== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?UTF-8?q?Christian=20K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v16 11/20] drm/shmem-helper: Use refcount_t for pages_use_count Date: Sun, 3 Sep 2023 20:07:27 +0300 Message-ID: <20230903170736.513347-12-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com> References: <20230903170736.513347-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Use atomic refcount_t helper for pages_use_count to optimize pin/unpin functions by skipping reservation locking while GEM's pin refcount > 1. 
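The lock-avoidance pattern being applied here, as a self-contained sketch (struct example_obj is a hypothetical stand-in using kzalloc/kfree as the managed resource, not the helper's actual code):

#include <linux/refcount.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct example_obj {
	struct mutex lock;	/* stands in for the reservation lock */
	refcount_t use_count;
	void *pages;
};

static int example_get(struct example_obj *obj)
{
	int ret = 0;

	/* Fast path: a reference is already held, so the resource exists
	 * and cannot go away; refcount_t makes this increment safe with
	 * no lock held (and it saturates on overflow instead of wrapping). */
	if (refcount_inc_not_zero(&obj->use_count))
		return 0;

	mutex_lock(&obj->lock);
	/* Re-check under the lock: another thread may have raced us. */
	if (!refcount_inc_not_zero(&obj->use_count)) {
		obj->pages = kzalloc(PAGE_SIZE, GFP_KERNEL); /* stand-in resource */
		if (obj->pages)
			refcount_set(&obj->use_count, 1);
		else
			ret = -ENOMEM;
	}
	mutex_unlock(&obj->lock);

	return ret;
}

static void example_put(struct example_obj *obj)
{
	/* Fast path: not the last reference, nothing to tear down. */
	if (refcount_dec_not_one(&obj->use_count))
		return;

	mutex_lock(&obj->lock);
	if (refcount_dec_and_test(&obj->use_count)) {
		kfree(obj->pages);
		obj->pages = NULL;
	}
	mutex_unlock(&obj->lock);
}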
Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 35 +++++++++++-------------- drivers/gpu/drm/lima/lima_gem.c | 2 +- drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +- include/drm/drm_gem_shmem_helper.h | 2 +- 4 files changed, 19 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index 7e1e674e2c9f..a0faef3e762d 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -152,12 +152,12 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *= shmem) sg_free_table(shmem->sgt); kfree(shmem->sgt); } - if (shmem->pages) { + if (refcount_read(&shmem->pages_use_count)) { drm_gem_shmem_put_pages_locked(shmem); drm_WARN_ON(obj->dev, !shmem->got_pages_sgt); } =20 - drm_WARN_ON(obj->dev, shmem->pages_use_count); + drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)); =20 dma_resv_unlock(shmem->base.resv); } @@ -174,14 +174,13 @@ static int drm_gem_shmem_get_pages_locked(struct drm_= gem_shmem_object *shmem) =20 dma_resv_assert_held(shmem->base.resv); =20 - if (shmem->pages_use_count++ > 0) + if (refcount_inc_not_zero(&shmem->pages_use_count)) return 0; =20 pages =3D drm_gem_get_pages(obj); if (IS_ERR(pages)) { drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n", PTR_ERR(pages)); - shmem->pages_use_count =3D 0; return PTR_ERR(pages); } =20 @@ -197,6 +196,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_ge= m_shmem_object *shmem) =20 shmem->pages =3D pages; =20 + refcount_set(&shmem->pages_use_count, 1); + return 0; } =20 @@ -212,21 +213,17 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_sh= mem_object *shmem) =20 dma_resv_assert_held(shmem->base.resv); =20 - if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count)) - return; - - if (--shmem->pages_use_count > 0) - return; - + if (refcount_dec_and_test(&shmem->pages_use_count)) { #ifdef CONFIG_X86 - if (shmem->map_wc) - set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); + if (shmem->map_wc) + set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); #endif =20 - drm_gem_put_pages(obj, shmem->pages, - shmem->pages_mark_dirty_on_put, - shmem->pages_mark_accessed_on_put); - shmem->pages =3D NULL; + drm_gem_put_pages(obj, shmem->pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + shmem->pages =3D NULL; + } } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); =20 @@ -553,8 +550,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct= *vma) * mmap'd, vm_open() just grabs an additional reference for the new * mm the vma is getting copied into (ie. on fork()). 
*/ - if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count)) - shmem->pages_use_count++; + drm_WARN_ON_ONCE(obj->dev, + !refcount_inc_not_zero(&shmem->pages_use_count)); =20 dma_resv_unlock(shmem->base.resv); =20 @@ -642,7 +639,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shme= m_object *shmem, return; =20 drm_printf_indent(p, indent, "pages_pin_count=3D%u\n", refcount_read(&shm= em->pages_pin_count)); - drm_printf_indent(p, indent, "pages_use_count=3D%u\n", shmem->pages_use_c= ount); + drm_printf_indent(p, indent, "pages_use_count=3D%u\n", refcount_read(&shm= em->pages_use_count)); drm_printf_indent(p, indent, "vmap_use_count=3D%u\n", shmem->vmap_use_cou= nt); drm_printf_indent(p, indent, "vaddr=3D%p\n", shmem->vaddr); } diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_ge= m.c index ec8f718aa539..4be2fccbf6d9 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -47,8 +47,8 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *v= m) } =20 bo->base.pages =3D pages; - bo->base.pages_use_count =3D 1; bo->base.got_pages_sgt =3D true; + refcount_set(&bo->base.pages_use_count, 1); =20 mapping_set_unevictable(mapping); } diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panf= rost/panfrost_mmu.c index 7771769f0ce0..a91252053aa3 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -487,7 +487,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_= device *pfdev, int as, goto err_unlock; } bo->base.pages =3D pages; - bo->base.pages_use_count =3D 1; + refcount_set(&bo->base.pages_use_count, 1); } else { pages =3D bo->base.pages; if (pages[page_offset]) { diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem= _helper.h index 1cd74ae5761a..bd545428a7ee 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -37,7 +37,7 @@ struct drm_gem_shmem_object { * Reference count on the pages table. * The pages are put when the count reaches zero. */ - unsigned int pages_use_count; + refcount_t pages_use_count; =20 /** * @pages_pin_count: --=20 2.41.0 From nobody Sun Feb 8 14:37:49 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28B84C83F3E for ; Sun, 3 Sep 2023 17:09:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344787AbjICRJT (ORCPT ); Sun, 3 Sep 2023 13:09:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41652 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345030AbjICRJR (ORCPT ); Sun, 3 Sep 2023 13:09:17 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0AD0F1711 for ; Sun, 3 Sep 2023 10:08:56 -0700 (PDT) Received: from workpc.. 
(109-252-153-31.dynamic.spd-mgts.ru [109.252.153.31]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 6B46C66072B5; Sun, 3 Sep 2023 18:08:53 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1693760934; bh=Beunc+mcO9wcz3oaeRK21W7oP8OXFEtJrBh9G3jJNXw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ISdjD8eE29K3nwkheXywz+1PkycH+ZX+hGqDg9w6Hl0jWQ+vGyjy+G9B+9Q92mrPn aVjxBK5qcdKRcU1nIVRi82VydWmhfgkGIjGYMtK6AlUQT2b0ZcctVNI8G5OBjnUgqR ynHvQLApwnUsfrAhq5R7t8QU1bnWmvxQbpdmi7qX0+VdTUqsqweem0KPUtIbqReK8h dqH/GjVvUa/3GvBgBWEXzBdUJ8rwy5oxERCUbH9UsItVrwcyQ62tvZwDOVDE1ZRXEl yz5u2PPB/K6xnvvn4U2PXbuWkhpPMvGSid8CrAdwOWpszUDRf6dt9apK0Lm2vvA1j8 23ZawegxyzDrA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?UTF-8?q?Christian=20K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v16 12/20] drm/shmem-helper: Add and use lockless drm_gem_shmem_get_pages() Date: Sun, 3 Sep 2023 20:07:28 +0300 Message-ID: <20230903170736.513347-13-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com> References: <20230903170736.513347-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add a lockless drm_gem_shmem_get_pages() helper that skips taking the reservation lock if pages_use_count is non-zero, leveraging the atomicity of refcount_t. Make drm_gem_shmem_mmap() use the new helper.
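For context, the main path that benefits is userspace repeatedly mapping a BO. A rough libdrm-based sketch (example_map_bo() is hypothetical; it assumes drm_fd and a dumb-buffer handle already exist) of the mmap() that ends up in drm_gem_shmem_mmap():

#include <stdint.h>
#include <sys/mman.h>
#include <xf86drm.h>
#include <drm/drm_mode.h>

static void *example_map_bo(int drm_fd, uint32_t handle, size_t size)
{
	struct drm_mode_map_dumb map_req = { .handle = handle };

	/* Ask the kernel for the BO's fake mmap offset. */
	if (drmIoctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &map_req))
		return MAP_FAILED;

	/* This mmap() reaches drm_gem_shmem_mmap(); with this patch the
	 * pages reference is taken via the lockless fast path whenever
	 * the BO already has its pages instantiated. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, map_req.offset);
}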
Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index a0faef3e762d..d93ebfef20c7 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -227,6 +227,20 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shm= em_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); =20 +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) +{ + int ret; + + if (refcount_inc_not_zero(&shmem->pages_use_count)) + return 0; + + dma_resv_lock(shmem->base.resv, NULL); + ret =3D drm_gem_shmem_get_pages_locked(shmem); + dma_resv_unlock(shmem->base.resv); + + return ret; +} + static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) { int ret; @@ -610,10 +624,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *sh= mem, struct vm_area_struct return ret; } =20 - dma_resv_lock(shmem->base.resv, NULL); - ret =3D drm_gem_shmem_get_pages_locked(shmem); - dma_resv_unlock(shmem->base.resv); - + ret =3D drm_gem_shmem_get_pages(shmem); if (ret) return ret; =20 --=20 2.41.0 From nobody Sun Feb 8 14:37:49 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C50ACC83F2D for ; Sun, 3 Sep 2023 17:09:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345002AbjICRJn (ORCPT ); Sun, 3 Sep 2023 13:09:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35432 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230064AbjICRJm (ORCPT ); Sun, 3 Sep 2023 13:09:42 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 59634CE1 for ; Sun, 3 Sep 2023 10:09:10 -0700 (PDT) Received: from workpc.. 
(109-252-153-31.dynamic.spd-mgts.ru [109.252.153.31]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id DD35866072BC; Sun, 3 Sep 2023 18:08:54 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1693760936; bh=pOKiy71IMLlm3lGoxRhMtwlXgaG4qiertKmg8u8awXo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kQQZicD0WIoPLFu0eJF00353J4Gis9ZIIXtkM6XY6dEyw8VaO/PIOF7Ia2qTj1hAo 1/c6VWcgxHNfHlSpnL1NM5GFmXAT+GqHeUUwnXVyk6VgoJUpEtc3H+fujup8WcG8/1 Rer/i/iKowfL9KlKVjcZmwtjnwrkj5A4Tql7xVSYhpDOBcdtsTUfNhnbM++jdohmpe 0sRjxtLjSHyk0k81Nfl0nCM0n0e9NTGQOREuJG8UOCgU5WzqAWYtX2L3dbEhd//JAJ jOxCSm/9kgN84NGFXMVRl02Hddy07rgLaHp0nX2WUGq8LgFXzLBznfmZ1PG1muf0fP iVCJ1Z8qACEVQ== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?UTF-8?q?Christian=20K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v16 13/20] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin Date: Sun, 3 Sep 2023 20:07:29 +0300 Message-ID: <20230903170736.513347-14-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com> References: <20230903170736.513347-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Vmapped pages must be pinned in memory, and previously get/put_pages() implicitly hard-pinned/unpinned them. This will no longer be the case once the memory shrinker is added, because pages_use_count > 0 will no longer determine whether pages are hard-pinned (they will only be soft-pinned), while the new pages_pin_count will do the hard-pinning. Switch vmap/vunmap() to use the pin/unpin() functions in preparation for adding memory shrinker support to drm-shmem.
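The practical difference for drivers, as a hypothetical snippet (example_submit_dma() is an assumed driver hook, not part of this series): a pin, unlike a bare pages reference, guarantees the backing pages stay resident for the duration of device access.

#include <drm/drm_gem_shmem_helper.h>

static int example_submit_dma(struct drm_gem_shmem_object *shmem); /* assumed hook */

static int example_dma_to_bo(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/* Hard pin: pages_pin_count goes 0 -> 1 (or is just bumped), so
	 * drm_gem_shmem_is_purgeable() reports false and the shrinker
	 * skips this object. */
	ret = drm_gem_shmem_pin(shmem);
	if (ret)
		return ret;

	/* Safe: the backing pages cannot be swapped out or purged while
	 * the pin is held, even with the reservation lock dropped. */
	ret = example_submit_dma(shmem);

	drm_gem_shmem_unpin(shmem);

	return ret;
}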
Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 19 ++++++++++++------- include/drm/drm_gem_shmem_helper.h | 2 +- 2 files changed, 13 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index d93ebfef20c7..899f655a65bb 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -257,6 +257,14 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shm= em_object *shmem) return ret; } =20 +static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem) +{ + dma_resv_assert_held(shmem->base.resv); + + if (refcount_dec_and_test(&shmem->pages_pin_count)) + drm_gem_shmem_put_pages_locked(shmem); +} + /** * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -304,10 +312,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *= shmem) return; =20 dma_resv_lock(shmem->base.resv, NULL); - - if (refcount_dec_and_test(&shmem->pages_pin_count)) - drm_gem_shmem_put_pages_locked(shmem); - + drm_gem_shmem_unpin_locked(shmem); dma_resv_unlock(shmem->base.resv); } EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); @@ -345,7 +350,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_obje= ct *shmem, return 0; } =20 - ret =3D drm_gem_shmem_get_pages_locked(shmem); + ret =3D drm_gem_shmem_pin_locked(shmem); if (ret) goto err_zero_use; =20 @@ -368,7 +373,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_obje= ct *shmem, =20 err_put_pages: if (!obj->import_attach) - drm_gem_shmem_put_pages_locked(shmem); + drm_gem_shmem_unpin_locked(shmem); err_zero_use: shmem->vmap_use_count =3D 0; =20 @@ -405,7 +410,7 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_o= bject *shmem, return; =20 vunmap(shmem->vaddr); - drm_gem_shmem_put_pages_locked(shmem); + drm_gem_shmem_unpin_locked(shmem); } =20 shmem->vaddr =3D NULL; diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem= _helper.h index bd545428a7ee..396958a98c34 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -137,7 +137,7 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_o= bject *shmem, int madv); static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object = *shmem) { return (shmem->madv > 0) && - !shmem->vmap_use_count && shmem->sgt && + !refcount_read(&shmem->pages_pin_count) && shmem->sgt && !shmem->base.dma_buf && !shmem->base.import_attach; } =20 --=20 2.41.0 From nobody Sun Feb 8 14:37:49 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 02C60C83F2D for ; Sun, 3 Sep 2023 17:09:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345014AbjICRJd (ORCPT ); Sun, 3 Sep 2023 13:09:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59208 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230064AbjICRJc (ORCPT ); Sun, 3 Sep 2023 13:09:32 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 905EACC7 for ; Sun, 3 Sep 2023 10:09:05 -0700 (PDT) Received: from workpc.. 
(109-252-153-31.dynamic.spd-mgts.ru [109.252.153.31]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 7847D66072BD; Sun, 3 Sep 2023 18:08:56 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1693760937; bh=xQ3O/CfREx8/y3+iseLMLFtv6DQyEDqz0Qz5SBzkNCs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PAWer2Cm6um14QGdE5M6gF/ecL4RwBbsky+uRZ7fffx/i0+iiJITSXEpwCCjbE5VJ 3vpOi4EDBiEF05gwol3iejTQoACZxMbOC9VS63xSlWd8WqJ1pZMnye8AbG04n0oKwL wccl2eCQ8bShoDC5TlK9RxzFVduee5AJIBSvOUy/vJjBydpMOIu8WFvKes068rKEvg olGmE/5BsWUftt4R40OSOBO/S3WTnYHMz3DhB3cMe/xukEAhTEe/lHZ0POrJ5qacfs oET/5IFgzJrZBHI1hBfVHzKqCOm0Lj32RQRe58qif/IGCm8i0E/jgjsNz/t0cbnQNs SEiR+UCrPNaMA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?UTF-8?q?Christian=20K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v16 14/20] drm/shmem-helper: Use refcount_t for vmap_use_count Date: Sun, 3 Sep 2023 20:07:30 +0300 Message-ID: <20230903170736.513347-15-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com> References: <20230903170736.513347-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Use the refcount_t helper for vmap_use_count to make its refcounting consistent with pages_use_count and pages_pin_count, which already use refcount_t. This allows unlocked vmappings to be optimized by skipping reservation locking when the refcount is greater than 1, and also lets vmapping benefit from refcount_t's overflow checks.
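A sketch of the nesting behavior the refcount enables (example_nested_vmap() is hypothetical; it assumes the same BO may be vmapped from two independent code paths):

#include <linux/dma-resv.h>
#include <linux/iosys-map.h>
#include <drm/drm_gem_shmem_helper.h>

static int example_nested_vmap(struct drm_gem_shmem_object *shmem)
{
	struct iosys_map map_a, map_b;
	int ret;

	dma_resv_lock(shmem->base.resv, NULL);

	ret = drm_gem_shmem_vmap_locked(shmem, &map_a); /* count: 0 -> 1 */
	if (ret)
		goto unlock;

	ret = drm_gem_shmem_vmap_locked(shmem, &map_b); /* count: 1 -> 2 */
	if (!ret) {
		/* The second vmap is just a reference bump that returns the
		 * cached address; no second vmap() of the pages happens. */
		WARN_ON(map_a.vaddr != map_b.vaddr);
		drm_gem_shmem_vunmap_locked(shmem, &map_b); /* 2 -> 1 */
	}

	drm_gem_shmem_vunmap_locked(shmem, &map_a); /* 1 -> 0, unmaps */
unlock:
	dma_resv_unlock(shmem->base.resv);

	return ret;
}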
Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 28 +++++++++++--------------- include/drm/drm_gem_shmem_helper.h | 2 +- 2 files changed, 13 insertions(+), 17 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index 899f655a65bb..4633a418faba 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -144,7 +144,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *sh= mem) } else if (!shmem->imported_sgt) { dma_resv_lock(shmem->base.resv, NULL); =20 - drm_WARN_ON(obj->dev, shmem->vmap_use_count); + drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)); =20 if (shmem->sgt) { dma_unmap_sgtable(obj->dev->dev, shmem->sgt, @@ -345,23 +345,25 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_ob= ject *shmem, =20 dma_resv_assert_held(shmem->base.resv); =20 - if (shmem->vmap_use_count++ > 0) { + if (refcount_inc_not_zero(&shmem->vmap_use_count)) { iosys_map_set_vaddr(map, shmem->vaddr); return 0; } =20 ret =3D drm_gem_shmem_pin_locked(shmem); if (ret) - goto err_zero_use; + return ret; =20 if (shmem->map_wc) prot =3D pgprot_writecombine(prot); shmem->vaddr =3D vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, prot); - if (!shmem->vaddr) + if (!shmem->vaddr) { ret =3D -ENOMEM; - else + } else { iosys_map_set_vaddr(map, shmem->vaddr); + refcount_set(&shmem->vmap_use_count, 1); + } } =20 if (ret) { @@ -374,8 +376,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_obje= ct *shmem, err_put_pages: if (!obj->import_attach) drm_gem_shmem_unpin_locked(shmem); -err_zero_use: - shmem->vmap_use_count =3D 0; =20 return ret; } @@ -403,14 +403,10 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem= _object *shmem, } else { dma_resv_assert_held(shmem->base.resv); =20 - if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count)) - return; - - if (--shmem->vmap_use_count > 0) - return; - - vunmap(shmem->vaddr); - drm_gem_shmem_unpin_locked(shmem); + if (refcount_dec_and_test(&shmem->vmap_use_count)) { + vunmap(shmem->vaddr); + drm_gem_shmem_unpin_locked(shmem); + } } =20 shmem->vaddr =3D NULL; @@ -656,7 +652,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shme= m_object *shmem, =20 drm_printf_indent(p, indent, "pages_pin_count=3D%u\n", refcount_read(&shm= em->pages_pin_count)); drm_printf_indent(p, indent, "pages_use_count=3D%u\n", refcount_read(&shm= em->pages_use_count)); - drm_printf_indent(p, indent, "vmap_use_count=3D%u\n", shmem->vmap_use_cou= nt); + drm_printf_indent(p, indent, "vmap_use_count=3D%u\n", refcount_read(&shme= m->vmap_use_count)); drm_printf_indent(p, indent, "vaddr=3D%p\n", shmem->vaddr); } EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem= _helper.h index 396958a98c34..63e91e8f2d5c 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -81,7 +81,7 @@ struct drm_gem_shmem_object { * Reference count on the virtual address. * The address are un-mapped when the count reaches zero. 
*/ - unsigned int vmap_use_count; + refcount_t vmap_use_count; =20 /** * @got_pages_sgt: --=20 2.41.0 From nobody Sun Feb 8 14:37:49 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84D6FC71153 for ; Sun, 3 Sep 2023 17:09:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345100AbjICRJo (ORCPT ); Sun, 3 Sep 2023 13:09:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35474 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230064AbjICRJn (ORCPT ); Sun, 3 Sep 2023 13:09:43 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 779EACCD for ; Sun, 3 Sep 2023 10:09:09 -0700 (PDT) Received: from workpc.. (109-252-153-31.dynamic.spd-mgts.ru [109.252.153.31]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 0186E66072DE; Sun, 3 Sep 2023 18:08:57 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1693760939; bh=u2d0CW59oyRwVxHtpQuBYqf56+g4KltySw9Q+ePXjw4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Box75CBzyoUI6TRRT3DH/ZEz6GOb2clBfQxDv+J9RBUIXPdHSXUocyUR6k7QBhkGC z8duZaW+HcVBiSubv2Hl/10JRUiGUjJ1hEOlTPs6HXzYk4eUWHMG3I9WC2Ju+89OGv gnhNqykjamTpfGNaA0CfDPBCGC12pZnJ81h/U9sXJaCxuGGsmYHGo8Rr2TRxX8fP7o EtfvtTt6L2eelbXG1Qbwee9VQ2tUoLAjogZjM6MTjFL3SRQWkKerKcOQRDNJm0z1RR Wr1ZS7WSLFM8SuT4yaR5mVfFXFCEZ6JjAi5Kcwa2vlPHbZokP1HC/u0yOYnJkK7cXt tl8ykktkp4MFA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?UTF-8?q?Christian=20K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v16 15/20] drm/shmem-helper: Add memory shrinker Date: Sun, 3 Sep 2023 20:07:31 +0300 Message-ID: <20230903170736.513347-16-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230903170736.513347-1-dmitry.osipenko@collabora.com> References: <20230903170736.513347-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Introduce common drm-shmem shrinker for DRM drivers. To start using drm-shmem shrinker drivers should do the following: 1. Implement evict() callback of GEM object where driver should check whether object is purgeable or evictable using drm-shmem helpers and perform the shrinking action 2. Initialize drm-shmem internals using drmm_gem_shmem_init(drm_device), which will register drm-shmem shrinker 3. 
Implement madvise IOCTL that will use drm_gem_shmem_madvise() Signed-off-by: Daniel Almeida Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 442 ++++++++++++++++-- .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 9 +- include/drm/drm_device.h | 10 +- include/drm/drm_gem_shmem_helper.h | 71 ++- 4 files changed, 494 insertions(+), 38 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index 4633a418faba..a0a6c7e505c8 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include =20 @@ -88,8 +89,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t siz= e, bool private) if (ret) goto err_release; =20 - INIT_LIST_HEAD(&shmem->madv_list); - if (!private) { /* * Our buffers are kept pinned, so allocating them @@ -128,6 +127,62 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(stru= ct drm_device *dev, size_t } EXPORT_SYMBOL_GPL(drm_gem_shmem_create); =20 +static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem) +{ + dma_resv_assert_held(shmem->base.resv); + + return (shmem->madv >=3D 0) && shmem->base.funcs->evict && + refcount_read(&shmem->pages_use_count) && + !refcount_read(&shmem->pages_pin_count) && + !shmem->base.dma_buf && !shmem->base.import_attach && + shmem->sgt && !shmem->evicted; +} + +static void +drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj =3D &shmem->base; + struct drm_gem_shmem *shmem_mm =3D obj->dev->shmem_mm; + struct drm_gem_shmem_shrinker *shmem_shrinker =3D &shmem_mm->shrinker; + + dma_resv_assert_held(shmem->base.resv); + + if (!shmem_shrinker || obj->import_attach) + return; + + if (shmem->madv < 0) + drm_gem_lru_remove(&shmem->base); + else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(= shmem)) + drm_gem_lru_move_tail(&shmem_shrinker->lru_evictable, &shmem->base); + else if (shmem->evicted) + drm_gem_lru_move_tail(&shmem_shrinker->lru_evicted, &shmem->base); + else if (!shmem->pages) + drm_gem_lru_remove(&shmem->base); + else + drm_gem_lru_move_tail(&shmem_shrinker->lru_pinned, &shmem->base); +} + +static void +drm_gem_shmem_do_release_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj =3D &shmem->base; + + if (!shmem->pages) { + drm_WARN_ON(obj->dev, !shmem->evicted && shmem->madv >=3D 0); + return; + } + +#ifdef CONFIG_X86 + if (shmem->map_wc) + set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); +#endif + + drm_gem_put_pages(obj, shmem->pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + shmem->pages =3D NULL; +} + /** * drm_gem_shmem_free - Free resources associated with a shmem GEM object * @shmem: shmem GEM object to free @@ -142,8 +197,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *sh= mem) if (obj->import_attach) { drm_prime_gem_destroy(obj, shmem->sgt); } else if (!shmem->imported_sgt) { - dma_resv_lock(shmem->base.resv, NULL); - drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)); =20 if (shmem->sgt) { @@ -152,14 +205,27 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *= shmem) sg_free_table(shmem->sgt); kfree(shmem->sgt); } - if (refcount_read(&shmem->pages_use_count)) { - drm_gem_shmem_put_pages_locked(shmem); - drm_WARN_ON(obj->dev, !shmem->got_pages_sgt); + + /* + * Destroying the object is a special case.. 
drm_gem_shmem_free() + * calls many things that WARN_ON if the obj lock is not held. But + * acquiring the obj lock in drm_gem_shmem_release_pages_locked() can + * cause a locking order inversion between reservation_ww_class_mutex + * and fs_reclaim. + * + * This deadlock is not actually possible, because no one should + * be already holding the lock when drm_gem_shmem_free() is called. + * Unfortunately lockdep is not aware of this detail. So when the + * refcount drops to zero, don't touch the reservation lock. + */ + if (shmem->got_pages_sgt && + refcount_dec_and_test(&shmem->pages_use_count)) { + drm_gem_shmem_do_release_pages_locked(shmem); + shmem->got_pages_sgt =3D false; } =20 drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)); - - dma_resv_unlock(shmem->base.resv); + drm_WARN_ON(obj->dev, shmem->got_pages_sgt); } =20 drm_gem_object_release(obj); @@ -167,15 +233,26 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *= shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_free); =20 -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shm= em) +static int +drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem, bool init) { struct drm_gem_object *obj =3D &shmem->base; struct page **pages; =20 dma_resv_assert_held(shmem->base.resv); =20 - if (refcount_inc_not_zero(&shmem->pages_use_count)) + if (shmem->madv < 0) { + drm_WARN_ON(obj->dev, shmem->pages); + return -ENOMEM; + } + + if (shmem->pages) { + drm_WARN_ON(obj->dev, !shmem->evicted); return 0; + } + + if (drm_WARN_ON(obj->dev, !(init ^ refcount_read(&shmem->pages_use_count)= ))) + return -EINVAL; =20 pages =3D drm_gem_get_pages(obj); if (IS_ERR(pages)) { @@ -196,8 +273,36 @@ static int drm_gem_shmem_get_pages_locked(struct drm_g= em_shmem_object *shmem) =20 shmem->pages =3D pages; =20 + return 0; +} + +static void +drm_gem_shmem_release_pages_locked(struct drm_gem_shmem_object *shmem) +{ + dma_resv_assert_held(shmem->base.resv); + drm_gem_shmem_do_release_pages_locked(shmem); +} + +static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shm= em) +{ + int err; + + dma_resv_assert_held(shmem->base.resv); + + if (shmem->madv < 0) + return -ENOMEM; + + if (refcount_inc_not_zero(&shmem->pages_use_count)) + return 0; + + err =3D drm_gem_shmem_acquire_pages(shmem, true); + if (err) + return err; + refcount_set(&shmem->pages_use_count, 1); =20 + drm_gem_shmem_update_pages_state_locked(shmem); + return 0; } =20 @@ -209,20 +314,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_= gem_shmem_object *shmem) */ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) { - struct drm_gem_object *obj =3D &shmem->base; - dma_resv_assert_held(shmem->base.resv); =20 if (refcount_dec_and_test(&shmem->pages_use_count)) { -#ifdef CONFIG_X86 - if (shmem->map_wc) - set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); -#endif - - drm_gem_put_pages(obj, shmem->pages, - shmem->pages_mark_dirty_on_put, - shmem->pages_mark_accessed_on_put); - shmem->pages =3D NULL; + drm_gem_shmem_release_pages_locked(shmem); + drm_gem_shmem_update_pages_state_locked(shmem); } } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); @@ -251,8 +347,15 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shm= em_object *shmem) return 0; =20 ret =3D drm_gem_shmem_get_pages_locked(shmem); - if (!ret) + if (!ret) { + ret =3D drm_gem_shmem_swapin_locked(shmem); + if (ret) { + drm_gem_shmem_put_pages_locked(shmem); + return ret; + } + refcount_set(&shmem->pages_pin_count, 1); + } =20 return ret; } @@ -448,29 
+551,54 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem= _object *shmem, int madv) =20 madv =3D shmem->madv; =20 + drm_gem_shmem_update_pages_state_locked(shmem); + return (madv >=3D 0); } EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked); =20 -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) +int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) +{ + struct drm_gem_object *obj =3D &shmem->base; + int ret; + + ret =3D dma_resv_lock_interruptible(obj->resv, NULL); + if (ret) + return ret; + + ret =3D drm_gem_shmem_madvise_locked(shmem, madv); + dma_resv_unlock(obj->resv); + + return ret; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise); + +static void drm_gem_shmem_unpin_pages_locked(struct drm_gem_shmem_object *= shmem) { struct drm_gem_object *obj =3D &shmem->base; struct drm_device *dev =3D obj->dev; =20 dma_resv_assert_held(shmem->base.resv); =20 - drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem)); + if (shmem->evicted) + return; =20 dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); + drm_gem_shmem_release_pages_locked(shmem); + drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); + sg_free_table(shmem->sgt); kfree(shmem->sgt); shmem->sgt =3D NULL; +} =20 - drm_gem_shmem_put_pages_locked(shmem); +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj =3D &shmem->base; =20 - shmem->madv =3D -1; + drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem)); =20 - drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); + drm_gem_shmem_unpin_pages_locked(shmem); drm_gem_free_mmap_offset(obj); =20 /* Our goal here is to return as much of the memory as @@ -481,9 +609,59 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_o= bject *shmem) shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1); =20 invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); + + shmem->madv =3D -1; + shmem->evicted =3D false; + drm_gem_shmem_update_pages_state_locked(shmem); } EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked); =20 +/** + * drm_gem_shmem_swapin_locked() - Moves shmem GEM back to memory and enab= les + * hardware access to the memory. + * @shmem: shmem GEM object + * + * This function moves shmem GEM back to memory if it was previously evict= ed + * by the memory shrinker. The GEM is ready to use on success. + * + * Returns: + * 0 on success or a negative error code on failure. 
+ */ +int drm_gem_shmem_swapin_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj =3D &shmem->base; + struct sg_table *sgt; + int err; + + dma_resv_assert_held(shmem->base.resv); + + if (shmem->evicted) { + err =3D drm_gem_shmem_acquire_pages(shmem, false); + if (err) + return err; + + sgt =3D drm_gem_shmem_get_sg_table(shmem); + if (IS_ERR(sgt)) + return PTR_ERR(sgt); + + err =3D dma_map_sgtable(obj->dev->dev, sgt, + DMA_BIDIRECTIONAL, 0); + if (err) { + sg_free_table(sgt); + kfree(sgt); + return err; + } + + shmem->sgt =3D sgt; + shmem->evicted =3D false; + + drm_gem_shmem_update_pages_state_locked(shmem); + } + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_swapin_locked); + /** * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object * @file: DRM file structure to create the dumb buffer for @@ -530,22 +708,34 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault= *vmf) vm_fault_t ret; struct page *page; pgoff_t page_offset; + bool pages_unpinned; + int err; =20 /* We don't use vmf->pgoff since that has the fake offset */ page_offset =3D (vmf->address - vma->vm_start) >> PAGE_SHIFT; =20 dma_resv_lock(shmem->base.resv, NULL); =20 - if (page_offset >=3D num_pages || - drm_WARN_ON_ONCE(obj->dev, !shmem->pages) || - shmem->madv < 0) { + /* Sanity-check that we have the pages pointer when it should present */ + pages_unpinned =3D (shmem->evicted || shmem->madv < 0 || + !refcount_read(&shmem->pages_use_count)); + drm_WARN_ON_ONCE(obj->dev, !shmem->pages ^ pages_unpinned); + + if (page_offset >=3D num_pages || (!shmem->pages && !shmem->evicted)) { ret =3D VM_FAULT_SIGBUS; } else { + err =3D drm_gem_shmem_swapin_locked(shmem); + if (err) { + ret =3D VM_FAULT_OOM; + goto unlock; + } + page =3D shmem->pages[page_offset]; =20 ret =3D vmf_insert_pfn(vma, vmf->address, page_to_pfn(page)); } =20 +unlock: dma_resv_unlock(shmem->base.resv); =20 return ret; @@ -568,6 +758,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct= *vma) drm_WARN_ON_ONCE(obj->dev, !refcount_inc_not_zero(&shmem->pages_use_count)); =20 + drm_gem_shmem_update_pages_state_locked(shmem); dma_resv_unlock(shmem->base.resv); =20 drm_gem_vm_open(vma); @@ -653,7 +844,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shme= m_object *shmem, drm_printf_indent(p, indent, "pages_pin_count=3D%u\n", refcount_read(&shm= em->pages_pin_count)); drm_printf_indent(p, indent, "pages_use_count=3D%u\n", refcount_read(&shm= em->pages_use_count)); drm_printf_indent(p, indent, "vmap_use_count=3D%u\n", refcount_read(&shme= m->vmap_use_count)); + drm_printf_indent(p, indent, "evicted=3D%d\n", shmem->evicted); drm_printf_indent(p, indent, "vaddr=3D%p\n", shmem->vaddr); + drm_printf_indent(p, indent, "madv=3D%d\n", shmem->madv); } EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info); =20 @@ -715,6 +908,8 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_loc= ked(struct drm_gem_shmem_ =20 shmem->sgt =3D sgt; =20 + drm_gem_shmem_update_pages_state_locked(shmem); + return sgt; =20 err_free_sgt: @@ -802,6 +997,191 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device= *dev, } EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table); =20 +static struct drm_gem_shmem_shrinker * +to_drm_gem_shmem_shrinker(struct shrinker *shrinker) +{ + return container_of(shrinker, struct drm_gem_shmem_shrinker, base); +} + +static unsigned long +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct drm_gem_shmem_shrinker *shmem_shrinker =3D + to_drm_gem_shmem_shrinker(shrinker); + 
unsigned long count =3D shmem_shrinker->lru_evictable.count; + + if (count >=3D SHRINK_EMPTY) + return SHRINK_EMPTY - 1; + + return count ?: SHRINK_EMPTY; +} + +void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj =3D &shmem->base; + + drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem)); + drm_WARN_ON(obj->dev, shmem->evicted); + + drm_gem_shmem_unpin_pages_locked(shmem); + + shmem->evicted =3D true; + drm_gem_shmem_update_pages_state_locked(shmem); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_evict_locked); + +static bool drm_gem_shmem_shrinker_evict_locked(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem =3D to_drm_gem_shmem_obj(obj); + int err; + + if (!drm_gem_shmem_is_evictable(shmem) || + get_nr_swap_pages() < obj->size >> PAGE_SHIFT) + return false; + + err =3D drm_gem_evict_locked(obj); + if (err) + return false; + + return true; +} + +static bool drm_gem_shmem_shrinker_purge_locked(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem =3D to_drm_gem_shmem_obj(obj); + int err; + + if (!drm_gem_shmem_is_purgeable(shmem)) + return false; + + err =3D drm_gem_evict_locked(obj); + if (err) + return false; + + return true; +} + +static unsigned long +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct drm_gem_shmem_shrinker *shmem_shrinker; + unsigned long nr_to_scan =3D sc->nr_to_scan; + unsigned long remaining =3D 0; + unsigned long freed =3D 0; + + shmem_shrinker =3D to_drm_gem_shmem_shrinker(shrinker); + + /* purge as many objects as we can */ + freed +=3D drm_gem_lru_scan(&shmem_shrinker->lru_evictable, + nr_to_scan, &remaining, + drm_gem_shmem_shrinker_purge_locked); + + /* evict as many objects as we can */ + if (freed < nr_to_scan) + freed +=3D drm_gem_lru_scan(&shmem_shrinker->lru_evictable, + nr_to_scan - freed, &remaining, + drm_gem_shmem_shrinker_evict_locked); + + return (freed > 0 && remaining > 0) ? 
freed : SHRINK_STOP; +} + +static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm, + const char *shrinker_name) +{ + struct drm_gem_shmem_shrinker *shmem_shrinker =3D &shmem_mm->shrinker; + int err; + + shmem_shrinker->base.count_objects =3D drm_gem_shmem_shrinker_count_objec= ts; + shmem_shrinker->base.scan_objects =3D drm_gem_shmem_shrinker_scan_objects; + shmem_shrinker->base.seeks =3D DEFAULT_SEEKS; + + mutex_init(&shmem_shrinker->lock); + drm_gem_lru_init(&shmem_shrinker->lru_evictable, &shmem_shrinker->lock); + drm_gem_lru_init(&shmem_shrinker->lru_evicted, &shmem_shrinker->lock); + drm_gem_lru_init(&shmem_shrinker->lru_pinned, &shmem_shrinker->lock); + + err =3D register_shrinker(&shmem_shrinker->base, shrinker_name); + if (err) { + mutex_destroy(&shmem_shrinker->lock); + return err; + } + + return 0; +} + +static void drm_gem_shmem_shrinker_release(struct drm_device *dev, + struct drm_gem_shmem *shmem_mm) +{ + struct drm_gem_shmem_shrinker *shmem_shrinker =3D &shmem_mm->shrinker; + + unregister_shrinker(&shmem_shrinker->base); + drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_evictable.list)); + drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_evicted.list)); + drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_pinned.list)); + mutex_destroy(&shmem_shrinker->lock); +} + +static int drm_gem_shmem_init(struct drm_device *dev) +{ + int err; + + if (drm_WARN_ON(dev, dev->shmem_mm)) + return -EBUSY; + + dev->shmem_mm =3D kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL); + if (!dev->shmem_mm) + return -ENOMEM; + + err =3D drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique); + if (err) + goto free_gem_shmem; + + return 0; + +free_gem_shmem: + kfree(dev->shmem_mm); + dev->shmem_mm =3D NULL; + + return err; +} + +static void drm_gem_shmem_release(struct drm_device *dev, void *ptr) +{ + struct drm_gem_shmem *shmem_mm =3D dev->shmem_mm; + + drm_gem_shmem_shrinker_release(dev, shmem_mm); + dev->shmem_mm =3D NULL; + kfree(shmem_mm); +} + +/** + * drmm_gem_shmem_init() - Initialize drm-shmem internals + * @dev: DRM device + * + * Cleanup is automatically managed as part of DRM device releasing. + * Calling this function multiple times will result in a error. + * + * Returns: + * 0 on success or a negative error code on failure. 
+ */ +int drmm_gem_shmem_init(struct drm_device *dev) +{ + int err; + + err =3D drm_gem_shmem_init(dev); + if (err) + return err; + + err =3D drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL); + if (err) + return err; + + return 0; +} +EXPORT_SYMBOL_GPL(drmm_gem_shmem_init); + MODULE_DESCRIPTION("DRM SHMEM memory-management helpers"); MODULE_IMPORT_NS(DMA_BUF); MODULE_LICENSE("GPL v2"); diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu= /drm/panfrost/panfrost_gem_shrinker.c index 72193bd734e1..1aa94fff7072 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c @@ -15,6 +15,13 @@ #include "panfrost_gem.h" #include "panfrost_mmu.h" =20 +static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *s= hmem) +{ + return (shmem->madv > 0) && + !refcount_read(&shmem->pages_pin_count) && shmem->sgt && + !shmem->base.dma_buf && !shmem->base.import_attach; +} + static unsigned long panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_contr= ol *sc) { @@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, st= ruct shrink_control *sc return 0; =20 list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) { - if (drm_gem_shmem_is_purgeable(shmem)) + if (panfrost_gem_shmem_is_purgeable(shmem)) count +=3D shmem->base.size >> PAGE_SHIFT; } =20 diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h index 7cf4afae2e79..a978f0cb5e84 100644 --- a/include/drm/drm_device.h +++ b/include/drm/drm_device.h @@ -16,6 +16,7 @@ struct drm_vblank_crtc; struct drm_vma_offset_manager; struct drm_vram_mm; struct drm_fb_helper; +struct drm_gem_shmem_shrinker; =20 struct inode; =20 @@ -290,8 +291,13 @@ struct drm_device { /** @vma_offset_manager: GEM information */ struct drm_vma_offset_manager *vma_offset_manager; =20 - /** @vram_mm: VRAM MM memory manager */ - struct drm_vram_mm *vram_mm; + union { + /** @vram_mm: VRAM MM memory manager */ + struct drm_vram_mm *vram_mm; + + /** @shmem_mm: SHMEM GEM memory manager */ + struct drm_gem_shmem *shmem_mm; + }; =20 /** * @switch_power_state: diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem= _helper.h index 63e91e8f2d5c..65c99da25048 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -6,6 +6,7 @@ #include #include #include +#include =20 #include #include @@ -13,6 +14,7 @@ #include =20 struct dma_buf_attachment; +struct drm_device; struct drm_mode_create_dumb; struct drm_printer; struct sg_table; @@ -53,8 +55,8 @@ struct drm_gem_shmem_object { * @madv: State for madvise * * 0 is active/inuse. + * 1 is not-needed/can-be-purged * A negative value is the object is purged. - * Positive values are driver specific and not used by the helpers. */ int madv; =20 @@ -115,6 +117,12 @@ struct drm_gem_shmem_object { * @map_wc: map object write-combined (instead of using shmem defaults). */ bool map_wc : 1; + + /** + * @evicted: True if shmem pages are evicted by the memory shrinker. + * Used internally by memory shrinker. 
+ */ + bool evicted : 1; }; =20 #define to_drm_gem_shmem_obj(obj) \ @@ -133,14 +141,22 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem= _object *shmem, int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_= struct *vma); =20 int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int m= adv); +int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv); =20 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object = *shmem) { - return (shmem->madv > 0) && - !refcount_read(&shmem->pages_pin_count) && shmem->sgt && - !shmem->base.dma_buf && !shmem->base.import_attach; + dma_resv_assert_held(shmem->base.resv); + + return (shmem->madv > 0) && shmem->base.funcs->evict && + refcount_read(&shmem->pages_use_count) && + !refcount_read(&shmem->pages_pin_count) && + !shmem->base.dma_buf && !shmem->base.import_attach && + (shmem->sgt || shmem->evicted); } =20 +int drm_gem_shmem_swapin_locked(struct drm_gem_shmem_object *shmem); + +void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); =20 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *s= hmem); @@ -284,6 +300,53 @@ static inline int drm_gem_shmem_object_mmap(struct drm= _gem_object *obj, struct v return drm_gem_shmem_mmap(shmem, vma); } =20 +/** + * drm_gem_shmem_object_madvise - unlocked GEM object function for drm_gem= _shmem_madvise_locked() + * @obj: GEM object + * @madv: Madvise value + * + * This function wraps drm_gem_shmem_madvise_locked(), providing an unlock= ed variant. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +static inline int drm_gem_shmem_object_madvise(struct drm_gem_object *obj,= int madv) +{ + struct drm_gem_shmem_object *shmem =3D to_drm_gem_shmem_obj(obj); + + return drm_gem_shmem_madvise(shmem, madv); +} + +/** + * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory man= ager + */ +struct drm_gem_shmem_shrinker { + /** @base: Shrinker for purging shmem GEM objects */ + struct shrinker base; + + /** @lock: Protects @lru_* */ + struct mutex lock; + + /** @lru_pinned: List of pinned shmem GEM objects */ + struct drm_gem_lru lru_pinned; + + /** @lru_evictable: List of shmem GEM objects to be evicted */ + struct drm_gem_lru lru_evictable; + + /** @lru_evicted: List of evicted shmem GEM objects */ + struct drm_gem_lru lru_evicted; +}; + +/** + * struct drm_gem_shmem - GEM shmem memory manager + */ +struct drm_gem_shmem { + /** @shrinker: GEM shmem shrinker */ + struct drm_gem_shmem_shrinker shrinker; +}; + +int drmm_gem_shmem_init(struct drm_device *dev); + /* * Driver ops */ --=20 2.41.0

From: Dmitry Osipenko
Subject: [PATCH v16 16/20] drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked()
Date: Sun, 3 Sep 2023 20:07:32 +0300
Message-ID: <20230903170736.513347-17-dmitry.osipenko@collabora.com>

Export drm_gem_shmem_get_pages_sgt_locked(), which will be used by the virtio-gpu shrinker during the GEM swap-in operation performed under the held reservation lock.
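For illustration only (not part of the patch), a minimal sketch of the calling convention the export enables; example_swapin() is a made-up name, while the dma-resv and shmem helpers are the real ones:

	#include <linux/dma-resv.h>
	#include <drm/drm_gem_shmem_helper.h>

	/* Hypothetical caller: the helper requires the reservation lock held. */
	static int example_swapin(struct drm_gem_shmem_object *shmem)
	{
		struct sg_table *sgt;
		int ret;

		ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
		if (ret)
			return ret;

		/* The returned SGT stays owned and cached by the shmem object. */
		sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);
		if (IS_ERR(sgt))
			ret = PTR_ERR(sgt);

		dma_resv_unlock(shmem->base.resv);

		return ret;
	}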
Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++- include/drm/drm_gem_shmem_helper.h | 1 + 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index a0a6c7e505c8..afd9b97ba593 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -874,7 +874,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_= gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table); =20 -static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_= shmem_object *shmem) +struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_o= bject *shmem) { struct drm_gem_object *obj =3D &shmem->base; int ret; @@ -919,6 +919,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_loc= ked(struct drm_gem_shmem_ drm_gem_shmem_put_pages_locked(shmem); return ERR_PTR(ret); } +EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt_locked); =20 /** * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem= _helper.h index 65c99da25048..2d7debc23ac1 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -161,6 +161,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_ob= ject *shmem); =20 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *s= hmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *= shmem); +struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_o= bject *shmem); =20 void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, struct drm_printer *p, unsigned int indent); --=20 2.41.0

From: Dmitry Osipenko
Subject: [PATCH v16 17/20] drm/virtio: Pin display framebuffer BO
Date: Sun, 3 Sep 2023 20:07:33 +0300
Message-ID: <20230903170736.513347-18-dmitry.osipenko@collabora.com>

Prepare for the addition of memory shrinker support by pinning display framebuffer BO pages in memory while they are in use by the display on the host. The shrinker is free to relocate framebuffer BO pages if it doesn't know that the pages are in use, so pin the pages to prevent the shrinker from moving them.
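As a sketch of the invariant this patch establishes (illustrative only; the example_* names are made up, the virtio_gpu helpers are the ones added below), every successful pin taken when a framebuffer is prepared must be balanced by exactly one unpin in the cleanup path:

	/* Illustrative pairing of the new pin/unpin helpers. */
	static int example_display_use_begin(struct virtio_gpu_object *bo)
	{
		/* Keep shmem pages resident while the host scans out from them. */
		return virtio_gpu_gem_pin(bo);
	}

	static void example_display_use_end(struct virtio_gpu_object *bo)
	{
		/* Balances the pin taken in example_display_use_begin(). */
		virtio_gpu_gem_unpin(bo);
	}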
Acked-by: Gerd Hoffmann Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_drv.h | 2 ++ drivers/gpu/drm/virtio/virtgpu_gem.c | 19 +++++++++++++++++++ drivers/gpu/drm/virtio/virtgpu_plane.c | 17 +++++++++++++++-- 3 files changed, 36 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/= virtgpu_drv.h index 4126c384286b..5a4b74b7b318 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -313,6 +313,8 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object= _array *objs); void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_work(struct work_struct *work); +int virtio_gpu_gem_pin(struct virtio_gpu_object *bo); +void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo); =20 /* virtgpu_vq.c */ int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev); diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/= virtgpu_gem.c index 7db48d17ee3a..625c05d625bf 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -294,3 +294,22 @@ void virtio_gpu_array_put_free_work(struct work_struct= *work) } spin_unlock(&vgdev->obj_free_lock); } + +int virtio_gpu_gem_pin(struct virtio_gpu_object *bo) +{ + int err; + + if (virtio_gpu_is_shmem(bo)) { + err =3D drm_gem_shmem_pin(&bo->base); + if (err) + return err; + } + + return 0; +} + +void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo) +{ + if (virtio_gpu_is_shmem(bo)) + drm_gem_shmem_unpin(&bo->base); +} diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virti= o/virtgpu_plane.c index a2e045f3a000..def57b01a826 100644 --- a/drivers/gpu/drm/virtio/virtgpu_plane.c +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c @@ -238,20 +238,28 @@ static int virtio_gpu_plane_prepare_fb(struct drm_pla= ne *plane, struct virtio_gpu_device *vgdev =3D dev->dev_private; struct virtio_gpu_framebuffer *vgfb; struct virtio_gpu_object *bo; + int err; =20 if (!new_state->fb) return 0; =20 vgfb =3D to_virtio_gpu_framebuffer(new_state->fb); bo =3D gem_to_virtio_gpu_obj(vgfb->base.obj[0]); - if (!bo || (plane->type =3D=3D DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)) + + err =3D virtio_gpu_gem_pin(bo); + if (err) + return err; + + if (plane->type =3D=3D DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob) return 0; =20 if (bo->dumb && (plane->state->fb !=3D new_state->fb)) { vgfb->fence =3D virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); - if (!vgfb->fence) + if (!vgfb->fence) { + virtio_gpu_gem_unpin(bo); return -ENOMEM; + } } =20 return 0; @@ -261,15 +269,20 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_pl= ane *plane, struct drm_plane_state *state) { struct virtio_gpu_framebuffer *vgfb; + struct virtio_gpu_object *bo; =20 if (!state->fb) return; =20 vgfb =3D to_virtio_gpu_framebuffer(state->fb); + bo =3D gem_to_virtio_gpu_obj(vgfb->base.obj[0]); + if (vgfb->fence) { dma_fence_put(&vgfb->fence->f); vgfb->fence =3D NULL; } + + virtio_gpu_gem_unpin(bo); } =20 static void virtio_gpu_cursor_plane_update(struct drm_plane *plane, --=20 2.41.0

From: Dmitry Osipenko
Subject: [PATCH v16 18/20] drm/virtio: Attach shmem BOs dynamically
Date: Sun, 3 Sep 2023 20:07:34 +0300
Message-ID: <20230903170736.513347-19-dmitry.osipenko@collabora.com>

Prepare for the addition of memory shrinker support by attaching shmem pages to the host dynamically on first use. The attachment vq command wasn't fenced and no vq kick was made in the BO creation code path, hence the attachment was already happening dynamically, but implicitly. Making the attachment explicitly dynamic will allow us to simplify and reuse more code when the shrinker is added. virtio_gpu_object_shmem_init() now runs under the held reservation lock, which will be important for the shrinker.
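Condensed, the lazy-attach pattern this patch introduces looks roughly like the sketch below (example_prepare_for_host_use() is a made-up wrapper; the detached flag and the reattach helper are added by this patch):

	/* Sketch: BOs now start detached and are (re)attached on first host use. */
	static int example_prepare_for_host_use(struct virtio_gpu_object *bo)
	{
		if (virtio_gpu_is_shmem(bo) && bo->detached)
			return virtio_gpu_reattach_shmem_object(bo); /* takes resv lock */

		return 0;
	}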
Acked-by: Gerd Hoffmann Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_drv.h | 7 +++ drivers/gpu/drm/virtio/virtgpu_gem.c | 26 ++++++++ drivers/gpu/drm/virtio/virtgpu_ioctl.c | 32 ++++++---- drivers/gpu/drm/virtio/virtgpu_object.c | 80 ++++++++++++++++++++----- drivers/gpu/drm/virtio/virtgpu_submit.c | 15 ++++- 5 files changed, 132 insertions(+), 28 deletions(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/= virtgpu_drv.h index 5a4b74b7b318..8c82530eae82 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -89,6 +89,7 @@ struct virtio_gpu_object { uint32_t hw_res_handle; bool dumb; bool created; + bool detached; bool host3d_blob, guest_blob; uint32_t blob_mem, blob_flags; =20 @@ -313,6 +314,8 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object= _array *objs); void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_work(struct work_struct *work); +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs); int virtio_gpu_gem_pin(struct virtio_gpu_object *bo); void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo); =20 @@ -458,6 +461,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device = *vgdev, =20 bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo); =20 +int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo); + +int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo); + int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev, uint32_t *resid); /* virtgpu_prime.c */ diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/= virtgpu_gem.c index 625c05d625bf..97e67064c97e 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -295,6 +295,26 @@ void virtio_gpu_array_put_free_work(struct work_struct= *work) spin_unlock(&vgdev->obj_free_lock); } =20 +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs) +{ + struct virtio_gpu_object *bo; + int ret =3D 0; + u32 i; + + for (i =3D 0; i < objs->nents; i++) { + bo =3D gem_to_virtio_gpu_obj(objs->objs[i]); + + if (virtio_gpu_is_shmem(bo) && bo->detached) { + ret =3D virtio_gpu_reattach_shmem_object_locked(bo); + if (ret) + break; + } + } + + return ret; +} + int virtio_gpu_gem_pin(struct virtio_gpu_object *bo) { int err; @@ -303,6 +323,12 @@ int virtio_gpu_gem_pin(struct virtio_gpu_object *bo) err =3D drm_gem_shmem_pin(&bo->base); if (err) return err; + + err =3D virtio_gpu_reattach_shmem_object(bo); + if (err) { + drm_gem_shmem_unpin(&bo->base); + return err; + } } =20 return 0; diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virti= o/virtgpu_ioctl.c index b24b11f25197..070c29cea26a 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -246,6 +246,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct = drm_device *dev, if (ret !=3D 0) goto err_put_free; =20 + ret =3D virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + fence =3D virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); if (!fence) { ret =3D -ENOMEM; @@ -288,11 +292,25 @@ static int virtio_gpu_transfer_to_host_ioctl(struct d= rm_device *dev, void *data, goto err_put_free; } =20 + ret =3D virtio_gpu_array_lock_resv(objs); + if (ret !=3D 0) + goto err_put_free; + + ret =3D virtio_gpu_array_prepare(vgdev, objs); + if 
(ret) + goto err_unlock; + + fence =3D virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); + if (!fence) { + ret =3D -ENOMEM; + goto err_unlock; + } + if (!vgdev->has_virgl_3d) { virtio_gpu_cmd_transfer_to_host_2d (vgdev, offset, args->box.w, args->box.h, args->box.x, args->box.y, - objs, NULL); + objs, fence); } else { virtio_gpu_create_context(dev, file); =20 @@ -301,23 +319,13 @@ static int virtio_gpu_transfer_to_host_ioctl(struct d= rm_device *dev, void *data, goto err_put_free; } =20 - ret =3D virtio_gpu_array_lock_resv(objs); - if (ret !=3D 0) - goto err_put_free; - - ret =3D -ENOMEM; - fence =3D virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, - 0); - if (!fence) - goto err_unlock; - virtio_gpu_cmd_transfer_to_host_3d (vgdev, vfpriv ? vfpriv->ctx_id : 0, offset, args->level, args->stride, args->layer_stride, &args->box, objs, fence); - dma_fence_put(&fence->f); } + dma_fence_put(&fence->f); virtio_gpu_notify(vgdev); return 0; =20 diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virt= io/virtgpu_object.c index 97020ed56b81..044b08aa78ac 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -142,10 +142,13 @@ static int virtio_gpu_object_shmem_init(struct virtio= _gpu_device *vgdev, struct sg_table *pages; int si; =20 - pages =3D drm_gem_shmem_get_pages_sgt(&bo->base); + pages =3D drm_gem_shmem_get_pages_sgt_locked(&bo->base); if (IS_ERR(pages)) return PTR_ERR(pages); =20 + if (!ents) + return 0; + if (use_dma_api) *nents =3D pages->nents; else @@ -176,6 +179,40 @@ static int virtio_gpu_object_shmem_init(struct virtio_= gpu_device *vgdev, return 0; } =20 +int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev =3D bo->base.base.dev->dev_private; + struct virtio_gpu_mem_entry *ents; + unsigned int nents; + int err; + + if (!bo->detached) + return 0; + + err =3D virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); + if (err) + return err; + + virtio_gpu_object_attach(vgdev, bo, ents, nents); + + bo->detached =3D false; + + return 0; +} + +int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo) +{ + int ret; + + ret =3D dma_resv_lock_interruptible(bo->base.base.resv, NULL); + if (ret) + return ret; + ret =3D virtio_gpu_reattach_shmem_object_locked(bo); + dma_resv_unlock(bo->base.base.resv); + + return ret; +} + int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_params *params, struct virtio_gpu_object **bo_ptr, @@ -202,45 +239,60 @@ int virtio_gpu_object_create(struct virtio_gpu_device= *vgdev, =20 bo->dumb =3D params->dumb; =20 - ret =3D virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); - if (ret !=3D 0) - goto err_put_id; + if (bo->blob_mem =3D=3D VIRTGPU_BLOB_MEM_GUEST) + bo->guest_blob =3D true; =20 if (fence) { ret =3D -ENOMEM; objs =3D virtio_gpu_array_alloc(1); if (!objs) - goto err_free_entry; + goto err_put_id; virtio_gpu_array_add_obj(objs, &bo->base.base); =20 ret =3D virtio_gpu_array_lock_resv(objs); if (ret !=3D 0) goto err_put_objs; + } else { + ret =3D dma_resv_lock(bo->base.base.resv, NULL); + if (ret) + goto err_put_id; } =20 if (params->blob) { - if (params->blob_mem =3D=3D VIRTGPU_BLOB_MEM_GUEST) - bo->guest_blob =3D true; + ret =3D virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); + if (ret) + goto err_unlock_objs; + } else { + ret =3D virtio_gpu_object_shmem_init(vgdev, bo, NULL, NULL); + if (ret) + goto err_unlock_objs; =20 + bo->detached =3D true; + } + + if 
(params->blob) virtio_gpu_cmd_resource_create_blob(vgdev, bo, params, ents, nents); - } else if (params->virgl) { + else if (params->virgl) virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, objs, fence); - virtio_gpu_object_attach(vgdev, bo, ents, nents); - } else { + else virtio_gpu_cmd_create_resource(vgdev, bo, params, objs, fence); - virtio_gpu_object_attach(vgdev, bo, ents, nents); - } + + if (!fence) + dma_resv_unlock(bo->base.base.resv); =20 *bo_ptr =3D bo; return 0; =20 +err_unlock_objs: + if (fence) + virtio_gpu_array_unlock_resv(objs); + else + dma_resv_unlock(bo->base.base.resv); err_put_objs: virtio_gpu_array_put_free(objs); -err_free_entry: - kvfree(ents); err_put_id: virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle); err_free_gem: diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virt= io/virtgpu_submit.c index 3c00135ead45..94867f485a64 100644 --- a/drivers/gpu/drm/virtio/virtgpu_submit.c +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c @@ -465,8 +465,19 @@ static void virtio_gpu_install_out_fence_fd(struct vir= tio_gpu_submit *submit) =20 static int virtio_gpu_lock_buflist(struct virtio_gpu_submit *submit) { - if (submit->buflist) - return virtio_gpu_array_lock_resv(submit->buflist); + int err; + + if (submit->buflist) { + err =3D virtio_gpu_array_lock_resv(submit->buflist); + if (err) + return err; + + err =3D virtio_gpu_array_prepare(submit->vgdev, submit->buflist); + if (err) { + virtio_gpu_array_unlock_resv(submit->buflist); + return err; + } + } =20 return 0; } --=20 2.41.0

From: Dmitry Osipenko
Subject: [PATCH v16 19/20] drm/virtio: Support memory shrinking
Date: Sun, 3 Sep 2023 20:07:35 +0300
Message-ID: <20230903170736.513347-20-dmitry.osipenko@collabora.com>

Support the generic drm-shmem memory shrinker and add a new madvise IOCTL to the VirtIO-GPU driver. The BO cache manager of the Mesa driver will mark BOs as "don't need" using the new IOCTL, letting the shrinker purge the marked BOs on OOM. The shrinker will also evict unpurgeable shmem BOs from memory if the guest supports a swap file or partition.
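From userspace, the intended use of the new IOCTL might look like the following sketch (assuming libdrm's drmIoctl(); the bo_cache_* names are made up, the UAPI names come from this patch):

	#include <stdint.h>
	#include <xf86drm.h>
	#include <drm/virtgpu_drm.h>

	/* Mark an idle cached BO as purgeable by the shrinker. */
	static int bo_cache_mark_dontneed(int fd, uint32_t bo_handle)
	{
		struct drm_virtgpu_madvise args = {
			.bo_handle = bo_handle,
			.madv = VIRTGPU_MADV_DONTNEED,
		};

		return drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);
	}

	/* Try to take a BO back out of the cache; returns 0 if still usable. */
	static int bo_cache_reuse(int fd, uint32_t bo_handle)
	{
		struct drm_virtgpu_madvise args = {
			.bo_handle = bo_handle,
			.madv = VIRTGPU_MADV_WILLNEED,
		};

		if (drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args))
			return -1;

		/* retained == 0 means the BO was purged and its contents are lost. */
		return args.retained ? 0 : -1;
	}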
Acked-by: Gerd Hoffmann Signed-off-by: Daniel Almeida Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_drv.h | 13 +++++- drivers/gpu/drm/virtio/virtgpu_gem.c | 35 ++++++++++++++ drivers/gpu/drm/virtio/virtgpu_ioctl.c | 25 ++++++++++ drivers/gpu/drm/virtio/virtgpu_kms.c | 8 ++++ drivers/gpu/drm/virtio/virtgpu_object.c | 61 +++++++++++++++++++++++++ drivers/gpu/drm/virtio/virtgpu_vq.c | 40 ++++++++++++++++ include/uapi/drm/virtgpu_drm.h | 14 ++++++ 7 files changed, 195 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/= virtgpu_drv.h index 8c82530eae82..a34da2036221 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -278,7 +278,7 @@ struct virtio_gpu_fpriv { }; =20 /* virtgpu_ioctl.c */ -#define DRM_VIRTIO_NUM_IOCTLS 12 +#define DRM_VIRTIO_NUM_IOCTLS 13 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS]; void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *fi= le); =20 @@ -316,6 +316,8 @@ void virtio_gpu_array_put_free_delayed(struct virtio_gp= u_device *vgdev, void virtio_gpu_array_put_free_work(struct work_struct *work); int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo); +int virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv); int virtio_gpu_gem_pin(struct virtio_gpu_object *bo); void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo); =20 @@ -329,6 +331,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_d= evice *vgdev, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo); +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo); void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, uint64_t offset, uint32_t width, uint32_t height, @@ -349,6 +353,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device = *vgdev, struct virtio_gpu_object *obj, struct virtio_gpu_mem_entry *ents, unsigned int nents); +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence); int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev); int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev); void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, @@ -499,4 +506,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev, int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data, struct drm_file *file); =20 +/* virtgpu_gem_shrinker.c */ +int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev); +void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev); + #endif diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/= virtgpu_gem.c index 97e67064c97e..748f7bbb0e6d 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -147,10 +147,20 @@ void virtio_gpu_gem_object_close(struct drm_gem_objec= t *obj, struct virtio_gpu_device *vgdev =3D obj->dev->dev_private; struct virtio_gpu_fpriv *vfpriv =3D file->driver_priv; struct virtio_gpu_object_array *objs; + struct virtio_gpu_object *bo; =20 if (!vgdev->has_virgl_3d) return; =20 + bo =3D gem_to_virtio_gpu_obj(obj); + + /* + * Purged BO was already detached and released, the resource ID + * is invalid by now. 
+ */ + if (!virtio_gpu_gem_madvise(bo, VIRTGPU_MADV_WILLNEED)) + return; + objs =3D virtio_gpu_array_alloc(1); if (!objs) return; @@ -315,6 +325,31 @@ int virtio_gpu_array_prepare(struct virtio_gpu_device = *vgdev, return ret; } =20 +int virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv) +{ + if (virtio_gpu_is_shmem(bo)) + return drm_gem_shmem_object_madvise(&bo->base.base, madv); + + return 1; +} + +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev =3D bo->base.base.dev->dev_private; + int err; + + if (bo->created) { + err =3D virtio_gpu_cmd_release_resource(vgdev, bo); + if (err) + return err; + + virtio_gpu_notify(vgdev); + bo->created =3D false; + } + + return 0; +} + int virtio_gpu_gem_pin(struct virtio_gpu_object *bo) { int err; diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virti= o/virtgpu_ioctl.c index 070c29cea26a..44a99166efdc 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -676,6 +676,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_de= vice *dev, return ret; } =20 +static int virtio_gpu_madvise_ioctl(struct drm_device *dev, + void *data, + struct drm_file *file) +{ + struct drm_virtgpu_madvise *args =3D data; + struct virtio_gpu_object *bo; + struct drm_gem_object *obj; + + if (args->madv > VIRTGPU_MADV_DONTNEED) + return -EOPNOTSUPP; + + obj =3D drm_gem_object_lookup(file, args->bo_handle); + if (!obj) + return -ENOENT; + + bo =3D gem_to_virtio_gpu_obj(obj); + args->retained =3D virtio_gpu_gem_madvise(bo, args->madv); + drm_gem_object_put(obj); + + return 0; +} + struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] =3D { DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl, DRM_RENDER_ALLOW), @@ -715,4 +737,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_= IOCTLS] =3D { =20 DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl, DRM_RENDER_ALLOW), + + DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl, + DRM_RENDER_ALLOW), }; diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/= virtgpu_kms.c index 5a3b5aaed1f3..43e237082cec 100644 --- a/drivers/gpu/drm/virtio/virtgpu_kms.c +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c @@ -245,6 +245,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct= drm_device *dev) goto err_scanouts; } =20 + ret =3D drmm_gem_shmem_init(dev); + if (ret) { + DRM_ERROR("shmem init failed\n"); + goto err_modeset; + } + virtio_device_ready(vgdev->vdev); =20 if (num_capsets) @@ -259,6 +265,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct = drm_device *dev) } return 0; =20 +err_modeset: + virtio_gpu_modeset_fini(vgdev); err_scanouts: virtio_gpu_free_vbufs(vgdev); err_vbufs: diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virt= io/virtgpu_object.c index 044b08aa78ac..dc4df8f2d89c 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -97,6 +97,60 @@ static void virtio_gpu_free_object(struct drm_gem_object= *obj) virtio_gpu_cleanup_object(bo); } =20 +static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev =3D bo->base.base.dev->dev_private; + struct virtio_gpu_fence *fence; + + if (bo->detached) + return 0; + + fence =3D virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); + if (!fence) + return -ENOMEM; + + virtio_gpu_object_detach(vgdev, bo, fence); + virtio_gpu_notify(vgdev); + + 
dma_fence_wait(&fence->f, false); + dma_fence_put(&fence->f); + + bo->detached =3D true; + + return 0; +} + +static int virtio_gpu_shmem_evict(struct drm_gem_object *obj) +{ + struct virtio_gpu_object *bo =3D gem_to_virtio_gpu_obj(obj); + int err; + + /* a blob is not movable, so it's impossible to detach it from the host */ + if (bo->blob_mem) + return -EBUSY; + + /* + * First tell the host to stop using the guest's memory, ensuring that + * the host won't touch the released memory once it's gone. + */ + err =3D virtio_gpu_detach_object_fenced(bo); + if (err) + return err; + + if (drm_gem_shmem_is_purgeable(&bo->base)) { + err =3D virtio_gpu_gem_host_mem_release(bo); + if (err) + return err; + + drm_gem_shmem_purge_locked(&bo->base); + } else { + bo->base.pages_mark_dirty_on_put =3D 1; + drm_gem_shmem_evict_locked(&bo->base); + } + + return 0; +} + static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs =3D { .free =3D virtio_gpu_free_object, .open =3D virtio_gpu_gem_object_open, @@ -110,6 +164,7 @@ static const struct drm_gem_object_funcs virtio_gpu_shm= em_funcs =3D { .vunmap =3D drm_gem_shmem_object_vunmap_locked, .mmap =3D drm_gem_shmem_object_mmap, .vm_ops =3D &drm_gem_shmem_vm_ops, + .evict =3D virtio_gpu_shmem_evict, }; =20 bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo) @@ -189,6 +244,10 @@ int virtio_gpu_reattach_shmem_object_locked(struct vir= tio_gpu_object *bo) if (!bo->detached) return 0; =20 + err =3D drm_gem_shmem_swapin_locked(&bo->base); + if (err) + return err; + err =3D virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); if (err) return err; @@ -238,6 +297,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *= vgdev, goto err_free_gem; =20 bo->dumb =3D params->dumb; + bo->blob_mem =3D params->blob_mem; + bo->blob_flags =3D params->blob_flags; =20 if (bo->blob_mem =3D=3D VIRTGPU_BLOB_MEM_GUEST) bo->guest_blob =3D true; diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/v= irtgpu_vq.c index b1a00c0c25a7..14ab470f413a 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -545,6 +545,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_d= evice *vgdev, virtio_gpu_cleanup_object(bo); } =20 +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo) +{ + struct virtio_gpu_resource_unref *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p =3D virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type =3D cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF); + cmd_p->resource_id =3D cpu_to_le32(bo->hw_res_handle); + + return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); +} + void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev, uint32_t scanout_id, uint32_t resource_id, uint32_t width, uint32_t height, @@ -645,6 +660,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_g= pu_device *vgdev, virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); } =20 +static void +virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev, + u32 resource_id, + struct virtio_gpu_fence *fence) +{ + struct virtio_gpu_resource_attach_backing *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p =3D virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type =3D cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING); + cmd_p->resource_id =3D cpu_to_le32(resource_id); + + virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); +} + static void
virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *v= gdev, struct virtio_gpu_vbuffer *vbuf) { @@ -1107,6 +1139,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_devi= ce *vgdev, ents, nents, NULL); } =20 +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence) +{ + virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle, + fence); +} + void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, struct virtio_gpu_output *output) { diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h index b1d0e56565bc..4caba71b2740 100644 --- a/include/uapi/drm/virtgpu_drm.h +++ b/include/uapi/drm/virtgpu_drm.h @@ -48,6 +48,7 @@ extern "C" { #define DRM_VIRTGPU_GET_CAPS 0x09 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a #define DRM_VIRTGPU_CONTEXT_INIT 0x0b +#define DRM_VIRTGPU_MADVISE 0x0c =20 #define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01 #define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02 @@ -211,6 +212,15 @@ struct drm_virtgpu_context_init { __u64 ctx_set_params; }; =20 +#define VIRTGPU_MADV_WILLNEED 0 +#define VIRTGPU_MADV_DONTNEED 1 +struct drm_virtgpu_madvise { + __u32 bo_handle; + __u32 retained; /* out, non-zero if BO can be used */ + __u32 madv; + __u32 pad; +}; + /* * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in * effect. The event size is sizeof(drm_event), since there is no additio= nal @@ -261,6 +271,10 @@ struct drm_virtgpu_context_init { DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT, \ struct drm_virtgpu_context_init) =20 +#define DRM_IOCTL_VIRTGPU_MADVISE \ + DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \ + struct drm_virtgpu_madvise) + #if defined(__cplusplus) } #endif --=20 2.41.0

From: Dmitry Osipenko
Subject: [PATCH v16 20/20] drm/panfrost: Switch to generic memory shrinker
Date: Sun, 3 Sep 2023 20:07:36 +0300
Message-ID: <20230903170736.513347-21-dmitry.osipenko@collabora.com>

Replace Panfrost's custom memory shrinker with a common drm-shmem memory shrinker.
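For context, a minimal sketch of what a driver is left with after such a conversion (APIs from earlier patches in this series; example_shmem_evict() is a simplified stand-in for the panfrost_shmem_evict() added below):

	/*
	 * Simplified evict callback: invoked by the generic drm-shmem
	 * shrinker with the reservation lock already held.
	 */
	static int example_shmem_evict(struct drm_gem_object *obj)
	{
		struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

		if (!drm_gem_shmem_is_purgeable(shmem))
			return -EBUSY;

		drm_gem_shmem_purge_locked(shmem);

		return 0;
	}

	/* Hooked up via drm_gem_object_funcs::evict, plus in probe: */
	/*	err = drmm_gem_shmem_init(ddev);	*/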
Tested-by: Steven Price # Firefly-RK3288 Reviewed-by: Steven Price Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/panfrost/Makefile | 1 - drivers/gpu/drm/panfrost/panfrost_device.h | 4 - drivers/gpu/drm/panfrost/panfrost_drv.c | 27 ++-- drivers/gpu/drm/panfrost/panfrost_gem.c | 30 ++-- drivers/gpu/drm/panfrost/panfrost_gem.h | 9 -- .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 129 ------------------ drivers/gpu/drm/panfrost/panfrost_job.c | 18 ++- include/drm/drm_gem_shmem_helper.h | 7 - 8 files changed, 47 insertions(+), 178 deletions(-) delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/M= akefile index 7da2b3f02ed9..11622e22cf15 100644 --- a/drivers/gpu/drm/panfrost/Makefile +++ b/drivers/gpu/drm/panfrost/Makefile @@ -5,7 +5,6 @@ panfrost-y :=3D \ panfrost_device.o \ panfrost_devfreq.o \ panfrost_gem.o \ - panfrost_gem_shrinker.o \ panfrost_gpu.o \ panfrost_job.o \ panfrost_mmu.o \ diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/p= anfrost/panfrost_device.h index b0126b9fbadc..dcc2571c092b 100644 --- a/drivers/gpu/drm/panfrost/panfrost_device.h +++ b/drivers/gpu/drm/panfrost/panfrost_device.h @@ -116,10 +116,6 @@ struct panfrost_device { atomic_t pending; } reset; =20 - struct mutex shrinker_lock; - struct list_head shrinker_list; - struct shrinker shrinker; - struct panfrost_devfreq pfdevfreq; }; =20 diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panf= rost/panfrost_drv.c index 175443eacead..8cf338c2a03b 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -170,7 +170,6 @@ panfrost_lookup_bos(struct drm_device *dev, break; } =20 - atomic_inc(&bo->gpu_usecount); job->mappings[i] =3D mapping; } =20 @@ -395,7 +394,6 @@ static int panfrost_ioctl_madvise(struct drm_device *de= v, void *data, { struct panfrost_file_priv *priv =3D file_priv->driver_priv; struct drm_panfrost_madvise *args =3D data; - struct panfrost_device *pfdev =3D dev->dev_private; struct drm_gem_object *gem_obj; struct panfrost_gem_object *bo; int ret =3D 0; @@ -408,11 +406,15 @@ static int panfrost_ioctl_madvise(struct drm_device *= dev, void *data, =20 bo =3D to_panfrost_bo(gem_obj); =20 + if (bo->is_heap) { + args->retained =3D 1; + goto out_put_object; + } + ret =3D dma_resv_lock_interruptible(bo->base.base.resv, NULL); if (ret) goto out_put_object; =20 - mutex_lock(&pfdev->shrinker_lock); mutex_lock(&bo->mappings.lock); if (args->madv =3D=3D PANFROST_MADV_DONTNEED) { struct panfrost_gem_mapping *first; @@ -438,17 +440,8 @@ static int panfrost_ioctl_madvise(struct drm_device *d= ev, void *data, =20 args->retained =3D drm_gem_shmem_madvise_locked(&bo->base, args->madv); =20 - if (args->retained) { - if (args->madv =3D=3D PANFROST_MADV_DONTNEED) - list_move_tail(&bo->base.madv_list, - &pfdev->shrinker_list); - else if (args->madv =3D=3D PANFROST_MADV_WILLNEED) - list_del_init(&bo->base.madv_list); - } - out_unlock_mappings: mutex_unlock(&bo->mappings.lock); - mutex_unlock(&pfdev->shrinker_lock); dma_resv_unlock(bo->base.base.resv); out_put_object: drm_gem_object_put(gem_obj); @@ -577,9 +570,6 @@ static int panfrost_probe(struct platform_device *pdev) ddev->dev_private =3D pfdev; pfdev->ddev =3D ddev; =20 - mutex_init(&pfdev->shrinker_lock); - INIT_LIST_HEAD(&pfdev->shrinker_list); - err =3D panfrost_device_init(pfdev); if (err) { if (err !=3D -EPROBE_DEFER) @@ -601,10 +591,14 @@ static int panfrost_probe(struct platform_device 
*pde= v) if (err < 0) goto err_out1; =20 - panfrost_gem_shrinker_init(ddev); + err =3D drmm_gem_shmem_init(ddev); + if (err < 0) + goto err_out2; =20 return 0; =20 +err_out2: + drm_dev_unregister(ddev); err_out1: pm_runtime_disable(pfdev->dev); panfrost_device_fini(pfdev); @@ -620,7 +614,6 @@ static void panfrost_remove(struct platform_device *pde= v) struct drm_device *ddev =3D pfdev->ddev; =20 drm_dev_unregister(ddev); - panfrost_gem_shrinker_cleanup(ddev); =20 pm_runtime_get_sync(pfdev->dev); pm_runtime_disable(pfdev->dev); diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panf= rost/panfrost_gem.c index 59c8c73c6a59..00165fca7f3d 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -19,16 +19,6 @@ static void panfrost_gem_free_object(struct drm_gem_obje= ct *obj) struct panfrost_gem_object *bo =3D to_panfrost_bo(obj); struct panfrost_device *pfdev =3D obj->dev->dev_private; =20 - /* - * Make sure the BO is no longer inserted in the shrinker list before - * taking care of the destruction itself. If we don't do that we have a - * race condition between this function and what's done in - * panfrost_gem_shrinker_scan(). - */ - mutex_lock(&pfdev->shrinker_lock); - list_del_init(&bo->base.madv_list); - mutex_unlock(&pfdev->shrinker_lock); - /* * If we still have mappings attached to the BO, there's a problem in * our refcounting. @@ -195,6 +185,25 @@ static int panfrost_gem_pin(struct drm_gem_object *obj) return drm_gem_shmem_object_pin(obj); } =20 +static int panfrost_shmem_evict(struct drm_gem_object *obj) +{ + struct panfrost_gem_object *bo =3D to_panfrost_bo(obj); + + if (!drm_gem_shmem_is_purgeable(&bo->base)) + return -EBUSY; + + if (!mutex_trylock(&bo->mappings.lock)) + return -EBUSY; + + panfrost_gem_teardown_mappings_locked(bo); + + drm_gem_shmem_purge_locked(&bo->base); + + mutex_unlock(&bo->mappings.lock); + + return 0; +} + static const struct drm_gem_object_funcs panfrost_gem_funcs =3D { .free =3D panfrost_gem_free_object, .open =3D panfrost_gem_open, @@ -207,6 +216,7 @@ static const struct drm_gem_object_funcs panfrost_gem_f= uncs =3D { .vunmap =3D drm_gem_shmem_object_vunmap_locked, .mmap =3D drm_gem_shmem_object_mmap, .vm_ops =3D &drm_gem_shmem_vm_ops, + .evict =3D panfrost_shmem_evict, }; =20 /** diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panf= rost/panfrost_gem.h index ad2877eeeccd..6ad1bcedb932 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.h +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h @@ -30,12 +30,6 @@ struct panfrost_gem_object { struct mutex lock; } mappings; =20 - /* - * Count the number of jobs referencing this BO so we don't let the - * shrinker reclaim this object prematurely. - */ - atomic_t gpu_usecount; - bool noexec :1; bool is_heap :1; }; @@ -81,7 +75,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo, void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping); void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo); =20 -void panfrost_gem_shrinker_init(struct drm_device *dev); -void panfrost_gem_shrinker_cleanup(struct drm_device *dev); - #endif /* __PANFROST_GEM_H__ */ diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu= /drm/panfrost/panfrost_gem_shrinker.c deleted file mode 100644 index 1aa94fff7072..000000000000 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ /dev/null @@ -1,129 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2019 Arm Ltd. 
- * - * Based on msm_gem_freedreno.c: - * Copyright (C) 2016 Red Hat - * Author: Rob Clark - */ - -#include - -#include -#include - -#include "panfrost_device.h" -#include "panfrost_gem.h" -#include "panfrost_mmu.h" - -static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *s= hmem) -{ - return (shmem->madv > 0) && - !refcount_read(&shmem->pages_pin_count) && shmem->sgt && - !shmem->base.dma_buf && !shmem->base.import_attach; -} - -static unsigned long -panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_contr= ol *sc) -{ - struct panfrost_device *pfdev =3D - container_of(shrinker, struct panfrost_device, shrinker); - struct drm_gem_shmem_object *shmem; - unsigned long count =3D 0; - - if (!mutex_trylock(&pfdev->shrinker_lock)) - return 0; - - list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) { - if (panfrost_gem_shmem_is_purgeable(shmem)) - count +=3D shmem->base.size >> PAGE_SHIFT; - } - - mutex_unlock(&pfdev->shrinker_lock); - - return count; -} - -static bool panfrost_gem_purge(struct drm_gem_object *obj) -{ - struct drm_gem_shmem_object *shmem =3D to_drm_gem_shmem_obj(obj); - struct panfrost_gem_object *bo =3D to_panfrost_bo(obj); - bool ret =3D false; - - if (atomic_read(&bo->gpu_usecount)) - return false; - - if (!mutex_trylock(&bo->mappings.lock)) - return false; - - if (!dma_resv_trylock(shmem->base.resv)) - goto unlock_mappings; - - panfrost_gem_teardown_mappings_locked(bo); - drm_gem_shmem_purge_locked(&bo->base); - ret =3D true; - - dma_resv_unlock(shmem->base.resv); - -unlock_mappings: - mutex_unlock(&bo->mappings.lock); - return ret; -} - -static unsigned long -panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_contro= l *sc) -{ - struct panfrost_device *pfdev =3D - container_of(shrinker, struct panfrost_device, shrinker); - struct drm_gem_shmem_object *shmem, *tmp; - unsigned long freed =3D 0; - - if (!mutex_trylock(&pfdev->shrinker_lock)) - return SHRINK_STOP; - - list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) { - if (freed >=3D sc->nr_to_scan) - break; - if (drm_gem_shmem_is_purgeable(shmem) && - panfrost_gem_purge(&shmem->base)) { - freed +=3D shmem->base.size >> PAGE_SHIFT; - list_del_init(&shmem->madv_list); - } - } - - mutex_unlock(&pfdev->shrinker_lock); - - if (freed > 0) - pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT); - - return freed; -} - -/** - * panfrost_gem_shrinker_init - Initialize panfrost shrinker - * @dev: DRM device - * - * This function registers and sets up the panfrost shrinker. - */ -void panfrost_gem_shrinker_init(struct drm_device *dev) -{ - struct panfrost_device *pfdev =3D dev->dev_private; - pfdev->shrinker.count_objects =3D panfrost_gem_shrinker_count; - pfdev->shrinker.scan_objects =3D panfrost_gem_shrinker_scan; - pfdev->shrinker.seeks =3D DEFAULT_SEEKS; - WARN_ON(register_shrinker(&pfdev->shrinker, "drm-panfrost")); -} - -/** - * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker - * @dev: DRM device - * - * This function unregisters the panfrost shrinker. 
- */ -void panfrost_gem_shrinker_cleanup(struct drm_device *dev) -{ - struct panfrost_device *pfdev =3D dev->dev_private; - - if (pfdev->shrinker.nr_deferred) { - unregister_shrinker(&pfdev->shrinker); - } -} diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panf= rost/panfrost_job.c index a8b4827dc425..755128eb6c45 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -272,6 +272,19 @@ static void panfrost_attach_object_fences(struct drm_g= em_object **bos, dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE); } =20 +static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_co= unt) +{ + struct panfrost_gem_object *bo; + int ret =3D 0; + + while (!ret && bo_count--) { + bo =3D to_panfrost_bo(bos[bo_count]); + ret =3D bo->base.madv ? -ENOMEM : 0; + } + + return ret; +} + int panfrost_job_push(struct panfrost_job *job) { struct panfrost_device *pfdev =3D job->pfdev; @@ -283,6 +296,10 @@ int panfrost_job_push(struct panfrost_job *job) if (ret) return ret; =20 + ret =3D panfrost_objects_prepare(job->bos, job->bo_count); + if (ret) + goto unlock; + mutex_lock(&pfdev->sched_lock); drm_sched_job_arm(&job->base); =20 @@ -324,7 +341,6 @@ static void panfrost_job_cleanup(struct kref *ref) if (!job->mappings[i]) break; =20 - atomic_dec(&job->mappings[i]->obj->gpu_usecount); panfrost_gem_mapping_put(job->mappings[i]); } kvfree(job->mappings); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem= _helper.h index 2d7debc23ac1..13fe34236a0b 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -60,13 +60,6 @@ struct drm_gem_shmem_object { */ int madv; =20 - /** - * @madv_list: List entry for madvise tracking - * - * Typically used by drivers to track purgeable objects - */ - struct list_head madv_list; - /** * @sgt: Scatter/gather table for imported PRIME buffers */ --=20 2.41.0