From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Clark,
    Sumit Semwal, Christian König, Qiang Yu, Steven Price,
    Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
    Abhinav Kumar
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs
Date: Mon, 9 Jan 2023 00:04:40 +0300
Message-Id: <20230108210445.3948344-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20230108210445.3948344-1-dmitry.osipenko@collabora.com>
References: <20230108210445.3948344-1-dmitry.osipenko@collabora.com>

The DMA-buf core has its own refcounting of vmaps; use it instead of the
drm-shmem counting. This change prepares drm-shmem for the addition of
memory shrinker support, where drm-shmem will use a single dma-buf
reservation lock for all operations performed over dma-bufs.
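For reference, here is a simplified sketch (an illustration, not the verbatim
kernel code) of the vmap refcounting the dma-buf core performs in
dma_buf_vmap()/dma_buf_vunmap(), which this patch delegates to for imported
buffers. Locking and error paths such as a NULL ops->vmap are elided;
vmapping_counter is the same field read by the updated
drm_gem_shmem_print_info() below.

/*
 * Simplified sketch of dma-buf core vmap refcounting (illustration
 * only). A second vmap of an already-mapped buffer just bumps the
 * counter and hands back the cached mapping.
 */
static int dma_buf_vmap_sketch(struct dma_buf *dmabuf, struct iosys_map *map)
{
	int ret;

	if (dmabuf->vmapping_counter) {
		/* Already mapped: reuse the cached mapping. */
		dmabuf->vmapping_counter++;
		*map = dmabuf->vmap_ptr;
		return 0;
	}

	ret = dmabuf->ops->vmap(dmabuf, map);
	if (!ret) {
		dmabuf->vmap_ptr = *map;
		dmabuf->vmapping_counter = 1;
	}

	return ret;
}

static void dma_buf_vunmap_sketch(struct dma_buf *dmabuf, struct iosys_map *map)
{
	if (--dmabuf->vmapping_counter > 0)
		return;

	dmabuf->ops->vunmap(dmabuf, map);
	iosys_map_clear(&dmabuf->vmap_ptr);
}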
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Reviewed-by: Thomas Zimmermann
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 35 +++++++++++++++-----------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5006f7da7f2d..1392cbd3cc02 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -301,24 +301,22 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	struct drm_gem_object *obj = &shmem->base;
 	int ret = 0;
 
-	if (shmem->vmap_use_count++ > 0) {
-		iosys_map_set_vaddr(map, shmem->vaddr);
-		return 0;
-	}
-
 	if (obj->import_attach) {
 		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
 		if (!ret) {
 			if (drm_WARN_ON(obj->dev, map->is_iomem)) {
 				dma_buf_vunmap(obj->import_attach->dmabuf, map);
-				ret = -EIO;
-				goto err_put_pages;
+				return -EIO;
 			}
-			shmem->vaddr = map->vaddr;
 		}
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
+		if (shmem->vmap_use_count++ > 0) {
+			iosys_map_set_vaddr(map, shmem->vaddr);
+			return 0;
+		}
+
 		ret = drm_gem_shmem_get_pages(shmem);
 		if (ret)
 			goto err_zero_use;
@@ -384,15 +382,15 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
-		return;
-
-	if (--shmem->vmap_use_count > 0)
-		return;
-
 	if (obj->import_attach) {
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
+		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
+			return;
+
+		if (--shmem->vmap_use_count > 0)
+			return;
+
 		vunmap(shmem->vaddr);
 		drm_gem_shmem_put_pages(shmem);
 	}
@@ -660,7 +658,14 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 			      struct drm_printer *p, unsigned int indent)
 {
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
-	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
+
+	if (shmem->base.import_attach)
+		drm_printf_indent(p, indent, "vmap_use_count=%u\n",
+				  shmem->base.dma_buf->vmapping_counter);
+	else
+		drm_printf_indent(p, indent, "vmap_use_count=%u\n",
+				  shmem->vmap_use_count);
+
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
 EXPORT_SYMBOL(drm_gem_shmem_print_info);
-- 
2.38.1
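As a caller-side illustration (a hypothetical example, not part of the
patch): after this change, repeated vmaps of an imported shmem object are
refcounted by the dma-buf core via vmapping_counter, while
shmem->vmap_use_count now covers only natively allocated objects.

/*
 * Hypothetical illustration: for an imported object both vmaps below
 * are refcounted by the dma-buf core; for a natively allocated object,
 * by shmem->vmap_use_count. Either way, the second vmap only bumps a
 * refcount and no new mapping is created.
 */
static int example_vmap_twice(struct drm_gem_shmem_object *shmem)
{
	struct iosys_map map_a, map_b;
	int ret;

	ret = drm_gem_shmem_vmap(shmem, &map_a);
	if (ret)
		return ret;

	ret = drm_gem_shmem_vmap(shmem, &map_b);
	if (ret) {
		drm_gem_shmem_vunmap(shmem, &map_a);
		return ret;
	}

	drm_gem_shmem_vunmap(shmem, &map_b);
	drm_gem_shmem_vunmap(shmem, &map_a);

	return 0;
}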