From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: Sumit Semwal, Christian König, Benjamin Gaignard, Brian Starkey,
    John Stultz, Gerd Hoffmann, Daniel Vetter, Jani Nikula, Arnd Bergmann,
    Thomas Zimmermann, Tomi Valkeinen, Thierry Reding, Tomasz Figa,
    Marek Szyprowski, Mauro Carvalho Chehab, Emil Velikov
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    linux-tegra@vger.kernel.org, kernel@collabora.com
Subject: [PATCH v4 6/6] drm/shmem-helper: Switch to reservation lock
Date: Tue, 30 May 2023 01:39:35 +0300
Message-Id: <20230529223935.2672495-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20230529223935.2672495-1-dmitry.osipenko@collabora.com>
References: <20230529223935.2672495-1-dmitry.osipenko@collabora.com>

Replace all drm-shmem locks with a GEM reservation lock. This makes the
locking consistent with the dma-buf locking convention, where importers
are responsible for holding the reservation lock for all operations
performed over dma-bufs, preventing deadlocks between dma-buf importers
and exporters.
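For illustration, a minimal sketch (not part of the patch) of the
caller-side pattern this convention implies. The helper name
my_driver_vmap() is hypothetical; only dma_resv_lock_interruptible(),
dma_resv_unlock() and the post-series drm_gem_shmem_vmap(), which
expects the reservation lock to be held for natively allocated objects,
are assumed:

#include <linux/dma-resv.h>
#include <linux/iosys-map.h>
#include <drm/drm_gem_shmem_helper.h>

/* Hypothetical importer-side caller, for illustration only: all
 * operations on the shmem object happen under its reservation lock,
 * per the dma-buf locking convention this series adopts.
 */
static int my_driver_vmap(struct drm_gem_shmem_object *shmem,
			  struct iosys_map *map)
{
	int ret;

	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
	if (ret)
		return ret;

	/* With this patch applied, drm_gem_shmem_vmap() asserts that
	 * the reservation lock is held for non-imported objects.
	 */
	ret = drm_gem_shmem_vmap(shmem, map);

	dma_resv_unlock(shmem->base.resv);

	return ret;
}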
Suggested-by: Daniel Vetter
Acked-by: Thomas Zimmermann
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 210 ++++++++----------
 drivers/gpu/drm/lima/lima_gem.c               |   8 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       |   7 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   6 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  19 +-
 include/drm/drm_gem_shmem_helper.h            |  14 +-
 6 files changed, 116 insertions(+), 148 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4ea6507a77e5..a783d2245599 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	mutex_init(&shmem->pages_lock);
-	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);
 
 	if (!private) {
@@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	drm_WARN_ON(obj->dev, shmem->vmap_use_count);
-
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
 	} else {
+		dma_resv_lock(shmem->base.resv, NULL);
+
+		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
+
 		if (shmem->sgt) {
 			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
 					  DMA_BIDIRECTIONAL, 0);
@@ -154,22 +154,24 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		}
 		if (shmem->pages)
 			drm_gem_shmem_put_pages(shmem);
-	}
 
-	drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+
+		dma_resv_unlock(shmem->base.resv);
+	}
 
 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
+	dma_resv_assert_held(shmem->base.resv);
+
 	if (shmem->pages_use_count++ > 0)
 		return 0;
 
@@ -197,35 +199,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 
 /*
- * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
+ * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
  *
- * This function makes sure that backing pages exists for the shmem GEM object
- * and increases the use count.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function decreases the use count and puts the backing pages when use drops to zero.
  */
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	int ret;
 
-	drm_WARN_ON(obj->dev, obj->import_attach);
-
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
-	if (ret)
-		return ret;
-	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_get_pages);
-
-static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
-{
-	struct drm_gem_object *obj = &shmem->base;
+	dma_resv_assert_held(shmem->base.resv);
 
 	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		return;
@@ -243,20 +226,25 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 			  shmem->pages_mark_accessed_on_put);
 	shmem->pages = NULL;
 }
+EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
-/*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
- * @shmem: shmem GEM object
- *
- * This function decreases the use count and puts the backing pages when use drops to zero.
- */
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
-	mutex_lock(&shmem->pages_lock);
-	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	int ret;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	ret = drm_gem_shmem_get_pages(shmem);
+
+	return ret;
+}
+
+static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
+{
+	dma_resv_assert_held(shmem->base.resv);
+
+	drm_gem_shmem_put_pages(shmem);
 }
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
@@ -271,10 +259,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
+	int ret;
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	return drm_gem_shmem_get_pages(shmem);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
+	if (ret)
+		return ret;
+	ret = drm_gem_shmem_pin_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);
 
@@ -291,12 +286,29 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	drm_gem_shmem_put_pages(shmem);
+	dma_resv_lock(shmem->base.resv, NULL);
+	drm_gem_shmem_unpin_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
-				     struct iosys_map *map)
+/*
+ * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
+ *
+ * This function makes sure that a contiguous kernel virtual address mapping
+ * exists for the buffer backing the shmem GEM object. It hides the differences
+ * between dma-buf imported and natively allocated objects.
+ *
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
+		       struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret = 0;
@@ -312,6 +324,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
+		dma_resv_assert_held(shmem->base.resv);
+
 		if (shmem->vmap_use_count++ > 0) {
 			iosys_map_set_vaddr(map, shmem->vaddr);
 			return 0;
@@ -346,45 +360,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 
 	return ret;
 }
+EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
 /*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
- * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
- *       store.
- *
- * This function makes sure that a contiguous kernel virtual address mapping
- * exists for the buffer backing the shmem GEM object. It hides the differences
- * between dma-buf imported and natively allocated objects.
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * This function cleans up a kernel virtual address mapping acquired by
+ * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
+ * zero.
 *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function hides the differences between dma-buf imported and natively
+ * allocated objects.
 */
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
-		       struct iosys_map *map)
-{
-	int ret;
-
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
-	if (ret)
-		return ret;
-	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
-
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
-					struct iosys_map *map)
+void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
+			  struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
 	if (obj->import_attach) {
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
+		dma_resv_assert_held(shmem->base.resv);
+
 		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
 			return;
 
@@ -397,26 +396,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 
 	shmem->vaddr = NULL;
 }
-
-/*
- * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
- * @shmem: shmem GEM object
- * @map: Kernel virtual address where the SHMEM GEM object was mapped
- *
- * This function cleans up a kernel virtual address mapping acquired by
- * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- * zero.
- *
- * This function hides the differences between dma-buf imported and natively
- * allocated objects.
- */
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
-			  struct iosys_map *map)
-{
-	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
-}
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
 static int
@@ -447,24 +426,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_assert_held(shmem->base.resv);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
-	mutex_unlock(&shmem->pages_lock);
-
 	return (madv >= 0);
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
 
+	dma_resv_assert_held(shmem->base.resv);
+
 	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
 
 	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
@@ -472,7 +451,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;
 
-	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_put_pages(shmem);
 
 	shmem->madv = -1;
 
@@ -488,17 +467,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
-
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
-{
-	if (!mutex_trylock(&shmem->pages_lock))
-		return false;
-	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-
-	return true;
-}
 EXPORT_SYMBOL(drm_gem_shmem_purge);
 
 /**
@@ -551,7 +519,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
@@ -563,7 +531,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -575,7 +543,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	/*
 	 * We should have already pinned the pages when the buffer was first
@@ -585,7 +553,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		shmem->pages_use_count++;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	drm_gem_vm_open(vma);
 }
@@ -595,7 +563,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
 	drm_gem_vm_close(vma);
 }
 
@@ -633,7 +604,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 		return ret;
 	}
 
+	dma_resv_lock(shmem->base.resv, NULL);
 	ret = drm_gem_shmem_get_pages(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
 	if (ret)
 		return ret;
 
@@ -699,7 +673,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	ret = drm_gem_shmem_get_pages_locked(shmem);
+	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret)
 		return ERR_PTR(ret);
 
@@ -721,7 +695,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	sg_free_table(sgt);
 	kfree(sgt);
 err_put_pages:
-	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_put_pages(shmem);
 	return ERR_PTR(ret);
 }
 
@@ -746,11 +720,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 	int ret;
 	struct sg_table *sgt;
 
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ERR_PTR(ret);
 	sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return sgt;
 }
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 10252dc11a22..4f9736e5f929 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 
 	new_size = min(new_size, bo->base.base.size);
 
-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);
 
 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}
 
@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);
 
 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);
 
 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index bbada731bbbd..d9dda6acdfac 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -407,6 +407,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	bo = to_panfrost_bo(gem_obj);
 
+	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
+	if (ret)
+		goto out_put_object;
+
 	mutex_lock(&pfdev->shrinker_lock);
 	mutex_lock(&bo->mappings.lock);
 	if (args->madv == PANFROST_MADV_DONTNEED) {
@@ -444,7 +448,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 out_unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
 	mutex_unlock(&pfdev->shrinker_lock);
-
+	dma_resv_unlock(bo->base.base.resv);
+out_put_object:
 	drm_gem_object_put(gem_obj);
 	return ret;
 }
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index bf0170782f25..6a71a2555f85 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 	if (!mutex_trylock(&bo->mappings.lock))
 		return false;
 
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		goto
 unlock_mappings;
 
 	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(&bo->base);
+	drm_gem_shmem_purge(&bo->base);
 	ret = true;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 666a5e53fe19..0679df57f394 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
 	struct address_space *mapping;
+	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
@@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	page_offset = addr >> PAGE_SHIFT;
 	page_offset -= bomapping->mmnode.start;
 
-	mutex_lock(&bo->base.pages_lock);
+	obj = &bo->base.base;
+
+	dma_resv_lock(obj->resv, NULL);
 
 	if (!bo->base.pages) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
 				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
-			mutex_unlock(&bo->base.pages_lock);
 			ret = -ENOMEM;
-			goto err_bo;
+			goto err_unlock;
 		}
 
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
@@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (!pages) {
 			kvfree(bo->sgts);
 			bo->sgts = NULL;
-			mutex_unlock(&bo->base.pages_lock);
 			ret = -ENOMEM;
-			goto err_bo;
+			goto err_unlock;
 		}
 		bo->base.pages = pages;
 		bo->base.pages_use_count = 1;
@@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out.
 */
-			mutex_unlock(&bo->base.pages_lock);
 			goto out;
 		}
 	}
@@ -502,14 +502,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
-			mutex_unlock(&bo->base.pages_lock);
 			ret = PTR_ERR(pages[i]);
 			goto err_pages;
 		}
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
-
 	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
@@ -528,6 +525,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
 
 out:
+	dma_resv_unlock(obj->resv);
+
 	panfrost_gem_mapping_put(bomapping);
 
 	return 0;
@@ -536,6 +535,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	sg_free_table(sgt);
 err_pages:
 	drm_gem_shmem_put_pages(&bo->base);
+err_unlock:
+	dma_resv_unlock(obj->resv);
 err_bo:
 	panfrost_gem_mapping_put(bomapping);
 	return ret;
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5994fed5e327..20ddcd799df9 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct drm_gem_object base;
 
-	/**
-	 * @pages_lock: Protects the page table and use count
-	 */
-	struct mutex pages_lock;
-
 	/**
 	 * @pages: Page table
 	 */
@@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct sg_table *sgt;
 
-	/**
-	 * @vmap_lock: Protects the vmap address and use count
-	 */
-	struct mutex vmap_lock;
-
 	/**
 	 * @vaddr: Kernel virtual address of the backing memory
 	 */
@@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
@@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
 	       !shmem->base.dma_buf && !shmem->base.import_attach;
 }
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
-- 
2.40.1
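Another illustrative sketch (not part of the patch): pinning with the
reworked helpers. drm_gem_shmem_pin() now takes the reservation lock
internally, so a hypothetical caller such as my_driver_prepare_bo()
must not already hold it:

static int my_driver_prepare_bo(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/* Takes and drops shmem->base.resv internally. */
	ret = drm_gem_shmem_pin(shmem);
	if (ret)
		return ret;

	/* ... use shmem->pages while pinned ... */

	drm_gem_shmem_unpin(shmem);

	return 0;
}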