From nobody Mon Feb 9 17:56:12 2026
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
	Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	kernel@collabora.com
Subject: [PATCH v20 08/10] drm/shmem-helper: Use refcount_t for pages_use_count
Date: Sun, 23 Mar 2025 00:26:06 +0300
Message-ID: <20250322212608.40511-9-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250322212608.40511-1-dmitry.osipenko@collabora.com>
References: <20250322212608.40511-1-dmitry.osipenko@collabora.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Use the atomic refcount_t helper for pages_use_count to optimize the
pin/unpin functions by skipping reservation locking while the GEM's pin
refcount > 1.

Acked-by: Maxime Ripard
Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c     | 33 ++++++++++------------
 drivers/gpu/drm/lima/lima_gem.c            |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c    |  2 +-
 drivers/gpu/drm/tests/drm_gem_shmem_test.c |  8 +++---
 include/drm/drm_gem_shmem_helper.h         |  2 +-
 5 files changed, 22 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d338b36f4eaa..6fb96e790abd 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -176,7 +176,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		if (shmem->pages)
 			drm_gem_shmem_put_pages_locked(shmem);
 
-		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
 		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
 
 		dma_resv_unlock(shmem->base.resv);
@@ -194,14 +194,13 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	if (shmem->pages_use_count++ > 0)
+	if (refcount_inc_not_zero(&shmem->pages_use_count))
 		return 0;
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
			    PTR_ERR(pages));
-		shmem->pages_use_count = 0;
 		return PTR_ERR(pages);
 	}
 
@@ -217,6 +216,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	shmem->pages = pages;
 
+	refcount_set(&shmem->pages_use_count, 1);
+
 	return 0;
 }
 
@@ -232,21 +233,17 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
-		return;
-
-	if (--shmem->pages_use_count > 0)
-		return;
-
+	if (refcount_dec_and_test(&shmem->pages_use_count)) {
 #ifdef CONFIG_X86
-	if (shmem->map_wc)
-		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+		if (shmem->map_wc)
+			set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
 #endif
 
-	drm_gem_put_pages(obj, shmem->pages,
-			  shmem->pages_mark_dirty_on_put,
-			  shmem->pages_mark_accessed_on_put);
-	shmem->pages = NULL;
+		drm_gem_put_pages(obj, shmem->pages,
+				  shmem->pages_mark_dirty_on_put,
+				  shmem->pages_mark_accessed_on_put);
+		shmem->pages = NULL;
+	}
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
@@ -582,8 +579,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
	 * mmap'd, vm_open() just grabs an additional reference for the new
	 * mm the vma is getting copied into (ie. on fork()).
	 */
-	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
-		shmem->pages_use_count++;
+	drm_WARN_ON_ONCE(obj->dev,
+			 !refcount_inc_not_zero(&shmem->pages_use_count));
 
 	dma_resv_unlock(shmem->base.resv);
 
@@ -674,7 +671,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 		return;
 
 	drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
-	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
+	drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count));
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 609221351cde..5deec673c11e 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -47,7 +47,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		}
 
 		bo->base.pages = pages;
-		bo->base.pages_use_count = 1;
+		refcount_set(&bo->base.pages_use_count, 1);
 
 		mapping_set_unevictable(mapping);
 	}
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b91019cd5acb..4a0b4bf03f1a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -489,7 +489,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 			goto err_unlock;
 		}
 		bo->base.pages = pages;
-		bo->base.pages_use_count = 1;
+		refcount_set(&bo->base.pages_use_count, 1);
 	} else {
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
index 98884966bb92..1459cdb0c413 100644
--- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
@@ -134,7 +134,7 @@ static void drm_gem_shmem_test_pin_pages(struct kunit *test)
 	shmem = drm_gem_shmem_create(drm_dev, TEST_SIZE);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, shmem);
 	KUNIT_EXPECT_NULL(test, shmem->pages);
-	KUNIT_EXPECT_EQ(test, shmem->pages_use_count, 0);
+	KUNIT_EXPECT_EQ(test, refcount_read(&shmem->pages_use_count), 0);
 
 	ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
 	KUNIT_ASSERT_EQ(test, ret, 0);
@@ -142,14 +142,14 @@ static void drm_gem_shmem_test_pin_pages(struct kunit *test)
 	ret = drm_gem_shmem_pin(shmem);
 	KUNIT_ASSERT_EQ(test, ret, 0);
 	KUNIT_ASSERT_NOT_NULL(test, shmem->pages);
-	KUNIT_EXPECT_EQ(test, shmem->pages_use_count, 1);
+	KUNIT_EXPECT_EQ(test, refcount_read(&shmem->pages_use_count), 1);
 
 	for (i = 0; i < (shmem->base.size >> PAGE_SHIFT); i++)
 		KUNIT_ASSERT_NOT_NULL(test, shmem->pages[i]);
 
 	drm_gem_shmem_unpin(shmem);
 	KUNIT_EXPECT_NULL(test, shmem->pages);
-	KUNIT_EXPECT_EQ(test, shmem->pages_use_count, 0);
+	KUNIT_EXPECT_EQ(test, refcount_read(&shmem->pages_use_count), 0);
 }
 
 /*
@@ -251,7 +251,7 @@ static void drm_gem_shmem_test_get_sg_table(struct kunit *test)
 	sgt = drm_gem_shmem_get_pages_sgt(shmem);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sgt);
 	KUNIT_ASSERT_NOT_NULL(test, shmem->pages);
-	KUNIT_EXPECT_EQ(test, shmem->pages_use_count, 1);
+	KUNIT_EXPECT_EQ(test, refcount_read(&shmem->pages_use_count), 1);
 	KUNIT_EXPECT_PTR_EQ(test, sgt, shmem->sgt);
 
 	for_each_sgtable_sg(sgt, sg, si) {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d411215fe494..3a4be433d5f0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -37,7 +37,7 @@ struct drm_gem_shmem_object {
	 * Reference count on the pages table.
	 * The pages are put when the count reaches zero.
	 */
-	unsigned int pages_use_count;
+	refcount_t pages_use_count;
 
	/**
	 * @pages_pin_count:
-- 
2.49.0