[PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin

Posted by Dmitry Osipenko 10 months, 3 weeks ago
Vmapped pages must be pinned in memory. Previously, get/put_pages()
implicitly hard-pinned/unpinned the pages. This will no longer be the
case once the memory shrinker is added, because pages_use_count > 0
will no longer determine whether pages are hard-pinned (they will be
soft-pinned); the new pages_pin_count will do the hard-pinning instead.
Switch vmap/vunmap() to use the pin/unpin() functions in preparation
for adding memory shrinker support to drm-shmem.

Acked-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 6 +++---
 include/drm/drm_gem_shmem_helper.h     | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 6fb96e790abd..84a196bbe44f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -360,7 +360,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 			return 0;
 		}
 
-		ret = drm_gem_shmem_get_pages_locked(shmem);
+		ret = drm_gem_shmem_pin_locked(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -383,7 +383,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 
 err_put_pages:
 	if (!drm_gem_is_imported(obj))
-		drm_gem_shmem_put_pages_locked(shmem);
+		drm_gem_shmem_unpin_locked(shmem);
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
@@ -420,7 +420,7 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 			return;
 
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages_locked(shmem);
+		drm_gem_shmem_unpin_locked(shmem);
 	}
 
 	shmem->vaddr = NULL;
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 3a4be433d5f0..8b9bba87ae63 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -130,7 +130,7 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
 	return (shmem->madv > 0) &&
-		!shmem->vmap_use_count && shmem->sgt &&
+		!refcount_read(&shmem->pages_pin_count) && shmem->sgt &&
 		!shmem->base.dma_buf && !drm_gem_is_imported(&shmem->base);
 }
 
-- 
2.49.0
Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Thomas Zimmermann 10 months, 1 week ago
Hi

On 22.03.25 22:26, Dmitry Osipenko wrote:
> Vmapped pages must be pinned in memory. Previously, get/put_pages()
> implicitly hard-pinned/unpinned the pages. This will no longer be the
> case once the memory shrinker is added, because pages_use_count > 0
> will no longer determine whether pages are hard-pinned (they will be
> soft-pinned); the new pages_pin_count will do the hard-pinning instead.
> Switch vmap/vunmap() to use the pin/unpin() functions in preparation
> for adding memory shrinker support to drm-shmem.

I've meanwhile rediscovered this patch and I'm sure this is not correct. 
Vmap should not pin AFAIK. It is possible to vmap if the buffer has been 
pinned, but that's not automatic.  For other vmaps it is necessary to 
hold the reservation lock to prevent the buffer from moving.

Best regards
Thomas


-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)

Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Dmitry Osipenko 10 months, 1 week ago
On 4/2/25 15:47, Thomas Zimmermann wrote:
> Hi
> 
> On 22.03.25 22:26, Dmitry Osipenko wrote:
>> [...]
> 
> I've meanwhile rediscovered this patch and I'm sure this is not correct.
> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
> pinned, but that's not automatic.  For other vmaps it is necessary to
> hold the reservation lock to prevent the buffer from moving.

Hi, with vmap() you're getting a kernel address. The GEM's memory must
not be movable while it's vmapped, as we can't handle kernel page faults.

Not sure what you mean by the "other vmaps", please clarify.

-- 
Best regards,
Dmitry
Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Boris Brezillon 10 months, 1 week ago
On Wed, 2 Apr 2025 15:58:55 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:

> On 4/2/25 15:47, Thomas Zimmermann wrote:
> > Hi
> > 
> > On 22.03.25 22:26, Dmitry Osipenko wrote:
> >> [...]
> > 
> > I've meanwhile rediscovered this patch and I'm sure this is not correct.
> > Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
> > pinned, but that's not automatic.  For other vmaps it is necessary to
> > hold the reservation lock to prevent the buffer from moving.

Hm, is this problematic though? If you want to vmap() inside a section
that's protected by the resv lock, you can

- drm_gem_shmem_vmap_locked()
- do whatever you need to do with the vaddr,
- drm_gem_shmem_vunmap_locked()

and the {pin,page_use}_count will be back to their original values.
Those are just ref counters, and I doubt the overhead of
incrementing/decrementing them makes a difference compared to the heavy
page-allocation/vmap operations...
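
For illustration, a minimal sketch of that sequence (assuming a struct
drm_gem_shmem_object *shmem and the iosys_map-based helpers from this
series; error handling trimmed):

	struct drm_gem_object *obj = &shmem->base;
	struct iosys_map map;
	int ret;

	dma_resv_lock(obj->resv, NULL);

	ret = drm_gem_shmem_vmap_locked(shmem, &map);
	if (!ret) {
		/* use map.vaddr while the resv lock is held */
		drm_gem_shmem_vunmap_locked(shmem, &map);
	}

	dma_resv_unlock(obj->resv);

By the time dma_resv_unlock() runs, both counters are back to their
original values.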
Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Thomas Zimmermann 10 months, 1 week ago
Hi

On 02.04.25 15:21, Boris Brezillon wrote:
> On Wed, 2 Apr 2025 15:58:55 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>
>> On 4/2/25 15:47, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> On 22.03.25 22:26, Dmitry Osipenko wrote:
>>>> [...]
>>> I've meanwhile rediscovered this patch and I'm sure this is not correct.
>>> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
>>> pinned, but that's not automatic.  For other vmaps it is necessary to
>>> hold the reservation lock to prevent the buffer from moving.
> Hm, is this problematic though? If you want to vmap() inside a section
> that's protected by the resv lock, you can
>
> - drm_gem_shmem_vmap_locked()
> - do whatever you need to do with the vaddr,
> - drm_gem_shmem_vunmap_locked()
>
> and the {pin,page_use}_count will be back to their original values.
> Those are just ref counters, and I doubt the overhead of
> incrementing/decrementing them makes a difference compared to the heavy
> page-allocation/vmap operations...

I once tried to add pin as part of vmap, so that pages stay in place. 
Christian was very clear about not doing this. I found this made a lot 
of sense: vmap means "make the memory available to the CPU". The memory 
location doesn't matter much here. Pin means something like "make the 
memory available to the GPU". But which GPU depends on the caller: calls 
via GEM refer to the local GPU, calls via dma-buf usually refer to the 
importer's GPU. That GPU uncertainty makes pin problematic already.

In your case, vmap and pin both intend to hold the shmem pages in memory.
They might be built on top of the same implementation, but one should
not be implemented in terms of the other because of their different meanings.

More generally speaking, I've meanwhile come to the conclusion that pin 
should not even exist in the GEM interface. It's an internal operation 
of TTM and reveals too much about what happens within the 
implementation. Instead GEM should be free to move buffers around. 
Dma-buf importers should only tell exporters to make buffers available 
to them, but not how to do this. AFAIK that's what dma-buf's 
attach/detach is for.

Best regards
Thomas


-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)

Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Boris Brezillon 10 months, 1 week ago
On Thu, 3 Apr 2025 09:20:00 +0200
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> Hi
> 
> On 02.04.25 15:21, Boris Brezillon wrote:
> > On Wed, 2 Apr 2025 15:58:55 +0300
> > Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> >  
> >> On 4/2/25 15:47, Thomas Zimmermann wrote:  
> >>> Hi
> >>>
> >>> On 22.03.25 22:26, Dmitry Osipenko wrote:
> >>>> [...]
> >>> I've meanwhile rediscovered this patch and I'm sure this is not correct.
> >>> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
> >>> pinned, but that's not automatic.  For other vmaps it is necessary to
> >>> hold the reservation lock to prevent the buffer from moving.  
> > Hm, is this problematic though? If you want to vmap() inside a section
> > that's protected by the resv lock, you can
> >
> > - drm_gem_shmem_vmap_locked()
> > - do whatever you need to do with the vaddr,
> > - drm_gem_shmem_vunmap_locked()
> >
> > and the {pin,page_use}_count will be back to their original values.
> > Those are just ref counters, and I doubt the overhead of
> > incrementing/decrementing them makes a difference compared to the heavy
> > page-allocation/vmap operations...  
> 
> I once tried to add pin as part of vmap, so that pages stay in place. 
> Christian was very clear about not doing this. I found this made a lot 
> of sense: vmap means "make the memory available to the CPU". The memory 
> location doesn't matter much here. Pin means something like "make the 
> memory available to the GPU". But which GPU depends on the caller: calls 
> via GEM refer to the local GPU, calls via dma-buf usually refer to the 
> importer's GPU. That GPU uncertainty makes pin problematic already.

Okay, so it looks more like a naming issue then. The intent here is to
make sure the page array doesn't disappear while we have a kernel
mapping active (the address returned by vmap()). The reason we went from
pages_count to pages_use_count+pin_count is that we have two kinds of
references in drm_gem_shmem (see the sketch below):

- weak references (tracked with pages_use_count). Those are
  usually held by GPU VMs, and they are weak in the sense that they
  shouldn't prevent the shrinker from reclaiming the pages if the GPU
  VM is idle. The other user of weak references is userspace mappings
  of GEM objects (mmap()), because we can repopulate those with our
  fault handler.
- hard references (tracked with pin_count), which are used to prevent
  the shrinker from even considering the GEM as reclaimable. Kernel
  mappings clearly fall into that category, because otherwise we could
  reclaim pages that might be dereferenced by the CPU later on. It's
  also used to implement drm_gem_pin(), because it's the same mechanism
  really, hence the name.
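
In struct terms, that's roughly the following (a sketch of the two
refcount_t fields this series relies on, not the complete struct):

	struct drm_gem_shmem_object {
		struct drm_gem_object base;
		...
		/* weak references: GPU VM mappings, userspace mmap() */
		refcount_t pages_use_count;
		/* hard references: kernel vmap(), drm_gem_pin() */
		refcount_t pages_pin_count;
		...
	};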

> 
> In your case, vmap and pin both intend to hold the shmem pages in memory.
> They might be built on top of the same implementation, but one should
> not be implemented in terms of the other because of their different meanings.

But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
we call drm_gem_shmem_pin_locked(), but that's an internal function to
make sure the pages are allocated and stay around until
drm_gem_shmem_vunmap_locked() is called.

I guess we could rename pin_count into hard_refcount or
page_residency_count or xxx_count, and change the pin/unpin_locked()
function names accordingly, but that's just a naming detail, it doesn't
force you to call drm_gem_pin() to vmap() your GEM, it's something we
do internally.
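
FWIW, the rough shape of that internal helper as introduced by this
series (a sketch, not the exact code):

	int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
	{
		int ret;

		dma_resv_assert_held(shmem->base.resv);

		/* already hard-pinned: just take another reference */
		if (refcount_inc_not_zero(&shmem->pages_pin_count))
			return 0;

		/* first pin: get the pages, then start the hard refcount */
		ret = drm_gem_shmem_get_pages_locked(shmem);
		if (!ret)
			refcount_set(&shmem->pages_pin_count, 1);

		return ret;
	}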

> 
> More generally speaking, I've meanwhile come to the conclusion that pin 
> should not even exist in the GEM interface. It's an internal operation 
> of TTM and reveals too much about what happens within the 
> implementation. Instead GEM should be free to move buffers around.

Well, yes and no. There are situations where you simply can't move
things around if there are active users, and vmap() is one of those
AFAICT.
 
> Dma-buf importers should only tell exporters to make buffers available 
> to them, but not how to do this. AFAIK that's what dma-buf's 
> attach/detach is for.

And that's what they do, no? attach() tells the exporter to give the
importer a way to access those buffers, and given the exporter has no
clue about when/how the importer will access those, there's no other way
but to pin the pages. Am I missing something here?
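
For reference, the drm_prime attach helper already does roughly this
today (quoting from memory, so take it as a sketch):

	int drm_gem_map_attach(struct dma_buf *dma_buf,
			       struct dma_buf_attachment *attach)
	{
		struct drm_gem_object *obj = dma_buf->priv;

		if (!obj->funcs->get_sg_table)
			return -ENOSYS;

		/* keep the exporter's pages resident while attached */
		return drm_gem_pin(obj);
	}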
Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Thomas Zimmermann 10 months, 1 week ago
Hi

On 03.04.25 10:50, Boris Brezillon wrote:
> On Thu, 3 Apr 2025 09:20:00 +0200
> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
>> Hi
>>
>> On 02.04.25 15:21, Boris Brezillon wrote:
>>> On Wed, 2 Apr 2025 15:58:55 +0300
>>> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>>>   
>>>> On 4/2/25 15:47, Thomas Zimmermann wrote:
>>>>> Hi
>>>>>
>>>>> On 22.03.25 22:26, Dmitry Osipenko wrote:
>>>>>> [...]
>>>>> I've meanwhile rediscovered this patch and I'm sure this is not correct.
>>>>> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
>>>>> pinned, but that's not automatic.  For other vmaps it is necessary to
>>>>> hold the reservation lock to prevent the buffer from moving.
>>> Hm, is this problematic though? If you want to vmap() inside a section
>>> that's protected by the resv lock, you can
>>>
>>> - drm_gem_shmem_vmap_locked()
>>> - do whatever you need to do with the vaddr,
>>> - drm_gem_shmem_vunmap_locked()
>>>
>>> and the {pin,page_use}_count will be back to their original values.
>>> Those are just ref counters, and I doubt the overhead of
>>> incrementing/decrementing them makes a difference compared to the heavy
>>> page-allocation/vmap operations...
>> I once tried to add pin as part of vmap, so that pages stay in place.
>> Christian was very clear about not doing this. I found this made a lot
>> of sense: vmap means "make the memory available to the CPU". The memory
>> location doesn't matter much here. Pin means something like "make the
>> memory available to the GPU". But which GPU depends on the caller: calls
>> via GEM refer to the local GPU, calls via dma-buf usually refer to the
>> importer's GPU. That GPU uncertainty makes pin problematic already.
> Okay, so it looks more like a naming issue then. The intent here is to

It's certainly possible to see this as a naming problem.

> make sure the page array doesn't disappear while we have a kernel
> mapping active (the address returned by vmap()). The reason we went from
> pages_count to pages_use_count+pin_count is that we have two kinds of
> references in drm_gem_shmem (see the sketch below):
>
> - weak references (tracked with pages_use_count). Those are
>   usually held by GPU VMs, and they are weak in the sense that they
>   shouldn't prevent the shrinker from reclaiming the pages if the GPU
>   VM is idle. The other user of weak references is userspace mappings
>   of GEM objects (mmap()), because we can repopulate those with our
>   fault handler.
> - hard references (tracked with pin_count), which are used to prevent
>   the shrinker from even considering the GEM as reclaimable. Kernel
>   mappings clearly fall into that category, because otherwise we could
>   reclaim pages that might be dereferenced by the CPU later on. It's
>   also used to implement drm_gem_pin(), because it's the same mechanism
>   really, hence the name.

Yeah, this should be renamed, IMHO. Pin is a TTM operation that fixes
buffers in certain locations. Drivers do this internally. It has nothing 
to do with gem-shmem.

There's also a pin operation in GEM BOs' drm_gem_object_funcs, but it is 
only called for PRIME-exported buffers and not for general use. For 
gem-shmem, the callback would be implemented on top of the hard references.
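
Something like this hypothetical sketch, built on the hard-reference
helper:

	static int drm_gem_shmem_object_pin(struct drm_gem_object *obj)
	{
		struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

		return drm_gem_shmem_pin_locked(shmem);
	}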

And there's also a pin in dma_buf_ops. The term 'pin' is somewhat 
overloaded already.

>
>> In your case, vmap and pin both intend to hold the shmem pages in memory.
>> They might be built on top of the same implementation, but one should
>> not be implemented in terms of the other because of their different meanings.
> But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
> we call drm_gem_shmem_pin_locked(), but that's an internal function to
> make sure the pages are allocated and stay around until
> drm_gem_shmem_vunmap_locked() is called.
>
> I guess we could rename pin_count into hard_refcount or
> page_residency_count or xxx_count, and change the pin/unpin_locked()
> function names accordingly, but that's just a naming detail, it doesn't
> force you to call drm_gem_pin() to vmap() your GEM, it's something we
> do internally.

Such a rename would be much appreciated. page_residency_count seems 
appropriate.

>
>> More generally speaking, I've meanwhile come to the conclusion that pin
>> should not even exist in the GEM interface. It's an internal operation
>> of TTM and reveals too much about what happens within the
>> implementation. Instead GEM should be free to move buffers around.
> Well, yes and no. There are situations where you simply can't move
> things around if there are active users, and vmap() is one of those
> AFAICT.

Sure. What I mean here is that pin/unpin is something of an 
implementation detail. IMHO the pin/unpin callbacks in 
drm_gem_object_funcs should get different names, such as pin_exported. 
They are not for general use.

>   
>> Dma-buf importers should only tell exporters to make buffers available
>> to them, but not how to do this. AFAIK that's what dma-buf's
>> attach/detach is for.
> And that's what they do, no? attach() tells the exporter to give the
> importer a way to access those buffers, and given the exporter has no
> clue about when/how the importer will access those, there's no other way
> but to pin the pages. Am I missing something here?

Yeah, that's what they do.

Best regards
Thomas


-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)

Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Boris Brezillon 10 months, 1 week ago
On Fri, 4 Apr 2025 10:01:50 +0200
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> >> In your case, vmap and pin both intend to hold the shmem pages in memory.
> >> They might be built on top of the same implementation, but one should
> >> not be implemented in terms of the other because of their different meanings.
> > But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
> > we call drm_gem_shmem_pin_locked(), but that's an internal function to
> > make sure the pages are allocated and stay around until
> > drm_gem_shmem_vunmap_locked() is called.
> >
> > I guess we could rename pin_count into hard_refcount or
> > page_residency_count or xxx_count, and change the pin/unpin_locked()
> > function names accordingly, but that's just a naming detail, it doesn't
> > force you to call drm_gem_pin() to vmap() your GEM, it's something we
> > do internally.  
> 
> Such a rename would be much appreciated. page_residency_count seems 
> appropriate.

On second thought, I think I prefer
'unevictable_count/inc_unevictable()/dec_unevictable()'. But looking at
the gem-vram changes you just posted, it looks like gem-shmem is not the
only one using the term 'pin' for this page pinning thing, so if we go
and plan for a rename, I'd rather make it DRM-wide than have gem-shmem
be the outlier yet again :-).
Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Thomas Zimmermann 10 months, 1 week ago
Hi

On 04.04.25 16:52, Boris Brezillon wrote:
> On Fri, 4 Apr 2025 10:01:50 +0200
> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
>>>> In your case, vmap and pin both intend to hold the shmem pages in memory.
>>>> They might be built on top of the same implementation, but one should
>>>> not be implemented in terms of the other because of their different meanings.
>>> But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
>>> we call drm_gem_shmem_pin_locked(), but that's an internal function to
>>> make sure the pages are allocated and stay around until
>>> drm_gem_shmem_vunmap_locked() is called.
>>>
>>> I guess we could rename pin_count into hard_refcount or
>>> page_residency_count or xxx_count, and change the pin/unpin_locked()
>>> function names accordingly, but that's just a naming detail, it doesn't
>>> force you to call drm_gem_pin() to vmap() your GEM, it's something we
>>> do internally.
>> Such a rename would be much appreciated. page_residency_count seems
>> appropriate.
> On second thought, I think I prefer
> 'unevictable_count/inc_unevictable()/dec_unevictable()'. But looking at

Ok

> the gem-vram changes you just posted, it looks like gem-shmem is not the
> only one using the term 'pin' for this page pinning thing, so if we go
> and plan for a rename, I'd rather make it DRM-wide than have gem-shmem
> be the outlier yet again :-).

Which one do you mean?

- The code in gem-vram does pinning in the TTM sense of the term.

- From the client code, the pinning got removed.

- The GEM pin/unpin callbacks are still there, but the long-term plan is 
to go to dma-buf pin callbacks AFAICT.

- Renaming the dma-buf pin/unpin would be a kernel-wide change. Not 
likely to happen.

Best regards
Thomas


-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Posted by Boris Brezillon 10 months, 1 week ago
On Fri, 4 Apr 2025 16:58:20 +0200
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> Hi
> 
> On 04.04.25 16:52, Boris Brezillon wrote:
> > On Fri, 4 Apr 2025 10:01:50 +0200
> > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >  
> >>>> In your case, vmap and pin both intend to hold the shmem pages in memory.
> >>>> They might be built on top of the same implementation, but one should
> >>>> not be implemented in terms of the other because of their different meanings.
> >>> But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
> >>> we call drm_gem_shmem_pin_locked(), but that's an internal function to
> >>> make sure the pages are allocated and stay around until
> >>> drm_gem_shmem_vunmap_locked() is called.
> >>>
> >>> I guess we could rename pin_count into hard_refcount or
> >>> page_residency_count or xxx_count, and change the pin/unpin_locked()
> >>> function names accordingly, but that's just a naming detail, it doesn't
> >>> force you to call drm_gem_pin() to vmap() your GEM, it's something we
> >>> do internally.  
> >> Such a rename would be much appreciated. page_residency_count seems
> >> appropriate.  
> > On second thought, I think I prefer
> > 'unevictable_count/inc_unevictable()/dec_unevictable()'. But looking at
> 
> Ok
> 
> > the gem-vram changes you just posted, it looks like gem-shmem is not the
> > only one using the term 'pin' for this page pinning thing, so if we go
> > and plan for a rename, I'd rather make it DRM-wide than have gem-shmem
> > be the outlier yet again :-).
> 
> Which one do you mean?
> 
> - The code in gem-vram does pinning in the TTM sense of the term.

Sorry, but I still don't see why pinning should be a TTM-only thing. If
I read the doc of drm_gem_vram_pin():

"
Pinning a buffer object ensures that it is not evicted from a memory
region. A pinned buffer object has to be unpinned before it can be
pinned to another region. If the pl_flag argument is 0, the buffer is
pinned at its current location (video RAM or system memory).
"

And this pinning is not so different from the pinning we have in
gem-shmem: making buffer object memory unevictable/unmovable.

> 
> - From the client code, the pinning got removed.
> 
> - The GEM pin/unpin callbacks are still there, but the long-term plan is 
> to go to dma-buf pin callbacks AFAICT.
> 
> - Renaming the dma-buf pin/unpin would be a kernel-wide change. Not 
> likely to happen.

Again, I'm not sure why we'd want to do that anyway. Just because the
TTM pinning semantics might be slightly different from the
GEM/dma-buf ones doesn't mean the pinning term should be
prohibited outside the TTM scope. The concept of pinning is pretty
generic and applies to system memory too AFAICT.