[PATCH 20/21] binder: reverse locking order in shrinker callback

Posted by Carlos Llamas 2 years, 1 month ago
The locking order currently requires the alloc->mutex to be acquired
first, followed by the mmap lock. However, the alloc->mutex is converted
into a spinlock in subsequent commits, so the order needs to be reversed
to avoid nesting the sleeping mmap lock under the spinlock.

The shrinker's callback binder_alloc_free_page() is the only place that
needs to be reordered since other functions have been refactored and no
longer nest these locks.

Some minor cosmetic changes are also included in this patch.

Signed-off-by: Carlos Llamas <cmllamas@google.com>
---
 drivers/android/binder_alloc.c | 44 ++++++++++++++++------------------
 1 file changed, 20 insertions(+), 24 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index e739be7f2dd4..0ba9f524e0ff 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1065,35 +1065,38 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 				       void *cb_arg)
 	__must_hold(lock)
 {
-	struct mm_struct *mm = NULL;
-	struct binder_lru_page *page = container_of(item,
-						    struct binder_lru_page,
-						    lru);
-	struct binder_alloc *alloc;
+	struct binder_lru_page *page = container_of(item, typeof(*page), lru);
+	struct binder_alloc *alloc = page->alloc;
+	struct mm_struct *mm = alloc->mm;
+	struct vm_area_struct *vma;
 	unsigned long page_addr;
 	size_t index;
-	struct vm_area_struct *vma;
 
-	alloc = page->alloc;
+	if (!mmget_not_zero(mm))
+		goto err_mmget;
+	if (!mmap_read_trylock(mm))
+		goto err_mmap_read_lock_failed;
 	if (!mutex_trylock(&alloc->mutex))
 		goto err_get_alloc_mutex_failed;
-
 	if (!page->page_ptr)
 		goto err_page_already_freed;
 
 	index = page - alloc->pages;
 	page_addr = alloc->buffer + index * PAGE_SIZE;
 
-	mm = alloc->mm;
-	if (!mmget_not_zero(mm))
-		goto err_mmget;
-	if (!mmap_read_trylock(mm))
-		goto err_mmap_read_lock_failed;
 	vma = vma_lookup(mm, page_addr);
 	if (vma && vma != binder_alloc_get_vma(alloc))
 		goto err_invalid_vma;
 
+	trace_binder_unmap_kernel_start(alloc, index);
+
+	__free_page(page->page_ptr);
+	page->page_ptr = NULL;
+
+	trace_binder_unmap_kernel_end(alloc, index);
+
 	list_lru_isolate(lru, item);
+	mutex_unlock(&alloc->mutex);
 	spin_unlock(lock);
 
 	if (vma) {
@@ -1103,28 +1106,21 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 
 		trace_binder_unmap_user_end(alloc, index);
 	}
+
 	mmap_read_unlock(mm);
 	mmput_async(mm);
 
-	trace_binder_unmap_kernel_start(alloc, index);
-
-	__free_page(page->page_ptr);
-	page->page_ptr = NULL;
-
-	trace_binder_unmap_kernel_end(alloc, index);
-
 	spin_lock(lock);
-	mutex_unlock(&alloc->mutex);
 	return LRU_REMOVED_RETRY;
 
 err_invalid_vma:
+err_page_already_freed:
+	mutex_unlock(&alloc->mutex);
+err_get_alloc_mutex_failed:
 	mmap_read_unlock(mm);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
 err_mmget:
-err_page_already_freed:
-	mutex_unlock(&alloc->mutex);
-err_get_alloc_mutex_failed:
 	return LRU_SKIP;
 }
 
-- 
2.42.0.869.gea05f2083d-goog
Re: [PATCH 20/21] binder: reverse locking order in shrinker callback
Posted by Alice Ryhl 2 years, 1 month ago
Carlos Llamas <cmllamas@google.com> writes:
> +	trace_binder_unmap_kernel_start(alloc, index);
> +
> +	__free_page(page->page_ptr);
> +	page->page_ptr = NULL;
> +
> +	trace_binder_unmap_kernel_end(alloc, index);
> +
>  	list_lru_isolate(lru, item);
> +	mutex_unlock(&alloc->mutex);
>  	spin_unlock(lock);
>  
>  	if (vma) {
>  		trace_binder_unmap_user_start(alloc, index);
>  
>  		zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);
>  
>  		trace_binder_unmap_user_end(alloc, index);
>  	}
> +
>  	mmap_read_unlock(mm);
>  	mmput_async(mm);
>  
> -	trace_binder_unmap_kernel_start(alloc, index);
> -
> -	__free_page(page->page_ptr);
> -	page->page_ptr = NULL;
> -
> -	trace_binder_unmap_kernel_end(alloc, index);

This reverses the order so that __free_page is called before
zap_page_range_single. Is that okay?

Another option would be to do something along these lines:

    page_to_free = page->page_ptr;
    page->page_ptr = NULL;

    [snip]

    mmap_read_unlock(mm);
    mmput_async(mm);
    __free_page(page_to_free);

This way you wouldn't reverse the order. Also, you reduce the amount of
time you spend holding the mmap read lock, since the page is freed
without holding the lock.

Other than the above, this LGTM.

Alice
Re: [PATCH 20/21] binder: reverse locking order in shrinker callback
Posted by Carlos Llamas 2 years, 1 month ago
On Tue, Nov 07, 2023 at 09:08:46AM +0000, Alice Ryhl wrote:
> Carlos Llamas <cmllamas@google.com> writes:
> This reverses the order so that __free_page is called before
> zap_page_range_single. Is that okay?

It is not OK. Userspace would still have access to the page between the
__free_page() and zap_page_range_single() calls. This is bad, so thanks
for catching this.

> Another option would be to do something along these lines:
> 
>     page_to_free = page->page_ptr;
>     page->page_ptr = NULL;
> 
>     [snip]
> 
>     mmap_read_unlock(mm);
>     mmput_async(mm);
>     __free_page(page_to_free);
> 
> This way you wouldn't reverse the order. Also, you reduce the amount of
> time you spend holding the mmap read lock, since the page is freed
> without holding the lock.
> 

Thanks, I've added this approach to v2.
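
Roughly, the tail of binder_alloc_free_page() would then end up looking
something like this (untested sketch, the exact v2 may differ slightly):

    struct page *page_to_free;
    [...]
    /* still holding alloc->mutex and the mmap read lock here */
    trace_binder_unmap_kernel_start(alloc, index);

    page_to_free = page->page_ptr;
    page->page_ptr = NULL;

    trace_binder_unmap_kernel_end(alloc, index);

    list_lru_isolate(lru, item);
    mutex_unlock(&alloc->mutex);
    spin_unlock(lock);

    if (vma) {
            trace_binder_unmap_user_start(alloc, index);

            /* unmap from userspace first, while the page still exists */
            zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);

            trace_binder_unmap_user_end(alloc, index);
    }

    mmap_read_unlock(mm);
    mmput_async(mm);
    /* free only after the mapping is gone and the locks are dropped */
    __free_page(page_to_free);

    spin_lock(lock);
    return LRU_REMOVED_RETRY;

This keeps the zap-before-free order and also drops the mmap read lock
before freeing the page, as you suggested.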

--
Carlos Llamas