[PATCH v2 27/28] binder: reverse locking order in shrinker callback
Posted by Carlos Llamas 2 years ago
The locking order currently requires the alloc->mutex to be acquired
first, followed by the mmap lock. However, the alloc->mutex is converted
into a spinlock in subsequent commits, so the order needs to be reversed
to avoid nesting the sleeping mmap lock under the spinlock.

The shrinker's callback binder_alloc_free_page() is the only place that
needs to be reordered since other functions have been refactored and no
longer nest these locks.
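
Roughly, the callback now takes the locks in this order (illustrative
sketch only, the rest of the callback is omitted):

	if (!mmget_not_zero(mm))		/* pin the mm first */
		goto err_mmget;
	if (!mmap_read_trylock(mm))		/* then the sleeping mmap lock */
		goto err_mmap_read_lock_failed;
	if (!mutex_trylock(&alloc->mutex))	/* innermost; becomes a spinlock later in the series */
		goto err_get_alloc_mutex_failed;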

Some minor cosmetic changes are also included in this patch.

Signed-off-by: Carlos Llamas <cmllamas@google.com>
---
 drivers/android/binder_alloc.c | 46 ++++++++++++++++------------------
 1 file changed, 22 insertions(+), 24 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 5783675f2970..a3e56637db4f 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1061,35 +1061,39 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 				       void *cb_arg)
 	__must_hold(lock)
 {
-	struct mm_struct *mm = NULL;
-	struct binder_lru_page *page = container_of(item,
-						    struct binder_lru_page,
-						    lru);
-	struct binder_alloc *alloc;
+	struct binder_lru_page *page = container_of(item, typeof(*page), lru);
+	struct binder_alloc *alloc = page->alloc;
+	struct mm_struct *mm = alloc->mm;
+	struct vm_area_struct *vma;
+	struct page *page_to_free;
 	unsigned long page_addr;
 	size_t index;
-	struct vm_area_struct *vma;
 
-	alloc = page->alloc;
+	if (!mmget_not_zero(mm))
+		goto err_mmget;
+	if (!mmap_read_trylock(mm))
+		goto err_mmap_read_lock_failed;
 	if (!mutex_trylock(&alloc->mutex))
 		goto err_get_alloc_mutex_failed;
-
 	if (!page->page_ptr)
 		goto err_page_already_freed;
 
 	index = page - alloc->pages;
 	page_addr = alloc->buffer + index * PAGE_SIZE;
 
-	mm = alloc->mm;
-	if (!mmget_not_zero(mm))
-		goto err_mmget;
-	if (!mmap_read_trylock(mm))
-		goto err_mmap_read_lock_failed;
 	vma = vma_lookup(mm, page_addr);
 	if (vma && vma != binder_alloc_get_vma(alloc))
 		goto err_invalid_vma;
 
+	trace_binder_unmap_kernel_start(alloc, index);
+
+	page_to_free = page->page_ptr;
+	page->page_ptr = NULL;
+
+	trace_binder_unmap_kernel_end(alloc, index);
+
 	list_lru_isolate(lru, item);
+	mutex_unlock(&alloc->mutex);
 	spin_unlock(lock);
 
 	if (vma) {
@@ -1099,28 +1103,22 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 
 		trace_binder_unmap_user_end(alloc, index);
 	}
+
 	mmap_read_unlock(mm);
 	mmput_async(mm);
-
-	trace_binder_unmap_kernel_start(alloc, index);
-
-	__free_page(page->page_ptr);
-	page->page_ptr = NULL;
-
-	trace_binder_unmap_kernel_end(alloc, index);
+	__free_page(page_to_free);
 
 	spin_lock(lock);
-	mutex_unlock(&alloc->mutex);
 	return LRU_REMOVED_RETRY;
 
 err_invalid_vma:
+err_page_already_freed:
+	mutex_unlock(&alloc->mutex);
+err_get_alloc_mutex_failed:
 	mmap_read_unlock(mm);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
 err_mmget:
-err_page_already_freed:
-	mutex_unlock(&alloc->mutex);
-err_get_alloc_mutex_failed:
 	return LRU_SKIP;
 }
 
-- 
2.43.0.rc2.451.g8631bc7472-goog
Re: [PATCH v2 27/28] binder: reverse locking order in shrinker callback
Posted by Alice Ryhl 2 years ago
> +	trace_binder_unmap_kernel_start(alloc, index);
> +
> +	page_to_free = page->page_ptr;
> +	page->page_ptr = NULL;
> +
> +	trace_binder_unmap_kernel_end(alloc, index);

What are these trace calls used for? Previously they wrapped the call to
__free_page, and happened after the `trace_binder_unmap_user_*` calls.

I'm guessing they should be moved down to wrap the call to __free_page.
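
Something like this is what I had in mind (untested sketch, just to show
the placement I'd expect):

	trace_binder_unmap_kernel_start(alloc, index);

	__free_page(page_to_free);

	trace_binder_unmap_kernel_end(alloc, index);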

Alice
Re: [PATCH v2 27/28] binder: reverse locking order in shrinker callback
Posted by Carlos Llamas 2 years ago
On Mon, Dec 04, 2023 at 11:57:42AM +0000, 'Alice Ryhl' via kernel-team wrote:
> > +	trace_binder_unmap_kernel_start(alloc, index);
> > +
> > +	page_to_free = page->page_ptr;
> > +	page->page_ptr = NULL;
> > +
> > +	trace_binder_unmap_kernel_end(alloc, index);
> 
> What are these trace calls used for? Previously they wrapped the call to
> __free_page, and happened after the `trace_binder_unmap_user_*` calls.
> 

It also used to wrap an unmap_kernel_range() call, which explains the
naming of the tracepoints, I suppose. It looked like this:

	trace_binder_unmap_kernel_start(alloc, index);

	unmap_kernel_range(page_addr, PAGE_SIZE);
	__free_page(page->page_ptr);
	page->page_ptr = NULL;

	trace_binder_unmap_kernel_end(alloc, index);

My thinking was that we care more about the page->page_ptr clearing, but
it sounds like we could just drop the tracepoints, since we no longer
have a kernel-mapped area.
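
With the tracepoints dropped, that hunk would just be left as (roughly):

	page_to_free = page->page_ptr;
	page->page_ptr = NULL;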

--
Carlos Llamas
Re: [PATCH v2 27/28] binder: reverse locking order in shrinker callback
Posted by Alice Ryhl 2 years ago
On Mon, Dec 4, 2023 at 3:45 PM Carlos Llamas <cmllamas@google.com> wrote:
>
> On Mon, Dec 04, 2023 at 11:57:42AM +0000, 'Alice Ryhl' via kernel-team wrote:
> > > +   trace_binder_unmap_kernel_start(alloc, index);
> > > +
> > > +   page_to_free = page->page_ptr;
> > > +   page->page_ptr = NULL;
> > > +
> > > +   trace_binder_unmap_kernel_end(alloc, index);
> >
> > What are these trace calls used for? Previously they wrapped the call to
> > __free_page, and happened after the `trace_binder_unmap_user_*` calls.
> >
>
> It also used to wrap an unmap_kernel_range() call, which explains the
> naming of the tracepoint I suppose. It looked like this:
>
>         trace_binder_unmap_kernel_start(alloc, index);
>
>         unmap_kernel_range(page_addr, PAGE_SIZE);
>         __free_page(page->page_ptr);
>         page->page_ptr = NULL;
>
>         trace_binder_unmap_kernel_end(alloc, index);
>
> My thinking was that we care more about the page->page_ptr clearing but
> it sounds like we could just drop the tracepoint, we no longer have a
> kernel mapped area.

Ok. Feel free to remove or move the trace calls however it needs to be
done and add my tag:

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Re: [PATCH v2 27/28] binder: reverse locking order in shrinker callback
Posted by Carlos Llamas 2 years ago
On Mon, Dec 04, 2023 at 03:47:22PM +0100, Alice Ryhl wrote:
> Ok. Feel free to remove or move the trace calls however it needs to be
> done and add my tag:

Sounds good. I'll remove the trace calls in a follow-up patch, since it
needs to be separate. Thanks.

--
Carlos Llamas