From nobody Fri Dec 19 06:34:56 2025
Date: Tue, 3 Dec 2024 21:54:35 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-2-cmllamas@google.com>
Subject: [PATCH v6 1/9] Revert "binder: switch alloc->mutex to spinlock_t"
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Mukesh Ojha

This reverts commit 7710e2cca32e7f3958480e8bd44f50e29d0c2509.

In preparation for concurrent page installations, restore the original
alloc->mutex which will serialize zap_page_range_single() against page
installations in subsequent patches (instead of the mmap_sem).

Resolved trivial conflicts with commit 2c10a20f5e84a ("binder_alloc: Fix
sleeping function called from invalid context") and commit da0c02516c50
("mm/list_lru: simplify the list_lru walk callback function").
Cc: Mukesh Ojha
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 46 +++++++++++++++++-----------------
 drivers/android/binder_alloc.h | 10 ++++----
 2 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index a738e7745865..52f6aa3232e1 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -169,9 +169,9 @@ struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
 {
 	struct binder_buffer *buffer;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	buffer = binder_alloc_prepare_to_free_locked(alloc, user_ptr);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return buffer;
 }
 
@@ -597,10 +597,10 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	if (!next)
 		return ERR_PTR(-ENOMEM);
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	buffer = binder_alloc_new_buf_locked(alloc, next, size, is_async);
 	if (IS_ERR(buffer)) {
-		spin_unlock(&alloc->lock);
+		mutex_unlock(&alloc->mutex);
 		goto out;
 	}
 
@@ -608,7 +608,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	buffer->offsets_size = offsets_size;
 	buffer->extra_buffers_size = extra_buffers_size;
 	buffer->pid = current->tgid;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 
 	ret = binder_install_buffer_pages(alloc, buffer, size);
 	if (ret) {
@@ -785,17 +785,17 @@ void binder_alloc_free_buf(struct binder_alloc *alloc,
 	 * We could eliminate the call to binder_alloc_clear_buf()
 	 * from binder_alloc_deferred_release() by moving this to
 	 * binder_free_buf_locked(). However, that could
-	 * increase contention for the alloc->lock if clear_on_free
-	 * is used frequently for large buffers. This lock is not
+	 * increase contention for the alloc mutex if clear_on_free
+	 * is used frequently for large buffers. The mutex is not
 	 * needed for correctness here.
 	 */
 	if (buffer->clear_on_free) {
 		binder_alloc_clear_buf(alloc, buffer);
 		buffer->clear_on_free = false;
 	}
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	binder_free_buf_locked(alloc, buffer);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 }
 
 /**
@@ -893,7 +893,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 	struct binder_buffer *buffer;
 
 	buffers = 0;
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	BUG_ON(alloc->vma);
 
 	while ((n = rb_first(&alloc->allocated_buffers))) {
@@ -940,7 +940,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 			page_count++;
 		}
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	kvfree(alloc->pages);
 	if (alloc->mm)
 		mmdrop(alloc->mm);
@@ -964,7 +964,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 	struct binder_buffer *buffer;
 	struct rb_node *n;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	for (n = rb_first(&alloc->allocated_buffers); n; n = rb_next(n)) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 		seq_printf(m, "  buffer %d: %lx size %zd:%zd:%zd %s\n",
@@ -974,7 +974,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 			   buffer->extra_buffers_size,
 			   buffer->transaction ? "active" : "delivered");
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 }
 
 /**
@@ -991,7 +991,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 	int lru = 0;
 	int free = 0;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	/*
 	 * Make sure the binder_alloc is fully initialized, otherwise we might
 	 * read inconsistent state.
@@ -1007,7 +1007,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 			lru++;
 		}
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	seq_printf(m, "  pages: %d:%d:%d\n", active, lru, free);
 	seq_printf(m, "  pages high watermark: %zu\n", alloc->pages_high);
 }
@@ -1023,10 +1023,10 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
 	struct rb_node *n;
 	int count = 0;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
 		count++;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return count;
 }
 
@@ -1070,8 +1070,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_mmget;
 	if (!mmap_read_trylock(mm))
 		goto err_mmap_read_lock_failed;
-	if (!spin_trylock(&alloc->lock))
-		goto err_get_alloc_lock_failed;
+	if (!mutex_trylock(&alloc->mutex))
+		goto err_get_alloc_mutex_failed;
 	if (!page->page_ptr)
 		goto err_page_already_freed;
 
@@ -1090,7 +1090,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	trace_binder_unmap_kernel_end(alloc, index);
 
 	list_lru_isolate(lru, item);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	spin_unlock(&lru->lock);
 
 	if (vma) {
@@ -1109,8 +1109,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 
 err_invalid_vma:
 err_page_already_freed:
-	spin_unlock(&alloc->lock);
-err_get_alloc_lock_failed:
+	mutex_unlock(&alloc->mutex);
+err_get_alloc_mutex_failed:
 	mmap_read_unlock(mm);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
@@ -1145,7 +1145,7 @@ void binder_alloc_init(struct binder_alloc *alloc)
 	alloc->pid = current->group_leader->pid;
 	alloc->mm = current->mm;
 	mmgrab(alloc->mm);
-	spin_lock_init(&alloc->lock);
+	mutex_init(&alloc->mutex);
 	INIT_LIST_HEAD(&alloc->buffers);
 }
 
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index c02c8ebcb466..33c5f971c0a5 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -9,7 +9,7 @@
 #include <linux/rbtree.h>
 #include <linux/list.h>
 #include <linux/mm.h>
-#include <linux/spinlock.h>
+#include <linux/rtmutex.h>
 #include <linux/vmalloc.h>
 #include <linux/slab.h>
 #include <linux/list_lru.h>
@@ -72,7 +72,7 @@ struct binder_lru_page {
 
 /**
  * struct binder_alloc - per-binder proc state for binder allocator
- * @lock:               protects binder_alloc fields
+ * @mutex:              protects binder_alloc fields
  * @vma:                vm_area_struct passed to mmap_handler
  *                      (invariant after mmap)
  * @mm:                 copy of task->mm (invariant after open)
@@ -96,7 +96,7 @@ struct binder_lru_page {
  *                      struct binder_buffer objects used to track the user buffers
  */
 struct binder_alloc {
-	spinlock_t lock;
+	struct mutex mutex;
 	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long buffer;
@@ -153,9 +153,9 @@ binder_alloc_get_free_async_space(struct binder_alloc *alloc)
 {
 	size_t free_async_space;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	free_async_space = alloc->free_async_space;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return free_async_space;
 }
 
-- 
2.47.0.338.g60cca15819-goog

From nobody Fri Dec 19 06:34:56 2025
Date: Tue, 3 Dec 2024 21:54:36 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-3-cmllamas@google.com>
Subject: [PATCH v6 2/9] binder: concurrent page installation
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com,
	David Hildenbrand, Barry Song, "Liam R. Howlett"

Allow multiple callers to install pages simultaneously by switching
the mmap_sem from write-mode to read-mode. Races to the same PTE are
handled using get_user_pages_remote() to retrieve the already installed
page. This method significantly reduces contention in the mmap semaphore.

To ensure safety, vma_lookup() is used (instead of alloc->vma) to avoid
operating on an isolated VMA. In addition, zap_page_range_single() is
called under the alloc->mutex to avoid racing with the shrinker.

Many thanks to Barry Song who posted a similar approach [1].

Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
Cc: David Hildenbrand
Cc: Barry Song
Cc: Suren Baghdasaryan
Cc: Liam R. Howlett
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 65 +++++++++++++++++++++-------------
 1 file changed, 41 insertions(+), 24 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 52f6aa3232e1..f26283c2c768 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -221,26 +221,14 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 				      struct binder_lru_page *lru_page,
 				      unsigned long addr)
 {
+	struct vm_area_struct *vma;
 	struct page *page;
-	int ret = 0;
+	long npages;
+	int ret;
 
 	if (!mmget_not_zero(alloc->mm))
 		return -ESRCH;
 
-	/*
-	 * Protected with mmap_sem in write mode as multiple tasks
-	 * might race to install the same page.
-	 */
-	mmap_write_lock(alloc->mm);
-	if (binder_get_installed_page(lru_page))
-		goto out;
-
-	if (!alloc->vma) {
-		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
-		ret = -ESRCH;
-		goto out;
-	}
-
 	page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
 	if (!page) {
 		pr_err("%d: failed to allocate page\n", alloc->pid);
@@ -248,19 +236,48 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		goto out;
 	}
 
-	ret = vm_insert_page(alloc->vma, addr, page);
-	if (ret) {
+	mmap_read_lock(alloc->mm);
+	vma = vma_lookup(alloc->mm, addr);
+	if (!vma || vma != alloc->vma) {
+		__free_page(page);
+		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
+		ret = -ESRCH;
+		goto unlock;
+	}
+
+	ret = vm_insert_page(vma, addr, page);
+	switch (ret) {
+	case -EBUSY:
+		/*
+		 * EBUSY is ok. Someone installed the pte first but the
+		 * lru_page->page_ptr has not been updated yet. Discard
+		 * our page and look up the one already installed.
+		 */
+		ret = 0;
+		__free_page(page);
+		npages = get_user_pages_remote(alloc->mm, addr, 1,
+					       FOLL_NOFAULT, &page, NULL);
+		if (npages <= 0) {
+			pr_err("%d: failed to find page at offset %lx\n",
+			       alloc->pid, addr - alloc->buffer);
+			ret = -ESRCH;
+			break;
+		}
+		fallthrough;
+	case 0:
+		/* Mark page installation complete and safe to use */
+		binder_set_installed_page(lru_page, page);
+		break;
+	default:
+		__free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
 		       alloc->pid, __func__, addr - alloc->buffer, ret);
-		__free_page(page);
 		ret = -ENOMEM;
-		goto out;
+		break;
 	}
-
-	/* Mark page installation complete and safe to use */
-	binder_set_installed_page(lru_page, page);
+unlock:
+	mmap_read_unlock(alloc->mm);
 out:
-	mmap_write_unlock(alloc->mm);
 	mmput_async(alloc->mm);
 	return ret;
 }
@@ -1090,7 +1107,6 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	trace_binder_unmap_kernel_end(alloc, index);
 
 	list_lru_isolate(lru, item);
-	mutex_unlock(&alloc->mutex);
 	spin_unlock(&lru->lock);
 
 	if (vma) {
@@ -1101,6 +1117,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		trace_binder_unmap_user_end(alloc, index);
 	}
 
+	mutex_unlock(&alloc->mutex);
 	mmap_read_unlock(mm);
 	mmput_async(mm);
 	__free_page(page_to_free);
-- 
2.47.0.338.g60cca15819-goog

From nobody Fri Dec 19 06:34:56 2025
Date: Tue, 3 Dec 2024 21:54:37 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-4-cmllamas@google.com>
Subject: [PATCH v6 3/9] binder: select correct nid for pages in LRU
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com,
	Nhat Pham, Johannes Weiner

The numa node id for binder pages is currently being derived from the
lru entry under struct binder_lru_page. However, this object doesn't
reflect the node id of the struct page items allocated separately.
Instead, select the correct node id from the page itself.

This was made possible since commit 0a97c01cd20b ("list_lru: allow
explicit memcg and NUMA node selection").
Cc: Nhat Pham
Cc: Johannes Weiner
Cc: Suren Baghdasaryan
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index f26283c2c768..1f02bec78451 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -210,7 +210,10 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 
 		trace_binder_free_lru_start(alloc, index);
 
-		ret = list_lru_add_obj(&binder_freelist, &page->lru);
+		ret = list_lru_add(&binder_freelist,
+				   &page->lru,
+				   page_to_nid(page->page_ptr),
+				   NULL);
 		WARN_ON(!ret);
 
 		trace_binder_free_lru_end(alloc, index);
@@ -334,7 +337,10 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		if (page->page_ptr) {
 			trace_binder_alloc_lru_start(alloc, index);
 
-			on_lru = list_lru_del_obj(&binder_freelist, &page->lru);
+			on_lru = list_lru_del(&binder_freelist,
					      &page->lru,
+					      page_to_nid(page->page_ptr),
+					      NULL);
 			WARN_ON(!on_lru);
 
 			trace_binder_alloc_lru_end(alloc, index);
@@ -947,8 +953,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		if (!alloc->pages[i].page_ptr)
 			continue;
 
-		on_lru = list_lru_del_obj(&binder_freelist,
-					  &alloc->pages[i].lru);
+		on_lru = list_lru_del(&binder_freelist,
+				      &alloc->pages[i].lru,
+				      page_to_nid(alloc->pages[i].page_ptr),
+				      NULL);
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				   "%s: %d: page %d %s\n",
 				   __func__, alloc->pid, i,
-- 
2.47.0.338.g60cca15819-goog

From nobody Fri Dec 19 06:34:56 2025
Date: Tue, 3 Dec 2024 21:54:38 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-5-cmllamas@google.com>
Subject: [PATCH v6 4/9] binder: store shrinker metadata under page->private
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com,
	Matthew Wilcox, "Liam R. Howlett"

Instead of pre-allocating an entire array of struct binder_lru_page in
alloc->pages, install the shrinker metadata under page->private. This
ensures the memory is allocated and released as needed alongside pages.

By converting the alloc->pages[] into an array of struct page pointers,
we can access these pages directly and only reference the shrinker
metadata where it's being used (e.g. inside the shrinker's callback).

Rename struct binder_lru_page to struct binder_shrinker_mdata to better
reflect its purpose.

Add convenience functions that wrap the allocation and freeing of pages
along with their shrinker metadata.

Note I've reworked this patch to avoid using page->lru and page->index
directly, as Matthew pointed out that these are being removed [1].

Link: https://lore.kernel.org/all/ZzziucEm3np6e7a0@casper.infradead.org/ [1]
Cc: Matthew Wilcox
Cc: Liam R. Howlett
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c          | 130 ++++++++++++++----------
 drivers/android/binder_alloc.h          |  25 +++--
 drivers/android/binder_alloc_selftest.c |  14 +--
 3 files changed, 99 insertions(+), 70 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 1f02bec78451..fd82ecefd961 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -176,25 +176,26 @@ struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
 }
 
 static inline void
-binder_set_installed_page(struct binder_lru_page *lru_page,
+binder_set_installed_page(struct binder_alloc *alloc,
+			  unsigned long index,
 			  struct page *page)
 {
 	/* Pairs with acquire in binder_get_installed_page() */
-	smp_store_release(&lru_page->page_ptr, page);
+	smp_store_release(&alloc->pages[index], page);
 }
 
 static inline struct page *
-binder_get_installed_page(struct binder_lru_page *lru_page)
+binder_get_installed_page(struct binder_alloc *alloc, unsigned long index)
 {
 	/* Pairs with release in binder_set_installed_page() */
-	return smp_load_acquire(&lru_page->page_ptr);
+	return smp_load_acquire(&alloc->pages[index]);
 }
 
 static void binder_lru_freelist_add(struct binder_alloc *alloc,
 				    unsigned long start, unsigned long end)
 {
-	struct binder_lru_page *page;
 	unsigned long page_addr;
+	struct page *page;
 
 	trace_binder_update_page_range(alloc, false, start, end);
 
@@ -203,16 +204,15 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 		int ret;
 
 		index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		page = &alloc->pages[index];
-
-		if (!binder_get_installed_page(page))
+		page = binder_get_installed_page(alloc, index);
+		if (!page)
 			continue;
 
 		trace_binder_free_lru_start(alloc, index);
 
 		ret = list_lru_add(&binder_freelist,
-				   &page->lru,
-				   page_to_nid(page->page_ptr),
+				   page_to_lru(page),
+				   page_to_nid(page),
 				   NULL);
 		WARN_ON(!ret);
 
@@
-220,8 +220,39 @@ static void binder_lru_freelist_add(struct binder_allo= c *alloc, } } =20 +static struct page *binder_page_alloc(struct binder_alloc *alloc, + unsigned long index) +{ + struct binder_shrinker_mdata *mdata; + struct page *page; + + page =3D alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO); + if (!page) + return NULL; + + /* allocate and install shrinker metadata under page->private */ + mdata =3D kzalloc(sizeof(*mdata), GFP_KERNEL); + if (!mdata) { + __free_page(page); + return NULL; + } + + mdata->alloc =3D alloc; + mdata->page_index =3D index; + INIT_LIST_HEAD(&mdata->lru); + set_page_private(page, (unsigned long)mdata); + + return page; +} + +static void binder_free_page(struct page *page) +{ + kfree((void *)page_private(page)); + __free_page(page); +} + static int binder_install_single_page(struct binder_alloc *alloc, - struct binder_lru_page *lru_page, + unsigned long index, unsigned long addr) { struct vm_area_struct *vma; @@ -232,9 +263,8 @@ static int binder_install_single_page(struct binder_all= oc *alloc, if (!mmget_not_zero(alloc->mm)) return -ESRCH; =20 - page =3D alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO); + page =3D binder_page_alloc(alloc, index); if (!page) { - pr_err("%d: failed to allocate page\n", alloc->pid); ret =3D -ENOMEM; goto out; } @@ -242,7 +272,7 @@ static int binder_install_single_page(struct binder_all= oc *alloc, mmap_read_lock(alloc->mm); vma =3D vma_lookup(alloc->mm, addr); if (!vma || vma !=3D alloc->vma) { - __free_page(page); + binder_free_page(page); pr_err("%d: %s failed, no vma\n", alloc->pid, __func__); ret =3D -ESRCH; goto unlock; @@ -253,11 +283,11 @@ static int binder_install_single_page(struct binder_a= lloc *alloc, case -EBUSY: /* * EBUSY is ok. Someone installed the pte first but the - * lru_page->page_ptr has not been updated yet. Discard + * alloc->pages[index] has not been updated yet. Discard * our page and look up the one already installed. 
*/ ret =3D 0; - __free_page(page); + binder_free_page(page); npages =3D get_user_pages_remote(alloc->mm, addr, 1, FOLL_NOFAULT, &page, NULL); if (npages <=3D 0) { @@ -269,10 +299,10 @@ static int binder_install_single_page(struct binder_a= lloc *alloc, fallthrough; case 0: /* Mark page installation complete and safe to use */ - binder_set_installed_page(lru_page, page); + binder_set_installed_page(alloc, index, page); break; default: - __free_page(page); + binder_free_page(page); pr_err("%d: %s failed to insert page at offset %lx with %d\n", alloc->pid, __func__, addr - alloc->buffer, ret); ret =3D -ENOMEM; @@ -289,7 +319,6 @@ static int binder_install_buffer_pages(struct binder_al= loc *alloc, struct binder_buffer *buffer, size_t size) { - struct binder_lru_page *page; unsigned long start, final; unsigned long page_addr; =20 @@ -301,14 +330,12 @@ static int binder_install_buffer_pages(struct binder_= alloc *alloc, int ret; =20 index =3D (page_addr - alloc->buffer) / PAGE_SIZE; - page =3D &alloc->pages[index]; - - if (binder_get_installed_page(page)) + if (binder_get_installed_page(alloc, index)) continue; =20 trace_binder_alloc_page_start(alloc, index); =20 - ret =3D binder_install_single_page(alloc, page, page_addr); + ret =3D binder_install_single_page(alloc, index, page_addr); if (ret) return ret; =20 @@ -322,8 +349,8 @@ static int binder_install_buffer_pages(struct binder_al= loc *alloc, static void binder_lru_freelist_del(struct binder_alloc *alloc, unsigned long start, unsigned long end) { - struct binder_lru_page *page; unsigned long page_addr; + struct page *page; =20 trace_binder_update_page_range(alloc, true, start, end); =20 @@ -332,14 +359,14 @@ static void binder_lru_freelist_del(struct binder_all= oc *alloc, bool on_lru; =20 index =3D (page_addr - alloc->buffer) / PAGE_SIZE; - page =3D &alloc->pages[index]; + page =3D binder_get_installed_page(alloc, index); =20 - if (page->page_ptr) { + if (page) { trace_binder_alloc_lru_start(alloc, index); =20 
on_lru =3D list_lru_del(&binder_freelist, - &page->lru, - page_to_nid(page->page_ptr), + page_to_lru(page), + page_to_nid(page), NULL); WARN_ON(!on_lru); =20 @@ -760,11 +787,10 @@ static struct page *binder_alloc_get_page(struct bind= er_alloc *alloc, (buffer->user_data - alloc->buffer); pgoff_t pgoff =3D buffer_space_offset & ~PAGE_MASK; size_t index =3D buffer_space_offset >> PAGE_SHIFT; - struct binder_lru_page *lru_page; =20 - lru_page =3D &alloc->pages[index]; *pgoffp =3D pgoff; - return lru_page->page_ptr; + + return alloc->pages[index]; } =20 /** @@ -839,7 +865,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *allo= c, { struct binder_buffer *buffer; const char *failure_string; - int ret, i; + int ret; =20 if (unlikely(vma->vm_mm !=3D alloc->mm)) { ret =3D -EINVAL; @@ -862,17 +888,12 @@ int binder_alloc_mmap_handler(struct binder_alloc *al= loc, alloc->pages =3D kvcalloc(alloc->buffer_size / PAGE_SIZE, sizeof(alloc->pages[0]), GFP_KERNEL); - if (alloc->pages =3D=3D NULL) { + if (!alloc->pages) { ret =3D -ENOMEM; failure_string =3D "alloc page array"; goto err_alloc_pages_failed; } =20 - for (i =3D 0; i < alloc->buffer_size / PAGE_SIZE; i++) { - alloc->pages[i].alloc =3D alloc; - INIT_LIST_HEAD(&alloc->pages[i].lru); - } - buffer =3D kzalloc(sizeof(*buffer), GFP_KERNEL); if (!buffer) { ret =3D -ENOMEM; @@ -948,20 +969,22 @@ void binder_alloc_deferred_release(struct binder_allo= c *alloc) int i; =20 for (i =3D 0; i < alloc->buffer_size / PAGE_SIZE; i++) { + struct page *page; bool on_lru; =20 - if (!alloc->pages[i].page_ptr) + page =3D binder_get_installed_page(alloc, i); + if (!page) continue; =20 on_lru =3D list_lru_del(&binder_freelist, - &alloc->pages[i].lru, - page_to_nid(alloc->pages[i].page_ptr), + page_to_lru(page), + page_to_nid(page), NULL); binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, "%s: %d: page %d %s\n", __func__, alloc->pid, i, on_lru ? 
"on lru" : "active"); - __free_page(alloc->pages[i].page_ptr); + binder_free_page(page); page_count++; } } @@ -1010,7 +1033,7 @@ void binder_alloc_print_allocated(struct seq_file *m, void binder_alloc_print_pages(struct seq_file *m, struct binder_alloc *alloc) { - struct binder_lru_page *page; + struct page *page; int i; int active =3D 0; int lru =3D 0; @@ -1023,10 +1046,10 @@ void binder_alloc_print_pages(struct seq_file *m, */ if (binder_alloc_get_vma(alloc) !=3D NULL) { for (i =3D 0; i < alloc->buffer_size / PAGE_SIZE; i++) { - page =3D &alloc->pages[i]; - if (!page->page_ptr) + page =3D binder_get_installed_page(alloc, i); + if (!page) free++; - else if (list_empty(&page->lru)) + else if (list_empty(page_to_lru(page))) active++; else lru++; @@ -1083,8 +1106,8 @@ enum lru_status binder_alloc_free_page(struct list_he= ad *item, void *cb_arg) __must_hold(&lru->lock) { - struct binder_lru_page *page =3D container_of(item, typeof(*page), lru); - struct binder_alloc *alloc =3D page->alloc; + struct binder_shrinker_mdata *mdata =3D container_of(item, typeof(*mdata)= , lru); + struct binder_alloc *alloc =3D mdata->alloc; struct mm_struct *mm =3D alloc->mm; struct vm_area_struct *vma; struct page *page_to_free; @@ -1097,10 +1120,8 @@ enum lru_status binder_alloc_free_page(struct list_h= ead *item, goto err_mmap_read_lock_failed; if (!mutex_trylock(&alloc->mutex)) goto err_get_alloc_mutex_failed; - if (!page->page_ptr) - goto err_page_already_freed; =20 - index =3D page - alloc->pages; + index =3D mdata->page_index; page_addr =3D alloc->buffer + index * PAGE_SIZE; =20 vma =3D vma_lookup(mm, page_addr); @@ -1109,8 +1130,8 @@ enum lru_status binder_alloc_free_page(struct list_he= ad *item, =20 trace_binder_unmap_kernel_start(alloc, index); =20 - page_to_free =3D page->page_ptr; - page->page_ptr =3D NULL; + page_to_free =3D alloc->pages[index]; + binder_set_installed_page(alloc, index, NULL); =20 trace_binder_unmap_kernel_end(alloc, index); =20 @@ -1128,12 +1149,11 @@ enum 
lru_status binder_alloc_free_page(struct list_= head *item, mutex_unlock(&alloc->mutex); mmap_read_unlock(mm); mmput_async(mm); - __free_page(page_to_free); + binder_free_page(page_to_free); =20 return LRU_REMOVED_RETRY; =20 err_invalid_vma: -err_page_already_freed: mutex_unlock(&alloc->mutex); err_get_alloc_mutex_failed: mmap_read_unlock(mm); diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index 33c5f971c0a5..d71f99189ef5 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -59,17 +59,26 @@ struct binder_buffer { }; =20 /** - * struct binder_lru_page - page object used for binder shrinker - * @page_ptr: pointer to physical page in mmap'd space - * @lru: entry in binder_freelist - * @alloc: binder_alloc for a proc + * struct binder_shrinker_mdata - binder metadata used to reclaim pages + * @lru: LRU entry in binder_freelist + * @alloc: binder_alloc owning the page to reclaim + * @page_index: offset in @alloc->pages[] into the page to reclaim */ -struct binder_lru_page { +struct binder_shrinker_mdata { struct list_head lru; - struct page *page_ptr; struct binder_alloc *alloc; + unsigned long page_index; }; =20 +static inline struct list_head *page_to_lru(struct page *p) +{ + struct binder_shrinker_mdata *mdata; + + mdata =3D (struct binder_shrinker_mdata *)page_private(p); + + return &mdata->lru; +} + /** * struct binder_alloc - per-binder proc state for binder allocator * @mutex: protects binder_alloc fields @@ -83,7 +92,7 @@ struct binder_lru_page { * @allocated_buffers: rb tree of allocated buffers sorted by address * @free_async_space: VA space available for async buffers. 
This is * initialized at mmap time to 1/2 the full VA space - * @pages: array of binder_lru_page + * @pages: array of struct page * * @buffer_size: size of address space specified via mmap * @pid: pid for associated binder_proc (invariant after in= it) * @pages_high: high watermark of offset in @pages @@ -104,7 +113,7 @@ struct binder_alloc { struct rb_root free_buffers; struct rb_root allocated_buffers; size_t free_async_space; - struct binder_lru_page *pages; + struct page **pages; size_t buffer_size; int pid; size_t pages_high; diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/bind= er_alloc_selftest.c index 81442fe20a69..a4c650843bee 100644 --- a/drivers/android/binder_alloc_selftest.c +++ b/drivers/android/binder_alloc_selftest.c @@ -105,10 +105,10 @@ static bool check_buffer_pages_allocated(struct binde= r_alloc *alloc, page_addr =3D buffer->user_data; for (; page_addr < end; page_addr +=3D PAGE_SIZE) { page_index =3D (page_addr - alloc->buffer) / PAGE_SIZE; - if (!alloc->pages[page_index].page_ptr || - !list_empty(&alloc->pages[page_index].lru)) { + if (!alloc->pages[page_index] || + !list_empty(page_to_lru(alloc->pages[page_index]))) { pr_err("expect alloc but is %s at page index %d\n", - alloc->pages[page_index].page_ptr ? + alloc->pages[page_index] ? "lru" : "free", page_index); return false; } @@ -148,10 +148,10 @@ static void binder_selftest_free_buf(struct binder_al= loc *alloc, * if binder shrinker ran during binder_alloc_free_buf * calls above. */ - if (list_empty(&alloc->pages[i].lru)) { + if (list_empty(page_to_lru(alloc->pages[i]))) { pr_err_size_seq(sizes, seq); pr_err("expect lru but is %s at page index %d\n", - alloc->pages[i].page_ptr ? "alloc" : "free", i); + alloc->pages[i] ? 
"alloc" : "free", i); binder_selftest_failures++; } } @@ -168,9 +168,9 @@ static void binder_selftest_free_page(struct binder_all= oc *alloc) } =20 for (i =3D 0; i < (alloc->buffer_size / PAGE_SIZE); i++) { - if (alloc->pages[i].page_ptr) { + if (alloc->pages[i]) { pr_err("expect free but is %s at page index %d\n", - list_empty(&alloc->pages[i].lru) ? + list_empty(page_to_lru(alloc->pages[i])) ? "alloc" : "lru", i); binder_selftest_failures++; } --=20 2.47.0.338.g60cca15819-goog From nobody Fri Dec 19 06:34:56 2025 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 052F1209F5B for ; Tue, 3 Dec 2024 21:55:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733262930; cv=none; b=ZFOs2HphX0l4+ycV3gImj/aRC9NS+gVvu0tAsKQaibc41oZVQ/QJeTlg++z+KiAOTmQWyFiCjPJYWe0t73ED1Ae/2YbzFw8dETBIxCNDbeHtswXISIpYUbEwrsUzw3inc70IuD+o5tfSLixH1xWYcFOj76RC9jXYNGqZZf8zjHU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733262930; c=relaxed/simple; bh=msK5KcGQmHUu2fA38/levDh0PvlQn0YxxPRKG6VndmU=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=hV1jOaFWoNp4NQ1WgYY9vT/KgW1rRrfKXKbZgsu2AY8XBA7AAyRRomN3scaaIBB93HgcPt72Bj/YYrkWOgwHDIKrp6sXfRXRthJekckmeJclqyRrwBwpH56rsyb21IWuJXVimkHWHJpSxaB+qD+H4+FYQ8PYWKuIHqUO31AoLvQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--cmllamas.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=NIHEcfGw; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com 
Date: Tue, 3 Dec 2024 21:54:39 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-6-cmllamas@google.com>
Subject: [PATCH v6 5/9] binder: replace alloc->vma with alloc->mapped
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Minchan Kim, Liam R. Howlett, Matthew Wilcox

It is unsafe to use alloc->vma outside of the mmap_sem. Instead, add a
new boolean alloc->mapped that saves the vma state (mapped or unmapped)
and use it as a replacement for alloc->vma when validating several
paths. Use of alloc->vma has caused several performance and security
issues in the past. Now that it has been replaced with either
vma_lookup() or the alloc->mapped state, we can finally remove it.

Cc: Minchan Kim
Cc: Liam R.
Howlett Cc: Matthew Wilcox Cc: Suren Baghdasaryan Reviewed-by: Suren Baghdasaryan Signed-off-by: Carlos Llamas --- drivers/android/binder_alloc.c | 48 +++++++++++++------------ drivers/android/binder_alloc.h | 6 ++-- drivers/android/binder_alloc_selftest.c | 2 +- 3 files changed, 30 insertions(+), 26 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index fd82ecefd961..60ca0e541d6f 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -220,6 +220,19 @@ static void binder_lru_freelist_add(struct binder_allo= c *alloc, } } =20 +static inline +void binder_alloc_set_mapped(struct binder_alloc *alloc, bool state) +{ + /* pairs with smp_load_acquire in binder_alloc_is_mapped() */ + smp_store_release(&alloc->mapped, state); +} + +static inline bool binder_alloc_is_mapped(struct binder_alloc *alloc) +{ + /* pairs with smp_store_release in binder_alloc_set_mapped() */ + return smp_load_acquire(&alloc->mapped); +} + static struct page *binder_page_alloc(struct binder_alloc *alloc, unsigned long index) { @@ -271,7 +284,7 @@ static int binder_install_single_page(struct binder_all= oc *alloc, =20 mmap_read_lock(alloc->mm); vma =3D vma_lookup(alloc->mm, addr); - if (!vma || vma !=3D alloc->vma) { + if (!vma || !binder_alloc_is_mapped(alloc)) { binder_free_page(page); pr_err("%d: %s failed, no vma\n", alloc->pid, __func__); ret =3D -ESRCH; @@ -379,20 +392,6 @@ static void binder_lru_freelist_del(struct binder_allo= c *alloc, } } =20 -static inline void binder_alloc_set_vma(struct binder_alloc *alloc, - struct vm_area_struct *vma) -{ - /* pairs with smp_load_acquire in binder_alloc_get_vma() */ - smp_store_release(&alloc->vma, vma); -} - -static inline struct vm_area_struct *binder_alloc_get_vma( - struct binder_alloc *alloc) -{ - /* pairs with smp_store_release in binder_alloc_set_vma() */ - return smp_load_acquire(&alloc->vma); -} - static void debug_no_space_locked(struct binder_alloc *alloc) { size_t 
largest_alloc_size =3D 0; @@ -626,7 +625,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binde= r_alloc *alloc, int ret; =20 /* Check binder_alloc is fully initialized */ - if (!binder_alloc_get_vma(alloc)) { + if (!binder_alloc_is_mapped(alloc)) { binder_alloc_debug(BINDER_DEBUG_USER_ERROR, "%d: binder_alloc_buf, no vma\n", alloc->pid); @@ -908,7 +907,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *allo= c, alloc->free_async_space =3D alloc->buffer_size / 2; =20 /* Signal binder_alloc is fully initialized */ - binder_alloc_set_vma(alloc, vma); + binder_alloc_set_mapped(alloc, true); =20 return 0; =20 @@ -938,7 +937,7 @@ void binder_alloc_deferred_release(struct binder_alloc = *alloc) =20 buffers =3D 0; mutex_lock(&alloc->mutex); - BUG_ON(alloc->vma); + BUG_ON(alloc->mapped); =20 while ((n =3D rb_first(&alloc->allocated_buffers))) { buffer =3D rb_entry(n, struct binder_buffer, rb_node); @@ -1044,7 +1043,7 @@ void binder_alloc_print_pages(struct seq_file *m, * Make sure the binder_alloc is fully initialized, otherwise we might * read inconsistent state. */ - if (binder_alloc_get_vma(alloc) !=3D NULL) { + if (binder_alloc_is_mapped(alloc)) { for (i =3D 0; i < alloc->buffer_size / PAGE_SIZE; i++) { page =3D binder_get_installed_page(alloc, i); if (!page) @@ -1084,12 +1083,12 @@ int binder_alloc_get_allocated_count(struct binder_= alloc *alloc) * @alloc: binder_alloc for this proc * * Called from binder_vma_close() when releasing address space. - * Clears alloc->vma to prevent new incoming transactions from + * Clears alloc->mapped to prevent new incoming transactions from * allocating more buffers. 
*/ void binder_alloc_vma_close(struct binder_alloc *alloc) { - binder_alloc_set_vma(alloc, NULL); + binder_alloc_set_mapped(alloc, false); } =20 /** @@ -1125,7 +1124,12 @@ enum lru_status binder_alloc_free_page(struct list_h= ead *item, page_addr =3D alloc->buffer + index * PAGE_SIZE; =20 vma =3D vma_lookup(mm, page_addr); - if (vma && vma !=3D binder_alloc_get_vma(alloc)) + /* + * Since a binder_alloc can only be mapped once, we ensure + * the vma corresponds to this mapping by checking whether + * the binder_alloc is still mapped. + */ + if (vma && !binder_alloc_is_mapped(alloc)) goto err_invalid_vma; =20 trace_binder_unmap_kernel_start(alloc, index); diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index d71f99189ef5..3ebb12afd4de 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -82,8 +82,6 @@ static inline struct list_head *page_to_lru(struct page *= p) /** * struct binder_alloc - per-binder proc state for binder allocator * @mutex: protects binder_alloc fields - * @vma: vm_area_struct passed to mmap_handler - * (invariant after mmap) * @mm: copy of task->mm (invariant after open) * @buffer: base of per-proc address space mapped via mmap * @buffers: list of all buffers for this proc @@ -96,6 +94,8 @@ static inline struct list_head *page_to_lru(struct page *= p) * @buffer_size: size of address space specified via mmap * @pid: pid for associated binder_proc (invariant after in= it) * @pages_high: high watermark of offset in @pages + * @mapped: whether the vm area is mapped, each binder instanc= e is + * allowed a single mapping throughout its lifetime * @oneway_spam_detected: %true if oneway spam detection fired, clear that * flag once the async buffer has returned to a healthy state * @@ -106,7 +106,6 @@ static inline struct list_head *page_to_lru(struct page= *p) */ struct binder_alloc { struct mutex mutex; - struct vm_area_struct *vma; struct mm_struct *mm; unsigned long buffer; struct list_head 
buffers;
@@ -117,6 +116,7 @@ struct binder_alloc {
 	size_t buffer_size;
 	int pid;
 	size_t pages_high;
+	bool mapped;
 	bool oneway_spam_detected;
 };
 
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index a4c650843bee..6a64847a8555 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -291,7 +291,7 @@ void binder_selftest_alloc(struct binder_alloc *alloc)
 	if (!binder_selftest_run)
 		return;
 	mutex_lock(&binder_selftest_lock);
-	if (!binder_selftest_run || !alloc->vma)
+	if (!binder_selftest_run || !alloc->mapped)
 		goto done;
 	pr_info("STARTED\n");
 	binder_selftest_alloc_offset(alloc, end_offset, 0);
-- 
2.47.0.338.g60cca15819-goog
Date: Tue, 3 Dec 2024 21:54:40 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-7-cmllamas@google.com>
Subject: [PATCH v6 6/9] binder: rename alloc->buffer to vm_start
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com

The alloc->buffer field in struct binder_alloc stores the starting
address of the mapped vma. Rename this field to alloc->vm_start to
better reflect its purpose. The new name also avoids confusion with the
binder buffer concept, e.g. transaction->buffer.

No functional changes in this patch.
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder.c                |  2 +-
 drivers/android/binder_alloc.c          | 28 ++++++++++++++--------------
 drivers/android/binder_alloc.h          |  4 ++--
 drivers/android/binder_alloc_selftest.c |  2 +-
 drivers/android/binder_trace.h          |  2 +-
 5 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index ef353ca13c35..9962c606cabd 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -6374,7 +6374,7 @@ static void print_binder_transaction_ilocked(struct seq_file *m,
 		seq_printf(m, " node %d", buffer->target_node->debug_id);
 	seq_printf(m, " size %zd:%zd offset %lx\n",
 		   buffer->data_size, buffer->offsets_size,
-		   proc->alloc.buffer - buffer->user_data);
+		   proc->alloc.vm_start - buffer->user_data);
 }
 
 static void print_binder_work_ilocked(struct seq_file *m,
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 60ca0e541d6f..ce2bdf278b82 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -61,7 +61,7 @@ static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
 				       struct binder_buffer *buffer)
 {
 	if (list_is_last(&buffer->entry, &alloc->buffers))
-		return alloc->buffer + alloc->buffer_size - buffer->user_data;
+		return alloc->vm_start + alloc->buffer_size - buffer->user_data;
 	return binder_buffer_next(buffer)->user_data - buffer->user_data;
 }
 
@@ -203,7 +203,7 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 		size_t index;
 		int ret;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		page = binder_get_installed_page(alloc, index);
 		if (!page)
 			continue;
@@ -305,7 +305,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 					      FOLL_NOFAULT, &page, NULL);
 		if (npages <= 0) {
 			pr_err("%d: failed to find page at offset %lx\n",
-			       alloc->pid, addr - alloc->buffer);
+			       alloc->pid, addr - alloc->vm_start);
 			ret = -ESRCH;
 			break;
 		}
@@ -317,7 +317,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 	default:
 		binder_free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
-		       alloc->pid, __func__, addr - alloc->buffer, ret);
+		       alloc->pid, __func__, addr - alloc->vm_start, ret);
 		ret = -ENOMEM;
 		break;
 	}
@@ -342,7 +342,7 @@ static int binder_install_buffer_pages(struct binder_alloc *alloc,
 		unsigned long index;
 		int ret;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		if (binder_get_installed_page(alloc, index))
 			continue;
 
@@ -371,7 +371,7 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		unsigned long index;
 		bool on_lru;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		page = binder_get_installed_page(alloc, index);
 
 		if (page) {
@@ -723,8 +723,8 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
 	BUG_ON(buffer->free);
 	BUG_ON(size > buffer_size);
 	BUG_ON(buffer->transaction != NULL);
-	BUG_ON(buffer->user_data < alloc->buffer);
-	BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
+	BUG_ON(buffer->user_data < alloc->vm_start);
+	BUG_ON(buffer->user_data > alloc->vm_start + alloc->buffer_size);
 
 	if (buffer->async_transaction) {
 		alloc->free_async_space += buffer_size;
@@ -783,7 +783,7 @@ static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
 					  pgoff_t *pgoffp)
 {
 	binder_size_t buffer_space_offset = buffer_offset +
-		(buffer->user_data - alloc->buffer);
+		(buffer->user_data - alloc->vm_start);
 	pgoff_t pgoff = buffer_space_offset & ~PAGE_MASK;
 	size_t index = buffer_space_offset >> PAGE_SHIFT;
 
@@ -882,7 +882,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 				   SZ_4M);
 	mutex_unlock(&binder_alloc_mmap_lock);
 
-	alloc->buffer = vma->vm_start;
+	alloc->vm_start = vma->vm_start;
 
 	alloc->pages = kvcalloc(alloc->buffer_size / PAGE_SIZE,
 				sizeof(alloc->pages[0]),
@@ -900,7 +900,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 		goto err_alloc_buf_struct_failed;
 	}
 
-	buffer->user_data = alloc->buffer;
+	buffer->user_data = alloc->vm_start;
 	list_add(&buffer->entry, &alloc->buffers);
 	buffer->free = 1;
 	binder_insert_free_buffer(alloc, buffer);
@@ -915,7 +915,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	kvfree(alloc->pages);
 	alloc->pages = NULL;
 err_alloc_pages_failed:
-	alloc->buffer = 0;
+	alloc->vm_start = 0;
 	mutex_lock(&binder_alloc_mmap_lock);
 	alloc->buffer_size = 0;
 err_already_mapped:
@@ -1016,7 +1016,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 		seq_printf(m, "  buffer %d: %lx size %zd:%zd:%zd %s\n",
 			   buffer->debug_id,
-			   buffer->user_data - alloc->buffer,
+			   buffer->user_data - alloc->vm_start,
 			   buffer->data_size, buffer->offsets_size,
 			   buffer->extra_buffers_size,
 			   buffer->transaction ? "active" : "delivered");
@@ -1121,7 +1121,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_get_alloc_mutex_failed;
 
 	index = mdata->page_index;
-	page_addr = alloc->buffer + index * PAGE_SIZE;
+	page_addr = alloc->vm_start + index * PAGE_SIZE;
 
 	vma = vma_lookup(mm, page_addr);
 	/*
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index 3ebb12afd4de..feecd7414241 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -83,7 +83,7 @@ static inline struct list_head *page_to_lru(struct page *p)
  * struct binder_alloc - per-binder proc state for binder allocator
  * @mutex:              protects binder_alloc fields
  * @mm:                 copy of task->mm (invariant after open)
- * @buffer:             base of per-proc address space mapped via mmap
+ * @vm_start:           base of per-proc address space mapped via mmap
  * @buffers:            list of all buffers for this proc
  * @free_buffers:       rb tree of buffers available for allocation
  *                      sorted by size
@@ -107,7 +107,7 @@ static inline struct list_head *page_to_lru(struct page *p)
 struct binder_alloc {
 	struct mutex mutex;
 	struct mm_struct *mm;
-	unsigned long buffer;
+	unsigned long vm_start;
 	struct list_head buffers;
 	struct rb_root free_buffers;
 	struct rb_root allocated_buffers;
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index 6a64847a8555..c88735c54848 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -104,7 +104,7 @@ static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
 	end = PAGE_ALIGN(buffer->user_data + size);
 	page_addr = buffer->user_data;
 	for (; page_addr < end; page_addr += PAGE_SIZE) {
-		page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		page_index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		if (!alloc->pages[page_index] ||
 		    !list_empty(page_to_lru(alloc->pages[page_index]))) {
 			pr_err("expect alloc but is %s at page index %d\n",
diff --git a/drivers/android/binder_trace.h b/drivers/android/binder_trace.h
index fe38c6fc65d0..16de1b9e72f7 100644
--- a/drivers/android/binder_trace.h
+++ b/drivers/android/binder_trace.h
@@ -328,7 +328,7 @@ TRACE_EVENT(binder_update_page_range,
 	TP_fast_assign(
 		__entry->proc = alloc->pid;
 		__entry->allocate = allocate;
-		__entry->offset = start - alloc->buffer;
+		__entry->offset = start - alloc->vm_start;
 		__entry->size = end - start;
 	),
 	TP_printk("proc=%d allocate=%d offset=%zu size=%zu",
-- 
2.47.0.338.g60cca15819-goog
From nobody Fri Dec 19 06:34:56 2025
Date: Tue, 3 Dec 2024 21:54:41 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-8-cmllamas@google.com>
Subject: [PATCH v6 7/9] binder: use per-vma lock in page installation
From: Carlos Llamas
To: Greg Kroah-Hartman, "Arve Hjønnevåg", Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Nhat Pham,
    Johannes Weiner, Barry Song, Hillf Danton, Lorenzo Stoakes

Use per-vma locking for concurrent page installations; this minimizes
contention with unrelated vmas, improving performance. The mmap_lock is
still acquired when needed, e.g. before get_user_pages_remote().

Many thanks to Barry Song, who posted a similar approach [1].
Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
Cc: Nhat Pham
Cc: Johannes Weiner
Cc: Barry Song
Cc: Suren Baghdasaryan
Cc: Hillf Danton
Cc: Lorenzo Stoakes
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 67 +++++++++++++++++++++++++---------
 1 file changed, 50 insertions(+), 17 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index ce2bdf278b82..0c54e50841c8 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -233,6 +233,53 @@ static inline bool binder_alloc_is_mapped(struct binder_alloc *alloc)
 	return smp_load_acquire(&alloc->mapped);
 }
 
+static struct page *binder_page_lookup(struct binder_alloc *alloc,
+				       unsigned long addr)
+{
+	struct mm_struct *mm = alloc->mm;
+	struct page *page;
+	long npages = 0;
+
+	/*
+	 * Find an existing page in the remote mm. If missing,
+	 * don't attempt to fault-in just propagate an error.
+	 */
+	mmap_read_lock(mm);
+	if (binder_alloc_is_mapped(alloc))
+		npages = get_user_pages_remote(mm, addr, 1, FOLL_NOFAULT,
+					       &page, NULL);
+	mmap_read_unlock(mm);
+
+	return npages > 0 ? page : NULL;
+}
+
+static int binder_page_insert(struct binder_alloc *alloc,
+			      unsigned long addr,
+			      struct page *page)
+{
+	struct mm_struct *mm = alloc->mm;
+	struct vm_area_struct *vma;
+	int ret = -ESRCH;
+
+	/* attempt per-vma lock first */
+	vma = lock_vma_under_rcu(mm, addr);
+	if (vma) {
+		if (binder_alloc_is_mapped(alloc))
+			ret = vm_insert_page(vma, addr, page);
+		vma_end_read(vma);
+		return ret;
+	}
+
+	/* fall back to mmap_lock */
+	mmap_read_lock(mm);
+	vma = vma_lookup(mm, addr);
+	if (vma && binder_alloc_is_mapped(alloc))
+		ret = vm_insert_page(vma, addr, page);
+	mmap_read_unlock(mm);
+
+	return ret;
+}
+
 static struct page *binder_page_alloc(struct binder_alloc *alloc,
 				      unsigned long index)
 {
@@ -268,9 +315,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 					  unsigned long index,
 					  unsigned long addr)
 {
-	struct vm_area_struct *vma;
 	struct page *page;
-	long npages;
 	int ret;
 
 	if (!mmget_not_zero(alloc->mm))
@@ -282,16 +327,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		goto out;
 	}
 
-	mmap_read_lock(alloc->mm);
-	vma = vma_lookup(alloc->mm, addr);
-	if (!vma || !binder_alloc_is_mapped(alloc)) {
-		binder_free_page(page);
-		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
-		ret = -ESRCH;
-		goto unlock;
-	}
-
-	ret = vm_insert_page(vma, addr, page);
+	ret = binder_page_insert(alloc, addr, page);
 	switch (ret) {
 	case -EBUSY:
 		/*
@@ -301,9 +337,8 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		 */
 		ret = 0;
 		binder_free_page(page);
-		npages = get_user_pages_remote(alloc->mm, addr, 1,
-					       FOLL_NOFAULT, &page, NULL);
-		if (npages <= 0) {
+		page = binder_page_lookup(alloc, addr);
+		if (!page) {
 			pr_err("%d: failed to find page at offset %lx\n",
 			       alloc->pid, addr - alloc->vm_start);
 			ret = -ESRCH;
@@ -321,8 +356,6 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		ret = -ENOMEM;
 		break;
 	}
-unlock:
-	mmap_read_unlock(alloc->mm);
 out:
 	mmput_async(alloc->mm);
 	return ret;
-- 
2.47.0.338.g60cca15819-goog
From nobody Fri Dec 19 06:34:56 2025
Date: Tue, 3 Dec 2024 21:54:42 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-9-cmllamas@google.com>
Subject: [PATCH v6 8/9] binder: propagate vm_insert_page() errors
From: Carlos Llamas
To: Greg Kroah-Hartman, "Arve Hjønnevåg", Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com

Instead of always overriding errors with -ENOMEM, propagate the
specific error code returned by vm_insert_page(). This allows for more
accurate error logs and handling.

Cc: Suren Baghdasaryan
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 0c54e50841c8..5a221296b30c 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -353,7 +353,6 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		binder_free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
 		       alloc->pid, __func__, addr - alloc->vm_start, ret);
-		ret = -ENOMEM;
 		break;
 	}
 out:
-- 
2.47.0.338.g60cca15819-goog
From nobody Fri Dec 19 06:34:56 2025
Date: Tue, 3 Dec 2024 21:54:43 +0000
In-Reply-To: <20241203215452.2820071-1-cmllamas@google.com>
References: <20241203215452.2820071-1-cmllamas@google.com>
Message-ID: <20241203215452.2820071-10-cmllamas@google.com>
Subject: [PATCH v6 9/9] binder: use per-vma lock in page reclaiming
From: Carlos Llamas
To: Greg Kroah-Hartman, "Arve Hjønnevåg", Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com,
    "Liam R. Howlett"

Use per-vma locking in the shrinker's callback when reclaiming pages,
similar to the page installation logic. This minimizes contention with
unrelated vmas, improving performance. The mmap_lock is still acquired
if the per-vma lock cannot be obtained.

Cc: Suren Baghdasaryan
Suggested-by: Liam R. Howlett
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 5a221296b30c..b58b54f253e6 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1143,19 +1143,28 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	struct vm_area_struct *vma;
 	struct page *page_to_free;
 	unsigned long page_addr;
+	int mm_locked = 0;
 	size_t index;
 
 	if (!mmget_not_zero(mm))
 		goto err_mmget;
-	if (!mmap_read_trylock(mm))
-		goto err_mmap_read_lock_failed;
-	if (!mutex_trylock(&alloc->mutex))
-		goto err_get_alloc_mutex_failed;
 
 	index = mdata->page_index;
 	page_addr = alloc->vm_start + index * PAGE_SIZE;
 
-	vma = vma_lookup(mm, page_addr);
+	/* attempt per-vma lock first */
+	vma = lock_vma_under_rcu(mm, page_addr);
+	if (!vma) {
+		/* fall back to mmap_lock */
+		if (!mmap_read_trylock(mm))
+			goto err_mmap_read_lock_failed;
+		mm_locked = 1;
+		vma = vma_lookup(mm, page_addr);
+	}
+
+	if (!mutex_trylock(&alloc->mutex))
+		goto err_get_alloc_mutex_failed;
+
 	/*
 	 * Since a binder_alloc can only be mapped once, we ensure
 	 * the vma corresponds to this mapping by checking whether
@@ -1183,7 +1192,10 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	}
 
 	mutex_unlock(&alloc->mutex);
-	mmap_read_unlock(mm);
+	if (mm_locked)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 	mmput_async(mm);
 	binder_free_page(page_to_free);
 
@@ -1192,7 +1204,10 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 err_invalid_vma:
 	mutex_unlock(&alloc->mutex);
 err_get_alloc_mutex_failed:
-	mmap_read_unlock(mm);
+	if (mm_locked)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
 err_mmget:
-- 
2.47.0.338.g60cca15819-goog