Date: Tue, 26 Nov 2024 18:40:04 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
References:
<20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-2-cmllamas@google.com>
Subject: [PATCH v5 1/9] Revert "binder: switch alloc->mutex to spinlock_t"
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Mukesh Ojha

This reverts commit 7710e2cca32e7f3958480e8bd44f50e29d0c2509.

In preparation for concurrent page installations, restore the original
alloc->mutex which will serialize zap_page_range_single() against page
installations in subsequent patches (instead of the mmap_sem).

Resolved trivial conflicts with commit 2c10a20f5e84a ("binder_alloc: Fix
sleeping function called from invalid context") and commit da0c02516c50
("mm/list_lru: simplify the list_lru walk callback function").
Cc: Mukesh Ojha
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 46 +++++++++++++++++-----------------
 drivers/android/binder_alloc.h | 10 ++++----
 2 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index a738e7745865..52f6aa3232e1 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -169,9 +169,9 @@ struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
 {
 	struct binder_buffer *buffer;

-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	buffer = binder_alloc_prepare_to_free_locked(alloc, user_ptr);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return buffer;
 }

@@ -597,10 +597,10 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	if (!next)
 		return ERR_PTR(-ENOMEM);

-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	buffer = binder_alloc_new_buf_locked(alloc, next, size, is_async);
 	if (IS_ERR(buffer)) {
-		spin_unlock(&alloc->lock);
+		mutex_unlock(&alloc->mutex);
 		goto out;
 	}

@@ -608,7 +608,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	buffer->offsets_size = offsets_size;
 	buffer->extra_buffers_size = extra_buffers_size;
 	buffer->pid = current->tgid;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);

 	ret = binder_install_buffer_pages(alloc, buffer, size);
 	if (ret) {
@@ -785,17 +785,17 @@ void binder_alloc_free_buf(struct binder_alloc *alloc,
 	 * We could eliminate the call to binder_alloc_clear_buf()
 	 * from binder_alloc_deferred_release() by moving this to
 	 * binder_free_buf_locked(). However, that could
-	 * increase contention for the alloc->lock if clear_on_free
-	 * is used frequently for large buffers. This lock is not
+	 * increase contention for the alloc mutex if clear_on_free
+	 * is used frequently for large buffers. The mutex is not
 	 * needed for correctness here.
	 */
 	if (buffer->clear_on_free) {
 		binder_alloc_clear_buf(alloc, buffer);
 		buffer->clear_on_free = false;
 	}
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	binder_free_buf_locked(alloc, buffer);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 }

 /**
@@ -893,7 +893,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 	struct binder_buffer *buffer;

 	buffers = 0;
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	BUG_ON(alloc->vma);

 	while ((n = rb_first(&alloc->allocated_buffers))) {
@@ -940,7 +940,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 			page_count++;
 		}
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	kvfree(alloc->pages);
 	if (alloc->mm)
 		mmdrop(alloc->mm);
@@ -964,7 +964,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 	struct binder_buffer *buffer;
 	struct rb_node *n;

-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	for (n = rb_first(&alloc->allocated_buffers); n; n = rb_next(n)) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 		seq_printf(m, "  buffer %d: %lx size %zd:%zd:%zd %s\n",
@@ -974,7 +974,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 			   buffer->extra_buffers_size,
 			   buffer->transaction ? "active" : "delivered");
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 }

 /**
@@ -991,7 +991,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 	int lru = 0;
 	int free = 0;

-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	/*
 	 * Make sure the binder_alloc is fully initialized, otherwise we might
 	 * read inconsistent state.
@@ -1007,7 +1007,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 			lru++;
 		}
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	seq_printf(m, "  pages: %d:%d:%d\n", active, lru, free);
 	seq_printf(m, "  pages high watermark: %zu\n", alloc->pages_high);
 }
@@ -1023,10 +1023,10 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
 	struct rb_node *n;
 	int count = 0;

-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
 		count++;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return count;
 }

@@ -1070,8 +1070,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_mmget;
 	if (!mmap_read_trylock(mm))
 		goto err_mmap_read_lock_failed;
-	if (!spin_trylock(&alloc->lock))
-		goto err_get_alloc_lock_failed;
+	if (!mutex_trylock(&alloc->mutex))
+		goto err_get_alloc_mutex_failed;
 	if (!page->page_ptr)
 		goto err_page_already_freed;

@@ -1090,7 +1090,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	trace_binder_unmap_kernel_end(alloc, index);

 	list_lru_isolate(lru, item);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	spin_unlock(&lru->lock);

 	if (vma) {
@@ -1109,8 +1109,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,

 err_invalid_vma:
 err_page_already_freed:
-	spin_unlock(&alloc->lock);
-err_get_alloc_lock_failed:
+	mutex_unlock(&alloc->mutex);
+err_get_alloc_mutex_failed:
 	mmap_read_unlock(mm);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
@@ -1145,7 +1145,7 @@ void binder_alloc_init(struct binder_alloc *alloc)
 	alloc->pid = current->group_leader->pid;
 	alloc->mm = current->mm;
 	mmgrab(alloc->mm);
-	spin_lock_init(&alloc->lock);
+	mutex_init(&alloc->mutex);
 	INIT_LIST_HEAD(&alloc->buffers);
 }

diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index c02c8ebcb466..33c5f971c0a5 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -9,7 +9,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -72,7 +72,7 @@ struct binder_lru_page {

 /**
  * struct binder_alloc - per-binder proc state for binder allocator
- * @lock:               protects binder_alloc fields
+ * @mutex:              protects binder_alloc fields
  * @vma:                vm_area_struct passed to mmap_handler
  *                      (invariant after mmap)
  * @mm:                 copy of task->mm (invariant after open)
@@ -96,7 +96,7 @@ struct binder_lru_page {
  *                      struct binder_buffer objects used to track the user buffers
  */
 struct binder_alloc {
-	spinlock_t lock;
+	struct mutex mutex;
 	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long buffer;
@@ -153,9 +153,9 @@ binder_alloc_get_free_async_space(struct binder_alloc *alloc)
 {
 	size_t free_async_space;

-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	free_async_space = alloc->free_async_space;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return free_async_space;
 }

-- 
2.47.0.338.g60cca15819-goog
Date: Tue, 26 Nov 2024 18:40:05 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-3-cmllamas@google.com>
Subject: [PATCH v5 2/9] binder: concurrent page installation
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, David Hildenbrand, Barry Song, "Liam R. Howlett"

Allow multiple callers to install pages simultaneously by switching
the mmap_sem from write-mode to read-mode. Races to the same PTE are
handled using get_user_pages_remote() to retrieve the already installed
page. This method significantly reduces contention in the mmap semaphore.
To ensure safety, vma_lookup() is used (instead of alloc->vma) to avoid
operating on an isolated VMA. In addition, zap_page_range_single() is
called under the alloc->mutex to avoid racing with the shrinker.

Many thanks to Barry Song who posted a similar approach [1].

Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
Cc: David Hildenbrand
Cc: Barry Song
Cc: Suren Baghdasaryan
Cc: Liam R. Howlett
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 65 +++++++++++++++++++++-------------
 1 file changed, 41 insertions(+), 24 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 52f6aa3232e1..f26283c2c768 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -221,26 +221,14 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 				      struct binder_lru_page *lru_page,
 				      unsigned long addr)
 {
+	struct vm_area_struct *vma;
 	struct page *page;
-	int ret = 0;
+	long npages;
+	int ret;

 	if (!mmget_not_zero(alloc->mm))
 		return -ESRCH;

-	/*
-	 * Protected with mmap_sem in write mode as multiple tasks
-	 * might race to install the same page.
-	 */
-	mmap_write_lock(alloc->mm);
-	if (binder_get_installed_page(lru_page))
-		goto out;
-
-	if (!alloc->vma) {
-		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
-		ret = -ESRCH;
-		goto out;
-	}
-
 	page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
 	if (!page) {
 		pr_err("%d: failed to allocate page\n", alloc->pid);
@@ -248,19 +236,48 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		goto out;
 	}

-	ret = vm_insert_page(alloc->vma, addr, page);
-	if (ret) {
+	mmap_read_lock(alloc->mm);
+	vma = vma_lookup(alloc->mm, addr);
+	if (!vma || vma != alloc->vma) {
+		__free_page(page);
+		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
+		ret = -ESRCH;
+		goto unlock;
+	}
+
+	ret = vm_insert_page(vma, addr, page);
+	switch (ret) {
+	case -EBUSY:
+		/*
+		 * EBUSY is ok. Someone installed the pte first but the
+		 * lru_page->page_ptr has not been updated yet. Discard
+		 * our page and look up the one already installed.
+		 */
+		ret = 0;
+		__free_page(page);
+		npages = get_user_pages_remote(alloc->mm, addr, 1,
+					       FOLL_NOFAULT, &page, NULL);
+		if (npages <= 0) {
+			pr_err("%d: failed to find page at offset %lx\n",
+			       alloc->pid, addr - alloc->buffer);
+			ret = -ESRCH;
+			break;
+		}
+		fallthrough;
+	case 0:
+		/* Mark page installation complete and safe to use */
+		binder_set_installed_page(lru_page, page);
+		break;
+	default:
+		__free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
 		       alloc->pid, __func__, addr - alloc->buffer, ret);
-		__free_page(page);
 		ret = -ENOMEM;
-		goto out;
+		break;
 	}
-
-	/* Mark page installation complete and safe to use */
-	binder_set_installed_page(lru_page, page);
+unlock:
+	mmap_read_unlock(alloc->mm);
 out:
-	mmap_write_unlock(alloc->mm);
 	mmput_async(alloc->mm);
 	return ret;
 }
@@ -1090,7 +1107,6 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	trace_binder_unmap_kernel_end(alloc, index);

 	list_lru_isolate(lru, item);
-	mutex_unlock(&alloc->mutex);
 	spin_unlock(&lru->lock);

 	if (vma) {
@@ -1101,6 +1117,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		trace_binder_unmap_user_end(alloc, index);
 	}

+	mutex_unlock(&alloc->mutex);
 	mmap_read_unlock(mm);
 	mmput_async(mm);
 	__free_page(page_to_free);
-- 
2.47.0.338.g60cca15819-goog
Date: Tue, 26 Nov 2024 18:40:06 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-4-cmllamas@google.com>
Subject: [PATCH v5 3/9] binder: select correct nid for pages in LRU
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Nhat Pham, Johannes Weiner

The numa node id for binder pages is currently being derived from the
lru entry under struct binder_lru_page. However, this object doesn't
reflect the node id of the struct page items allocated separately.

Instead, select the correct node id from the page itself. This was made
possible since commit 0a97c01cd20b ("list_lru: allow explicit memcg and
NUMA node selection").
Cc: Nhat Pham
Cc: Johannes Weiner
Cc: Suren Baghdasaryan
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index f26283c2c768..1f02bec78451 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -210,7 +210,10 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,

 		trace_binder_free_lru_start(alloc, index);

-		ret = list_lru_add_obj(&binder_freelist, &page->lru);
+		ret = list_lru_add(&binder_freelist,
+				   &page->lru,
+				   page_to_nid(page->page_ptr),
+				   NULL);
 		WARN_ON(!ret);

 		trace_binder_free_lru_end(alloc, index);
@@ -334,7 +337,10 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		if (page->page_ptr) {
 			trace_binder_alloc_lru_start(alloc, index);

-			on_lru = list_lru_del_obj(&binder_freelist, &page->lru);
+			on_lru = list_lru_del(&binder_freelist,
+					      &page->lru,
+					      page_to_nid(page->page_ptr),
+					      NULL);
 			WARN_ON(!on_lru);

 			trace_binder_alloc_lru_end(alloc, index);
@@ -947,8 +953,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		if (!alloc->pages[i].page_ptr)
 			continue;

-		on_lru = list_lru_del_obj(&binder_freelist,
-					  &alloc->pages[i].lru);
+		on_lru = list_lru_del(&binder_freelist,
+				      &alloc->pages[i].lru,
+				      page_to_nid(alloc->pages[i].page_ptr),
+				      NULL);
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				   "%s: %d: page %d %s\n",
 				   __func__, alloc->pid, i,
-- 
2.47.0.338.g60cca15819-goog
Date: Tue, 26 Nov 2024 18:40:07 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-5-cmllamas@google.com>
Subject: [PATCH v5 4/9] binder: remove struct binder_lru_page
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Matthew Wilcox, "Liam R. Howlett"

Remove the redundant struct binder_lru_page concept. Instead, let's use
available struct page->lru and page->private members directly to achieve
the same functionality. This reduces the maximum memory allocated for
alloc->pages from 32768 down to 8192 bytes (aarch64). Savings are per
binder instance.

Note that Matthew pointed out that some of the page members used in this
patch (e.g. page->lru) are likely going to be removed in the near
future [1]. Binder will adopt an alternative solution when this happens.

Link: https://lore.kernel.org/all/ZzziucEm3np6e7a0@casper.infradead.org/ [1]
Cc: Matthew Wilcox
Cc: Liam R. Howlett
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c          | 104 ++++++++++++------------
 drivers/android/binder_alloc.h          |  16 +---
 drivers/android/binder_alloc_selftest.c |  14 ++--
 3 files changed, 63 insertions(+), 71 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 1f02bec78451..21edd16bf23d 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -176,25 +176,26 @@ struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
 }

 static inline void
-binder_set_installed_page(struct binder_lru_page *lru_page,
+binder_set_installed_page(struct binder_alloc *alloc,
+			  unsigned long index,
 			  struct page *page)
 {
 	/* Pairs with acquire in binder_get_installed_page() */
-	smp_store_release(&lru_page->page_ptr, page);
+	smp_store_release(&alloc->pages[index], page);
 }

 static inline struct page *
-binder_get_installed_page(struct binder_lru_page *lru_page)
+binder_get_installed_page(struct binder_alloc *alloc, unsigned long index)
 {
 	/* Pairs with release in
	 binder_set_installed_page() */
-	return smp_load_acquire(&lru_page->page_ptr);
+	return smp_load_acquire(&alloc->pages[index]);
 }
 
 static void binder_lru_freelist_add(struct binder_alloc *alloc,
				     unsigned long start, unsigned long end)
 {
-	struct binder_lru_page *page;
 	unsigned long page_addr;
+	struct page *page;
 
 	trace_binder_update_page_range(alloc, false, start, end);
 
@@ -203,16 +204,15 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 		int ret;
 
 		index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		page = &alloc->pages[index];
-
-		if (!binder_get_installed_page(page))
+		page = binder_get_installed_page(alloc, index);
+		if (!page)
 			continue;
 
 		trace_binder_free_lru_start(alloc, index);
 
 		ret = list_lru_add(&binder_freelist, &page->lru,
-				   page_to_nid(page->page_ptr),
+				   page_to_nid(page),
 				   NULL);
 		WARN_ON(!ret);
 
@@ -220,8 +220,25 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 	}
 }
 
+static struct page *binder_page_alloc(struct binder_alloc *alloc,
+				      unsigned long index,
+				      unsigned long addr)
+{
+	struct page *page;
+
+	page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
+	if (!page)
+		return NULL;
+
+	page->private = (unsigned long)alloc;
+	INIT_LIST_HEAD(&page->lru);
+	page->index = index;
+
+	return page;
+}
+
 static int binder_install_single_page(struct binder_alloc *alloc,
-				      struct binder_lru_page *lru_page,
+				      unsigned long index,
 				      unsigned long addr)
 {
 	struct vm_area_struct *vma;
@@ -232,9 +249,8 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 	if (!mmget_not_zero(alloc->mm))
 		return -ESRCH;
 
-	page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
+	page = binder_page_alloc(alloc, index, addr);
 	if (!page) {
-		pr_err("%d: failed to allocate page\n", alloc->pid);
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -253,7 +269,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 	case -EBUSY:
 		/*
 		 * EBUSY is ok.
 Someone installed the pte first but the
-		 * lru_page->page_ptr has not been updated yet. Discard
+		 * alloc->pages[index] has not been updated yet. Discard
 		 * our page and look up the one already installed.
 		 */
 		ret = 0;
@@ -269,7 +285,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		fallthrough;
 	case 0:
 		/* Mark page installation complete and safe to use */
-		binder_set_installed_page(lru_page, page);
+		binder_set_installed_page(alloc, index, page);
 		break;
 	default:
 		__free_page(page);
@@ -289,7 +305,6 @@ static int binder_install_buffer_pages(struct binder_alloc *alloc,
 				       struct binder_buffer *buffer,
 				       size_t size)
 {
-	struct binder_lru_page *page;
 	unsigned long start, final;
 	unsigned long page_addr;
 
@@ -301,14 +316,12 @@ static int binder_install_buffer_pages(struct binder_alloc *alloc,
 		int ret;
 
 		index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		page = &alloc->pages[index];
-
-		if (binder_get_installed_page(page))
+		if (binder_get_installed_page(alloc, index))
 			continue;
 
 		trace_binder_alloc_page_start(alloc, index);
 
-		ret = binder_install_single_page(alloc, page, page_addr);
+		ret = binder_install_single_page(alloc, index, page_addr);
 		if (ret)
 			return ret;
 
@@ -322,8 +335,8 @@ static int binder_install_buffer_pages(struct binder_alloc *alloc,
 static void binder_lru_freelist_del(struct binder_alloc *alloc,
 				    unsigned long start, unsigned long end)
 {
-	struct binder_lru_page *page;
 	unsigned long page_addr;
+	struct page *page;
 
 	trace_binder_update_page_range(alloc, true, start, end);
 
@@ -332,14 +345,14 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		bool on_lru;
 
 		index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		page = &alloc->pages[index];
+		page = binder_get_installed_page(alloc, index);
 
-		if (page->page_ptr) {
+		if (page) {
 			trace_binder_alloc_lru_start(alloc, index);
 
 			on_lru = list_lru_del(&binder_freelist,
 					      &page->lru,
-					      page_to_nid(page->page_ptr),
+					      page_to_nid(page),
					      NULL);
 			WARN_ON(!on_lru);
 
@@ -760,11 +773,10 @@ static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
 			(buffer->user_data - alloc->buffer);
 	pgoff_t pgoff = buffer_space_offset & ~PAGE_MASK;
 	size_t index = buffer_space_offset >> PAGE_SHIFT;
-	struct binder_lru_page *lru_page;
 
-	lru_page = &alloc->pages[index];
 	*pgoffp = pgoff;
-	return lru_page->page_ptr;
+
+	return binder_get_installed_page(alloc, index);
 }
 
 /**
@@ -839,7 +851,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 {
 	struct binder_buffer *buffer;
 	const char *failure_string;
-	int ret, i;
+	int ret;
 
 	if (unlikely(vma->vm_mm != alloc->mm)) {
 		ret = -EINVAL;
@@ -862,17 +874,12 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	alloc->pages = kvcalloc(alloc->buffer_size / PAGE_SIZE,
 				sizeof(alloc->pages[0]),
 				GFP_KERNEL);
-	if (alloc->pages == NULL) {
+	if (!alloc->pages) {
 		ret = -ENOMEM;
 		failure_string = "alloc page array";
 		goto err_alloc_pages_failed;
 	}
 
-	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-		alloc->pages[i].alloc = alloc;
-		INIT_LIST_HEAD(&alloc->pages[i].lru);
-	}
-
 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
 	if (!buffer) {
 		ret = -ENOMEM;
@@ -948,20 +955,22 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 	int i;
 
 	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+		struct page *page;
 		bool on_lru;
 
-		if (!alloc->pages[i].page_ptr)
+		page = binder_get_installed_page(alloc, i);
+		if (!page)
 			continue;
 
 		on_lru = list_lru_del(&binder_freelist,
-				      &alloc->pages[i].lru,
-				      page_to_nid(alloc->pages[i].page_ptr),
+				      &page->lru,
+				      page_to_nid(page),
 				      NULL);
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				   "%s: %d: page %d %s\n",
 				   __func__, alloc->pid, i,
 				   on_lru ?
 "on lru" : "active");
-		__free_page(alloc->pages[i].page_ptr);
+		__free_page(page);
 		page_count++;
 	}
 }
@@ -1010,7 +1019,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 void binder_alloc_print_pages(struct seq_file *m,
 			      struct binder_alloc *alloc)
 {
-	struct binder_lru_page *page;
+	struct page *page;
 	int i;
 	int active = 0;
 	int lru = 0;
@@ -1023,8 +1032,8 @@ void binder_alloc_print_pages(struct seq_file *m,
 	 */
 	if (binder_alloc_get_vma(alloc) != NULL) {
 		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-			page = &alloc->pages[i];
-			if (!page->page_ptr)
+			page = binder_get_installed_page(alloc, i);
+			if (!page)
 				free++;
 			else if (list_empty(&page->lru))
 				active++;
@@ -1083,11 +1092,10 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 				       void *cb_arg)
 	__must_hold(&lru->lock)
 {
-	struct binder_lru_page *page = container_of(item, typeof(*page), lru);
-	struct binder_alloc *alloc = page->alloc;
+	struct page *page = container_of(item, typeof(*page), lru);
+	struct binder_alloc *alloc = (struct binder_alloc *)page->private;
 	struct mm_struct *mm = alloc->mm;
 	struct vm_area_struct *vma;
-	struct page *page_to_free;
 	unsigned long page_addr;
 	size_t index;
 
@@ -1097,10 +1105,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_mmap_read_lock_failed;
 	if (!mutex_trylock(&alloc->mutex))
 		goto err_get_alloc_mutex_failed;
-	if (!page->page_ptr)
-		goto err_page_already_freed;
 
-	index = page - alloc->pages;
+	index = page->index;
 	page_addr = alloc->buffer + index * PAGE_SIZE;
 
 	vma = vma_lookup(mm, page_addr);
@@ -1109,8 +1115,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 
 	trace_binder_unmap_kernel_start(alloc, index);
 
-	page_to_free = page->page_ptr;
-	page->page_ptr = NULL;
+	binder_set_installed_page(alloc, index, NULL);
 
 	trace_binder_unmap_kernel_end(alloc, index);
 
@@ -1128,12 +1133,11 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	mutex_unlock(&alloc->mutex);
 	mmap_read_unlock(mm);
 	mmput_async(mm);
-	__free_page(page_to_free);
+	__free_page(page);
 
 	return LRU_REMOVED_RETRY;
 
 err_invalid_vma:
-err_page_already_freed:
 	mutex_unlock(&alloc->mutex);
 err_get_alloc_mutex_failed:
 	mmap_read_unlock(mm);
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index 33c5f971c0a5..69cce5bac1e1 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -58,18 +58,6 @@ struct binder_buffer {
 	int pid;
 };
 
-/**
- * struct binder_lru_page - page object used for binder shrinker
- * @page_ptr: pointer to physical page in mmap'd space
- * @lru:      entry in binder_freelist
- * @alloc:    binder_alloc for a proc
- */
-struct binder_lru_page {
-	struct list_head lru;
-	struct page *page_ptr;
-	struct binder_alloc *alloc;
-};
-
 /**
  * struct binder_alloc - per-binder proc state for binder allocator
  * @mutex:              protects binder_alloc fields
@@ -83,7 +71,7 @@ struct binder_lru_page {
  * @allocated_buffers:  rb tree of allocated buffers sorted by address
  * @free_async_space:   VA space available for async buffers.
 This is
  *                      initialized at mmap time to 1/2 the full VA space
- * @pages:              array of binder_lru_page
+ * @pages:              array of struct page *
  * @buffer_size:        size of address space specified via mmap
  * @pid:                pid for associated binder_proc (invariant after init)
  * @pages_high:         high watermark of offset in @pages
@@ -104,7 +92,7 @@ struct binder_alloc {
 	struct rb_root free_buffers;
 	struct rb_root allocated_buffers;
 	size_t free_async_space;
-	struct binder_lru_page *pages;
+	struct page **pages;
 	size_t buffer_size;
 	int pid;
 	size_t pages_high;
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index 81442fe20a69..c6941b9abad9 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -105,10 +105,10 @@ static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
 	page_addr = buffer->user_data;
 	for (; page_addr < end; page_addr += PAGE_SIZE) {
 		page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		if (!alloc->pages[page_index].page_ptr ||
-		    !list_empty(&alloc->pages[page_index].lru)) {
+		if (!alloc->pages[page_index] ||
+		    !list_empty(&alloc->pages[page_index]->lru)) {
 			pr_err("expect alloc but is %s at page index %d\n",
-			       alloc->pages[page_index].page_ptr ?
+			       alloc->pages[page_index] ?
 			       "lru" : "free", page_index);
 			return false;
 		}
@@ -148,10 +148,10 @@ static void binder_selftest_free_buf(struct binder_alloc *alloc,
 	 * if binder shrinker ran during binder_alloc_free_buf
 	 * calls above.
 	 */
-	if (list_empty(&alloc->pages[i].lru)) {
+	if (list_empty(&alloc->pages[i]->lru)) {
 		pr_err_size_seq(sizes, seq);
 		pr_err("expect lru but is %s at page index %d\n",
-		       alloc->pages[i].page_ptr ? "alloc" : "free", i);
+		       alloc->pages[i] ?
 "alloc" : "free", i);
 		binder_selftest_failures++;
 	}
 }
@@ -168,9 +168,9 @@ static void binder_selftest_free_page(struct binder_alloc *alloc)
 	}
 
 	for (i = 0; i < (alloc->buffer_size / PAGE_SIZE); i++) {
-		if (alloc->pages[i].page_ptr) {
+		if (alloc->pages[i]) {
 			pr_err("expect free but is %s at page index %d\n",
-			       list_empty(&alloc->pages[i].lru) ?
+			       list_empty(&alloc->pages[i]->lru) ?
 			       "alloc" : "lru", i);
 			binder_selftest_failures++;
 		}
-- 
2.47.0.338.g60cca15819-goog

From nobody Sat Feb 7 05:01:33 2026
Date: Tue, 26 Nov 2024 18:40:08 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
Mime-Version: 1.0
References: <20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-6-cmllamas@google.com>
Subject: [PATCH v5 5/9] binder: replace alloc->vma with alloc->mapped
From: Carlos Llamas
To: Greg Kroah-Hartman, "Arve Hjønnevåg", Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Minchan Kim, "Liam R. Howlett", Matthew Wilcox

It is unsafe to use alloc->vma outside of the mmap_sem. Instead, add a
new boolean alloc->mapped to save the vma state (mapped or unmapped) and
use this as a replacement for alloc->vma to validate several paths.

Using the alloc->vma caused several performance and security issues in
the past. Now that it has been replaced with either vma_lookup() or the
alloc->mapped state, we can finally remove it.

Cc: Minchan Kim
Cc: Liam R.
 Howlett
Cc: Matthew Wilcox
Cc: Suren Baghdasaryan
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c          | 44 ++++++++++++-------------
 drivers/android/binder_alloc.h          |  6 ++--
 drivers/android/binder_alloc_selftest.c |  2 +-
 3 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 21edd16bf23d..4c308d860478 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -220,6 +220,19 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 	}
 }
 
+static inline
+void binder_alloc_set_mapped(struct binder_alloc *alloc, bool state)
+{
+	/* pairs with smp_load_acquire in binder_alloc_is_mapped() */
+	smp_store_release(&alloc->mapped, state);
+}
+
+static inline bool binder_alloc_is_mapped(struct binder_alloc *alloc)
+{
+	/* pairs with smp_store_release in binder_alloc_set_mapped() */
+	return smp_load_acquire(&alloc->mapped);
+}
+
 static struct page *binder_page_alloc(struct binder_alloc *alloc,
 				      unsigned long index,
 				      unsigned long addr)
@@ -257,7 +270,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 
 	mmap_read_lock(alloc->mm);
 	vma = vma_lookup(alloc->mm, addr);
-	if (!vma || vma != alloc->vma) {
+	if (!vma || !binder_alloc_is_mapped(alloc)) {
 		__free_page(page);
 		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
 		ret = -ESRCH;
@@ -365,20 +378,6 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 	}
 }
 
-static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
-					struct vm_area_struct *vma)
-{
-	/* pairs with smp_load_acquire in binder_alloc_get_vma() */
-	smp_store_release(&alloc->vma, vma);
-}
-
-static inline struct vm_area_struct *binder_alloc_get_vma(
-	struct binder_alloc *alloc)
-{
-	/* pairs with smp_store_release in binder_alloc_set_vma() */
-	return smp_load_acquire(&alloc->vma);
-}
-
 static void debug_no_space_locked(struct binder_alloc *alloc)
 {
 	size_t largest_alloc_size = 0;
@@ -612,7 +611,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	int ret;
 
 	/* Check binder_alloc is fully initialized */
-	if (!binder_alloc_get_vma(alloc)) {
+	if (!binder_alloc_is_mapped(alloc)) {
 		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
 				   "%d: binder_alloc_buf, no vma\n",
 				   alloc->pid);
@@ -894,7 +893,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	alloc->free_async_space = alloc->buffer_size / 2;
 
 	/* Signal binder_alloc is fully initialized */
-	binder_alloc_set_vma(alloc, vma);
+	binder_alloc_set_mapped(alloc, true);
 
 	return 0;
 
@@ -924,7 +923,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 
 	buffers = 0;
 	mutex_lock(&alloc->mutex);
-	BUG_ON(alloc->vma);
+	BUG_ON(alloc->mapped);
 
 	while ((n = rb_first(&alloc->allocated_buffers))) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
@@ -1030,7 +1029,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 	 * Make sure the binder_alloc is fully initialized, otherwise we might
 	 * read inconsistent state.
 	 */
-	if (binder_alloc_get_vma(alloc) != NULL) {
+	if (binder_alloc_is_mapped(alloc)) {
 		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
 			page = binder_get_installed_page(alloc, i);
 			if (!page)
@@ -1070,12 +1069,12 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
  * @alloc: binder_alloc for this proc
  *
  * Called from binder_vma_close() when releasing address space.
- * Clears alloc->vma to prevent new incoming transactions from
+ * Clears alloc->mapped to prevent new incoming transactions from
  * allocating more buffers.
 */
 void binder_alloc_vma_close(struct binder_alloc *alloc)
 {
-	binder_alloc_set_vma(alloc, NULL);
+	binder_alloc_set_mapped(alloc, false);
 }
 
 /**
@@ -1110,7 +1109,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	page_addr = alloc->buffer + index * PAGE_SIZE;
 
 	vma = vma_lookup(mm, page_addr);
-	if (vma && vma != binder_alloc_get_vma(alloc))
+	/* ensure the vma corresponds to the binder mapping */
+	if (vma && !binder_alloc_is_mapped(alloc))
 		goto err_invalid_vma;
 
 	trace_binder_unmap_kernel_start(alloc, index);
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index 69cce5bac1e1..634bc2e03729 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -61,8 +61,6 @@ struct binder_buffer {
 /**
  * struct binder_alloc - per-binder proc state for binder allocator
  * @mutex:              protects binder_alloc fields
- * @vma:                vm_area_struct passed to mmap_handler
- *                      (invariant after mmap)
  * @mm:                 copy of task->mm (invariant after open)
  * @buffer:             base of per-proc address space mapped via mmap
  * @buffers:            list of all buffers for this proc
@@ -75,6 +73,8 @@ struct binder_buffer {
  * @buffer_size:        size of address space specified via mmap
  * @pid:                pid for associated binder_proc (invariant after init)
  * @pages_high:         high watermark of offset in @pages
+ * @mapped:             whether the vm area is mapped, each binder instance is
+ *                      allowed a single mapping throughout its lifetime
  * @oneway_spam_detected: %true if oneway spam detection fired, clear that
  *                      flag once the async buffer has returned to a healthy state
  *
@@ -85,7 +85,6 @@ struct binder_buffer {
 */
 struct binder_alloc {
 	struct mutex mutex;
-	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long buffer;
 	struct list_head buffers;
@@ -96,6 +95,7 @@ struct binder_alloc {
 	size_t buffer_size;
 	int pid;
 	size_t pages_high;
+	bool mapped;
 	bool oneway_spam_detected;
 };
 
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index c6941b9abad9..2dda82d0d5e8 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -291,7 +291,7 @@ void binder_selftest_alloc(struct binder_alloc *alloc)
 	if (!binder_selftest_run)
 		return;
 	mutex_lock(&binder_selftest_lock);
-	if (!binder_selftest_run || !alloc->vma)
+	if (!binder_selftest_run || !alloc->mapped)
 		goto done;
 	pr_info("STARTED\n");
 	binder_selftest_alloc_offset(alloc, end_offset, 0);
-- 
2.47.0.338.g60cca15819-goog

From nobody Sat Feb 7 05:01:33 2026
Date: Tue, 26 Nov 2024 18:40:09 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
Mime-Version: 1.0
References: <20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-7-cmllamas@google.com>
Subject: [PATCH v5 6/9] binder: rename alloc->buffer to vm_start
From: Carlos Llamas
To: Greg Kroah-Hartman, "Arve Hjønnevåg", Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com

The alloc->buffer field in struct binder_alloc stores the starting
address of the mapped vma; rename this field to alloc->vm_start to
better reflect its purpose. It also avoids confusion with the binder
buffer concept, e.g. transaction->buffer.

No functional changes in this patch.
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder.c                |  2 +-
 drivers/android/binder_alloc.c          | 28 ++++++++++++-------------
 drivers/android/binder_alloc.h          |  4 ++--
 drivers/android/binder_alloc_selftest.c |  2 +-
 drivers/android/binder_trace.h          |  2 +-
 5 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 978740537a1a..57265cabec43 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -6350,7 +6350,7 @@ static void print_binder_transaction_ilocked(struct seq_file *m,
 		seq_printf(m, " node %d", buffer->target_node->debug_id);
 	seq_printf(m, " size %zd:%zd offset %lx\n",
 		   buffer->data_size, buffer->offsets_size,
-		   proc->alloc.buffer - buffer->user_data);
+		   proc->alloc.vm_start - buffer->user_data);
 }
 
 static void print_binder_work_ilocked(struct seq_file *m,
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 4c308d860478..71b29bfd8a2e 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -61,7 +61,7 @@ static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
 				       struct binder_buffer *buffer)
 {
 	if (list_is_last(&buffer->entry, &alloc->buffers))
-		return alloc->buffer + alloc->buffer_size - buffer->user_data;
+		return alloc->vm_start + alloc->buffer_size - buffer->user_data;
 	return binder_buffer_next(buffer)->user_data - buffer->user_data;
 }
 
@@ -203,7 +203,7 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 		size_t index;
 		int ret;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		page = binder_get_installed_page(alloc, index);
 		if (!page)
 			continue;
@@ -291,7 +291,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 				   FOLL_NOFAULT, &page, NULL);
 	if (npages <= 0) {
 		pr_err("%d: failed to find page at offset %lx\n",
 		       alloc->pid, addr -
 alloc->vm_start);
 		ret = -ESRCH;
 		break;
 	}
@@ -303,7 +303,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 	default:
 		__free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
-		       alloc->pid, __func__, addr - alloc->buffer, ret);
+		       alloc->pid, __func__, addr - alloc->vm_start, ret);
 		ret = -ENOMEM;
 		break;
 	}
@@ -328,7 +328,7 @@ static int binder_install_buffer_pages(struct binder_alloc *alloc,
 		unsigned long index;
 		int ret;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		if (binder_get_installed_page(alloc, index))
 			continue;
 
@@ -357,7 +357,7 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		unsigned long index;
 		bool on_lru;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		page = binder_get_installed_page(alloc, index);
 
 		if (page) {
@@ -709,8 +709,8 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
 	BUG_ON(buffer->free);
 	BUG_ON(size > buffer_size);
 	BUG_ON(buffer->transaction != NULL);
-	BUG_ON(buffer->user_data < alloc->buffer);
-	BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
+	BUG_ON(buffer->user_data < alloc->vm_start);
+	BUG_ON(buffer->user_data > alloc->vm_start + alloc->buffer_size);
 
 	if (buffer->async_transaction) {
 		alloc->free_async_space += buffer_size;
@@ -769,7 +769,7 @@ static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
 					  pgoff_t *pgoffp)
 {
 	binder_size_t buffer_space_offset = buffer_offset +
-		(buffer->user_data - alloc->buffer);
+		(buffer->user_data - alloc->vm_start);
 	pgoff_t pgoff = buffer_space_offset & ~PAGE_MASK;
 	size_t index = buffer_space_offset >> PAGE_SHIFT;
 
@@ -868,7 +868,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 			      SZ_4M);
 	mutex_unlock(&binder_alloc_mmap_lock);
 
-	alloc->buffer = vma->vm_start;
+	alloc->vm_start = vma->vm_start;
 
 	alloc->pages =
 kvcalloc(alloc->buffer_size / PAGE_SIZE,
 				sizeof(alloc->pages[0]),
@@ -886,7 +886,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 		goto err_alloc_buf_struct_failed;
 	}
 
-	buffer->user_data = alloc->buffer;
+	buffer->user_data = alloc->vm_start;
 	list_add(&buffer->entry, &alloc->buffers);
 	buffer->free = 1;
 	binder_insert_free_buffer(alloc, buffer);
@@ -901,7 +901,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	kvfree(alloc->pages);
 	alloc->pages = NULL;
 err_alloc_pages_failed:
-	alloc->buffer = 0;
+	alloc->vm_start = 0;
 	mutex_lock(&binder_alloc_mmap_lock);
 	alloc->buffer_size = 0;
 err_already_mapped:
@@ -1002,7 +1002,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 		seq_printf(m, "  buffer %d: %lx size %zd:%zd:%zd %s\n",
 			   buffer->debug_id,
-			   buffer->user_data - alloc->buffer,
+			   buffer->user_data - alloc->vm_start,
 			   buffer->data_size, buffer->offsets_size,
 			   buffer->extra_buffers_size,
 			   buffer->transaction ?
"active" : "delivered"); @@ -1106,7 +1106,7 @@ enum lru_status binder_alloc_free_page(struct list_he= ad *item, goto err_get_alloc_mutex_failed; =20 index =3D page->index; - page_addr =3D alloc->buffer + index * PAGE_SIZE; + page_addr =3D alloc->vm_start + index * PAGE_SIZE; =20 vma =3D vma_lookup(mm, page_addr); /* ensure the vma corresponds to the binder mapping */ diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index 634bc2e03729..a6f26b1d8f5e 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -62,7 +62,7 @@ struct binder_buffer { * struct binder_alloc - per-binder proc state for binder allocator * @mutex: protects binder_alloc fields * @mm: copy of task->mm (invariant after open) - * @buffer: base of per-proc address space mapped via mmap + * @vm_start: base of per-proc address space mapped via mmap * @buffers: list of all buffers for this proc * @free_buffers: rb tree of buffers available for allocation * sorted by size @@ -86,7 +86,7 @@ struct binder_buffer { struct binder_alloc { struct mutex mutex; struct mm_struct *mm; - unsigned long buffer; + unsigned long vm_start; struct list_head buffers; struct rb_root free_buffers; struct rb_root allocated_buffers; diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/bind= er_alloc_selftest.c index 2dda82d0d5e8..d2d086d2c037 100644 --- a/drivers/android/binder_alloc_selftest.c +++ b/drivers/android/binder_alloc_selftest.c @@ -104,7 +104,7 @@ static bool check_buffer_pages_allocated(struct binder_= alloc *alloc, end =3D PAGE_ALIGN(buffer->user_data + size); page_addr =3D buffer->user_data; for (; page_addr < end; page_addr +=3D PAGE_SIZE) { - page_index =3D (page_addr - alloc->buffer) / PAGE_SIZE; + page_index =3D (page_addr - alloc->vm_start) / PAGE_SIZE; if (!alloc->pages[page_index] || !list_empty(&alloc->pages[page_index]->lru)) { pr_err("expect alloc but is %s at page index %d\n", diff --git a/drivers/android/binder_trace.h 
b/drivers/android/binder_trace.h index fe38c6fc65d0..16de1b9e72f7 100644 --- a/drivers/android/binder_trace.h +++ b/drivers/android/binder_trace.h @@ -328,7 +328,7 @@ TRACE_EVENT(binder_update_page_range, TP_fast_assign( __entry->proc =3D alloc->pid; __entry->allocate =3D allocate; - __entry->offset =3D start - alloc->buffer; + __entry->offset =3D start - alloc->vm_start; __entry->size =3D end - start; ), TP_printk("proc=3D%d allocate=3D%d offset=3D%zu size=3D%zu", --=20 2.47.0.338.g60cca15819-goog From nobody Sat Feb 7 05:01:33 2026 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C820E1DF973 for ; Tue, 26 Nov 2024 18:40:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1732646443; cv=none; b=bkpw5/Ugv8tSSP/yCRgu1B/oPYGoFSUbnKkLXZ11nlGSizf9+xQkQwh0Ztb1HdS2QFpSzTPXzf6PXU6j8RJMMgcmwEF8By+Oj/l1aEUEFvuuzVk6EMcfn1FRK8MVXe7XutN4B0LdHYPepFhAbVdIs3e56Rg2GZk9lzs6OaM34O0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1732646443; c=relaxed/simple; bh=kobwXc6tmkTAygusWJ9qIVEQqqZsRYGXaPpK6AJbvPE=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=NhlkzVi5O4YOJGYyYgmyNqM2pEpXbXbDSH4+gej+xeHrwVMYqRaM3mi+n8u7wFCoE+WgBGySKi00UbeYAoa4PTnhZibW+guHq9EFKuiVD1z33YSccsmLqIk8+rAAe5i/ULW0qZ88qX3d1/IGDpZc/PVBJC9WZbT0BUIPAs7NUKY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--cmllamas.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=R4u6DUF4; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) 
Date: Tue, 26 Nov 2024 18:40:10 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
References: <20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-8-cmllamas@google.com>
Subject: [PATCH v5 7/9] binder: use per-vma lock in page installation
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Nhat Pham, Johannes Weiner, Barry Song, Hillf Danton, Lorenzo Stoakes

Use per-vma locking for concurrent page installations; this minimizes
contention with unrelated vmas, improving performance. The mmap_lock is
still acquired when needed though, e.g. before get_user_pages_remote().

Many thanks to Barry Song, who posted a similar approach [1].
Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
Cc: Nhat Pham
Cc: Johannes Weiner
Cc: Barry Song
Cc: Suren Baghdasaryan
Cc: Hillf Danton
Cc: Lorenzo Stoakes
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 67 +++++++++++++++++++++++++---------
 1 file changed, 50 insertions(+), 17 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 71b29bfd8a2e..f550dec4b790 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -233,6 +233,53 @@ static inline bool binder_alloc_is_mapped(struct binder_alloc *alloc)
 	return smp_load_acquire(&alloc->mapped);
 }
 
+static struct page *binder_page_lookup(struct binder_alloc *alloc,
+				       unsigned long addr)
+{
+	struct mm_struct *mm = alloc->mm;
+	struct page *page;
+	long npages = 0;
+
+	/*
+	 * Find an existing page in the remote mm. If missing,
+	 * don't attempt to fault-in just propagate an error.
+	 */
+	mmap_read_lock(mm);
+	if (binder_alloc_is_mapped(alloc))
+		npages = get_user_pages_remote(mm, addr, 1, FOLL_NOFAULT,
+					       &page, NULL);
+	mmap_read_unlock(mm);
+
+	return npages > 0 ? page : NULL;
+}
+
+static int binder_page_insert(struct binder_alloc *alloc,
+			      unsigned long addr,
+			      struct page *page)
+{
+	struct mm_struct *mm = alloc->mm;
+	struct vm_area_struct *vma;
+	int ret = -ESRCH;
+
+	/* attempt per-vma lock first */
+	vma = lock_vma_under_rcu(mm, addr);
+	if (vma) {
+		if (binder_alloc_is_mapped(alloc))
+			ret = vm_insert_page(vma, addr, page);
+		vma_end_read(vma);
+		return ret;
+	}
+
+	/* fall back to mmap_lock */
+	mmap_read_lock(mm);
+	vma = vma_lookup(mm, addr);
+	if (vma && binder_alloc_is_mapped(alloc))
+		ret = vm_insert_page(vma, addr, page);
+	mmap_read_unlock(mm);
+
+	return ret;
+}
+
 static struct page *binder_page_alloc(struct binder_alloc *alloc,
 				      unsigned long index,
 				      unsigned long addr)
@@ -254,9 +301,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 				      unsigned long index,
 				      unsigned long addr)
 {
-	struct vm_area_struct *vma;
 	struct page *page;
-	long npages;
 	int ret;
 
 	if (!mmget_not_zero(alloc->mm))
@@ -268,16 +313,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		goto out;
 	}
 
-	mmap_read_lock(alloc->mm);
-	vma = vma_lookup(alloc->mm, addr);
-	if (!vma || !binder_alloc_is_mapped(alloc)) {
-		__free_page(page);
-		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
-		ret = -ESRCH;
-		goto unlock;
-	}
-
-	ret = vm_insert_page(vma, addr, page);
+	ret = binder_page_insert(alloc, addr, page);
 	switch (ret) {
 	case -EBUSY:
 		/*
@@ -287,9 +323,8 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		 */
 		ret = 0;
 		__free_page(page);
-		npages = get_user_pages_remote(alloc->mm, addr, 1,
-					       FOLL_NOFAULT, &page, NULL);
-		if (npages <= 0) {
+		page = binder_page_lookup(alloc, addr);
+		if (!page) {
 			pr_err("%d: failed to find page at offset %lx\n",
 			       alloc->pid, addr - alloc->vm_start);
 			ret = -ESRCH;
@@ -307,8 +342,6 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		ret = -ENOMEM;
 		break;
 	}
-unlock:
-	mmap_read_unlock(alloc->mm);
 out:
 	mmput_async(alloc->mm);
 	return ret;
-- 
2.47.0.338.g60cca15819-goog

From nobody Sat Feb 7 05:01:33 2026
Date: Tue, 26 Nov 2024 18:40:11 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
References: <20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-9-cmllamas@google.com>
Subject: [PATCH v5 8/9] binder: propagate vm_insert_page() errors
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com

Instead of always overriding errors with -ENOMEM, propagate the
specific error code returned by vm_insert_page(). This allows for more
accurate error logs and handling.

Cc: Suren Baghdasaryan
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index f550dec4b790..339db88c1522 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -339,7 +339,6 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		__free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
 		       alloc->pid, __func__, addr - alloc->vm_start, ret);
-		ret = -ENOMEM;
 		break;
 	}
 out:
-- 
2.47.0.338.g60cca15819-goog

From nobody Sat Feb 7 05:01:33 2026
Date: Tue, 26 Nov 2024 18:40:12 +0000
In-Reply-To: <20241126184021.45292-1-cmllamas@google.com>
References: <20241126184021.45292-1-cmllamas@google.com>
Message-ID: <20241126184021.45292-10-cmllamas@google.com>
Subject: [PATCH v5 9/9] binder: use per-vma lock in page reclaiming
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Liam R. Howlett

Use per-vma locking in the shrinker's callback when reclaiming pages,
similar to the page installation logic. This minimizes contention with
unrelated vmas, improving performance. The mmap_lock is still acquired
if the per-vma lock cannot be obtained.

Cc: Suren Baghdasaryan
Suggested-by: Liam R. Howlett
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 339db88c1522..8c10c1a6f459 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1128,19 +1128,28 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	struct mm_struct *mm = alloc->mm;
 	struct vm_area_struct *vma;
 	unsigned long page_addr;
+	int mm_locked = 0;
 	size_t index;
 
 	if (!mmget_not_zero(mm))
 		goto err_mmget;
-	if (!mmap_read_trylock(mm))
-		goto err_mmap_read_lock_failed;
-	if (!mutex_trylock(&alloc->mutex))
-		goto err_get_alloc_mutex_failed;
 
 	index = page->index;
 	page_addr = alloc->vm_start + index * PAGE_SIZE;
 
-	vma = vma_lookup(mm, page_addr);
+	/* attempt per-vma lock first */
+	vma = lock_vma_under_rcu(mm, page_addr);
+	if (!vma) {
+		/* fall back to mmap_lock */
+		if (!mmap_read_trylock(mm))
+			goto err_mmap_read_lock_failed;
+		mm_locked = 1;
+		vma = vma_lookup(mm, page_addr);
+	}
+
+	if (!mutex_trylock(&alloc->mutex))
+		goto err_get_alloc_mutex_failed;
+
 	/* ensure the vma corresponds to the binder mapping */
 	if (vma && !binder_alloc_is_mapped(alloc))
 		goto err_invalid_vma;
@@ -1163,7 +1172,10 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	}
 
 	mutex_unlock(&alloc->mutex);
-	mmap_read_unlock(mm);
+	if (mm_locked)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 	mmput_async(mm);
 	__free_page(page);
 
@@ -1172,7 +1184,10 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 err_invalid_vma:
 	mutex_unlock(&alloc->mutex);
 err_get_alloc_mutex_failed:
-	mmap_read_unlock(mm);
+	if (mm_locked)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
 err_mmget:
-- 
2.47.0.338.g60cca15819-goog