From nobody Sat Nov 23 23:48:00 2024
X-Mailing-List: linux-kernel@vger.kernel.org
Date: Fri, 8 Nov 2024 19:10:43 +0000
In-Reply-To: <20241108191057.3288442-1-cmllamas@google.com>
References:
<20241108191057.3288442-1-cmllamas@google.com>
Message-ID: <20241108191057.3288442-2-cmllamas@google.com>
X-Mailer: git-send-email 2.47.0.277.g8800431eea-goog
Subject: [PATCH v3 1/8] Revert "binder: switch alloc->mutex to spinlock_t"
From: Carlos Llamas <cmllamas@google.com>
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Mukesh Ojha

This reverts commit 7710e2cca32e7f3958480e8bd44f50e29d0c2509.

In preparation for concurrent page installations, restore the original
alloc->mutex which will serialize zap_page_range_single() against page
installations in subsequent patches (instead of the mmap_sem).

Trivial conflicts with commit 2c10a20f5e84a ("binder_alloc: Fix sleeping
function called from invalid context") were resolved.

Cc: Mukesh Ojha
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 46 +++++++++++++++++-----------------
 drivers/android/binder_alloc.h | 10 ++++----
 2 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index b3acbc4174fb..7241bf4a3ff2 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -169,9 +169,9 @@ struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
 {
 	struct binder_buffer *buffer;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	buffer = binder_alloc_prepare_to_free_locked(alloc, user_ptr);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return buffer;
 }
 
@@ -597,10 +597,10 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	if (!next)
 		return ERR_PTR(-ENOMEM);
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	buffer = binder_alloc_new_buf_locked(alloc, next, size, is_async);
 	if (IS_ERR(buffer)) {
-		spin_unlock(&alloc->lock);
+		mutex_unlock(&alloc->mutex);
 		goto out;
 	}
 
@@ -608,7 +608,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	buffer->offsets_size = offsets_size;
 	buffer->extra_buffers_size = extra_buffers_size;
 	buffer->pid = current->tgid;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 
 	ret = binder_install_buffer_pages(alloc, buffer, size);
 	if (ret) {
@@ -785,17 +785,17 @@ void binder_alloc_free_buf(struct binder_alloc *alloc,
 	 * We could eliminate the call to binder_alloc_clear_buf()
 	 * from binder_alloc_deferred_release() by moving this to
 	 * binder_free_buf_locked(). However, that could
-	 * increase contention for the alloc->lock if clear_on_free
-	 * is used frequently for large buffers. This lock is not
+	 * increase contention for the alloc mutex if clear_on_free
+	 * is used frequently for large buffers. The mutex is not
 	 * needed for correctness here.
 	 */
 	if (buffer->clear_on_free) {
 		binder_alloc_clear_buf(alloc, buffer);
 		buffer->clear_on_free = false;
 	}
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	binder_free_buf_locked(alloc, buffer);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 }
 
 /**
@@ -893,7 +893,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 	struct binder_buffer *buffer;
 
 	buffers = 0;
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	BUG_ON(alloc->vma);
 
 	while ((n = rb_first(&alloc->allocated_buffers))) {
@@ -940,7 +940,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 			page_count++;
 		}
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	kvfree(alloc->pages);
 	if (alloc->mm)
 		mmdrop(alloc->mm);
@@ -964,7 +964,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 	struct binder_buffer *buffer;
 	struct rb_node *n;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	for (n = rb_first(&alloc->allocated_buffers); n; n = rb_next(n)) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 		seq_printf(m, "  buffer %d: %lx size %zd:%zd:%zd %s\n",
@@ -974,7 +974,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 			   buffer->extra_buffers_size,
 			   buffer->transaction ? "active" : "delivered");
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 }
 
 /**
@@ -991,7 +991,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 	int lru = 0;
 	int free = 0;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	/*
 	 * Make sure the binder_alloc is fully initialized, otherwise we might
 	 * read inconsistent state.
@@ -1007,7 +1007,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 			lru++;
 		}
 	}
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	seq_printf(m, "  pages: %d:%d:%d\n", active, lru, free);
 	seq_printf(m, "  pages high watermark: %zu\n", alloc->pages_high);
 }
@@ -1023,10 +1023,10 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
 	struct rb_node *n;
 	int count = 0;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
 		count++;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return count;
 }
 
@@ -1071,8 +1071,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_mmget;
 	if (!mmap_read_trylock(mm))
 		goto err_mmap_read_lock_failed;
-	if (!spin_trylock(&alloc->lock))
-		goto err_get_alloc_lock_failed;
+	if (!mutex_trylock(&alloc->mutex))
+		goto err_get_alloc_mutex_failed;
 	if (!page->page_ptr)
 		goto err_page_already_freed;
 
@@ -1091,7 +1091,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	trace_binder_unmap_kernel_end(alloc, index);
 
 	list_lru_isolate(lru, item);
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	spin_unlock(lock);
 
 	if (vma) {
@@ -1111,8 +1111,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 
 err_invalid_vma:
 err_page_already_freed:
-	spin_unlock(&alloc->lock);
-err_get_alloc_lock_failed:
+	mutex_unlock(&alloc->mutex);
+err_get_alloc_mutex_failed:
 	mmap_read_unlock(mm);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
@@ -1147,7 +1147,7 @@ void binder_alloc_init(struct binder_alloc *alloc)
 	alloc->pid = current->group_leader->pid;
 	alloc->mm = current->mm;
 	mmgrab(alloc->mm);
-	spin_lock_init(&alloc->lock);
+	mutex_init(&alloc->mutex);
 	INIT_LIST_HEAD(&alloc->buffers);
 }
 
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index 70387234477e..a5181916942e 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -9,7 +9,7 @@
 #include <linux/rbtree.h>
 #include <linux/list.h>
 #include <linux/mm.h>
-#include <linux/spinlock.h>
+#include <linux/rtmutex.h>
 #include <linux/vmalloc.h>
 #include <linux/slab.h>
 #include <linux/list_lru.h>
@@ -72,7 +72,7 @@ struct binder_lru_page {
 
 /**
  * struct binder_alloc - per-binder proc state for binder allocator
- * @lock:               protects binder_alloc fields
+ * @mutex:              protects binder_alloc fields
  * @vma:                vm_area_struct passed to mmap_handler
  *                      (invariant after mmap)
  * @mm:                 copy of task->mm (invariant after open)
@@ -96,7 +96,7 @@ struct binder_lru_page {
  * struct binder_buffer objects used to track the user buffers
  */
 struct binder_alloc {
-	spinlock_t lock;
+	struct mutex mutex;
 	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long buffer;
@@ -153,9 +153,9 @@ binder_alloc_get_free_async_space(struct binder_alloc *alloc)
 {
 	size_t free_async_space;
 
-	spin_lock(&alloc->lock);
+	mutex_lock(&alloc->mutex);
 	free_async_space = alloc->free_async_space;
-	spin_unlock(&alloc->lock);
+	mutex_unlock(&alloc->mutex);
 	return free_async_space;
 }
 
-- 
2.47.0.277.g8800431eea-goog

From nobody Sat Nov 23 23:48:00 2024
Date: Fri, 8 Nov 2024 19:10:44 +0000
In-Reply-To: <20241108191057.3288442-1-cmllamas@google.com>
References: <20241108191057.3288442-1-cmllamas@google.com>
Message-ID: <20241108191057.3288442-3-cmllamas@google.com>
Subject: [PATCH v3 2/8] binder: concurrent page installation
From: Carlos Llamas <cmllamas@google.com>
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com,
    David Hildenbrand, Barry Song, "Liam R. Howlett"

Allow multiple callers to install pages simultaneously by downgrading
the mmap_sem to non-exclusive mode. Races to the same PTE are handled
using get_user_pages_remote() to retrieve the already installed page.
This method significantly reduces contention in the mmap semaphore.

To ensure safety, vma_lookup() is used (instead of alloc->vma) to avoid
operating on an isolated VMA. In addition, zap_page_range_single() is
called under the alloc->mutex to avoid racing with the shrinker.

Many thanks to Barry Song who posted a similar approach [1].

Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
Cc: David Hildenbrand
Cc: Barry Song
Cc: Suren Baghdasaryan
Cc: Liam R. Howlett
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 64 +++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 24 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 7241bf4a3ff2..2ab520c285b3 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -221,26 +221,14 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 				      struct binder_lru_page *lru_page,
 				      unsigned long addr)
 {
+	struct vm_area_struct *vma;
 	struct page *page;
-	int ret = 0;
+	long npages;
+	int ret;
 
 	if (!mmget_not_zero(alloc->mm))
 		return -ESRCH;
 
-	/*
-	 * Protected with mmap_sem in write mode as multiple tasks
-	 * might race to install the same page.
-	 */
-	mmap_write_lock(alloc->mm);
-	if (binder_get_installed_page(lru_page))
-		goto out;
-
-	if (!alloc->vma) {
-		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
-		ret = -ESRCH;
-		goto out;
-	}
-
 	page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
 	if (!page) {
 		pr_err("%d: failed to allocate page\n", alloc->pid);
@@ -248,19 +236,47 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		goto out;
 	}
 
-	ret = vm_insert_page(alloc->vma, addr, page);
-	if (ret) {
+	mmap_read_lock(alloc->mm);
+	vma = vma_lookup(alloc->mm, addr);
+	if (!vma || vma != alloc->vma) {
+		__free_page(page);
+		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
+		ret = -ESRCH;
+		goto unlock;
+	}
+
+	ret = vm_insert_page(vma, addr, page);
+	switch (ret) {
+	case -EBUSY:
+		/*
+		 * EBUSY is ok. Someone installed the pte first but the
+		 * lru_page->page_ptr has not been updated yet. Discard
+		 * our page and look up the one already installed.
+		 */
+		ret = 0;
+		__free_page(page);
+		npages = get_user_pages_remote(alloc->mm, addr, 1, 0, &page, NULL);
+		if (npages <= 0) {
+			pr_err("%d: failed to find page at offset %lx\n",
+			       alloc->pid, addr - alloc->buffer);
+			ret = -ESRCH;
+			break;
+		}
+		fallthrough;
+	case 0:
+		/* Mark page installation complete and safe to use */
+		binder_set_installed_page(lru_page, page);
+		break;
+	default:
+		__free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
 		       alloc->pid, __func__, addr - alloc->buffer, ret);
-		__free_page(page);
 		ret = -ENOMEM;
-		goto out;
+		break;
 	}
-
-	/* Mark page installation complete and safe to use */
-	binder_set_installed_page(lru_page, page);
+unlock:
+	mmap_read_unlock(alloc->mm);
 out:
-	mmap_write_unlock(alloc->mm);
 	mmput_async(alloc->mm);
 	return ret;
 }
@@ -1091,7 +1107,6 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	trace_binder_unmap_kernel_end(alloc, index);
 
 	list_lru_isolate(lru, item);
-	mutex_unlock(&alloc->mutex);
 	spin_unlock(lock);
 
 	if (vma) {
@@ -1102,6 +1117,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		trace_binder_unmap_user_end(alloc, index);
 	}
 
+	mutex_unlock(&alloc->mutex);
 	mmap_read_unlock(mm);
 	mmput_async(mm);
 	__free_page(page_to_free);
-- 
2.47.0.277.g8800431eea-goog

From nobody Sat Nov 23 23:48:00 2024
Date: Fri, 8 Nov 2024 19:10:45 +0000
In-Reply-To: <20241108191057.3288442-1-cmllamas@google.com>
References: <20241108191057.3288442-1-cmllamas@google.com>
Message-ID: <20241108191057.3288442-4-cmllamas@google.com>
Subject: [PATCH v3 3/8] binder: select correct nid for pages in LRU
From: Carlos Llamas <cmllamas@google.com>
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com,
    Nhat Pham, Johannes Weiner

The numa node id for binder pages is currently being derived from the
lru entry under struct binder_lru_page. However, this object doesn't
reflect the node id of the struct page items allocated separately.
Instead, select the correct node id from the page itself.

This was made possible since commit 0a97c01cd20b ("list_lru: allow
explicit memcg and NUMA node selection").
Cc: Nhat Pham
Cc: Johannes Weiner
Cc: Suren Baghdasaryan
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 2ab520c285b3..08daf0b75b12 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -210,7 +210,10 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 
 		trace_binder_free_lru_start(alloc, index);
 
-		ret = list_lru_add_obj(&binder_freelist, &page->lru);
+		ret = list_lru_add(&binder_freelist,
+				   &page->lru,
+				   page_to_nid(page->page_ptr),
+				   NULL);
 		WARN_ON(!ret);
 
 		trace_binder_free_lru_end(alloc, index);
@@ -333,7 +336,10 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		if (page->page_ptr) {
 			trace_binder_alloc_lru_start(alloc, index);
 
-			on_lru = list_lru_del_obj(&binder_freelist, &page->lru);
+			on_lru = list_lru_del(&binder_freelist,
+					      &page->lru,
+					      page_to_nid(page->page_ptr),
+					      NULL);
 			WARN_ON(!on_lru);
 
 			trace_binder_alloc_lru_end(alloc, index);
@@ -946,8 +952,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		if (!alloc->pages[i].page_ptr)
 			continue;
 
-		on_lru = list_lru_del_obj(&binder_freelist,
-					  &alloc->pages[i].lru);
+		on_lru = list_lru_del(&binder_freelist,
+				      &alloc->pages[i].lru,
+				      page_to_nid(alloc->pages[i].page_ptr),
+				      NULL);
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
				   "%s: %d: page %d %s\n",
				   __func__, alloc->pid, i,
-- 
2.47.0.277.g8800431eea-goog

From nobody Sat Nov 23 23:48:00 2024
smtp.client-ip=209.85.215.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731093077; cv=none; b=HqgpP0iRmR6QH4pwOb7OzKOpk9Q1hgkSfhScQA8tcbQmxOCZ4lbfDW2RPwr/nkk0b3YGbZ+JHucspJ2YFL7OWahWfKyXeKMVCBbOoKtnnGYsMYhk2DOoLUHLcxEe5X+Alpbajx6YTMSPd3u4zz4sefxEc21d8lTbBO4wXNZbjJA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731093077; c=relaxed/simple; bh=mu53YfN8uED/x8YaUska7SdjO4E3E/Vz0rWcDNfJHsI=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=AZSXI8TQmHyjlIJmSwZn/hlUC7AVT3Df1rEHm7WvOIWfe44ufong8MKZJ1nPaGj6Yrezf7YrPrR7MG1gywRZ9xWOr4WePRwLZNtEDaKdr3eg5kJhXubqYXTiSLQLmKnv7f/HW8yuIp79//mU05dSpUoZkKVFdzkNi1Bk9Lr//BM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--cmllamas.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=eiWMHnRU; arc=none smtp.client-ip=209.85.215.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--cmllamas.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="eiWMHnRU" Received: by mail-pg1-f202.google.com with SMTP id 41be03b00d2f7-7edd6662154so1784303a12.0 for ; Fri, 08 Nov 2024 11:11:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1731093075; x=1731697875; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=jjwtyry6zLO6+pprBafqQtAFXwH5HyWxsr6L5kHZH6E=; b=eiWMHnRUloVzcRbKa1BZ7TO1b8ZKR8Bcm43ThzKN5Y1vRAQBsgHrklhZCcPi6ufJCg 4Fz7yqEH8fEA25p8luMbxUm/0BpCpwsiVf+Ydb/h5u0tX+4+sd4BMr/I20eZsKG7VmMW 
x09ZL7vOh2EbqbLEeqRYeThtF/vA0hP8L2T3Al27JeQeuXSwWYFO+Xe9fjmQNSK2N/X5 Jf5h1T6UZSCMjNTI6Q0DZOPLS1ZsGhMCdKKIago65dKrhU5swIsPWslDYS4AcmCVOHbG 25pM0bSVeQ2CONkeHEDQ7GNjpdHdxlDvznCfkEISVIbvltHBbZ9p7L94aTFu+mJJLgtC PKIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1731093075; x=1731697875; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=jjwtyry6zLO6+pprBafqQtAFXwH5HyWxsr6L5kHZH6E=; b=Tsm7I+x3+vixSEkn7tNPw4HxSmQk7joFBeAowV7lT445SBb+bdlxihn2b7sj3JsmY7 QdNNCqdInTOslsn3D31IWlp1+rWJVMa9c9yJiWbk+wEyI2l4Xm6QQJCK9IfJhRRVhcvW dF/iMSM4QUsz13s6KLMJndUKXH5IyEdSOo79jXCfxybP/bMKXn8xMzhRtLJRpjifap1u R36nglgCqXcezD1zJH48qgOFLj5//C5aYKI/juA2Rnn25q2XsrWvWTiGcZEDworf7nqO INmrl8KUnTlSSF3+wr6zF1Wi7iuzVrOB5e5+78ATPQZrSSNdWdrERDR8OFm69cpHVaSa DYJA== X-Gm-Message-State: AOJu0YwF6IL9JkA1VFErmXfZr3XB89w9fb02unRiReOUB/8FzHx+ZfbB UDpC6jnXC6ykkb2MpDt6GLvAzX9ofX4ZgoVJb1EfzSVl59uqBd9ePsg7mVsH/yq+YD6zzVpLL2x YAyQd3Ejy/Q== X-Google-Smtp-Source: AGHT+IG783FyHLF2bMePMzlL7j8Na44LSW5vCrLFcmT012mbutGX0278sIQszn75mu8sgnbQjyDLjZxB0XSfig== X-Received: from xllamas.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5070]) (user=cmllamas job=sendgmr) by 2002:a05:6a02:4d02:b0:7ea:6d63:1070 with SMTP id 41be03b00d2f7-7f430aa6603mr29435a12.4.1731093074636; Fri, 08 Nov 2024 11:11:14 -0800 (PST) Date: Fri, 8 Nov 2024 19:10:46 +0000 In-Reply-To: <20241108191057.3288442-1-cmllamas@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241108191057.3288442-1-cmllamas@google.com> X-Mailer: git-send-email 2.47.0.277.g8800431eea-goog Message-ID: <20241108191057.3288442-5-cmllamas@google.com> Subject: [PATCH v3 4/8] binder: remove struct binder_lru_page From: Carlos Llamas To: Greg Kroah-Hartman , "=?UTF-8?q?Arve=20Hj=C3=B8nnev=C3=A5g?=" , Todd Kjos , Martijn Coenen , Joel Fernandes , Christian 
Brauner , Carlos Llamas , Suren Baghdasaryan Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Matthew Wilcox , "Liam R. Howlett" Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Remove the redundant struct binder_lru_page concept. Instead, let's use available struct page->lru and page->private members directly to achieve the same functionality. This reduces the maximum memory allocated for alloc->pages from 32768 down to 8192 bytes (aarch64). Savings are per binder instance. Cc: Matthew Wilcox Cc: Liam R. Howlett Reviewed-by: Suren Baghdasaryan Signed-off-by: Carlos Llamas --- drivers/android/binder_alloc.c | 104 ++++++++++++------------ drivers/android/binder_alloc.h | 16 +--- drivers/android/binder_alloc_selftest.c | 14 ++-- 3 files changed, 63 insertions(+), 71 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index 08daf0b75b12..457fa937aa8f 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -176,25 +176,26 @@ struct binder_buffer *binder_alloc_prepare_to_free(st= ruct binder_alloc *alloc, } =20 static inline void -binder_set_installed_page(struct binder_lru_page *lru_page, +binder_set_installed_page(struct binder_alloc *alloc, + unsigned long index, struct page *page) { /* Pairs with acquire in binder_get_installed_page() */ - smp_store_release(&lru_page->page_ptr, page); + smp_store_release(&alloc->pages[index], page); } =20 static inline struct page * -binder_get_installed_page(struct binder_lru_page *lru_page) +binder_get_installed_page(struct binder_alloc *alloc, unsigned long index) { /* Pairs with release in binder_set_installed_page() */ - return smp_load_acquire(&lru_page->page_ptr); + return smp_load_acquire(&alloc->pages[index]); } =20 static void binder_lru_freelist_add(struct binder_alloc *alloc, unsigned long start, unsigned long end) { - struct binder_lru_page *page; unsigned long page_addr; + struct page *page; =20 
 	trace_binder_update_page_range(alloc, false, start, end);
 
@@ -203,16 +204,15 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 		int ret;
 
 		index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		page = &alloc->pages[index];
-
-		if (!binder_get_installed_page(page))
+		page = binder_get_installed_page(alloc, index);
+		if (!page)
 			continue;
 
 		trace_binder_free_lru_start(alloc, index);
 
 		ret = list_lru_add(&binder_freelist,
 				   &page->lru,
-				   page_to_nid(page->page_ptr),
+				   page_to_nid(page),
 				   NULL);
 		WARN_ON(!ret);
 
@@ -220,8 +220,25 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 	}
 }
 
+static struct page *binder_page_alloc(struct binder_alloc *alloc,
+				      unsigned long index,
+				      unsigned long addr)
+{
+	struct page *page;
+
+	page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
+	if (!page)
+		return NULL;
+
+	page->private = (unsigned long)alloc;
+	INIT_LIST_HEAD(&page->lru);
+	page->index = index;
+
+	return page;
+}
+
 static int binder_install_single_page(struct binder_alloc *alloc,
-				      struct binder_lru_page *lru_page,
+				      unsigned long index,
 				      unsigned long addr)
 {
 	struct vm_area_struct *vma;
@@ -232,9 +249,8 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 	if (!mmget_not_zero(alloc->mm))
 		return -ESRCH;
 
-	page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
+	page = binder_page_alloc(alloc, index, addr);
 	if (!page) {
-		pr_err("%d: failed to allocate page\n", alloc->pid);
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -253,7 +269,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 	case -EBUSY:
 		/*
 		 * EBUSY is ok. Someone installed the pte first but the
-		 * lru_page->page_ptr has not been updated yet. Discard
+		 * alloc->pages[index] has not been updated yet. Discard
 		 * our page and look up the one already installed.
 		 */
 		ret = 0;
@@ -268,7 +284,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		fallthrough;
 	case 0:
 		/* Mark page installation complete and safe to use */
-		binder_set_installed_page(lru_page, page);
+		binder_set_installed_page(alloc, index, page);
 		break;
 	default:
 		__free_page(page);
@@ -288,7 +304,6 @@ static int binder_install_buffer_pages(struct binder_alloc *alloc,
 				       struct binder_buffer *buffer,
 				       size_t size)
 {
-	struct binder_lru_page *page;
 	unsigned long start, final;
 	unsigned long page_addr;
 
@@ -300,14 +315,12 @@ static int binder_install_buffer_pages(struct binder_alloc *alloc,
 		int ret;
 
 		index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		page = &alloc->pages[index];
-
-		if (binder_get_installed_page(page))
+		if (binder_get_installed_page(alloc, index))
 			continue;
 
 		trace_binder_alloc_page_start(alloc, index);
 
-		ret = binder_install_single_page(alloc, page, page_addr);
+		ret = binder_install_single_page(alloc, index, page_addr);
 		if (ret)
 			return ret;
 
@@ -321,8 +334,8 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 				    unsigned long start, unsigned long end)
 {
-	struct binder_lru_page *page;
 	unsigned long page_addr;
+	struct page *page;
 
 	trace_binder_update_page_range(alloc, true, start, end);
 
@@ -331,14 +344,14 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		bool on_lru;
 
 		index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		page = &alloc->pages[index];
+		page = binder_get_installed_page(alloc, index);
 
-		if (page->page_ptr) {
+		if (page) {
 			trace_binder_alloc_lru_start(alloc, index);
 
 			on_lru = list_lru_del(&binder_freelist,
 					      &page->lru,
-					      page_to_nid(page->page_ptr),
+					      page_to_nid(page),
 					      NULL);
 			WARN_ON(!on_lru);
 
@@ -759,11 +772,10 @@ static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
 			    (buffer->user_data - alloc->buffer);
 	pgoff_t pgoff = buffer_space_offset & ~PAGE_MASK;
 	size_t index = buffer_space_offset >> PAGE_SHIFT;
-	struct binder_lru_page *lru_page;
 
-	lru_page = &alloc->pages[index];
 	*pgoffp = pgoff;
-	return lru_page->page_ptr;
+
+	return binder_get_installed_page(alloc, index);
 }
 
 /**
@@ -838,7 +850,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 {
 	struct binder_buffer *buffer;
 	const char *failure_string;
-	int ret, i;
+	int ret;
 
 	if (unlikely(vma->vm_mm != alloc->mm)) {
 		ret = -EINVAL;
@@ -861,17 +873,12 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	alloc->pages = kvcalloc(alloc->buffer_size / PAGE_SIZE,
 				sizeof(alloc->pages[0]),
 				GFP_KERNEL);
-	if (alloc->pages == NULL) {
+	if (!alloc->pages) {
 		ret = -ENOMEM;
 		failure_string = "alloc page array";
 		goto err_alloc_pages_failed;
 	}
 
-	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-		alloc->pages[i].alloc = alloc;
-		INIT_LIST_HEAD(&alloc->pages[i].lru);
-	}
-
 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
 	if (!buffer) {
 		ret = -ENOMEM;
@@ -947,20 +954,22 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		int i;
 
 		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+			struct page *page;
 			bool on_lru;
 
-			if (!alloc->pages[i].page_ptr)
+			page = binder_get_installed_page(alloc, i);
+			if (!page)
 				continue;
 
 			on_lru = list_lru_del(&binder_freelist,
-					      &alloc->pages[i].lru,
-					      page_to_nid(alloc->pages[i].page_ptr),
+					      &page->lru,
+					      page_to_nid(page),
 					      NULL);
 			binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 					   "%s: %d: page %d %s\n",
 					   __func__, alloc->pid, i,
 					   on_lru ? "on lru" : "active");
-			__free_page(alloc->pages[i].page_ptr);
+			__free_page(page);
 			page_count++;
 		}
 	}
@@ -1009,7 +1018,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 void binder_alloc_print_pages(struct seq_file *m,
 			      struct binder_alloc *alloc)
 {
-	struct binder_lru_page *page;
+	struct page *page;
 	int i;
 	int active = 0;
 	int lru = 0;
@@ -1022,8 +1031,8 @@ void binder_alloc_print_pages(struct seq_file *m,
 	 */
 	if (binder_alloc_get_vma(alloc) != NULL) {
 		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-			page = &alloc->pages[i];
-			if (!page->page_ptr)
+			page = binder_get_installed_page(alloc, i);
+			if (!page)
 				free++;
 			else if (list_empty(&page->lru))
 				active++;
@@ -1083,11 +1092,10 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 				       void *cb_arg)
 	__must_hold(lock)
 {
-	struct binder_lru_page *page = container_of(item, typeof(*page), lru);
-	struct binder_alloc *alloc = page->alloc;
+	struct page *page = container_of(item, typeof(*page), lru);
+	struct binder_alloc *alloc = (struct binder_alloc *)page->private;
 	struct mm_struct *mm = alloc->mm;
 	struct vm_area_struct *vma;
-	struct page *page_to_free;
 	unsigned long page_addr;
 	size_t index;
 
@@ -1097,10 +1105,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_mmap_read_lock_failed;
 	if (!mutex_trylock(&alloc->mutex))
 		goto err_get_alloc_mutex_failed;
-	if (!page->page_ptr)
-		goto err_page_already_freed;
 
-	index = page - alloc->pages;
+	index = page->index;
 	page_addr = alloc->buffer + index * PAGE_SIZE;
 
 	vma = vma_lookup(mm, page_addr);
@@ -1109,8 +1115,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 
 	trace_binder_unmap_kernel_start(alloc, index);
 
-	page_to_free = page->page_ptr;
-	page->page_ptr = NULL;
+	binder_set_installed_page(alloc, index, NULL);
 
 	trace_binder_unmap_kernel_end(alloc, index);
 
@@ -1128,13 +1133,12 @@
 	mutex_unlock(&alloc->mutex);
 	mmap_read_unlock(mm);
 	mmput_async(mm);
-	__free_page(page_to_free);
+	__free_page(page);
 
 	spin_lock(lock);
 	return LRU_REMOVED_RETRY;
 
 err_invalid_vma:
-err_page_already_freed:
 	mutex_unlock(&alloc->mutex);
 err_get_alloc_mutex_failed:
 	mmap_read_unlock(mm);
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index a5181916942e..5c2473e95494 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -58,18 +58,6 @@ struct binder_buffer {
 	int pid;
 };
 
-/**
- * struct binder_lru_page - page object used for binder shrinker
- * @page_ptr: pointer to physical page in mmap'd space
- * @lru:      entry in binder_freelist
- * @alloc:    binder_alloc for a proc
- */
-struct binder_lru_page {
-	struct list_head lru;
-	struct page *page_ptr;
-	struct binder_alloc *alloc;
-};
-
 /**
  * struct binder_alloc - per-binder proc state for binder allocator
  * @mutex:              protects binder_alloc fields
@@ -83,7 +71,7 @@ struct binder_lru_page {
  * @allocated_buffers:  rb tree of allocated buffers sorted by address
  * @free_async_space:   VA space available for async buffers.
  *                      This is initialized at mmap time to 1/2 the full VA space
- * @pages:              array of binder_lru_page
+ * @pages:              array of struct page *
  * @buffer_size:        size of address space specified via mmap
  * @pid:                pid for associated binder_proc (invariant after init)
  * @pages_high:         high watermark of offset in @pages
@@ -104,7 +92,7 @@ struct binder_alloc {
 	struct rb_root free_buffers;
 	struct rb_root allocated_buffers;
 	size_t free_async_space;
-	struct binder_lru_page *pages;
+	struct page **pages;
 	size_t buffer_size;
 	int pid;
 	size_t pages_high;
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index 81442fe20a69..c6941b9abad9 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -105,10 +105,10 @@ static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
 	page_addr = buffer->user_data;
 	for (; page_addr < end; page_addr += PAGE_SIZE) {
 		page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
-		if (!alloc->pages[page_index].page_ptr ||
-		    !list_empty(&alloc->pages[page_index].lru)) {
+		if (!alloc->pages[page_index] ||
+		    !list_empty(&alloc->pages[page_index]->lru)) {
 			pr_err("expect alloc but is %s at page index %d\n",
-			       alloc->pages[page_index].page_ptr ?
+			       alloc->pages[page_index] ?
 			       "lru" : "free", page_index);
 			return false;
 		}
@@ -148,10 +148,10 @@ static void binder_selftest_free_buf(struct binder_alloc *alloc,
 		 * if binder shrinker ran during binder_alloc_free_buf
 		 * calls above.
 		 */
-		if (list_empty(&alloc->pages[i].lru)) {
+		if (list_empty(&alloc->pages[i]->lru)) {
 			pr_err_size_seq(sizes, seq);
 			pr_err("expect lru but is %s at page index %d\n",
-			       alloc->pages[i].page_ptr ? "alloc" : "free", i);
+			       alloc->pages[i] ? "alloc" : "free", i);
 			binder_selftest_failures++;
 		}
 	}
@@ -168,9 +168,9 @@ static void binder_selftest_free_page(struct binder_alloc *alloc)
 	}
 
 	for (i = 0; i < (alloc->buffer_size / PAGE_SIZE); i++) {
-		if (alloc->pages[i].page_ptr) {
+		if (alloc->pages[i]) {
 			pr_err("expect free but is %s at page index %d\n",
-			       list_empty(&alloc->pages[i].lru) ?
+			       list_empty(&alloc->pages[i]->lru) ?
 			       "alloc" : "lru", i);
 			binder_selftest_failures++;
 		}
-- 
2.47.0.277.g8800431eea-goog

From nobody Sat Nov 23 23:48:00 2024
Date: Fri, 8 Nov 2024 19:10:47 +0000
In-Reply-To: <20241108191057.3288442-1-cmllamas@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241108191057.3288442-1-cmllamas@google.com>
X-Mailer: git-send-email 2.47.0.277.g8800431eea-goog
Message-ID: <20241108191057.3288442-6-cmllamas@google.com>
Subject: [PATCH v3 5/8] binder: replace alloc->vma with alloc->mapped
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Minchan Kim,
    "Liam R. Howlett", Matthew Wilcox
Content-Type: text/plain; charset="utf-8"

It is unsafe to use alloc->vma outside of the mmap_sem. Instead, add a
new boolean alloc->mapped to save the vma state (mapped or unmapped)
and use it in place of alloc->vma to validate several paths. Using
alloc->vma caused several performance and security issues in the past.
Now that it has been replaced with either vma_lookup() or the
alloc->mapped state, we can finally remove it.

Cc: Minchan Kim
Cc: Liam R. Howlett
Cc: Matthew Wilcox
Cc: Suren Baghdasaryan
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c          | 43 ++++++++++++-------------
 drivers/android/binder_alloc.h          |  6 ++--
 drivers/android/binder_alloc_selftest.c |  2 +-
 3 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 457fa937aa8f..a6697b5a4b2f 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -220,6 +220,19 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 	}
 }
 
+static inline
+void binder_alloc_set_mapped(struct binder_alloc *alloc, bool state)
+{
+	/* pairs with smp_load_acquire in binder_alloc_is_mapped() */
+	smp_store_release(&alloc->mapped, state);
+}
+
+static inline bool binder_alloc_is_mapped(struct binder_alloc *alloc)
+{
+	/* pairs with smp_store_release in binder_alloc_set_mapped() */
+	return smp_load_acquire(&alloc->mapped);
+}
+
 static struct page *binder_page_alloc(struct binder_alloc *alloc,
 				      unsigned long index,
 				      unsigned long addr)
@@ -257,7 +270,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 
 	mmap_read_lock(alloc->mm);
 	vma = vma_lookup(alloc->mm, addr);
-	if (!vma || vma != alloc->vma) {
+	if (!vma || !binder_alloc_is_mapped(alloc)) {
 		__free_page(page);
 		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
 		ret = -ESRCH;
@@ -364,20 +377,6 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 	}
 }
 
-static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
-					struct vm_area_struct *vma)
-{
-	/* pairs with smp_load_acquire in binder_alloc_get_vma() */
-	smp_store_release(&alloc->vma, vma);
-}
-
-static inline struct vm_area_struct *binder_alloc_get_vma(
-		struct binder_alloc *alloc)
-{
-	/* pairs with smp_store_release in binder_alloc_set_vma() */
-	return smp_load_acquire(&alloc->vma);
-}
-
 static void debug_no_space_locked(struct binder_alloc *alloc)
 {
 	size_t largest_alloc_size = 0;
@@ -611,7 +610,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	int ret;
 
 	/* Check binder_alloc is fully initialized */
-	if (!binder_alloc_get_vma(alloc)) {
+	if (!binder_alloc_is_mapped(alloc)) {
 		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
 				   "%d: binder_alloc_buf, no vma\n",
 				   alloc->pid);
@@ -893,7 +892,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	alloc->free_async_space = alloc->buffer_size / 2;
 
 	/* Signal binder_alloc is fully initialized */
-	binder_alloc_set_vma(alloc, vma);
+	binder_alloc_set_mapped(alloc, true);
 
 	return 0;
 
@@ -923,7 +922,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 
 	buffers = 0;
 	mutex_lock(&alloc->mutex);
-	BUG_ON(alloc->vma);
+	BUG_ON(alloc->mapped);
 
 	while ((n = rb_first(&alloc->allocated_buffers))) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
@@ -1029,7 +1028,7 @@ void binder_alloc_print_pages(struct seq_file *m,
 	 * Make sure the binder_alloc is fully initialized, otherwise we might
 	 * read inconsistent state.
 	 */
-	if (binder_alloc_get_vma(alloc) != NULL) {
+	if (binder_alloc_is_mapped(alloc)) {
 		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
 			page = binder_get_installed_page(alloc, i);
 			if (!page)
@@ -1069,12 +1068,12 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
 * @alloc: binder_alloc for this proc
 *
 * Called from binder_vma_close() when releasing address space.
- * Clears alloc->vma to prevent new incoming transactions from
+ * Clears alloc->mapped to prevent new incoming transactions from
 * allocating more buffers.
 */
 void binder_alloc_vma_close(struct binder_alloc *alloc)
 {
-	binder_alloc_set_vma(alloc, NULL);
+	binder_alloc_set_mapped(alloc, false);
 }
 
 /**
@@ -1110,7 +1109,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	page_addr = alloc->buffer + index * PAGE_SIZE;
 
 	vma = vma_lookup(mm, page_addr);
-	if (vma && vma != binder_alloc_get_vma(alloc))
+	if (vma && !binder_alloc_is_mapped(alloc))
 		goto err_invalid_vma;
 
 	trace_binder_unmap_kernel_start(alloc, index);
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index 5c2473e95494..a3c043cb8343 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -61,8 +61,6 @@ struct binder_buffer {
 /**
  * struct binder_alloc - per-binder proc state for binder allocator
  * @mutex:              protects binder_alloc fields
- * @vma:                vm_area_struct passed to mmap_handler
- *                      (invariant after mmap)
  * @mm:                 copy of task->mm (invariant after open)
  * @buffer:             base of per-proc address space mapped via mmap
  * @buffers:            list of all buffers for this proc
@@ -75,6 +73,8 @@ struct binder_buffer {
  * @buffer_size:        size of address space specified via mmap
  * @pid:                pid for associated binder_proc (invariant after init)
  * @pages_high:         high watermark of offset in @pages
+ * @mapped:             whether the vm area is mapped, each binder instance is
+ *                      allowed a single mapping throughout its lifetime
  * @oneway_spam_detected: %true if oneway spam detection fired, clear that
  *                      flag once the async buffer has returned to a healthy state
 *
@@ -85,7 +85,6 @@ struct binder_buffer {
 struct binder_alloc {
 	struct mutex mutex;
-	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long buffer;
 	struct list_head buffers;
@@ -96,6 +95,7 @@ struct binder_alloc {
 	size_t buffer_size;
 	int pid;
 	size_t pages_high;
+	bool mapped;
 	bool oneway_spam_detected;
 };
 
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index c6941b9abad9..2dda82d0d5e8 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -291,7 +291,7 @@ void binder_selftest_alloc(struct binder_alloc *alloc)
 	if (!binder_selftest_run)
 		return;
 	mutex_lock(&binder_selftest_lock);
-	if (!binder_selftest_run || !alloc->vma)
+	if (!binder_selftest_run || !alloc->mapped)
 		goto done;
 	pr_info("STARTED\n");
 	binder_selftest_alloc_offset(alloc, end_offset, 0);
-- 
2.47.0.277.g8800431eea-goog

From nobody Sat Nov 23 23:48:00 2024
Date: Fri, 8 Nov 2024 19:10:48 +0000
In-Reply-To: <20241108191057.3288442-1-cmllamas@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241108191057.3288442-1-cmllamas@google.com>
X-Mailer: git-send-email 2.47.0.277.g8800431eea-goog
Message-ID: <20241108191057.3288442-7-cmllamas@google.com>
Subject: [PATCH v3 6/8] binder: rename alloc->buffer to vm_start
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com
Content-Type: text/plain; charset="utf-8"

The alloc->buffer field in struct binder_alloc stores the starting
address of the mapped vma. Rename this field to alloc->vm_start to
better reflect its purpose. It also avoids confusion with the binder
buffer concept, e.g. transaction->buffer.

No functional changes in this patch.
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder.c                |  2 +-
 drivers/android/binder_alloc.c          | 28 +++++++++++-------------
 drivers/android/binder_alloc.h          |  4 ++--
 drivers/android/binder_alloc_selftest.c |  2 +-
 drivers/android/binder_trace.h          |  2 +-
 5 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 978740537a1a..57265cabec43 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -6350,7 +6350,7 @@ static void print_binder_transaction_ilocked(struct seq_file *m,
 		seq_printf(m, " node %d", buffer->target_node->debug_id);
 	seq_printf(m, " size %zd:%zd offset %lx\n",
 		   buffer->data_size, buffer->offsets_size,
-		   proc->alloc.buffer - buffer->user_data);
+		   proc->alloc.vm_start - buffer->user_data);
 }
 
 static void print_binder_work_ilocked(struct seq_file *m,
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index a6697b5a4b2f..3716ffd00baf 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -61,7 +61,7 @@ static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
 				       struct binder_buffer *buffer)
 {
 	if (list_is_last(&buffer->entry, &alloc->buffers))
-		return alloc->buffer + alloc->buffer_size - buffer->user_data;
+		return alloc->vm_start + alloc->buffer_size - buffer->user_data;
 	return binder_buffer_next(buffer)->user_data - buffer->user_data;
 }
 
@@ -203,7 +203,7 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 		size_t index;
 		int ret;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		page = binder_get_installed_page(alloc, index);
 		if (!page)
 			continue;
@@ -290,7 +290,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		npages = get_user_pages_remote(alloc->mm, addr, 1, 0, &page, NULL);
 		if (npages <= 0) {
 			pr_err("%d: failed to find page at offset %lx\n",
-			       alloc->pid, addr - alloc->buffer);
+			       alloc->pid, addr - alloc->vm_start);
 			ret = -ESRCH;
 			break;
 		}
@@ -302,7 +302,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 	default:
 		__free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
-		       alloc->pid, __func__, addr - alloc->buffer, ret);
+		       alloc->pid, __func__, addr - alloc->vm_start, ret);
 		ret = -ENOMEM;
 		break;
 	}
@@ -327,7 +327,7 @@ static int binder_install_buffer_pages(struct binder_alloc *alloc,
 		unsigned long index;
 		int ret;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		if (binder_get_installed_page(alloc, index))
 			continue;
 
@@ -356,7 +356,7 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		unsigned long index;
 		bool on_lru;
 
-		index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		page = binder_get_installed_page(alloc, index);
 
 		if (page) {
@@ -708,8 +708,8 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
 	BUG_ON(buffer->free);
 	BUG_ON(size > buffer_size);
 	BUG_ON(buffer->transaction != NULL);
-	BUG_ON(buffer->user_data < alloc->buffer);
-	BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
+	BUG_ON(buffer->user_data < alloc->vm_start);
+	BUG_ON(buffer->user_data > alloc->vm_start + alloc->buffer_size);
 
 	if (buffer->async_transaction) {
 		alloc->free_async_space += buffer_size;
@@ -768,7 +768,7 @@ static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
 					  pgoff_t *pgoffp)
 {
 	binder_size_t buffer_space_offset = buffer_offset +
-		(buffer->user_data - alloc->buffer);
+		(buffer->user_data - alloc->vm_start);
 	pgoff_t pgoff = buffer_space_offset & ~PAGE_MASK;
 	size_t index = buffer_space_offset >> PAGE_SHIFT;
 
@@ -867,7 +867,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 				   SZ_4M);
 	mutex_unlock(&binder_alloc_mmap_lock);
 
-	alloc->buffer = vma->vm_start;
+	alloc->vm_start = vma->vm_start;
 
 	alloc->pages = kvcalloc(alloc->buffer_size / PAGE_SIZE,
 				sizeof(alloc->pages[0]),
@@ -885,7 +885,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 		goto err_alloc_buf_struct_failed;
 	}
 
-	buffer->user_data = alloc->buffer;
+	buffer->user_data = alloc->vm_start;
 	list_add(&buffer->entry, &alloc->buffers);
 	buffer->free = 1;
 	binder_insert_free_buffer(alloc, buffer);
@@ -900,7 +900,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	kvfree(alloc->pages);
 	alloc->pages = NULL;
 err_alloc_pages_failed:
-	alloc->buffer = 0;
+	alloc->vm_start = 0;
 	mutex_lock(&binder_alloc_mmap_lock);
 	alloc->buffer_size = 0;
 err_already_mapped:
@@ -1001,7 +1001,7 @@ void binder_alloc_print_allocated(struct seq_file *m,
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 		seq_printf(m, "  buffer %d: %lx size %zd:%zd:%zd %s\n",
 			   buffer->debug_id,
-			   buffer->user_data - alloc->buffer,
+			   buffer->user_data - alloc->vm_start,
 			   buffer->data_size, buffer->offsets_size,
 			   buffer->extra_buffers_size,
 			   buffer->transaction ? "active" : "delivered");
@@ -1106,7 +1106,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_get_alloc_mutex_failed;
 
 	index = page->index;
-	page_addr = alloc->buffer + index * PAGE_SIZE;
+	page_addr = alloc->vm_start + index * PAGE_SIZE;
 
 	vma = vma_lookup(mm, page_addr);
 	if (vma && !binder_alloc_is_mapped(alloc))
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index a3c043cb8343..088829ce6668 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -62,7 +62,7 @@ struct binder_buffer {
  * struct binder_alloc - per-binder proc state for binder allocator
  * @mutex:              protects binder_alloc fields
  * @mm:                 copy of task->mm (invariant after open)
- * @buffer:             base of per-proc address space mapped via mmap
+ * @vm_start:           base of per-proc address space mapped via mmap
  * @buffers:            list of all buffers for this proc
  * @free_buffers:       rb tree of buffers available for allocation
  *                      sorted by size
@@ -86,7 +86,7 @@ struct binder_buffer {
 struct binder_alloc {
 	struct mutex mutex;
 	struct mm_struct *mm;
-	unsigned long buffer;
+	unsigned long vm_start;
 	struct list_head buffers;
 	struct rb_root free_buffers;
 	struct rb_root allocated_buffers;
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index 2dda82d0d5e8..d2d086d2c037 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -104,7 +104,7 @@ static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
 	end = PAGE_ALIGN(buffer->user_data + size);
 	page_addr = buffer->user_data;
 	for (; page_addr < end; page_addr += PAGE_SIZE) {
-		page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
+		page_index = (page_addr - alloc->vm_start) / PAGE_SIZE;
 		if (!alloc->pages[page_index] ||
 		    !list_empty(&alloc->pages[page_index]->lru)) {
 			pr_err("expect alloc but is %s at page index %d\n",
diff --git a/drivers/android/binder_trace.h b/drivers/android/binder_trace.h
index fe38c6fc65d0..16de1b9e72f7 100644
--- a/drivers/android/binder_trace.h
+++ b/drivers/android/binder_trace.h
@@ -328,7 +328,7 @@ TRACE_EVENT(binder_update_page_range,
 	TP_fast_assign(
 		__entry->proc = alloc->pid;
 		__entry->allocate = allocate;
-		__entry->offset = start - alloc->buffer;
+		__entry->offset = start - alloc->vm_start;
 		__entry->size = end - start;
 	),
 	TP_printk("proc=%d allocate=%d offset=%zu size=%zu",
-- 
2.47.0.277.g8800431eea-goog

From nobody Sat Nov 23 23:48:00 2024
Date: Fri, 8 Nov 2024 19:10:49 +0000
In-Reply-To: <20241108191057.3288442-1-cmllamas@google.com>
Mime-Version: 1.0
References: <20241108191057.3288442-1-cmllamas@google.com>
Message-ID: <20241108191057.3288442-8-cmllamas@google.com>
Subject: [PATCH v3 7/8] binder: use per-vma lock in page installation
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Nhat Pham,
	Johannes Weiner, Barry Song, Hillf Danton, Lorenzo Stoakes
Content-Type: text/plain; charset="utf-8"

Use per-vma locking for concurrent page installations; this minimizes
contention with unrelated vmas, improving performance. The mmap_lock is
still acquired when needed, e.g. before get_user_pages_remote().

Many thanks to Barry Song, who posted a similar approach [1].
Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
Cc: Nhat Pham
Cc: Johannes Weiner
Cc: Barry Song
Cc: Suren Baghdasaryan
Cc: Hillf Danton
Cc: Lorenzo Stoakes
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 61 +++++++++++++++++++++++++---------
 1 file changed, 45 insertions(+), 16 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 3716ffd00baf..7d2cad9beebb 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -233,6 +233,48 @@ static inline bool binder_alloc_is_mapped(struct binder_alloc *alloc)
 	return smp_load_acquire(&alloc->mapped);
 }
 
+static struct page *binder_page_lookup(struct binder_alloc *alloc,
+				       unsigned long addr)
+{
+	struct mm_struct *mm = alloc->mm;
+	struct page *page;
+	long ret = 0;
+
+	mmap_read_lock(mm);
+	if (binder_alloc_is_mapped(alloc))
+		ret = get_user_pages_remote(mm, addr, 1, 0, &page, NULL);
+	mmap_read_unlock(mm);
+
+	return ret > 0 ? page : NULL;
+}
+
+static int binder_page_insert(struct binder_alloc *alloc,
+			      unsigned long addr,
+			      struct page *page)
+{
+	struct mm_struct *mm = alloc->mm;
+	struct vm_area_struct *vma;
+	int ret = -ESRCH;
+
+	/* attempt per-vma lock first */
+	vma = lock_vma_under_rcu(mm, addr);
+	if (vma) {
+		if (binder_alloc_is_mapped(alloc))
+			ret = vm_insert_page(vma, addr, page);
+		vma_end_read(vma);
+		return ret;
+	}
+
+	/* fall back to mmap_lock */
+	mmap_read_lock(mm);
+	vma = vma_lookup(mm, addr);
+	if (vma && binder_alloc_is_mapped(alloc))
+		ret = vm_insert_page(vma, addr, page);
+	mmap_read_unlock(mm);
+
+	return ret;
+}
+
 static struct page *binder_page_alloc(struct binder_alloc *alloc,
 				      unsigned long index,
 				      unsigned long addr)
@@ -254,9 +296,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 					  unsigned long index,
 					  unsigned long addr)
 {
-	struct vm_area_struct *vma;
 	struct page *page;
-	long npages;
 	int ret;
 
 	if (!mmget_not_zero(alloc->mm))
@@ -268,16 +308,7 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		goto out;
 	}
 
-	mmap_read_lock(alloc->mm);
-	vma = vma_lookup(alloc->mm, addr);
-	if (!vma || !binder_alloc_is_mapped(alloc)) {
-		__free_page(page);
-		pr_err("%d: %s failed, no vma\n", alloc->pid, __func__);
-		ret = -ESRCH;
-		goto unlock;
-	}
-
-	ret = vm_insert_page(vma, addr, page);
+	ret = binder_page_insert(alloc, addr, page);
 	switch (ret) {
 	case -EBUSY:
 		/*
@@ -287,8 +318,8 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		 */
 		ret = 0;
 		__free_page(page);
-		npages = get_user_pages_remote(alloc->mm, addr, 1, 0, &page, NULL);
-		if (npages <= 0) {
+		page = binder_page_lookup(alloc, addr);
+		if (!page) {
 			pr_err("%d: failed to find page at offset %lx\n",
 			       alloc->pid, addr - alloc->vm_start);
 			ret = -ESRCH;
@@ -306,8 +337,6 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		ret = -ENOMEM;
 		break;
 	}
-unlock:
-	mmap_read_unlock(alloc->mm);
 out:
 	mmput_async(alloc->mm);
 	return ret;
-- 
2.47.0.277.g8800431eea-goog

From nobody Sat Nov 23 23:48:00 2024
Date: Fri, 8 Nov 2024 19:10:50 +0000
In-Reply-To: <20241108191057.3288442-1-cmllamas@google.com>
Mime-Version: 1.0
References: <20241108191057.3288442-1-cmllamas@google.com>
Message-ID: <20241108191057.3288442-9-cmllamas@google.com>
Subject: [PATCH v3 8/8] binder: propagate vm_insert_page() errors
From: Carlos Llamas
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com
Content-Type: text/plain; charset="utf-8"

Instead of always overriding errors with -ENOMEM, propagate the specific
error code returned by vm_insert_page(). This allows for more accurate
error logs and handling.

Cc: Suren Baghdasaryan
Signed-off-by: Carlos Llamas
---
 drivers/android/binder_alloc.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 7d2cad9beebb..dd15519e321f 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -334,7 +334,6 @@ static int binder_install_single_page(struct binder_alloc *alloc,
 		__free_page(page);
 		pr_err("%d: %s failed to insert page at offset %lx with %d\n",
 		       alloc->pid, __func__, addr - alloc->vm_start, ret);
-		ret = -ENOMEM;
 		break;
 	}
 out:
-- 
2.47.0.277.g8800431eea-goog