From nobody Mon Oct  6 20:58:58 2025
Date: Wed, 16 Jul 2025 18:10:04 -0700
In-Reply-To: <20250717011011.3365074-1-ynaffit@google.com>
References: <20250717011011.3365074-1-ynaffit@google.com>
Message-ID: <20250717011011.3365074-2-ynaffit@google.com>
Subject: [PATCH v4 1/6] binder: Fix selftest page indexing
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
    David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
    kunit-dev@googlegroups.com

The binder allocator selftest was only checking the last page of buffers
that ended on a page boundary. Correct the page indexing to account for
buffers that are not page-aligned.

Acked-by: Carlos Llamas
Signed-off-by: Tiffany Yang
Reviewed-by: Joel Fernandes
Reviewed-by: Kees Cook
---
v4:
 * Fixed unaligned comment
 * Collected tags
---
 drivers/android/binder_alloc_selftest.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index c88735c54848..de5bd848d042 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -142,7 +142,7 @@ static void binder_selftest_free_buf(struct binder_alloc *alloc,
 	for (i = 0; i < BUFFER_NUM; i++)
 		binder_alloc_free_buf(alloc, buffers[seq[i]]);
 
-	for (i = 0; i < end / PAGE_SIZE; i++) {
+	for (i = 0; i <= (end - 1) / PAGE_SIZE; i++) {
 		/**
 		 * Error message on a free page can be false positive
 		 * if binder shrinker ran during binder_alloc_free_buf
-- 
2.50.0.727.gbf7dc18ff4-goog
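To see what the one-line change above covers, take PAGE_SIZE = 4096 (assumed here only for the arithmetic) and a buffer whose last byte lands at offset 5000, i.e. partway into the second page:

	/* old bound:  end / PAGE_SIZE      -> 5000 / 4096 == 1, so only page 0 is checked     */
	/* new bound: (end - 1) / PAGE_SIZE -> 4999 / 4096 == 1, so i <= 1 checks pages 0 and 1 */

Only when end was an exact multiple of PAGE_SIZE did the old loop reach the page holding the buffer's final byte.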
From nobody Mon Oct  6 20:58:58 2025
Date: Wed, 16 Jul 2025 18:10:05 -0700
In-Reply-To: <20250717011011.3365074-1-ynaffit@google.com>
References: <20250717011011.3365074-1-ynaffit@google.com>
Message-ID: <20250717011011.3365074-3-ynaffit@google.com>
Subject: [PATCH v4 2/6] binder: Store lru freelist in binder_alloc
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
    David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
    kunit-dev@googlegroups.com
text/plain; charset="utf-8" Store a pointer to the free pages list that the binder allocator should use for a process inside of struct binder_alloc. This change allows binder allocator code to be tested and debugged deterministically while a system is using binder; i.e., without interfering with other binder processes and independently of the shrinker. This is necessary to convert the current binder_alloc_selftest into a kunit test that does not rely on hijacking an existing binder_proc to run. A binder process's binder_alloc->freelist should not be changed after it is initialized. A sole exception is the process that runs the existing binder_alloc selftest. Its freelist can be temporarily replaced for the duration of the test because it runs as a single thread before any pages can be added to the global binder freelist, and the test frees every page it allocates before dropping the binder_selftest_lock. This exception allows the existing selftest to be used to check for regressions, but it will be dropped when the binder_alloc tests are converted to kunit in a subsequent patch in this series. Acked-by: Carlos Llamas Signed-off-by: Tiffany Yang Reviewed-by: Joel Fernandes Reviewed-by: Kees Cook --- v4: * Collected tag --- drivers/android/binder_alloc.c | 25 +++++++---- drivers/android/binder_alloc.h | 3 +- drivers/android/binder_alloc_selftest.c | 59 ++++++++++++++++++++----- 3 files changed, 67 insertions(+), 20 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index fcfaf1b899c8..2e89f9127883 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -26,7 +26,7 @@ #include "binder_alloc.h" #include "binder_trace.h" =20 -struct list_lru binder_freelist; +static struct list_lru binder_freelist; =20 static DEFINE_MUTEX(binder_alloc_mmap_lock); =20 @@ -210,7 +210,7 @@ static void binder_lru_freelist_add(struct binder_alloc= *alloc, =20 trace_binder_free_lru_start(alloc, index); =20 - ret =3D list_lru_add(&binder_freelist, + ret =3D list_lru_add(alloc->freelist, page_to_lru(page), page_to_nid(page), NULL); @@ -409,7 +409,7 @@ static void binder_lru_freelist_del(struct binder_alloc= *alloc, if (page) { trace_binder_alloc_lru_start(alloc, index); =20 - on_lru =3D list_lru_del(&binder_freelist, + on_lru =3D list_lru_del(alloc->freelist, page_to_lru(page), page_to_nid(page), NULL); @@ -1007,7 +1007,7 @@ void binder_alloc_deferred_release(struct binder_allo= c *alloc) if (!page) continue; =20 - on_lru =3D list_lru_del(&binder_freelist, + on_lru =3D list_lru_del(alloc->freelist, page_to_lru(page), page_to_nid(page), NULL); @@ -1229,6 +1229,17 @@ binder_shrink_scan(struct shrinker *shrink, struct s= hrink_control *sc) =20 static struct shrinker *binder_shrinker; =20 +static void __binder_alloc_init(struct binder_alloc *alloc, + struct list_lru *freelist) +{ + alloc->pid =3D current->group_leader->pid; + alloc->mm =3D current->mm; + mmgrab(alloc->mm); + mutex_init(&alloc->mutex); + INIT_LIST_HEAD(&alloc->buffers); + alloc->freelist =3D freelist; +} + /** * binder_alloc_init() - called by binder_open() for per-proc initializati= on * @alloc: binder_alloc for this proc @@ -1238,11 +1249,7 @@ static struct shrinker *binder_shrinker; */ void binder_alloc_init(struct binder_alloc *alloc) { - alloc->pid =3D current->group_leader->pid; - alloc->mm =3D current->mm; - mmgrab(alloc->mm); - mutex_init(&alloc->mutex); - INIT_LIST_HEAD(&alloc->buffers); + __binder_alloc_init(alloc, &binder_freelist); } =20 int binder_alloc_shrinker_init(void) diff 
--git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index feecd7414241..aa05a9df1360 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -15,7 +15,6 @@ #include #include =20 -extern struct list_lru binder_freelist; struct binder_transaction; =20 /** @@ -91,6 +90,7 @@ static inline struct list_head *page_to_lru(struct page *= p) * @free_async_space: VA space available for async buffers. This is * initialized at mmap time to 1/2 the full VA space * @pages: array of struct page * + * @freelist: lru list to use for free pages (invariant after in= it) * @buffer_size: size of address space specified via mmap * @pid: pid for associated binder_proc (invariant after in= it) * @pages_high: high watermark of offset in @pages @@ -113,6 +113,7 @@ struct binder_alloc { struct rb_root allocated_buffers; size_t free_async_space; struct page **pages; + struct list_lru *freelist; size_t buffer_size; int pid; size_t pages_high; diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/bind= er_alloc_selftest.c index de5bd848d042..8b18b22aa3de 100644 --- a/drivers/android/binder_alloc_selftest.c +++ b/drivers/android/binder_alloc_selftest.c @@ -8,8 +8,9 @@ =20 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt =20 -#include #include +#include +#include #include "binder_alloc.h" =20 #define BUFFER_NUM 5 @@ -18,6 +19,7 @@ static bool binder_selftest_run =3D true; static int binder_selftest_failures; static DEFINE_MUTEX(binder_selftest_lock); +static struct list_lru binder_selftest_freelist; =20 /** * enum buf_end_align_type - Page alignment of a buffer @@ -143,11 +145,6 @@ static void binder_selftest_free_buf(struct binder_all= oc *alloc, binder_alloc_free_buf(alloc, buffers[seq[i]]); =20 for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { - /** - * Error message on a free page can be false positive - * if binder shrinker ran during binder_alloc_free_buf - * calls above. - */ if (list_empty(page_to_lru(alloc->pages[i]))) { pr_err_size_seq(sizes, seq); pr_err("expect lru but is %s at page index %d\n", @@ -162,8 +159,8 @@ static void binder_selftest_free_page(struct binder_all= oc *alloc) int i; unsigned long count; =20 - while ((count =3D list_lru_count(&binder_freelist))) { - list_lru_walk(&binder_freelist, binder_alloc_free_page, + while ((count =3D list_lru_count(&binder_selftest_freelist))) { + list_lru_walk(&binder_selftest_freelist, binder_alloc_free_page, NULL, count); } =20 @@ -187,7 +184,7 @@ static void binder_selftest_alloc_free(struct binder_al= loc *alloc, =20 /* Allocate from lru. */ binder_selftest_alloc_buf(alloc, buffers, sizes, seq); - if (list_lru_count(&binder_freelist)) + if (list_lru_count(&binder_selftest_freelist)) pr_err("lru list should be empty but is not\n"); =20 binder_selftest_free_buf(alloc, buffers, sizes, seq, end); @@ -275,6 +272,20 @@ static void binder_selftest_alloc_offset(struct binder= _alloc *alloc, } } =20 +int binder_selftest_alloc_get_page_count(struct binder_alloc *alloc) +{ + struct page *page; + int allocated =3D 0; + int i; + + for (i =3D 0; i < alloc->buffer_size / PAGE_SIZE; i++) { + page =3D alloc->pages[i]; + if (page) + allocated++; + } + return allocated; +} + /** * binder_selftest_alloc() - Test alloc and free of buffer pages. * @alloc: Pointer to alloc struct. 
@@ -286,6 +297,7 @@ static void binder_selftest_alloc_offset(struct binder_= alloc *alloc, */ void binder_selftest_alloc(struct binder_alloc *alloc) { + struct list_lru *prev_freelist; size_t end_offset[BUFFER_NUM]; =20 if (!binder_selftest_run) @@ -293,14 +305,41 @@ void binder_selftest_alloc(struct binder_alloc *alloc) mutex_lock(&binder_selftest_lock); if (!binder_selftest_run || !alloc->mapped) goto done; + + prev_freelist =3D alloc->freelist; + + /* + * It is not safe to modify this process's alloc->freelist if it has any + * pages on a freelist. Since the test runs before any binder ioctls can + * be dealt with, none of its pages should be allocated yet. + */ + if (binder_selftest_alloc_get_page_count(alloc)) { + pr_err("process has existing alloc state\n"); + goto cleanup; + } + + if (list_lru_init(&binder_selftest_freelist)) { + pr_err("failed to init test freelist\n"); + goto cleanup; + } + + alloc->freelist =3D &binder_selftest_freelist; + pr_info("STARTED\n"); binder_selftest_alloc_offset(alloc, end_offset, 0); - binder_selftest_run =3D false; if (binder_selftest_failures > 0) pr_info("%d tests FAILED\n", binder_selftest_failures); else pr_info("PASSED\n"); =20 + if (list_lru_count(&binder_selftest_freelist)) + pr_err("expect test freelist to be empty\n"); + +cleanup: + /* Even if we didn't run the test, it's no longer thread-safe. */ + binder_selftest_run =3D false; + alloc->freelist =3D prev_freelist; + list_lru_destroy(&binder_selftest_freelist); done: mutex_unlock(&binder_selftest_lock); } --=20 2.50.0.727.gbf7dc18ff4-goog From nobody Mon Oct 6 20:58:58 2025 Received: from mail-pj1-f74.google.com (mail-pj1-f74.google.com [209.85.216.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4D8641E260A for ; Thu, 17 Jul 2025 01:10:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752714630; cv=none; b=TGMrfdxMloS+Rwlg5diCFvLV/P9lMl3/UGJ5Hmpr1f1v6f45bP7FXldIFdeUhrMk3IFJyDgx8bJNSUndK0zc04gRJp7rVF5QyU9GV+3budVrxfyzOzy+lN3nnXC+WfGhjupWV7Cz5iqLDTQFB3Eh5vLH2of4I8FcU19egZk8JsY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752714630; c=relaxed/simple; bh=UCTY/ToR6H/kxtSRZLCamKknTXWIZQ+jo/lnUBZnojc=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=e2USpQcTvlpMrN6MzixDWd03aKktioPTJpZdYE0aFw/frk0bhib4PJVfN4LuL1WTAYmNhiXCwIQQuUDVv6cI5/eoALVYC7JirnUwNr18Zv9mga4gwdd3UDV5+Few+Va4UggliZ6ad1DLihs5NDovZYakyKFHro5mpoX1ZMes35Q= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ynaffit.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=YjQcVzBN; arc=none smtp.client-ip=209.85.216.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ynaffit.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="YjQcVzBN" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-3132e7266d3so445137a91.2 for ; Wed, 16 Jul 2025 18:10:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1752714628; x=1753319428; 
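As a freestanding sketch of the indirection this patch introduces (the variable name and error handling below are illustrative, not from the patch): a caller that wants its page reclaim isolated from the global binder freelist initializes its own list_lru once, before the alloc owns any pages, and points alloc->freelist at it.

	static struct list_lru isolated_freelist;	/* illustrative name */

	if (list_lru_init(&isolated_freelist))
		return -ENOMEM;
	alloc->freelist = &isolated_freelist;	/* set once, never changed afterwards */

The list_lru_add()/list_lru_del() calls in binder_alloc.c then operate on this per-alloc pointer, while ordinary binder processes keep using the global binder_freelist.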
From nobody Mon Oct  6 20:58:58 2025
Date: Wed, 16 Jul 2025 18:10:06 -0700
In-Reply-To: <20250717011011.3365074-1-ynaffit@google.com>
References: <20250717011011.3365074-1-ynaffit@google.com>
Message-ID: <20250717011011.3365074-4-ynaffit@google.com>
Subject: [PATCH v4 3/6] kunit: test: Export kunit_attach_mm()
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
    David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
    kunit-dev@googlegroups.com, Kees Cook

Tests can allocate from virtual memory using kunit_vm_mmap(), which
transparently creates and attaches an mm_struct to the test runner if
one is not already attached. This is suitable for most cases, except
for when the code under test must access a task's mm before performing
an mmap. Expose kunit_attach_mm() as part of the interface for those
cases. This does not change the existing behavior.
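For illustration, a minimal sketch of the intended ordering (the test name, the anonymous mapping, and its flags are assumptions for the example, not taken from this series):

	static void mm_first_example_test(struct kunit *test)
	{
		unsigned long uaddr;

		/* Attach an mm up front so code under test can use current->mm. */
		KUNIT_ASSERT_EQ(test, kunit_attach_mm(), 0);

		/* ...setup that dereferences current->mm would go here... */

		/* kunit_vm_mmap() then reuses the mm that is already attached. */
		uaddr = kunit_vm_mmap(test, NULL, 0, PAGE_SIZE,
				      PROT_READ | PROT_WRITE,
				      MAP_ANONYMOUS | MAP_PRIVATE, 0);
		KUNIT_ASSERT_NE(test, uaddr, 0);
	}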
Cc: David Gow
Reviewed-by: Carlos Llamas
Reviewed-by: Kees Cook
Signed-off-by: Tiffany Yang
Reviewed-by: Joel Fernandes
---
v4:
 * Collected tags
---
 include/kunit/test.h   | 12 ++++++++++++
 lib/kunit/user_alloc.c |  4 ++--
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 39c768f87dc9..d958ee53050e 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -531,6 +531,18 @@ static inline char *kunit_kstrdup(struct kunit *test, const char *str, gfp_t gfp
  */
 const char *kunit_kstrdup_const(struct kunit *test, const char *str, gfp_t gfp);
 
+/**
+ * kunit_attach_mm() - Create and attach a new mm if it doesn't already exist.
+ *
+ * Allocates a &struct mm_struct and attaches it to @current. In most cases, call
+ * kunit_vm_mmap() without calling kunit_attach_mm() directly. Only necessary when
+ * code under test accesses the mm before executing the mmap (e.g., to perform
+ * additional initialization beforehand).
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int kunit_attach_mm(void);
+
 /**
  * kunit_vm_mmap() - Allocate KUnit-tracked vm_mmap() area
  * @test: The test context object.
diff --git a/lib/kunit/user_alloc.c b/lib/kunit/user_alloc.c
index 46951be018be..b8cac765e620 100644
--- a/lib/kunit/user_alloc.c
+++ b/lib/kunit/user_alloc.c
@@ -22,8 +22,7 @@ struct kunit_vm_mmap_params {
 	unsigned long offset;
 };
 
-/* Create and attach a new mm if it doesn't already exist. */
-static int kunit_attach_mm(void)
+int kunit_attach_mm(void)
 {
 	struct mm_struct *mm;
 
@@ -49,6 +48,7 @@ static int kunit_attach_mm(void)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(kunit_attach_mm);
 
 static int kunit_vm_mmap_init(struct kunit_resource *res, void *context)
 {
-- 
2.50.0.727.gbf7dc18ff4-goog
From nobody Mon Oct  6 20:58:58 2025
Date: Wed, 16 Jul 2025 18:10:07 -0700
In-Reply-To: <20250717011011.3365074-1-ynaffit@google.com>
References: <20250717011011.3365074-1-ynaffit@google.com>
Message-ID: <20250717011011.3365074-5-ynaffit@google.com>
Subject: [PATCH v4 4/6] binder: Scaffolding for binder_alloc KUnit tests
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
    David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
    kunit-dev@googlegroups.com, Kees Cook

Add setup and teardown for testing binder allocator code with KUnit.
Include minimal test cases to verify that tests are initialized
correctly.
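With the .kunitconfig added below, the suite can typically be exercised from the kernel tree with the KUnit wrapper; the exact invocation depends on the tree, but something along these lines:

	./tools/testing/kunit/kunit.py run --kunitconfig=drivers/android/tests

Because the config is tristate, the tests can also be built as a module and loaded on a running system without interfering with binder users, per the Kconfig help text below.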
Tested-by: Rae Moar
Acked-by: Carlos Llamas
Reviewed-by: Kees Cook
Signed-off-by: Tiffany Yang
Reviewed-by: Joel Fernandes
---
v2:
 * Added Tested-by tag
v3:
 * Split kunit lib change into separate change
v4:
 * Added Google Copyright to new files
 * Collected tags
---
 drivers/android/Kconfig                    |  11 ++
 drivers/android/Makefile                   |   1 +
 drivers/android/binder.c                   |   5 +-
 drivers/android/binder_alloc.c             |  15 +-
 drivers/android/binder_alloc.h             |   6 +
 drivers/android/binder_internal.h          |   4 +
 drivers/android/tests/.kunitconfig         |   7 +
 drivers/android/tests/Makefile             |   6 +
 drivers/android/tests/binder_alloc_kunit.c | 169 +++++++++++++++++++++
 9 files changed, 218 insertions(+), 6 deletions(-)
 create mode 100644 drivers/android/tests/.kunitconfig
 create mode 100644 drivers/android/tests/Makefile
 create mode 100644 drivers/android/tests/binder_alloc_kunit.c

diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig
index 07aa8ae0a058..b1bc7183366c 100644
--- a/drivers/android/Kconfig
+++ b/drivers/android/Kconfig
@@ -47,4 +47,15 @@ config ANDROID_BINDER_IPC_SELFTEST
 	  exhaustively with combinations of various buffer sizes and
 	  alignments.
 
+config ANDROID_BINDER_ALLOC_KUNIT_TEST
+	tristate "KUnit Tests for Android Binder Alloc" if !KUNIT_ALL_TESTS
+	depends on ANDROID_BINDER_IPC && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  This feature builds the binder alloc KUnit tests.
+
+	  Each test case runs using a pared-down binder_alloc struct and
+	  test-specific freelist, which allows this KUnit module to be loaded
+	  for testing without interfering with a running system.
+
 endmenu
diff --git a/drivers/android/Makefile b/drivers/android/Makefile
index c9d3d0c99c25..74d02a335d4e 100644
--- a/drivers/android/Makefile
+++ b/drivers/android/Makefile
@@ -4,3 +4,4 @@ ccflags-y += -I$(src)			# needed for trace events
 obj-$(CONFIG_ANDROID_BINDERFS)		+= binderfs.o
 obj-$(CONFIG_ANDROID_BINDER_IPC)	+= binder.o binder_alloc.o
 obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) += binder_alloc_selftest.o
+obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) += tests/
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index c463ca4a8fff..9dfe90c284fc 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -68,6 +68,8 @@
 #include
 #include
 
+#include
+
 #include
 
 #include
@@ -5956,10 +5958,11 @@ static void binder_vma_close(struct vm_area_struct *vma)
 	binder_alloc_vma_close(&proc->alloc);
 }
 
-static vm_fault_t binder_vm_fault(struct vm_fault *vmf)
+VISIBLE_IF_KUNIT vm_fault_t binder_vm_fault(struct vm_fault *vmf)
 {
 	return VM_FAULT_SIGBUS;
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_vm_fault);
 
 static const struct vm_operations_struct binder_vm_ops = {
 	.open = binder_vma_open,
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 2e89f9127883..c79e5c6721f0 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include "binder_alloc.h"
 #include "binder_trace.h"
 
@@ -57,13 +58,14 @@ static struct binder_buffer *binder_buffer_prev(struct binder_buffer *buffer)
 	return list_entry(buffer->entry.prev, struct binder_buffer, entry);
 }
 
-static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
-				       struct binder_buffer *buffer)
+VISIBLE_IF_KUNIT size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
+						 struct binder_buffer *buffer)
 {
 	if (list_is_last(&buffer->entry, &alloc->buffers))
 		return alloc->vm_start + alloc->buffer_size - buffer->user_data;
 	return binder_buffer_next(buffer)->user_data - buffer->user_data;
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_buffer_size);
 
 static void binder_insert_free_buffer(struct binder_alloc *alloc,
 				      struct binder_buffer *new_buffer)
@@ -959,7 +961,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	       failure_string, ret);
 	return ret;
 }
-
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_mmap_handler);
 
 void binder_alloc_deferred_release(struct binder_alloc *alloc)
 {
@@ -1028,6 +1030,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		     "%s: %d buffers %d, pages %d\n",
 		     __func__, alloc->pid, buffers, page_count);
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_deferred_release);
 
 /**
  * binder_alloc_print_allocated() - print buffer info
@@ -1122,6 +1125,7 @@ void binder_alloc_vma_close(struct binder_alloc *alloc)
 {
 	binder_alloc_set_mapped(alloc, false);
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_vma_close);
 
 /**
  * binder_alloc_free_page() - shrinker callback to free pages
@@ -1229,8 +1233,8 @@ binder_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 
 static struct shrinker *binder_shrinker;
 
-static void __binder_alloc_init(struct binder_alloc *alloc,
-				struct list_lru *freelist)
+VISIBLE_IF_KUNIT void __binder_alloc_init(struct binder_alloc *alloc,
+					  struct list_lru *freelist)
 {
 	alloc->pid = current->group_leader->pid;
 	alloc->mm = current->mm;
@@ -1239,6 +1243,7 @@ static void __binder_alloc_init(struct binder_alloc *alloc,
 	INIT_LIST_HEAD(&alloc->buffers);
 	alloc->freelist = freelist;
 }
+EXPORT_SYMBOL_IF_KUNIT(__binder_alloc_init);
 
 /**
  * binder_alloc_init() - called by binder_open() for per-proc initialization
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index aa05a9df1360..dc8dce2469a7 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -188,5 +188,11 @@ int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
 				  binder_size_t buffer_offset,
 				  size_t bytes);
 
+#if IS_ENABLED(CONFIG_KUNIT)
+void __binder_alloc_init(struct binder_alloc *alloc, struct list_lru *freelist);
+size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
+				struct binder_buffer *buffer);
+#endif
+
 #endif /* _LINUX_BINDER_ALLOC_H */
 
diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
index 1ba5caf1d88d..b5d3014fb4dc 100644
--- a/drivers/android/binder_internal.h
+++ b/drivers/android/binder_internal.h
@@ -592,4 +592,8 @@ void binder_add_device(struct binder_device *device);
  */
 void binder_remove_device(struct binder_device *device);
 
+#if IS_ENABLED(CONFIG_KUNIT)
+vm_fault_t binder_vm_fault(struct vm_fault *vmf);
+#endif
+
 #endif /* _LINUX_BINDER_INTERNAL_H */
diff --git a/drivers/android/tests/.kunitconfig b/drivers/android/tests/.kunitconfig
new file mode 100644
index 000000000000..39b76bab9d9a
--- /dev/null
+++ b/drivers/android/tests/.kunitconfig
@@ -0,0 +1,7 @@
+#
+# Copyright 2025 Google LLC.
+#
+
+CONFIG_KUNIT=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST=y
diff --git a/drivers/android/tests/Makefile b/drivers/android/tests/Makefile
new file mode 100644
index 000000000000..27268418eb03
--- /dev/null
+++ b/drivers/android/tests/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Copyright 2025 Google LLC.
+#
+
+obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) += binder_alloc_kunit.o
diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/tests/binder_alloc_kunit.c
new file mode 100644
index 000000000000..5e49078c9b23
--- /dev/null
+++ b/drivers/android/tests/binder_alloc_kunit.c
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test cases for binder allocator code.
+ *
+ * Copyright 2025 Google LLC.
+ * Author: Tiffany Yang
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "../binder_alloc.h"
+#include "../binder_internal.h"
+
+MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
+
+#define BINDER_MMAP_SIZE SZ_128K
+
+struct binder_alloc_test {
+	struct binder_alloc alloc;
+	struct list_lru binder_test_freelist;
+	struct file *filp;
+	unsigned long mmap_uaddr;
+};
+
+static void binder_alloc_test_init_freelist(struct kunit *test)
+{
+	struct binder_alloc_test *priv = test->priv;
+
+	KUNIT_EXPECT_PTR_EQ(test, priv->alloc.freelist,
+			    &priv->binder_test_freelist);
+}
+
+static void binder_alloc_test_mmap(struct kunit *test)
+{
+	struct binder_alloc_test *priv = test->priv;
+	struct binder_alloc *alloc = &priv->alloc;
+	struct binder_buffer *buf;
+	struct rb_node *n;
+
+	KUNIT_EXPECT_EQ(test, alloc->mapped, true);
+	KUNIT_EXPECT_EQ(test, alloc->buffer_size, BINDER_MMAP_SIZE);
+
+	n = rb_first(&alloc->allocated_buffers);
+	KUNIT_EXPECT_PTR_EQ(test, n, NULL);
+
+	n = rb_first(&alloc->free_buffers);
+	buf = rb_entry(n, struct binder_buffer, rb_node);
+	KUNIT_EXPECT_EQ(test, binder_alloc_buffer_size(alloc, buf),
+			BINDER_MMAP_SIZE);
+	KUNIT_EXPECT_TRUE(test, list_is_last(&buf->entry, &alloc->buffers));
+}
+
+/* ===== End test cases ===== */
+
+static void binder_alloc_test_vma_close(struct vm_area_struct *vma)
+{
+	struct binder_alloc *alloc = vma->vm_private_data;
+
+	binder_alloc_vma_close(alloc);
+}
+
+static const struct vm_operations_struct binder_alloc_test_vm_ops = {
+	.close = binder_alloc_test_vma_close,
+	.fault = binder_vm_fault,
+};
+
+static int binder_alloc_test_mmap_handler(struct file *filp,
+					  struct vm_area_struct *vma)
+{
+	struct binder_alloc *alloc = filp->private_data;
+
+	vm_flags_mod(vma, VM_DONTCOPY | VM_MIXEDMAP, VM_MAYWRITE);
+
+	vma->vm_ops = &binder_alloc_test_vm_ops;
+	vma->vm_private_data = alloc;
+
+	return binder_alloc_mmap_handler(alloc, vma);
+}
+
+static const struct file_operations binder_alloc_test_fops = {
+	.mmap = binder_alloc_test_mmap_handler,
+};
+
+static int binder_alloc_test_init(struct kunit *test)
+{
+	struct binder_alloc_test *priv;
+	int ret;
+
+	priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+	test->priv = priv;
+
+	ret = list_lru_init(&priv->binder_test_freelist);
+	if (ret) {
+		kunit_err(test, "Failed to initialize test freelist\n");
+		return ret;
+	}
+
+	/* __binder_alloc_init requires mm to be attached */
+	ret = kunit_attach_mm();
+	if (ret) {
+		kunit_err(test, "Failed to attach mm\n");
+		return ret;
+	}
+	__binder_alloc_init(&priv->alloc, &priv->binder_test_freelist);
+
+	priv->filp = anon_inode_getfile("binder_alloc_kunit",
+					&binder_alloc_test_fops, &priv->alloc,
+					O_RDWR | O_CLOEXEC);
+	if (IS_ERR_OR_NULL(priv->filp)) {
+		kunit_err(test, "Failed to open binder alloc test driver file\n");
+		return priv->filp ? PTR_ERR(priv->filp) : -ENOMEM;
+	}
+
+	priv->mmap_uaddr = kunit_vm_mmap(test, priv->filp, 0, BINDER_MMAP_SIZE,
+					 PROT_READ, MAP_PRIVATE | MAP_NORESERVE,
+					 0);
+	if (!priv->mmap_uaddr) {
+		kunit_err(test, "Could not map the test's transaction memory\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void binder_alloc_test_exit(struct kunit *test)
+{
+	struct binder_alloc_test *priv = test->priv;
+
+	/* Close the backing file to make sure binder_alloc_vma_close runs */
+	if (!IS_ERR_OR_NULL(priv->filp))
+		fput(priv->filp);
+
+	if (priv->alloc.mm)
+		binder_alloc_deferred_release(&priv->alloc);
+
+	/* Make sure freelist is empty */
+	KUNIT_EXPECT_EQ(test, list_lru_count(&priv->binder_test_freelist), 0);
+	list_lru_destroy(&priv->binder_test_freelist);
+}
+
+static struct kunit_case binder_alloc_test_cases[] = {
+	KUNIT_CASE(binder_alloc_test_init_freelist),
+	KUNIT_CASE(binder_alloc_test_mmap),
+	{}
+};
+
+static struct kunit_suite binder_alloc_test_suite = {
+	.name = "binder_alloc",
+	.test_cases = binder_alloc_test_cases,
+	.init = binder_alloc_test_init,
+	.exit = binder_alloc_test_exit,
+};
+
+kunit_test_suite(binder_alloc_test_suite);
+
+MODULE_AUTHOR("Tiffany Yang ");
+MODULE_DESCRIPTION("Binder Alloc KUnit tests");
+MODULE_LICENSE("GPL");
-- 
2.50.0.727.gbf7dc18ff4-goog
From nobody Mon Oct  6 20:58:58 2025
Date: Wed, 16 Jul 2025 18:10:08 -0700
In-Reply-To: <20250717011011.3365074-1-ynaffit@google.com>
References: <20250717011011.3365074-1-ynaffit@google.com>
Message-ID: <20250717011011.3365074-6-ynaffit@google.com>
Subject: [PATCH v4 5/6] binder: Convert binder_alloc selftests to KUnit
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
    David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
    kunit-dev@googlegroups.com

Convert the existing binder_alloc_selftest tests into KUnit tests. These
tests allocate and free an exhaustive combination of buffers with
various sizes and alignments. This change allows them to be run without
blocking or otherwise interfering with other processes in binder.

This test is refactored into more meaningful cases in the subsequent
patch.
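For a rough sense of what "exhaustive" means here, derived from the constants in the test code: each of the BUFFER_NUM = 5 buffer ends can take one of 5 alignment types, giving 5^5 = 3125 layouts; every layout is exercised with two size orderings (front- and back-weighted) and all 5! = 120 free orders, and each order allocates and frees the buffers twice, once from fresh pages and once from the lru.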
Acked-by: Carlos Llamas
Signed-off-by: Tiffany Yang
Reviewed-by: Joel Fernandes
Reviewed-by: Kees Cook
---
v2:
 * Fix build warning
   Reported-by: kernel test robot
   Closes: https://lore.kernel.org/oe-kbuild-all/202506281837.hReNHJjO-lkp@intel.com/
v4:
 * Collected tag
---
 drivers/android/Kconfig                    |  10 -
 drivers/android/Makefile                   |   1 -
 drivers/android/binder.c                   |   5 -
 drivers/android/binder_alloc.c             |   3 +
 drivers/android/binder_alloc.h             |   5 -
 drivers/android/binder_alloc_selftest.c    | 345 ---------------------
 drivers/android/tests/binder_alloc_kunit.c | 279 +++++++++++++++++
 7 files changed, 282 insertions(+), 366 deletions(-)
 delete mode 100644 drivers/android/binder_alloc_selftest.c

diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig
index b1bc7183366c..5b3b8041f827 100644
--- a/drivers/android/Kconfig
+++ b/drivers/android/Kconfig
@@ -37,16 +37,6 @@ config ANDROID_BINDER_DEVICES
 	  created. Each binder device has its own context manager, and is
 	  therefore logically separated from the other devices.
 
-config ANDROID_BINDER_IPC_SELFTEST
-	bool "Android Binder IPC Driver Selftest"
-	depends on ANDROID_BINDER_IPC
-	help
-	  This feature allows binder selftest to run.
-
-	  Binder selftest checks the allocation and free of binder buffers
-	  exhaustively with combinations of various buffer sizes and
-	  alignments.
-
 config ANDROID_BINDER_ALLOC_KUNIT_TEST
 	tristate "KUnit Tests for Android Binder Alloc" if !KUNIT_ALL_TESTS
 	depends on ANDROID_BINDER_IPC && KUNIT
diff --git a/drivers/android/Makefile b/drivers/android/Makefile
index 74d02a335d4e..c5d47be0276c 100644
--- a/drivers/android/Makefile
+++ b/drivers/android/Makefile
@@ -3,5 +3,4 @@ ccflags-y += -I$(src)			# needed for trace events
 
 obj-$(CONFIG_ANDROID_BINDERFS)		+= binderfs.o
 obj-$(CONFIG_ANDROID_BINDER_IPC)	+= binder.o binder_alloc.o
-obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) += binder_alloc_selftest.o
 obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) += tests/
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 9dfe90c284fc..7b2653a5d59c 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -5718,11 +5718,6 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	struct binder_thread *thread;
 	void __user *ubuf = (void __user *)arg;
 
-	/*pr_info("binder_ioctl: %d:%d %x %lx\n",
-			proc->pid, current->pid, cmd, arg);*/
-
-	binder_selftest_alloc(&proc->alloc);
-
 	trace_binder_ioctl(cmd, arg);
 
 	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index c79e5c6721f0..74a184014fa7 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -701,6 +701,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 out:
 	return buffer;
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_new_buf);
 
 static unsigned long buffer_start_page(struct binder_buffer *buffer)
 {
@@ -879,6 +880,7 @@ void binder_alloc_free_buf(struct binder_alloc *alloc,
 	binder_free_buf_locked(alloc, buffer);
 	mutex_unlock(&alloc->mutex);
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_free_buf);
 
 /**
  * binder_alloc_mmap_handler() - map virtual address space for proc
@@ -1217,6 +1219,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 err_mmget:
 	return LRU_SKIP;
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_free_page);
 
 static unsigned long
 binder_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index dc8dce2469a7..bed97c2cad92 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -121,11 +121,6 @@ struct binder_alloc {
 	bool oneway_spam_detected;
 };
 
-#ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST
-void binder_selftest_alloc(struct binder_alloc *alloc);
-#else
-static inline void binder_selftest_alloc(struct binder_alloc *alloc) {}
-#endif
 enum lru_status binder_alloc_free_page(struct list_head *item,
 				       struct list_lru_one *lru,
 				       void *cb_arg);
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
deleted file mode 100644
index 8b18b22aa3de..000000000000
--- a/drivers/android/binder_alloc_selftest.c
+++ /dev/null
@@ -1,345 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/* binder_alloc_selftest.c
- *
- * Android IPC Subsystem
- *
- * Copyright (C) 2017 Google, Inc.
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include
-#include
-#include
-#include "binder_alloc.h"
-
-#define BUFFER_NUM 5
-#define BUFFER_MIN_SIZE (PAGE_SIZE / 8)
-
-static bool binder_selftest_run = true;
-static int binder_selftest_failures;
-static DEFINE_MUTEX(binder_selftest_lock);
-static struct list_lru binder_selftest_freelist;
-
-/**
- * enum buf_end_align_type - Page alignment of a buffer
- * end with regard to the end of the previous buffer.
- *
- * In the pictures below, buf2 refers to the buffer we
- * are aligning. buf1 refers to previous buffer by addr.
- * Symbol [ means the start of a buffer, ] means the end
- * of a buffer, and | means page boundaries.
- */
-enum buf_end_align_type {
-	/**
-	 * @SAME_PAGE_UNALIGNED: The end of this buffer is on
-	 * the same page as the end of the previous buffer and
-	 * is not page aligned. Examples:
-	 * buf1 ][ buf2 ][ ...
-	 * buf1 ]|[ buf2 ][ ...
-	 */
-	SAME_PAGE_UNALIGNED = 0,
-	/**
-	 * @SAME_PAGE_ALIGNED: When the end of the previous buffer
-	 * is not page aligned, the end of this buffer is on the
-	 * same page as the end of the previous buffer and is page
-	 * aligned. When the previous buffer is page aligned, the
-	 * end of this buffer is aligned to the next page boundary.
-	 * Examples:
-	 * buf1 ][ buf2 ]| ...
-	 * buf1 ]|[ buf2 ]| ...
-	 */
-	SAME_PAGE_ALIGNED,
-	/**
-	 * @NEXT_PAGE_UNALIGNED: The end of this buffer is on
-	 * the page next to the end of the previous buffer and
-	 * is not page aligned. Examples:
-	 * buf1 ][ buf2 | buf2 ][ ...
-	 * buf1 ]|[ buf2 | buf2 ][ ...
-	 */
-	NEXT_PAGE_UNALIGNED,
-	/**
-	 * @NEXT_PAGE_ALIGNED: The end of this buffer is on
-	 * the page next to the end of the previous buffer and
-	 * is page aligned. Examples:
-	 * buf1 ][ buf2 | buf2 ]| ...
-	 * buf1 ]|[ buf2 | buf2 ]| ...
-	 */
-	NEXT_PAGE_ALIGNED,
-	/**
-	 * @NEXT_NEXT_UNALIGNED: The end of this buffer is on
-	 * the page that follows the page after the end of the
-	 * previous buffer and is not page aligned. Examples:
-	 * buf1 ][ buf2 | buf2 | buf2 ][ ...
-	 * buf1 ]|[ buf2 | buf2 | buf2 ][ ...
-	 */
-	NEXT_NEXT_UNALIGNED,
-	/**
-	 * @LOOP_END: The number of enum values in &buf_end_align_type.
-	 * It is used for controlling loop termination.
-	 */
-	LOOP_END,
-};
-
-static void pr_err_size_seq(size_t *sizes, int *seq)
-{
-	int i;
-
-	pr_err("alloc sizes: ");
-	for (i = 0; i < BUFFER_NUM; i++)
-		pr_cont("[%zu]", sizes[i]);
-	pr_cont("\n");
-	pr_err("free seq: ");
-	for (i = 0; i < BUFFER_NUM; i++)
-		pr_cont("[%d]", seq[i]);
-	pr_cont("\n");
-}
-
-static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
-					 struct binder_buffer *buffer,
-					 size_t size)
-{
-	unsigned long page_addr;
-	unsigned long end;
-	int page_index;
-
-	end = PAGE_ALIGN(buffer->user_data + size);
-	page_addr = buffer->user_data;
-	for (; page_addr < end; page_addr += PAGE_SIZE) {
-		page_index = (page_addr - alloc->vm_start) / PAGE_SIZE;
-		if (!alloc->pages[page_index] ||
-		    !list_empty(page_to_lru(alloc->pages[page_index]))) {
-			pr_err("expect alloc but is %s at page index %d\n",
-			       alloc->pages[page_index] ?
-			       "lru" : "free", page_index);
-			return false;
-		}
-	}
-	return true;
-}
-
-static void binder_selftest_alloc_buf(struct binder_alloc *alloc,
-				      struct binder_buffer *buffers[],
-				      size_t *sizes, int *seq)
-{
-	int i;
-
-	for (i = 0; i < BUFFER_NUM; i++) {
-		buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0);
-		if (IS_ERR(buffers[i]) ||
-		    !check_buffer_pages_allocated(alloc, buffers[i],
-						  sizes[i])) {
-			pr_err_size_seq(sizes, seq);
-			binder_selftest_failures++;
-		}
-	}
-}
-
-static void binder_selftest_free_buf(struct binder_alloc *alloc,
-				     struct binder_buffer *buffers[],
-				     size_t *sizes, int *seq, size_t end)
-{
-	int i;
-
-	for (i = 0; i < BUFFER_NUM; i++)
-		binder_alloc_free_buf(alloc, buffers[seq[i]]);
-
-	for (i = 0; i <= (end - 1) / PAGE_SIZE; i++) {
-		if (list_empty(page_to_lru(alloc->pages[i]))) {
-			pr_err_size_seq(sizes, seq);
-			pr_err("expect lru but is %s at page index %d\n",
-			       alloc->pages[i] ? "alloc" : "free", i);
-			binder_selftest_failures++;
-		}
-	}
-}
-
-static void binder_selftest_free_page(struct binder_alloc *alloc)
-{
-	int i;
-	unsigned long count;
-
-	while ((count = list_lru_count(&binder_selftest_freelist))) {
-		list_lru_walk(&binder_selftest_freelist, binder_alloc_free_page,
-			      NULL, count);
-	}
-
-	for (i = 0; i < (alloc->buffer_size / PAGE_SIZE); i++) {
-		if (alloc->pages[i]) {
-			pr_err("expect free but is %s at page index %d\n",
-			       list_empty(page_to_lru(alloc->pages[i])) ?
-			       "alloc" : "lru", i);
-			binder_selftest_failures++;
-		}
-	}
-}
-
-static void binder_selftest_alloc_free(struct binder_alloc *alloc,
-				       size_t *sizes, int *seq, size_t end)
-{
-	struct binder_buffer *buffers[BUFFER_NUM];
-
-	binder_selftest_alloc_buf(alloc, buffers, sizes, seq);
-	binder_selftest_free_buf(alloc, buffers, sizes, seq, end);
-
-	/* Allocate from lru. */
-	binder_selftest_alloc_buf(alloc, buffers, sizes, seq);
-	if (list_lru_count(&binder_selftest_freelist))
-		pr_err("lru list should be empty but is not\n");
-
-	binder_selftest_free_buf(alloc, buffers, sizes, seq, end);
-	binder_selftest_free_page(alloc);
-}
-
-static bool is_dup(int *seq, int index, int val)
-{
-	int i;
-
-	for (i = 0; i < index; i++) {
-		if (seq[i] == val)
-			return true;
-	}
-	return false;
-}
-
-/* Generate BUFFER_NUM factorial free orders. */
-static void binder_selftest_free_seq(struct binder_alloc *alloc,
-				     size_t *sizes, int *seq,
-				     int index, size_t end)
-{
-	int i;
-
-	if (index == BUFFER_NUM) {
-		binder_selftest_alloc_free(alloc, sizes, seq, end);
-		return;
-	}
-	for (i = 0; i < BUFFER_NUM; i++) {
-		if (is_dup(seq, index, i))
-			continue;
-		seq[index] = i;
-		binder_selftest_free_seq(alloc, sizes, seq, index + 1, end);
-	}
-}
-
-static void binder_selftest_alloc_size(struct binder_alloc *alloc,
-				       size_t *end_offset)
-{
-	int i;
-	int seq[BUFFER_NUM] = {0};
-	size_t front_sizes[BUFFER_NUM];
-	size_t back_sizes[BUFFER_NUM];
-	size_t last_offset, offset = 0;
-
-	for (i = 0; i < BUFFER_NUM; i++) {
-		last_offset = offset;
-		offset = end_offset[i];
-		front_sizes[i] = offset - last_offset;
-		back_sizes[BUFFER_NUM - i - 1] = front_sizes[i];
-	}
-	/*
-	 * Buffers share the first or last few pages.
-	 * Only BUFFER_NUM - 1 buffer sizes are adjustable since
-	 * we need one giant buffer before getting to the last page.
-	 */
-	back_sizes[0] += alloc->buffer_size - end_offset[BUFFER_NUM - 1];
-	binder_selftest_free_seq(alloc, front_sizes, seq, 0,
-				 end_offset[BUFFER_NUM - 1]);
-	binder_selftest_free_seq(alloc, back_sizes, seq, 0, alloc->buffer_size);
-}
-
-static void binder_selftest_alloc_offset(struct binder_alloc *alloc,
-					 size_t *end_offset, int index)
-{
-	int align;
-	size_t end, prev;
-
-	if (index == BUFFER_NUM) {
-		binder_selftest_alloc_size(alloc, end_offset);
-		return;
-	}
-	prev = index == 0 ? 0 : end_offset[index - 1];
-	end = prev;
-
-	BUILD_BUG_ON(BUFFER_MIN_SIZE * BUFFER_NUM >= PAGE_SIZE);
-
-	for (align = SAME_PAGE_UNALIGNED; align < LOOP_END; align++) {
-		if (align % 2)
-			end = ALIGN(end, PAGE_SIZE);
-		else
-			end += BUFFER_MIN_SIZE;
-		end_offset[index] = end;
-		binder_selftest_alloc_offset(alloc, end_offset, index + 1);
-	}
-}
-
-int binder_selftest_alloc_get_page_count(struct binder_alloc *alloc)
-{
-	struct page *page;
-	int allocated = 0;
-	int i;
-
-	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-		page = alloc->pages[i];
-		if (page)
-			allocated++;
-	}
-	return allocated;
-}
-
-/**
- * binder_selftest_alloc() - Test alloc and free of buffer pages.
- * @alloc: Pointer to alloc struct.
- *
- * Allocate BUFFER_NUM buffers to cover all page alignment cases,
- * then free them in all orders possible. Check that pages are
- * correctly allocated, put onto lru when buffers are freed, and
- * are freed when binder_alloc_free_page is called.
- */
-void binder_selftest_alloc(struct binder_alloc *alloc)
-{
-	struct list_lru *prev_freelist;
-	size_t end_offset[BUFFER_NUM];
-
-	if (!binder_selftest_run)
-		return;
-	mutex_lock(&binder_selftest_lock);
-	if (!binder_selftest_run || !alloc->mapped)
-		goto done;
-
-	prev_freelist = alloc->freelist;
-
-	/*
-	 * It is not safe to modify this process's alloc->freelist if it has any
-	 * pages on a freelist. Since the test runs before any binder ioctls can
-	 * be dealt with, none of its pages should be allocated yet.
-	 */
-	if (binder_selftest_alloc_get_page_count(alloc)) {
-		pr_err("process has existing alloc state\n");
-		goto cleanup;
-	}
-
-	if (list_lru_init(&binder_selftest_freelist)) {
-		pr_err("failed to init test freelist\n");
-		goto cleanup;
-	}
-
-	alloc->freelist = &binder_selftest_freelist;
-
-	pr_info("STARTED\n");
-	binder_selftest_alloc_offset(alloc, end_offset, 0);
-	if (binder_selftest_failures > 0)
-		pr_info("%d tests FAILED\n", binder_selftest_failures);
-	else
-		pr_info("PASSED\n");
-
-	if (list_lru_count(&binder_selftest_freelist))
-		pr_err("expect test freelist to be empty\n");
-
-cleanup:
-	/* Even if we didn't run the test, it's no longer thread-safe. */
-	binder_selftest_run = false;
-	alloc->freelist = prev_freelist;
-	list_lru_destroy(&binder_selftest_freelist);
-done:
-	mutex_unlock(&binder_selftest_lock);
-}
diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/tests/binder_alloc_kunit.c
index 5e49078c9b23..2f6077e96ae6 100644
--- a/drivers/android/tests/binder_alloc_kunit.c
+++ b/drivers/android/tests/binder_alloc_kunit.c
@@ -24,6 +24,265 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
 
 #define BINDER_MMAP_SIZE SZ_128K
 
+#define BUFFER_NUM 5
+#define BUFFER_MIN_SIZE (PAGE_SIZE / 8)
+
+static int binder_alloc_test_failures;
+
+/**
+ * enum buf_end_align_type - Page alignment of a buffer
+ * end with regard to the end of the previous buffer.
+ *
+ * In the pictures below, buf2 refers to the buffer we
+ * are aligning. buf1 refers to previous buffer by addr.
+ * Symbol [ means the start of a buffer, ] means the end
+ * of a buffer, and | means page boundaries.
+ */
+enum buf_end_align_type {
+	/**
+	 * @SAME_PAGE_UNALIGNED: The end of this buffer is on
+	 * the same page as the end of the previous buffer and
+	 * is not page aligned. Examples:
+	 * buf1 ][ buf2 ][ ...
+	 * buf1 ]|[ buf2 ][ ...
+	 */
+	SAME_PAGE_UNALIGNED = 0,
+	/**
+	 * @SAME_PAGE_ALIGNED: When the end of the previous buffer
+	 * is not page aligned, the end of this buffer is on the
+	 * same page as the end of the previous buffer and is page
+	 * aligned. When the previous buffer is page aligned, the
+	 * end of this buffer is aligned to the next page boundary.
+	 * Examples:
+	 * buf1 ][ buf2 ]| ...
+	 * buf1 ]|[ buf2 ]| ...
+	 */
+	SAME_PAGE_ALIGNED,
+	/**
+	 * @NEXT_PAGE_UNALIGNED: The end of this buffer is on
+	 * the page next to the end of the previous buffer and
+	 * is not page aligned. Examples:
+	 * buf1 ][ buf2 | buf2 ][ ...
+	 * buf1 ]|[ buf2 | buf2 ][ ...
+	 */
+	NEXT_PAGE_UNALIGNED,
+	/**
+	 * @NEXT_PAGE_ALIGNED: The end of this buffer is on
+	 * the page next to the end of the previous buffer and
+	 * is page aligned. Examples:
+	 * buf1 ][ buf2 | buf2 ]| ...
+	 * buf1 ]|[ buf2 | buf2 ]| ...
+	 */
+	NEXT_PAGE_ALIGNED,
+	/**
+	 * @NEXT_NEXT_UNALIGNED: The end of this buffer is on
+	 * the page that follows the page after the end of the
+	 * previous buffer and is not page aligned. Examples:
+	 * buf1 ][ buf2 | buf2 | buf2 ][ ...
+	 * buf1 ]|[ buf2 | buf2 | buf2 ][ ...
+	 */
+	NEXT_NEXT_UNALIGNED,
+	/**
+	 * @LOOP_END: The number of enum values in &buf_end_align_type.
+	 * It is used for controlling loop termination.
+ */ + LOOP_END, +}; + +static void pr_err_size_seq(struct kunit *test, size_t *sizes, int *seq) +{ + int i; + + kunit_err(test, "alloc sizes: "); + for (i =3D 0; i < BUFFER_NUM; i++) + pr_cont("[%zu]", sizes[i]); + pr_cont("\n"); + kunit_err(test, "free seq: "); + for (i =3D 0; i < BUFFER_NUM; i++) + pr_cont("[%d]", seq[i]); + pr_cont("\n"); +} + +static bool check_buffer_pages_allocated(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffer, + size_t size) +{ + unsigned long page_addr; + unsigned long end; + int page_index; + + end =3D PAGE_ALIGN(buffer->user_data + size); + page_addr =3D buffer->user_data; + for (; page_addr < end; page_addr +=3D PAGE_SIZE) { + page_index =3D (page_addr - alloc->vm_start) / PAGE_SIZE; + if (!alloc->pages[page_index] || + !list_empty(page_to_lru(alloc->pages[page_index]))) { + kunit_err(test, "expect alloc but is %s at page index %d\n", + alloc->pages[page_index] ? + "lru" : "free", page_index); + return false; + } + } + return true; +} + +static void binder_alloc_test_alloc_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq) +{ + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) { + buffers[i] =3D binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0); + if (IS_ERR(buffers[i]) || + !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i])) { + pr_err_size_seq(test, sizes, seq); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_free_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq, size_t end) +{ + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) + binder_alloc_free_buf(alloc, buffers[seq[i]]); + + for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { + if (list_empty(page_to_lru(alloc->pages[i]))) { + pr_err_size_seq(test, sizes, seq); + kunit_err(test, "expect lru but is %s at page index %d\n", + alloc->pages[i] ? "alloc" : "free", i); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_free_page(struct kunit *test, + struct binder_alloc *alloc) +{ + unsigned long count; + int i; + + while ((count =3D list_lru_count(alloc->freelist))) { + list_lru_walk(alloc->freelist, binder_alloc_free_page, + NULL, count); + } + + for (i =3D 0; i < (alloc->buffer_size / PAGE_SIZE); i++) { + if (alloc->pages[i]) { + kunit_err(test, "expect free but is %s at page index %d\n", + list_empty(page_to_lru(alloc->pages[i])) ? + "alloc" : "lru", i); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_alloc_free(struct kunit *test, + struct binder_alloc *alloc, + size_t *sizes, int *seq, size_t end) +{ + struct binder_buffer *buffers[BUFFER_NUM]; + + binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); + binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); + + /* Allocate from lru. */ + binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); + if (list_lru_count(alloc->freelist)) + kunit_err(test, "lru list should be empty but is not\n"); + + binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); + binder_alloc_test_free_page(test, alloc); +} + +static bool is_dup(int *seq, int index, int val) +{ + int i; + + for (i =3D 0; i < index; i++) { + if (seq[i] =3D=3D val) + return true; + } + return false; +} + +/* Generate BUFFER_NUM factorial free orders. 
*/ +static void permute_frees(struct kunit *test, struct binder_alloc *alloc, + size_t *sizes, int *seq, int index, size_t end) +{ + int i; + + if (index =3D=3D BUFFER_NUM) { + binder_alloc_test_alloc_free(test, alloc, sizes, seq, end); + return; + } + for (i =3D 0; i < BUFFER_NUM; i++) { + if (is_dup(seq, index, i)) + continue; + seq[index] =3D i; + permute_frees(test, alloc, sizes, seq, index + 1, end); + } +} + +static void gen_buf_sizes(struct kunit *test, struct binder_alloc *alloc, + size_t *end_offset) +{ + size_t last_offset, offset =3D 0; + size_t front_sizes[BUFFER_NUM]; + size_t back_sizes[BUFFER_NUM]; + int seq[BUFFER_NUM] =3D {0}; + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) { + last_offset =3D offset; + offset =3D end_offset[i]; + front_sizes[i] =3D offset - last_offset; + back_sizes[BUFFER_NUM - i - 1] =3D front_sizes[i]; + } + /* + * Buffers share the first or last few pages. + * Only BUFFER_NUM - 1 buffer sizes are adjustable since + * we need one giant buffer before getting to the last page. + */ + back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; + permute_frees(test, alloc, front_sizes, seq, 0, + end_offset[BUFFER_NUM - 1]); + permute_frees(test, alloc, back_sizes, seq, 0, alloc->buffer_size); +} + +static void gen_buf_offsets(struct kunit *test, struct binder_alloc *alloc, + size_t *end_offset, int index) +{ + size_t end, prev; + int align; + + if (index =3D=3D BUFFER_NUM) { + gen_buf_sizes(test, alloc, end_offset); + return; + } + prev =3D index =3D=3D 0 ? 0 : end_offset[index - 1]; + end =3D prev; + + BUILD_BUG_ON(BUFFER_MIN_SIZE * BUFFER_NUM >=3D PAGE_SIZE); + + for (align =3D SAME_PAGE_UNALIGNED; align < LOOP_END; align++) { + if (align % 2) + end =3D ALIGN(end, PAGE_SIZE); + else + end +=3D BUFFER_MIN_SIZE; + end_offset[index] =3D end; + gen_buf_offsets(test, alloc, end_offset, index + 1); + } +} + struct binder_alloc_test { struct binder_alloc alloc; struct list_lru binder_test_freelist; @@ -59,6 +318,25 @@ static void binder_alloc_test_mmap(struct kunit *test) KUNIT_EXPECT_TRUE(test, list_is_last(&buf->entry, &alloc->buffers)); } =20 +/** + * binder_alloc_exhaustive_test() - Exhaustively test alloc and free of bu= ffer pages. + * @test: The test context object. + * + * Allocate BUFFER_NUM buffers to cover all page alignment cases, + * then free them in all orders possible. Check that pages are + * correctly allocated, put onto lru when buffers are freed, and + * are freed when binder_alloc_free_page() is called. 
+ */ +static void binder_alloc_exhaustive_test(struct kunit *test) +{ + struct binder_alloc_test *priv =3D test->priv; + size_t end_offset[BUFFER_NUM]; + + gen_buf_offsets(test, &priv->alloc, end_offset, 0); + + KUNIT_EXPECT_EQ(test, binder_alloc_test_failures, 0); +} + /* =3D=3D=3D=3D=3D End test cases =3D=3D=3D=3D=3D */ =20 static void binder_alloc_test_vma_close(struct vm_area_struct *vma) @@ -152,6 +430,7 @@ static void binder_alloc_test_exit(struct kunit *test) static struct kunit_case binder_alloc_test_cases[] =3D { KUNIT_CASE(binder_alloc_test_init_freelist), KUNIT_CASE(binder_alloc_test_mmap), + KUNIT_CASE(binder_alloc_exhaustive_test), {} }; =20 --=20 2.50.0.727.gbf7dc18ff4-goog From nobody Mon Oct 6 20:58:58 2025 Received: from mail-pg1-f202.google.com (mail-pg1-f202.google.com [209.85.215.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 634BA1DA10B for ; Thu, 17 Jul 2025 01:10:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752714644; cv=none; b=FCzGLQGOIgyHnNR5+tapazXRvMjlH9BUYL08wd63rO/0cFRb+k7oKTmZDHOud2ylGahlv8XO6qje9+qdJTMU5wE9/p9cCiUVaUXZANAKFfn3fjoVaZCKrJAgsqR3E5euToWi+M37ZNf81DuHWJvUdtG9/hFxubZSwCDYQcMDZ3U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752714644; c=relaxed/simple; bh=gVjRcz8uf6Zvb6Qsl4X7UhIgb0DNmTUTLZ2nEa9VguQ=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=FSayBfKJBoXjTMYdKBa4bi9EG/0GpXi2zcxQkpeIANtMQnrTRajUboHWoPPbkBhgEARbAWz7rQk6Wt/Dgchblbr5w/DYm5Rt1VefKFoGZA0uxVaABC30s92g0ofUtN1zojI/GtuWxS80TNHd3PULbuE1AynH1ximzedefr5Nf9w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ynaffit.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=xClBLbWD; arc=none smtp.client-ip=209.85.215.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ynaffit.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="xClBLbWD" Received: by mail-pg1-f202.google.com with SMTP id 41be03b00d2f7-b26e33ae9d5so397676a12.1 for ; Wed, 16 Jul 2025 18:10:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1752714641; x=1753319441; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=CWnQRxNCBI5smoClGNV2WoMDHzg2Ry+I5lkI2YBnPkQ=; b=xClBLbWDHrVj3n6nq2dO41LPg6tjZJ0l6tgEHNMIGRXtTiLH7v7pqIC36JEhmOVfvK z4DV3yZf0adqCIDQ9qpA/WSmuFYymBLfhycr1bwObYB2SNv7LzSARS1XFSJVFnMYUCU2 pJ5pyiiY09aVV9YVhHEWh0XAe2QyrKw1skcLncYCs6YXGKhUi6lcz+BrJPTHF1TulgL2 w0sxBdppnbo52/LT4z2FAdWGqMgxhCMCKhRGK2bA/6JMyiRZL28x9Qseu3VF00eykBS+ GJLFoEViVhajU9WPXBC7Vc6MMRDU63N+aMJdrDeAIYBc7kZ9TmNo15ulVlTPo5QmOYcG Tb4A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1752714641; x=1753319441; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=CWnQRxNCBI5smoClGNV2WoMDHzg2Ry+I5lkI2YBnPkQ=; 
b=ryF2on0FMwmwldZNlysksJbwUVqtBJEEn4Bplqi/h8ntkTFp8tIZNSzD+0l2Mrd4ZH 7DazUCfPdPsiI/TTGs6+FHT8ommlLDK9ZJBwjL+idM1hz5bGelBxRbAVBqp02/snoh8M tIfibAIjLtHJFgqHeQS5YlXN8N86lhMXj0k3UWCe1Uh2ZiZyr9qbDr8LxVhyhjt6dji6 u6LveIu6eVX92tqGAv/npFLht2XfJDH32OGDofw4XaTZPs6wv/j+dSbYTxHqf+9ucRWO KRKSipOtKUjqqs8J4msrn960ovDz6cHXjIOsQa6V9RaNiJlgZaYU8E3Ql4hngc+dKAOZ TiUA== X-Gm-Message-State: AOJu0YwvYPrKSLX3JSCdkFHCFUikM554MdW3kQx2sbOYTJGDyvwWAmKg jEbh4wBpbNZVsB6Z72lsnNXwdp+AMoYDp8rpT4GH7gOYfoRXtWDICxZnLgw0e5CbVw8/UEDSrOA dVe8r/hqJUGMENs2dN1t0POCduTxO3j+CE/yyfDD8DipPSU7cMiSFlxZtGJrXQGaOYeFlwxnqph nDansCF6dc/u9F+qe2oVFcE53AFKjH330o3ZpT9SNn4tsLpY3t9g== X-Google-Smtp-Source: AGHT+IEpbUWvUFEJYbDyCmd3R9Df4nrfFgF0AMHrQKS+Vp6pVqfn6MISuxEsOLiO//CrBDoxCfHNqPtyGmXE X-Received: from pgbi37.prod.google.com ([2002:a63:5425:0:b0:b1f:fd39:8314]) (user=ynaffit job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6a20:3d07:b0:234:8b24:108d with SMTP id adf61e73a8af0-23812947ebamr7492831637.22.1752714641247; Wed, 16 Jul 2025 18:10:41 -0700 (PDT) Date: Wed, 16 Jul 2025 18:10:09 -0700 In-Reply-To: <20250717011011.3365074-1-ynaffit@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250717011011.3365074-1-ynaffit@google.com> X-Mailer: git-send-email 2.50.0.727.gbf7dc18ff4-goog Message-ID: <20250717011011.3365074-7-ynaffit@google.com> Subject: [PATCH v4 6/6] binder: encapsulate individual alloc test cases From: Tiffany Yang To: linux-kernel@vger.kernel.org Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman , "=?UTF-8?q?Arve=20Hj=C3=B8nnev=C3=A5g?=" , Todd Kjos , Martijn Coenen , Joel Fernandes , Christian Brauner , Carlos Llamas , Suren Baghdasaryan , Brendan Higgins , David Gow , Rae Moar , linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, Kees Cook Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Each case tested by the binder allocator test is defined by 3 parameters: the end alignment type of each requested buffer allocation, whether those buffers share the front or back pages of the allotted address space, and the order in which those buffers should be released. The alignment type represents how a binder buffer may be laid out within or across page boundaries and relative to other buffers, and it's used along with whether the buffers cover part (sharing the front pages) of or all (sharing the back pages) of the vma to calculate the sizes passed into each test. binder_alloc_test_alloc recursively generates each possible arrangement of alignment types and then tests that the binder_alloc code tracks pages correctly when those buffers are allocated and then freed in every possible order at both ends of the address space. While they provide comprehensive coverage, they are poor candidates to be represented as KUnit test cases, which must be statically enumerated. For 5 buffers and 5 end alignment types, the test case array would have 750,000 entries. This change structures the recursive calls into meaningful test cases so that failures are easier to interpret. 
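
For reference, the arithmetic behind that 750,000 figure: a stand-alone
sketch, not part of the patch. The helper below is hypothetical, but the
constants mirror BUFFER_NUM, LOOP_END, and TOTAL_EXHAUSTIVE_CASES in the
diff that follows.

#include <stdio.h>

#define BUFFER_NUM  5   /* buffers allocated per case */
#define ALIGN_TYPES 5   /* values in enum buf_end_align_type (LOOP_END) */

/* n! -- number of distinct free orders for n buffers */
static unsigned long factorial(unsigned int n)
{
        unsigned long f = 1;

        while (n > 1)
                f *= n--;
        return f;
}

int main(void)
{
        unsigned long alignments = 1;
        unsigned long placements = 2;   /* buffers share front or back pages */
        int i;

        /* each of the BUFFER_NUM buffer ends picks one alignment type: 5^5 */
        for (i = 0; i < BUFFER_NUM; i++)
                alignments *= ALIGN_TYPES;

        /* 3125 * 2 * 120 = 750000, i.e. TOTAL_EXHAUSTIVE_CASES */
        printf("%lu cases\n", alignments * placements * factorial(BUFFER_NUM));
        return 0;
}

A static KUnit test-case array of that size is clearly impractical, hence
the grouping of the recursion into a single exhaustive case below.
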
Cc: Kees Cook Acked-by: Carlos Llamas Signed-off-by: Tiffany Yang Reviewed-by: Joel Fernandes Reviewed-by: Kees Cook --- v2: * Fix build warning Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202506281959.hfOTIUjS-lkp@i= ntel.com/ v4: * Replace snprintf with seq_buf functions * Collected tag --- drivers/android/tests/binder_alloc_kunit.c | 226 ++++++++++++++++----- 1 file changed, 175 insertions(+), 51 deletions(-) diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/t= ests/binder_alloc_kunit.c index 2f6077e96ae6..9b884d977f76 100644 --- a/drivers/android/tests/binder_alloc_kunit.c +++ b/drivers/android/tests/binder_alloc_kunit.c @@ -15,6 +15,7 @@ #include #include #include +#include #include =20 #include "../binder_alloc.h" @@ -27,7 +28,16 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING"); #define BUFFER_NUM 5 #define BUFFER_MIN_SIZE (PAGE_SIZE / 8) =20 -static int binder_alloc_test_failures; +#define FREESEQ_BUFLEN ((3 * BUFFER_NUM) + 1) + +#define ALIGN_TYPE_STRLEN (12) + +#define ALIGNMENTS_BUFLEN (((ALIGN_TYPE_STRLEN + 6) * BUFFER_NUM) + 1) + +#define PRINT_ALL_CASES (0) + +/* 5^5 alignment combinations * 2 places to share pages * 5! free sequence= s */ +#define TOTAL_EXHAUSTIVE_CASES (3125 * 2 * 120) =20 /** * enum buf_end_align_type - Page alignment of a buffer @@ -89,18 +99,42 @@ enum buf_end_align_type { LOOP_END, }; =20 -static void pr_err_size_seq(struct kunit *test, size_t *sizes, int *seq) +static const char *const buf_end_align_type_strs[LOOP_END] =3D { + [SAME_PAGE_UNALIGNED] =3D "SP_UNALIGNED", + [SAME_PAGE_ALIGNED] =3D " SP_ALIGNED ", + [NEXT_PAGE_UNALIGNED] =3D "NP_UNALIGNED", + [NEXT_PAGE_ALIGNED] =3D " NP_ALIGNED ", + [NEXT_NEXT_UNALIGNED] =3D "NN_UNALIGNED", +}; + +struct binder_alloc_test_case_info { + char alignments[ALIGNMENTS_BUFLEN]; + struct seq_buf alignments_sb; + size_t *buffer_sizes; + int *free_sequence; + bool front_pages; +}; + +static void stringify_free_seq(struct kunit *test, int *seq, struct seq_bu= f *sb) { int i; =20 - kunit_err(test, "alloc sizes: "); for (i =3D 0; i < BUFFER_NUM; i++) - pr_cont("[%zu]", sizes[i]); - pr_cont("\n"); - kunit_err(test, "free seq: "); + seq_buf_printf(sb, "[%d]", seq[i]); + + KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(sb)); +} + +static void stringify_alignments(struct kunit *test, int *alignments, + struct seq_buf *sb) +{ + int i; + for (i =3D 0; i < BUFFER_NUM; i++) - pr_cont("[%d]", seq[i]); - pr_cont("\n"); + seq_buf_printf(sb, "[ %d:%s ]", i, + buf_end_align_type_strs[alignments[i]]); + + KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(sb)); } =20 static bool check_buffer_pages_allocated(struct kunit *test, @@ -127,28 +161,30 @@ static bool check_buffer_pages_allocated(struct kunit= *test, return true; } =20 -static void binder_alloc_test_alloc_buf(struct kunit *test, - struct binder_alloc *alloc, - struct binder_buffer *buffers[], - size_t *sizes, int *seq) +static unsigned long binder_alloc_test_alloc_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq) { + unsigned long failures =3D 0; int i; =20 for (i =3D 0; i < BUFFER_NUM; i++) { buffers[i] =3D binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0); if (IS_ERR(buffers[i]) || - !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i])) { - pr_err_size_seq(test, sizes, seq); - binder_alloc_test_failures++; - } + !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i])) + failures++; } + + return failures; } =20 -static void 
binder_alloc_test_free_buf(struct kunit *test, - struct binder_alloc *alloc, - struct binder_buffer *buffers[], - size_t *sizes, int *seq, size_t end) +static unsigned long binder_alloc_test_free_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq, size_t end) { + unsigned long failures =3D 0; int i; =20 for (i =3D 0; i < BUFFER_NUM; i++) @@ -156,17 +192,19 @@ static void binder_alloc_test_free_buf(struct kunit *= test, =20 for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { if (list_empty(page_to_lru(alloc->pages[i]))) { - pr_err_size_seq(test, sizes, seq); kunit_err(test, "expect lru but is %s at page index %d\n", alloc->pages[i] ? "alloc" : "free", i); - binder_alloc_test_failures++; + failures++; } } + + return failures; } =20 -static void binder_alloc_test_free_page(struct kunit *test, - struct binder_alloc *alloc) +static unsigned long binder_alloc_test_free_page(struct kunit *test, + struct binder_alloc *alloc) { + unsigned long failures =3D 0; unsigned long count; int i; =20 @@ -180,27 +218,70 @@ static void binder_alloc_test_free_page(struct kunit = *test, kunit_err(test, "expect free but is %s at page index %d\n", list_empty(page_to_lru(alloc->pages[i])) ? "alloc" : "lru", i); - binder_alloc_test_failures++; + failures++; } } + + return failures; } =20 -static void binder_alloc_test_alloc_free(struct kunit *test, +/* Executes one full test run for the given test case. */ +static bool binder_alloc_test_alloc_free(struct kunit *test, struct binder_alloc *alloc, - size_t *sizes, int *seq, size_t end) + struct binder_alloc_test_case_info *tc, + size_t end) { + unsigned long pages =3D PAGE_ALIGN(end) / PAGE_SIZE; struct binder_buffer *buffers[BUFFER_NUM]; - - binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); - binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); + unsigned long failures; + bool failed =3D false; + + failures =3D binder_alloc_test_alloc_buf(test, alloc, buffers, + tc->buffer_sizes, + tc->free_sequence); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Initial allocation failed: %lu/%u buffers with errors", + failures, BUFFER_NUM); + + failures =3D binder_alloc_test_free_buf(test, alloc, buffers, + tc->buffer_sizes, + tc->free_sequence, end); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Initial buffers not freed correctly: %lu/%lu pages not on lru list= ", + failures, pages); =20 /* Allocate from lru. 
*/ - binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); - if (list_lru_count(alloc->freelist)) - kunit_err(test, "lru list should be empty but is not\n"); - - binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); - binder_alloc_test_free_page(test, alloc); + failures =3D binder_alloc_test_alloc_buf(test, alloc, buffers, + tc->buffer_sizes, + tc->free_sequence); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Reallocation failed: %lu/%u buffers with errors", + failures, BUFFER_NUM); + + failures =3D list_lru_count(alloc->freelist); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "lru list should be empty after reallocation but still has %lu page= s", + failures); + + failures =3D binder_alloc_test_free_buf(test, alloc, buffers, + tc->buffer_sizes, + tc->free_sequence, end); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Reallocated buffers not freed correctly: %lu/%lu pages not on lru = list", + failures, pages); + + failures =3D binder_alloc_test_free_page(test, alloc); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Failed to clean up allocated pages: %lu/%lu pages still installed", + failures, (alloc->buffer_size / PAGE_SIZE)); + + return failed; } =20 static bool is_dup(int *seq, int index, int val) @@ -216,24 +297,45 @@ static bool is_dup(int *seq, int index, int val) =20 /* Generate BUFFER_NUM factorial free orders. */ static void permute_frees(struct kunit *test, struct binder_alloc *alloc, - size_t *sizes, int *seq, int index, size_t end) + struct binder_alloc_test_case_info *tc, + unsigned long *runs, unsigned long *failures, + int index, size_t end) { + bool case_failed; int i; =20 if (index =3D=3D BUFFER_NUM) { - binder_alloc_test_alloc_free(test, alloc, sizes, seq, end); + DECLARE_SEQ_BUF(freeseq_sb, FREESEQ_BUFLEN); + + case_failed =3D binder_alloc_test_alloc_free(test, alloc, tc, end); + *runs +=3D 1; + *failures +=3D case_failed; + + if (case_failed || PRINT_ALL_CASES) { + stringify_free_seq(test, tc->free_sequence, + &freeseq_sb); + kunit_err(test, "case %lu: [%s] | %s - %s - %s", *runs, + case_failed ? "FAILED" : "PASSED", + tc->front_pages ? "front" : "back ", + seq_buf_str(&tc->alignments_sb), + seq_buf_str(&freeseq_sb)); + } + return; } for (i =3D 0; i < BUFFER_NUM; i++) { - if (is_dup(seq, index, i)) + if (is_dup(tc->free_sequence, index, i)) continue; - seq[index] =3D i; - permute_frees(test, alloc, sizes, seq, index + 1, end); + tc->free_sequence[index] =3D i; + permute_frees(test, alloc, tc, runs, failures, index + 1, end); } } =20 -static void gen_buf_sizes(struct kunit *test, struct binder_alloc *alloc, - size_t *end_offset) +static void gen_buf_sizes(struct kunit *test, + struct binder_alloc *alloc, + struct binder_alloc_test_case_info *tc, + size_t *end_offset, unsigned long *runs, + unsigned long *failures) { size_t last_offset, offset =3D 0; size_t front_sizes[BUFFER_NUM]; @@ -241,31 +343,46 @@ static void gen_buf_sizes(struct kunit *test, struct = binder_alloc *alloc, int seq[BUFFER_NUM] =3D {0}; int i; =20 + tc->free_sequence =3D seq; for (i =3D 0; i < BUFFER_NUM; i++) { last_offset =3D offset; offset =3D end_offset[i]; front_sizes[i] =3D offset - last_offset; back_sizes[BUFFER_NUM - i - 1] =3D front_sizes[i]; } + back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; + /* * Buffers share the first or last few pages. 
* Only BUFFER_NUM - 1 buffer sizes are adjustable since * we need one giant buffer before getting to the last page. */ - back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; - permute_frees(test, alloc, front_sizes, seq, 0, + tc->front_pages =3D true; + tc->buffer_sizes =3D front_sizes; + permute_frees(test, alloc, tc, runs, failures, 0, end_offset[BUFFER_NUM - 1]); - permute_frees(test, alloc, back_sizes, seq, 0, alloc->buffer_size); + + tc->front_pages =3D false; + tc->buffer_sizes =3D back_sizes; + permute_frees(test, alloc, tc, runs, failures, 0, alloc->buffer_size); } =20 static void gen_buf_offsets(struct kunit *test, struct binder_alloc *alloc, - size_t *end_offset, int index) + size_t *end_offset, int *alignments, + unsigned long *runs, unsigned long *failures, + int index) { size_t end, prev; int align; =20 if (index =3D=3D BUFFER_NUM) { - gen_buf_sizes(test, alloc, end_offset); + struct binder_alloc_test_case_info tc =3D {0}; + + seq_buf_init(&tc.alignments_sb, tc.alignments, + ALIGNMENTS_BUFLEN); + stringify_alignments(test, alignments, &tc.alignments_sb); + + gen_buf_sizes(test, alloc, &tc, end_offset, runs, failures); return; } prev =3D index =3D=3D 0 ? 0 : end_offset[index - 1]; @@ -279,7 +396,9 @@ static void gen_buf_offsets(struct kunit *test, struct = binder_alloc *alloc, else end +=3D BUFFER_MIN_SIZE; end_offset[index] =3D end; - gen_buf_offsets(test, alloc, end_offset, index + 1); + alignments[index] =3D align; + gen_buf_offsets(test, alloc, end_offset, alignments, runs, + failures, index + 1); } } =20 @@ -331,10 +450,15 @@ static void binder_alloc_exhaustive_test(struct kunit= *test) { struct binder_alloc_test *priv =3D test->priv; size_t end_offset[BUFFER_NUM]; + int alignments[BUFFER_NUM]; + unsigned long failures =3D 0; + unsigned long runs =3D 0; =20 - gen_buf_offsets(test, &priv->alloc, end_offset, 0); + gen_buf_offsets(test, &priv->alloc, end_offset, alignments, &runs, + &failures, 0); =20 - KUNIT_EXPECT_EQ(test, binder_alloc_test_failures, 0); + KUNIT_EXPECT_EQ(test, runs, TOTAL_EXHAUSTIVE_CASES); + KUNIT_EXPECT_EQ(test, failures, 0); } =20 /* =3D=3D=3D=3D=3D End test cases =3D=3D=3D=3D=3D */ --=20 2.50.0.727.gbf7dc18ff4-goog