From nobody Wed Oct 8 05:22:11 2025
Date: Tue, 1 Jul 2025 18:04:41 -0700
In-Reply-To: <20250702010447.2994412-1-ynaffit@google.com>
References: <20250702010447.2994412-1-ynaffit@google.com>
Message-ID: <20250702010447.2994412-2-ynaffit@google.com>
Subject: [PATCH v2 1/5] binder: Fix selftest page indexing
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan,
    Brendan Higgins, David Gow, Rae Moar,
    linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com

The binder allocator selftest was only checking the last page of
buffers that ended on a page boundary. Correct the page indexing to
account for buffers that are not page-aligned.

Signed-off-by: Tiffany Yang
---
 drivers/android/binder_alloc_selftest.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index c88735c54848..486af3ec3c02 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -142,12 +142,12 @@ static void binder_selftest_free_buf(struct binder_alloc *alloc,
 	for (i = 0; i < BUFFER_NUM; i++)
 		binder_alloc_free_buf(alloc, buffers[seq[i]]);
 
-	for (i = 0; i < end / PAGE_SIZE; i++) {
 		/**
 		 * Error message on a free page can be false positive
 		 * if binder shrinker ran during binder_alloc_free_buf
 		 * calls above.
 		 */
+	for (i = 0; i <= (end - 1) / PAGE_SIZE; i++) {
 		if (list_empty(page_to_lru(alloc->pages[i]))) {
 			pr_err_size_seq(sizes, seq);
 			pr_err("expect lru but is %s at page index %d\n",
-- 
2.50.0.727.gbf7dc18ff4-goog
Date: Tue, 1 Jul 2025 18:04:42 -0700
In-Reply-To: <20250702010447.2994412-1-ynaffit@google.com>
References: <20250702010447.2994412-1-ynaffit@google.com>
Message-ID: <20250702010447.2994412-3-ynaffit@google.com>
Subject: [PATCH v2 2/5] binder: Store lru freelist in binder_alloc
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan,
    Brendan Higgins, David Gow, Rae Moar,
    linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com

Store a pointer to the free pages list that the binder allocator should
use for a process inside struct binder_alloc. This change allows binder
allocator code to be tested and debugged deterministically while a
system is using binder; i.e., without interfering with other binder
processes and independently of the shrinker. This is necessary to
convert the current binder_alloc_selftest into a KUnit test that does
not rely on hijacking an existing binder_proc to run.

A binder process's binder_alloc->freelist should not be changed after
it is initialized. The sole exception is the process that runs the
existing binder_alloc selftest. Its freelist can be temporarily
replaced for the duration of the test because it runs as a single
thread before any pages can be added to the global binder freelist,
and the test frees every page it allocates before dropping the
binder_selftest_lock.
This exception allows the existing selftest to be used to check for
regressions, but it will be dropped when the binder_alloc tests are
converted to KUnit in a subsequent patch in this series.

Signed-off-by: Tiffany Yang
---
 drivers/android/binder_alloc.c          | 25 +++++++----
 drivers/android/binder_alloc.h          |  3 +-
 drivers/android/binder_alloc_selftest.c | 59 ++++++++++++++++++++-----
 3 files changed, 67 insertions(+), 20 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index fcfaf1b899c8..2e89f9127883 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -26,7 +26,7 @@
 #include "binder_alloc.h"
 #include "binder_trace.h"
 
-struct list_lru binder_freelist;
+static struct list_lru binder_freelist;
 
 static DEFINE_MUTEX(binder_alloc_mmap_lock);
 
@@ -210,7 +210,7 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 
 		trace_binder_free_lru_start(alloc, index);
 
-		ret = list_lru_add(&binder_freelist,
+		ret = list_lru_add(alloc->freelist,
 				   page_to_lru(page),
 				   page_to_nid(page),
 				   NULL);
@@ -409,7 +409,7 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		if (page) {
 			trace_binder_alloc_lru_start(alloc, index);
 
-			on_lru = list_lru_del(&binder_freelist,
+			on_lru = list_lru_del(alloc->freelist,
 					      page_to_lru(page),
 					      page_to_nid(page),
 					      NULL);
@@ -1007,7 +1007,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		if (!page)
 			continue;
 
-		on_lru = list_lru_del(&binder_freelist,
+		on_lru = list_lru_del(alloc->freelist,
 				      page_to_lru(page),
 				      page_to_nid(page),
 				      NULL);
@@ -1229,6 +1229,17 @@ binder_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 
 static struct shrinker *binder_shrinker;
 
+static void __binder_alloc_init(struct binder_alloc *alloc,
+				struct list_lru *freelist)
+{
+	alloc->pid = current->group_leader->pid;
+	alloc->mm = current->mm;
+	mmgrab(alloc->mm);
+	mutex_init(&alloc->mutex);
+	INIT_LIST_HEAD(&alloc->buffers);
+	alloc->freelist = freelist;
+}
+
 /**
  * binder_alloc_init() - called by binder_open() for per-proc initialization
  * @alloc:	binder_alloc for this proc
@@ -1238,11 +1249,7 @@ static struct shrinker *binder_shrinker;
  */
 void binder_alloc_init(struct binder_alloc *alloc)
 {
-	alloc->pid = current->group_leader->pid;
-	alloc->mm = current->mm;
-	mmgrab(alloc->mm);
-	mutex_init(&alloc->mutex);
-	INIT_LIST_HEAD(&alloc->buffers);
+	__binder_alloc_init(alloc, &binder_freelist);
 }
 
 int binder_alloc_shrinker_init(void)
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index feecd7414241..aa05a9df1360 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -15,7 +15,6 @@
 #include 
 #include 
 
-extern struct list_lru binder_freelist;
 struct binder_transaction;
 
 /**
@@ -91,6 +90,7 @@ static inline struct list_head *page_to_lru(struct page *p)
  * @free_async_space:	VA space available for async buffers. This is
  *			initialized at mmap time to 1/2 the full VA space
  * @pages:		array of struct page *
+ * @freelist:		lru list to use for free pages (invariant after init)
  * @buffer_size:	size of address space specified via mmap
  * @pid:		pid for associated binder_proc (invariant after init)
  * @pages_high:		high watermark of offset in @pages
@@ -113,6 +113,7 @@ struct binder_alloc {
 	struct rb_root allocated_buffers;
 	size_t free_async_space;
 	struct page **pages;
+	struct list_lru *freelist;
 	size_t buffer_size;
 	int pid;
 	size_t pages_high;
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index 486af3ec3c02..8b18b22aa3de 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -8,8 +8,9 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
-#include 
 #include 
+#include 
+#include 
 #include "binder_alloc.h"
 
 #define BUFFER_NUM 5
@@ -18,6 +19,7 @@
 static bool binder_selftest_run = true;
 static int binder_selftest_failures;
 static DEFINE_MUTEX(binder_selftest_lock);
+static struct list_lru binder_selftest_freelist;
 
 /**
  * enum buf_end_align_type - Page alignment of a buffer
@@ -142,11 +144,6 @@ static void binder_selftest_free_buf(struct binder_alloc *alloc,
 	for (i = 0; i < BUFFER_NUM; i++)
 		binder_alloc_free_buf(alloc, buffers[seq[i]]);
 
-	/**
-	 * Error message on a free page can be false positive
-	 * if binder shrinker ran during binder_alloc_free_buf
-	 * calls above.
-	 */
 	for (i = 0; i <= (end - 1) / PAGE_SIZE; i++) {
 		if (list_empty(page_to_lru(alloc->pages[i]))) {
 			pr_err_size_seq(sizes, seq);
@@ -162,8 +159,8 @@ static void binder_selftest_free_page(struct binder_alloc *alloc)
 	int i;
 	unsigned long count;
 
-	while ((count = list_lru_count(&binder_freelist))) {
-		list_lru_walk(&binder_freelist, binder_alloc_free_page,
+	while ((count = list_lru_count(&binder_selftest_freelist))) {
+		list_lru_walk(&binder_selftest_freelist, binder_alloc_free_page,
 			      NULL, count);
 	}
 
@@ -187,7 +184,7 @@ static void binder_selftest_alloc_free(struct binder_alloc *alloc,
 
 	/* Allocate from lru. */
 	binder_selftest_alloc_buf(alloc, buffers, sizes, seq);
-	if (list_lru_count(&binder_freelist))
+	if (list_lru_count(&binder_selftest_freelist))
 		pr_err("lru list should be empty but is not\n");
 
 	binder_selftest_free_buf(alloc, buffers, sizes, seq, end);
@@ -275,6 +272,20 @@ static void binder_selftest_alloc_offset(struct binder_alloc *alloc,
 	}
 }
 
+int binder_selftest_alloc_get_page_count(struct binder_alloc *alloc)
+{
+	struct page *page;
+	int allocated = 0;
+	int i;
+
+	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+		page = alloc->pages[i];
+		if (page)
+			allocated++;
+	}
+	return allocated;
+}
+
 /**
  * binder_selftest_alloc() - Test alloc and free of buffer pages.
  * @alloc: Pointer to alloc struct.
@@ -286,6 +297,7 @@ static void binder_selftest_alloc_offset(struct binder_alloc *alloc,
  */
 void binder_selftest_alloc(struct binder_alloc *alloc)
 {
+	struct list_lru *prev_freelist;
 	size_t end_offset[BUFFER_NUM];
 
 	if (!binder_selftest_run)
@@ -293,14 +305,41 @@ void binder_selftest_alloc(struct binder_alloc *alloc)
 	mutex_lock(&binder_selftest_lock);
 	if (!binder_selftest_run || !alloc->mapped)
 		goto done;
+
+	prev_freelist = alloc->freelist;
+
+	/*
+	 * It is not safe to modify this process's alloc->freelist if it has any
+	 * pages on a freelist. Since the test runs before any binder ioctls can
+	 * be dealt with, none of its pages should be allocated yet.
+	 */
+	if (binder_selftest_alloc_get_page_count(alloc)) {
+		pr_err("process has existing alloc state\n");
+		goto cleanup;
+	}
+
+	if (list_lru_init(&binder_selftest_freelist)) {
+		pr_err("failed to init test freelist\n");
+		goto cleanup;
+	}
+
+	alloc->freelist = &binder_selftest_freelist;
+
 	pr_info("STARTED\n");
 	binder_selftest_alloc_offset(alloc, end_offset, 0);
-	binder_selftest_run = false;
 	if (binder_selftest_failures > 0)
 		pr_info("%d tests FAILED\n", binder_selftest_failures);
 	else
 		pr_info("PASSED\n");
 
+	if (list_lru_count(&binder_selftest_freelist))
+		pr_err("expect test freelist to be empty\n");
+
+cleanup:
+	/* Even if we didn't run the test, it's no longer thread-safe. */
+	binder_selftest_run = false;
+	alloc->freelist = prev_freelist;
+	list_lru_destroy(&binder_selftest_freelist);
 done:
 	mutex_unlock(&binder_selftest_lock);
 }
-- 
2.50.0.727.gbf7dc18ff4-goog
Date: Tue, 1 Jul 2025 18:04:43 -0700
In-Reply-To: <20250702010447.2994412-1-ynaffit@google.com>
References: <20250702010447.2994412-1-ynaffit@google.com>
Message-ID: <20250702010447.2994412-4-ynaffit@google.com>
Subject: [PATCH v2 3/5] binder: Scaffolding for binder_alloc KUnit tests
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan,
    Brendan Higgins, David Gow, Rae Moar,
    linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com

Add setup and teardown for testing binder allocator code with KUnit.
Include minimal test cases to verify that tests are initialized
correctly.

Tested-by: Rae Moar
Signed-off-by: Tiffany Yang
---
v2:
 * Added tested-by tag
---
 drivers/android/Kconfig                    |  11 ++
 drivers/android/Makefile                   |   1 +
 drivers/android/binder.c                   |   5 +-
 drivers/android/binder_alloc.c             |  15 +-
 drivers/android/binder_alloc.h             |   6 +
 drivers/android/binder_internal.h          |   4 +
 drivers/android/tests/.kunitconfig         |   3 +
 drivers/android/tests/Makefile             |   3 +
 drivers/android/tests/binder_alloc_kunit.c | 166 +++++++++++++++++++++
 include/kunit/test.h                       |  12 ++
 lib/kunit/user_alloc.c                     |   4 +-
 11 files changed, 222 insertions(+), 8 deletions(-)
 create mode 100644 drivers/android/tests/.kunitconfig
 create mode 100644 drivers/android/tests/Makefile
 create mode 100644 drivers/android/tests/binder_alloc_kunit.c

diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig
index 07aa8ae0a058..b1bc7183366c 100644
--- a/drivers/android/Kconfig
+++ b/drivers/android/Kconfig
@@ -47,4 +47,15 @@ config ANDROID_BINDER_IPC_SELFTEST
 	  exhaustively with combinations of various buffer sizes and
 	  alignments.
 
+config ANDROID_BINDER_ALLOC_KUNIT_TEST
+	tristate "KUnit Tests for Android Binder Alloc" if !KUNIT_ALL_TESTS
+	depends on ANDROID_BINDER_IPC && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  This feature builds the binder alloc KUnit tests.
+
+	  Each test case runs using a pared-down binder_alloc struct and
+	  test-specific freelist, which allows this KUnit module to be loaded
+	  for testing without interfering with a running system.
+
 endmenu
diff --git a/drivers/android/Makefile b/drivers/android/Makefile
index c9d3d0c99c25..74d02a335d4e 100644
--- a/drivers/android/Makefile
+++ b/drivers/android/Makefile
@@ -4,3 +4,4 @@ ccflags-y += -I$(src)			# needed for trace events
 obj-$(CONFIG_ANDROID_BINDERFS)		+= binderfs.o
 obj-$(CONFIG_ANDROID_BINDER_IPC)	+= binder.o binder_alloc.o
 obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) += binder_alloc_selftest.o
+obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) += tests/
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index c463ca4a8fff..9dfe90c284fc 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -68,6 +68,8 @@
 #include 
 #include 
 
+#include 
+
 #include 
 
 #include 
@@ -5956,10 +5958,11 @@ static void binder_vma_close(struct vm_area_struct *vma)
 	binder_alloc_vma_close(&proc->alloc);
 }
 
-static vm_fault_t binder_vm_fault(struct vm_fault *vmf)
+VISIBLE_IF_KUNIT vm_fault_t binder_vm_fault(struct vm_fault *vmf)
 {
 	return VM_FAULT_SIGBUS;
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_vm_fault);
 
 static const struct vm_operations_struct binder_vm_ops = {
 	.open = binder_vma_open,
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 2e89f9127883..c79e5c6721f0 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "binder_alloc.h"
 #include "binder_trace.h"
 
@@ -57,13 +58,14 @@ static struct binder_buffer *binder_buffer_prev(struct binder_buffer *buffer)
 	return list_entry(buffer->entry.prev, struct binder_buffer, entry);
 }
 
-static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
-				       struct binder_buffer *buffer)
+VISIBLE_IF_KUNIT size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
+						 struct binder_buffer *buffer)
 {
 	if (list_is_last(&buffer->entry, &alloc->buffers))
 		return alloc->vm_start + alloc->buffer_size - buffer->user_data;
 	return binder_buffer_next(buffer)->user_data - buffer->user_data;
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_buffer_size);
 
 static void binder_insert_free_buffer(struct binder_alloc *alloc,
 				      struct binder_buffer *new_buffer)
@@ -959,7 +961,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	       failure_string, ret);
 	return ret;
 }
-
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_mmap_handler);
 
 void binder_alloc_deferred_release(struct binder_alloc *alloc)
 {
@@ -1028,6 +1030,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		     "%s: %d buffers %d, pages %d\n",
 		     __func__, alloc->pid, buffers, page_count);
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_deferred_release);
 
 /**
  * binder_alloc_print_allocated() - print buffer info
@@ -1122,6 +1125,7 @@ void binder_alloc_vma_close(struct binder_alloc *alloc)
 {
 	binder_alloc_set_mapped(alloc, false);
 }
+EXPORT_SYMBOL_IF_KUNIT(binder_alloc_vma_close);
 
 /**
  * binder_alloc_free_page() - shrinker callback to free pages
@@ -1229,8 +1233,8 @@ binder_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 
 static struct shrinker *binder_shrinker;
 
-static void __binder_alloc_init(struct binder_alloc *alloc,
-				struct list_lru *freelist)
+VISIBLE_IF_KUNIT void __binder_alloc_init(struct binder_alloc *alloc,
+					  struct list_lru *freelist)
 {
 	alloc->pid = current->group_leader->pid;
 	alloc->mm = current->mm;
@@ -1239,6 +1243,7 @@ static void __binder_alloc_init(struct binder_alloc *alloc,
 	INIT_LIST_HEAD(&alloc->buffers);
 	alloc->freelist = freelist;
 }
+EXPORT_SYMBOL_IF_KUNIT(__binder_alloc_init);
 
 /**
  * binder_alloc_init() - called by binder_open() for per-proc initialization
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index aa05a9df1360..dc8dce2469a7 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -188,5 +188,11 @@ int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
 				  binder_size_t buffer_offset,
 				  size_t bytes);
 
+#if IS_ENABLED(CONFIG_KUNIT)
+void __binder_alloc_init(struct binder_alloc *alloc, struct list_lru *freelist);
+size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
+				struct binder_buffer *buffer);
+#endif
+
 #endif /* _LINUX_BINDER_ALLOC_H */
 
diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
index 1ba5caf1d88d..b5d3014fb4dc 100644
--- a/drivers/android/binder_internal.h
+++ b/drivers/android/binder_internal.h
@@ -592,4 +592,8 @@ void binder_add_device(struct binder_device *device);
  */
 void binder_remove_device(struct binder_device *device);
 
+#if IS_ENABLED(CONFIG_KUNIT)
+vm_fault_t binder_vm_fault(struct vm_fault *vmf);
+#endif
+
 #endif /* _LINUX_BINDER_INTERNAL_H */
diff --git a/drivers/android/tests/.kunitconfig b/drivers/android/tests/.kunitconfig
new file mode 100644
index 000000000000..a73601231049
--- /dev/null
+++ b/drivers/android/tests/.kunitconfig
@@ -0,0 +1,3 @@
+CONFIG_KUNIT=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST=y
diff --git a/drivers/android/tests/Makefile b/drivers/android/tests/Makefile
new file mode 100644
index 000000000000..6780967e573b
--- /dev/null
+++ b/drivers/android/tests/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) += binder_alloc_kunit.o
diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/tests/binder_alloc_kunit.c
new file mode 100644
index 000000000000..4b68b5687d33
--- /dev/null
+++ b/drivers/android/tests/binder_alloc_kunit.c
@@ -0,0 +1,166 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test cases for binder allocator code
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../binder_alloc.h"
+#include "../binder_internal.h"
+
+MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
+
+#define BINDER_MMAP_SIZE SZ_128K
+
+struct binder_alloc_test {
+	struct binder_alloc alloc;
+	struct list_lru binder_test_freelist;
+	struct file *filp;
+	unsigned long mmap_uaddr;
+};
+
+static void binder_alloc_test_init_freelist(struct kunit *test)
+{
+	struct binder_alloc_test *priv = test->priv;
+
+	KUNIT_EXPECT_PTR_EQ(test, priv->alloc.freelist,
+			    &priv->binder_test_freelist);
+}
+
+static void binder_alloc_test_mmap(struct kunit *test)
+{
+	struct binder_alloc_test *priv = test->priv;
+	struct binder_alloc *alloc = &priv->alloc;
+	struct binder_buffer *buf;
+	struct rb_node *n;
+
+	KUNIT_EXPECT_EQ(test, alloc->mapped, true);
+	KUNIT_EXPECT_EQ(test, alloc->buffer_size, BINDER_MMAP_SIZE);
+
+	n = rb_first(&alloc->allocated_buffers);
+	KUNIT_EXPECT_PTR_EQ(test, n, NULL);
+
+	n = rb_first(&alloc->free_buffers);
+	buf = rb_entry(n, struct binder_buffer, rb_node);
+	KUNIT_EXPECT_EQ(test, binder_alloc_buffer_size(alloc, buf),
+			BINDER_MMAP_SIZE);
+	KUNIT_EXPECT_TRUE(test, list_is_last(&buf->entry, &alloc->buffers));
+}
+
+/* ===== End test cases ===== */
+
+static void binder_alloc_test_vma_close(struct vm_area_struct *vma)
+{
+	struct binder_alloc *alloc = vma->vm_private_data;
+
+	binder_alloc_vma_close(alloc);
+}
+
+static const struct vm_operations_struct binder_alloc_test_vm_ops = {
+	.close = binder_alloc_test_vma_close,
+	.fault = binder_vm_fault,
+};
+
+static int binder_alloc_test_mmap_handler(struct file *filp,
+					  struct vm_area_struct *vma)
+{
+	struct binder_alloc *alloc = filp->private_data;
+
+	vm_flags_mod(vma, VM_DONTCOPY | VM_MIXEDMAP, VM_MAYWRITE);
+
+	vma->vm_ops = &binder_alloc_test_vm_ops;
+	vma->vm_private_data = alloc;
+
+	return binder_alloc_mmap_handler(alloc, vma);
+}
+
+static const struct file_operations binder_alloc_test_fops = {
+	.mmap = binder_alloc_test_mmap_handler,
+};
+
+static int binder_alloc_test_init(struct kunit *test)
+{
+	struct binder_alloc_test *priv;
+	int ret;
+
+	priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+	test->priv = priv;
+
+	ret = list_lru_init(&priv->binder_test_freelist);
+	if (ret) {
+		kunit_err(test, "Failed to initialize test freelist\n");
+		return ret;
+	}
+
+	/* __binder_alloc_init requires mm to be attached */
+	ret = kunit_attach_mm();
+	if (ret) {
+		kunit_err(test, "Failed to attach mm\n");
+		return ret;
+	}
+	__binder_alloc_init(&priv->alloc, &priv->binder_test_freelist);
+
+	priv->filp = anon_inode_getfile("binder_alloc_kunit",
+					&binder_alloc_test_fops, &priv->alloc,
+					O_RDWR | O_CLOEXEC);
+	if (IS_ERR_OR_NULL(priv->filp)) {
+		kunit_err(test, "Failed to open binder alloc test driver file\n");
+		return priv->filp ? PTR_ERR(priv->filp) : -ENOMEM;
+	}
+
+	priv->mmap_uaddr = kunit_vm_mmap(test, priv->filp, 0, BINDER_MMAP_SIZE,
+					 PROT_READ, MAP_PRIVATE | MAP_NORESERVE,
+					 0);
+	if (!priv->mmap_uaddr) {
+		kunit_err(test, "Could not map the test's transaction memory\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void binder_alloc_test_exit(struct kunit *test)
+{
+	struct binder_alloc_test *priv = test->priv;
+
+	/* Close the backing file to make sure binder_alloc_vma_close runs */
+	if (!IS_ERR_OR_NULL(priv->filp))
+		fput(priv->filp);
+
+	if (priv->alloc.mm)
+		binder_alloc_deferred_release(&priv->alloc);
+
+	/* Make sure freelist is empty */
+	KUNIT_EXPECT_EQ(test, list_lru_count(&priv->binder_test_freelist), 0);
+	list_lru_destroy(&priv->binder_test_freelist);
+}
+
+static struct kunit_case binder_alloc_test_cases[] = {
+	KUNIT_CASE(binder_alloc_test_init_freelist),
+	KUNIT_CASE(binder_alloc_test_mmap),
+	{}
+};
+
+static struct kunit_suite binder_alloc_test_suite = {
+	.name = "binder_alloc",
+	.test_cases = binder_alloc_test_cases,
+	.init = binder_alloc_test_init,
+	.exit = binder_alloc_test_exit,
+};
+
+kunit_test_suite(binder_alloc_test_suite);
+
+MODULE_AUTHOR("Tiffany Yang ");
+MODULE_DESCRIPTION("Binder Alloc KUnit tests");
+MODULE_LICENSE("GPL");
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 39c768f87dc9..d958ee53050e 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -531,6 +531,18 @@ static inline char *kunit_kstrdup(struct kunit *test, const char *str, gfp_t gfp
  */
 const char *kunit_kstrdup_const(struct kunit *test, const char *str, gfp_t gfp);
 
+/**
+ * kunit_attach_mm() - Create and attach a new mm if it doesn't already exist.
+ *
+ * Allocates a &struct mm_struct and attaches it to @current. In most cases, call
+ * kunit_vm_mmap() without calling kunit_attach_mm() directly. Only necessary when
+ * code under test accesses the mm before executing the mmap (e.g., to perform
+ * additional initialization beforehand).
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int kunit_attach_mm(void);
+
 /**
  * kunit_vm_mmap() - Allocate KUnit-tracked vm_mmap() area
  * @test: The test context object.
diff --git a/lib/kunit/user_alloc.c b/lib/kunit/user_alloc.c
index 46951be018be..b8cac765e620 100644
--- a/lib/kunit/user_alloc.c
+++ b/lib/kunit/user_alloc.c
@@ -22,8 +22,7 @@ struct kunit_vm_mmap_params {
 	unsigned long offset;
 };
 
-/* Create and attach a new mm if it doesn't already exist. */
-static int kunit_attach_mm(void)
+int kunit_attach_mm(void)
 {
 	struct mm_struct *mm;
 
@@ -49,6 +48,7 @@ static int kunit_attach_mm(void)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(kunit_attach_mm);
 
 static int kunit_vm_mmap_init(struct kunit_resource *res, void *context)
 {
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 05:22:11 2025
Date: Tue, 1 Jul 2025 18:04:44 -0700
In-Reply-To: <20250702010447.2994412-1-ynaffit@google.com>
References: <20250702010447.2994412-1-ynaffit@google.com>
Message-ID: <20250702010447.2994412-5-ynaffit@google.com>
Subject: [PATCH v2 4/5] binder: Convert binder_alloc selftests to KUnit
From: Tiffany Yang <ynaffit@google.com>
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
 Christian Brauner, Carlos Llamas, Suren Baghdasaryan,
 Brendan Higgins, David Gow, Rae Moar,
 linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com

Convert the existing binder_alloc_selftest tests into KUnit tests.
These tests allocate and free an exhaustive combination of buffers with
various sizes and alignments. This change allows them to be run without
blocking or otherwise interfering with other processes in binder.

This test is refactored into more meaningful cases in the subsequent
patch.
Signed-off-by: Tiffany Yang --- v2: * Fix build warning Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202506281837.hReNHJjO-lkp@i= ntel.com/ --- drivers/android/Kconfig | 10 - drivers/android/Makefile | 1 - drivers/android/binder.c | 5 - drivers/android/binder_alloc.c | 3 + drivers/android/binder_alloc.h | 5 - drivers/android/binder_alloc_selftest.c | 345 --------------------- drivers/android/tests/binder_alloc_kunit.c | 279 +++++++++++++++++ 7 files changed, 282 insertions(+), 366 deletions(-) delete mode 100644 drivers/android/binder_alloc_selftest.c diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig index b1bc7183366c..5b3b8041f827 100644 --- a/drivers/android/Kconfig +++ b/drivers/android/Kconfig @@ -37,16 +37,6 @@ config ANDROID_BINDER_DEVICES created. Each binder device has its own context manager, and is therefore logically separated from the other devices. =20 -config ANDROID_BINDER_IPC_SELFTEST - bool "Android Binder IPC Driver Selftest" - depends on ANDROID_BINDER_IPC - help - This feature allows binder selftest to run. - - Binder selftest checks the allocation and free of binder buffers - exhaustively with combinations of various buffer sizes and - alignments. 
- config ANDROID_BINDER_ALLOC_KUNIT_TEST tristate "KUnit Tests for Android Binder Alloc" if !KUNIT_ALL_TESTS depends on ANDROID_BINDER_IPC && KUNIT diff --git a/drivers/android/Makefile b/drivers/android/Makefile index 74d02a335d4e..c5d47be0276c 100644 --- a/drivers/android/Makefile +++ b/drivers/android/Makefile @@ -3,5 +3,4 @@ ccflags-y +=3D -I$(src) # needed for trace events =20 obj-$(CONFIG_ANDROID_BINDERFS) +=3D binderfs.o obj-$(CONFIG_ANDROID_BINDER_IPC) +=3D binder.o binder_alloc.o -obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) +=3D binder_alloc_selftest.o obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) +=3D tests/ diff --git a/drivers/android/binder.c b/drivers/android/binder.c index 9dfe90c284fc..7b2653a5d59c 100644 --- a/drivers/android/binder.c +++ b/drivers/android/binder.c @@ -5718,11 +5718,6 @@ static long binder_ioctl(struct file *filp, unsigned= int cmd, unsigned long arg) struct binder_thread *thread; void __user *ubuf =3D (void __user *)arg; =20 - /*pr_info("binder_ioctl: %d:%d %x %lx\n", - proc->pid, current->pid, cmd, arg);*/ - - binder_selftest_alloc(&proc->alloc); - trace_binder_ioctl(cmd, arg); =20 ret =3D wait_event_interruptible(binder_user_error_wait, binder_stop_on_u= ser_error < 2); diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index c79e5c6721f0..74a184014fa7 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -701,6 +701,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binde= r_alloc *alloc, out: return buffer; } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_new_buf); =20 static unsigned long buffer_start_page(struct binder_buffer *buffer) { @@ -879,6 +880,7 @@ void binder_alloc_free_buf(struct binder_alloc *alloc, binder_free_buf_locked(alloc, buffer); mutex_unlock(&alloc->mutex); } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_free_buf); =20 /** * binder_alloc_mmap_handler() - map virtual address space for proc @@ -1217,6 +1219,7 @@ enum lru_status binder_alloc_free_page(struct list_he= ad 
*item, err_mmget: return LRU_SKIP; } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_free_page); =20 static unsigned long binder_shrink_count(struct shrinker *shrink, struct shrink_control *sc) diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index dc8dce2469a7..bed97c2cad92 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -121,11 +121,6 @@ struct binder_alloc { bool oneway_spam_detected; }; =20 -#ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST -void binder_selftest_alloc(struct binder_alloc *alloc); -#else -static inline void binder_selftest_alloc(struct binder_alloc *alloc) {} -#endif enum lru_status binder_alloc_free_page(struct list_head *item, struct list_lru_one *lru, void *cb_arg); diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/bind= er_alloc_selftest.c deleted file mode 100644 index 8b18b22aa3de..000000000000 --- a/drivers/android/binder_alloc_selftest.c +++ /dev/null @@ -1,345 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* binder_alloc_selftest.c - * - * Android IPC Subsystem - * - * Copyright (C) 2017 Google, Inc. - */ - -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - -#include -#include -#include -#include "binder_alloc.h" - -#define BUFFER_NUM 5 -#define BUFFER_MIN_SIZE (PAGE_SIZE / 8) - -static bool binder_selftest_run =3D true; -static int binder_selftest_failures; -static DEFINE_MUTEX(binder_selftest_lock); -static struct list_lru binder_selftest_freelist; - -/** - * enum buf_end_align_type - Page alignment of a buffer - * end with regard to the end of the previous buffer. - * - * In the pictures below, buf2 refers to the buffer we - * are aligning. buf1 refers to previous buffer by addr. - * Symbol [ means the start of a buffer, ] means the end - * of a buffer, and | means page boundaries. - */ -enum buf_end_align_type { - /** - * @SAME_PAGE_UNALIGNED: The end of this buffer is on - * the same page as the end of the previous buffer and - * is not page aligned. 
Examples: - * buf1 ][ buf2 ][ ... - * buf1 ]|[ buf2 ][ ... - */ - SAME_PAGE_UNALIGNED =3D 0, - /** - * @SAME_PAGE_ALIGNED: When the end of the previous buffer - * is not page aligned, the end of this buffer is on the - * same page as the end of the previous buffer and is page - * aligned. When the previous buffer is page aligned, the - * end of this buffer is aligned to the next page boundary. - * Examples: - * buf1 ][ buf2 ]| ... - * buf1 ]|[ buf2 ]| ... - */ - SAME_PAGE_ALIGNED, - /** - * @NEXT_PAGE_UNALIGNED: The end of this buffer is on - * the page next to the end of the previous buffer and - * is not page aligned. Examples: - * buf1 ][ buf2 | buf2 ][ ... - * buf1 ]|[ buf2 | buf2 ][ ... - */ - NEXT_PAGE_UNALIGNED, - /** - * @NEXT_PAGE_ALIGNED: The end of this buffer is on - * the page next to the end of the previous buffer and - * is page aligned. Examples: - * buf1 ][ buf2 | buf2 ]| ... - * buf1 ]|[ buf2 | buf2 ]| ... - */ - NEXT_PAGE_ALIGNED, - /** - * @NEXT_NEXT_UNALIGNED: The end of this buffer is on - * the page that follows the page after the end of the - * previous buffer and is not page aligned. Examples: - * buf1 ][ buf2 | buf2 | buf2 ][ ... - * buf1 ]|[ buf2 | buf2 | buf2 ][ ... - */ - NEXT_NEXT_UNALIGNED, - /** - * @LOOP_END: The number of enum values in &buf_end_align_type. - * It is used for controlling loop termination. 
- */ - LOOP_END, -}; - -static void pr_err_size_seq(size_t *sizes, int *seq) -{ - int i; - - pr_err("alloc sizes: "); - for (i =3D 0; i < BUFFER_NUM; i++) - pr_cont("[%zu]", sizes[i]); - pr_cont("\n"); - pr_err("free seq: "); - for (i =3D 0; i < BUFFER_NUM; i++) - pr_cont("[%d]", seq[i]); - pr_cont("\n"); -} - -static bool check_buffer_pages_allocated(struct binder_alloc *alloc, - struct binder_buffer *buffer, - size_t size) -{ - unsigned long page_addr; - unsigned long end; - int page_index; - - end =3D PAGE_ALIGN(buffer->user_data + size); - page_addr =3D buffer->user_data; - for (; page_addr < end; page_addr +=3D PAGE_SIZE) { - page_index =3D (page_addr - alloc->vm_start) / PAGE_SIZE; - if (!alloc->pages[page_index] || - !list_empty(page_to_lru(alloc->pages[page_index]))) { - pr_err("expect alloc but is %s at page index %d\n", - alloc->pages[page_index] ? - "lru" : "free", page_index); - return false; - } - } - return true; -} - -static void binder_selftest_alloc_buf(struct binder_alloc *alloc, - struct binder_buffer *buffers[], - size_t *sizes, int *seq) -{ - int i; - - for (i =3D 0; i < BUFFER_NUM; i++) { - buffers[i] =3D binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0); - if (IS_ERR(buffers[i]) || - !check_buffer_pages_allocated(alloc, buffers[i], - sizes[i])) { - pr_err_size_seq(sizes, seq); - binder_selftest_failures++; - } - } -} - -static void binder_selftest_free_buf(struct binder_alloc *alloc, - struct binder_buffer *buffers[], - size_t *sizes, int *seq, size_t end) -{ - int i; - - for (i =3D 0; i < BUFFER_NUM; i++) - binder_alloc_free_buf(alloc, buffers[seq[i]]); - - for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { - if (list_empty(page_to_lru(alloc->pages[i]))) { - pr_err_size_seq(sizes, seq); - pr_err("expect lru but is %s at page index %d\n", - alloc->pages[i] ? 
"alloc" : "free", i); - binder_selftest_failures++; - } - } -} - -static void binder_selftest_free_page(struct binder_alloc *alloc) -{ - int i; - unsigned long count; - - while ((count =3D list_lru_count(&binder_selftest_freelist))) { - list_lru_walk(&binder_selftest_freelist, binder_alloc_free_page, - NULL, count); - } - - for (i =3D 0; i < (alloc->buffer_size / PAGE_SIZE); i++) { - if (alloc->pages[i]) { - pr_err("expect free but is %s at page index %d\n", - list_empty(page_to_lru(alloc->pages[i])) ? - "alloc" : "lru", i); - binder_selftest_failures++; - } - } -} - -static void binder_selftest_alloc_free(struct binder_alloc *alloc, - size_t *sizes, int *seq, size_t end) -{ - struct binder_buffer *buffers[BUFFER_NUM]; - - binder_selftest_alloc_buf(alloc, buffers, sizes, seq); - binder_selftest_free_buf(alloc, buffers, sizes, seq, end); - - /* Allocate from lru. */ - binder_selftest_alloc_buf(alloc, buffers, sizes, seq); - if (list_lru_count(&binder_selftest_freelist)) - pr_err("lru list should be empty but is not\n"); - - binder_selftest_free_buf(alloc, buffers, sizes, seq, end); - binder_selftest_free_page(alloc); -} - -static bool is_dup(int *seq, int index, int val) -{ - int i; - - for (i =3D 0; i < index; i++) { - if (seq[i] =3D=3D val) - return true; - } - return false; -} - -/* Generate BUFFER_NUM factorial free orders. 
*/ -static void binder_selftest_free_seq(struct binder_alloc *alloc, - size_t *sizes, int *seq, - int index, size_t end) -{ - int i; - - if (index =3D=3D BUFFER_NUM) { - binder_selftest_alloc_free(alloc, sizes, seq, end); - return; - } - for (i =3D 0; i < BUFFER_NUM; i++) { - if (is_dup(seq, index, i)) - continue; - seq[index] =3D i; - binder_selftest_free_seq(alloc, sizes, seq, index + 1, end); - } -} - -static void binder_selftest_alloc_size(struct binder_alloc *alloc, - size_t *end_offset) -{ - int i; - int seq[BUFFER_NUM] =3D {0}; - size_t front_sizes[BUFFER_NUM]; - size_t back_sizes[BUFFER_NUM]; - size_t last_offset, offset =3D 0; - - for (i =3D 0; i < BUFFER_NUM; i++) { - last_offset =3D offset; - offset =3D end_offset[i]; - front_sizes[i] =3D offset - last_offset; - back_sizes[BUFFER_NUM - i - 1] =3D front_sizes[i]; - } - /* - * Buffers share the first or last few pages. - * Only BUFFER_NUM - 1 buffer sizes are adjustable since - * we need one giant buffer before getting to the last page. - */ - back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; - binder_selftest_free_seq(alloc, front_sizes, seq, 0, - end_offset[BUFFER_NUM - 1]); - binder_selftest_free_seq(alloc, back_sizes, seq, 0, alloc->buffer_size); -} - -static void binder_selftest_alloc_offset(struct binder_alloc *alloc, - size_t *end_offset, int index) -{ - int align; - size_t end, prev; - - if (index =3D=3D BUFFER_NUM) { - binder_selftest_alloc_size(alloc, end_offset); - return; - } - prev =3D index =3D=3D 0 ? 
0 : end_offset[index - 1]; - end =3D prev; - - BUILD_BUG_ON(BUFFER_MIN_SIZE * BUFFER_NUM >=3D PAGE_SIZE); - - for (align =3D SAME_PAGE_UNALIGNED; align < LOOP_END; align++) { - if (align % 2) - end =3D ALIGN(end, PAGE_SIZE); - else - end +=3D BUFFER_MIN_SIZE; - end_offset[index] =3D end; - binder_selftest_alloc_offset(alloc, end_offset, index + 1); - } -} - -int binder_selftest_alloc_get_page_count(struct binder_alloc *alloc) -{ - struct page *page; - int allocated =3D 0; - int i; - - for (i =3D 0; i < alloc->buffer_size / PAGE_SIZE; i++) { - page =3D alloc->pages[i]; - if (page) - allocated++; - } - return allocated; -} - -/** - * binder_selftest_alloc() - Test alloc and free of buffer pages. - * @alloc: Pointer to alloc struct. - * - * Allocate BUFFER_NUM buffers to cover all page alignment cases, - * then free them in all orders possible. Check that pages are - * correctly allocated, put onto lru when buffers are freed, and - * are freed when binder_alloc_free_page is called. - */ -void binder_selftest_alloc(struct binder_alloc *alloc) -{ - struct list_lru *prev_freelist; - size_t end_offset[BUFFER_NUM]; - - if (!binder_selftest_run) - return; - mutex_lock(&binder_selftest_lock); - if (!binder_selftest_run || !alloc->mapped) - goto done; - - prev_freelist =3D alloc->freelist; - - /* - * It is not safe to modify this process's alloc->freelist if it has any - * pages on a freelist. Since the test runs before any binder ioctls can - * be dealt with, none of its pages should be allocated yet. 
- */ - if (binder_selftest_alloc_get_page_count(alloc)) { - pr_err("process has existing alloc state\n"); - goto cleanup; - } - - if (list_lru_init(&binder_selftest_freelist)) { - pr_err("failed to init test freelist\n"); - goto cleanup; - } - - alloc->freelist =3D &binder_selftest_freelist; - - pr_info("STARTED\n"); - binder_selftest_alloc_offset(alloc, end_offset, 0); - if (binder_selftest_failures > 0) - pr_info("%d tests FAILED\n", binder_selftest_failures); - else - pr_info("PASSED\n"); - - if (list_lru_count(&binder_selftest_freelist)) - pr_err("expect test freelist to be empty\n"); - -cleanup: - /* Even if we didn't run the test, it's no longer thread-safe. */ - binder_selftest_run =3D false; - alloc->freelist =3D prev_freelist; - list_lru_destroy(&binder_selftest_freelist); -done: - mutex_unlock(&binder_selftest_lock); -} diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/t= ests/binder_alloc_kunit.c index 4b68b5687d33..9e185e2036e5 100644 --- a/drivers/android/tests/binder_alloc_kunit.c +++ b/drivers/android/tests/binder_alloc_kunit.c @@ -21,6 +21,265 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING"); =20 #define BINDER_MMAP_SIZE SZ_128K =20 +#define BUFFER_NUM 5 +#define BUFFER_MIN_SIZE (PAGE_SIZE / 8) + +static int binder_alloc_test_failures; + +/** + * enum buf_end_align_type - Page alignment of a buffer + * end with regard to the end of the previous buffer. + * + * In the pictures below, buf2 refers to the buffer we + * are aligning. buf1 refers to previous buffer by addr. + * Symbol [ means the start of a buffer, ] means the end + * of a buffer, and | means page boundaries. + */ +enum buf_end_align_type { + /** + * @SAME_PAGE_UNALIGNED: The end of this buffer is on + * the same page as the end of the previous buffer and + * is not page aligned. Examples: + * buf1 ][ buf2 ][ ... + * buf1 ]|[ buf2 ][ ... 
+ */ + SAME_PAGE_UNALIGNED =3D 0, + /** + * @SAME_PAGE_ALIGNED: When the end of the previous buffer + * is not page aligned, the end of this buffer is on the + * same page as the end of the previous buffer and is page + * aligned. When the previous buffer is page aligned, the + * end of this buffer is aligned to the next page boundary. + * Examples: + * buf1 ][ buf2 ]| ... + * buf1 ]|[ buf2 ]| ... + */ + SAME_PAGE_ALIGNED, + /** + * @NEXT_PAGE_UNALIGNED: The end of this buffer is on + * the page next to the end of the previous buffer and + * is not page aligned. Examples: + * buf1 ][ buf2 | buf2 ][ ... + * buf1 ]|[ buf2 | buf2 ][ ... + */ + NEXT_PAGE_UNALIGNED, + /** + * @NEXT_PAGE_ALIGNED: The end of this buffer is on + * the page next to the end of the previous buffer and + * is page aligned. Examples: + * buf1 ][ buf2 | buf2 ]| ... + * buf1 ]|[ buf2 | buf2 ]| ... + */ + NEXT_PAGE_ALIGNED, + /** + * @NEXT_NEXT_UNALIGNED: The end of this buffer is on + * the page that follows the page after the end of the + * previous buffer and is not page aligned. Examples: + * buf1 ][ buf2 | buf2 | buf2 ][ ... + * buf1 ]|[ buf2 | buf2 | buf2 ][ ... + */ + NEXT_NEXT_UNALIGNED, + /** + * @LOOP_END: The number of enum values in &buf_end_align_type. + * It is used for controlling loop termination. 
+ */ + LOOP_END, +}; + +static void pr_err_size_seq(struct kunit *test, size_t *sizes, int *seq) +{ + int i; + + kunit_err(test, "alloc sizes: "); + for (i =3D 0; i < BUFFER_NUM; i++) + pr_cont("[%zu]", sizes[i]); + pr_cont("\n"); + kunit_err(test, "free seq: "); + for (i =3D 0; i < BUFFER_NUM; i++) + pr_cont("[%d]", seq[i]); + pr_cont("\n"); +} + +static bool check_buffer_pages_allocated(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffer, + size_t size) +{ + unsigned long page_addr; + unsigned long end; + int page_index; + + end =3D PAGE_ALIGN(buffer->user_data + size); + page_addr =3D buffer->user_data; + for (; page_addr < end; page_addr +=3D PAGE_SIZE) { + page_index =3D (page_addr - alloc->vm_start) / PAGE_SIZE; + if (!alloc->pages[page_index] || + !list_empty(page_to_lru(alloc->pages[page_index]))) { + kunit_err(test, "expect alloc but is %s at page index %d\n", + alloc->pages[page_index] ? + "lru" : "free", page_index); + return false; + } + } + return true; +} + +static void binder_alloc_test_alloc_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq) +{ + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) { + buffers[i] =3D binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0); + if (IS_ERR(buffers[i]) || + !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i])) { + pr_err_size_seq(test, sizes, seq); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_free_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq, size_t end) +{ + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) + binder_alloc_free_buf(alloc, buffers[seq[i]]); + + for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { + if (list_empty(page_to_lru(alloc->pages[i]))) { + pr_err_size_seq(test, sizes, seq); + kunit_err(test, "expect lru but is %s at page index %d\n", + alloc->pages[i] ? 
"alloc" : "free", i); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_free_page(struct kunit *test, + struct binder_alloc *alloc) +{ + unsigned long count; + int i; + + while ((count =3D list_lru_count(alloc->freelist))) { + list_lru_walk(alloc->freelist, binder_alloc_free_page, + NULL, count); + } + + for (i =3D 0; i < (alloc->buffer_size / PAGE_SIZE); i++) { + if (alloc->pages[i]) { + kunit_err(test, "expect free but is %s at page index %d\n", + list_empty(page_to_lru(alloc->pages[i])) ? + "alloc" : "lru", i); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_alloc_free(struct kunit *test, + struct binder_alloc *alloc, + size_t *sizes, int *seq, size_t end) +{ + struct binder_buffer *buffers[BUFFER_NUM]; + + binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); + binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); + + /* Allocate from lru. */ + binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); + if (list_lru_count(alloc->freelist)) + kunit_err(test, "lru list should be empty but is not\n"); + + binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); + binder_alloc_test_free_page(test, alloc); +} + +static bool is_dup(int *seq, int index, int val) +{ + int i; + + for (i =3D 0; i < index; i++) { + if (seq[i] =3D=3D val) + return true; + } + return false; +} + +/* Generate BUFFER_NUM factorial free orders. 
*/ +static void permute_frees(struct kunit *test, struct binder_alloc *alloc, + size_t *sizes, int *seq, int index, size_t end) +{ + int i; + + if (index =3D=3D BUFFER_NUM) { + binder_alloc_test_alloc_free(test, alloc, sizes, seq, end); + return; + } + for (i =3D 0; i < BUFFER_NUM; i++) { + if (is_dup(seq, index, i)) + continue; + seq[index] =3D i; + permute_frees(test, alloc, sizes, seq, index + 1, end); + } +} + +static void gen_buf_sizes(struct kunit *test, struct binder_alloc *alloc, + size_t *end_offset) +{ + size_t last_offset, offset =3D 0; + size_t front_sizes[BUFFER_NUM]; + size_t back_sizes[BUFFER_NUM]; + int seq[BUFFER_NUM] =3D {0}; + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) { + last_offset =3D offset; + offset =3D end_offset[i]; + front_sizes[i] =3D offset - last_offset; + back_sizes[BUFFER_NUM - i - 1] =3D front_sizes[i]; + } + /* + * Buffers share the first or last few pages. + * Only BUFFER_NUM - 1 buffer sizes are adjustable since + * we need one giant buffer before getting to the last page. + */ + back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; + permute_frees(test, alloc, front_sizes, seq, 0, + end_offset[BUFFER_NUM - 1]); + permute_frees(test, alloc, back_sizes, seq, 0, alloc->buffer_size); +} + +static void gen_buf_offsets(struct kunit *test, struct binder_alloc *alloc, + size_t *end_offset, int index) +{ + size_t end, prev; + int align; + + if (index =3D=3D BUFFER_NUM) { + gen_buf_sizes(test, alloc, end_offset); + return; + } + prev =3D index =3D=3D 0 ? 
0 : end_offset[index - 1]; + end =3D prev; + + BUILD_BUG_ON(BUFFER_MIN_SIZE * BUFFER_NUM >=3D PAGE_SIZE); + + for (align =3D SAME_PAGE_UNALIGNED; align < LOOP_END; align++) { + if (align % 2) + end =3D ALIGN(end, PAGE_SIZE); + else + end +=3D BUFFER_MIN_SIZE; + end_offset[index] =3D end; + gen_buf_offsets(test, alloc, end_offset, index + 1); + } +} + struct binder_alloc_test { struct binder_alloc alloc; struct list_lru binder_test_freelist; @@ -56,6 +315,25 @@ static void binder_alloc_test_mmap(struct kunit *test) KUNIT_EXPECT_TRUE(test, list_is_last(&buf->entry, &alloc->buffers)); } =20 +/** + * binder_alloc_exhaustive_test() - Exhaustively test alloc and free of bu= ffer pages. + * @test: The test context object. + * + * Allocate BUFFER_NUM buffers to cover all page alignment cases, + * then free them in all orders possible. Check that pages are + * correctly allocated, put onto lru when buffers are freed, and + * are freed when binder_alloc_free_page() is called. + */ +static void binder_alloc_exhaustive_test(struct kunit *test) +{ + struct binder_alloc_test *priv =3D test->priv; + size_t end_offset[BUFFER_NUM]; + + gen_buf_offsets(test, &priv->alloc, end_offset, 0); + + KUNIT_EXPECT_EQ(test, binder_alloc_test_failures, 0); +} + /* =3D=3D=3D=3D=3D End test cases =3D=3D=3D=3D=3D */ =20 static void binder_alloc_test_vma_close(struct vm_area_struct *vma) @@ -149,6 +427,7 @@ static void binder_alloc_test_exit(struct kunit *test) static struct kunit_case binder_alloc_test_cases[] =3D { KUNIT_CASE(binder_alloc_test_init_freelist), KUNIT_CASE(binder_alloc_test_mmap), + KUNIT_CASE(binder_alloc_exhaustive_test), {} }; =20 --=20 2.50.0.727.gbf7dc18ff4-goog From nobody Wed Oct 8 05:22:11 2025 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9174413635E for ; Wed, 2 Jul 2025 01:05:41 
Date: Tue, 1 Jul 2025 18:04:45 -0700
In-Reply-To: <20250702010447.2994412-1-ynaffit@google.com>
References: <20250702010447.2994412-1-ynaffit@google.com>
Message-ID: <20250702010447.2994412-6-ynaffit@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
Subject: [PATCH v2 5/5] binder: encapsulate individual alloc test cases
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Christian Brauner, Carlos Llamas, Suren Baghdasaryan,
    Brendan Higgins, David Gow, Rae Moar,
    linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com

Each case tested by the binder allocator test is defined by three
parameters: the end alignment type of each requested buffer allocation,
whether those buffers share the front or the back pages of the allotted
address space, and the order in which those buffers should be released.

The alignment type represents how a binder buffer may be laid out within
or across page boundaries and relative to other buffers. It is used,
together with whether the buffers cover part of the vma (sharing the
front pages) or all of it (sharing the back pages), to calculate the
sizes passed into each test.

binder_alloc_test_alloc recursively generates each possible arrangement
of alignment types and then tests that the binder_alloc code tracks
pages correctly when those buffers are allocated and then freed in every
possible order at both ends of the address space. While these cases
provide comprehensive coverage, they are poor candidates for
representation as individual KUnit test cases, which must be statically
enumerated: for 5 buffers and 5 end alignment types, the test case array
would have 750,000 entries. Instead, this change structures the
recursive calls into meaningful test cases so that failures are easier
to interpret.
Signed-off-by: Tiffany Yang
---
v2:
  * Fix build warning
    Reported-by: kernel test robot
    Closes: https://lore.kernel.org/oe-kbuild-all/202506281959.hfOTIUjS-lkp@intel.com/
---
 drivers/android/tests/binder_alloc_kunit.c | 234 ++++++++++++++++-----
 1 file changed, 181 insertions(+), 53 deletions(-)

diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/tests/binder_alloc_kunit.c
index 9e185e2036e5..02aa4a135eb5 100644
--- a/drivers/android/tests/binder_alloc_kunit.c
+++ b/drivers/android/tests/binder_alloc_kunit.c
@@ -24,7 +24,16 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
 #define BUFFER_NUM 5
 #define BUFFER_MIN_SIZE (PAGE_SIZE / 8)
 
-static int binder_alloc_test_failures;
+#define FREESEQ_BUFLEN ((3 * BUFFER_NUM) + 1)
+
+#define ALIGN_TYPE_STRLEN (12)
+
+#define ALIGNMENTS_BUFLEN (((ALIGN_TYPE_STRLEN + 6) * BUFFER_NUM) + 1)
+
+#define PRINT_ALL_CASES (0)
+
+/* 5^5 alignment combinations * 2 places to share pages * 5! free sequences */
+#define TOTAL_EXHAUSTIVE_CASES (3125 * 2 * 120)
 
 /**
  * enum buf_end_align_type - Page alignment of a buffer
@@ -86,18 +95,49 @@ enum buf_end_align_type {
 	LOOP_END,
 };
 
-static void pr_err_size_seq(struct kunit *test, size_t *sizes, int *seq)
+static const char *const buf_end_align_type_strs[LOOP_END] = {
+	[SAME_PAGE_UNALIGNED]	= "SP_UNALIGNED",
+	[SAME_PAGE_ALIGNED]	= " SP_ALIGNED ",
+	[NEXT_PAGE_UNALIGNED]	= "NP_UNALIGNED",
+	[NEXT_PAGE_ALIGNED]	= " NP_ALIGNED ",
+	[NEXT_NEXT_UNALIGNED]	= "NN_UNALIGNED",
+};
+
+struct binder_alloc_test_case_info {
+	size_t *buffer_sizes;
+	int *free_sequence;
+	char alignments[ALIGNMENTS_BUFLEN];
+	bool front_pages;
+};
+
+static void stringify_free_seq(struct kunit *test, int *seq, char *buf,
+			       size_t buf_len)
 {
+	size_t bytes = 0;
 	int i;
 
-	kunit_err(test, "alloc sizes: ");
-	for (i = 0; i < BUFFER_NUM; i++)
-		pr_cont("[%zu]", sizes[i]);
-	pr_cont("\n");
-	kunit_err(test, "free seq: ");
-	for (i = 0; i < BUFFER_NUM; i++)
-		pr_cont("[%d]", seq[i]);
-	pr_cont("\n");
+	for (i = 0; i < BUFFER_NUM; i++) {
+		bytes += snprintf(buf + bytes, buf_len - bytes, "[%d]", seq[i]);
+		if (bytes >= buf_len)
+			break;
+	}
+	KUNIT_EXPECT_LT(test, bytes, buf_len);
+}
+
+static void stringify_alignments(struct kunit *test, int *alignments,
+				 char *buf, size_t buf_len)
+{
+	size_t bytes = 0;
+	int i;
+
+	for (i = 0; i < BUFFER_NUM; i++) {
+		bytes += snprintf(buf + bytes, buf_len - bytes, "[ %d:%s ]", i,
+				  buf_end_align_type_strs[alignments[i]]);
+		if (bytes >= buf_len)
+			break;
+	}
+
+	KUNIT_EXPECT_LT(test, bytes, buf_len);
 }
 
 static bool check_buffer_pages_allocated(struct kunit *test,
@@ -124,28 +164,30 @@ static bool check_buffer_pages_allocated(struct kunit *test,
 	return true;
 }
 
-static void binder_alloc_test_alloc_buf(struct kunit *test,
-					struct binder_alloc *alloc,
-					struct binder_buffer *buffers[],
-					size_t *sizes, int *seq)
+static unsigned long binder_alloc_test_alloc_buf(struct kunit *test,
+						 struct binder_alloc *alloc,
+						 struct binder_buffer *buffers[],
+						 size_t *sizes, int *seq)
 {
+	unsigned long failures = 0;
 	int i;
 
 	for (i = 0; i < BUFFER_NUM; i++) {
 		buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0);
 		if (IS_ERR(buffers[i]) ||
-		    !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i])) {
-			pr_err_size_seq(test, sizes, seq);
-			binder_alloc_test_failures++;
-		}
+		    !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i]))
+			failures++;
 	}
+
+	return failures;
 }
 
-static void binder_alloc_test_free_buf(struct kunit *test,
-				       struct binder_alloc *alloc,
-				       struct binder_buffer *buffers[],
-				       size_t *sizes, int *seq, size_t end)
+static unsigned long binder_alloc_test_free_buf(struct kunit *test,
+						struct binder_alloc *alloc,
+						struct binder_buffer *buffers[],
+						size_t *sizes, int *seq, size_t end)
 {
+	unsigned long failures = 0;
 	int i;
 
 	for (i = 0; i < BUFFER_NUM; i++)
@@ -153,17 +195,19 @@ static void binder_alloc_test_free_buf(struct kunit *test,
 
 	for (i = 0; i <= (end - 1) / PAGE_SIZE; i++) {
 		if (list_empty(page_to_lru(alloc->pages[i]))) {
-			pr_err_size_seq(test, sizes, seq);
 			kunit_err(test, "expect lru but is %s at page index %d\n",
 				  alloc->pages[i] ? "alloc" : "free", i);
-			binder_alloc_test_failures++;
+			failures++;
 		}
 	}
+
+	return failures;
 }
 
-static void binder_alloc_test_free_page(struct kunit *test,
-					struct binder_alloc *alloc)
+static unsigned long binder_alloc_test_free_page(struct kunit *test,
+						 struct binder_alloc *alloc)
 {
+	unsigned long failures = 0;
 	unsigned long count;
 	int i;
 
@@ -177,27 +221,70 @@ static void binder_alloc_test_free_page(struct kunit *test,
 			kunit_err(test, "expect free but is %s at page index %d\n",
 				  list_empty(page_to_lru(alloc->pages[i])) ?
 				  "alloc" : "lru", i);
-			binder_alloc_test_failures++;
+			failures++;
 		}
 	}
+
+	return failures;
 }
 
-static void binder_alloc_test_alloc_free(struct kunit *test,
+/* Executes one full test run for the given test case. */
+static bool binder_alloc_test_alloc_free(struct kunit *test,
 					 struct binder_alloc *alloc,
-					 size_t *sizes, int *seq, size_t end)
+					 struct binder_alloc_test_case_info *tc,
+					 size_t end)
 {
+	unsigned long pages = PAGE_ALIGN(end) / PAGE_SIZE;
 	struct binder_buffer *buffers[BUFFER_NUM];
-
-	binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq);
-	binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end);
+	unsigned long failures;
+	bool failed = false;
+
+	failures = binder_alloc_test_alloc_buf(test, alloc, buffers,
+					       tc->buffer_sizes,
+					       tc->free_sequence);
+	failed = failed || failures;
+	KUNIT_EXPECT_EQ_MSG(test, failures, 0,
+			    "Initial allocation failed: %lu/%u buffers with errors",
+			    failures, BUFFER_NUM);
+
+	failures = binder_alloc_test_free_buf(test, alloc, buffers,
+					      tc->buffer_sizes,
+					      tc->free_sequence, end);
+	failed = failed || failures;
+	KUNIT_EXPECT_EQ_MSG(test, failures, 0,
+			    "Initial buffers not freed correctly: %lu/%lu pages not on lru list",
+			    failures, pages);
 
 	/* Allocate from lru. */
-	binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq);
-	if (list_lru_count(alloc->freelist))
-		kunit_err(test, "lru list should be empty but is not\n");
-
-	binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end);
-	binder_alloc_test_free_page(test, alloc);
+	failures = binder_alloc_test_alloc_buf(test, alloc, buffers,
+					       tc->buffer_sizes,
+					       tc->free_sequence);
+	failed = failed || failures;
+	KUNIT_EXPECT_EQ_MSG(test, failures, 0,
+			    "Reallocation failed: %lu/%u buffers with errors",
+			    failures, BUFFER_NUM);
+
+	failures = list_lru_count(alloc->freelist);
+	failed = failed || failures;
+	KUNIT_EXPECT_EQ_MSG(test, failures, 0,
+			    "lru list should be empty after reallocation but still has %lu pages",
+			    failures);
+
+	failures = binder_alloc_test_free_buf(test, alloc, buffers,
+					      tc->buffer_sizes,
+					      tc->free_sequence, end);
+	failed = failed || failures;
+	KUNIT_EXPECT_EQ_MSG(test, failures, 0,
+			    "Reallocated buffers not freed correctly: %lu/%lu pages not on lru list",
+			    failures, pages);
+
+	failures = binder_alloc_test_free_page(test, alloc);
+	failed = failed || failures;
+	KUNIT_EXPECT_EQ_MSG(test, failures, 0,
+			    "Failed to clean up allocated pages: %lu/%lu pages still installed",
+			    failures, (alloc->buffer_size / PAGE_SIZE));
+
+	return failed;
 }
 
 static bool is_dup(int *seq, int index, int val)
@@ -213,24 +300,44 @@ static bool is_dup(int *seq, int index, int val)
 
 /* Generate BUFFER_NUM factorial free orders. */
 static void permute_frees(struct kunit *test, struct binder_alloc *alloc,
-			  size_t *sizes, int *seq, int index, size_t end)
+			  struct binder_alloc_test_case_info *tc,
+			  unsigned long *runs, unsigned long *failures,
+			  int index, size_t end)
 {
+	bool case_failed;
 	int i;
 
 	if (index == BUFFER_NUM) {
-		binder_alloc_test_alloc_free(test, alloc, sizes, seq, end);
+		char freeseq_buf[FREESEQ_BUFLEN];
+
+		case_failed = binder_alloc_test_alloc_free(test, alloc, tc, end);
+		*runs += 1;
+		*failures += case_failed;
+
+		if (case_failed || PRINT_ALL_CASES) {
+			stringify_free_seq(test, tc->free_sequence, freeseq_buf,
+					   FREESEQ_BUFLEN);
+			kunit_err(test, "case %lu: [%s] | %s - %s - %s", *runs,
				  case_failed ? "FAILED" : "PASSED",
+				  tc->front_pages ? "front" : "back ",
+				  tc->alignments, freeseq_buf);
+		}
+
 		return;
 	}
 	for (i = 0; i < BUFFER_NUM; i++) {
-		if (is_dup(seq, index, i))
+		if (is_dup(tc->free_sequence, index, i))
 			continue;
-		seq[index] = i;
-		permute_frees(test, alloc, sizes, seq, index + 1, end);
+		tc->free_sequence[index] = i;
+		permute_frees(test, alloc, tc, runs, failures, index + 1, end);
 	}
 }
 
-static void gen_buf_sizes(struct kunit *test, struct binder_alloc *alloc,
-			  size_t *end_offset)
+static void gen_buf_sizes(struct kunit *test,
+			  struct binder_alloc *alloc,
+			  struct binder_alloc_test_case_info *tc,
+			  size_t *end_offset, unsigned long *runs,
+			  unsigned long *failures)
 {
 	size_t last_offset, offset = 0;
 	size_t front_sizes[BUFFER_NUM];
@@ -238,31 +345,45 @@ static void gen_buf_sizes(struct kunit *test, struct binder_alloc *alloc,
 	int seq[BUFFER_NUM] = {0};
 	int i;
 
+	tc->free_sequence = seq;
 	for (i = 0; i < BUFFER_NUM; i++) {
 		last_offset = offset;
 		offset = end_offset[i];
 		front_sizes[i] = offset - last_offset;
 		back_sizes[BUFFER_NUM - i - 1] = front_sizes[i];
 	}
+	back_sizes[0] += alloc->buffer_size - end_offset[BUFFER_NUM - 1];
+
 	/*
 	 * Buffers share the first or last few pages.
 	 * Only BUFFER_NUM - 1 buffer sizes are adjustable since
 	 * we need one giant buffer before getting to the last page.
 	 */
-	back_sizes[0] += alloc->buffer_size - end_offset[BUFFER_NUM - 1];
-	permute_frees(test, alloc, front_sizes, seq, 0,
+	tc->front_pages = true;
+	tc->buffer_sizes = front_sizes;
+	permute_frees(test, alloc, tc, runs, failures, 0,
 		      end_offset[BUFFER_NUM - 1]);
-	permute_frees(test, alloc, back_sizes, seq, 0, alloc->buffer_size);
+
+	tc->front_pages = false;
+	tc->buffer_sizes = back_sizes;
+	permute_frees(test, alloc, tc, runs, failures, 0, alloc->buffer_size);
 }
 
 static void gen_buf_offsets(struct kunit *test, struct binder_alloc *alloc,
-			    size_t *end_offset, int index)
+			    size_t *end_offset, int *alignments,
+			    unsigned long *runs, unsigned long *failures,
+			    int index)
 {
 	size_t end, prev;
 	int align;
 
 	if (index == BUFFER_NUM) {
-		gen_buf_sizes(test, alloc, end_offset);
+		struct binder_alloc_test_case_info tc = {0};
+
+		stringify_alignments(test, alignments, tc.alignments,
+				     ALIGNMENTS_BUFLEN);
+
+		gen_buf_sizes(test, alloc, &tc, end_offset, runs, failures);
 		return;
 	}
 	prev = index == 0 ? 0 : end_offset[index - 1];
@@ -276,7 +397,9 @@ static void gen_buf_offsets(struct kunit *test, struct binder_alloc *alloc,
 		else
 			end += BUFFER_MIN_SIZE;
 		end_offset[index] = end;
-		gen_buf_offsets(test, alloc, end_offset, index + 1);
+		alignments[index] = align;
+		gen_buf_offsets(test, alloc, end_offset, alignments, runs,
+				failures, index + 1);
 	}
 }
 
@@ -328,10 +451,15 @@ static void binder_alloc_exhaustive_test(struct kunit *test)
 {
 	struct binder_alloc_test *priv = test->priv;
 	size_t end_offset[BUFFER_NUM];
+	int alignments[BUFFER_NUM];
+	unsigned long failures = 0;
+	unsigned long runs = 0;
 
-	gen_buf_offsets(test, &priv->alloc, end_offset, 0);
+	gen_buf_offsets(test, &priv->alloc, end_offset, alignments, &runs,
+			&failures, 0);
 
-	KUNIT_EXPECT_EQ(test, binder_alloc_test_failures, 0);
+	KUNIT_EXPECT_EQ(test, runs, TOTAL_EXHAUSTIVE_CASES);
+	KUNIT_EXPECT_EQ(test, failures, 0);
 }
 
 /* ===== End test cases ===== */
-- 
2.50.0.727.gbf7dc18ff4-goog