From nobody Tue Oct 7 03:50:36 2025
Date: Mon, 14 Jul 2025 11:53:14 -0700
In-Reply-To: <20250714185321.2417234-1-ynaffit@google.com>
References: <20250714185321.2417234-1-ynaffit@google.com>
Message-ID: <20250714185321.2417234-2-ynaffit@google.com>
Subject: [PATCH v3 1/6] binder: Fix selftest page indexing
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
 Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
 David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
 kunit-dev@googlegroups.com

The binder allocator selftest was only checking the last page of
buffers that ended on a page boundary. Correct the page indexing to
account for buffers that are not page-aligned.
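Illustration (not part of the patch): the page holding a buffer's last
byte has index (end - 1) / PAGE_SIZE, while the old loop bound of
end / PAGE_SIZE stops one page short whenever end is not page-aligned.
A small userspace sketch of the arithmetic, assuming a 4096-byte page:

#include <stdio.h>

#define PAGE_SIZE 4096UL	/* illustrative; the kernel macro is per-arch */

int main(void)
{
	/* One page-aligned end offset, one that spills onto a third page. */
	unsigned long ends[] = { 2 * PAGE_SIZE, 2 * PAGE_SIZE + 100 };

	for (int i = 0; i < 2; i++) {
		unsigned long end = ends[i];
		/* Old bound: the loop ran while i < end / PAGE_SIZE. */
		unsigned long old_last = end / PAGE_SIZE - 1;
		/* New bound: page containing byte end - 1, inclusive. */
		unsigned long new_last = (end - 1) / PAGE_SIZE;

		printf("end=%lu: old loop checked pages 0..%lu, fixed loop checks 0..%lu\n",
		       end, old_last, new_last);
	}
	return 0;
}

For the unaligned case the old bound skips the final, partially used
page, which is exactly the page the fixed loop now covers.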
Signed-off-by: Tiffany Yang
Acked-by: Carlos Llamas
---
 drivers/android/binder_alloc_selftest.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
index c88735c54848..486af3ec3c02 100644
--- a/drivers/android/binder_alloc_selftest.c
+++ b/drivers/android/binder_alloc_selftest.c
@@ -142,12 +142,12 @@ static void binder_selftest_free_buf(struct binder_alloc *alloc,
 	for (i = 0; i < BUFFER_NUM; i++)
 		binder_alloc_free_buf(alloc, buffers[seq[i]]);
 
-	for (i = 0; i < end / PAGE_SIZE; i++) {
 		/**
 		 * Error message on a free page can be false positive
 		 * if binder shrinker ran during binder_alloc_free_buf
 		 * calls above.
 		 */
+	for (i = 0; i <= (end - 1) / PAGE_SIZE; i++) {
 		if (list_empty(page_to_lru(alloc->pages[i]))) {
 			pr_err_size_seq(sizes, seq);
 			pr_err("expect lru but is %s at page index %d\n",
-- 
2.50.0.727.gbf7dc18ff4-goog
From nobody Tue Oct 7 03:50:36 2025
Date: Mon, 14 Jul 2025 11:53:15 -0700
In-Reply-To: <20250714185321.2417234-1-ynaffit@google.com>
References: <20250714185321.2417234-1-ynaffit@google.com>
Message-ID: <20250714185321.2417234-3-ynaffit@google.com>
Subject: [PATCH v3 2/6] binder: Store lru freelist in binder_alloc
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
 Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
 David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
 kunit-dev@googlegroups.com

Store a pointer to the free pages list that the binder allocator should
use for a process inside struct binder_alloc. This change allows binder
allocator code to be tested and debugged deterministically while a
system is using binder; i.e., without interfering with other binder
processes and independently of the shrinker. This is necessary to
convert the current binder_alloc_selftest into a KUnit test that does
not rely on hijacking an existing binder_proc to run.

A binder process's binder_alloc->freelist should not be changed after
it is initialized. A sole exception is the process that runs the
existing binder_alloc selftest. Its freelist can be temporarily
replaced for the duration of the test because it runs as a single
thread before any pages can be added to the global binder freelist, and
the test frees every page it allocates before dropping the
binder_selftest_lock. This exception allows the existing selftest to be
used to check for regressions, but it will be dropped when the
binder_alloc tests are converted to KUnit in a subsequent patch in this
series.
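Illustration (not part of the patch): the swap-and-restore pattern that
the per-alloc freelist pointer makes possible. The helper below is a
hypothetical sketch built on the list_lru API and the binder_alloc
layout introduced here; the real selftest additionally serializes on
binder_selftest_lock and verifies that no pages are allocated before
swapping:

#include <linux/list_lru.h>

static struct list_lru test_freelist;

/*
 * run_test() is a stand-in for a test body that allocates and frees
 * binder buffers; every page it frees lands on test_freelist instead
 * of the global binder freelist, so the shrinker never sees them.
 */
static int with_private_freelist(struct binder_alloc *alloc,
				 void (*run_test)(struct binder_alloc *))
{
	struct list_lru *prev = alloc->freelist;
	int ret;

	ret = list_lru_init(&test_freelist);
	if (ret)
		return ret;

	alloc->freelist = &test_freelist;
	run_test(alloc);

	/* The test is expected to leave its private freelist empty. */
	WARN_ON(list_lru_count(&test_freelist));

	alloc->freelist = prev;
	list_lru_destroy(&test_freelist);
	return 0;
}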
Signed-off-by: Tiffany Yang Acked-by: Carlos Llamas --- drivers/android/binder_alloc.c | 25 +++++++---- drivers/android/binder_alloc.h | 3 +- drivers/android/binder_alloc_selftest.c | 59 ++++++++++++++++++++----- 3 files changed, 67 insertions(+), 20 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index fcfaf1b899c8..2e89f9127883 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -26,7 +26,7 @@ #include "binder_alloc.h" #include "binder_trace.h" =20 -struct list_lru binder_freelist; +static struct list_lru binder_freelist; =20 static DEFINE_MUTEX(binder_alloc_mmap_lock); =20 @@ -210,7 +210,7 @@ static void binder_lru_freelist_add(struct binder_alloc= *alloc, =20 trace_binder_free_lru_start(alloc, index); =20 - ret =3D list_lru_add(&binder_freelist, + ret =3D list_lru_add(alloc->freelist, page_to_lru(page), page_to_nid(page), NULL); @@ -409,7 +409,7 @@ static void binder_lru_freelist_del(struct binder_alloc= *alloc, if (page) { trace_binder_alloc_lru_start(alloc, index); =20 - on_lru =3D list_lru_del(&binder_freelist, + on_lru =3D list_lru_del(alloc->freelist, page_to_lru(page), page_to_nid(page), NULL); @@ -1007,7 +1007,7 @@ void binder_alloc_deferred_release(struct binder_allo= c *alloc) if (!page) continue; =20 - on_lru =3D list_lru_del(&binder_freelist, + on_lru =3D list_lru_del(alloc->freelist, page_to_lru(page), page_to_nid(page), NULL); @@ -1229,6 +1229,17 @@ binder_shrink_scan(struct shrinker *shrink, struct s= hrink_control *sc) =20 static struct shrinker *binder_shrinker; =20 +static void __binder_alloc_init(struct binder_alloc *alloc, + struct list_lru *freelist) +{ + alloc->pid =3D current->group_leader->pid; + alloc->mm =3D current->mm; + mmgrab(alloc->mm); + mutex_init(&alloc->mutex); + INIT_LIST_HEAD(&alloc->buffers); + alloc->freelist =3D freelist; +} + /** * binder_alloc_init() - called by binder_open() for per-proc initializati= on * @alloc: binder_alloc for this proc @@ -1238,11 +1249,7 @@ static struct shrinker *binder_shrinker; */ void binder_alloc_init(struct binder_alloc *alloc) { - alloc->pid =3D current->group_leader->pid; - alloc->mm =3D current->mm; - mmgrab(alloc->mm); - mutex_init(&alloc->mutex); - INIT_LIST_HEAD(&alloc->buffers); + __binder_alloc_init(alloc, &binder_freelist); } =20 int binder_alloc_shrinker_init(void) diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index feecd7414241..aa05a9df1360 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -15,7 +15,6 @@ #include #include =20 -extern struct list_lru binder_freelist; struct binder_transaction; =20 /** @@ -91,6 +90,7 @@ static inline struct list_head *page_to_lru(struct page *= p) * @free_async_space: VA space available for async buffers. 
This is * initialized at mmap time to 1/2 the full VA space * @pages: array of struct page * + * @freelist: lru list to use for free pages (invariant after in= it) * @buffer_size: size of address space specified via mmap * @pid: pid for associated binder_proc (invariant after in= it) * @pages_high: high watermark of offset in @pages @@ -113,6 +113,7 @@ struct binder_alloc { struct rb_root allocated_buffers; size_t free_async_space; struct page **pages; + struct list_lru *freelist; size_t buffer_size; int pid; size_t pages_high; diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/bind= er_alloc_selftest.c index 486af3ec3c02..8b18b22aa3de 100644 --- a/drivers/android/binder_alloc_selftest.c +++ b/drivers/android/binder_alloc_selftest.c @@ -8,8 +8,9 @@ =20 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt =20 -#include #include +#include +#include #include "binder_alloc.h" =20 #define BUFFER_NUM 5 @@ -18,6 +19,7 @@ static bool binder_selftest_run =3D true; static int binder_selftest_failures; static DEFINE_MUTEX(binder_selftest_lock); +static struct list_lru binder_selftest_freelist; =20 /** * enum buf_end_align_type - Page alignment of a buffer @@ -142,11 +144,6 @@ static void binder_selftest_free_buf(struct binder_all= oc *alloc, for (i =3D 0; i < BUFFER_NUM; i++) binder_alloc_free_buf(alloc, buffers[seq[i]]); =20 - /** - * Error message on a free page can be false positive - * if binder shrinker ran during binder_alloc_free_buf - * calls above. - */ for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { if (list_empty(page_to_lru(alloc->pages[i]))) { pr_err_size_seq(sizes, seq); @@ -162,8 +159,8 @@ static void binder_selftest_free_page(struct binder_all= oc *alloc) int i; unsigned long count; =20 - while ((count =3D list_lru_count(&binder_freelist))) { - list_lru_walk(&binder_freelist, binder_alloc_free_page, + while ((count =3D list_lru_count(&binder_selftest_freelist))) { + list_lru_walk(&binder_selftest_freelist, binder_alloc_free_page, NULL, count); } =20 @@ -187,7 +184,7 @@ static void binder_selftest_alloc_free(struct binder_al= loc *alloc, =20 /* Allocate from lru. */ binder_selftest_alloc_buf(alloc, buffers, sizes, seq); - if (list_lru_count(&binder_freelist)) + if (list_lru_count(&binder_selftest_freelist)) pr_err("lru list should be empty but is not\n"); =20 binder_selftest_free_buf(alloc, buffers, sizes, seq, end); @@ -275,6 +272,20 @@ static void binder_selftest_alloc_offset(struct binder= _alloc *alloc, } } =20 +int binder_selftest_alloc_get_page_count(struct binder_alloc *alloc) +{ + struct page *page; + int allocated =3D 0; + int i; + + for (i =3D 0; i < alloc->buffer_size / PAGE_SIZE; i++) { + page =3D alloc->pages[i]; + if (page) + allocated++; + } + return allocated; +} + /** * binder_selftest_alloc() - Test alloc and free of buffer pages. * @alloc: Pointer to alloc struct. @@ -286,6 +297,7 @@ static void binder_selftest_alloc_offset(struct binder_= alloc *alloc, */ void binder_selftest_alloc(struct binder_alloc *alloc) { + struct list_lru *prev_freelist; size_t end_offset[BUFFER_NUM]; =20 if (!binder_selftest_run) @@ -293,14 +305,41 @@ void binder_selftest_alloc(struct binder_alloc *alloc) mutex_lock(&binder_selftest_lock); if (!binder_selftest_run || !alloc->mapped) goto done; + + prev_freelist =3D alloc->freelist; + + /* + * It is not safe to modify this process's alloc->freelist if it has any + * pages on a freelist. Since the test runs before any binder ioctls can + * be dealt with, none of its pages should be allocated yet. 
+	 */
+	if (binder_selftest_alloc_get_page_count(alloc)) {
+		pr_err("process has existing alloc state\n");
+		goto cleanup;
+	}
+
+	if (list_lru_init(&binder_selftest_freelist)) {
+		pr_err("failed to init test freelist\n");
+		goto cleanup;
+	}
+
+	alloc->freelist = &binder_selftest_freelist;
+
 	pr_info("STARTED\n");
 	binder_selftest_alloc_offset(alloc, end_offset, 0);
-	binder_selftest_run = false;
 	if (binder_selftest_failures > 0)
 		pr_info("%d tests FAILED\n", binder_selftest_failures);
 	else
 		pr_info("PASSED\n");
 
+	if (list_lru_count(&binder_selftest_freelist))
+		pr_err("expect test freelist to be empty\n");
+
+cleanup:
+	/* Even if we didn't run the test, it's no longer thread-safe. */
+	binder_selftest_run = false;
+	alloc->freelist = prev_freelist;
+	list_lru_destroy(&binder_selftest_freelist);
 done:
 	mutex_unlock(&binder_selftest_lock);
 }
-- 
2.50.0.727.gbf7dc18ff4-goog
From nobody Tue Oct 7 03:50:36 2025
Date: Mon, 14 Jul 2025 11:53:16 -0700
In-Reply-To: <20250714185321.2417234-1-ynaffit@google.com>
References: <20250714185321.2417234-1-ynaffit@google.com>
Message-ID: <20250714185321.2417234-4-ynaffit@google.com>
Subject: [PATCH v3 3/6] kunit: test: Export kunit_attach_mm()
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
 Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
 David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
 kunit-dev@googlegroups.com

Tests can allocate from virtual memory using kunit_vm_mmap(), which
transparently creates and attaches an mm_struct to the test runner if
one is not already attached. This is suitable for most cases, except
when the code under test must access a task's mm before performing an
mmap. Expose kunit_attach_mm() as part of the interface for those
cases. This does not change the existing behavior.

Cc: David Gow
Signed-off-by: Tiffany Yang
Reviewed-by: Carlos Llamas
Reviewed-by: Kees Cook
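Illustration (not part of the patch): the calling pattern this export
enables, modeled on the binder_alloc KUnit test added later in this
series. my_dev and my_dev_init() are hypothetical stand-ins for code
under test that reads current->mm during initialization; only
kunit_attach_mm() and kunit_vm_mmap() are the real interfaces:

#include <kunit/test.h>
#include <linux/mman.h>
#include <linux/sched/mm.h>

struct my_dev {
	struct mm_struct *mm;	/* taken from current->mm at init time */
};

/* Stand-in for code under test that needs the mm before any mmap exists. */
static void my_dev_init(struct my_dev *dev)
{
	dev->mm = current->mm;
	mmgrab(dev->mm);
}

static void my_dev_test(struct kunit *test)
{
	struct my_dev dev;
	unsigned long uaddr;

	/* The KUnit runner may have no mm; attach one before init runs. */
	KUNIT_ASSERT_EQ(test, kunit_attach_mm(), 0);

	my_dev_init(&dev);
	KUNIT_ASSERT_NOT_NULL(test, dev.mm);

	/* The mapping below reuses the mm attached above. */
	uaddr = kunit_vm_mmap(test, NULL, 0, PAGE_SIZE,
			      PROT_READ | PROT_WRITE,
			      MAP_ANONYMOUS | MAP_PRIVATE, 0);
	KUNIT_ASSERT_NE(test, uaddr, 0UL);

	mmdrop(dev.mm);
}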
---
 include/kunit/test.h   | 12 ++++++++++++
 lib/kunit/user_alloc.c |  4 ++--
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 39c768f87dc9..d958ee53050e 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -531,6 +531,18 @@ static inline char *kunit_kstrdup(struct kunit *test, const char *str, gfp_t gfp
  */
 const char *kunit_kstrdup_const(struct kunit *test, const char *str, gfp_t gfp);
 
+/**
+ * kunit_attach_mm() - Create and attach a new mm if it doesn't already exist.
+ *
+ * Allocates a &struct mm_struct and attaches it to @current. In most cases, call
+ * kunit_vm_mmap() without calling kunit_attach_mm() directly. Only necessary when
+ * code under test accesses the mm before executing the mmap (e.g., to perform
+ * additional initialization beforehand).
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int kunit_attach_mm(void);
+
 /**
  * kunit_vm_mmap() - Allocate KUnit-tracked vm_mmap() area
  * @test: The test context object.
diff --git a/lib/kunit/user_alloc.c b/lib/kunit/user_alloc.c
index 46951be018be..b8cac765e620 100644
--- a/lib/kunit/user_alloc.c
+++ b/lib/kunit/user_alloc.c
@@ -22,8 +22,7 @@ struct kunit_vm_mmap_params {
 	unsigned long offset;
 };
 
-/* Create and attach a new mm if it doesn't already exist. */
-static int kunit_attach_mm(void)
+int kunit_attach_mm(void)
 {
 	struct mm_struct *mm;
 
@@ -49,6 +48,7 @@ static int kunit_attach_mm(void)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(kunit_attach_mm);
 
 static int kunit_vm_mmap_init(struct kunit_resource *res, void *context)
 {
-- 
2.50.0.727.gbf7dc18ff4-goog
From nobody Tue Oct 7 03:50:36 2025
Date: Mon, 14 Jul 2025 11:53:17 -0700
In-Reply-To: <20250714185321.2417234-1-ynaffit@google.com>
References: <20250714185321.2417234-1-ynaffit@google.com>
Message-ID: <20250714185321.2417234-5-ynaffit@google.com>
Subject: [PATCH v3 4/6] binder: Scaffolding for binder_alloc KUnit tests
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
 Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
 David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
 kunit-dev@googlegroups.com

Add setup and teardown for testing binder allocator code with KUnit.
Include minimal test cases to verify that tests are initialized
correctly.

Tested-by: Rae Moar
Signed-off-by: Tiffany Yang
Acked-by: Carlos Llamas
Reviewed-by: Kees Cook
---
v2:
 * Added tested-by tag
v3:
 * Split kunit lib change into separate change
---
 drivers/android/Kconfig                    |  11 ++
 drivers/android/Makefile                   |   1 +
 drivers/android/binder.c                   |   5 +-
 drivers/android/binder_alloc.c             |  15 +-
 drivers/android/binder_alloc.h             |   6 +
 drivers/android/binder_internal.h          |   4 +
 drivers/android/tests/.kunitconfig         |   3 +
 drivers/android/tests/Makefile             |   3 +
 drivers/android/tests/binder_alloc_kunit.c | 166 +++++++++++++++++++++
 9 files changed, 208 insertions(+), 6 deletions(-)
 create mode 100644 drivers/android/tests/.kunitconfig
 create mode 100644 drivers/android/tests/Makefile
 create mode 100644 drivers/android/tests/binder_alloc_kunit.c

diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig
index 07aa8ae0a058..b1bc7183366c 100644
--- a/drivers/android/Kconfig
+++ b/drivers/android/Kconfig
@@ -47,4 +47,15 @@ config ANDROID_BINDER_IPC_SELFTEST
 	  exhaustively with combinations of various buffer sizes and
 	  alignments.
 
+config ANDROID_BINDER_ALLOC_KUNIT_TEST
+	tristate "KUnit Tests for Android Binder Alloc" if !KUNIT_ALL_TESTS
+	depends on ANDROID_BINDER_IPC && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  This feature builds the binder alloc KUnit tests.
+ + Each test case runs using a pared-down binder_alloc struct and + test-specific freelist, which allows this KUnit module to be loaded + for testing without interfering with a running system. + endmenu diff --git a/drivers/android/Makefile b/drivers/android/Makefile index c9d3d0c99c25..74d02a335d4e 100644 --- a/drivers/android/Makefile +++ b/drivers/android/Makefile @@ -4,3 +4,4 @@ ccflags-y +=3D -I$(src) # needed for trace events obj-$(CONFIG_ANDROID_BINDERFS) +=3D binderfs.o obj-$(CONFIG_ANDROID_BINDER_IPC) +=3D binder.o binder_alloc.o obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) +=3D binder_alloc_selftest.o +obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) +=3D tests/ diff --git a/drivers/android/binder.c b/drivers/android/binder.c index c463ca4a8fff..9dfe90c284fc 100644 --- a/drivers/android/binder.c +++ b/drivers/android/binder.c @@ -68,6 +68,8 @@ #include #include =20 +#include + #include =20 #include @@ -5956,10 +5958,11 @@ static void binder_vma_close(struct vm_area_struct = *vma) binder_alloc_vma_close(&proc->alloc); } =20 -static vm_fault_t binder_vm_fault(struct vm_fault *vmf) +VISIBLE_IF_KUNIT vm_fault_t binder_vm_fault(struct vm_fault *vmf) { return VM_FAULT_SIGBUS; } +EXPORT_SYMBOL_IF_KUNIT(binder_vm_fault); =20 static const struct vm_operations_struct binder_vm_ops =3D { .open =3D binder_vma_open, diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index 2e89f9127883..c79e5c6721f0 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -23,6 +23,7 @@ #include #include #include +#include #include "binder_alloc.h" #include "binder_trace.h" =20 @@ -57,13 +58,14 @@ static struct binder_buffer *binder_buffer_prev(struct = binder_buffer *buffer) return list_entry(buffer->entry.prev, struct binder_buffer, entry); } =20 -static size_t binder_alloc_buffer_size(struct binder_alloc *alloc, - struct binder_buffer *buffer) +VISIBLE_IF_KUNIT size_t binder_alloc_buffer_size(struct binder_alloc *allo= c, + struct binder_buffer *buffer) { if (list_is_last(&buffer->entry, &alloc->buffers)) return alloc->vm_start + alloc->buffer_size - buffer->user_data; return binder_buffer_next(buffer)->user_data - buffer->user_data; } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_buffer_size); =20 static void binder_insert_free_buffer(struct binder_alloc *alloc, struct binder_buffer *new_buffer) @@ -959,7 +961,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *allo= c, failure_string, ret); return ret; } - +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_mmap_handler); =20 void binder_alloc_deferred_release(struct binder_alloc *alloc) { @@ -1028,6 +1030,7 @@ void binder_alloc_deferred_release(struct binder_allo= c *alloc) "%s: %d buffers %d, pages %d\n", __func__, alloc->pid, buffers, page_count); } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_deferred_release); =20 /** * binder_alloc_print_allocated() - print buffer info @@ -1122,6 +1125,7 @@ void binder_alloc_vma_close(struct binder_alloc *allo= c) { binder_alloc_set_mapped(alloc, false); } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_vma_close); =20 /** * binder_alloc_free_page() - shrinker callback to free pages @@ -1229,8 +1233,8 @@ binder_shrink_scan(struct shrinker *shrink, struct sh= rink_control *sc) =20 static struct shrinker *binder_shrinker; =20 -static void __binder_alloc_init(struct binder_alloc *alloc, - struct list_lru *freelist) +VISIBLE_IF_KUNIT void __binder_alloc_init(struct binder_alloc *alloc, + struct list_lru *freelist) { alloc->pid =3D current->group_leader->pid; alloc->mm =3D current->mm; @@ -1239,6 +1243,7 @@ 
static void __binder_alloc_init(struct binder_alloc *= alloc, INIT_LIST_HEAD(&alloc->buffers); alloc->freelist =3D freelist; } +EXPORT_SYMBOL_IF_KUNIT(__binder_alloc_init); =20 /** * binder_alloc_init() - called by binder_open() for per-proc initializati= on diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index aa05a9df1360..dc8dce2469a7 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -188,5 +188,11 @@ int binder_alloc_copy_from_buffer(struct binder_alloc = *alloc, binder_size_t buffer_offset, size_t bytes); =20 +#if IS_ENABLED(CONFIG_KUNIT) +void __binder_alloc_init(struct binder_alloc *alloc, struct list_lru *free= list); +size_t binder_alloc_buffer_size(struct binder_alloc *alloc, + struct binder_buffer *buffer); +#endif + #endif /* _LINUX_BINDER_ALLOC_H */ =20 diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_int= ernal.h index 1ba5caf1d88d..b5d3014fb4dc 100644 --- a/drivers/android/binder_internal.h +++ b/drivers/android/binder_internal.h @@ -592,4 +592,8 @@ void binder_add_device(struct binder_device *device); */ void binder_remove_device(struct binder_device *device); =20 +#if IS_ENABLED(CONFIG_KUNIT) +vm_fault_t binder_vm_fault(struct vm_fault *vmf); +#endif + #endif /* _LINUX_BINDER_INTERNAL_H */ diff --git a/drivers/android/tests/.kunitconfig b/drivers/android/tests/.ku= nitconfig new file mode 100644 index 000000000000..a73601231049 --- /dev/null +++ b/drivers/android/tests/.kunitconfig @@ -0,0 +1,3 @@ +CONFIG_KUNIT=3Dy +CONFIG_ANDROID_BINDER_IPC=3Dy +CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST=3Dy diff --git a/drivers/android/tests/Makefile b/drivers/android/tests/Makefile new file mode 100644 index 000000000000..6780967e573b --- /dev/null +++ b/drivers/android/tests/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only + +obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) +=3D binder_alloc_kunit.o diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/t= ests/binder_alloc_kunit.c new file mode 100644 index 000000000000..4b68b5687d33 --- /dev/null +++ b/drivers/android/tests/binder_alloc_kunit.c @@ -0,0 +1,166 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test cases for binder allocator code + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../binder_alloc.h" +#include "../binder_internal.h" + +MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING"); + +#define BINDER_MMAP_SIZE SZ_128K + +struct binder_alloc_test { + struct binder_alloc alloc; + struct list_lru binder_test_freelist; + struct file *filp; + unsigned long mmap_uaddr; +}; + +static void binder_alloc_test_init_freelist(struct kunit *test) +{ + struct binder_alloc_test *priv =3D test->priv; + + KUNIT_EXPECT_PTR_EQ(test, priv->alloc.freelist, + &priv->binder_test_freelist); +} + +static void binder_alloc_test_mmap(struct kunit *test) +{ + struct binder_alloc_test *priv =3D test->priv; + struct binder_alloc *alloc =3D &priv->alloc; + struct binder_buffer *buf; + struct rb_node *n; + + KUNIT_EXPECT_EQ(test, alloc->mapped, true); + KUNIT_EXPECT_EQ(test, alloc->buffer_size, BINDER_MMAP_SIZE); + + n =3D rb_first(&alloc->allocated_buffers); + KUNIT_EXPECT_PTR_EQ(test, n, NULL); + + n =3D rb_first(&alloc->free_buffers); + buf =3D rb_entry(n, struct binder_buffer, rb_node); + KUNIT_EXPECT_EQ(test, binder_alloc_buffer_size(alloc, buf), + BINDER_MMAP_SIZE); + KUNIT_EXPECT_TRUE(test, list_is_last(&buf->entry, &alloc->buffers)); +} + 
+/* =3D=3D=3D=3D=3D End test cases =3D=3D=3D=3D=3D */ + +static void binder_alloc_test_vma_close(struct vm_area_struct *vma) +{ + struct binder_alloc *alloc =3D vma->vm_private_data; + + binder_alloc_vma_close(alloc); +} + +static const struct vm_operations_struct binder_alloc_test_vm_ops =3D { + .close =3D binder_alloc_test_vma_close, + .fault =3D binder_vm_fault, +}; + +static int binder_alloc_test_mmap_handler(struct file *filp, + struct vm_area_struct *vma) +{ + struct binder_alloc *alloc =3D filp->private_data; + + vm_flags_mod(vma, VM_DONTCOPY | VM_MIXEDMAP, VM_MAYWRITE); + + vma->vm_ops =3D &binder_alloc_test_vm_ops; + vma->vm_private_data =3D alloc; + + return binder_alloc_mmap_handler(alloc, vma); +} + +static const struct file_operations binder_alloc_test_fops =3D { + .mmap =3D binder_alloc_test_mmap_handler, +}; + +static int binder_alloc_test_init(struct kunit *test) +{ + struct binder_alloc_test *priv; + int ret; + + priv =3D kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + test->priv =3D priv; + + ret =3D list_lru_init(&priv->binder_test_freelist); + if (ret) { + kunit_err(test, "Failed to initialize test freelist\n"); + return ret; + } + + /* __binder_alloc_init requires mm to be attached */ + ret =3D kunit_attach_mm(); + if (ret) { + kunit_err(test, "Failed to attach mm\n"); + return ret; + } + __binder_alloc_init(&priv->alloc, &priv->binder_test_freelist); + + priv->filp =3D anon_inode_getfile("binder_alloc_kunit", + &binder_alloc_test_fops, &priv->alloc, + O_RDWR | O_CLOEXEC); + if (IS_ERR_OR_NULL(priv->filp)) { + kunit_err(test, "Failed to open binder alloc test driver file\n"); + return priv->filp ? PTR_ERR(priv->filp) : -ENOMEM; + } + + priv->mmap_uaddr =3D kunit_vm_mmap(test, priv->filp, 0, BINDER_MMAP_SIZE, + PROT_READ, MAP_PRIVATE | MAP_NORESERVE, + 0); + if (!priv->mmap_uaddr) { + kunit_err(test, "Could not map the test's transaction memory\n"); + return -ENOMEM; + } + + return 0; +} + +static void binder_alloc_test_exit(struct kunit *test) +{ + struct binder_alloc_test *priv =3D test->priv; + + /* Close the backing file to make sure binder_alloc_vma_close runs */ + if (!IS_ERR_OR_NULL(priv->filp)) + fput(priv->filp); + + if (priv->alloc.mm) + binder_alloc_deferred_release(&priv->alloc); + + /* Make sure freelist is empty */ + KUNIT_EXPECT_EQ(test, list_lru_count(&priv->binder_test_freelist), 0); + list_lru_destroy(&priv->binder_test_freelist); +} + +static struct kunit_case binder_alloc_test_cases[] =3D { + KUNIT_CASE(binder_alloc_test_init_freelist), + KUNIT_CASE(binder_alloc_test_mmap), + {} +}; + +static struct kunit_suite binder_alloc_test_suite =3D { + .name =3D "binder_alloc", + .test_cases =3D binder_alloc_test_cases, + .init =3D binder_alloc_test_init, + .exit =3D binder_alloc_test_exit, +}; + +kunit_test_suite(binder_alloc_test_suite); + +MODULE_AUTHOR("Tiffany Yang "); +MODULE_DESCRIPTION("Binder Alloc KUnit tests"); +MODULE_LICENSE("GPL"); --=20 2.50.0.727.gbf7dc18ff4-goog From nobody Tue Oct 7 03:50:36 2025 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D121B278157 for ; Mon, 14 Jul 2025 18:53:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752519235; cv=none; 
Date: Mon, 14 Jul 2025 11:53:18 -0700
In-Reply-To: <20250714185321.2417234-1-ynaffit@google.com>
References: <20250714185321.2417234-1-ynaffit@google.com>
Message-ID: <20250714185321.2417234-6-ynaffit@google.com>
Subject: [PATCH v3 5/6] binder: Convert binder_alloc selftests to KUnit
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
 Christian Brauner, Carlos Llamas, Suren Baghdasaryan, Brendan Higgins,
 David Gow, Rae Moar, linux-kselftest@vger.kernel.org,
 kunit-dev@googlegroups.com

Convert the existing binder_alloc_selftest tests into KUnit tests.
These tests allocate and free an exhaustive combination of buffers with
various sizes and alignments. This change allows them to be run without
blocking or otherwise interfering with other processes in binder. This
test is refactored into more meaningful cases in the subsequent patch.

Signed-off-by: Tiffany Yang
Acked-by: Carlos Llamas
---
v2:
 * Fix build warning
   Reported-by: kernel test robot
   Closes: https://lore.kernel.org/oe-kbuild-all/202506281837.hReNHJjO-lkp@intel.com/
---
 drivers/android/Kconfig                    |  10 -
 drivers/android/Makefile                   |   1 -
 drivers/android/binder.c                   |   5 -
 drivers/android/binder_alloc.c             |   3 +
 drivers/android/binder_alloc.h             |   5 -
 drivers/android/binder_alloc_selftest.c    | 345 ---------------------
 drivers/android/tests/binder_alloc_kunit.c | 279 +++++++++++++++++
 7 files changed, 282 insertions(+), 366 deletions(-)
 delete mode 100644 drivers/android/binder_alloc_selftest.c

diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig
index b1bc7183366c..5b3b8041f827 100644
--- a/drivers/android/Kconfig
+++ b/drivers/android/Kconfig
@@ -37,16 +37,6 @@ config ANDROID_BINDER_DEVICES
 	  created. Each binder device has its own context manager, and is
 	  therefore logically separated from the other devices.
 
-config ANDROID_BINDER_IPC_SELFTEST
-	bool "Android Binder IPC Driver Selftest"
-	depends on ANDROID_BINDER_IPC
-	help
-	  This feature allows binder selftest to run.
-
-	  Binder selftest checks the allocation and free of binder buffers
-	  exhaustively with combinations of various buffer sizes and
-	  alignments.
- config ANDROID_BINDER_ALLOC_KUNIT_TEST tristate "KUnit Tests for Android Binder Alloc" if !KUNIT_ALL_TESTS depends on ANDROID_BINDER_IPC && KUNIT diff --git a/drivers/android/Makefile b/drivers/android/Makefile index 74d02a335d4e..c5d47be0276c 100644 --- a/drivers/android/Makefile +++ b/drivers/android/Makefile @@ -3,5 +3,4 @@ ccflags-y +=3D -I$(src) # needed for trace events =20 obj-$(CONFIG_ANDROID_BINDERFS) +=3D binderfs.o obj-$(CONFIG_ANDROID_BINDER_IPC) +=3D binder.o binder_alloc.o -obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) +=3D binder_alloc_selftest.o obj-$(CONFIG_ANDROID_BINDER_ALLOC_KUNIT_TEST) +=3D tests/ diff --git a/drivers/android/binder.c b/drivers/android/binder.c index 9dfe90c284fc..7b2653a5d59c 100644 --- a/drivers/android/binder.c +++ b/drivers/android/binder.c @@ -5718,11 +5718,6 @@ static long binder_ioctl(struct file *filp, unsigned= int cmd, unsigned long arg) struct binder_thread *thread; void __user *ubuf =3D (void __user *)arg; =20 - /*pr_info("binder_ioctl: %d:%d %x %lx\n", - proc->pid, current->pid, cmd, arg);*/ - - binder_selftest_alloc(&proc->alloc); - trace_binder_ioctl(cmd, arg); =20 ret =3D wait_event_interruptible(binder_user_error_wait, binder_stop_on_u= ser_error < 2); diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index c79e5c6721f0..74a184014fa7 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -701,6 +701,7 @@ struct binder_buffer *binder_alloc_new_buf(struct binde= r_alloc *alloc, out: return buffer; } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_new_buf); =20 static unsigned long buffer_start_page(struct binder_buffer *buffer) { @@ -879,6 +880,7 @@ void binder_alloc_free_buf(struct binder_alloc *alloc, binder_free_buf_locked(alloc, buffer); mutex_unlock(&alloc->mutex); } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_free_buf); =20 /** * binder_alloc_mmap_handler() - map virtual address space for proc @@ -1217,6 +1219,7 @@ enum lru_status binder_alloc_free_page(struct list_he= ad *item, err_mmget: return LRU_SKIP; } +EXPORT_SYMBOL_IF_KUNIT(binder_alloc_free_page); =20 static unsigned long binder_shrink_count(struct shrinker *shrink, struct shrink_control *sc) diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index dc8dce2469a7..bed97c2cad92 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -121,11 +121,6 @@ struct binder_alloc { bool oneway_spam_detected; }; =20 -#ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST -void binder_selftest_alloc(struct binder_alloc *alloc); -#else -static inline void binder_selftest_alloc(struct binder_alloc *alloc) {} -#endif enum lru_status binder_alloc_free_page(struct list_head *item, struct list_lru_one *lru, void *cb_arg); diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/bind= er_alloc_selftest.c deleted file mode 100644 index 8b18b22aa3de..000000000000 --- a/drivers/android/binder_alloc_selftest.c +++ /dev/null @@ -1,345 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* binder_alloc_selftest.c - * - * Android IPC Subsystem - * - * Copyright (C) 2017 Google, Inc. 
- */ - -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - -#include -#include -#include -#include "binder_alloc.h" - -#define BUFFER_NUM 5 -#define BUFFER_MIN_SIZE (PAGE_SIZE / 8) - -static bool binder_selftest_run =3D true; -static int binder_selftest_failures; -static DEFINE_MUTEX(binder_selftest_lock); -static struct list_lru binder_selftest_freelist; - -/** - * enum buf_end_align_type - Page alignment of a buffer - * end with regard to the end of the previous buffer. - * - * In the pictures below, buf2 refers to the buffer we - * are aligning. buf1 refers to previous buffer by addr. - * Symbol [ means the start of a buffer, ] means the end - * of a buffer, and | means page boundaries. - */ -enum buf_end_align_type { - /** - * @SAME_PAGE_UNALIGNED: The end of this buffer is on - * the same page as the end of the previous buffer and - * is not page aligned. Examples: - * buf1 ][ buf2 ][ ... - * buf1 ]|[ buf2 ][ ... - */ - SAME_PAGE_UNALIGNED =3D 0, - /** - * @SAME_PAGE_ALIGNED: When the end of the previous buffer - * is not page aligned, the end of this buffer is on the - * same page as the end of the previous buffer and is page - * aligned. When the previous buffer is page aligned, the - * end of this buffer is aligned to the next page boundary. - * Examples: - * buf1 ][ buf2 ]| ... - * buf1 ]|[ buf2 ]| ... - */ - SAME_PAGE_ALIGNED, - /** - * @NEXT_PAGE_UNALIGNED: The end of this buffer is on - * the page next to the end of the previous buffer and - * is not page aligned. Examples: - * buf1 ][ buf2 | buf2 ][ ... - * buf1 ]|[ buf2 | buf2 ][ ... - */ - NEXT_PAGE_UNALIGNED, - /** - * @NEXT_PAGE_ALIGNED: The end of this buffer is on - * the page next to the end of the previous buffer and - * is page aligned. Examples: - * buf1 ][ buf2 | buf2 ]| ... - * buf1 ]|[ buf2 | buf2 ]| ... - */ - NEXT_PAGE_ALIGNED, - /** - * @NEXT_NEXT_UNALIGNED: The end of this buffer is on - * the page that follows the page after the end of the - * previous buffer and is not page aligned. Examples: - * buf1 ][ buf2 | buf2 | buf2 ][ ... - * buf1 ]|[ buf2 | buf2 | buf2 ][ ... - */ - NEXT_NEXT_UNALIGNED, - /** - * @LOOP_END: The number of enum values in &buf_end_align_type. - * It is used for controlling loop termination. - */ - LOOP_END, -}; - -static void pr_err_size_seq(size_t *sizes, int *seq) -{ - int i; - - pr_err("alloc sizes: "); - for (i =3D 0; i < BUFFER_NUM; i++) - pr_cont("[%zu]", sizes[i]); - pr_cont("\n"); - pr_err("free seq: "); - for (i =3D 0; i < BUFFER_NUM; i++) - pr_cont("[%d]", seq[i]); - pr_cont("\n"); -} - -static bool check_buffer_pages_allocated(struct binder_alloc *alloc, - struct binder_buffer *buffer, - size_t size) -{ - unsigned long page_addr; - unsigned long end; - int page_index; - - end =3D PAGE_ALIGN(buffer->user_data + size); - page_addr =3D buffer->user_data; - for (; page_addr < end; page_addr +=3D PAGE_SIZE) { - page_index =3D (page_addr - alloc->vm_start) / PAGE_SIZE; - if (!alloc->pages[page_index] || - !list_empty(page_to_lru(alloc->pages[page_index]))) { - pr_err("expect alloc but is %s at page index %d\n", - alloc->pages[page_index] ? 
- "lru" : "free", page_index); - return false; - } - } - return true; -} - -static void binder_selftest_alloc_buf(struct binder_alloc *alloc, - struct binder_buffer *buffers[], - size_t *sizes, int *seq) -{ - int i; - - for (i =3D 0; i < BUFFER_NUM; i++) { - buffers[i] =3D binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0); - if (IS_ERR(buffers[i]) || - !check_buffer_pages_allocated(alloc, buffers[i], - sizes[i])) { - pr_err_size_seq(sizes, seq); - binder_selftest_failures++; - } - } -} - -static void binder_selftest_free_buf(struct binder_alloc *alloc, - struct binder_buffer *buffers[], - size_t *sizes, int *seq, size_t end) -{ - int i; - - for (i =3D 0; i < BUFFER_NUM; i++) - binder_alloc_free_buf(alloc, buffers[seq[i]]); - - for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { - if (list_empty(page_to_lru(alloc->pages[i]))) { - pr_err_size_seq(sizes, seq); - pr_err("expect lru but is %s at page index %d\n", - alloc->pages[i] ? "alloc" : "free", i); - binder_selftest_failures++; - } - } -} - -static void binder_selftest_free_page(struct binder_alloc *alloc) -{ - int i; - unsigned long count; - - while ((count =3D list_lru_count(&binder_selftest_freelist))) { - list_lru_walk(&binder_selftest_freelist, binder_alloc_free_page, - NULL, count); - } - - for (i =3D 0; i < (alloc->buffer_size / PAGE_SIZE); i++) { - if (alloc->pages[i]) { - pr_err("expect free but is %s at page index %d\n", - list_empty(page_to_lru(alloc->pages[i])) ? - "alloc" : "lru", i); - binder_selftest_failures++; - } - } -} - -static void binder_selftest_alloc_free(struct binder_alloc *alloc, - size_t *sizes, int *seq, size_t end) -{ - struct binder_buffer *buffers[BUFFER_NUM]; - - binder_selftest_alloc_buf(alloc, buffers, sizes, seq); - binder_selftest_free_buf(alloc, buffers, sizes, seq, end); - - /* Allocate from lru. */ - binder_selftest_alloc_buf(alloc, buffers, sizes, seq); - if (list_lru_count(&binder_selftest_freelist)) - pr_err("lru list should be empty but is not\n"); - - binder_selftest_free_buf(alloc, buffers, sizes, seq, end); - binder_selftest_free_page(alloc); -} - -static bool is_dup(int *seq, int index, int val) -{ - int i; - - for (i =3D 0; i < index; i++) { - if (seq[i] =3D=3D val) - return true; - } - return false; -} - -/* Generate BUFFER_NUM factorial free orders. */ -static void binder_selftest_free_seq(struct binder_alloc *alloc, - size_t *sizes, int *seq, - int index, size_t end) -{ - int i; - - if (index =3D=3D BUFFER_NUM) { - binder_selftest_alloc_free(alloc, sizes, seq, end); - return; - } - for (i =3D 0; i < BUFFER_NUM; i++) { - if (is_dup(seq, index, i)) - continue; - seq[index] =3D i; - binder_selftest_free_seq(alloc, sizes, seq, index + 1, end); - } -} - -static void binder_selftest_alloc_size(struct binder_alloc *alloc, - size_t *end_offset) -{ - int i; - int seq[BUFFER_NUM] =3D {0}; - size_t front_sizes[BUFFER_NUM]; - size_t back_sizes[BUFFER_NUM]; - size_t last_offset, offset =3D 0; - - for (i =3D 0; i < BUFFER_NUM; i++) { - last_offset =3D offset; - offset =3D end_offset[i]; - front_sizes[i] =3D offset - last_offset; - back_sizes[BUFFER_NUM - i - 1] =3D front_sizes[i]; - } - /* - * Buffers share the first or last few pages. - * Only BUFFER_NUM - 1 buffer sizes are adjustable since - * we need one giant buffer before getting to the last page. 
- */ - back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; - binder_selftest_free_seq(alloc, front_sizes, seq, 0, - end_offset[BUFFER_NUM - 1]); - binder_selftest_free_seq(alloc, back_sizes, seq, 0, alloc->buffer_size); -} - -static void binder_selftest_alloc_offset(struct binder_alloc *alloc, - size_t *end_offset, int index) -{ - int align; - size_t end, prev; - - if (index =3D=3D BUFFER_NUM) { - binder_selftest_alloc_size(alloc, end_offset); - return; - } - prev =3D index =3D=3D 0 ? 0 : end_offset[index - 1]; - end =3D prev; - - BUILD_BUG_ON(BUFFER_MIN_SIZE * BUFFER_NUM >=3D PAGE_SIZE); - - for (align =3D SAME_PAGE_UNALIGNED; align < LOOP_END; align++) { - if (align % 2) - end =3D ALIGN(end, PAGE_SIZE); - else - end +=3D BUFFER_MIN_SIZE; - end_offset[index] =3D end; - binder_selftest_alloc_offset(alloc, end_offset, index + 1); - } -} - -int binder_selftest_alloc_get_page_count(struct binder_alloc *alloc) -{ - struct page *page; - int allocated =3D 0; - int i; - - for (i =3D 0; i < alloc->buffer_size / PAGE_SIZE; i++) { - page =3D alloc->pages[i]; - if (page) - allocated++; - } - return allocated; -} - -/** - * binder_selftest_alloc() - Test alloc and free of buffer pages. - * @alloc: Pointer to alloc struct. - * - * Allocate BUFFER_NUM buffers to cover all page alignment cases, - * then free them in all orders possible. Check that pages are - * correctly allocated, put onto lru when buffers are freed, and - * are freed when binder_alloc_free_page is called. - */ -void binder_selftest_alloc(struct binder_alloc *alloc) -{ - struct list_lru *prev_freelist; - size_t end_offset[BUFFER_NUM]; - - if (!binder_selftest_run) - return; - mutex_lock(&binder_selftest_lock); - if (!binder_selftest_run || !alloc->mapped) - goto done; - - prev_freelist =3D alloc->freelist; - - /* - * It is not safe to modify this process's alloc->freelist if it has any - * pages on a freelist. Since the test runs before any binder ioctls can - * be dealt with, none of its pages should be allocated yet. - */ - if (binder_selftest_alloc_get_page_count(alloc)) { - pr_err("process has existing alloc state\n"); - goto cleanup; - } - - if (list_lru_init(&binder_selftest_freelist)) { - pr_err("failed to init test freelist\n"); - goto cleanup; - } - - alloc->freelist =3D &binder_selftest_freelist; - - pr_info("STARTED\n"); - binder_selftest_alloc_offset(alloc, end_offset, 0); - if (binder_selftest_failures > 0) - pr_info("%d tests FAILED\n", binder_selftest_failures); - else - pr_info("PASSED\n"); - - if (list_lru_count(&binder_selftest_freelist)) - pr_err("expect test freelist to be empty\n"); - -cleanup: - /* Even if we didn't run the test, it's no longer thread-safe. */ - binder_selftest_run =3D false; - alloc->freelist =3D prev_freelist; - list_lru_destroy(&binder_selftest_freelist); -done: - mutex_unlock(&binder_selftest_lock); -} diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/t= ests/binder_alloc_kunit.c index 4b68b5687d33..9e185e2036e5 100644 --- a/drivers/android/tests/binder_alloc_kunit.c +++ b/drivers/android/tests/binder_alloc_kunit.c @@ -21,6 +21,265 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING"); =20 #define BINDER_MMAP_SIZE SZ_128K =20 +#define BUFFER_NUM 5 +#define BUFFER_MIN_SIZE (PAGE_SIZE / 8) + +static int binder_alloc_test_failures; + +/** + * enum buf_end_align_type - Page alignment of a buffer + * end with regard to the end of the previous buffer. + * + * In the pictures below, buf2 refers to the buffer we + * are aligning. 
buf1 refers to previous buffer by addr. + * Symbol [ means the start of a buffer, ] means the end + * of a buffer, and | means page boundaries. + */ +enum buf_end_align_type { + /** + * @SAME_PAGE_UNALIGNED: The end of this buffer is on + * the same page as the end of the previous buffer and + * is not page aligned. Examples: + * buf1 ][ buf2 ][ ... + * buf1 ]|[ buf2 ][ ... + */ + SAME_PAGE_UNALIGNED =3D 0, + /** + * @SAME_PAGE_ALIGNED: When the end of the previous buffer + * is not page aligned, the end of this buffer is on the + * same page as the end of the previous buffer and is page + * aligned. When the previous buffer is page aligned, the + * end of this buffer is aligned to the next page boundary. + * Examples: + * buf1 ][ buf2 ]| ... + * buf1 ]|[ buf2 ]| ... + */ + SAME_PAGE_ALIGNED, + /** + * @NEXT_PAGE_UNALIGNED: The end of this buffer is on + * the page next to the end of the previous buffer and + * is not page aligned. Examples: + * buf1 ][ buf2 | buf2 ][ ... + * buf1 ]|[ buf2 | buf2 ][ ... + */ + NEXT_PAGE_UNALIGNED, + /** + * @NEXT_PAGE_ALIGNED: The end of this buffer is on + * the page next to the end of the previous buffer and + * is page aligned. Examples: + * buf1 ][ buf2 | buf2 ]| ... + * buf1 ]|[ buf2 | buf2 ]| ... + */ + NEXT_PAGE_ALIGNED, + /** + * @NEXT_NEXT_UNALIGNED: The end of this buffer is on + * the page that follows the page after the end of the + * previous buffer and is not page aligned. Examples: + * buf1 ][ buf2 | buf2 | buf2 ][ ... + * buf1 ]|[ buf2 | buf2 | buf2 ][ ... + */ + NEXT_NEXT_UNALIGNED, + /** + * @LOOP_END: The number of enum values in &buf_end_align_type. + * It is used for controlling loop termination. + */ + LOOP_END, +}; + +static void pr_err_size_seq(struct kunit *test, size_t *sizes, int *seq) +{ + int i; + + kunit_err(test, "alloc sizes: "); + for (i =3D 0; i < BUFFER_NUM; i++) + pr_cont("[%zu]", sizes[i]); + pr_cont("\n"); + kunit_err(test, "free seq: "); + for (i =3D 0; i < BUFFER_NUM; i++) + pr_cont("[%d]", seq[i]); + pr_cont("\n"); +} + +static bool check_buffer_pages_allocated(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffer, + size_t size) +{ + unsigned long page_addr; + unsigned long end; + int page_index; + + end =3D PAGE_ALIGN(buffer->user_data + size); + page_addr =3D buffer->user_data; + for (; page_addr < end; page_addr +=3D PAGE_SIZE) { + page_index =3D (page_addr - alloc->vm_start) / PAGE_SIZE; + if (!alloc->pages[page_index] || + !list_empty(page_to_lru(alloc->pages[page_index]))) { + kunit_err(test, "expect alloc but is %s at page index %d\n", + alloc->pages[page_index] ? 
+ "lru" : "free", page_index); + return false; + } + } + return true; +} + +static void binder_alloc_test_alloc_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq) +{ + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) { + buffers[i] =3D binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0); + if (IS_ERR(buffers[i]) || + !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i])) { + pr_err_size_seq(test, sizes, seq); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_free_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq, size_t end) +{ + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) + binder_alloc_free_buf(alloc, buffers[seq[i]]); + + for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { + if (list_empty(page_to_lru(alloc->pages[i]))) { + pr_err_size_seq(test, sizes, seq); + kunit_err(test, "expect lru but is %s at page index %d\n", + alloc->pages[i] ? "alloc" : "free", i); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_free_page(struct kunit *test, + struct binder_alloc *alloc) +{ + unsigned long count; + int i; + + while ((count =3D list_lru_count(alloc->freelist))) { + list_lru_walk(alloc->freelist, binder_alloc_free_page, + NULL, count); + } + + for (i =3D 0; i < (alloc->buffer_size / PAGE_SIZE); i++) { + if (alloc->pages[i]) { + kunit_err(test, "expect free but is %s at page index %d\n", + list_empty(page_to_lru(alloc->pages[i])) ? + "alloc" : "lru", i); + binder_alloc_test_failures++; + } + } +} + +static void binder_alloc_test_alloc_free(struct kunit *test, + struct binder_alloc *alloc, + size_t *sizes, int *seq, size_t end) +{ + struct binder_buffer *buffers[BUFFER_NUM]; + + binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); + binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); + + /* Allocate from lru. */ + binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); + if (list_lru_count(alloc->freelist)) + kunit_err(test, "lru list should be empty but is not\n"); + + binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); + binder_alloc_test_free_page(test, alloc); +} + +static bool is_dup(int *seq, int index, int val) +{ + int i; + + for (i =3D 0; i < index; i++) { + if (seq[i] =3D=3D val) + return true; + } + return false; +} + +/* Generate BUFFER_NUM factorial free orders. */ +static void permute_frees(struct kunit *test, struct binder_alloc *alloc, + size_t *sizes, int *seq, int index, size_t end) +{ + int i; + + if (index =3D=3D BUFFER_NUM) { + binder_alloc_test_alloc_free(test, alloc, sizes, seq, end); + return; + } + for (i =3D 0; i < BUFFER_NUM; i++) { + if (is_dup(seq, index, i)) + continue; + seq[index] =3D i; + permute_frees(test, alloc, sizes, seq, index + 1, end); + } +} + +static void gen_buf_sizes(struct kunit *test, struct binder_alloc *alloc, + size_t *end_offset) +{ + size_t last_offset, offset =3D 0; + size_t front_sizes[BUFFER_NUM]; + size_t back_sizes[BUFFER_NUM]; + int seq[BUFFER_NUM] =3D {0}; + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) { + last_offset =3D offset; + offset =3D end_offset[i]; + front_sizes[i] =3D offset - last_offset; + back_sizes[BUFFER_NUM - i - 1] =3D front_sizes[i]; + } + /* + * Buffers share the first or last few pages. + * Only BUFFER_NUM - 1 buffer sizes are adjustable since + * we need one giant buffer before getting to the last page. 
+ */ + back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; + permute_frees(test, alloc, front_sizes, seq, 0, + end_offset[BUFFER_NUM - 1]); + permute_frees(test, alloc, back_sizes, seq, 0, alloc->buffer_size); +} + +static void gen_buf_offsets(struct kunit *test, struct binder_alloc *alloc, + size_t *end_offset, int index) +{ + size_t end, prev; + int align; + + if (index =3D=3D BUFFER_NUM) { + gen_buf_sizes(test, alloc, end_offset); + return; + } + prev =3D index =3D=3D 0 ? 0 : end_offset[index - 1]; + end =3D prev; + + BUILD_BUG_ON(BUFFER_MIN_SIZE * BUFFER_NUM >=3D PAGE_SIZE); + + for (align =3D SAME_PAGE_UNALIGNED; align < LOOP_END; align++) { + if (align % 2) + end =3D ALIGN(end, PAGE_SIZE); + else + end +=3D BUFFER_MIN_SIZE; + end_offset[index] =3D end; + gen_buf_offsets(test, alloc, end_offset, index + 1); + } +} + struct binder_alloc_test { struct binder_alloc alloc; struct list_lru binder_test_freelist; @@ -56,6 +315,25 @@ static void binder_alloc_test_mmap(struct kunit *test) KUNIT_EXPECT_TRUE(test, list_is_last(&buf->entry, &alloc->buffers)); } =20 +/** + * binder_alloc_exhaustive_test() - Exhaustively test alloc and free of bu= ffer pages. + * @test: The test context object. + * + * Allocate BUFFER_NUM buffers to cover all page alignment cases, + * then free them in all orders possible. Check that pages are + * correctly allocated, put onto lru when buffers are freed, and + * are freed when binder_alloc_free_page() is called. + */ +static void binder_alloc_exhaustive_test(struct kunit *test) +{ + struct binder_alloc_test *priv =3D test->priv; + size_t end_offset[BUFFER_NUM]; + + gen_buf_offsets(test, &priv->alloc, end_offset, 0); + + KUNIT_EXPECT_EQ(test, binder_alloc_test_failures, 0); +} + /* =3D=3D=3D=3D=3D End test cases =3D=3D=3D=3D=3D */ =20 static void binder_alloc_test_vma_close(struct vm_area_struct *vma) @@ -149,6 +427,7 @@ static void binder_alloc_test_exit(struct kunit *test) static struct kunit_case binder_alloc_test_cases[] =3D { KUNIT_CASE(binder_alloc_test_init_freelist), KUNIT_CASE(binder_alloc_test_mmap), + KUNIT_CASE(binder_alloc_exhaustive_test), {} }; =20 --=20 2.50.0.727.gbf7dc18ff4-goog From nobody Tue Oct 7 03:50:36 2025 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A703E278E79 for ; Mon, 14 Jul 2025 18:53:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752519238; cv=none; b=Jh1J5hIcWhOhiqpBlbPtmW5Uqgeh92wQGymoHaIPhPrRbYRwWNn2AwITNqxUYKgfZ7nBvelbtxWM31+TZCXJm6UPtp2EeNVrCNUIFlm5l0Vj+Lvewp4THN2Glj89w10qqy7l/a2sbFHoqtK9Rzfo6oYULMmzqdws1G3mdZLJGws= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752519238; c=relaxed/simple; bh=f2UjTPl4HUGhaJZGvJzyLZ61geaJfkxh73V9CoWdrVc=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=k9nbhi88aEC0u1kRTlTmRl3mgut4tWeIX0qlQ9Q+8NpoQtbAQRrljTnVCOewmUJmkeqzeLDz3Sfad1QcHPJS7ZRkGgk9+Vm1rNmxYgjH68oR7WumP4938Wg9qlzXkr4Ts6H1cDmlqugdaVntPxXAKyPL9ezQpYXaZYzOrdyaY+g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ynaffit.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=jwAyz53U; arc=none 
smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ynaffit.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="jwAyz53U" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-235c897d378so45733415ad.1 for ; Mon, 14 Jul 2025 11:53:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1752519236; x=1753124036; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=M/pcrYvaU6t8T3R2xq0AsGjXqPB0jvdSPEGz2fSJtgo=; b=jwAyz53UeHQFTs7llBkc4tHwlg7xZcPZwXaDcE3Wi50dXE49iSwRpgDTD/v9GzzbbU 3ZN9lnHHprQz0ZoRDQccST62Ic3vb0p15OtBeM1XLvCGzuHmrKCgwzJU0uNAKC9VyD2Q NUWTdYQGecyYOJnDwtWRSpgGp338YzftF3IjSRNfH+HIS1UW6vHJAmmWAog3QvCZviOD 7MCRA5s6RG3cph8zYDeFIFZQN7z2cnQC5INnNY3pc61ZlgdNsGNytRfs73i16X6K0Lmg uKWnFZawZiQZCzJPkOB6ljCxclIP97pPjNAbqOxi++Tm8aMYAR0CkXNbiUawxvuPorcj bXzQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1752519236; x=1753124036; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=M/pcrYvaU6t8T3R2xq0AsGjXqPB0jvdSPEGz2fSJtgo=; b=lWxzPCsmUaHcLtM6WAgcJJVuPss2Npq+ItTBlJKE96Xd8Be9dkh6u/k9y2pIELTn9R IDfhqXzYfw1sqvc9jgZ2qzRR9GtI7+Ut3I72FISzPH8aczPD+xfnYnEc2/bIBGAvvVC6 6muV7MqVhjaqGQVH5yBXN3GG7VijKsyTF4KHz3dD0np4I/1MJcpJynqbRXwKR/iJjyxx /nyOBg1HyMa5M+O2uWDsBH1RNrpq6Iu4twxeMDGkMLhYO4f0RKFLklWKh/+keTSSFL5r DMcVkziMhGZw37s8TxtSlJ2OECLD5rmmVcaIBDAnCF22seKoF5zmh9A926XoFGr2h81e 1iYw== X-Gm-Message-State: AOJu0Yze2lvbEn5yRq1EZ4lbLUxk9scI6W0r8Yr0qoM/BlzEgQSnKPzr HTV/Ew72IYQNy3/ZSZ6Nm4Ep81FNAe0WifWjTKzyfE4SBQV/liZ7Rzs3GiESXT4yOkIIGobMDod HPNX4/mqK3BHOROZ5o6cdjpVu0Zp8n63KLT6xzw/mAKBAgh1+wboyXKr2pJUCGsPFmfIT54re5s /NYMz5+qDJ08p16VM7/NaKRkgnyj4n2/YfDnWRCvtFKrOBUFQ0Ig== X-Google-Smtp-Source: AGHT+IFuAJJVA8cnE0q/yLnIMpTIFAIzWFeiIBquuBk2kN5jrBUgxr/Gzb6qwsRLUm5yOofr69uekGt2W5yr X-Received: from plhn16.prod.google.com ([2002:a17:903:1110:b0:223:690d:fd84]) (user=ynaffit job=prod-delivery.src-stubby-dispatcher) by 2002:a17:902:fc8e:b0:234:d679:72e3 with SMTP id d9443c01a7336-23dede876aemr190672635ad.42.1752519235929; Mon, 14 Jul 2025 11:53:55 -0700 (PDT) Date: Mon, 14 Jul 2025 11:53:19 -0700 In-Reply-To: <20250714185321.2417234-1-ynaffit@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250714185321.2417234-1-ynaffit@google.com> X-Mailer: git-send-email 2.50.0.727.gbf7dc18ff4-goog Message-ID: <20250714185321.2417234-7-ynaffit@google.com> Subject: [PATCH v3 6/6] binder: encapsulate individual alloc test cases From: Tiffany Yang To: linux-kernel@vger.kernel.org Cc: keescook@google.com, kernel-team@android.com, Greg Kroah-Hartman , "=?UTF-8?q?Arve=20Hj=C3=B8nnev=C3=A5g?=" , Todd Kjos , Martijn Coenen , Joel Fernandes , Christian Brauner , Carlos Llamas , Suren Baghdasaryan , Brendan Higgins , David Gow , Rae Moar , linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Each case tested by the binder allocator test is defined by 3 parameters: the end alignment type of each requested buffer allocation, whether those 
buffers share the front or back pages of the allotted address space, and the order in which those buffers should be released. The alignment type represents how a binder buffer may be laid out within or across page boundaries and relative to other buffers, and it's used, along with whether the buffers cover part of the vma (sharing the front pages) or all of it (sharing the back pages), to calculate the sizes passed into each test. The test recursively generates each possible arrangement of alignment types and then checks that the binder_alloc code tracks pages correctly when those buffers are allocated and then freed in every possible order at both ends of the address space. While these runs provide comprehensive coverage, they are poor candidates to be represented as KUnit test cases, which must be statically enumerated. For 5 buffers and 5 end alignment types, the test case array would have 750,000 entries. This change structures the recursive calls into meaningful test cases so that failures are easier to interpret. Signed-off-by: Tiffany Yang Acked-by: Carlos Llamas --- v2: * Fix build warning Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202506281959.hfOTIUjS-lkp@i= ntel.com/ --- drivers/android/tests/binder_alloc_kunit.c | 234 ++++++++++++++++----- 1 file changed, 181 insertions(+), 53 deletions(-) diff --git a/drivers/android/tests/binder_alloc_kunit.c b/drivers/android/t= ests/binder_alloc_kunit.c index 9e185e2036e5..02aa4a135eb5 100644 --- a/drivers/android/tests/binder_alloc_kunit.c +++ b/drivers/android/tests/binder_alloc_kunit.c @@ -24,7 +24,16 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING"); #define BUFFER_NUM 5 #define BUFFER_MIN_SIZE (PAGE_SIZE / 8) =20 -static int binder_alloc_test_failures; +#define FREESEQ_BUFLEN ((3 * BUFFER_NUM) + 1) + +#define ALIGN_TYPE_STRLEN (12) + +#define ALIGNMENTS_BUFLEN (((ALIGN_TYPE_STRLEN + 6) * BUFFER_NUM) + 1) + +#define PRINT_ALL_CASES (0) + +/* 5^5 alignment combinations * 2 places to share pages * 5! 
free sequence= s */ +#define TOTAL_EXHAUSTIVE_CASES (3125 * 2 * 120) =20 /** * enum buf_end_align_type - Page alignment of a buffer @@ -86,18 +95,49 @@ enum buf_end_align_type { LOOP_END, }; =20 -static void pr_err_size_seq(struct kunit *test, size_t *sizes, int *seq) +static const char *const buf_end_align_type_strs[LOOP_END] =3D { + [SAME_PAGE_UNALIGNED] =3D "SP_UNALIGNED", + [SAME_PAGE_ALIGNED] =3D " SP_ALIGNED ", + [NEXT_PAGE_UNALIGNED] =3D "NP_UNALIGNED", + [NEXT_PAGE_ALIGNED] =3D " NP_ALIGNED ", + [NEXT_NEXT_UNALIGNED] =3D "NN_UNALIGNED", +}; + +struct binder_alloc_test_case_info { + size_t *buffer_sizes; + int *free_sequence; + char alignments[ALIGNMENTS_BUFLEN]; + bool front_pages; +}; + +static void stringify_free_seq(struct kunit *test, int *seq, char *buf, + size_t buf_len) { + size_t bytes =3D 0; int i; =20 - kunit_err(test, "alloc sizes: "); - for (i =3D 0; i < BUFFER_NUM; i++) - pr_cont("[%zu]", sizes[i]); - pr_cont("\n"); - kunit_err(test, "free seq: "); - for (i =3D 0; i < BUFFER_NUM; i++) - pr_cont("[%d]", seq[i]); - pr_cont("\n"); + for (i =3D 0; i < BUFFER_NUM; i++) { + bytes +=3D snprintf(buf + bytes, buf_len - bytes, "[%d]", seq[i]); + if (bytes >=3D buf_len) + break; + } + KUNIT_EXPECT_LT(test, bytes, buf_len); +} + +static void stringify_alignments(struct kunit *test, int *alignments, + char *buf, size_t buf_len) +{ + size_t bytes =3D 0; + int i; + + for (i =3D 0; i < BUFFER_NUM; i++) { + bytes +=3D snprintf(buf + bytes, buf_len - bytes, "[ %d:%s ]", i, + buf_end_align_type_strs[alignments[i]]); + if (bytes >=3D buf_len) + break; + } + + KUNIT_EXPECT_LT(test, bytes, buf_len); } =20 static bool check_buffer_pages_allocated(struct kunit *test, @@ -124,28 +164,30 @@ static bool check_buffer_pages_allocated(struct kunit= *test, return true; } =20 -static void binder_alloc_test_alloc_buf(struct kunit *test, - struct binder_alloc *alloc, - struct binder_buffer *buffers[], - size_t *sizes, int *seq) +static unsigned long binder_alloc_test_alloc_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq) { + unsigned long failures =3D 0; int i; =20 for (i =3D 0; i < BUFFER_NUM; i++) { buffers[i] =3D binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0); if (IS_ERR(buffers[i]) || - !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i])) { - pr_err_size_seq(test, sizes, seq); - binder_alloc_test_failures++; - } + !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i])) + failures++; } + + return failures; } =20 -static void binder_alloc_test_free_buf(struct kunit *test, - struct binder_alloc *alloc, - struct binder_buffer *buffers[], - size_t *sizes, int *seq, size_t end) +static unsigned long binder_alloc_test_free_buf(struct kunit *test, + struct binder_alloc *alloc, + struct binder_buffer *buffers[], + size_t *sizes, int *seq, size_t end) { + unsigned long failures =3D 0; int i; =20 for (i =3D 0; i < BUFFER_NUM; i++) @@ -153,17 +195,19 @@ static void binder_alloc_test_free_buf(struct kunit *= test, =20 for (i =3D 0; i <=3D (end - 1) / PAGE_SIZE; i++) { if (list_empty(page_to_lru(alloc->pages[i]))) { - pr_err_size_seq(test, sizes, seq); kunit_err(test, "expect lru but is %s at page index %d\n", alloc->pages[i] ? 
"alloc" : "free", i); - binder_alloc_test_failures++; + failures++; } } + + return failures; } =20 -static void binder_alloc_test_free_page(struct kunit *test, - struct binder_alloc *alloc) +static unsigned long binder_alloc_test_free_page(struct kunit *test, + struct binder_alloc *alloc) { + unsigned long failures =3D 0; unsigned long count; int i; =20 @@ -177,27 +221,70 @@ static void binder_alloc_test_free_page(struct kunit = *test, kunit_err(test, "expect free but is %s at page index %d\n", list_empty(page_to_lru(alloc->pages[i])) ? "alloc" : "lru", i); - binder_alloc_test_failures++; + failures++; } } + + return failures; } =20 -static void binder_alloc_test_alloc_free(struct kunit *test, +/* Executes one full test run for the given test case. */ +static bool binder_alloc_test_alloc_free(struct kunit *test, struct binder_alloc *alloc, - size_t *sizes, int *seq, size_t end) + struct binder_alloc_test_case_info *tc, + size_t end) { + unsigned long pages =3D PAGE_ALIGN(end) / PAGE_SIZE; struct binder_buffer *buffers[BUFFER_NUM]; - - binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); - binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); + unsigned long failures; + bool failed =3D false; + + failures =3D binder_alloc_test_alloc_buf(test, alloc, buffers, + tc->buffer_sizes, + tc->free_sequence); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Initial allocation failed: %lu/%u buffers with errors", + failures, BUFFER_NUM); + + failures =3D binder_alloc_test_free_buf(test, alloc, buffers, + tc->buffer_sizes, + tc->free_sequence, end); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Initial buffers not freed correctly: %lu/%lu pages not on lru list= ", + failures, pages); =20 /* Allocate from lru. */ - binder_alloc_test_alloc_buf(test, alloc, buffers, sizes, seq); - if (list_lru_count(alloc->freelist)) - kunit_err(test, "lru list should be empty but is not\n"); - - binder_alloc_test_free_buf(test, alloc, buffers, sizes, seq, end); - binder_alloc_test_free_page(test, alloc); + failures =3D binder_alloc_test_alloc_buf(test, alloc, buffers, + tc->buffer_sizes, + tc->free_sequence); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Reallocation failed: %lu/%u buffers with errors", + failures, BUFFER_NUM); + + failures =3D list_lru_count(alloc->freelist); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "lru list should be empty after reallocation but still has %lu page= s", + failures); + + failures =3D binder_alloc_test_free_buf(test, alloc, buffers, + tc->buffer_sizes, + tc->free_sequence, end); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Reallocated buffers not freed correctly: %lu/%lu pages not on lru = list", + failures, pages); + + failures =3D binder_alloc_test_free_page(test, alloc); + failed =3D failed || failures; + KUNIT_EXPECT_EQ_MSG(test, failures, 0, + "Failed to clean up allocated pages: %lu/%lu pages still installed", + failures, (alloc->buffer_size / PAGE_SIZE)); + + return failed; } =20 static bool is_dup(int *seq, int index, int val) @@ -213,24 +300,44 @@ static bool is_dup(int *seq, int index, int val) =20 /* Generate BUFFER_NUM factorial free orders. 
*/ static void permute_frees(struct kunit *test, struct binder_alloc *alloc, - size_t *sizes, int *seq, int index, size_t end) + struct binder_alloc_test_case_info *tc, + unsigned long *runs, unsigned long *failures, + int index, size_t end) { + bool case_failed; int i; =20 if (index =3D=3D BUFFER_NUM) { - binder_alloc_test_alloc_free(test, alloc, sizes, seq, end); + char freeseq_buf[FREESEQ_BUFLEN]; + + case_failed =3D binder_alloc_test_alloc_free(test, alloc, tc, end); + *runs +=3D 1; + *failures +=3D case_failed; + + if (case_failed || PRINT_ALL_CASES) { + stringify_free_seq(test, tc->free_sequence, freeseq_buf, + FREESEQ_BUFLEN); + kunit_err(test, "case %lu: [%s] | %s - %s - %s", *runs, + case_failed ? "FAILED" : "PASSED", + tc->front_pages ? "front" : "back ", + tc->alignments, freeseq_buf); + } + return; } for (i =3D 0; i < BUFFER_NUM; i++) { - if (is_dup(seq, index, i)) + if (is_dup(tc->free_sequence, index, i)) continue; - seq[index] =3D i; - permute_frees(test, alloc, sizes, seq, index + 1, end); + tc->free_sequence[index] =3D i; + permute_frees(test, alloc, tc, runs, failures, index + 1, end); } } =20 -static void gen_buf_sizes(struct kunit *test, struct binder_alloc *alloc, - size_t *end_offset) +static void gen_buf_sizes(struct kunit *test, + struct binder_alloc *alloc, + struct binder_alloc_test_case_info *tc, + size_t *end_offset, unsigned long *runs, + unsigned long *failures) { size_t last_offset, offset =3D 0; size_t front_sizes[BUFFER_NUM]; @@ -238,31 +345,45 @@ static void gen_buf_sizes(struct kunit *test, struct = binder_alloc *alloc, int seq[BUFFER_NUM] =3D {0}; int i; =20 + tc->free_sequence =3D seq; for (i =3D 0; i < BUFFER_NUM; i++) { last_offset =3D offset; offset =3D end_offset[i]; front_sizes[i] =3D offset - last_offset; back_sizes[BUFFER_NUM - i - 1] =3D front_sizes[i]; } + back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; + /* * Buffers share the first or last few pages. * Only BUFFER_NUM - 1 buffer sizes are adjustable since * we need one giant buffer before getting to the last page. */ - back_sizes[0] +=3D alloc->buffer_size - end_offset[BUFFER_NUM - 1]; - permute_frees(test, alloc, front_sizes, seq, 0, + tc->front_pages =3D true; + tc->buffer_sizes =3D front_sizes; + permute_frees(test, alloc, tc, runs, failures, 0, end_offset[BUFFER_NUM - 1]); - permute_frees(test, alloc, back_sizes, seq, 0, alloc->buffer_size); + + tc->front_pages =3D false; + tc->buffer_sizes =3D back_sizes; + permute_frees(test, alloc, tc, runs, failures, 0, alloc->buffer_size); } =20 static void gen_buf_offsets(struct kunit *test, struct binder_alloc *alloc, - size_t *end_offset, int index) + size_t *end_offset, int *alignments, + unsigned long *runs, unsigned long *failures, + int index) { size_t end, prev; int align; =20 if (index =3D=3D BUFFER_NUM) { - gen_buf_sizes(test, alloc, end_offset); + struct binder_alloc_test_case_info tc =3D {0}; + + stringify_alignments(test, alignments, tc.alignments, + ALIGNMENTS_BUFLEN); + + gen_buf_sizes(test, alloc, &tc, end_offset, runs, failures); return; } prev =3D index =3D=3D 0 ? 
0 : end_offset[index - 1]; @@ -276,7 +397,9 @@ static void gen_buf_offsets(struct kunit *test, struct = binder_alloc *alloc, else end +=3D BUFFER_MIN_SIZE; end_offset[index] =3D end; - gen_buf_offsets(test, alloc, end_offset, index + 1); + alignments[index] =3D align; + gen_buf_offsets(test, alloc, end_offset, alignments, runs, + failures, index + 1); } } =20 @@ -328,10 +451,15 @@ static void binder_alloc_exhaustive_test(struct kunit= *test) { struct binder_alloc_test *priv =3D test->priv; size_t end_offset[BUFFER_NUM]; + int alignments[BUFFER_NUM]; + unsigned long failures =3D 0; + unsigned long runs =3D 0; =20 - gen_buf_offsets(test, &priv->alloc, end_offset, 0); + gen_buf_offsets(test, &priv->alloc, end_offset, alignments, &runs, + &failures, 0); =20 - KUNIT_EXPECT_EQ(test, binder_alloc_test_failures, 0); + KUNIT_EXPECT_EQ(test, runs, TOTAL_EXHAUSTIVE_CASES); + KUNIT_EXPECT_EQ(test, failures, 0); } =20 /* =3D=3D=3D=3D=3D End test cases =3D=3D=3D=3D=3D */ --=20 2.50.0.727.gbf7dc18ff4-goog
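The case count asserted at the end of binder_alloc_exhaustive_test() follows directly from the three parameters described above: 5 end-alignment types for each of 5 buffers, 2 ways of sharing pages (front or back of the vma), and 5! free orders. The short userspace sketch below is illustrative only and not part of the patch (the helper names are made up here); it simply reproduces the arithmetic behind TOTAL_EXHAUSTIVE_CASES.

/*
 * Standalone userspace sketch: count test cases the same way
 * gen_buf_offsets()/permute_frees() walk them - one alignment choice
 * per buffer, front/back page sharing, and every free order.
 */
#include <assert.h>
#include <stdio.h>

#define BUFFER_NUM	5
#define ALIGN_TYPES	5	/* SAME_PAGE_UNALIGNED .. NEXT_NEXT_UNALIGNED */

static unsigned long factorial(unsigned int n)
{
	return n <= 1 ? 1 : n * factorial(n - 1);
}

static unsigned long ipow(unsigned long base, unsigned int exp)
{
	unsigned long r = 1;

	while (exp--)
		r *= base;
	return r;
}

int main(void)
{
	unsigned long alignments = ipow(ALIGN_TYPES, BUFFER_NUM);	/* 3125 */
	unsigned long sharing = 2;					/* front or back pages */
	unsigned long orders = factorial(BUFFER_NUM);			/* 120 */
	unsigned long total = alignments * sharing * orders;

	printf("expected exhaustive cases: %lu\n", total);
	assert(total == 750000);	/* matches TOTAL_EXHAUSTIVE_CASES */
	return 0;
}

The kernel test arrives at the same figure by counting the runs it actually executes, so the KUNIT_EXPECT_EQ() on runs doubles as a check that the recursion in gen_buf_offsets() and permute_frees() visited every combination.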