From: Peng Zhang <zhangpeng.00@bytedance.com>
To: Liam.Howlett@oracle.com, corbet@lwn.net, akpm@linux-foundation.org,
    willy@infradead.org, brauner@kernel.org, surenb@google.com,
    michael.christie@oracle.com, mjguzik@gmail.com,
    mathieu.desnoyers@efficios.com, npiggin@gmail.com, peterz@infradead.org,
    oliver.sang@intel.com, mst@redhat.com
Cc: zhangpeng.00@bytedance.com, maple-tree@lists.infradead.org,
    linux-mm@kvack.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v7 04/10] radix tree test suite: Align kmem_cache_alloc_bulk() with kernel behavior.
Date: Fri, 27 Oct 2023 11:38:39 +0800
Message-Id: <20231027033845.90608-5-zhangpeng.00@bytedance.com>
In-Reply-To: <20231027033845.90608-1-zhangpeng.00@bytedance.com>
References: <20231027033845.90608-1-zhangpeng.00@bytedance.com>

When kmem_cache_alloc_bulk() fails to allocate, leave the freed pointers
in the array. This enables a more accurate simulation of the kernel's
behavior and allows for testing potential double-free scenarios.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 tools/testing/radix-tree/linux.c | 45 +++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 12 deletions(-)

diff --git a/tools/testing/radix-tree/linux.c b/tools/testing/radix-tree/linux.c
index 61fe2601cb3a..4eb442206d01 100644
--- a/tools/testing/radix-tree/linux.c
+++ b/tools/testing/radix-tree/linux.c
@@ -93,13 +93,9 @@ void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
 	return p;
 }
 
-void kmem_cache_free_locked(struct kmem_cache *cachep, void *objp)
+void __kmem_cache_free_locked(struct kmem_cache *cachep, void *objp)
 {
 	assert(objp);
-	uatomic_dec(&nr_allocated);
-	uatomic_dec(&cachep->nr_allocated);
-	if (kmalloc_verbose)
-		printf("Freeing %p to slab\n", objp);
 	if (cachep->nr_objs > 10 || cachep->align) {
 		memset(objp, POISON_FREE, cachep->size);
 		free(objp);
@@ -111,6 +107,15 @@ void kmem_cache_free_locked(struct kmem_cache *cachep, void *objp)
 	}
 }
 
+void kmem_cache_free_locked(struct kmem_cache *cachep, void *objp)
+{
+	uatomic_dec(&nr_allocated);
+	uatomic_dec(&cachep->nr_allocated);
+	if (kmalloc_verbose)
+		printf("Freeing %p to slab\n", objp);
+	__kmem_cache_free_locked(cachep, objp);
+}
+
 void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 {
 	pthread_mutex_lock(&cachep->lock);
@@ -141,18 +146,17 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
 	if (kmalloc_verbose)
 		pr_debug("Bulk alloc %lu\n", size);
 
-	if (!(gfp & __GFP_DIRECT_RECLAIM)) {
-		if (cachep->non_kernel < size)
-			return 0;
-
-		cachep->non_kernel -= size;
-	}
-
 	pthread_mutex_lock(&cachep->lock);
 	if (cachep->nr_objs >= size) {
 		struct radix_tree_node *node;
 
 		for (i = 0; i < size; i++) {
+			if (!(gfp & __GFP_DIRECT_RECLAIM)) {
+				if (!cachep->non_kernel)
+					break;
+				cachep->non_kernel--;
+			}
+
 			node = cachep->objs;
 			cachep->nr_objs--;
 			cachep->objs = node->parent;
@@ -163,11 +167,19 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
 	} else {
 		pthread_mutex_unlock(&cachep->lock);
 		for (i = 0; i < size; i++) {
+			if (!(gfp & __GFP_DIRECT_RECLAIM)) {
+				if (!cachep->non_kernel)
+					break;
+				cachep->non_kernel--;
+			}
+
 			if (cachep->align) {
 				posix_memalign(&p[i], cachep->align,
 					       cachep->size);
 			} else {
 				p[i] = malloc(cachep->size);
+				if (!p[i])
+					break;
 			}
 			if (cachep->ctor)
 				cachep->ctor(p[i]);
@@ -176,6 +188,15 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
 		}
 	}
 
+	if (i < size) {
+		size = i;
+		pthread_mutex_lock(&cachep->lock);
+		for (i = 0; i < size; i++)
+			__kmem_cache_free_locked(cachep, p[i]);
+		pthread_mutex_unlock(&cachep->lock);
+		return 0;
+	}
+
 	for (i = 0; i < size; i++) {
 		uatomic_inc(&nr_allocated);
 		uatomic_inc(&cachep->nr_allocated);
-- 
2.20.1
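
Not part of the patch: a minimal sketch of how the new partial-failure path
might be exercised from a test built on the radix tree test suite's slab
shim. It assumes the harness headers for the shim declarations, the existing
kmem_cache_set_non_kernel() helper, and that GFP_ATOMIC carries no
__GFP_DIRECT_RECLAIM in the harness; the function and cache names are made
up for the example.

/*
 * Illustrative sketch only -- not part of this patch.  Assumes the test
 * harness headers (for the slab shim declarations) and the existing
 * kmem_cache_set_non_kernel() helper; names here are hypothetical.
 */
#include <assert.h>

static void bulk_alloc_failure_example(void)
{
	struct kmem_cache *cachep;
	void *objs[8] = { NULL };
	int ret;

	/* Object size merely needs to fit the shim's freelist bookkeeping. */
	cachep = kmem_cache_create("bulk-test", 256, 0, 0, NULL);

	/* Allow only four allocations that cannot enter direct reclaim. */
	kmem_cache_set_non_kernel(cachep, 4);

	/*
	 * GFP_ATOMIC carries no __GFP_DIRECT_RECLAIM, so the bulk alloc
	 * runs out after four objects.  With this patch it frees those
	 * four again, returns 0, and leaves the stale pointers in objs[],
	 * so a caller that frees them anyway is caught as a double free.
	 */
	ret = kmem_cache_alloc_bulk(cachep, GFP_ATOMIC, 8, objs);
	assert(ret == 0);
}

A fresh cache has no cached objects, so the sketch goes down the malloc()
path, stops after four allocations, and relies on the cleanup added by this
patch to free them and return 0 while keeping the stale pointers in objs[].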