From: Peng Zhang <zhangpeng.00@bytedance.com>
To: Liam.Howlett@oracle.com, corbet@lwn.net, akpm@linux-foundation.org,
    willy@infradead.org, brauner@kernel.org, surenb@google.com,
    michael.christie@oracle.com, mjguzik@gmail.com,
    mathieu.desnoyers@efficios.com, npiggin@gmail.com, peterz@infradead.org,
    oliver.sang@intel.com, mst@redhat.com
Cc: zhangpeng.00@bytedance.com, maple-tree@lists.infradead.org,
    linux-mm@kvack.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v5 08/10] maple_tree: Update check_forking() and bench_forking()
Date: Mon, 16 Oct 2023 11:22:24 +0800
Message-Id: <20231016032226.59199-9-zhangpeng.00@bytedance.com>
In-Reply-To: <20231016032226.59199-1-zhangpeng.00@bytedance.com>
References: <20231016032226.59199-1-zhangpeng.00@bytedance.com>

Updated check_forking() and bench_forking() to use __mt_dup() to duplicate
the maple tree.
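As a reading aid, the flow now shared by the two updated tests is condensed
below; the comments are added here for illustration only and the diff further
down is the authoritative version. The source tree is populated under its
external lock, __mt_dup() duplicates it wholesale into the destination tree,
and every duplicated entry is then overwritten in place with mas_store(),
mimicking the kernel forking sequence the tests simulate:

	/* Condensed, illustrative sketch of the updated test flow. */
	struct maple_tree mt, newmt;
	struct rw_semaphore mt_lock, newmt_lock;
	MA_STATE(mas, &mt, 0, 0);
	MA_STATE(newmas, &newmt, 0, 0);
	int i, nr_entries = 134, ret;
	void *val;

	init_rwsem(&mt_lock);
	init_rwsem(&newmt_lock);

	/* Both trees use external rw_semaphores instead of the internal lock. */
	mt_init_flags(&mt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN);
	mt_set_external_lock(&mt, &mt_lock);
	mt_init_flags(&newmt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN);
	mt_set_external_lock(&newmt, &newmt_lock);

	/* Populate the source tree under its lock. */
	down_write(&mt_lock);
	for (i = 0; i <= nr_entries; i++) {
		mas_set_range(&mas, i * 10, i * 10 + 5);
		mas_store_gfp(&mas, xa_mk_value(i), GFP_KERNEL);
	}

	/* Duplicate the whole tree in a single call. */
	down_write_nested(&newmt_lock, SINGLE_DEPTH_NESTING);
	ret = __mt_dup(&mt, &newmt, GFP_KERNEL);
	BUG_ON(ret);	/* the tests treat allocation failure as fatal */

	/* Walk the copy and overwrite each duplicated entry in place. */
	mas_set(&newmas, 0);
	mas_for_each(&newmas, val, ULONG_MAX)
		mas_store(&newmas, val);
	mas_destroy(&newmas);
	mas_destroy(&mas);

	mt_validate(&newmt);
	__mt_destroy(&newmt);
	__mt_destroy(&mt);
	up_write(&newmt_lock);
	up_write(&mt_lock);

check_forking() runs this sequence once and validates the copy;
bench_forking() repeats the duplicate/replace/validate/destroy steps
nr_fork (80000) times against the same source tree.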
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/test_maple_tree.c       | 117 ++++++++++++++++++------------------
 tools/include/linux/rwsem.h |   4 ++
 2 files changed, 62 insertions(+), 59 deletions(-)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index de470950714f..3e4597fb49d3 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -1834,47 +1834,48 @@ static noinline void __init bench_mas_prev(struct maple_tree *mt)
 }
 #endif
 /* check_forking - simulate the kernel forking sequence with the tree. */
-static noinline void __init check_forking(struct maple_tree *mt)
+static noinline void __init check_forking(void)
 {
-
-	struct maple_tree newmt;
-	int i, nr_entries = 134;
+	struct maple_tree mt, newmt;
+	int i, nr_entries = 134, ret;
 	void *val;
-	MA_STATE(mas, mt, 0, 0);
-	MA_STATE(newmas, mt, 0, 0);
-	struct rw_semaphore newmt_lock;
+	MA_STATE(mas, &mt, 0, 0);
+	MA_STATE(newmas, &newmt, 0, 0);
+	struct rw_semaphore mt_lock, newmt_lock;
 
+	init_rwsem(&mt_lock);
 	init_rwsem(&newmt_lock);
 
-	for (i = 0; i <= nr_entries; i++)
-		mtree_store_range(mt, i*10, i*10 + 5,
-				xa_mk_value(i), GFP_KERNEL);
+	mt_init_flags(&mt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN);
+	mt_set_external_lock(&mt, &mt_lock);
 
-	mt_set_non_kernel(99999);
 	mt_init_flags(&newmt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN);
 	mt_set_external_lock(&newmt, &newmt_lock);
-	newmas.tree = &newmt;
-	mas_reset(&newmas);
-	mas_reset(&mas);
-	down_write(&newmt_lock);
-	mas.index = 0;
-	mas.last = 0;
-	if (mas_expected_entries(&newmas, nr_entries)) {
+
+	down_write(&mt_lock);
+	for (i = 0; i <= nr_entries; i++) {
+		mas_set_range(&mas, i*10, i*10 + 5);
+		mas_store_gfp(&mas, xa_mk_value(i), GFP_KERNEL);
+	}
+
+	down_write_nested(&newmt_lock, SINGLE_DEPTH_NESTING);
+	ret = __mt_dup(&mt, &newmt, GFP_KERNEL);
+	if (ret) {
 		pr_err("OOM!");
 		BUG_ON(1);
 	}
-	rcu_read_lock();
-	mas_for_each(&mas, val, ULONG_MAX) {
-		newmas.index = mas.index;
-		newmas.last = mas.last;
+
+	mas_set(&newmas, 0);
+	mas_for_each(&newmas, val, ULONG_MAX)
 		mas_store(&newmas, val);
-	}
-	rcu_read_unlock();
+	mas_destroy(&newmas);
+	mas_destroy(&mas);
 	mt_validate(&newmt);
-	mt_set_non_kernel(0);
 	__mt_destroy(&newmt);
+	__mt_destroy(&mt);
 	up_write(&newmt_lock);
+	up_write(&mt_lock);
 }
 
 static noinline void __init check_iteration(struct maple_tree *mt)
@@ -1977,49 +1978,51 @@ static noinline void __init check_mas_store_gfp(struct maple_tree *mt)
 }
 
 #if defined(BENCH_FORK)
-static noinline void __init bench_forking(struct maple_tree *mt)
+static noinline void __init bench_forking(void)
 {
-
-	struct maple_tree newmt;
-	int i, nr_entries = 134, nr_fork = 80000;
+	struct maple_tree mt, newmt;
+	int i, nr_entries = 134, nr_fork = 80000, ret;
 	void *val;
-	MA_STATE(mas, mt, 0, 0);
-	MA_STATE(newmas, mt, 0, 0);
-	struct rw_semaphore newmt_lock;
+	MA_STATE(mas, &mt, 0, 0);
+	MA_STATE(newmas, &newmt, 0, 0);
+	struct rw_semaphore mt_lock, newmt_lock;
 
+	init_rwsem(&mt_lock);
 	init_rwsem(&newmt_lock);
-	mt_set_external_lock(&newmt, &newmt_lock);
 
-	for (i = 0; i <= nr_entries; i++)
-		mtree_store_range(mt, i*10, i*10 + 5,
-				xa_mk_value(i), GFP_KERNEL);
+	mt_init_flags(&mt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN);
+	mt_set_external_lock(&mt, &mt_lock);
+
+	down_write(&mt_lock);
+	for (i = 0; i <= nr_entries; i++) {
+		mas_set_range(&mas, i*10, i*10 + 5);
+		mas_store_gfp(&mas, xa_mk_value(i), GFP_KERNEL);
+	}
 
 	for (i = 0; i < nr_fork; i++) {
-		mt_set_non_kernel(99999);
-		mt_init_flags(&newmt, MT_FLAGS_ALLOC_RANGE);
-		newmas.tree = &newmt;
-		mas_reset(&newmas);
-		mas_reset(&mas);
-		mas.index = 0;
-		mas.last = 0;
-		rcu_read_lock();
-		down_write(&newmt_lock);
-		if (mas_expected_entries(&newmas, nr_entries)) {
-			printk("OOM!");
+		mt_init_flags(&newmt,
+			      MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN);
+		mt_set_external_lock(&newmt, &newmt_lock);
+
+		down_write_nested(&newmt_lock, SINGLE_DEPTH_NESTING);
+		ret = __mt_dup(&mt, &newmt, GFP_KERNEL);
+		if (ret) {
+			pr_err("OOM!");
 			BUG_ON(1);
 		}
-		mas_for_each(&mas, val, ULONG_MAX) {
-			newmas.index = mas.index;
-			newmas.last = mas.last;
+
+		mas_set(&newmas, 0);
+		mas_for_each(&newmas, val, ULONG_MAX)
 			mas_store(&newmas, val);
-		}
+
 		mas_destroy(&newmas);
-		rcu_read_unlock();
 		mt_validate(&newmt);
-		mt_set_non_kernel(0);
 		__mt_destroy(&newmt);
 		up_write(&newmt_lock);
 	}
+	mas_destroy(&mas);
+	__mt_destroy(&mt);
+	up_write(&mt_lock);
 }
 #endif
 
@@ -3615,9 +3618,7 @@ static int __init maple_tree_seed(void)
 #endif
 #if defined(BENCH_FORK)
 #define BENCH
-	mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
-	bench_forking(&tree);
-	mtree_destroy(&tree);
+	bench_forking();
 	goto skip;
 #endif
 #if defined(BENCH_MT_FOR_EACH)
@@ -3650,9 +3651,7 @@ static int __init maple_tree_seed(void)
 	check_iteration(&tree);
 	mtree_destroy(&tree);
 
-	mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
-	check_forking(&tree);
-	mtree_destroy(&tree);
+	check_forking();
 
 	mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
 	check_mas_store_gfp(&tree);
diff --git a/tools/include/linux/rwsem.h b/tools/include/linux/rwsem.h
index 83971b3cbfce..f8bffd4a987c 100644
--- a/tools/include/linux/rwsem.h
+++ b/tools/include/linux/rwsem.h
@@ -37,4 +37,8 @@ static inline int up_write(struct rw_semaphore *sem)
 {
 	return pthread_rwlock_unlock(&sem->lock);
 }
+
+#define down_read_nested(sem, subclass) down_read(sem)
+#define down_write_nested(sem, subclass) down_write(sem)
+
 #endif /* _TOOLS_RWSEM_H */
-- 
2.20.1