From: Peng Zhang
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang
Subject: [PATCH v3 1/4] maple_tree: add test for mas_wr_modify() fast path
Date: Thu, 15 Jun 2023 16:42:58 +0800
Message-Id: <20230615084301.97701-2-zhangpeng.00@bytedance.com>
In-Reply-To: <20230615084301.97701-1-zhangpeng.00@bytedance.com>
References: <20230615084301.97701-1-zhangpeng.00@bytedance.com>
Add tests for all cases of mas_wr_append() and mas_wr_slot_store().

Signed-off-by: Peng Zhang
Reviewed-by: Liam R. Howlett
---
 lib/test_maple_tree.c | 65 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 15d7b7bce7d6..9403472af3d7 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -1159,6 +1159,71 @@ static noinline void __init check_ranges(struct maple_tree *mt)
 	MT_BUG_ON(mt, !mt_height(mt));
 	mtree_destroy(mt);
 
+	/* Check in-place modifications */
+	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
+	/* Append to the start of last range */
+	mt_set_non_kernel(50);
+	for (i = 0; i <= 500; i++) {
+		val = i * 5 + 1;
+		val2 = val + 4;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/* Append to the last range without touching any boundaries */
+	for (i = 0; i < 10; i++) {
+		val = val2 + 5;
+		val2 = val + 4;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/* Append to the end of last range */
+	val = val2;
+	for (i = 0; i < 10; i++) {
+		val += 5;
+		MT_BUG_ON(mt, mtree_test_store_range(mt, val, ULONG_MAX,
+				xa_mk_value(val)) != 0);
+	}
+
+	/* Overwriting the range and over a part of the next range */
+	for (i = 10; i < 30; i += 2) {
+		val = i * 5 + 1;
+		val2 = val + 5;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/* Overwriting a part of the range and over the next range */
+	for (i = 50; i < 70; i += 2) {
+		val2 = i * 5;
+		val = val2 - 5;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/*
+	 * Expand the range, only partially overwriting the previous and
+	 * next ranges
+	 */
+	for (i = 100; i < 130; i += 3) {
+		val = i * 5 - 5;
+		val2 = i * 5 + 1;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/*
+	 * Expand the range, only partially overwriting the previous and
+	 * next ranges, in RCU mode
+	 */
+	mt_set_in_rcu(mt);
+	for (i = 150; i < 180; i += 3) {
+		val = i * 5 - 5;
+		val2 = i * 5 + 1;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	MT_BUG_ON(mt, !mt_height(mt));
+	mt_validate(mt);
+	mt_set_non_kernel(0);
+	mtree_destroy(mt);
+
 	/* Test rebalance gaps */
 	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
 	mt_set_non_kernel(50);
-- 
2.20.1
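
For orientation, the "append" comments in the new test cases name three
shapes a store [index, last] can take relative to the last existing
range [r_min, r_max]. The following is a minimal userspace sketch in
plain C of that classification; the enum, helper, and boundary checks
are illustrative assumptions drawn from the test comments, not part of
the maple tree API.

/*
 * store_shape.c - hedged sketch: classify a store against one existing
 * range the way the new check_ranges() cases are grouped.
 * Build: gcc -O2 store_shape.c
 */
#include <stdio.h>

enum store_shape {
	APPEND_MIDDLE,	/* strictly inside: neither boundary touched */
	APPEND_START,	/* starts at r_min, ends before r_max */
	APPEND_END,	/* starts after r_min, ends at r_max */
	OTHER,		/* full overwrite, or spills into a neighbour */
};

static const char * const shape_name[] = {
	"append to middle", "append to start", "append to end", "other path",
};

static enum store_shape classify(unsigned long r_min, unsigned long r_max,
				 unsigned long index, unsigned long last)
{
	if (index > r_min && last < r_max)
		return APPEND_MIDDLE;
	if (index == r_min && last < r_max)
		return APPEND_START;
	if (index > r_min && last == r_max)
		return APPEND_END;
	return OTHER;
}

int main(void)
{
	/* An existing range [1000, 1009] holding a single entry. */
	unsigned long r_min = 1000, r_max = 1009;

	printf("%s\n", shape_name[classify(r_min, r_max, 1003, 1006)]);
	printf("%s\n", shape_name[classify(r_min, r_max, 1000, 1004)]);
	printf("%s\n", shape_name[classify(r_min, r_max, 1005, 1009)]);
	printf("%s\n", shape_name[classify(r_min, r_max, 1005, 1020)]);
	return 0;
}
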
From: Peng Zhang
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang
Subject: [PATCH v3 2/4] maple_tree: add test for expanding range in RCU mode
Date: Thu, 15 Jun 2023 16:42:59 +0800
Message-Id: <20230615084301.97701-3-zhangpeng.00@bytedance.com>
In-Reply-To: <20230615084301.97701-1-zhangpeng.00@bytedance.com>
References: <20230615084301.97701-1-zhangpeng.00@bytedance.com>

Add a test for expanding a range in RCU mode. If the fast path of the
slot store is used to expand a range in RCU mode, this test will fail.

Signed-off-by: Peng Zhang
Reviewed-by: Liam R. Howlett
---
 tools/testing/radix-tree/maple.c | 75 ++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index c42033172276..0887826946f9 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -45,6 +45,13 @@ struct rcu_test_struct2 {
 	unsigned long last[RCU_RANGE_COUNT];
 };
 
+struct rcu_test_struct3 {
+	struct maple_tree *mt;
+	unsigned long index;
+	unsigned long last;
+	bool stop;
+};
+
 struct rcu_reader_struct {
 	unsigned int id;
 	int mod;
@@ -34954,6 +34961,70 @@ void run_check_rcu(struct maple_tree *mt, struct rcu_test_struct *vals)
 	MT_BUG_ON(mt, !vals->seen_entry2);
 }
 
+static void *rcu_slot_store_reader(void *ptr)
+{
+	struct rcu_test_struct3 *test = ptr;
+	MA_STATE(mas, test->mt, test->index, test->index);
+
+	rcu_register_thread();
+
+	rcu_read_lock();
+	while (!test->stop) {
+		mas_walk(&mas);
+		/* The length of growth to both sides must be equal. */
+		RCU_MT_BUG_ON(test, (test->index - mas.index) !=
+			      (mas.last - test->last));
+	}
+	rcu_read_unlock();
+
+	rcu_unregister_thread();
+	return NULL;
+}
+
+static noinline void run_check_rcu_slot_store(struct maple_tree *mt)
+{
+	pthread_t readers[20];
+	int range_cnt = 200, i, limit = 10000;
+	unsigned long len = ULONG_MAX / range_cnt, start, end;
+	struct rcu_test_struct3 test = {.stop = false, .mt = mt};
+
+	start = range_cnt / 2 * len;
+	end = start + len - 1;
+	test.index = start;
+	test.last = end;
+
+	for (i = 0; i < range_cnt; i++) {
+		mtree_store_range(mt, i * len, i * len + len - 1,
+				  xa_mk_value(i * 100), GFP_KERNEL);
+	}
+
+	mt_set_in_rcu(mt);
+	MT_BUG_ON(mt, !mt_in_rcu(mt));
+
+	for (i = 0; i < ARRAY_SIZE(readers); i++) {
+		if (pthread_create(&readers[i], NULL, rcu_slot_store_reader,
+				   &test)) {
+			perror("creating reader thread");
+			exit(1);
+		}
+	}
+
+	usleep(5);
+
+	while (limit--) {
+		/* Step by step, expand the most middle range to both sides. */
+		mtree_store_range(mt, --start, ++end, xa_mk_value(100),
+				  GFP_KERNEL);
+	}
+
+	test.stop = true;
+
+	while (i--)
+		pthread_join(readers[i], NULL);
+
+	mt_validate(mt);
+}
+
 static noinline
 void run_check_rcu_slowread(struct maple_tree *mt, struct rcu_test_struct *vals)
 {
@@ -35206,6 +35277,10 @@ static noinline void __init check_rcu_threaded(struct maple_tree *mt)
 	run_check_rcu(mt, &vals);
 	mtree_destroy(mt);
 
+	/* Check expanding range in RCU mode */
+	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
+	run_check_rcu_slot_store(mt);
+	mtree_destroy(mt);
 
 	/* Forward writer for rcu stress */
 	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
-- 
2.20.1
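
As a side note on what the reader threads in the test above are
checking: a reader must never observe an expansion that grew by
different amounts on the two sides. Below is a small userspace model of
that invariant which substitutes C11 acquire/release atomics for the
kernel's rcu_dereference()/rcu_assign_pointer(); the pointer-published
snapshot, the thread counts, and all names are assumptions of the
sketch, not the kernel test.

/*
 * rcu_expand_model.c - hedged model: a writer publishes immutable
 * {index, last} snapshots that always grow symmetrically, and readers
 * verify they never see an asymmetric state.
 * Build: gcc -O2 -pthread rcu_expand_model.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct range {
	unsigned long index;
	unsigned long last;
};

static _Atomic(struct range *) cur;
static atomic_bool stop;
static unsigned long start0, end0;

static void *reader(void *arg)
{
	(void)arg;
	while (!atomic_load_explicit(&stop, memory_order_relaxed)) {
		/* Stands in for rcu_dereference() in the kernel test. */
		struct range *r = atomic_load_explicit(&cur,
						       memory_order_acquire);

		if (start0 - r->index != r->last - end0) {
			fprintf(stderr, "asymmetric expansion seen\n");
			exit(1);
		}
	}
	return NULL;
}

int main(void)
{
	enum { READERS = 4, STEPS = 10000 };
	pthread_t tid[READERS];
	struct range *r = malloc(sizeof(*r));
	int i;

	start0 = r->index = 1000000;
	end0 = r->last = 1000999;
	atomic_store(&cur, r);

	for (i = 0; i < READERS; i++)
		pthread_create(&tid[i], NULL, reader, NULL);

	for (i = 1; i <= STEPS; i++) {
		struct range *n = malloc(sizeof(*n));

		n->index = start0 - i;	/* grow one step to the left ... */
		n->last = end0 + i;	/* ... and one step to the right */
		/* Stands in for rcu_assign_pointer(); old snapshots are
		 * leaked because the sketch has no grace-period handling. */
		atomic_store_explicit(&cur, n, memory_order_release);
	}

	atomic_store(&stop, true);
	for (i = 0; i < READERS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}
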
From: Peng Zhang
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang
Subject: [PATCH v3 3/4] maple_tree: optimize mas_wr_append(), also improve duplicating VMAs
Date: Thu, 15 Jun 2023 16:43:00 +0800
Message-Id: <20230615084301.97701-4-zhangpeng.00@bytedance.com>
In-Reply-To: <20230615084301.97701-1-zhangpeng.00@bytedance.com>
References: <20230615084301.97701-1-zhangpeng.00@bytedance.com>

When the new range can be completely covered by the original last range
without touching the boundaries on either side, two new entries can be
appended to the end as a fast path. The original last pivot is updated
last, and the two newly appended entries are not accessible before that
update, so this is also safe in RCU mode. This helps sequential
insertion, which is what dup_mmap() does.

Enabling BENCH_FORK in test_maple_tree and just running bench_forking()
gives the following runtimes:

    before:            after:
    17,874.83 msec     15,738.38 msec

This is about a 12% performance improvement for duplicating VMAs.

Signed-off-by: Peng Zhang
Reviewed-by: Liam R. Howlett
---
 lib/maple_tree.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d2799c69a669..da4af6743b30 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4202,10 +4202,10 @@ static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
  *
  * Return: True if appended, false otherwise
  */
-static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
+static inline bool mas_wr_append(struct ma_wr_state *wr_mas,
+		unsigned char new_end)
 {
 	unsigned char end = wr_mas->node_end;
-	unsigned char new_end = end + 1;
 	struct ma_state *mas = wr_mas->mas;
 	unsigned char node_pivots = mt_pivots[wr_mas->type];
 
@@ -4217,16 +4217,27 @@ static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
 		ma_set_meta(wr_mas->node, maple_leaf_64, 0, new_end);
 	}
 
-	if (mas->last == wr_mas->r_max) {
-		/* Append to end of range */
-		rcu_assign_pointer(wr_mas->slots[new_end], wr_mas->entry);
-		wr_mas->pivots[end] = mas->index - 1;
-		mas->offset = new_end;
+	if (new_end == wr_mas->node_end + 1) {
+		if (mas->last == wr_mas->r_max) {
+			/* Append to end of range */
+			rcu_assign_pointer(wr_mas->slots[new_end],
+					   wr_mas->entry);
+			wr_mas->pivots[end] = mas->index - 1;
+			mas->offset = new_end;
+		} else {
+			/* Append to start of range */
+			rcu_assign_pointer(wr_mas->slots[new_end],
+					   wr_mas->content);
+			wr_mas->pivots[end] = mas->last;
+			rcu_assign_pointer(wr_mas->slots[end], wr_mas->entry);
+		}
 	} else {
-		/* Append to start of range */
+		/* Append to the range without touching any boundaries. */
 		rcu_assign_pointer(wr_mas->slots[new_end], wr_mas->content);
-		wr_mas->pivots[end] = mas->last;
-		rcu_assign_pointer(wr_mas->slots[end], wr_mas->entry);
+		wr_mas->pivots[end + 1] = mas->last;
+		rcu_assign_pointer(wr_mas->slots[end + 1], wr_mas->entry);
+		wr_mas->pivots[end] = mas->index - 1;
+		mas->offset = end + 1;
 	}
 
 	if (!wr_mas->content || !wr_mas->entry)
@@ -4273,7 +4284,7 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 		goto slow_path;
 
 	/* Attempt to append */
-	if (new_end == wr_mas->node_end + 1 && mas_wr_append(wr_mas))
+	if (mas_wr_append(wr_mas, new_end))
 		return;
 
 	if (new_end == wr_mas->node_end && mas_wr_slot_store(wr_mas))
-- 
2.20.1
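
To make the ordering argument in the log above concrete, here is a toy,
single-threaded userspace model of the "append without touching any
boundaries" case. The slot/pivot arrays and the dump() helper are
simplifications assumed for the sketch, not the kernel's maple node
layout; the point is only the order of writes: the two new entries are
filled beyond the live end first, and the old last pivot is updated
last, which is what keeps the path safe for concurrent readers.

/*
 * append_sketch.c - hedged toy model of appending two entries for a
 * store that lands strictly inside the current last range.
 * Build: gcc -O2 append_sketch.c
 */
#include <stdio.h>

#define SLOTS 16

static unsigned long pivots[SLOTS];	/* pivots[i] = last index of slot i */
static const char *slots[SLOTS];
static int node_end;			/* offset of the last live entry */

static void dump(const char *when)
{
	unsigned long min = 0;
	int i;

	printf("%s\n", when);
	for (i = 0; i <= node_end; i++) {
		printf("  [%lu, %lu] -> %s\n", min, pivots[i], slots[i]);
		min = pivots[i] + 1;
	}
}

int main(void)
{
	/* One live entry covering [0, 99], as left by a previous store. */
	slots[0] = "old";
	pivots[0] = 99;
	node_end = 0;
	dump("before:");

	/* Store "new" over [40, 59]: the tail copy and the new entry are
	 * appended past the live end first, so a walker using the old
	 * state still stops at pivots[0] == 99 and never sees them ... */
	slots[2] = "old";
	pivots[2] = 99;
	slots[1] = "new";
	pivots[1] = 59;

	/* ... and only the final update of the old last pivot (plus the
	 * new end offset) exposes the appended entries. */
	pivots[0] = 39;
	node_end = 2;

	dump("after storing [40, 59]:");
	return 0;
}
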
From: Peng Zhang
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang
Subject: [PATCH v3 4/4] maple_tree: add a fast path case in mas_wr_slot_store()
Date: Thu, 15 Jun 2023 16:43:01 +0800
Message-Id: <20230615084301.97701-5-zhangpeng.00@bytedance.com>
In-Reply-To: <20230615084301.97701-1-zhangpeng.00@bytedance.com>
References: <20230615084301.97701-1-zhangpeng.00@bytedance.com>

When expanding a range in both directions, only partially overwriting
the previous and next ranges, the number of entries does not increase,
so the pivots can simply be updated as a fast path. However, because
two pivots are updated, this may introduce risks in RCU mode (even
though it may pass the test), so it is only enabled in non-RCU mode
for now.

Signed-off-by: Peng Zhang
Reviewed-by: Liam R. Howlett
---
 lib/maple_tree.c | 36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index da4af6743b30..bff6531fd0bc 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4100,23 +4100,35 @@ static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
 {
 	struct ma_state *mas = wr_mas->mas;
 	unsigned char offset = mas->offset;
+	void __rcu **slots = wr_mas->slots;
 	bool gap = false;
 
-	if (wr_mas->offset_end - offset != 1)
-		return false;
-
-	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset);
-	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset + 1);
+	gap |= !mt_slot_locked(mas->tree, slots, offset);
+	gap |= !mt_slot_locked(mas->tree, slots, offset + 1);
 
-	if (mas->index == wr_mas->r_min) {
-		/* Overwriting the range and over a part of the next range. */
-		rcu_assign_pointer(wr_mas->slots[offset], wr_mas->entry);
-		wr_mas->pivots[offset] = mas->last;
-	} else {
-		/* Overwriting a part of the range and over the next range */
-		rcu_assign_pointer(wr_mas->slots[offset + 1], wr_mas->entry);
+	if (wr_mas->offset_end - offset == 1) {
+		if (mas->index == wr_mas->r_min) {
+			/* Overwriting the range and a part of the next one */
+			rcu_assign_pointer(slots[offset], wr_mas->entry);
+			wr_mas->pivots[offset] = mas->last;
+		} else {
+			/* Overwriting a part of the range and the next one */
+			rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
+			wr_mas->pivots[offset] = mas->index - 1;
+			mas->offset++; /* Keep mas accurate. */
+		}
+	} else if (!mt_in_rcu(mas->tree)) {
+		/*
+		 * Expand the range, only partially overwriting the previous and
+		 * next ranges
+		 */
+		gap |= !mt_slot_locked(mas->tree, slots, offset + 2);
+		rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
 		wr_mas->pivots[offset] = mas->index - 1;
+		wr_mas->pivots[offset + 1] = mas->last;
 		mas->offset++; /* Keep mas accurate. */
+	} else {
+		return false;
 	}
 
 	trace_ma_write(__func__, mas, 0, wr_mas->entry);
-- 
2.20.1
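
For comparison with the append case in the previous patch, a toy
userspace model of this new slot-store fast path follows. The arrays
and names are again simplifications assumed for the sketch, not the
kernel's node layout: expanding the middle range over parts of both
neighbours replaces one slot pointer and moves two pivots while the
entry count stays the same, and those two separate pivot writes are the
reason the patch keeps this path out of RCU mode.

/*
 * slot_store_sketch.c - hedged toy model of expanding a middle range
 * over parts of both neighbours with pivot-only updates.
 * Build: gcc -O2 slot_store_sketch.c
 */
#include <stdio.h>

#define SLOTS 16

static unsigned long pivots[SLOTS];
static const char *slots[SLOTS];
static int node_end;

static void dump(const char *when)
{
	unsigned long min = 0;
	int i;

	printf("%s\n", when);
	for (i = 0; i <= node_end; i++) {
		printf("  [%lu, %lu] -> %s\n", min, pivots[i], slots[i]);
		min = pivots[i] + 1;
	}
}

int main(void)
{
	/* Three entries: previous [0, 9], middle [10, 19], next [20, 29]. */
	slots[0] = "prev";
	pivots[0] = 9;
	slots[1] = "mid";
	pivots[1] = 19;
	slots[2] = "next";
	pivots[2] = 29;
	node_end = 2;
	dump("before:");

	/* Store "new" over [5, 24]: the entry count stays at three, so
	 * only one slot pointer and two pivots change.  A concurrent
	 * reader could observe the node between the two pivot writes,
	 * which is why the kernel only takes this path outside RCU mode. */
	slots[1] = "new";
	pivots[0] = 4;		/* previous range now ends at index - 1 */
	pivots[1] = 24;		/* the new entry ends at last */

	dump("after storing [5, 24]:");
	return 0;
}
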