From nobody Sun Feb 8 21:48:43 2026
From: Peng Zhang <zhangpeng.00@bytedance.com>
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang <zhangpeng.00@bytedance.com>
Subject: [PATCH v4 1/4] maple_tree: add test for mas_wr_modify() fast path
Date: Wed, 28 Jun 2023 15:36:54 +0800
Message-Id: <20230628073657.75314-2-zhangpeng.00@bytedance.com>
In-Reply-To: <20230628073657.75314-1-zhangpeng.00@bytedance.com>
References: <20230628073657.75314-1-zhangpeng.00@bytedance.com>

Add tests for all cases of mas_wr_append() and mas_wr_slot_store().

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 lib/test_maple_tree.c | 65 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 9939be34e516..9f60e0c4cc8c 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -1157,6 +1157,71 @@ static noinline void __init check_ranges(struct maple_tree *mt)
 	MT_BUG_ON(mt, !mt_height(mt));
 	mtree_destroy(mt);
 
+	/* Check in-place modifications */
+	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
+	/* Append to the start of last range */
+	mt_set_non_kernel(50);
+	for (i = 0; i <= 500; i++) {
+		val = i * 5 + 1;
+		val2 = val + 4;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/* Append to the last range without touching any boundaries */
+	for (i = 0; i < 10; i++) {
+		val = val2 + 5;
+		val2 = val + 4;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/* Append to the end of last range */
+	val = val2;
+	for (i = 0; i < 10; i++) {
+		val += 5;
+		MT_BUG_ON(mt, mtree_test_store_range(mt, val, ULONG_MAX,
+				xa_mk_value(val)) != 0);
+	}
+
+	/* Overwriting the range and over a part of the next range */
+	for (i = 10; i < 30; i += 2) {
+		val = i * 5 + 1;
+		val2 = val + 5;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/* Overwriting a part of the range and over the next range */
+	for (i = 50; i < 70; i += 2) {
+		val2 = i * 5;
+		val = val2 - 5;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/*
+	 * Expand the range, only partially overwriting the previous and
+	 * next ranges
+	 */
+	for (i = 100; i < 130; i += 3) {
+		val = i * 5 - 5;
+		val2 = i * 5 + 1;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	/*
+	 * Expand the range, only partially overwriting the previous and
+	 * next ranges, in RCU mode
+	 */
+	mt_set_in_rcu(mt);
+	for (i = 150; i < 180; i += 3) {
+		val = i * 5 - 5;
+		val2 = i * 5 + 1;
+		check_store_range(mt, val, val2, xa_mk_value(val), 0);
+	}
+
+	MT_BUG_ON(mt, !mt_height(mt));
+	mt_validate(mt);
+	mt_set_non_kernel(0);
+	mtree_destroy(mt);
+
 	/* Test rebalance gaps */
 	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
 	mt_set_non_kernel(50);
-- 
2.20.1
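
For readers who want to try the store shapes these tests exercise outside of
the test suite, the sketch below uses only the public maple tree API
(mtree_store_range() and friends from <linux/maple_tree.h>). It is
illustrative only and not part of the patch; the tree name and the ranges are
made up. After the first store, the last slot of the tree is the trailing
empty range, and the three following stores touch its start, its interior and
its end, which are the append shapes the new tests cover.

#include <linux/maple_tree.h>
#include <linux/xarray.h>

static DEFINE_MTREE(demo_append_mt);

/* Illustrative only: the three append shapes covered by the new tests. */
static int demo_append_shapes(void)
{
	int ret;

	/* Occupy [100, 199]; the last slot is now the empty range [200, ULONG_MAX]. */
	ret = mtree_store_range(&demo_append_mt, 100, 199, xa_mk_value(1), GFP_KERNEL);
	if (ret)
		goto out;

	/* Append to the start of the last range: the store begins at 200. */
	ret = mtree_store_range(&demo_append_mt, 200, 249, xa_mk_value(2), GFP_KERNEL);
	if (ret)
		goto out;

	/* Append without touching any boundary: strictly inside [250, ULONG_MAX]. */
	ret = mtree_store_range(&demo_append_mt, 300, 349, xa_mk_value(3), GFP_KERNEL);
	if (ret)
		goto out;

	/* Append to the end of the last range: the store runs up to ULONG_MAX. */
	ret = mtree_store_range(&demo_append_mt, 400, ULONG_MAX, xa_mk_value(4), GFP_KERNEL);
out:
	mtree_destroy(&demo_append_mt);
	return ret;
}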

From nobody Sun Feb 8 21:48:43 2026
From: Peng Zhang <zhangpeng.00@bytedance.com>
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang <zhangpeng.00@bytedance.com>
Subject: [PATCH v4 2/4] maple_tree: add test for expanding range in RCU mode
Date: Wed, 28 Jun 2023 15:36:55 +0800
Message-Id: <20230628073657.75314-3-zhangpeng.00@bytedance.com>
In-Reply-To: <20230628073657.75314-1-zhangpeng.00@bytedance.com>
References: <20230628073657.75314-1-zhangpeng.00@bytedance.com>

Add a test for expanding a range in RCU mode. If the slot store fast path
is used to expand a range in RCU mode, this test will fail.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 tools/testing/radix-tree/maple.c | 75 ++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 03539d86cdf0..312c0d9fcbae 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -45,6 +45,13 @@ struct rcu_test_struct2 {
 	unsigned long last[RCU_RANGE_COUNT];
 };
 
+struct rcu_test_struct3 {
+	struct maple_tree *mt;
+	unsigned long index;
+	unsigned long last;
+	bool stop;
+};
+
 struct rcu_reader_struct {
 	unsigned int id;
 	int mod;
@@ -34954,6 +34961,70 @@ void run_check_rcu(struct maple_tree *mt, struct rcu_test_struct *vals)
 	MT_BUG_ON(mt, !vals->seen_entry2);
 }
 
+static void *rcu_slot_store_reader(void *ptr)
+{
+	struct rcu_test_struct3 *test = ptr;
+	MA_STATE(mas, test->mt, test->index, test->index);
+
+	rcu_register_thread();
+
+	rcu_read_lock();
+	while (!test->stop) {
+		mas_walk(&mas);
+		/* The length of growth to both sides must be equal. */
+		RCU_MT_BUG_ON(test, (test->index - mas.index) !=
+				    (mas.last - test->last));
+	}
+	rcu_read_unlock();
+
+	rcu_unregister_thread();
+	return NULL;
+}
+
+static noinline void run_check_rcu_slot_store(struct maple_tree *mt)
+{
+	pthread_t readers[20];
+	int range_cnt = 200, i, limit = 10000;
+	unsigned long len = ULONG_MAX / range_cnt, start, end;
+	struct rcu_test_struct3 test = {.stop = false, .mt = mt};
+
+	start = range_cnt / 2 * len;
+	end = start + len - 1;
+	test.index = start;
+	test.last = end;
+
+	for (i = 0; i < range_cnt; i++) {
+		mtree_store_range(mt, i * len, i * len + len - 1,
+				  xa_mk_value(i * 100), GFP_KERNEL);
+	}
+
+	mt_set_in_rcu(mt);
+	MT_BUG_ON(mt, !mt_in_rcu(mt));
+
+	for (i = 0; i < ARRAY_SIZE(readers); i++) {
+		if (pthread_create(&readers[i], NULL, rcu_slot_store_reader,
+				   &test)) {
+			perror("creating reader thread");
+			exit(1);
+		}
+	}
+
+	usleep(5);
+
+	while (limit--) {
+		/* Step by step, expand the most middle range to both sides. */
+		mtree_store_range(mt, --start, ++end, xa_mk_value(100),
+				  GFP_KERNEL);
+	}
+
+	test.stop = true;
+
+	while (i--)
+		pthread_join(readers[i], NULL);
+
+	mt_validate(mt);
+}
+
 static noinline
 void run_check_rcu_slowread(struct maple_tree *mt, struct rcu_test_struct *vals)
 {
@@ -35206,6 +35277,10 @@ static noinline void __init check_rcu_threaded(struct maple_tree *mt)
 	run_check_rcu(mt, &vals);
 	mtree_destroy(mt);
 
+	/* Check expanding range in RCU mode */
+	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
+	run_check_rcu_slot_store(mt);
+	mtree_destroy(mt);
 
 	/* Forward writer for rcu stress */
 	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
-- 
2.20.1
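
The invariant checked by the reader threads above can be stated in a few
lines. The helper below is an illustrative sketch, not part of the patch (the
function name is made up): a reader walking under rcu_read_lock() must see
the expanded range grown by the same amount on both sides of the original
[orig_index, orig_last] range, never a torn intermediate state.

#include <linux/maple_tree.h>
#include <linux/rcupdate.h>

/*
 * Illustrative sketch; orig_index/orig_last describe the range as it was
 * before the writer started expanding it symmetrically.
 */
static bool range_growth_is_symmetric(struct maple_tree *mt,
				      unsigned long orig_index,
				      unsigned long orig_last)
{
	MA_STATE(mas, mt, orig_index, orig_index);
	bool symmetric;

	rcu_read_lock();
	mas_walk(&mas);	/* Find the range currently covering orig_index. */
	symmetric = (orig_index - mas.index) == (mas.last - orig_last);
	rcu_read_unlock();

	return symmetric;
}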

From nobody Sun Feb 8 21:48:43 2026
From: Peng Zhang <zhangpeng.00@bytedance.com>
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang <zhangpeng.00@bytedance.com>
Subject: [PATCH v4 3/4] maple_tree: optimize mas_wr_append(), also improve duplicating VMAs
Date: Wed, 28 Jun 2023 15:36:56 +0800
Message-Id: <20230628073657.75314-4-zhangpeng.00@bytedance.com>
In-Reply-To: <20230628073657.75314-1-zhangpeng.00@bytedance.com>
References: <20230628073657.75314-1-zhangpeng.00@bytedance.com>

When the new range can be completely covered by the original last range
without touching the boundaries on both sides, two new entries can be
appended to the end as a fast path. We update the original last pivot at
the end, and the newly appended two entries will not be accessed before
this, so it is also safe in RCU mode.

This is useful for sequential insertion, which is what we do in
dup_mmap(). Enabling BENCH_FORK in test_maple_tree and just running
bench_forking() gives the following timings:

before:              after:
17,874.83 msec       15,738.38 msec

It shows about a 12% performance improvement for duplicating VMAs.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 lib/maple_tree.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index bfffbb7cab26..56b9b5be28c8 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4266,10 +4266,10 @@ static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
  *
  * Return: True if appended, false otherwise
  */
-static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
+static inline bool mas_wr_append(struct ma_wr_state *wr_mas,
+		unsigned char new_end)
 {
 	unsigned char end = wr_mas->node_end;
-	unsigned char new_end = end + 1;
 	struct ma_state *mas = wr_mas->mas;
 	unsigned char node_pivots = mt_pivots[wr_mas->type];
 
@@ -4281,16 +4281,27 @@ static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
 		ma_set_meta(wr_mas->node, maple_leaf_64, 0, new_end);
 	}
 
-	if (mas->last == wr_mas->r_max) {
-		/* Append to end of range */
-		rcu_assign_pointer(wr_mas->slots[new_end], wr_mas->entry);
-		wr_mas->pivots[end] = mas->index - 1;
-		mas->offset = new_end;
+	if (new_end == wr_mas->node_end + 1) {
+		if (mas->last == wr_mas->r_max) {
+			/* Append to end of range */
+			rcu_assign_pointer(wr_mas->slots[new_end],
+					   wr_mas->entry);
+			wr_mas->pivots[end] = mas->index - 1;
+			mas->offset = new_end;
+		} else {
+			/* Append to start of range */
+			rcu_assign_pointer(wr_mas->slots[new_end],
+					   wr_mas->content);
+			wr_mas->pivots[end] = mas->last;
+			rcu_assign_pointer(wr_mas->slots[end], wr_mas->entry);
+		}
 	} else {
-		/* Append to start of range */
+		/* Append to the range without touching any boundaries. */
 		rcu_assign_pointer(wr_mas->slots[new_end], wr_mas->content);
-		wr_mas->pivots[end] = mas->last;
-		rcu_assign_pointer(wr_mas->slots[end], wr_mas->entry);
+		wr_mas->pivots[end + 1] = mas->last;
+		rcu_assign_pointer(wr_mas->slots[end + 1], wr_mas->entry);
+		wr_mas->pivots[end] = mas->index - 1;
+		mas->offset = end + 1;
 	}
 
 	if (!wr_mas->content || !wr_mas->entry)
@@ -4337,7 +4348,7 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 		goto slow_path;
 
 	/* Attempt to append */
-	if (new_end == wr_mas->node_end + 1 && mas_wr_append(wr_mas))
+	if (mas_wr_append(wr_mas, new_end))
 		return;
 
 	if (new_end == wr_mas->node_end && mas_wr_slot_store(wr_mas))
-- 
2.20.1
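
To make the new append case concrete, here is a sketch using the public
mtree_* API with made-up values (not part of the patch). Storing strictly
inside the trailing empty range splits that slot into three pieces by
appending two new entries; per the changelog above, the original slot's pivot
is lowered only after the appended entries are in place, so a concurrent RCU
reader sees either the old layout or the complete new one.

#include <linux/maple_tree.h>
#include <linux/xarray.h>

static DEFINE_MTREE(demo_interior_mt);

/* Illustrative only: the store shape handled by the new append case. */
static int demo_interior_append(void)
{
	int ret;

	/* Occupy [100, 199]; the last slot is the empty range [200, ULONG_MAX]. */
	ret = mtree_store_range(&demo_interior_mt, 100, 199, xa_mk_value(1), GFP_KERNEL);
	if (ret)
		goto out;

	/*
	 * Store strictly inside the trailing empty range, touching neither
	 * boundary: afterwards [200, 299] is still empty, [300, 399] holds
	 * the new entry and [400, ULONG_MAX] is still empty. No entry is
	 * removed, so the write can be done by appending two entries.
	 */
	ret = mtree_store_range(&demo_interior_mt, 300, 399, xa_mk_value(2), GFP_KERNEL);
out:
	mtree_destroy(&demo_interior_mt);
	return ret;
}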

From nobody Sun Feb 8 21:48:43 2026
From: Peng Zhang <zhangpeng.00@bytedance.com>
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang <zhangpeng.00@bytedance.com>
Subject: [PATCH v4 4/4] maple_tree: add a fast path case in mas_wr_slot_store()
Date: Wed, 28 Jun 2023 15:36:57 +0800
Message-Id: <20230628073657.75314-5-zhangpeng.00@bytedance.com>
In-Reply-To: <20230628073657.75314-1-zhangpeng.00@bytedance.com>
References: <20230628073657.75314-1-zhangpeng.00@bytedance.com>

When expanding a range in two directions, only partially overwriting the
previous and next ranges, the number of entries will not be increased, so
we can just update the pivots as a fast path. However, it may introduce
potential risks in RCU mode, because it updates two pivots. We only
enable it in non-RCU mode.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 lib/maple_tree.c | 36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 56b9b5be28c8..db3be8274660 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4167,23 +4167,35 @@ static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
 {
 	struct ma_state *mas = wr_mas->mas;
 	unsigned char offset = mas->offset;
+	void __rcu **slots = wr_mas->slots;
 	bool gap = false;
 
-	if (wr_mas->offset_end - offset != 1)
-		return false;
-
-	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset);
-	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset + 1);
+	gap |= !mt_slot_locked(mas->tree, slots, offset);
+	gap |= !mt_slot_locked(mas->tree, slots, offset + 1);
 
-	if (mas->index == wr_mas->r_min) {
-		/* Overwriting the range and over a part of the next range. */
-		rcu_assign_pointer(wr_mas->slots[offset], wr_mas->entry);
-		wr_mas->pivots[offset] = mas->last;
-	} else {
-		/* Overwriting a part of the range and over the next range */
-		rcu_assign_pointer(wr_mas->slots[offset + 1], wr_mas->entry);
+	if (wr_mas->offset_end - offset == 1) {
+		if (mas->index == wr_mas->r_min) {
+			/* Overwriting the range and a part of the next one */
+			rcu_assign_pointer(slots[offset], wr_mas->entry);
+			wr_mas->pivots[offset] = mas->last;
+		} else {
+			/* Overwriting a part of the range and the next one */
+			rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
+			wr_mas->pivots[offset] = mas->index - 1;
+			mas->offset++; /* Keep mas accurate. */
+		}
+	} else if (!mt_in_rcu(mas->tree)) {
+		/*
+		 * Expand the range, only partially overwriting the previous and
+		 * next ranges
+		 */
+		gap |= !mt_slot_locked(mas->tree, slots, offset + 2);
+		rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
 		wr_mas->pivots[offset] = mas->index - 1;
+		wr_mas->pivots[offset + 1] = mas->last;
 		mas->offset++; /* Keep mas accurate. */
+	} else {
+		return false;
 	}
 
 	trace_ma_write(__func__, mas, 0, wr_mas->entry);
-- 
2.20.1
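
A sketch of the write shape this fast path targets, with made-up values and
the public mtree_* API (not part of the patch; whether the fast path is
actually taken also depends on the node layout and on the tree not being in
RCU mode):

#include <linux/maple_tree.h>
#include <linux/xarray.h>

static DEFINE_MTREE(demo_expand_mt);

/* Illustrative only: expand one range over both of its neighbours. */
static int demo_expand_both_sides(void)
{
	int ret;

	/* [100, 199] -> value, with empty ranges on both sides. */
	ret = mtree_store_range(&demo_expand_mt, 100, 199, xa_mk_value(1), GFP_KERNEL);
	if (ret)
		goto out;

	/*
	 * Expand to [90, 209]: the store partially overwrites the empty range
	 * before 100 and the empty range after 199. The number of entries is
	 * unchanged, so (outside RCU mode) only one slot and two pivots need
	 * to be rewritten.
	 */
	ret = mtree_store_range(&demo_expand_mt, 90, 209, xa_mk_value(2), GFP_KERNEL);
out:
	mtree_destroy(&demo_expand_mt);
	return ret;
}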