From: Peng Zhang
To: Liam.Howlett@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org, Peng Zhang
Subject: [PATCH v4 4/4] maple_tree: add a fast path case in mas_wr_slot_store()
Date: Wed, 28 Jun 2023 15:36:57 +0800
Message-Id: <20230628073657.75314-5-zhangpeng.00@bytedance.com>
X-Mailer: git-send-email 2.37.0 (Apple Git-136)
In-Reply-To: <20230628073657.75314-1-zhangpeng.00@bytedance.com>
References: <20230628073657.75314-1-zhangpeng.00@bytedance.com>

When a write expands a range in both directions, only partially
overwriting the previous and next ranges, the number of entries does
not increase, so we can just update the pivots as a fast path. However,
this may introduce potential risks in RCU mode, because it updates two
pivots; we only enable the fast path in non-RCU mode.

Signed-off-by: Peng Zhang
Reviewed-by: Liam R. Howlett
---
 lib/maple_tree.c | 36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 56b9b5be28c8..db3be8274660 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4167,23 +4167,35 @@ static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
 {
 	struct ma_state *mas = wr_mas->mas;
 	unsigned char offset = mas->offset;
+	void __rcu **slots = wr_mas->slots;
 	bool gap = false;
 
-	if (wr_mas->offset_end - offset != 1)
-		return false;
-
-	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset);
-	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset + 1);
+	gap |= !mt_slot_locked(mas->tree, slots, offset);
+	gap |= !mt_slot_locked(mas->tree, slots, offset + 1);
 
-	if (mas->index == wr_mas->r_min) {
-		/* Overwriting the range and over a part of the next range. */
-		rcu_assign_pointer(wr_mas->slots[offset], wr_mas->entry);
-		wr_mas->pivots[offset] = mas->last;
-	} else {
-		/* Overwriting a part of the range and over the next range */
-		rcu_assign_pointer(wr_mas->slots[offset + 1], wr_mas->entry);
+	if (wr_mas->offset_end - offset == 1) {
+		if (mas->index == wr_mas->r_min) {
+			/* Overwriting the range and a part of the next one */
+			rcu_assign_pointer(slots[offset], wr_mas->entry);
+			wr_mas->pivots[offset] = mas->last;
+		} else {
+			/* Overwriting a part of the range and the next one */
+			rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
+			wr_mas->pivots[offset] = mas->index - 1;
+			mas->offset++; /* Keep mas accurate. */
+		}
+	} else if (!mt_in_rcu(mas->tree)) {
+		/*
+		 * Expand the range, only partially overwriting the previous and
+		 * next ranges
+		 */
+		gap |= !mt_slot_locked(mas->tree, slots, offset + 2);
+		rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
 		wr_mas->pivots[offset] = mas->index - 1;
+		wr_mas->pivots[offset + 1] = mas->last;
 		mas->offset++; /* Keep mas accurate. */
+	} else {
+		return false;
 	}
 
 	trace_ma_write(__func__, mas, 0, wr_mas->entry);
-- 
2.20.1
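
To illustrate why this new case only needs pivot updates, here is a
standalone toy sketch (an editor's illustration, not kernel code:
toy_node, toy_min and toy_slot_store are invented names and the layout
is heavily simplified). It models a leaf as parallel slots[]/pivots[]
arrays, where slot i covers (pivots[i-1], pivots[i]], and shows that a
write spanning the tail of one range, all of the next range and the
head of the one after it leaves the number of occupied slots unchanged;
only two pivots move.

#include <stdio.h>

#define NSLOTS 8

struct toy_node {
	const char *slots[NSLOTS];    /* entry stored for each range      */
	unsigned long pivots[NSLOTS]; /* inclusive upper bound of range i */
	unsigned char end;            /* offset of the last used slot     */
};

/* Slot `off` covers [toy_min(off), pivots[off]]. */
static unsigned long toy_min(const struct toy_node *n, unsigned char off)
{
	return off ? n->pivots[off - 1] + 1 : 0;
}

static void toy_dump(const struct toy_node *n, const char *when)
{
	printf("%s:\n", when);
	for (unsigned char i = 0; i <= n->end; i++)
		printf("  slot %d: [%lu, %lu] -> %s\n", i,
		       toy_min(n, i), n->pivots[i], n->slots[i]);
}

/*
 * The three-slot fast path in toy form: the new entry spans
 * [index, last] where toy_min(off) < index and last < pivots[off + 2].
 * The entry is stored in slot off + 1 and two pivots move; no slots
 * are shifted, so the entry count is unchanged.
 */
static void toy_slot_store(struct toy_node *n, unsigned char off,
			   unsigned long index, unsigned long last,
			   const char *entry)
{
	n->slots[off + 1] = entry;
	n->pivots[off] = index - 1;   /* previous range now ends early */
	n->pivots[off + 1] = last;    /* new range grows into the next */
}

int main(void)
{
	struct toy_node n = {
		.slots  = { "A", "B", "C", "D" },
		.pivots = { 9, 19, 29, 39 },
		.end    = 3,
	};

	toy_dump(&n, "before");
	/* "X" covers [15, 35]: tail of B, all of C, head of D. */
	toy_slot_store(&n, 1, 15, 35, "X");
	toy_dump(&n, "after");
	return 0;
}

Running it prints four entries both before and after the store, with
only pivots[1] and pivots[2] changed, which mirrors the commit
message's point: the shortcut writes two pivots, so it is only taken
when the tree is not in RCU mode.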