From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, stable@kernel.org,
    Ojaswin Mujoo, "Ritesh Harjani (IBM)", Jan Kara, Theodore Tso,
    Stefan Wahren
Subject: [PATCH 5.19 203/207] ext4: avoid unnecessary spreading of allocations among groups
Date: Mon, 26 Sep 2022 12:13:12 +0200
Message-Id: <20220926100815.691863297@linuxfoundation.org>
In-Reply-To: <20220926100806.522017616@linuxfoundation.org>
References: <20220926100806.522017616@linuxfoundation.org>

From: Jan Kara

commit 1940265ede6683f6317cba0d428ce6505eaca944 upstream.

mb_set_largest_free_order() maintains the lists of groups whose largest
chunk of free space is of a given order. The way it updates them always
moves the group to the tail of its list, so allocations looking for free
space of a given order effectively end up cycling through all groups
(and, because of how the lists are initialized, from last to first).
This spreads allocations across block groups, which reduces performance
on rotating disks and low-end flash media. Change
mb_set_largest_free_order() to only update the lists when the order of
the largest free chunk in the group actually changes.
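As a rough illustration of the idea above, here is a minimal standalone
userspace C sketch, not the ext4 code itself: names such as struct grp,
order_lists and grp_update_largest_order are made up, and the kernel's
rwlocks, list_head machinery and bb_free check are omitted. The point it
models is that the per-order list is only touched when the group's
largest free order actually changes, so repeated allocations no longer
rotate the group to the tail of its list.

/* order_model.c -- simplified sketch; build with: cc -o order_model order_model.c */
#include <stdio.h>

#define NUM_ORDERS 14			/* stand-in for MB_NUM_ORDERS(sb) */

struct grp {
	int largest_free_order;		/* -1: not on any order list yet */
	int counters[NUM_ORDERS];	/* free extents of each order (like bb_counters) */
	struct grp *prev, *next;	/* membership in one per-order list */
};

static struct grp *order_lists[NUM_ORDERS];	/* heads of the per-order lists */

static void order_list_del(struct grp *g, int order)
{
	if (g->prev)
		g->prev->next = g->next;
	else
		order_lists[order] = g->next;
	if (g->next)
		g->next->prev = g->prev;
	g->prev = g->next = NULL;
}

static void order_list_add(struct grp *g, int order)
{
	g->prev = NULL;
	g->next = order_lists[order];
	if (g->next)
		g->next->prev = g;
	order_lists[order] = g;
}

/* Recompute the largest free order; touch the lists only when it changes. */
static void grp_update_largest_order(struct grp *g)
{
	int i;

	for (i = NUM_ORDERS - 1; i >= 0; i--)
		if (g->counters[i] > 0)
			break;
	if (i == g->largest_free_order)
		return;			/* order unchanged: no list churn */
	if (g->largest_free_order >= 0)
		order_list_del(g, g->largest_free_order);
	g->largest_free_order = i;
	if (i >= 0)
		order_list_add(g, i);
}

int main(void)
{
	struct grp g = { .largest_free_order = -1 };

	g.counters[5] = 2;
	grp_update_largest_order(&g);	/* first call: linked onto order_lists[5] */
	g.counters[5] = 1;
	grp_update_largest_order(&g);	/* order still 5: lists left untouched */
	printf("largest free order: %d\n", g.largest_free_order);
	return 0;
}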
Fixes: 196e402adf2e ("ext4: improve cr 0 / cr 1 group scanning")
CC: stable@kernel.org
Reported-and-tested-by: Stefan Wahren
Tested-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
Signed-off-by: Jan Kara
Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@i2se.com/
Link: https://lore.kernel.org/r/20220908092136.11770-2-jack@suse.cz
Signed-off-by: Theodore Ts'o
Signed-off-by: Greg Kroah-Hartman
---
 fs/ext4/mballoc.c |   24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1077,23 +1077,25 @@ mb_set_largest_free_order(struct super_b
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
 	int i;
 
-	if (test_opt2(sb, MB_OPTIMIZE_SCAN) && grp->bb_largest_free_order >= 0) {
+	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--)
+		if (grp->bb_counters[i] > 0)
+			break;
+	/* No need to move between order lists? */
+	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) ||
+	    i == grp->bb_largest_free_order) {
+		grp->bb_largest_free_order = i;
+		return;
+	}
+
+	if (grp->bb_largest_free_order >= 0) {
 		write_lock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 		list_del_init(&grp->bb_largest_free_order_node);
 		write_unlock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 	}
-	grp->bb_largest_free_order = -1; /* uninit */
-
-	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--) {
-		if (grp->bb_counters[i] > 0) {
-			grp->bb_largest_free_order = i;
-			break;
-		}
-	}
-	if (test_opt2(sb, MB_OPTIMIZE_SCAN) &&
-	    grp->bb_largest_free_order >= 0 && grp->bb_free) {
+	grp->bb_largest_free_order = i;
+	if (grp->bb_largest_free_order >= 0 && grp->bb_free) {
 		write_lock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 		list_add_tail(&grp->bb_largest_free_order_node,