From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi, Ritesh Harjani
Subject: [PATCH v2 01/12] Revert "ext4: remove ac->ac_found > sbi->s_mb_min_to_scan dead check in ext4_mb_check_limits"
Date: Tue, 30 May 2023 18:03:39 +0530
This reverts commit 32c0869370194ae5ac9f9f501953ef693040f6a1.

The reverted commit was intended to remove a dead check, however it was
observed that this check was actually being used to exit early instead of
looping sbi->s_mb_max_to_scan times when we are able to find a free extent
bigger than the goal extent. Due to this, my performance tests (fsmark,
parallel file writes in a highly fragmented FS) were seeing a 2x-3x
regression.

As an example, the default values of the relevant variables are:

  sbi->s_mb_max_to_scan = 200
  sbi->s_mb_min_to_scan = 10

In ext4_mb_check_limits(), if we find an extent smaller than goal, we
return early and try again. This loop goes on until we have processed
sbi->s_mb_max_to_scan (= 200) free extents, at which point we exit and
just use whatever we have, even if it is smaller than the goal extent.

Now, the regression comes when we find an extent bigger than goal.
Earlier, in this case we would loop only sbi->s_mb_min_to_scan (= 10)
times and then just use the bigger extent. However, with commit 32c08693
that check was removed and hence we would loop sbi->s_mb_max_to_scan
(= 200) times even though we have a big enough free extent to satisfy the
request.
The only time we would exit early would be when the free extent is
*exactly* the size of our goal, which is a pretty uncommon occurrence, and
so we would almost always end up looping 200 times.

Hence, revert the commit by adding the check back to fix the regression.
Also add a comment to outline this policy.

Signed-off-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Kemeng Shi
---
 fs/ext4/mballoc.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index d4b6a2c1881d..7ac6d3524f29 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2063,7 +2063,7 @@ static void ext4_mb_check_limits(struct ext4_allocation_context *ac,
 	if (bex->fe_len < gex->fe_len)
 		return;
 
-	if (finish_group)
+	if (finish_group || ac->ac_found > sbi->s_mb_min_to_scan)
 		ext4_mb_use_best_found(ac, e4b);
 }
 
@@ -2075,6 +2075,20 @@ static void ext4_mb_check_limits(struct ext4_allocation_context *ac,
  * in the context. Later, the best found extent will be used, if
  * mballoc can't find good enough extent.
  *
+ * The algorithm used is roughly as follows:
+ *
+ * * If free extent found is exactly as big as goal, then
+ *   stop the scan and use it immediately
+ *
+ * * If free extent found is smaller than goal, then keep retrying
+ *   up to a max of sbi->s_mb_max_to_scan times (default 200). After
+ *   that stop scanning and use whatever we have.
+ *
+ * * If free extent found is bigger than goal, then keep retrying
+ *   up to a max of sbi->s_mb_min_to_scan times (default 10) before
+ *   stopping the scan and using the extent.
+ *
+ * * FIXME: real allocation policy is to be designed yet!
  */
 static void ext4_mb_measure_extent(struct ext4_allocation_context *ac,
-- 
2.31.1

From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi, "Ritesh Harjani (IBM)"
Subject: [PATCH v2 02/12] ext4: mballoc: Remove useless setting of ac_criteria
Date: Tue, 30 May 2023 18:03:40 +0530
Message-Id: <1dbae05617519cb6202f1b299c9d1be3e7cda763.1685449706.git.ojaswin@linux.ibm.com>
From: "Ritesh Harjani (IBM)"

There will be changes coming in future patches which will introduce a new
criteria for block allocation. This removes the useless setting of
ac_criteria. AFAIU, this might only be used to differentiate between
whether preallocated blocks were allocated or the regular allocator was
called for allocating blocks. Hence this also adds debug prints to
identify what type of block allocation was done in ext4_mb_show_ac().
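As a rough illustration of the reporting this patch adds to ext4_mb_show_ac(), the sketch below reproduces the decision logic in plain Python (outside the kernel). The names mirror the patch, but `describe_alloc` and its argument are simplified stand-ins for the allocation context, not the real structures:

```python
# Simplified stand-ins for the mballoc preallocation (pa) types.
MB_INODE_PA, MB_GROUP_PA = 0, 1

def describe_alloc(ac_pa_type):
    """Mimic the debug output added to ext4_mb_show_ac().

    ac_pa_type is None when no preallocation satisfied the request,
    otherwise one of MB_INODE_PA / MB_GROUP_PA.
    """
    lines = ["used pa: %s" % ("yes" if ac_pa_type is not None else "no")]
    if ac_pa_type is not None:
        lines.append("pa_type %s" %
                     ("group pa" if ac_pa_type == MB_GROUP_PA else "inode pa"))
    return lines

print(describe_alloc(MB_GROUP_PA))  # ['used pa: yes', 'pa_type group pa']
print(describe_alloc(None))         # ['used pa: no']
```

The point of the extra prints is exactly this distinction: whether a preallocation was hit at all, and if so whether it was a group or inode preallocation, information previously inferred from the dead ac_criteria stores.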
Signed-off-by: Ritesh Harjani (IBM)
Signed-off-by: Ojaswin Mujoo
Reviewed-by: Jan Kara
---
 fs/ext4/mballoc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 7ac6d3524f29..9d73f61458d4 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -4627,7 +4627,6 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
 		atomic_inc(&tmp_pa->pa_count);
 		ext4_mb_use_inode_pa(ac, tmp_pa);
 		spin_unlock(&tmp_pa->pa_lock);
-		ac->ac_criteria = 10;
 		read_unlock(&ei->i_prealloc_lock);
 		return true;
 	}
@@ -4670,7 +4669,6 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
 	}
 	if (cpa) {
 		ext4_mb_use_group_pa(ac, cpa);
-		ac->ac_criteria = 20;
 		return true;
 	}
 	return false;
@@ -5444,6 +5442,10 @@ static void ext4_mb_show_ac(struct ext4_allocation_context *ac)
 			(unsigned long)ac->ac_b_ex.fe_logical,
 			(int)ac->ac_criteria);
 	mb_debug(sb, "%u found", ac->ac_found);
+	mb_debug(sb, "used pa: %s, ", ac->ac_pa ? "yes" : "no");
+	if (ac->ac_pa)
+		mb_debug(sb, "pa_type %s\n", ac->ac_pa->pa_type == MB_GROUP_PA ?
+			 "group pa" : "inode pa");
 	ext4_mb_show_pa(sb);
 }
 #else
-- 
2.31.1

From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi, "Ritesh Harjani (IBM)"
Subject: [PATCH v2 03/12] ext4: Remove unused extern variables declaration
Date: Tue, 30 May 2023 18:03:41 +0530
Message-Id: <928b3142062172533b6d1b5a94de94700590fef3.1685449706.git.ojaswin@linux.ibm.com>
From: "Ritesh Harjani (IBM)"

ext4_mb_stats & ext4_mb_max_to_scan are never used. We use sbi->s_mb_stats
and sbi->s_mb_max_to_scan instead. Hence kill these extern declarations.

Signed-off-by: Ritesh Harjani (IBM)
Signed-off-by: Ojaswin Mujoo
Reviewed-by: Jan Kara
---
 fs/ext4/ext4.h    | 2 --
 fs/ext4/mballoc.h | 2 +-
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index b39a52b93a26..c075da665ec1 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2835,8 +2835,6 @@ int ext4_fc_record_regions(struct super_block *sb, int ino,
 /* mballoc.c */
 extern const struct seq_operations ext4_mb_seq_groups_ops;
 extern const struct seq_operations ext4_mb_seq_structs_summary_ops;
-extern long ext4_mb_stats;
-extern long ext4_mb_max_to_scan;
 extern int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset);
 extern int ext4_mb_init(struct super_block *);
 extern int ext4_mb_release(struct super_block *);
diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index 6d85ee8674a6..24b666e558f1 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -49,7 +49,7 @@
 #define MB_DEFAULT_MIN_TO_SCAN	10
 
 /*
- * with 'ext4_mb_stats' allocator will collect stats that will be
+ * with 's_mb_stats' allocator will collect stats that will be
  * shown at umount. The collecting costs though!
  */
 #define MB_DEFAULT_STATS	0
-- 
2.31.1

From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi, Ritesh Harjani
Subject: [PATCH v2 04/12] ext4: Convert mballoc cr (criteria) to enum
Date: Tue, 30 May 2023 18:03:42 +0530
Message-Id: <5d82fd467bdf70ea45bdaef810af3b146013946c.1685449706.git.ojaswin@linux.ibm.com>
Convert criteria to be an enum so it is easier to maintain, and update the
tracefiles to use enum names. This change also makes it easier to insert
new criteria in the future.

There is no functional change in this patch.

Signed-off-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
---
 fs/ext4/ext4.h              | 23 +++++++--
 fs/ext4/mballoc.c           | 96 ++++++++++++++++++-------------------
 include/trace/events/ext4.h | 16 ++++++-
 3 files changed, 82 insertions(+), 53 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index c075da665ec1..f9a4eaa10c6a 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -127,6 +127,23 @@ enum SHIFT_DIRECTION {
 	SHIFT_RIGHT,
 };
 
+/*
+ * Number of criteria defined. For each criteria, mballoc has a slightly
+ * different way of finding the required blocks and usually, the higher the
+ * criteria the slower the allocation. We start at lower criteria and keep
+ * falling back to higher ones if we are not able to find any blocks.
+ */ +#define EXT4_MB_NUM_CRS 4 +/* + * All possible allocation criterias for mballoc + */ +enum criteria { + CR0, + CR1, + CR2, + CR3, +}; + /* * Flags used in mballoc's allocation_context flags field. * @@ -1542,9 +1559,9 @@ struct ext4_sb_info { atomic_t s_bal_2orders; /* 2^order hits */ atomic_t s_bal_cr0_bad_suggestions; atomic_t s_bal_cr1_bad_suggestions; - atomic64_t s_bal_cX_groups_considered[4]; - atomic64_t s_bal_cX_hits[4]; - atomic64_t s_bal_cX_failed[4]; /* cX loop didn't find blocks */ + atomic64_t s_bal_cX_groups_considered[EXT4_MB_NUM_CRS]; + atomic64_t s_bal_cX_hits[EXT4_MB_NUM_CRS]; + atomic64_t s_bal_cX_failed[EXT4_MB_NUM_CRS]; /* cX loop didn't find bloc= ks */ atomic_t s_mb_buddies_generated; /* number of buddies generated */ atomic64_t s_mb_generation_time; atomic_t s_mb_lost_chunks; diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 9d73f61458d4..97eaa22b907d 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -155,19 +155,19 @@ * structures to decide the order in which groups are to be traversed for * fulfilling an allocation request. * - * At CR =3D 0, we look for groups which have the largest_free_order >=3D = the order + * At CR0 , we look for groups which have the largest_free_order >=3D the = order * of the request. We directly look at the largest free order list in the = data * structure (1) above where largest_free_order =3D order of the request. = If that * list is empty, we look at remaining list in the increasing order of - * largest_free_order. This allows us to perform CR =3D 0 lookup in O(1) t= ime. + * largest_free_order. This allows us to perform CR0 lookup in O(1) time. * - * At CR =3D 1, we only consider groups where average fragment size > requ= est + * At CR1, we only consider groups where average fragment size > request * size. So, we lookup a group which has average fragment size just above = or * equal to request size using our average fragment size group lists (data * structure 2) in O(1) time. 
 *
 * If "mb_optimize_scan" mount option is not set, mballoc traverses groups in
- * linear order which requires O(N) search time for each CR0 and CR1 phase.
+ * linear order which requires O(N) search time for each CR0 and CR1 phase.
 *
 * The regular allocator (using the buddy cache) supports a few tunables.
 *
@@ -410,7 +410,7 @@ static void ext4_mb_generate_from_freelist(struct super_block *sb, void *bitmap,
 static void ext4_mb_new_preallocation(struct ext4_allocation_context *ac);
 
 static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
-				ext4_group_t group, int cr);
+				ext4_group_t group, enum criteria cr);
 
 static int ext4_try_to_trim_range(struct super_block *sb,
		struct ext4_buddy *e4b, ext4_grpblk_t start,
@@ -860,7 +860,7 @@ mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
  * cr level needs an update.
  */
 static void ext4_mb_choose_next_group_cr0(struct ext4_allocation_context *ac,
-			int *new_cr, ext4_group_t *group, ext4_group_t ngroups)
+			enum criteria *new_cr, ext4_group_t *group, ext4_group_t ngroups)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
 	struct ext4_group_info *iter, *grp;
@@ -885,8 +885,8 @@ static void ext4_mb_choose_next_group_cr0(struct ext4_allocation_context *ac,
 		list_for_each_entry(iter, &sbi->s_mb_largest_free_orders[i],
 				    bb_largest_free_order_node) {
 			if (sbi->s_mb_stats)
-				atomic64_inc(&sbi->s_bal_cX_groups_considered[0]);
-			if (likely(ext4_mb_good_group(ac, iter->bb_group, 0))) {
+				atomic64_inc(&sbi->s_bal_cX_groups_considered[CR0]);
+			if (likely(ext4_mb_good_group(ac, iter->bb_group, CR0))) {
 				grp = iter;
 				break;
 			}
@@ -898,7 +898,7 @@ static void ext4_mb_choose_next_group_cr0(struct ext4_allocation_context *ac,
 
 	if (!grp) {
 		/* Increment cr and search again */
-		*new_cr = 1;
+		*new_cr = CR1;
 	} else {
 		*group = grp->bb_group;
 		ac->ac_flags |= EXT4_MB_CR0_OPTIMIZED;
@@ -910,7 +910,7 @@ static void ext4_mb_choose_next_group_cr0(struct ext4_allocation_context *ac,
  * order. Updates *new_cr if cr level needs an update.
  */
 static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac,
-			int *new_cr, ext4_group_t *group, ext4_group_t ngroups)
+			enum criteria *new_cr, ext4_group_t *group, ext4_group_t ngroups)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
 	struct ext4_group_info *grp = NULL, *iter;
@@ -933,8 +933,8 @@ static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac,
 		list_for_each_entry(iter, &sbi->s_mb_avg_fragment_size[i],
 				    bb_avg_fragment_size_node) {
 			if (sbi->s_mb_stats)
-				atomic64_inc(&sbi->s_bal_cX_groups_considered[1]);
-			if (likely(ext4_mb_good_group(ac, iter->bb_group, 1))) {
+				atomic64_inc(&sbi->s_bal_cX_groups_considered[CR1]);
+			if (likely(ext4_mb_good_group(ac, iter->bb_group, CR1))) {
 				grp = iter;
 				break;
 			}
@@ -948,7 +948,7 @@ static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac,
 		*group = grp->bb_group;
 		ac->ac_flags |= EXT4_MB_CR1_OPTIMIZED;
 	} else {
-		*new_cr = 2;
+		*new_cr = CR2;
 	}
 }
 
@@ -956,7 +956,7 @@ static inline int should_optimize_scan(struct ext4_allocation_context *ac)
 {
 	if (unlikely(!test_opt2(ac->ac_sb, MB_OPTIMIZE_SCAN)))
 		return 0;
-	if (ac->ac_criteria >= 2)
+	if (ac->ac_criteria >= CR2)
 		return 0;
 	if (!ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS))
 		return 0;
@@ -1001,7 +1001,7 @@ next_linear_group(struct ext4_allocation_context *ac, int group, int ngroups)
 * @ngroups	Total number of groups
 */
 static void ext4_mb_choose_next_group(struct ext4_allocation_context *ac,
-		int *new_cr, ext4_group_t *group, ext4_group_t ngroups)
+		enum criteria *new_cr, ext4_group_t *group, ext4_group_t ngroups)
 {
 	*new_cr = ac->ac_criteria;
 
@@ -1010,9 +1010,9 @@ static void ext4_mb_choose_next_group(struct ext4_allocation_context *ac,
 		return;
 	}
 
-	if (*new_cr == 0) {
+	if (*new_cr == CR0) {
 		ext4_mb_choose_next_group_cr0(ac, new_cr, group, ngroups);
-	} else if (*new_cr == 1) {
+	} else if (*new_cr == CR1) {
 		ext4_mb_choose_next_group_cr1(ac, new_cr, group, ngroups);
 	} else {
 		/*
@@ -2409,13 +2409,13 @@ void ext4_mb_scan_aligned(struct ext4_allocation_context *ac,
  * for the allocation or not.
  */
 static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
-				ext4_group_t group, int cr)
+				ext4_group_t group, enum criteria cr)
 {
 	ext4_grpblk_t free, fragments;
 	int flex_size = ext4_flex_bg_size(EXT4_SB(ac->ac_sb));
 	struct ext4_group_info *grp = ext4_get_group_info(ac->ac_sb, group);
 
-	BUG_ON(cr < 0 || cr >= 4);
+	BUG_ON(cr < CR0 || cr >= EXT4_MB_NUM_CRS);
 
 	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp) || !grp))
 		return false;
@@ -2429,7 +2429,7 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
 		return false;
 
 	switch (cr) {
-	case 0:
+	case CR0:
 		BUG_ON(ac->ac_2order == 0);
 
 		/* Avoid using the first bg of a flexgroup for data files */
@@ -2448,15 +2448,15 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
 			return false;
 
 		return true;
-	case 1:
+	case CR1:
 		if ((free / fragments) >= ac->ac_g_ex.fe_len)
 			return true;
 		break;
-	case 2:
+	case CR2:
 		if (free >= ac->ac_g_ex.fe_len)
 			return true;
 		break;
-	case 3:
+	case CR3:
 		return true;
 	default:
 		BUG();
@@ -2477,7 +2477,7 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
 * out"!
*/ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac, - ext4_group_t group, int cr) + ext4_group_t group, enum criteria cr) { struct ext4_group_info *grp =3D ext4_get_group_info(ac->ac_sb, group); struct super_block *sb =3D ac->ac_sb; @@ -2497,7 +2497,7 @@ static int ext4_mb_good_group_nolock(struct ext4_allo= cation_context *ac, free =3D grp->bb_free; if (free =3D=3D 0) goto out; - if (cr <=3D 2 && free < ac->ac_g_ex.fe_len) + if (cr <=3D CR2 && free < ac->ac_g_ex.fe_len) goto out; if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp))) goto out; @@ -2512,7 +2512,7 @@ static int ext4_mb_good_group_nolock(struct ext4_allo= cation_context *ac, ext4_get_group_desc(sb, group, NULL); int ret; =20 - /* cr=3D0/1 is a very optimistic search to find large + /* cr=3DCR0/CR1 is a very optimistic search to find large * good chunks almost for free. If buddy data is not * ready, then this optimization makes no sense. But * we never skip the first block group in a flex_bg, @@ -2520,7 +2520,7 @@ static int ext4_mb_good_group_nolock(struct ext4_allo= cation_context *ac, * and we want to make sure we locate metadata blocks * in the first block group in the flex_bg if possible. */ - if (cr < 2 && + if (cr < CR2 && (!sbi->s_log_groups_per_flex || ((group & ((1 << sbi->s_log_groups_per_flex) - 1)) !=3D 0)) && !(ext4_has_group_desc_csum(sb) && @@ -2626,7 +2626,7 @@ static noinline_for_stack int ext4_mb_regular_allocator(struct ext4_allocation_context *ac) { ext4_group_t prefetch_grp =3D 0, ngroups, group, i; - int cr =3D -1, new_cr; + enum criteria cr, new_cr; int err =3D 0, first_err =3D 0; unsigned int nr =3D 0, prefetch_ios =3D 0; struct ext4_sb_info *sbi; @@ -2684,13 +2684,13 @@ ext4_mb_regular_allocator(struct ext4_allocation_co= ntext *ac) } =20 /* Let's just scan groups to find more-less suitable blocks */ - cr =3D ac->ac_2order ? 0 : 1; + cr =3D ac->ac_2order ? 
CR0 : CR1; /* - * cr =3D=3D 0 try to get exact allocation, - * cr =3D=3D 3 try to get anything + * cr =3D=3D CR0 try to get exact allocation, + * cr =3D=3D CR3 try to get anything */ repeat: - for (; cr < 4 && ac->ac_status =3D=3D AC_STATUS_CONTINUE; cr++) { + for (; cr < EXT4_MB_NUM_CRS && ac->ac_status =3D=3D AC_STATUS_CONTINUE; c= r++) { ac->ac_criteria =3D cr; /* * searching for the right group start @@ -2717,7 +2717,7 @@ ext4_mb_regular_allocator(struct ext4_allocation_cont= ext *ac) * spend a lot of time loading imperfect groups */ if ((prefetch_grp =3D=3D group) && - (cr > 1 || + (cr > CR1 || prefetch_ios < sbi->s_mb_prefetch_limit)) { unsigned int curr_ios =3D prefetch_ios; =20 @@ -2759,9 +2759,9 @@ ext4_mb_regular_allocator(struct ext4_allocation_cont= ext *ac) } =20 ac->ac_groups_scanned++; - if (cr =3D=3D 0) + if (cr =3D=3D CR0) ext4_mb_simple_scan_group(ac, &e4b); - else if (cr =3D=3D 1 && sbi->s_stripe && + else if (cr =3D=3D CR1 && sbi->s_stripe && !(ac->ac_g_ex.fe_len % EXT4_B2C(sbi, sbi->s_stripe))) ext4_mb_scan_aligned(ac, &e4b); @@ -2802,7 +2802,7 @@ ext4_mb_regular_allocator(struct ext4_allocation_cont= ext *ac) ac->ac_b_ex.fe_len =3D 0; ac->ac_status =3D AC_STATUS_CONTINUE; ac->ac_flags |=3D EXT4_MB_HINT_FIRST; - cr =3D 3; + cr =3D CR3; goto repeat; } } @@ -2927,36 +2927,36 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, vo= id *offset) seq_printf(seq, "\tgroups_scanned: %u\n", atomic_read(&sbi->s_bal_groups= _scanned)); =20 seq_puts(seq, "\tcr0_stats:\n"); - seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[0])= ); + seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR0= ])); seq_printf(seq, "\t\tgroups_considered: %llu\n", - atomic64_read(&sbi->s_bal_cX_groups_considered[0])); + atomic64_read(&sbi->s_bal_cX_groups_considered[CR0])); seq_printf(seq, "\t\tuseless_loops: %llu\n", - atomic64_read(&sbi->s_bal_cX_failed[0])); + atomic64_read(&sbi->s_bal_cX_failed[CR0])); seq_printf(seq, 
"\t\tbad_suggestions: %u\n", atomic_read(&sbi->s_bal_cr0_bad_suggestions)); =20 seq_puts(seq, "\tcr1_stats:\n"); - seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[1])= ); + seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR1= ])); seq_printf(seq, "\t\tgroups_considered: %llu\n", - atomic64_read(&sbi->s_bal_cX_groups_considered[1])); + atomic64_read(&sbi->s_bal_cX_groups_considered[CR1])); seq_printf(seq, "\t\tuseless_loops: %llu\n", - atomic64_read(&sbi->s_bal_cX_failed[1])); + atomic64_read(&sbi->s_bal_cX_failed[CR1])); seq_printf(seq, "\t\tbad_suggestions: %u\n", atomic_read(&sbi->s_bal_cr1_bad_suggestions)); =20 seq_puts(seq, "\tcr2_stats:\n"); - seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[2])= ); + seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR2= ])); seq_printf(seq, "\t\tgroups_considered: %llu\n", - atomic64_read(&sbi->s_bal_cX_groups_considered[2])); + atomic64_read(&sbi->s_bal_cX_groups_considered[CR2])); seq_printf(seq, "\t\tuseless_loops: %llu\n", - atomic64_read(&sbi->s_bal_cX_failed[2])); + atomic64_read(&sbi->s_bal_cX_failed[CR2])); =20 seq_puts(seq, "\tcr3_stats:\n"); - seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[3])= ); + seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR3= ])); seq_printf(seq, "\t\tgroups_considered: %llu\n", - atomic64_read(&sbi->s_bal_cX_groups_considered[3])); + atomic64_read(&sbi->s_bal_cX_groups_considered[CR3])); seq_printf(seq, "\t\tuseless_loops: %llu\n", - atomic64_read(&sbi->s_bal_cX_failed[3])); + atomic64_read(&sbi->s_bal_cX_failed[CR3])); seq_printf(seq, "\textents_scanned: %u\n", atomic_read(&sbi->s_bal_ex_sca= nned)); seq_printf(seq, "\t\tgoal_hits: %u\n", atomic_read(&sbi->s_bal_goals)); seq_printf(seq, "\t\t2^n_hits: %u\n", atomic_read(&sbi->s_bal_2orders)); diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h index ebccf6a6aa1b..f062147ca32b 100644 --- 
a/include/trace/events/ext4.h +++ b/include/trace/events/ext4.h @@ -120,6 +120,18 @@ TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX); { EXT4_FC_REASON_INODE_JOURNAL_DATA, "INODE_JOURNAL_DATA"}, \ { EXT4_FC_REASON_ENCRYPTED_FILENAME, "ENCRYPTED_FILENAME"}) =20 +TRACE_DEFINE_ENUM(CR0); +TRACE_DEFINE_ENUM(CR1); +TRACE_DEFINE_ENUM(CR2); +TRACE_DEFINE_ENUM(CR3); + +#define show_criteria(cr) \ + __print_symbolic(cr, \ + { CR0, "CR0" }, \ + { CR1, "CR1" }, \ + { CR2, "CR2" }, \ + { CR3, "CR3" }) + TRACE_EVENT(ext4_other_inode_update_time, TP_PROTO(struct inode *inode, ino_t orig_ino), =20 @@ -1063,7 +1075,7 @@ TRACE_EVENT(ext4_mballoc_alloc, ), =20 TP_printk("dev %d,%d inode %lu orig %u/%d/%u@%u goal %u/%d/%u@%u " - "result %u/%d/%u@%u blks %u grps %u cr %u flags %s " + "result %u/%d/%u@%u blks %u grps %u cr %s flags %s " "tail %u broken %u", MAJOR(__entry->dev), MINOR(__entry->dev), (unsigned long) __entry->ino, @@ -1073,7 +1085,7 @@ TRACE_EVENT(ext4_mballoc_alloc, __entry->goal_len, __entry->goal_logical, __entry->result_group, __entry->result_start, __entry->result_len, __entry->result_logical, - __entry->found, __entry->groups, __entry->cr, + __entry->found, __entry->groups, show_criteria(__entry->cr), show_mballoc_flags(__entry->flags), __entry->tail, __entry->buddy ? 
1 << __entry->buddy : 0)
	);
-- 
2.31.1

From nobody Sun Sep 14 01:57:00 2025
From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi, Ritesh Harjani
Subject: [PATCH v2 05/12] ext4: Add per CR extent scanned counter
Date: Tue, 30 May 2023 18:03:43 +0530
Message-Id: <55bb6d80f6e22ed2a5a830aa045572bdffc8b1b9.1685449706.git.ojaswin@linux.ibm.com>

This gives better visibility into the number of extents scanned in each
particular CR. For example, this information can be used to see how our
block group scanning logic is performing when the BG is fragmented.

Signed-off-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Jan Kara
---
 fs/ext4/ext4.h    |  1 +
 fs/ext4/mballoc.c | 12 ++++++++++++
 fs/ext4/mballoc.h |  1 +
 3 files changed, 14 insertions(+)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index f9a4eaa10c6a..2df4189ef778 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1553,6 +1553,7 @@ struct ext4_sb_info {
 	atomic_t s_bal_success;	/* we found long enough chunks */
 	atomic_t s_bal_allocated;	/* in blocks */
 	atomic_t s_bal_ex_scanned;	/* total extents scanned */
+	atomic_t s_bal_cX_ex_scanned[EXT4_MB_NUM_CRS];	/* total extents scanned */
 	atomic_t s_bal_groups_scanned;	/* number of groups scanned */
 	atomic_t s_bal_goals;	/* goal hits */
 	atomic_t s_bal_breaks;	/* too long searches */

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 97eaa22b907d..a3106607486f 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2104,6 +2104,7 @@ static void ext4_mb_measure_extent(struct ext4_allocation_context *ac,
 	BUG_ON(ac->ac_status != AC_STATUS_CONTINUE);

 	ac->ac_found++;
+	ac->ac_cX_found[ac->ac_criteria]++;

 	/*
 	 * The special case - take what you catch first
@@ -2278,6 +2279,7 @@ void ext4_mb_simple_scan_group(struct ext4_allocation_context *ac,
 			break;
 		}
 		ac->ac_found++;
+		ac->ac_cX_found[ac->ac_criteria]++;

 		ac->ac_b_ex.fe_len = 1 << i;
 		ac->ac_b_ex.fe_start = k << i;
@@ -2393,6 +2395,7 @@ void ext4_mb_scan_aligned(struct ext4_allocation_context *ac,
 		max = mb_find_extent(e4b, i, stripe, &ex);
 		if (max >= stripe) {
 			ac->ac_found++;
+			ac->ac_cX_found[ac->ac_criteria]++;
 			ex.fe_logical = 0xDEADF00D; /* debug value */
 			ac->ac_b_ex = ex;
 			ext4_mb_use_best_found(ac, e4b);
@@ -2930,6 +2933,7 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
 	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR0]));
 	seq_printf(seq, "\t\tgroups_considered: %llu\n",
 		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR0]));
+	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR0]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
 		   atomic64_read(&sbi->s_bal_cX_failed[CR0]));
 	seq_printf(seq, "\t\tbad_suggestions: %u\n",
@@ -2939,6 +2943,7 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
 	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR1]));
 	seq_printf(seq, "\t\tgroups_considered: %llu\n",
 		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR1]));
+	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR1]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
 		   atomic64_read(&sbi->s_bal_cX_failed[CR1]));
 	seq_printf(seq, "\t\tbad_suggestions: %u\n",
@@ -2948,6 +2953,7 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
 	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR2]));
 	seq_printf(seq, "\t\tgroups_considered: %llu\n",
 		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR2]));
+	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR2]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
 		   atomic64_read(&sbi->s_bal_cX_failed[CR2]));

@@ -2955,6 +2961,7 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
 	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR3]));
 	seq_printf(seq, "\t\tgroups_considered: %llu\n",
 		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR3]));
+	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR3]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
 		   atomic64_read(&sbi->s_bal_cX_failed[CR3]));
 	seq_printf(seq, "\textents_scanned: %u\n", atomic_read(&sbi->s_bal_ex_scanned));
@@ -4403,7 +4410,12 @@ static void ext4_mb_collect_stats(struct ext4_allocation_context *ac)
 		atomic_add(ac->ac_b_ex.fe_len, &sbi->s_bal_allocated);
 		if (ac->ac_b_ex.fe_len >= ac->ac_o_ex.fe_len)
 			atomic_inc(&sbi->s_bal_success);
+
 		atomic_add(ac->ac_found, &sbi->s_bal_ex_scanned);
+		for (int i = 0; i < EXT4_MB_NUM_CRS; i++) {
+			atomic_add(ac->ac_cX_found[i], &sbi->s_bal_cX_ex_scanned[i]);
+		}
+
 		atomic_add(ac->ac_groups_scanned, &sbi->s_bal_groups_scanned);
 		if (ac->ac_g_ex.fe_start == ac->ac_b_ex.fe_start &&
 		    ac->ac_g_ex.fe_group == ac->ac_b_ex.fe_group)

diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index 24b666e558f1..acfdc204e15d 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -184,6 +184,7 @@ struct ext4_allocation_context {
 	__u16 ac_groups_scanned;
 	__u16 ac_groups_linear_remaining;
 	__u16 ac_found;
+	__u16 ac_cX_found[EXT4_MB_NUM_CRS];
 	__u16 ac_tail;
 	__u16 ac_buddy;
 	__u8 ac_status;
-- 
2.31.1

From nobody Sun Sep 14 01:57:00 2025
From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi, Ritesh Harjani
Subject: [PATCH v2 06/12] ext4: Add counter to track successful allocation of goal length
Date: Tue, 30 May 2023 18:03:44 +0530
Message-Id: <343620e2be8a237239ea2613a7a866ee8607e973.1685449706.git.ojaswin@linux.ibm.com>

Track the number of allocations where the length of blocks allocated is
equal to the length of goal blocks (post normalization). This metric
could be useful when making changes to the allocator logic in the
future, as it gives us visibility into how often we trim our requests.

PS: ac_b_ex.fe_len might get modified due to preallocation efforts and
hence we use ac_f_ex.fe_len instead, since we want to compare how much
the allocator was actually able to find.

Signed-off-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Jan Kara
---
 fs/ext4/ext4.h    | 1 +
 fs/ext4/mballoc.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 2df4189ef778..eae981ab2539 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1556,6 +1556,7 @@ struct ext4_sb_info {
 	atomic_t s_bal_cX_ex_scanned[EXT4_MB_NUM_CRS];	/* total extents scanned */
 	atomic_t s_bal_groups_scanned;	/* number of groups scanned */
 	atomic_t s_bal_goals;	/* goal hits */
+	atomic_t s_bal_len_goals;	/* len goal hits */
 	atomic_t s_bal_breaks;	/* too long searches */
 	atomic_t s_bal_2orders;	/* 2^order hits */
 	atomic_t s_bal_cr0_bad_suggestions;

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index a3106607486f..73e98a4d01f5 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2966,6 +2966,7 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
 		   atomic64_read(&sbi->s_bal_cX_failed[CR3]));
 	seq_printf(seq, "\textents_scanned: %u\n", atomic_read(&sbi->s_bal_ex_scanned));
 	seq_printf(seq, "\t\tgoal_hits: %u\n", atomic_read(&sbi->s_bal_goals));
+	seq_printf(seq, "\t\tlen_goal_hits: %u\n", atomic_read(&sbi->s_bal_len_goals));
 	seq_printf(seq, "\t\t2^n_hits: %u\n", atomic_read(&sbi->s_bal_2orders));
 	seq_printf(seq, "\t\tbreaks: %u\n", atomic_read(&sbi->s_bal_breaks));
 	seq_printf(seq, "\t\tlost: %u\n", atomic_read(&sbi->s_mb_lost_chunks));
@@ -4420,6 +4421,8 @@ static void ext4_mb_collect_stats(struct ext4_allocation_context *ac)
 		if (ac->ac_g_ex.fe_start == ac->ac_b_ex.fe_start &&
 		    ac->ac_g_ex.fe_group == ac->ac_b_ex.fe_group)
 			atomic_inc(&sbi->s_bal_goals);
+		if (ac->ac_f_ex.fe_len == ac->ac_g_ex.fe_len)
+			atomic_inc(&sbi->s_bal_len_goals);
 		if (ac->ac_found > sbi->s_mb_max_to_scan)
 			atomic_inc(&sbi->s_bal_breaks);
 }
-- 
2.31.1

From nobody Sun Sep 14 01:57:00 2025
From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi, Ritesh Harjani
Subject: [PATCH v2 07/12] ext4: Avoid scanning smaller extents in BG during CR1
Date: Tue, 30 May 2023 18:03:45 +0530

When we are inside ext4_mb_complex_scan_group() in CR1, we can be sure
that this group has at least one big enough contiguous free extent to
satisfy our request because (free / fragments) >= goal length. Hence,
instead of wasting time looping over smaller free extents, only consider
a free extent if we are sure that it has enough contiguous free space to
satisfy the goal length.

This is particularly useful when scanning highly fragmented BGs in CR1:
without this patch, the allocator might stop scanning early (once
ac_found > mb_max_to_scan) before reaching the big enough free extent,
which causes us to unnecessarily trim the request.
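The skip-the-short-runs idea above can be sketched in plain user-space C. This is a toy model, not the kernel code: the byte-array bitmap and the inner scan loops stand in for mb_find_next_bit() and ext4's on-disk block bitmaps, and goal_len plays the role of ac->ac_g_ex.fe_len.

```c
#include <assert.h>

/*
 * Toy model of the CR1 scan: walk a free-space bitmap (1 = in use,
 * 0 = free) and return the start of the first free extent of at
 * least goal_len clusters, skipping shorter runs wholesale the way
 * the patch skips them via mb_find_next_bit(). Returns -1 if no
 * such extent exists.
 */
static int find_goal_extent(const unsigned char *bitmap, int nbits,
			    int goal_len)
{
	int i = 0;

	while (i < nbits) {
		/* advance to the next free cluster (next zero bit) */
		while (i < nbits && bitmap[i])
			i++;
		if (i >= nbits)
			break;

		/* measure the free run: j = first used bit after i */
		int j = i;
		while (j < nbits && !bitmap[j])
			j++;

		if (j - i >= goal_len)
			return i;	/* big enough: take it */
		i = j;			/* too small: skip the whole run */
	}
	return -1;
}
```

With bitmap {1,0,0,1,0,0,0,0,1,1} and goal_len 4, the two-cluster run at offset 1 is skipped in one step and the four-cluster run at offset 4 is returned, which is exactly the work the patch saves on fragmented groups.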
Signed-off-by: Ojaswin Mujoo Reviewed-by: Ritesh Harjani (IBM) Reviewed-by: Jan Kara --- fs/ext4/mballoc.c | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 73e98a4d01f5..c86565606359 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -2308,7 +2308,7 @@ void ext4_mb_complex_scan_group(struct ext4_allocatio= n_context *ac, struct super_block *sb =3D ac->ac_sb; void *bitmap =3D e4b->bd_bitmap; struct ext4_free_extent ex; - int i; + int i, j, freelen; int free; =20 free =3D e4b->bd_info->bb_free; @@ -2335,6 +2335,23 @@ void ext4_mb_complex_scan_group(struct ext4_allocati= on_context *ac, break; } =20 + if (ac->ac_criteria < CR2) { + /* + * In CR1, we are sure that this group will + * have a large enough continuous free extent, so skip + * over the smaller free extents + */ + j =3D mb_find_next_bit(bitmap, + EXT4_CLUSTERS_PER_GROUP(sb), i); + freelen =3D j - i; + + if (freelen < ac->ac_g_ex.fe_len) { + i =3D j; + free -=3D freelen; + continue; + } + } + mb_find_extent(e4b, i, ac->ac_g_ex.fe_len, &ex); if (WARN_ON(ex.fe_len <=3D 0)) break; --=20 2.31.1 From nobody Sun Sep 14 01:57:00 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2A5FC7EE2C for ; Tue, 30 May 2023 12:37:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231858AbjE3MhQ (ORCPT ); Tue, 30 May 2023 08:37:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35206 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232170AbjE3Mgw (ORCPT ); Tue, 30 May 2023 08:36:52 -0400 Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C33F811C; Tue, 30 May 2023 05:36:06 
-0700 (PDT) Received: from pps.filterd (m0356517.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 34UCOl2u003371; Tue, 30 May 2023 12:34:18 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pp1; bh=1H7yBmKwWS6qLPxfXmhxuHALtpCrbdFBBi5pueIf3Tg=; b=OnQQRUnko7IbhupkBuSuquCGB4Po1talukgUbWNFC82+GscTtiZf69fVPBSaVWxIAvAR crYtpbUQsL8a5amGipEpRgt3QeMAC9imuOrlvAJStO3Nc/A5b/zBX1kyAq1aTiYxlpU/ 3xz/L81EJN0p5rAUyi8AJDTphMEzS5/v6/Yte5S/7BeXGTEE/P2IMlMvacamWqQ+jpyY 157K7hf2wP3uo4vMprIfqnagGPJGdE4WSXH1OlMcx7XENvCvvBGeMbJZEsmHtdXVggMn +GhMq1LdayA9FqGTSTsPPy573WLBYists2VjO0bTv6SwaqWe677BukU8yFqi7ix/+Fsk yg== Received: from pps.reinject (localhost [127.0.0.1]) by mx0a-001b2d01.pphosted.com (PPS) with ESMTPS id 3qwh46r8xs-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 30 May 2023 12:34:17 +0000 Received: from m0356517.ppops.net (m0356517.ppops.net [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 34UCRhtm010960; Tue, 30 May 2023 12:34:17 GMT Received: from ppma05fra.de.ibm.com (6c.4a.5195.ip4.static.sl-reverse.com [149.81.74.108]) by mx0a-001b2d01.pphosted.com (PPS) with ESMTPS id 3qwh46r8wc-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 30 May 2023 12:34:17 +0000 Received: from pps.filterd (ppma05fra.de.ibm.com [127.0.0.1]) by ppma05fra.de.ibm.com (8.17.1.19/8.17.1.19) with ESMTP id 34U5fcYK001036; Tue, 30 May 2023 12:34:14 GMT Received: from smtprelay03.fra02v.mail.ibm.com ([9.218.2.224]) by ppma05fra.de.ibm.com (PPS) with ESMTPS id 3qu9g5989e-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 30 May 2023 12:34:14 +0000 Received: from smtpav02.fra02v.mail.ibm.com (smtpav02.fra02v.mail.ibm.com [10.20.54.101]) by smtprelay03.fra02v.mail.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 34UCYC7a13173334 
From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi
Subject: [PATCH v2 08/12] ext4: Don't skip prefetching BLOCK_UNINIT groups
Date: Tue, 30 May 2023 18:03:46 +0530

Currently, ext4_mb_prefetch() and ext4_mb_prefetch_fini() skip
BLOCK_UNINIT groups since fetching their bitmaps doesn't need disk IO.
As a consequence, we end up not initializing the buddy structures and
CR0/1 lists for these BGs, even though it can be done without any disk
IO overhead. Hence, don't skip such BGs during prefetch and
prefetch_fini.

This improves the accuracy of CR0/1 allocation as earlier, we could have
essentially empty BLOCK_UNINIT groups being ignored by CR0/1 due to
their buddy not being initialized, leading to slower CR2 allocations.
With this patch CR0/1 will be able to discover these groups as well,
thus improving performance.

Signed-off-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Jan Kara
---
 fs/ext4/mballoc.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index c86565606359..79455c7e645b 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2590,9 +2590,7 @@ ext4_group_t ext4_mb_prefetch(struct super_block *sb, ext4_group_t group,
 	 */
 	if (gdp && grp && !EXT4_MB_GRP_TEST_AND_SET_READ(grp) &&
 	    EXT4_MB_GRP_NEED_INIT(grp) &&
-	    ext4_free_group_clusters(sb, gdp) > 0 &&
-	    !(ext4_has_group_desc_csum(sb) &&
-	      (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)))) {
+	    ext4_free_group_clusters(sb, gdp) > 0) {
 		bh = ext4_read_block_bitmap_nowait(sb, group, true);
 		if (bh && !IS_ERR(bh)) {
 			if (!buffer_uptodate(bh) && cnt)
@@ -2633,9 +2631,7 @@ void ext4_mb_prefetch_fini(struct super_block *sb, ext4_group_t group,
 	grp = ext4_get_group_info(sb, group);
 
 	if (grp && gdp && EXT4_MB_GRP_NEED_INIT(grp) &&
-	    ext4_free_group_clusters(sb, gdp) > 0 &&
-	    !(ext4_has_group_desc_csum(sb) &&
-	      (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)))) {
+	    ext4_free_group_clusters(sb, gdp) > 0) {
 		if (ext4_mb_init_group(sb, group, GFP_NOFS))
 			break;
 	}
-- 
2.31.1

From nobody Sun Sep 14 01:57:00 2025
From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi
Subject: [PATCH v2 09/12] ext4: Ensure ext4_mb_prefetch_fini() is called for all prefetched BGs
Date: Tue, 30 May 2023 18:03:47 +0530
Message-Id: <05e648ae04ec5b754207032823e9c1de9a54f87a.1685449706.git.ojaswin@linux.ibm.com>
Before this patch, the call stack in ext4_run_li_request is as follows:

	/*
	 * nr = no. of BGs we want to fetch (= s_mb_prefetch)
	 * prefetch_ios = no. of BGs not uptodate after
	 * ext4_read_block_bitmap_nowait()
	 */
	next_group = ext4_mb_prefetch(sb, group, nr, prefetch_ios);
	ext4_mb_prefetch_fini(sb, next_group, prefetch_ios);

ext4_mb_prefetch_fini() will only try to initialize buddies for BGs in
range [next_group - prefetch_ios, next_group). This is incorrect since
sometimes prefetch_ios < nr, which causes ext4_mb_prefetch_fini() to
incorrectly ignore some of the BGs that might need initialization. This
issue is more notable now with the previous patch enabling "fetching" of
BLOCK_UNINIT BGs which are marked buffer_uptodate by default.

Fix this by passing nr to ext4_mb_prefetch_fini() instead of
prefetch_ios so that it considers the right range of groups.

Similarly, make sure we don't pass nr=0 to ext4_mb_prefetch_fini() in
ext4_mb_regular_allocator() since we might have prefetched BLOCK_UNINIT
groups that would need buddy initialization.
Signed-off-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Jan Kara
---
 fs/ext4/mballoc.c |  4 ----
 fs/ext4/super.c   | 11 ++++-------
 2 files changed, 4 insertions(+), 11 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 79455c7e645b..6775d73dfc68 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2735,8 +2735,6 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 		if ((prefetch_grp == group) &&
 		    (cr > CR1 ||
 		     prefetch_ios < sbi->s_mb_prefetch_limit)) {
-			unsigned int curr_ios = prefetch_ios;
-
 			nr = sbi->s_mb_prefetch;
 			if (ext4_has_feature_flex_bg(sb)) {
 				nr = 1 << sbi->s_log_groups_per_flex;
@@ -2745,8 +2743,6 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 			}
 			prefetch_grp = ext4_mb_prefetch(sb, group, nr,
 					&prefetch_ios);
-			if (prefetch_ios == curr_ios)
-				nr = 0;
 		}
 
 		/* This now checks without needing the buddy page */
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 2da5476fa48b..27c1dabacd43 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -3692,16 +3692,13 @@ static int ext4_run_li_request(struct ext4_li_request *elr)
 	ext4_group_t group = elr->lr_next_group;
 	unsigned int prefetch_ios = 0;
 	int ret = 0;
+	int nr = EXT4_SB(sb)->s_mb_prefetch;
 	u64 start_time;
 
 	if (elr->lr_mode == EXT4_LI_MODE_PREFETCH_BBITMAP) {
-		elr->lr_next_group = ext4_mb_prefetch(sb, group,
-				EXT4_SB(sb)->s_mb_prefetch, &prefetch_ios);
-		if (prefetch_ios)
-			ext4_mb_prefetch_fini(sb, elr->lr_next_group,
-					      prefetch_ios);
-		trace_ext4_prefetch_bitmaps(sb, group, elr->lr_next_group,
-					    prefetch_ios);
+		elr->lr_next_group = ext4_mb_prefetch(sb, group, nr, &prefetch_ios);
+		ext4_mb_prefetch_fini(sb, elr->lr_next_group, nr);
+		trace_ext4_prefetch_bitmaps(sb, group, elr->lr_next_group, nr);
 		if (group >= elr->lr_next_group) {
 			ret = 1;
 			if (elr->lr_first_not_zeroed != ngroups &&
-- 
2.31.1

From nobody Sun Sep 14 01:57:00 2025
From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi
Subject: [PATCH v2 10/12] ext4: Abstract out logic to search average fragment list
Date: Tue, 30 May 2023 18:03:48 +0530
Message-Id: <028c11d95b17ce0285f45456709a0ca922df1b83.1685449706.git.ojaswin@linux.ibm.com>
Make the logic of searching average fragment list of a given order
reusable by abstracting it out to a different function. This will also
avoid code duplication in upcoming patches.

No functional changes.

Signed-off-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Jan Kara
---
 fs/ext4/mballoc.c | 51 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 18 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 6775d73dfc68..f59e1e0e01b1 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -905,6 +905,37 @@ static void ext4_mb_choose_next_group_cr0(struct ext4_allocation_context *ac,
 	}
 }
 
+/*
+ * Find a suitable group of given order from the average fragments list.
+ */
+static struct ext4_group_info *
+ext4_mb_find_good_group_avg_frag_lists(struct ext4_allocation_context *ac, int order)
+{
+	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+	struct list_head *frag_list = &sbi->s_mb_avg_fragment_size[order];
+	rwlock_t *frag_list_lock = &sbi->s_mb_avg_fragment_size_locks[order];
+	struct ext4_group_info *grp = NULL, *iter;
+	enum criteria cr = ac->ac_criteria;
+
+	if (list_empty(frag_list))
+		return NULL;
+	read_lock(frag_list_lock);
+	if (list_empty(frag_list)) {
+		read_unlock(frag_list_lock);
+		return NULL;
+	}
+	list_for_each_entry(iter, frag_list, bb_avg_fragment_size_node) {
+		if (sbi->s_mb_stats)
+			atomic64_inc(&sbi->s_bal_cX_groups_considered[cr]);
+		if (likely(ext4_mb_good_group(ac, iter->bb_group, cr))) {
+			grp = iter;
+			break;
+		}
+	}
+	read_unlock(frag_list_lock);
+	return grp;
+}
+
 /*
  * Choose next group by traversing average fragment size list of suitable
  * order. Updates *new_cr if cr level needs an update.
@@ -913,7 +944,7 @@ static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac,
 		enum criteria *new_cr, ext4_group_t *group, ext4_group_t ngroups)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
-	struct ext4_group_info *grp = NULL, *iter;
+	struct ext4_group_info *grp = NULL;
 	int i;
 
 	if (unlikely(ac->ac_flags & EXT4_MB_CR1_OPTIMIZED)) {
@@ -923,23 +954,7 @@ static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac,
 
 	for (i = mb_avg_fragment_size_order(ac->ac_sb, ac->ac_g_ex.fe_len);
	     i < MB_NUM_ORDERS(ac->ac_sb); i++) {
-		if (list_empty(&sbi->s_mb_avg_fragment_size[i]))
-			continue;
-		read_lock(&sbi->s_mb_avg_fragment_size_locks[i]);
-		if (list_empty(&sbi->s_mb_avg_fragment_size[i])) {
-			read_unlock(&sbi->s_mb_avg_fragment_size_locks[i]);
-			continue;
-		}
-		list_for_each_entry(iter, &sbi->s_mb_avg_fragment_size[i],
-				    bb_avg_fragment_size_node) {
-			if (sbi->s_mb_stats)
-				atomic64_inc(&sbi->s_bal_cX_groups_considered[CR1]);
-			if (likely(ext4_mb_good_group(ac, iter->bb_group, CR1))) {
-				grp = iter;
-				break;
-			}
-		}
-		read_unlock(&sbi->s_mb_avg_fragment_size_locks[i]);
+		grp = ext4_mb_find_good_group_avg_frag_lists(ac, i);
 		if (grp)
 			break;
 	}
-- 
2.31.1

From nobody Sun Sep 14 01:57:00 2025
From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi
Subject: [PATCH v2 11/12] ext4: Add allocation criteria 1.5 (CR1_5)
Date: Tue, 30 May 2023 18:03:49 +0530
Message-Id: <150fdf65c8e4cc4dba71e020ce0859bcf636a5ff.1685449706.git.ojaswin@linux.ibm.com>

CR1_5 aims to optimize allocations which can't be satisfied in CR1. The
fact that we couldn't find a group in CR1 suggests that it would be
difficult to find a continuous extent to completely satisfy our
allocations. So before falling to the slower CR2, in CR1.5 we
proactively trim the preallocations so we can find a group with
(free / fragments) big enough. This speeds up our allocation at the
cost of slightly reduced preallocation.

The patch also adds a new sysfs tunable:

* /sys/fs/ext4/<dev>/mb_cr1_5_max_trim_order

This controls how much CR1.5 can trim a request before falling to CR2.
For example, for a request of order 7 and max trim order 2, CR1.5 can
trim this up to order 5.
Suggested-by: Ritesh Harjani (IBM)
Signed-off-by: Ojaswin Mujoo
Reviewed-by: Ritesh Harjani (IBM)
---
 fs/ext4/ext4.h              |   8 ++-
 fs/ext4/mballoc.c           | 135 +++++++++++++++++++++++++++++++++---
 fs/ext4/mballoc.h           |  13 ++++
 fs/ext4/sysfs.c             |   2 +
 include/trace/events/ext4.h |   2 +
 5 files changed, 150 insertions(+), 10 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index eae981ab2539..942e97026a60 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -133,13 +133,14 @@ enum SHIFT_DIRECTION {
  * criteria the slower the allocation. We start at lower criterias and keep
  * falling back to higher ones if we are not able to find any blocks.
  */
-#define EXT4_MB_NUM_CRS 4
+#define EXT4_MB_NUM_CRS 5
 /*
  * All possible allocation criterias for mballoc
  */
 enum criteria {
 	CR0,
 	CR1,
+	CR1_5,
 	CR2,
 	CR3,
 };
@@ -185,6 +186,9 @@ enum criteria {
 #define EXT4_MB_CR0_OPTIMIZED		0x8000
 /* Avg fragment size rb tree lookup succeeded at least once for cr = 1 */
 #define EXT4_MB_CR1_OPTIMIZED		0x00010000
+/* Avg fragment size rb tree lookup succeeded at least once for cr = 1.5 */
+#define EXT4_MB_CR1_5_OPTIMIZED		0x00020000
+
 struct ext4_allocation_request {
 	/* target inode for block we're allocating */
 	struct inode *inode;
@@ -1547,6 +1551,7 @@ struct ext4_sb_info {
 	unsigned long s_mb_last_start;
 	unsigned int s_mb_prefetch;
 	unsigned int s_mb_prefetch_limit;
+	unsigned int s_mb_cr1_5_max_trim_order;
 
 	/* stats for buddy allocator */
 	atomic_t s_bal_reqs;	/* number of reqs with len > 1 */
@@ -1561,6 +1566,7 @@ struct ext4_sb_info {
 	atomic_t s_bal_2orders;	/* 2^order hits */
 	atomic_t s_bal_cr0_bad_suggestions;
 	atomic_t s_bal_cr1_bad_suggestions;
+	atomic_t s_bal_cr1_5_bad_suggestions;
 	atomic64_t s_bal_cX_groups_considered[EXT4_MB_NUM_CRS];
 	atomic64_t s_bal_cX_hits[EXT4_MB_NUM_CRS];
 	atomic64_t s_bal_cX_failed[EXT4_MB_NUM_CRS];	/* cX loop didn't find blocks */
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index f59e1e0e01b1..0cf037489e97 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -166,6 +166,14 @@
  * equal to request size using our average fragment size group lists (data
  * structure 2) in O(1) time.
  *
+ * At CR1.5 (aka CR1_5), we aim to optimize allocations which can't be satisfied
+ * in CR1. The fact that we couldn't find a group in CR1 suggests that there is
+ * no BG that has average fragment size > goal length. So before falling to the
+ * slower CR2, in CR1.5 we proactively trim goal length and then use the same
+ * fragment lists as CR1 to find a BG with a big enough average fragment size.
+ * This increases the chances of finding a suitable block group in O(1) time and
+ * results in faster allocation at the cost of reduced size of allocation.
+ *
  * If "mb_optimize_scan" mount option is not set, mballoc traverses groups in
  * linear order which requires O(N) search time for each CR0 and CR1 phase.
  *
@@ -963,6 +971,91 @@ static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac,
 		*group = grp->bb_group;
 		ac->ac_flags |= EXT4_MB_CR1_OPTIMIZED;
 	} else {
+		*new_cr = CR1_5;
+	}
+}
+
+/*
+ * We couldn't find a group in CR1 so try to find the highest free fragment
+ * order we have and proactively trim the goal request length to that order to
+ * find a suitable group faster.
+ *
+ * This optimizes allocation speed at the cost of slightly reduced
+ * preallocations. However, we make sure that we don't trim the request too
+ * much and fall to CR2 in that case.
+ */
+static void ext4_mb_choose_next_group_cr1_5(struct ext4_allocation_context *ac,
+		enum criteria *new_cr, ext4_group_t *group, ext4_group_t ngroups)
+{
+	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+	struct ext4_group_info *grp = NULL;
+	int i, order, min_order;
+	unsigned long num_stripe_clusters = 0;
+
+	if (unlikely(ac->ac_flags & EXT4_MB_CR1_5_OPTIMIZED)) {
+		if (sbi->s_mb_stats)
+			atomic_inc(&sbi->s_bal_cr1_5_bad_suggestions);
+	}
+
+	/*
+	 * mb_avg_fragment_size_order() returns order in a way that makes
+	 * retrieving back the length using (1 << order) inaccurate. Hence, use
+	 * fls() instead since we need to know the actual length while modifying
+	 * goal length.
+	 */
+	order = fls(ac->ac_g_ex.fe_len);
+	min_order = order - sbi->s_mb_cr1_5_max_trim_order;
+	if (min_order < 0)
+		min_order = 0;
+
+	if (1 << min_order < ac->ac_o_ex.fe_len)
+		min_order = fls(ac->ac_o_ex.fe_len) + 1;
+
+	if (sbi->s_stripe > 0) {
+		/*
+		 * We are assuming that stripe size is always a multiple of
+		 * cluster ratio otherwise __ext4_fill_super exits early.
+		 */
+		num_stripe_clusters = EXT4_NUM_B2C(sbi, sbi->s_stripe);
+		if (1 << min_order < num_stripe_clusters)
+			min_order = fls(num_stripe_clusters);
+	}
+
+	for (i = order; i >= min_order; i--) {
+		int frag_order;
+		/*
+		 * Scale down goal len to make sure we find something
+		 * in the free fragments list. Basically, reduce
+		 * preallocations.
+		 */
+		ac->ac_g_ex.fe_len = 1 << i;
+
+		if (num_stripe_clusters > 0) {
+			/*
+			 * Try to round up the adjusted goal to stripe size
+			 * (in cluster units) multiple for efficiency.
+			 *
+			 * XXX: Is s->stripe always a power of 2? In that case
+			 * we can use the faster round_up() variant.
+			 */
+			ac->ac_g_ex.fe_len = roundup(ac->ac_g_ex.fe_len,
+						     num_stripe_clusters);
+		}
+
+		frag_order = mb_avg_fragment_size_order(ac->ac_sb,
+							ac->ac_g_ex.fe_len);
+
+		grp = ext4_mb_find_good_group_avg_frag_lists(ac, frag_order);
+		if (grp)
+			break;
+	}
+
+	if (grp) {
+		*group = grp->bb_group;
+		ac->ac_flags |= EXT4_MB_CR1_5_OPTIMIZED;
+	} else {
+		/* Reset goal length to original goal length before falling into CR2 */
+		ac->ac_g_ex.fe_len = ac->ac_orig_goal_len;
 		*new_cr = CR2;
 	}
 }
@@ -1029,6 +1122,8 @@ static void ext4_mb_choose_next_group(struct ext4_allocation_context *ac,
 		ext4_mb_choose_next_group_cr0(ac, new_cr, group, ngroups);
 	} else if (*new_cr == CR1) {
 		ext4_mb_choose_next_group_cr1(ac, new_cr, group, ngroups);
+	} else if (*new_cr == CR1_5) {
+		ext4_mb_choose_next_group_cr1_5(ac, new_cr, group, ngroups);
 	} else {
 		/*
 		 * TODO: For CR=2, we can arrange groups in an rb tree sorted by
@@ -2352,7 +2447,7 @@ void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
 
 		if (ac->ac_criteria < CR2) {
 			/*
-			 * In CR1, we are sure that this group will
+			 * In CR1 and CR1_5, we are sure that this group will
 			 * have a large enough continuous free extent, so skip
 			 * over the smaller free extents
 			 */
@@ -2484,6 +2579,7 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
 
 		return true;
 	case CR1:
+	case CR1_5:
 		if ((free / fragments) >= ac->ac_g_ex.fe_len)
 			return true;
 		break;
@@ -2748,7 +2844,7 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 		 * spend a lot of time loading imperfect groups
 		 */
 		if ((prefetch_grp == group) &&
-		    (cr > CR1 ||
+		    (cr > CR1_5 ||
 		     prefetch_ios < sbi->s_mb_prefetch_limit)) {
 			nr = sbi->s_mb_prefetch;
 			if (ext4_has_feature_flex_bg(sb)) {
@@ -2788,7 +2884,7 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 		ac->ac_groups_scanned++;
 		if (cr == CR0)
 			ext4_mb_simple_scan_group(ac, &e4b);
-		else if (cr == CR1 && sbi->s_stripe &&
+		else if ((cr == CR1 || cr == CR1_5) && sbi->s_stripe &&
			 !(ac->ac_g_ex.fe_len % EXT4_B2C(sbi, sbi->s_stripe)))
 			ext4_mb_scan_aligned(ac, &e4b);
@@ -2804,6 +2900,11 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 		/* Processed all groups and haven't found blocks */
 		if (sbi->s_mb_stats && i == ngroups)
 			atomic64_inc(&sbi->s_bal_cX_failed[cr]);
+
+		if (i == ngroups && ac->ac_criteria == CR1_5)
+			/* Reset goal length to original goal length before
+			 * falling into CR2 */
+			ac->ac_g_ex.fe_len = ac->ac_orig_goal_len;
 	}
 
 	if (ac->ac_b_ex.fe_len > 0 && ac->ac_status != AC_STATUS_FOUND &&
@@ -2973,6 +3074,16 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
 	seq_printf(seq, "\t\tbad_suggestions: %u\n",
 		   atomic_read(&sbi->s_bal_cr1_bad_suggestions));
 
+	seq_puts(seq, "\tcr1.5_stats:\n");
+	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR1_5]));
+	seq_printf(seq, "\t\tgroups_considered: %llu\n",
+		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR1_5]));
+	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR1_5]));
+	seq_printf(seq, "\t\tuseless_loops: %llu\n",
+		   atomic64_read(&sbi->s_bal_cX_failed[CR1_5]));
+	seq_printf(seq, "\t\tbad_suggestions: %u\n",
+		   atomic_read(&sbi->s_bal_cr1_5_bad_suggestions));
+
 	seq_puts(seq, "\tcr2_stats:\n");
 	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR2]));
 	seq_printf(seq, "\t\tgroups_considered: %llu\n",
@@ -3490,6 +3601,8 @@ int ext4_mb_init(struct super_block *sb)
 	sbi->s_mb_stats = MB_DEFAULT_STATS;
 	sbi->s_mb_stream_request = MB_DEFAULT_STREAM_THRESHOLD;
 	sbi->s_mb_order2_reqs = MB_DEFAULT_ORDER2_REQS;
+	sbi->s_mb_cr1_5_max_trim_order = MB_DEFAULT_CR1_5_TRIM_ORDER;
+
 	/*
 	 * The default group preallocation is 512, which for 4k block
 	 * sizes translates to 2 megabytes. However for bigalloc file
@@ -4402,6 +4515,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	 * placement or satisfy big request as is */
 	ac->ac_g_ex.fe_logical = start;
 	ac->ac_g_ex.fe_len = EXT4_NUM_B2C(sbi, size);
+	ac->ac_orig_goal_len = ac->ac_g_ex.fe_len;
 
 	/* define goal start in order to merge */
 	if (ar->pright && (ar->lright == (start + size)) &&
@@ -4445,8 +4559,10 @@ static void ext4_mb_collect_stats(struct ext4_allocation_context *ac)
 		if (ac->ac_g_ex.fe_start == ac->ac_b_ex.fe_start &&
 		    ac->ac_g_ex.fe_group == ac->ac_b_ex.fe_group)
 			atomic_inc(&sbi->s_bal_goals);
-		if (ac->ac_f_ex.fe_len == ac->ac_g_ex.fe_len)
+		/* did we allocate as much as normalizer originally wanted? */
+		if (ac->ac_f_ex.fe_len == ac->ac_orig_goal_len)
 			atomic_inc(&sbi->s_bal_len_goals);
+
 		if (ac->ac_found > sbi->s_mb_max_to_scan)
 			atomic_inc(&sbi->s_bal_breaks);
 	}
@@ -4931,7 +5047,7 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
 
 	pa = ac->ac_pa;
 
-	if (ac->ac_b_ex.fe_len < ac->ac_g_ex.fe_len) {
+	if (ac->ac_b_ex.fe_len < ac->ac_orig_goal_len) {
 		int new_bex_start;
 		int new_bex_end;
 
@@ -4946,14 +5062,14 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
 		 * fragmentation in check while ensuring logical range of best
 		 * extent doesn't overflow out of goal extent:
 		 *
-		 * 1. Check if best ex can be kept at end of goal and still
-		 *    cover original start
+		 * 1. Check if best ex can be kept at end of goal (before
+		 *    cr_best_avail trimmed it) and still cover original start
 		 * 2. Else, check if best ex can be kept at start of goal and
 		 *    still cover original start
 		 * 3. Else, keep the best ex at start of original request.
 	 */
 	new_bex_end = ac->ac_g_ex.fe_logical +
-		EXT4_C2B(sbi, ac->ac_g_ex.fe_len);
+		EXT4_C2B(sbi, ac->ac_orig_goal_len);
 	new_bex_start = new_bex_end - EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
 	if (ac->ac_o_ex.fe_logical >= new_bex_start)
 		goto adjust_bex;
@@ -4974,7 +5090,7 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
 	BUG_ON(ac->ac_o_ex.fe_logical < ac->ac_b_ex.fe_logical);
 	BUG_ON(ac->ac_o_ex.fe_len > ac->ac_b_ex.fe_len);
 	BUG_ON(new_bex_end > (ac->ac_g_ex.fe_logical +
-			      EXT4_C2B(sbi, ac->ac_g_ex.fe_len)));
+			      EXT4_C2B(sbi, ac->ac_orig_goal_len)));
 	}
 
 	pa->pa_lstart = ac->ac_b_ex.fe_logical;
@@ -5594,6 +5710,7 @@ ext4_mb_initialize_context(struct ext4_allocation_context *ac,
 	ac->ac_o_ex.fe_start = block;
 	ac->ac_o_ex.fe_len = len;
 	ac->ac_g_ex = ac->ac_o_ex;
+	ac->ac_orig_goal_len = ac->ac_g_ex.fe_len;
 	ac->ac_flags = ar->flags;
 
 	/* we have to define context: we'll work with a file or
diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index acfdc204e15d..bddc0335c261 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -85,6 +85,13 @@
  */
 #define MB_DEFAULT_LINEAR_SCAN_THRESHOLD	16
 
+/*
+ * The maximum order up to which CR1.5 can trim a particular allocation
+ * request. For example, if we have an order 7 request and a max trim
+ * order of 3, CR1.5 can trim this down to order 4.
+ */
+#define MB_DEFAULT_CR1_5_TRIM_ORDER	3
+
 /*
  * Number of valid buddy orders
  */
@@ -179,6 +186,12 @@ struct ext4_allocation_context {
 	/* copy of the best found extent taken before preallocation efforts */
 	struct ext4_free_extent ac_f_ex;
 
+	/*
+	 * goal len can change in CR1.5, so save the original len. This is
+	 * used while adjusting the PA window and for accounting.
+	 */
+	ext4_grpblk_t ac_orig_goal_len;
+
 	__u32 ac_groups_considered;
 	__u32 ac_flags;		/* allocation hints */
 	__u16 ac_groups_scanned;
diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
index 3042bc605bbf..4a5c08c8dddb 100644
--- a/fs/ext4/sysfs.c
+++ b/fs/ext4/sysfs.c
@@ -223,6 +223,7 @@ EXT4_RW_ATTR_SBI_UI(warning_ratelimit_interval_ms, s_warning_ratelimit_state.interval);
 EXT4_RW_ATTR_SBI_UI(warning_ratelimit_burst, s_warning_ratelimit_state.burst);
 EXT4_RW_ATTR_SBI_UI(msg_ratelimit_interval_ms, s_msg_ratelimit_state.interval);
 EXT4_RW_ATTR_SBI_UI(msg_ratelimit_burst, s_msg_ratelimit_state.burst);
+EXT4_RW_ATTR_SBI_UI(mb_cr1_5_max_trim_order, s_mb_cr1_5_max_trim_order);
 #ifdef CONFIG_EXT4_DEBUG
 EXT4_RW_ATTR_SBI_UL(simulate_fail, s_simulate_fail);
 #endif
@@ -273,6 +274,7 @@ static struct attribute *ext4_attrs[] = {
 	ATTR_LIST(warning_ratelimit_burst),
 	ATTR_LIST(msg_ratelimit_interval_ms),
 	ATTR_LIST(msg_ratelimit_burst),
+	ATTR_LIST(mb_cr1_5_max_trim_order),
 	ATTR_LIST(errors_count),
 	ATTR_LIST(warning_count),
 	ATTR_LIST(msg_count),
diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
index f062147ca32b..7ea9b4fcb21f 100644
--- a/include/trace/events/ext4.h
+++ b/include/trace/events/ext4.h
@@ -122,6 +122,7 @@ TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX);
 
 TRACE_DEFINE_ENUM(CR0);
 TRACE_DEFINE_ENUM(CR1);
+TRACE_DEFINE_ENUM(CR1_5);
 TRACE_DEFINE_ENUM(CR2);
 TRACE_DEFINE_ENUM(CR3);
 
@@ -129,6 +130,7 @@ TRACE_DEFINE_ENUM(CR3);
 	__print_symbolic(cr,	\
 		{ CR0,		"CR0" },	\
 		{ CR1,		"CR1" },	\
+		{ CR1_5,	"CR1.5" },	\
 		{ CR2,		"CR2" },	\
 		{ CR3,		"CR3" })

-- 
2.31.1

From: Ojaswin Mujoo
To: linux-ext4@vger.kernel.org, "Theodore Ts'o"
Cc: Ritesh Harjani, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Kemeng Shi
Subject: [PATCH v2 12/12] ext4: Give symbolic names to mballoc criterias
Date: Tue, 30 May 2023 18:03:50 +0530

mballoc criteria have historically been referred to by numbers like CR0,
CR1..., which makes it confusing to understand what each criterion is
about.
Change these criteria from numbers to symbolic names and add relevant
comments. While we are at it, also reformat and add some comments to
ext4_seq_mb_stats_show() for better readability.

Additionally, define CR_FAST which signifies the criteria below which we
can make quicker decisions like:
* quitting early if (free blocks < requested len)
* avoiding scanning free extents smaller than the required len
* avoiding initializing the buddy cache and working with the existing cache
* limiting prefetches

Suggested-by: Jan Kara
Signed-off-by: Ojaswin Mujoo
---
 fs/ext4/ext4.h              |  55 ++++++--
 fs/ext4/mballoc.c           | 271 ++++++++++++++++++++----------
 fs/ext4/mballoc.h           |   8 +-
 fs/ext4/sysfs.c             |   4 +-
 include/trace/events/ext4.h |  26 ++--
 5 files changed, 214 insertions(+), 150 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 942e97026a60..c29a4e1fcd5d 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -135,16 +135,45 @@ enum SHIFT_DIRECTION {
  */
 #define EXT4_MB_NUM_CRS 5
 /*
- * All possible allocation criterias for mballoc
+ * All possible allocation criteria for mballoc. Lower are faster.
  */
 enum criteria {
-	CR0,
-	CR1,
-	CR1_5,
-	CR2,
-	CR3,
+	/*
+	 * Used when number of blocks needed is a power of 2. This doesn't
+	 * trigger any disk IO except prefetch and is the fastest criterion.
+	 */
+	CR_POWER2_ALIGNED,
+
+	/*
+	 * Tries to lookup in-memory data structures to find the most suitable
+	 * group that satisfies goal request. No disk IO except block prefetch.
+	 */
+	CR_GOAL_LEN_FAST,
+
+	/*
+	 * Same as CR_GOAL_LEN_FAST but is allowed to reduce the goal length to
+	 * the best available length for faster allocation.
+	 */
+	CR_BEST_AVAIL_LEN,
+
+	/*
+	 * Reads each block group sequentially, performing disk IO if
+	 * necessary, to find a suitable block group. Tries to allocate goal
+	 * length but might trim the request if nothing is found after enough
+	 * tries.
+	 */
+	CR_GOAL_LEN_SLOW,
+
+	/*
+	 * Finds the first free set of blocks and allocates those.
This is only + * used in rare cases when CR_GOAL_LEN_SLOW also fails to allocate + * anything. + */ + CR_ANY_FREE, }; =20 +/* criteria below which we use fast block scanning and avoid unnecessary I= O */ +#define CR_FAST CR_GOAL_LEN_SLOW + /* * Flags used in mballoc's allocation_context flags field. * @@ -183,11 +212,11 @@ enum criteria { /* Do strict check for free blocks while retrying block allocation */ #define EXT4_MB_STRICT_CHECK 0x4000 /* Large fragment size list lookup succeeded at least once for cr =3D 0 */ -#define EXT4_MB_CR0_OPTIMIZED 0x8000 +#define EXT4_MB_CR_POWER2_ALIGNED_OPTIMIZED 0x8000 /* Avg fragment size rb tree lookup succeeded at least once for cr =3D 1 */ -#define EXT4_MB_CR1_OPTIMIZED 0x00010000 +#define EXT4_MB_CR_GOAL_LEN_FAST_OPTIMIZED 0x00010000 /* Avg fragment size rb tree lookup succeeded at least once for cr =3D 1.5= */ -#define EXT4_MB_CR1_5_OPTIMIZED 0x00020000 +#define EXT4_MB_CR_BEST_AVAIL_LEN_OPTIMIZED 0x00020000 =20 struct ext4_allocation_request { /* target inode for block we're allocating */ @@ -1551,7 +1580,7 @@ struct ext4_sb_info { unsigned long s_mb_last_start; unsigned int s_mb_prefetch; unsigned int s_mb_prefetch_limit; - unsigned int s_mb_cr1_5_max_trim_order; + unsigned int s_mb_best_avail_max_trim_order; =20 /* stats for buddy allocator */ atomic_t s_bal_reqs; /* number of reqs with len > 1 */ @@ -1564,9 +1593,9 @@ struct ext4_sb_info { atomic_t s_bal_len_goals; /* len goal hits */ atomic_t s_bal_breaks; /* too long searches */ atomic_t s_bal_2orders; /* 2^order hits */ - atomic_t s_bal_cr0_bad_suggestions; - atomic_t s_bal_cr1_bad_suggestions; - atomic_t s_bal_cr1_5_bad_suggestions; + atomic_t s_bal_p2_aligned_bad_suggestions; + atomic_t s_bal_goal_fast_bad_suggestions; + atomic_t s_bal_best_avail_bad_suggestions; atomic64_t s_bal_cX_groups_considered[EXT4_MB_NUM_CRS]; atomic64_t s_bal_cX_hits[EXT4_MB_NUM_CRS]; atomic64_t s_bal_cX_failed[EXT4_MB_NUM_CRS]; /* cX loop didn't find bloc= ks */ diff --git 
a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 0cf037489e97..4f2a1df98141 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -155,27 +155,31 @@ * structures to decide the order in which groups are to be traversed for * fulfilling an allocation request. * - * At CR0 , we look for groups which have the largest_free_order >=3D the = order - * of the request. We directly look at the largest free order list in the = data - * structure (1) above where largest_free_order =3D order of the request. = If that - * list is empty, we look at remaining list in the increasing order of - * largest_free_order. This allows us to perform CR0 lookup in O(1) time. + * At CR_POWER2_ALIGNED , we look for groups which have the largest_free_o= rder + * >=3D the order of the request. We directly look at the largest free ord= er list + * in the data structure (1) above where largest_free_order =3D order of t= he + * request. If that list is empty, we look at remaining list in the increa= sing + * order of largest_free_order. This allows us to perform CR_POWER2_ALIGNED + * lookup in O(1) time. * - * At CR1, we only consider groups where average fragment size > request - * size. So, we lookup a group which has average fragment size just above = or - * equal to request size using our average fragment size group lists (data - * structure 2) in O(1) time. + * At CR_GOAL_LEN_FAST, we only consider groups where + * average fragment size > request size. So, we lookup a group which has a= verage + * fragment size just above or equal to request size using our average fra= gment + * size group lists (data structure 2) in O(1) time. * - * At CR1.5 (aka CR1_5), we aim to optimize allocations which can't be sat= isfied - * in CR1. The fact that we couldn't find a group in CR1 suggests that the= re is - * no BG that has average fragment size > goal length. 
So before falling t= o the - * slower CR2, in CR1.5 we proactively trim goal length and then use the s= ame - * fragment lists as CR1 to find a BG with a big enough average fragment s= ize. - * This increases the chances of finding a suitable block group in O(1) ti= me and - * results * in faster allocation at the cost of reduced size of allocatio= n. + * At CR_BEST_AVAIL_LEN, we aim to optimize allocations which can't be sat= isfied + * in CR_GOAL_LEN_FAST. The fact that we couldn't find a group in + * CR_GOAL_LEN_FAST suggests that there is no BG that has avg + * fragment size > goal length. So before falling to the slower + * CR_GOAL_LEN_SLOW, in CR_BEST_AVAIL_LEN we proactively trim goal length = and + * then use the same fragment lists as CR_GOAL_LEN_FAST to find a BG with = a big + * enough average fragment size. This increases the chances of finding a + * suitable block group in O(1) time and results in faster allocation at t= he + * cost of reduced size of allocation. * * If "mb_optimize_scan" mount option is not set, mballoc traverses groups= in - * linear order which requires O(N) search time for each CR0 and CR1 phase. + * linear order which requires O(N) search time for each CR_POWER2_ALIGNED= and + * CR_GOAL_LEN_FAST phase. * * The regular allocator (using the buddy cache) supports a few tunables. 
* @@ -360,8 +364,8 @@ * - bitlock on a group (group) * - object (inode/locality) (object) * - per-pa lock (pa) - * - cr0 lists lock (cr0) - * - cr1 tree lock (cr1) + * - cr_power2_aligned lists lock (cr_power2_aligned) + * - cr_goal_len_fast lists lock (cr_goal_len_fast) * * Paths: * - new pa @@ -393,7 +397,7 @@ * * - allocation path (ext4_mb_regular_allocator) * group - * cr0/cr1 + * cr_power2_aligned/cr_goal_len_fast */ static struct kmem_cache *ext4_pspace_cachep; static struct kmem_cache *ext4_ac_cachep; @@ -867,7 +871,7 @@ mb_update_avg_fragment_size(struct super_block *sb, str= uct ext4_group_info *grp) * Choose next group by traversing largest_free_order lists. Updates *new_= cr if * cr level needs an update. */ -static void ext4_mb_choose_next_group_cr0(struct ext4_allocation_context *= ac, +static void ext4_mb_choose_next_group_p2_aligned(struct ext4_allocation_co= ntext *ac, enum criteria *new_cr, ext4_group_t *group, ext4_group_t ngroups) { struct ext4_sb_info *sbi =3D EXT4_SB(ac->ac_sb); @@ -877,8 +881,8 @@ static void ext4_mb_choose_next_group_cr0(struct ext4_a= llocation_context *ac, if (ac->ac_status =3D=3D AC_STATUS_FOUND) return; =20 - if (unlikely(sbi->s_mb_stats && ac->ac_flags & EXT4_MB_CR0_OPTIMIZED)) - atomic_inc(&sbi->s_bal_cr0_bad_suggestions); + if (unlikely(sbi->s_mb_stats && ac->ac_flags & EXT4_MB_CR_POWER2_ALIGNED_= OPTIMIZED)) + atomic_inc(&sbi->s_bal_p2_aligned_bad_suggestions); =20 grp =3D NULL; for (i =3D ac->ac_2order; i < MB_NUM_ORDERS(ac->ac_sb); i++) { @@ -893,8 +897,8 @@ static void ext4_mb_choose_next_group_cr0(struct ext4_a= llocation_context *ac, list_for_each_entry(iter, &sbi->s_mb_largest_free_orders[i], bb_largest_free_order_node) { if (sbi->s_mb_stats) - atomic64_inc(&sbi->s_bal_cX_groups_considered[CR0]); - if (likely(ext4_mb_good_group(ac, iter->bb_group, CR0))) { + atomic64_inc(&sbi->s_bal_cX_groups_considered[CR_POWER2_ALIGNED]); + if (likely(ext4_mb_good_group(ac, iter->bb_group, CR_POWER2_ALIGNED))) { grp =3D iter; 
break; } @@ -906,10 +910,10 @@ static void ext4_mb_choose_next_group_cr0(struct ext4= _allocation_context *ac, =20 if (!grp) { /* Increment cr and search again */ - *new_cr =3D CR1; + *new_cr =3D CR_GOAL_LEN_FAST; } else { *group =3D grp->bb_group; - ac->ac_flags |=3D EXT4_MB_CR0_OPTIMIZED; + ac->ac_flags |=3D EXT4_MB_CR_POWER2_ALIGNED_OPTIMIZED; } } =20 @@ -948,16 +952,16 @@ ext4_mb_find_good_group_avg_frag_lists(struct ext4_al= location_context *ac, int o * Choose next group by traversing average fragment size list of suitable * order. Updates *new_cr if cr level needs an update. */ -static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *= ac, +static void ext4_mb_choose_next_group_goal_fast(struct ext4_allocation_con= text *ac, enum criteria *new_cr, ext4_group_t *group, ext4_group_t ngroups) { struct ext4_sb_info *sbi =3D EXT4_SB(ac->ac_sb); struct ext4_group_info *grp =3D NULL; int i; =20 - if (unlikely(ac->ac_flags & EXT4_MB_CR1_OPTIMIZED)) { + if (unlikely(ac->ac_flags & EXT4_MB_CR_GOAL_LEN_FAST_OPTIMIZED)) { if (sbi->s_mb_stats) - atomic_inc(&sbi->s_bal_cr1_bad_suggestions); + atomic_inc(&sbi->s_bal_goal_fast_bad_suggestions); } =20 for (i =3D mb_avg_fragment_size_order(ac->ac_sb, ac->ac_g_ex.fe_len); @@ -969,22 +973,22 @@ static void ext4_mb_choose_next_group_cr1(struct ext4= _allocation_context *ac, =20 if (grp) { *group =3D grp->bb_group; - ac->ac_flags |=3D EXT4_MB_CR1_OPTIMIZED; + ac->ac_flags |=3D EXT4_MB_CR_GOAL_LEN_FAST_OPTIMIZED; } else { - *new_cr =3D CR1_5; + *new_cr =3D CR_BEST_AVAIL_LEN; } } =20 /* - * We couldn't find a group in CR1 so try to find the highest free fragment + * We couldn't find a group in CR_GOAL_LEN_FAST so try to find the highest= free fragment * order we have and proactively trim the goal request length to that orde= r to * find a suitable group faster. * * This optimizes allocation speed at the cost of slightly reduced * preallocations. 
However, we make sure that we don't trim the request too - * much and fall to CR2 in that case. + * much and fall to CR_GOAL_LEN_SLOW in that case. */ -static void ext4_mb_choose_next_group_cr1_5(struct ext4_allocation_context= *ac, +static void ext4_mb_choose_next_group_best_avail(struct ext4_allocation_co= ntext *ac, enum criteria *new_cr, ext4_group_t *group, ext4_group_t ngroups) { struct ext4_sb_info *sbi =3D EXT4_SB(ac->ac_sb); @@ -992,9 +996,9 @@ static void ext4_mb_choose_next_group_cr1_5(struct ext4= _allocation_context *ac, int i, order, min_order; unsigned long num_stripe_clusters =3D 0; =20 - if (unlikely(ac->ac_flags & EXT4_MB_CR1_5_OPTIMIZED)) { + if (unlikely(ac->ac_flags & EXT4_MB_CR_BEST_AVAIL_LEN_OPTIMIZED)) { if (sbi->s_mb_stats) - atomic_inc(&sbi->s_bal_cr1_5_bad_suggestions); + atomic_inc(&sbi->s_bal_best_avail_bad_suggestions); } =20 /* @@ -1004,7 +1008,7 @@ static void ext4_mb_choose_next_group_cr1_5(struct ex= t4_allocation_context *ac, * goal length. */ order =3D fls(ac->ac_g_ex.fe_len); - min_order =3D order - sbi->s_mb_cr1_5_max_trim_order; + min_order =3D order - sbi->s_mb_best_avail_max_trim_order; if (min_order < 0) min_order =3D 0; =20 @@ -1052,11 +1056,11 @@ static void ext4_mb_choose_next_group_cr1_5(struct = ext4_allocation_context *ac, =20 if (grp) { *group =3D grp->bb_group; - ac->ac_flags |=3D EXT4_MB_CR1_5_OPTIMIZED; + ac->ac_flags |=3D EXT4_MB_CR_BEST_AVAIL_LEN_OPTIMIZED; } else { - /* Reset goal length to original goal length before falling into CR2 */ + /* Reset goal length to original goal length before falling into CR_GOAL= _LEN_SLOW */ ac->ac_g_ex.fe_len =3D ac->ac_orig_goal_len; - *new_cr =3D CR2; + *new_cr =3D CR_GOAL_LEN_SLOW; } } =20 @@ -1064,7 +1068,7 @@ static inline int should_optimize_scan(struct ext4_al= location_context *ac) { if (unlikely(!test_opt2(ac->ac_sb, MB_OPTIMIZE_SCAN))) return 0; - if (ac->ac_criteria >=3D CR2) + if (ac->ac_criteria >=3D CR_GOAL_LEN_SLOW) return 0; if 
(!ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)) return 0; @@ -1118,12 +1122,12 @@ static void ext4_mb_choose_next_group(struct ext4_a= llocation_context *ac, return; } =20 - if (*new_cr =3D=3D CR0) { - ext4_mb_choose_next_group_cr0(ac, new_cr, group, ngroups); - } else if (*new_cr =3D=3D CR1) { - ext4_mb_choose_next_group_cr1(ac, new_cr, group, ngroups); - } else if (*new_cr =3D=3D CR1_5) { - ext4_mb_choose_next_group_cr1_5(ac, new_cr, group, ngroups); + if (*new_cr =3D=3D CR_POWER2_ALIGNED) { + ext4_mb_choose_next_group_p2_aligned(ac, new_cr, group, ngroups); + } else if (*new_cr =3D=3D CR_GOAL_LEN_FAST) { + ext4_mb_choose_next_group_goal_fast(ac, new_cr, group, ngroups); + } else if (*new_cr =3D=3D CR_BEST_AVAIL_LEN) { + ext4_mb_choose_next_group_best_avail(ac, new_cr, group, ngroups); } else { /* * TODO: For CR=3D2, we can arrange groups in an rb tree sorted by @@ -2445,11 +2449,12 @@ void ext4_mb_complex_scan_group(struct ext4_allocat= ion_context *ac, break; } =20 - if (ac->ac_criteria < CR2) { + if (ac->ac_criteria < CR_FAST) { /* - * In CR1 and CR1_5, we are sure that this group will - * have a large enough continuous free extent, so skip - * over the smaller free extents + * In CR_GOAL_LEN_FAST and CR_BEST_AVAIL_LEN, we are + * sure that this group will have a large enough + * continuous free extent, so skip over the smaller free + * extents */ j =3D mb_find_next_bit(bitmap, EXT4_CLUSTERS_PER_GROUP(sb), i); @@ -2545,7 +2550,7 @@ static bool ext4_mb_good_group(struct ext4_allocation= _context *ac, int flex_size =3D ext4_flex_bg_size(EXT4_SB(ac->ac_sb)); struct ext4_group_info *grp =3D ext4_get_group_info(ac->ac_sb, group); =20 - BUG_ON(cr < CR0 || cr >=3D EXT4_MB_NUM_CRS); + BUG_ON(cr < CR_POWER2_ALIGNED || cr >=3D EXT4_MB_NUM_CRS); =20 if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp) || !grp)) return false; @@ -2559,7 +2564,7 @@ static bool ext4_mb_good_group(struct ext4_allocation= _context *ac, return false; =20 switch (cr) { - case CR0: + case 
CR_POWER2_ALIGNED: BUG_ON(ac->ac_2order =3D=3D 0); =20 /* Avoid using the first bg of a flexgroup for data files */ @@ -2578,16 +2583,16 @@ static bool ext4_mb_good_group(struct ext4_allocati= on_context *ac, return false; =20 return true; - case CR1: - case CR1_5: + case CR_GOAL_LEN_FAST: + case CR_BEST_AVAIL_LEN: if ((free / fragments) >=3D ac->ac_g_ex.fe_len) return true; break; - case CR2: + case CR_GOAL_LEN_SLOW: if (free >=3D ac->ac_g_ex.fe_len) return true; break; - case CR3: + case CR_ANY_FREE: return true; default: BUG(); @@ -2628,7 +2633,7 @@ static int ext4_mb_good_group_nolock(struct ext4_allo= cation_context *ac, free =3D grp->bb_free; if (free =3D=3D 0) goto out; - if (cr <=3D CR2 && free < ac->ac_g_ex.fe_len) + if (cr <=3D CR_FAST && free < ac->ac_g_ex.fe_len) goto out; if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp))) goto out; @@ -2643,15 +2648,16 @@ static int ext4_mb_good_group_nolock(struct ext4_al= location_context *ac, ext4_get_group_desc(sb, group, NULL); int ret; =20 - /* cr=3DCR0/CR1 is a very optimistic search to find large - * good chunks almost for free. If buddy data is not - * ready, then this optimization makes no sense. But - * we never skip the first block group in a flex_bg, - * since this gets used for metadata block allocation, - * and we want to make sure we locate metadata blocks - * in the first block group in the flex_bg if possible. + /* + * cr=3DCR_POWER2_ALIGNED/CR_GOAL_LEN_FAST is a very optimistic + * search to find large good chunks almost for free. If buddy + * data is not ready, then this optimization makes no sense. But + * we never skip the first block group in a flex_bg, since this + * gets used for metadata block allocation, and we want to make + * sure we locate metadata blocks in the first block group in + * the flex_bg if possible. 
*/ - if (cr < CR2 && + if (cr < CR_FAST && (!sbi->s_log_groups_per_flex || ((group & ((1 << sbi->s_log_groups_per_flex) - 1)) !=3D 0)) && !(ext4_has_group_desc_csum(sb) && @@ -2811,10 +2817,10 @@ ext4_mb_regular_allocator(struct ext4_allocation_co= ntext *ac) } =20 /* Let's just scan groups to find more-less suitable blocks */ - cr =3D ac->ac_2order ? CR0 : CR1; + cr =3D ac->ac_2order ? CR_POWER2_ALIGNED : CR_GOAL_LEN_FAST; /* - * cr =3D=3D CR0 try to get exact allocation, - * cr =3D=3D CR3 try to get anything + * cr =3D=3D CR_POWER2_ALIGNED try to get exact allocation, + * cr =3D=3D CR_ANY_FREE try to get anything */ repeat: for (; cr < EXT4_MB_NUM_CRS && ac->ac_status =3D=3D AC_STATUS_CONTINUE; c= r++) { @@ -2844,7 +2850,7 @@ ext4_mb_regular_allocator(struct ext4_allocation_cont= ext *ac) * spend a lot of time loading imperfect groups */ if ((prefetch_grp =3D=3D group) && - (cr > CR1_5 || + (cr >=3D CR_FAST || prefetch_ios < sbi->s_mb_prefetch_limit)) { nr =3D sbi->s_mb_prefetch; if (ext4_has_feature_flex_bg(sb)) { @@ -2882,9 +2888,11 @@ ext4_mb_regular_allocator(struct ext4_allocation_con= text *ac) } =20 ac->ac_groups_scanned++; - if (cr =3D=3D CR0) + if (cr =3D=3D CR_POWER2_ALIGNED) ext4_mb_simple_scan_group(ac, &e4b); - else if ((cr =3D=3D CR1 || cr =3D=3D CR1_5) && sbi->s_stripe && + else if ((cr =3D=3D CR_GOAL_LEN_FAST || + cr =3D=3D CR_BEST_AVAIL_LEN) && + sbi->s_stripe && !(ac->ac_g_ex.fe_len % EXT4_B2C(sbi, sbi->s_stripe))) ext4_mb_scan_aligned(ac, &e4b); @@ -2901,9 +2909,9 @@ ext4_mb_regular_allocator(struct ext4_allocation_cont= ext *ac) if (sbi->s_mb_stats && i =3D=3D ngroups) atomic64_inc(&sbi->s_bal_cX_failed[cr]); =20 - if (i =3D=3D ngroups && ac->ac_criteria =3D=3D CR1_5) + if (i =3D=3D ngroups && ac->ac_criteria =3D=3D CR_BEST_AVAIL_LEN) /* Reset goal length to original goal length before - * falling into CR2 */ + * falling into CR_GOAL_LEN_SLOW */ ac->ac_g_ex.fe_len =3D ac->ac_orig_goal_len; } =20 @@ -2930,7 +2938,7 @@ 
ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 		ac->ac_b_ex.fe_len = 0;
 		ac->ac_status = AC_STATUS_CONTINUE;
 		ac->ac_flags |= EXT4_MB_HINT_FIRST;
-		cr = CR3;
+		cr = CR_ANY_FREE;
 		goto repeat;
 	}
 }
@@ -3046,66 +3054,94 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
 	seq_puts(seq, "mballoc:\n");
 	if (!sbi->s_mb_stats) {
 		seq_puts(seq, "\tmb stats collection turned off.\n");
-		seq_puts(seq, "\tTo enable, please write \"1\" to sysfs file mb_stats.\n");
+		seq_puts(
+			seq,
+			"\tTo enable, please write \"1\" to sysfs file mb_stats.\n");
 		return 0;
 	}
 	seq_printf(seq, "\treqs: %u\n", atomic_read(&sbi->s_bal_reqs));
 	seq_printf(seq, "\tsuccess: %u\n", atomic_read(&sbi->s_bal_success));
 
-	seq_printf(seq, "\tgroups_scanned: %u\n", atomic_read(&sbi->s_bal_groups_scanned));
-
-	seq_puts(seq, "\tcr0_stats:\n");
-	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR0]));
-	seq_printf(seq, "\t\tgroups_considered: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR0]));
-	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR0]));
+	seq_printf(seq, "\tgroups_scanned: %u\n",
+		   atomic_read(&sbi->s_bal_groups_scanned));
+
+	/* CR_POWER2_ALIGNED stats */
+	seq_puts(seq, "\tcr_p2_aligned_stats:\n");
+	seq_printf(seq, "\t\thits: %llu\n",
+		   atomic64_read(&sbi->s_bal_cX_hits[CR_POWER2_ALIGNED]));
+	seq_printf(
+		seq, "\t\tgroups_considered: %llu\n",
+		atomic64_read(
+			&sbi->s_bal_cX_groups_considered[CR_POWER2_ALIGNED]));
+	seq_printf(seq, "\t\textents_scanned: %u\n",
+		   atomic_read(&sbi->s_bal_cX_ex_scanned[CR_POWER2_ALIGNED]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_failed[CR0]));
+		   atomic64_read(&sbi->s_bal_cX_failed[CR_POWER2_ALIGNED]));
 	seq_printf(seq, "\t\tbad_suggestions: %u\n",
-		   atomic_read(&sbi->s_bal_cr0_bad_suggestions));
+		   atomic_read(&sbi->s_bal_p2_aligned_bad_suggestions));
 
-	seq_puts(seq, "\tcr1_stats:\n");
-	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR1]));
+	/* CR_GOAL_LEN_FAST stats */
+	seq_puts(seq, "\tcr_goal_fast_stats:\n");
+	seq_printf(seq, "\t\thits: %llu\n",
+		   atomic64_read(&sbi->s_bal_cX_hits[CR_GOAL_LEN_FAST]));
 	seq_printf(seq, "\t\tgroups_considered: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR1]));
-	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR1]));
+		   atomic64_read(
+			   &sbi->s_bal_cX_groups_considered[CR_GOAL_LEN_FAST]));
+	seq_printf(seq, "\t\textents_scanned: %u\n",
+		   atomic_read(&sbi->s_bal_cX_ex_scanned[CR_GOAL_LEN_FAST]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_failed[CR1]));
+		   atomic64_read(&sbi->s_bal_cX_failed[CR_GOAL_LEN_FAST]));
 	seq_printf(seq, "\t\tbad_suggestions: %u\n",
-		   atomic_read(&sbi->s_bal_cr1_bad_suggestions));
-
-	seq_puts(seq, "\tcr1.5_stats:\n");
-	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR1_5]));
-	seq_printf(seq, "\t\tgroups_considered: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR1_5]));
-	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR1_5]));
+		   atomic_read(&sbi->s_bal_goal_fast_bad_suggestions));
+
+	/* CR_BEST_AVAIL_LEN stats */
+	seq_puts(seq, "\tcr_best_avail_stats:\n");
+	seq_printf(seq, "\t\thits: %llu\n",
+		   atomic64_read(&sbi->s_bal_cX_hits[CR_BEST_AVAIL_LEN]));
+	seq_printf(
+		seq, "\t\tgroups_considered: %llu\n",
+		atomic64_read(
+			&sbi->s_bal_cX_groups_considered[CR_BEST_AVAIL_LEN]));
+	seq_printf(seq, "\t\textents_scanned: %u\n",
+		   atomic_read(&sbi->s_bal_cX_ex_scanned[CR_BEST_AVAIL_LEN]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_failed[CR1_5]));
+		   atomic64_read(&sbi->s_bal_cX_failed[CR_BEST_AVAIL_LEN]));
 	seq_printf(seq, "\t\tbad_suggestions: %u\n",
-		   atomic_read(&sbi->s_bal_cr1_5_bad_suggestions));
+		   atomic_read(&sbi->s_bal_best_avail_bad_suggestions));
 
-	seq_puts(seq, "\tcr2_stats:\n");
-	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR2]));
+	/* CR_GOAL_LEN_SLOW stats */
+	seq_puts(seq, "\tcr_goal_slow_stats:\n");
+	seq_printf(seq, "\t\thits: %llu\n",
+		   atomic64_read(&sbi->s_bal_cX_hits[CR_GOAL_LEN_SLOW]));
 	seq_printf(seq, "\t\tgroups_considered: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR2]));
-	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR2]));
+		   atomic64_read(
+			   &sbi->s_bal_cX_groups_considered[CR_GOAL_LEN_SLOW]));
+	seq_printf(seq, "\t\textents_scanned: %u\n",
+		   atomic_read(&sbi->s_bal_cX_ex_scanned[CR_GOAL_LEN_SLOW]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_failed[CR2]));
-
-	seq_puts(seq, "\tcr3_stats:\n");
-	seq_printf(seq, "\t\thits: %llu\n", atomic64_read(&sbi->s_bal_cX_hits[CR3]));
-	seq_printf(seq, "\t\tgroups_considered: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_groups_considered[CR3]));
-	seq_printf(seq, "\t\textents_scanned: %u\n", atomic_read(&sbi->s_bal_cX_ex_scanned[CR3]));
+		   atomic64_read(&sbi->s_bal_cX_failed[CR_GOAL_LEN_SLOW]));
+
+	/* CR_ANY_FREE stats */
+	seq_puts(seq, "\tcr_any_free_stats:\n");
+	seq_printf(seq, "\t\thits: %llu\n",
+		   atomic64_read(&sbi->s_bal_cX_hits[CR_ANY_FREE]));
+	seq_printf(
+		seq, "\t\tgroups_considered: %llu\n",
+		atomic64_read(&sbi->s_bal_cX_groups_considered[CR_ANY_FREE]));
+	seq_printf(seq, "\t\textents_scanned: %u\n",
+		   atomic_read(&sbi->s_bal_cX_ex_scanned[CR_ANY_FREE]));
 	seq_printf(seq, "\t\tuseless_loops: %llu\n",
-		   atomic64_read(&sbi->s_bal_cX_failed[CR3]));
-	seq_printf(seq, "\textents_scanned: %u\n", atomic_read(&sbi->s_bal_ex_scanned));
+		   atomic64_read(&sbi->s_bal_cX_failed[CR_ANY_FREE]));
+
+	/* Aggregates */
+	seq_printf(seq, "\textents_scanned: %u\n",
+		   atomic_read(&sbi->s_bal_ex_scanned));
 	seq_printf(seq, "\t\tgoal_hits: %u\n", atomic_read(&sbi->s_bal_goals));
-	seq_printf(seq, "\t\tlen_goal_hits: %u\n", atomic_read(&sbi->s_bal_len_goals));
+	seq_printf(seq, "\t\tlen_goal_hits: %u\n",
+		   atomic_read(&sbi->s_bal_len_goals));
 	seq_printf(seq, "\t\t2^n_hits: %u\n", atomic_read(&sbi->s_bal_2orders));
 	seq_printf(seq, "\t\tbreaks: %u\n", atomic_read(&sbi->s_bal_breaks));
 	seq_printf(seq, "\t\tlost: %u\n", atomic_read(&sbi->s_mb_lost_chunks));
-
 	seq_printf(seq, "\tbuddies_generated: %u/%u\n",
 		   atomic_read(&sbi->s_mb_buddies_generated),
 		   ext4_get_groups_count(sb));
@@ -3113,8 +3149,7 @@ int ext4_seq_mb_stats_show(struct seq_file *seq, void *offset)
 		   atomic64_read(&sbi->s_mb_generation_time));
 	seq_printf(seq, "\tpreallocated: %u\n",
 		   atomic_read(&sbi->s_mb_preallocated));
-	seq_printf(seq, "\tdiscarded: %u\n",
-		   atomic_read(&sbi->s_mb_discarded));
+	seq_printf(seq, "\tdiscarded: %u\n", atomic_read(&sbi->s_mb_discarded));
 	return 0;
 }
 
@@ -3601,7 +3636,7 @@ int ext4_mb_init(struct super_block *sb)
 	sbi->s_mb_stats = MB_DEFAULT_STATS;
 	sbi->s_mb_stream_request = MB_DEFAULT_STREAM_THRESHOLD;
 	sbi->s_mb_order2_reqs = MB_DEFAULT_ORDER2_REQS;
-	sbi->s_mb_cr1_5_max_trim_order = MB_DEFAULT_CR1_5_TRIM_ORDER;
+	sbi->s_mb_best_avail_max_trim_order = MB_DEFAULT_BEST_AVAIL_TRIM_ORDER;
 
 	/*
 	 * The default group preallocation is 512, which for 4k block
diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index bddc0335c261..df6b5e7c2274 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -86,11 +86,11 @@
 #define MB_DEFAULT_LINEAR_SCAN_THRESHOLD	16
 
 /*
- * The maximum order upto which CR1.5 can trim a particular allocation request.
- * Example, if we have an order 7 request and max trim order of 3, CR1.5 can
- * trim this upto order 4.
+ * The maximum order upto which CR_BEST_AVAIL_LEN can trim a particular
+ * allocation request. Example, if we have an order 7 request and max trim order
+ * of 3, we can trim this request upto order 4.
 */
-#define MB_DEFAULT_CR1_5_TRIM_ORDER	3
+#define MB_DEFAULT_BEST_AVAIL_TRIM_ORDER	3
 
 /*
  * Number of valid buddy orders
diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
index 4a5c08c8dddb..6d332dff79dd 100644
--- a/fs/ext4/sysfs.c
+++ b/fs/ext4/sysfs.c
@@ -223,7 +223,7 @@ EXT4_RW_ATTR_SBI_UI(warning_ratelimit_interval_ms, s_warning_ratelimit_state.int
 EXT4_RW_ATTR_SBI_UI(warning_ratelimit_burst, s_warning_ratelimit_state.burst);
 EXT4_RW_ATTR_SBI_UI(msg_ratelimit_interval_ms, s_msg_ratelimit_state.interval);
 EXT4_RW_ATTR_SBI_UI(msg_ratelimit_burst, s_msg_ratelimit_state.burst);
-EXT4_RW_ATTR_SBI_UI(mb_cr1_5_max_trim_order, s_mb_cr1_5_max_trim_order);
+EXT4_RW_ATTR_SBI_UI(mb_best_avail_max_trim_order, s_mb_best_avail_max_trim_order);
 #ifdef CONFIG_EXT4_DEBUG
 EXT4_RW_ATTR_SBI_UL(simulate_fail, s_simulate_fail);
 #endif
@@ -274,7 +274,7 @@ static struct attribute *ext4_attrs[] = {
 	ATTR_LIST(warning_ratelimit_burst),
 	ATTR_LIST(msg_ratelimit_interval_ms),
 	ATTR_LIST(msg_ratelimit_burst),
-	ATTR_LIST(mb_cr1_5_max_trim_order),
+	ATTR_LIST(mb_best_avail_max_trim_order),
 	ATTR_LIST(errors_count),
 	ATTR_LIST(warning_count),
 	ATTR_LIST(msg_count),
diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
index 7ea9b4fcb21f..bab28121c7a4 100644
--- a/include/trace/events/ext4.h
+++ b/include/trace/events/ext4.h
@@ -120,19 +120,19 @@ TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX);
 	{ EXT4_FC_REASON_INODE_JOURNAL_DATA,	"INODE_JOURNAL_DATA"}, \
 	{ EXT4_FC_REASON_ENCRYPTED_FILENAME,	"ENCRYPTED_FILENAME"})
 
-TRACE_DEFINE_ENUM(CR0);
-TRACE_DEFINE_ENUM(CR1);
-TRACE_DEFINE_ENUM(CR1_5);
-TRACE_DEFINE_ENUM(CR2);
-TRACE_DEFINE_ENUM(CR3);
-
-#define show_criteria(cr)						\
-	__print_symbolic(cr,						\
-		{ CR0, "CR0" },						\
-		{ CR1, "CR1" },						\
-		{ CR1_5, "CR1.5" }					\
-		{ CR2, "CR2" },						\
-		{ CR3, "CR3" })
+TRACE_DEFINE_ENUM(CR_POWER2_ALIGNED);
+TRACE_DEFINE_ENUM(CR_GOAL_LEN_FAST);
+TRACE_DEFINE_ENUM(CR_BEST_AVAIL_LEN);
+TRACE_DEFINE_ENUM(CR_GOAL_LEN_SLOW);
+TRACE_DEFINE_ENUM(CR_ANY_FREE);
+
+#define show_criteria(cr)						\
+	__print_symbolic(cr,						\
+		{ CR_POWER2_ALIGNED,	"CR_POWER2_ALIGNED" },		\
+		{ CR_GOAL_LEN_FAST,	"CR_GOAL_LEN_FAST" },		\
+		{ CR_BEST_AVAIL_LEN,	"CR_BEST_AVAIL_LEN" },		\
+		{ CR_GOAL_LEN_SLOW,	"CR_GOAL_LEN_SLOW" },		\
+		{ CR_ANY_FREE,		"CR_ANY_FREE" })
 
 TRACE_EVENT(ext4_other_inode_update_time,
 	TP_PROTO(struct inode *inode, ino_t orig_ino),
-- 
2.31.1