From: Kemeng Shi
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: shikemeng@huaweicloud.com
Subject: [PATCH 1/4] mm/compaction: use "spinlock_t *" to record held lock in compact [un]lock functions
Date: Wed, 19 Jul 2023 19:29:58 +0800
Message-Id: <20230719113001.2023703-2-shikemeng@huaweicloud.com>
In-Reply-To: <20230719113001.2023703-1-shikemeng@huaweicloud.com>
References: <20230719113001.2023703-1-shikemeng@huaweicloud.com>
List-ID: <linux-kernel.vger.kernel.org>
Make compact_lock_irqsave and compact_unlock_should_abort use
"spinlock_t *" to record the held lock. This prepares for reusing
compact_unlock_should_abort in isolate_migratepages_block to remove
repeated code.

Signed-off-by: Kemeng Shi
---
 mm/compaction.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 9641e2131901..dfef14d3ef78 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -523,22 +523,22 @@ static bool test_and_set_skip(struct compact_control *cc, struct page *page)
  * abort when the current block is finished regardless of success rate.
  * Sync compaction acquires the lock.
  *
- * Always returns true which makes it easier to track lock state in callers.
+ * Always returns lock which makes it easier to track lock state in callers.
  */
-static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
+static spinlock_t *compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
						struct compact_control *cc)
	__acquires(lock)
 {
	/* Track if the lock is contended in async mode */
	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
		if (spin_trylock_irqsave(lock, *flags))
-			return true;
+			return lock;

		cc->contended = true;
	}

	spin_lock_irqsave(lock, *flags);
-	return true;
+	return lock;
 }

 /*
@@ -553,12 +553,12 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
  * Returns true if compaction should abort due to fatal signal pending.
  * Returns false when compaction can continue.
  */
-static bool compact_unlock_should_abort(spinlock_t *lock,
-		unsigned long flags, bool *locked, struct compact_control *cc)
+static bool compact_unlock_should_abort(spinlock_t **locked,
+		unsigned long flags, struct compact_control *cc)
 {
	if (*locked) {
-		spin_unlock_irqrestore(lock, flags);
-		*locked = false;
+		spin_unlock_irqrestore(*locked, flags);
+		*locked = NULL;
	}

	if (fatal_signal_pending(current)) {
@@ -586,7 +586,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
	int nr_scanned = 0, total_isolated = 0;
	struct page *cursor;
	unsigned long flags = 0;
-	bool locked = false;
+	spinlock_t *locked = NULL;
	unsigned long blockpfn = *start_pfn;
	unsigned int order;

@@ -607,8 +607,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
	 * pending.
	 */
	if (!(blockpfn % COMPACT_CLUSTER_MAX)
-	    && compact_unlock_should_abort(&cc->zone->lock, flags,
-								&locked, cc))
+	    && compact_unlock_should_abort(&locked, flags, cc))
		break;

	nr_scanned++;
@@ -673,7 +672,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
	}

	if (locked)
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		spin_unlock_irqrestore(locked, flags);

	/*
	 * There is a tiny chance that we have read bogus compound_order(),
-- 
2.30.0