From nobody Tue Feb 10 15:46:31 2026
From: Kemeng Shi <shikemeng@huaweicloud.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: willy@infradead.org, baolin.wang@linux.alibaba.com, david@redhat.com,
	shikemeng@huaweicloud.com
Subject: [PATCH 4/4] mm/compaction: add compact_unlock_irqrestore to remove repeated code
Date: Wed, 26 Jul 2023 02:04:56 +0800
Message-Id: <20230725180456.2146626-5-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20230725180456.2146626-1-shikemeng@huaweicloud.com>
References: <20230725180456.2146626-1-shikemeng@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add compact_unlock_irqrestore to remove repeated code. This also makes the
compact lock function series complete, as compact_lock_irqsave and
compact_unlock_irqrestore can now be called as a pair.
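
For reference, a minimal usage sketch of the intended pairing. It assumes the
helper added below together with the existing compact_lock_irqsave();
example_paired_usage() and its arguments are hypothetical and only illustrate
the calling convention, they are not part of this patch:

/*
 * Hypothetical caller: take the lock through compact_lock_irqsave() and
 * release it through the new compact_unlock_irqrestore(), which drops the
 * lock only if it is held and clears the "locked" cookie in one call.
 */
static void example_paired_usage(spinlock_t *lock, struct compact_control *cc)
{
	spinlock_t *locked = NULL;
	unsigned long flags;

	locked = compact_lock_irqsave(lock, &flags, cc);

	/* ... critical section ... */

	compact_unlock_irqrestore(&locked, flags);
}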
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 43 ++++++++++++++++---------------------------
 1 file changed, 16 insertions(+), 27 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index c1dc821ac6e1..eb1d3d9a422c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -541,6 +541,14 @@ static spinlock_t *compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return lock;
 }
 
+static inline void compact_unlock_irqrestore(spinlock_t **locked, unsigned long flags)
+{
+	if (*locked) {
+		spin_unlock_irqrestore(*locked, flags);
+		*locked = NULL;
+	}
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -556,10 +564,7 @@ static spinlock_t *compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 static bool compact_unlock_should_abort(spinlock_t **locked,
 		unsigned long flags, struct compact_control *cc)
 {
-	if (*locked) {
-		spin_unlock_irqrestore(*locked, flags);
-		*locked = NULL;
-	}
+	compact_unlock_irqrestore(locked, flags);
 
 	if (fatal_signal_pending(current)) {
 		cc->contended = true;
@@ -671,8 +676,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 	}
 
-	if (locked)
-		spin_unlock_irqrestore(locked, flags);
+	compact_unlock_irqrestore(&locked, flags);
 
 	/*
 	 * There is a tiny chance that we have read bogus compound_order(),
@@ -935,10 +939,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		}
 
 		if (PageHuge(page) && cc->alloc_contig) {
-			if (locked) {
-				spin_unlock_irqrestore(locked, flags);
-				locked = NULL;
-			}
+			compact_unlock_irqrestore(&locked, flags);
 
 			ret = isolate_or_dissolve_huge_page(page, &cc->migratepages);
 
@@ -1024,10 +1025,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 */
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
-				if (locked) {
-					spin_unlock_irqrestore(locked, flags);
-					locked = NULL;
-				}
+				compact_unlock_irqrestore(&locked, flags);
 
 				if (isolate_movable_page(page, mode)) {
 					folio = page_folio(page);
@@ -1111,9 +1109,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (&lruvec->lru_lock != locked) {
-			if (locked)
-				spin_unlock_irqrestore(locked, flags);
-
+			compact_unlock_irqrestore(&locked, flags);
 			locked = compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 
 			lruvec_memcg_debug(lruvec, folio);
@@ -1176,10 +1172,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
-		if (locked) {
-			spin_unlock_irqrestore(locked, flags);
-			locked = NULL;
-		}
+		compact_unlock_irqrestore(&locked, flags);
 		folio_put(folio);
 
 isolate_fail:
@@ -1192,10 +1185,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	 * page anyway.
 	 */
 	if (nr_isolated) {
-		if (locked) {
-			spin_unlock_irqrestore(locked, flags);
-			locked = NULL;
-		}
+		compact_unlock_irqrestore(&locked, flags);
 		putback_movable_pages(&cc->migratepages);
 		cc->nr_migratepages = 0;
 		nr_isolated = 0;
@@ -1224,8 +1214,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	folio = NULL;
 
 isolate_abort:
-	if (locked)
-		spin_unlock_irqrestore(locked, flags);
+	compact_unlock_irqrestore(&locked, flags);
 	if (folio) {
 		folio_set_lru(folio);
 		folio_put(folio);
-- 
2.30.0