Date: Fri, 8 Jul 2022 15:44:06 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka, Michal Hocko,
    Hugh Dickins, Yu Zhao, Marek Szyprowski, LKML, Linux-MM
Subject: [PATCH] mm/page_alloc: replace local_lock with normal spinlock -fix -fix
Message-ID: <20220708144406.GJ27531@techsingularity.net>

pcpu_spin_unlock and pcpu_spin_unlock_irqrestore both unlock pcp->lock and
then enable
preemption. This lacks symmetry against both the pcpu_spin helpers and
differs from how local_unlock_* is implemented. While this is harmless, it
is unnecessary, and it is generally better to unwind locks and preemption
state in the reverse order to that in which they were acquired.

This is a fix on top of the mm-unstable patch
mm-page_alloc-replace-local_lock-with-normal-spinlock-fix.patch

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 934d1b5a5449..d0141e51e613 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -192,14 +192,14 @@ static DEFINE_MUTEX(pcp_batch_high_lock);
 
 #define pcpu_spin_unlock(member, ptr)					\
 ({									\
-	spin_unlock(&ptr->member);					\
 	pcpu_task_unpin();						\
+	spin_unlock(&ptr->member);					\
 })
 
 #define pcpu_spin_unlock_irqrestore(member, ptr, flags)			\
 ({									\
-	spin_unlock_irqrestore(&ptr->member, flags);			\
 	pcpu_task_unpin();						\
+	spin_unlock_irqrestore(&ptr->member, flags);			\
 })
 
 /* struct per_cpu_pages specific helpers. */