From nobody Fri Dec 19 08:44:37 2025
From: Jinjie Ruan
Subject: [PATCH -next v5 06/22] arm64: entry: Expand the need_irq_preemption() macro ahead
Date: Fri, 6 Dec 2024 18:17:28 +0800
Message-ID: <20241206101744.4161990-7-ruanjinjie@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241206101744.4161990-1-ruanjinjie@huawei.com>
References: <20241206101744.4161990-1-ruanjinjie@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The generic entry code has the same logic as the need_irq_preemption()
macro and uses a helper function to check the remaining reschedule
conditions. In preparation for moving arm64 over to the generic entry
code, expand need_irq_preemption() into its caller ahead of time and
extract the arm64 reschedule check into a helper function.

No functional changes.
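For reference, the generic entry code in kernel/entry/common.c that this
series converges on performs the same check roughly as sketched below
(condensed from the generic entry code; exact details such as the
CONFIG_HAVE_PREEMPT_DYNAMIC_KEY split vary by kernel version):

  /* Generic entry: resched-on-irq-exit check shared by all architectures. */
  void raw_irqentry_exit_cond_resched(void)
  {
  	if (!preempt_count()) {
  		/* Sanity check RCU and thread stack */
  		rcu_irq_exit_check_preempt();
  		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
  			WARN_ON_ONCE(!on_thread_stack());
  		if (need_resched())
  			preempt_schedule_irq();
  	}
  }

  #ifdef CONFIG_PREEMPT_DYNAMIC
  /* With a dynamic-preemption static key, the key gates the call. */
  void dynamic_irqentry_exit_cond_resched(void)
  {
  	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
  		return;
  	raw_irqentry_exit_cond_resched();
  }
  #endif

The arm64 version added below mirrors this structure, but folds the
static-key/CONFIG_PREEMPTION check into raw_irqentry_exit_cond_resched()
itself and adds the arm64-specific arm64_need_resched() test.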
Signed-off-by: Jinjie Ruan
---
 arch/arm64/include/asm/preempt.h |  1 +
 arch/arm64/kernel/entry-common.c | 28 +++++++++++++++++-----------
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index 0159b625cc7f..d0f93385bd85 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -85,6 +85,7 @@ static inline bool should_resched(int preempt_offset)
 void preempt_schedule(void);
 void preempt_schedule_notrace(void);
 
+void raw_irqentry_exit_cond_resched(void);
 #ifdef CONFIG_PREEMPT_DYNAMIC
 
 DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index efd1a990d138..80b47ca02db2 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -77,17 +77,10 @@ static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
-#define need_irq_preemption() \
-	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
-#else
-#define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
 #endif
 
 static inline bool arm64_need_resched(void)
 {
-	if (!need_irq_preemption())
-		return false;
-
 	/*
 	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
 	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
@@ -111,6 +104,22 @@ static inline bool arm64_need_resched(void)
 	return true;
 }
 
+void raw_irqentry_exit_cond_resched(void)
+{
+#ifdef CONFIG_PREEMPT_DYNAMIC
+	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
+		return;
+#else
+	if (!IS_ENABLED(CONFIG_PREEMPTION))
+		return;
+#endif
+
+	if (!preempt_count()) {
+		if (need_resched() && arm64_need_resched())
+			preempt_schedule_irq();
+	}
+}
+
 /*
  * Handle IRQ/context state management when exiting to kernel mode.
  * After this function returns it is not safe to call regular kernel code,
@@ -133,10 +142,7 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
 		return;
 	}
 
-	if (!preempt_count() && need_resched()) {
-		if (arm64_need_resched())
-			preempt_schedule_irq();
-	}
+	raw_irqentry_exit_cond_resched();
 
 	trace_hardirqs_on();
 } else {
-- 
2.34.1