From: Jinjie Ruan
Subject: [PATCH -next v4 06/19] arm64: entry: Move arm64_preempt_schedule_irq() into exit_to_kernel_mode()
Date: Fri, 25 Oct 2024 18:06:47 +0800
Message-ID: <20241025100700.3714552-7-ruanjinjie@huawei.com>
In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com>
References: <20241025100700.3714552-1-ruanjinjie@huawei.com>

Move arm64_preempt_schedule_irq() into exit_to_kernel_mode(), so that
there is a chance to reschedule not only at the end of __el1_irq() but
on every return to kernel mode from an exception.

As Mark pointed out, this change will have the following key impact:

    "We'll preempt even without taking a "real" interrupt. That
    shouldn't result in preemption that wasn't possible before,
    but it does change the probability of preempting at certain
    points, and might have a performance impact, so probably
    warrants a benchmark."

Suggested-by: Mark Rutland
Signed-off-by: Jinjie Ruan
---
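Note (a simplified sketch, not part of the diff below): after this
change the preemption check sits in the common kernel-mode exit path,
so every EL1 exception return passes through it, roughly:

	el1_interrupt()
	  __el1_irq()
	    state = enter_from_kernel_mode(regs);
	    irq_enter_rcu();
	    do_interrupt_handler(regs, handler);
	    irq_exit_rcu();
	    exit_to_kernel_mode(regs, state);
	      arm64_preempt_schedule_irq();	/* preemption point */
	      mte_check_tfsr_exit();
	      ...

Other handlers that end in exit_to_kernel_mode(), e.g. el1_abort() and
el1_undef(), now reach the same preemption point, which is why we can
preempt "even without taking a 'real' interrupt".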
 arch/arm64/kernel/entry-common.c | 88 ++++++++++++++++----------------
 1 file changed, 44 insertions(+), 44 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 137481a3f0fa..e0380812d71e 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -61,6 +61,48 @@ static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
 	return ret;
 }
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+#define need_irq_preemption() \
+	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
+#else
+#define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
+#endif
+
+static void __sched arm64_preempt_schedule_irq(void)
+{
+	if (!need_irq_preemption())
+		return;
+
+	/*
+	 * Note: thread_info::preempt_count includes both thread_info::count
+	 * and thread_info::need_resched, and is not equivalent to
+	 * preempt_count().
+	 */
+	if (READ_ONCE(current_thread_info()->preempt_count) != 0)
+		return;
+
+	/*
+	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
+	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
+	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
+	 * DAIF we must have handled an NMI, so skip preemption.
+	 */
+	if (system_uses_irq_prio_masking() && read_sysreg(daif))
+		return;
+
+	/*
+	 * Preempting a task from an IRQ means we leave copies of PSTATE
+	 * on the stack. cpufeature's enable calls may modify PSTATE, but
+	 * resuming one of these preempted tasks would undo those changes.
+	 *
+	 * Only allow a task to be preempted once cpufeatures have been
+	 * enabled.
+	 */
+	if (system_capabilities_finalized())
+		preempt_schedule_irq();
+}
+
 /*
  * Handle IRQ/context state management when exiting to kernel mode.
  * After this function returns it is not safe to call regular kernel code,
@@ -72,6 +114,8 @@ static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
 static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
 					irqentry_state_t state)
 {
+	arm64_preempt_schedule_irq();
+
 	mte_check_tfsr_exit();
 
 	lockdep_assert_irqs_disabled();
@@ -257,48 +301,6 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs,
 	lockdep_hardirqs_on(CALLER_ADDR0);
 }
 
-#ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
-#define need_irq_preemption() \
-	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
-#else
-#define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
-#endif
-
-static void __sched arm64_preempt_schedule_irq(void)
-{
-	if (!need_irq_preemption())
-		return;
-
-	/*
-	 * Note: thread_info::preempt_count includes both thread_info::count
-	 * and thread_info::need_resched, and is not equivalent to
-	 * preempt_count().
-	 */
-	if (READ_ONCE(current_thread_info()->preempt_count) != 0)
-		return;
-
-	/*
-	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
-	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
-	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
-	 * DAIF we must have handled an NMI, so skip preemption.
-	 */
-	if (system_uses_irq_prio_masking() && read_sysreg(daif))
-		return;
-
-	/*
-	 * Preempting a task from an IRQ means we leave copies of PSTATE
-	 * on the stack. cpufeature's enable calls may modify PSTATE, but
-	 * resuming one of these preempted tasks would undo those changes.
-	 *
-	 * Only allow a task to be preempted once cpufeatures have been
-	 * enabled.
-	 */
-	if (system_capabilities_finalized())
-		preempt_schedule_irq();
-}
-
 static void do_interrupt_handler(struct pt_regs *regs,
 				 void (*handler)(struct pt_regs *))
 {
@@ -567,8 +569,6 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	arm64_preempt_schedule_irq();
-
 	exit_to_kernel_mode(regs, state);
 }
 static void noinstr el1_interrupt(struct pt_regs *regs,
-- 
2.34.1
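Postscript, for readers unfamiliar with the PREEMPT_DYNAMIC gating
moved above: need_irq_preemption() is built on a default-true static
key, so changing the dynamic preemption mode patches the reschedule
check in or out rather than testing a variable on every exception
return. A minimal, self-contained sketch of that pattern follows; the
example_* names are illustrative only, not code from this series:

#include <linux/jump_label.h>

/* Default-true key: the reschedule check is enabled unless dynamic
 * preemption is switched to "none"/"voluntary". */
DEFINE_STATIC_KEY_TRUE(example_irqexit_cond_resched);

static bool example_need_irq_preemption(void)
{
	/* Compiles down to a runtime-patched branch, not a load. */
	return static_branch_unlikely(&example_irqexit_cond_resched);
}

/* Something like sched_dynamic_update() would flip the key once,
 * e.g. when the "preempt=" boot parameter is parsed. */
static void example_set_irq_preemption(bool enabled)
{
	if (enabled)
		static_branch_enable(&example_irqexit_cond_resched);
	else
		static_branch_disable(&example_irqexit_cond_resched);
}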