From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1729850960955750.9802204512246; Fri, 25 Oct 2024 03:09:20 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.825845.1240220 (Exim 4.92) (envelope-from ) id 1t4HFW-0002gX-2h; Fri, 25 Oct 2024 10:08:54 +0000 Received: by outflank-mailman (output) from mailman id 825845.1240220; Fri, 25 Oct 2024 10:08:54 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFV-0002gI-VZ; Fri, 25 Oct 2024 10:08:53 +0000 Received: by outflank-mailman (input) for mailman id 825845; Fri, 25 Oct 2024 10:08:52 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFU-0000tn-Hi for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:08:52 +0000 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 199213b7-92b9-11ef-99a3-01e77a169b0f; Fri, 25 Oct 2024 12:08:49 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.163.252]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4XZdjL03xBz10My6; Fri, 25 Oct 2024 18:06:30 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 7AEEB1800A5; Fri, 25 Oct 2024 18:08:34 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:32 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 199213b7-92b9-11ef-99a3-01e77a169b0f From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 01/19] arm64: ptrace: Replace interrupts_enabled() with regs_irqs_disabled() Date: Fri, 25 Oct 2024 18:06:42 +0800 Message-ID: <20241025100700.3714552-2-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 1729850962854116600 Content-Type: text/plain; charset="utf-8" Implement regs_irqs_disabled(), and replace interrupts_enabled() macro with regs_irqs_disabled() all over the place. 
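For illustration only (not part of the patch itself): the new helper is the old one with the polarity inverted, so call sites simply flip the condition. A minimal sketch based on the arm (32-bit) hunk of the diff below; note that on arm64 the new macro additionally folds in the pseudo-NMI priority-mask check via irqs_priority_unmasked():

	/* Old helper: true when IRQs were enabled in the trapped context. */
	#define interrupts_enabled(regs) \
		(!((regs)->ARM_cpsr & PSR_I_BIT))

	/* New helper: true when IRQs were masked in the trapped context. */
	#define regs_irqs_disabled(regs) \
		((regs)->ARM_cpsr & PSR_I_BIT)

	/* Callers invert the test, e.g.: */
	if (!regs_irqs_disabled(regs))	/* was: if (interrupts_enabled(regs)) */
		local_irq_enable();
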
No functional changes. Suggested-by: Mark Rutland Signed-off-by: Jinjie Ruan --- arch/arm/include/asm/ptrace.h | 4 ++-- arch/arm/kernel/hw_breakpoint.c | 2 +- arch/arm/kernel/process.c | 2 +- arch/arm/mm/alignment.c | 2 +- arch/arm/mm/fault.c | 2 +- arch/arm64/include/asm/daifflags.h | 2 +- arch/arm64/include/asm/ptrace.h | 4 ++-- arch/arm64/include/asm/xen/events.h | 2 +- arch/arm64/kernel/acpi.c | 2 +- arch/arm64/kernel/debug-monitors.c | 2 +- arch/arm64/kernel/entry-common.c | 4 ++-- arch/arm64/kernel/sdei.c | 2 +- drivers/irqchip/irq-gic-v3.c | 2 +- 13 files changed, 16 insertions(+), 16 deletions(-) diff --git a/arch/arm/include/asm/ptrace.h b/arch/arm/include/asm/ptrace.h index 6eb311fb2da0..2054b17b3a69 100644 --- a/arch/arm/include/asm/ptrace.h +++ b/arch/arm/include/asm/ptrace.h @@ -46,8 +46,8 @@ struct svc_pt_regs { #define processor_mode(regs) \ ((regs)->ARM_cpsr & MODE_MASK) =20 -#define interrupts_enabled(regs) \ - (!((regs)->ARM_cpsr & PSR_I_BIT)) +#define regs_irqs_disabled(regs) \ + ((regs)->ARM_cpsr & PSR_I_BIT) =20 #define fast_interrupts_enabled(regs) \ (!((regs)->ARM_cpsr & PSR_F_BIT)) diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoin= t.c index a12efd0f43e8..bc7c9f5a2767 100644 --- a/arch/arm/kernel/hw_breakpoint.c +++ b/arch/arm/kernel/hw_breakpoint.c @@ -947,7 +947,7 @@ static int hw_breakpoint_pending(unsigned long addr, un= signed int fsr, =20 preempt_disable(); =20 - if (interrupts_enabled(regs)) + if (!regs_irqs_disabled(regs)) local_irq_enable(); =20 /* We only handle watchpoints and hardware breakpoints. */ diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index e16ed102960c..5979a5cec2d0 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -167,7 +167,7 @@ void __show_regs(struct pt_regs *regs) segment =3D "user"; =20 printk("Flags: %s IRQs o%s FIQs o%s Mode %s ISA %s Segment %s\n", - buf, interrupts_enabled(regs) ? "n" : "ff", + buf, !regs_irqs_disabled(regs) ? "n" : "ff", fast_interrupts_enabled(regs) ? "n" : "ff", processor_modes[processor_mode(regs)], isa_modes[isa_mode(regs)], segment); diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c index 3c6ddb1afdc4..642aae48a09e 100644 --- a/arch/arm/mm/alignment.c +++ b/arch/arm/mm/alignment.c @@ -809,7 +809,7 @@ do_alignment(unsigned long addr, unsigned int fsr, stru= ct pt_regs *regs) int thumb2_32b =3D 0; int fault; =20 - if (interrupts_enabled(regs)) + if (!regs_irqs_disabled(regs)) local_irq_enable(); =20 instrptr =3D instruction_pointer(regs); diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index ab01b51de559..dd8e95fcce10 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -275,7 +275,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, str= uct pt_regs *regs) =20 =20 /* Enable interrupts if they were enabled in the parent context. 
*/ - if (interrupts_enabled(regs)) + if (!regs_irqs_disabled(regs)) local_irq_enable(); =20 /* diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/da= ifflags.h index fbb5c99eb2f9..5fca48009043 100644 --- a/arch/arm64/include/asm/daifflags.h +++ b/arch/arm64/include/asm/daifflags.h @@ -128,7 +128,7 @@ static inline void local_daif_inherit(struct pt_regs *r= egs) { unsigned long flags =3D regs->pstate & DAIF_MASK; =20 - if (interrupts_enabled(regs)) + if (!regs_irqs_disabled(regs)) trace_hardirqs_on(); =20 if (system_uses_irq_prio_masking()) diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrac= e.h index 47ff8654c5ec..3e5372a98da4 100644 --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -214,8 +214,8 @@ static inline void forget_syscall(struct pt_regs *regs) (regs)->pmr =3D=3D GIC_PRIO_IRQON : \ true) =20 -#define interrupts_enabled(regs) \ - (!((regs)->pstate & PSR_I_BIT) && irqs_priority_unmasked(regs)) +#define regs_irqs_disabled(regs) \ + (((regs)->pstate & PSR_I_BIT) || (!irqs_priority_unmasked(regs))) =20 #define fast_interrupts_enabled(regs) \ (!((regs)->pstate & PSR_F_BIT)) diff --git a/arch/arm64/include/asm/xen/events.h b/arch/arm64/include/asm/x= en/events.h index 2788e95d0ff0..2977b5fe068d 100644 --- a/arch/arm64/include/asm/xen/events.h +++ b/arch/arm64/include/asm/xen/events.h @@ -14,7 +14,7 @@ enum ipi_vector { =20 static inline int xen_irqs_disabled(struct pt_regs *regs) { - return !interrupts_enabled(regs); + return regs_irqs_disabled(regs); } =20 #define xchg_xen_ulong(ptr, val) xchg((ptr), (val)) diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c index e6f66491fbe9..732f89daae23 100644 --- a/arch/arm64/kernel/acpi.c +++ b/arch/arm64/kernel/acpi.c @@ -403,7 +403,7 @@ int apei_claim_sea(struct pt_regs *regs) return_to_irqs_enabled =3D !irqs_disabled_flags(arch_local_save_flags()); =20 if (regs) - return_to_irqs_enabled =3D interrupts_enabled(regs); + return_to_irqs_enabled =3D !regs_irqs_disabled(regs); =20 /* * SEA can interrupt SError, mask it and describe this as an NMI so diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-m= onitors.c index c60a4a90c6a5..5497df05dd1a 100644 --- a/arch/arm64/kernel/debug-monitors.c +++ b/arch/arm64/kernel/debug-monitors.c @@ -231,7 +231,7 @@ static void send_user_sigtrap(int si_code) if (WARN_ON(!user_mode(regs))) return; =20 - if (interrupts_enabled(regs)) + if (!regs_irqs_disabled(regs)) local_irq_enable(); =20 arm64_force_sig_fault(SIGTRAP, si_code, instruction_pointer(regs), diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index b260ddc4d3e9..c547e70428d3 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -73,7 +73,7 @@ static __always_inline void __exit_to_kernel_mode(struct = pt_regs *regs) { lockdep_assert_irqs_disabled(); =20 - if (interrupts_enabled(regs)) { + if (!regs_irqs_disabled(regs)) { if (regs->exit_rcu) { trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); @@ -569,7 +569,7 @@ static void noinstr el1_interrupt(struct pt_regs *regs, { write_sysreg(DAIF_PROCCTX_NOIRQ, daif); =20 - if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs)) + if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && regs_irqs_disabled(regs)) __el1_pnmi(regs, handler); else __el1_irq(regs, handler); diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c index 255d12f881c2..27a17da635d8 100644 --- a/arch/arm64/kernel/sdei.c +++ 
b/arch/arm64/kernel/sdei.c @@ -247,7 +247,7 @@ unsigned long __kprobes do_sdei_event(struct pt_regs *r= egs, * If we interrupted the kernel with interrupts masked, we always go * back to wherever we came from. */ - if (mode =3D=3D kernel_mode && !interrupts_enabled(regs)) + if (mode =3D=3D kernel_mode && regs_irqs_disabled(regs)) return SDEI_EV_HANDLED; =20 /* diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c index ce87205e3e82..5c832c436bd8 100644 --- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -932,7 +932,7 @@ static void __gic_handle_irq_from_irqsoff(struct pt_reg= s *regs) =20 static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs) { - if (unlikely(gic_supports_nmi() && !interrupts_enabled(regs))) + if (unlikely(gic_supports_nmi() && regs_irqs_disabled(regs))) __gic_handle_irq_from_irqsoff(regs); else __gic_handle_irq_from_irqson(regs); --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1729850960512563.304012760589; Fri, 25 Oct 2024 03:09:20 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.825842.1240185 (Exim 4.92) (envelope-from ) id 1t4HFS-0001fq-Uk; Fri, 25 Oct 2024 10:08:50 +0000 Received: by outflank-mailman (output) from mailman id 825842.1240185; Fri, 25 Oct 2024 10:08:50 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFS-0001ea-O5; Fri, 25 Oct 2024 10:08:50 +0000 Received: by outflank-mailman (input) for mailman id 825842; Fri, 25 Oct 2024 10:08:48 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFQ-0000tn-FM for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:08:48 +0000 Received: from szxga05-in.huawei.com (szxga05-in.huawei.com [45.249.212.191]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 19c37b21-92b9-11ef-99a3-01e77a169b0f; Fri, 25 Oct 2024 12:08:45 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.88.234]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4XZdk639hWz1jvxH; Fri, 25 Oct 2024 18:07:10 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id DC0FB1401F1; Fri, 25 Oct 2024 18:08:35 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:34 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 
19c37b21-92b9-11ef-99a3-01e77a169b0f From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 02/19] arm64: entry: Refactor the entry and exit for exceptions from EL1 Date: Fri, 25 Oct 2024 18:06:43 +0800 Message-ID: <20241025100700.3714552-3-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 1729850962898116600 Content-Type: text/plain; charset="utf-8" These changes refactor the entry and exit routines for the exceptions from EL1. They store the RCU and lockdep state in a struct irqentry_state variable on the stack, rather than recording them in the fields of pt_regs, since it is safe enough for these context. Before: struct pt_regs { ... u64 lockdep_hardirqs; u64 exit_rcu; } enter_from_kernel_mode(regs); ... exit_to_kernel_mode(regs); After: typedef struct irqentry_state { union { bool exit_rcu; bool lockdep; }; } irqentry_state_t; irqentry_state_t state =3D enter_from_kernel_mode(regs); ... exit_to_kernel_mode(regs, state); No functional changes. Suggested-by: Mark Rutland Signed-off-by: Jinjie Ruan --- arch/arm64/include/asm/ptrace.h | 11 ++- arch/arm64/kernel/entry-common.c | 129 +++++++++++++++++++------------ 2 files changed, 85 insertions(+), 55 deletions(-) diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrac= e.h index 3e5372a98da4..5156c0d5fa20 100644 --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -149,6 +149,13 @@ static inline unsigned long pstate_to_compat_psr(const= unsigned long pstate) return psr; } =20 +typedef struct irqentry_state { + union { + bool exit_rcu; + bool lockdep; + }; +} irqentry_state_t; + /* * This struct defines the way the registers are stored on the stack durin= g an * exception. struct user_pt_regs must form a prefix of struct pt_regs. @@ -169,10 +176,6 @@ struct pt_regs { =20 u64 sdei_ttbr1; struct frame_record_meta stackframe; - - /* Only valid for some EL1 exceptions. */ - u64 lockdep_hardirqs; - u64 exit_rcu; }; =20 /* For correct stack alignment, pt_regs has to be a multiple of 16 bytes. = */ diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index c547e70428d3..68a9aecacdb9 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -36,29 +36,36 @@ * This is intended to match the logic in irqentry_enter(), handling the k= ernel * mode transitions only. 
*/ -static __always_inline void __enter_from_kernel_mode(struct pt_regs *regs) +static __always_inline irqentry_state_t __enter_from_kernel_mode(struct pt= _regs *regs) { - regs->exit_rcu =3D false; + irqentry_state_t ret =3D { + .exit_rcu =3D false, + }; =20 if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) { lockdep_hardirqs_off(CALLER_ADDR0); ct_irq_enter(); trace_hardirqs_off_finish(); =20 - regs->exit_rcu =3D true; - return; + ret.exit_rcu =3D true; + return ret; } =20 lockdep_hardirqs_off(CALLER_ADDR0); rcu_irq_enter_check_tick(); trace_hardirqs_off_finish(); + + return ret; } =20 -static void noinstr enter_from_kernel_mode(struct pt_regs *regs) +static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *reg= s) { - __enter_from_kernel_mode(regs); + irqentry_state_t ret =3D __enter_from_kernel_mode(regs); + mte_check_tfsr_entry(); mte_disable_tco_entry(current); + + return ret; } =20 /* @@ -69,12 +76,13 @@ static void noinstr enter_from_kernel_mode(struct pt_re= gs *regs) * This is intended to match the logic in irqentry_exit(), handling the ke= rnel * mode transitions only, and with preemption handled elsewhere. */ -static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs) +static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs, + irqentry_state_t state) { lockdep_assert_irqs_disabled(); =20 if (!regs_irqs_disabled(regs)) { - if (regs->exit_rcu) { + if (state.exit_rcu) { trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); ct_irq_exit(); @@ -84,15 +92,16 @@ static __always_inline void __exit_to_kernel_mode(struc= t pt_regs *regs) =20 trace_hardirqs_on(); } else { - if (regs->exit_rcu) + if (state.exit_rcu) ct_irq_exit(); } } =20 -static void noinstr exit_to_kernel_mode(struct pt_regs *regs) +static void noinstr exit_to_kernel_mode(struct pt_regs *regs, + irqentry_state_t state) { mte_check_tfsr_exit(); - __exit_to_kernel_mode(regs); + __exit_to_kernel_mode(regs, state); } =20 /* @@ -190,9 +199,11 @@ asmlinkage void noinstr asm_exit_to_user_mode(struct p= t_regs *regs) * mode. Before this function is called it is not safe to call regular ker= nel * code, instrumentable code, or any code which may trigger an exception. */ -static void noinstr arm64_enter_nmi(struct pt_regs *regs) +static noinstr irqentry_state_t arm64_enter_nmi(struct pt_regs *regs) { - regs->lockdep_hardirqs =3D lockdep_hardirqs_enabled(); + irqentry_state_t irq_state; + + irq_state.lockdep =3D lockdep_hardirqs_enabled(); =20 __nmi_enter(); lockdep_hardirqs_off(CALLER_ADDR0); @@ -201,6 +212,8 @@ static void noinstr arm64_enter_nmi(struct pt_regs *reg= s) =20 trace_hardirqs_off_finish(); ftrace_nmi_enter(); + + return irq_state; } =20 /* @@ -208,19 +221,18 @@ static void noinstr arm64_enter_nmi(struct pt_regs *r= egs) * mode. After this function returns it is not safe to call regular kernel * code, instrumentable code, or any code which may trigger an exception. */ -static void noinstr arm64_exit_nmi(struct pt_regs *regs) +static void noinstr arm64_exit_nmi(struct pt_regs *regs, + irqentry_state_t irq_state) { - bool restore =3D regs->lockdep_hardirqs; - ftrace_nmi_exit(); - if (restore) { + if (irq_state.lockdep) { trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); } =20 ct_nmi_exit(); lockdep_hardirq_exit(); - if (restore) + if (irq_state.lockdep) lockdep_hardirqs_on(CALLER_ADDR0); __nmi_exit(); } @@ -230,14 +242,18 @@ static void noinstr arm64_exit_nmi(struct pt_regs *re= gs) * kernel mode. 
Before this function is called it is not safe to call regu= lar * kernel code, instrumentable code, or any code which may trigger an exce= ption. */ -static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs) +static noinstr irqentry_state_t arm64_enter_el1_dbg(struct pt_regs *regs) { - regs->lockdep_hardirqs =3D lockdep_hardirqs_enabled(); + irqentry_state_t state; + + state.lockdep =3D lockdep_hardirqs_enabled(); =20 lockdep_hardirqs_off(CALLER_ADDR0); ct_nmi_enter(); =20 trace_hardirqs_off_finish(); + + return state; } =20 /* @@ -245,17 +261,16 @@ static void noinstr arm64_enter_el1_dbg(struct pt_reg= s *regs) * kernel mode. After this function returns it is not safe to call regular * kernel code, instrumentable code, or any code which may trigger an exce= ption. */ -static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs) +static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs, + irqentry_state_t state) { - bool restore =3D regs->lockdep_hardirqs; - - if (restore) { + if (state.lockdep) { trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); } =20 ct_nmi_exit(); - if (restore) + if (state.lockdep) lockdep_hardirqs_on(CALLER_ADDR0); } =20 @@ -426,78 +441,86 @@ UNHANDLED(el1t, 64, error) static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr) { unsigned long far =3D read_sysreg(far_el1); + irqentry_state_t state; =20 - enter_from_kernel_mode(regs); + state =3D enter_from_kernel_mode(regs); local_daif_inherit(regs); do_mem_abort(far, esr, regs); local_daif_mask(); - exit_to_kernel_mode(regs); + exit_to_kernel_mode(regs, state); } =20 static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr) { unsigned long far =3D read_sysreg(far_el1); + irqentry_state_t state; =20 - enter_from_kernel_mode(regs); + state =3D enter_from_kernel_mode(regs); local_daif_inherit(regs); do_sp_pc_abort(far, esr, regs); local_daif_mask(); - exit_to_kernel_mode(regs); + exit_to_kernel_mode(regs, state); } =20 static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr) { - enter_from_kernel_mode(regs); + irqentry_state_t state =3D enter_from_kernel_mode(regs); + local_daif_inherit(regs); do_el1_undef(regs, esr); local_daif_mask(); - exit_to_kernel_mode(regs); + exit_to_kernel_mode(regs, state); } =20 static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr) { - enter_from_kernel_mode(regs); + irqentry_state_t state =3D enter_from_kernel_mode(regs); + local_daif_inherit(regs); do_el1_bti(regs, esr); local_daif_mask(); - exit_to_kernel_mode(regs); + exit_to_kernel_mode(regs, state); } =20 static void noinstr el1_gcs(struct pt_regs *regs, unsigned long esr) { - enter_from_kernel_mode(regs); + irqentry_state_t state =3D enter_from_kernel_mode(regs); + local_daif_inherit(regs); do_el1_gcs(regs, esr); local_daif_mask(); - exit_to_kernel_mode(regs); + exit_to_kernel_mode(regs, state); } =20 static void noinstr el1_mops(struct pt_regs *regs, unsigned long esr) { - enter_from_kernel_mode(regs); + irqentry_state_t state =3D enter_from_kernel_mode(regs); + local_daif_inherit(regs); do_el1_mops(regs, esr); local_daif_mask(); - exit_to_kernel_mode(regs); + exit_to_kernel_mode(regs, state); } =20 static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr) { unsigned long far =3D read_sysreg(far_el1); + irqentry_state_t state; =20 - arm64_enter_el1_dbg(regs); + state =3D arm64_enter_el1_dbg(regs); if (!cortex_a76_erratum_1463225_debug_handler(regs)) do_debug_exception(far, esr, regs); - arm64_exit_el1_dbg(regs); + arm64_exit_el1_dbg(regs, state); } =20 
static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr) { - enter_from_kernel_mode(regs); + irqentry_state_t state =3D enter_from_kernel_mode(regs); + local_daif_inherit(regs); do_el1_fpac(regs, esr); local_daif_mask(); - exit_to_kernel_mode(regs); + exit_to_kernel_mode(regs, state); } =20 asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs) @@ -546,15 +569,16 @@ asmlinkage void noinstr el1h_64_sync_handler(struct p= t_regs *regs) static __always_inline void __el1_pnmi(struct pt_regs *regs, void (*handler)(struct pt_regs *)) { - arm64_enter_nmi(regs); + irqentry_state_t state =3D arm64_enter_nmi(regs); + do_interrupt_handler(regs, handler); - arm64_exit_nmi(regs); + arm64_exit_nmi(regs, state); } =20 static __always_inline void __el1_irq(struct pt_regs *regs, void (*handler)(struct pt_regs *)) { - enter_from_kernel_mode(regs); + irqentry_state_t state =3D enter_from_kernel_mode(regs); =20 irq_enter_rcu(); do_interrupt_handler(regs, handler); @@ -562,7 +586,7 @@ static __always_inline void __el1_irq(struct pt_regs *r= egs, =20 arm64_preempt_schedule_irq(); =20 - exit_to_kernel_mode(regs); + exit_to_kernel_mode(regs, state); } static void noinstr el1_interrupt(struct pt_regs *regs, void (*handler)(struct pt_regs *)) @@ -588,11 +612,12 @@ asmlinkage void noinstr el1h_64_fiq_handler(struct pt= _regs *regs) asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs) { unsigned long esr =3D read_sysreg(esr_el1); + irqentry_state_t state; =20 local_daif_restore(DAIF_ERRCTX); - arm64_enter_nmi(regs); + state =3D arm64_enter_nmi(regs); do_serror(regs, esr); - arm64_exit_nmi(regs); + arm64_exit_nmi(regs, state); } =20 static void noinstr el0_da(struct pt_regs *regs, unsigned long esr) @@ -855,12 +880,13 @@ asmlinkage void noinstr el0t_64_fiq_handler(struct pt= _regs *regs) static void noinstr __el0_error_handler_common(struct pt_regs *regs) { unsigned long esr =3D read_sysreg(esr_el1); + irqentry_state_t state; =20 enter_from_user_mode(regs); local_daif_restore(DAIF_ERRCTX); - arm64_enter_nmi(regs); + state =3D arm64_enter_nmi(regs); do_serror(regs, esr); - arm64_exit_nmi(regs); + arm64_exit_nmi(regs, state); local_daif_restore(DAIF_PROCCTX); exit_to_user_mode(regs); } @@ -968,6 +994,7 @@ asmlinkage void noinstr __noreturn handle_bad_stack(str= uct pt_regs *regs) asmlinkage noinstr unsigned long __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg) { + irqentry_state_t state; unsigned long ret; =20 /* @@ -992,9 +1019,9 @@ __sdei_handler(struct pt_regs *regs, struct sdei_regis= tered_event *arg) else if (cpu_has_pan()) set_pstate_pan(0); =20 - arm64_enter_nmi(regs); + state =3D arm64_enter_nmi(regs); ret =3D do_sdei_event(regs, arg); - arm64_exit_nmi(regs); + arm64_exit_nmi(regs, state); =20 return ret; } --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1729850954770759.8843807517765; Fri, 25 Oct 2024 03:09:14 -0700 
(PDT) Received: from list by lists.xenproject.org with outflank-mailman.825838.1240150 (Exim 4.92) (envelope-from ) id 1t4HFP-0000u9-Lg; Fri, 25 Oct 2024 10:08:47 +0000 Received: by outflank-mailman (output) from mailman id 825838.1240150; Fri, 25 Oct 2024 10:08:47 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFP-0000u2-H0; Fri, 25 Oct 2024 10:08:47 +0000 Received: by outflank-mailman (input) for mailman id 825838; Fri, 25 Oct 2024 10:08:45 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFN-0000tn-Pi for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:08:45 +0000 Received: from szxga06-in.huawei.com (szxga06-in.huawei.com [45.249.212.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 1a81b42d-92b9-11ef-99a3-01e77a169b0f; Fri, 25 Oct 2024 12:08:42 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.163.44]) by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4XZdlw6k9cz1yng3; Fri, 25 Oct 2024 18:08:44 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 41002140360; Fri, 25 Oct 2024 18:08:37 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:35 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 1a81b42d-92b9-11ef-99a3-01e77a169b0f From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 03/19] arm64: entry: Remove __enter_from_user_mode() Date: Fri, 25 Oct 2024 18:06:44 +0800 Message-ID: <20241025100700.3714552-4-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 1729850956685116600 Content-Type: text/plain; charset="utf-8" The __enter_from_user_mode() is only called by enter_from_user_mode(), so replaced it with enter_from_user_mode(). No functional changes. Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index 68a9aecacdb9..ccf59b44464d 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -109,7 +109,7 @@ static void noinstr exit_to_kernel_mode(struct pt_regs = *regs, * Before this function is called it is not safe to call regular kernel co= de, * instrumentable code, or any code which may trigger an exception. 
*/ -static __always_inline void __enter_from_user_mode(void) +static __always_inline void enter_from_user_mode(struct pt_regs *regs) { lockdep_hardirqs_off(CALLER_ADDR0); CT_WARN_ON(ct_state() !=3D CT_STATE_USER); @@ -118,11 +118,6 @@ static __always_inline void __enter_from_user_mode(voi= d) mte_disable_tco_entry(current); } =20 -static __always_inline void enter_from_user_mode(struct pt_regs *regs) -{ - __enter_from_user_mode(); -} - /* * Handle IRQ/context state management when exiting to user mode. * After this function returns it is not safe to call regular kernel code, --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga07-in.huawei.com (szxga07-in.huawei.com [45.249.212.35]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 97AAF1D47C7 for ; Fri, 25 Oct 2024 10:08:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.35 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850923; cv=none; b=tO5LOxcqdBC2DjVzDgqINSlHRLmECF/y7rp0Rvc2xZxJ2/UiMLAbgVinlC9B43niRQ2XfXw4shZs7/VTl4QJfD1d4wsYaUaaPXlUJ7QqcJx0+o4sHEO2mdtp5H1a09/ShyouQp6RFw5MrbW1fUc8fJkd/rlpkFsx0c3mCTlcJek= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850923; c=relaxed/simple; bh=FLsHu0qjo5G4VRdjZlOasGkexWsOwJQe+LUn8sRCXqo=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=Gr1oULvGJNMtE4AS9UtbVxsqwvRr+0stXm6oVqGcmh/FkbwEPuiiS9vgv6BCR4vvGUlCwq8ZLAP6cLjmyAvANpezaGNzlsoCgskpp7Bv1mU2dui4dTg2w8fvXEGVFUyS5vUeO1aliJwLpJcs5lYdRdta61sZtd3el95N6U7aYQQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.35 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.88.234]) by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4XZdk82QfKz1SDBV; Fri, 25 Oct 2024 18:07:12 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 94CA51401F1; Fri, 25 Oct 2024 18:08:38 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:37 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 04/19] arm64: entry: Remove __enter_from_kernel_mode() Date: Fri, 25 Oct 2024 18:06:45 +0800 Message-ID: <20241025100700.3714552-5-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" The __enter_from_kernel_mode() is only called by enter_from_kernel_mode(), remove it. No functional changes. 
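For reference, a sketch of the resulting enter_from_kernel_mode() once the helper is folded in (reconstructed from the diffs in patches 02 and 04; shown only to illustrate the end state, not as new code):

	static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
	{
		irqentry_state_t ret = {
			.exit_rcu = false,
		};

		/* Taken from the idle task: RCU may not be watching, so enter it and remember to exit later. */
		if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) {
			lockdep_hardirqs_off(CALLER_ADDR0);
			ct_irq_enter();
			trace_hardirqs_off_finish();

			ret.exit_rcu = true;
			return ret;
		}

		lockdep_hardirqs_off(CALLER_ADDR0);
		rcu_irq_enter_check_tick();
		trace_hardirqs_off_finish();

		/* MTE entry fixups, previously done in the enter_from_kernel_mode() wrapper. */
		mte_check_tfsr_entry();
		mte_disable_tco_entry(current);

		return ret;
	}
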
Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index ccf59b44464d..a7fd4d6c7650 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -36,7 +36,7 @@ * This is intended to match the logic in irqentry_enter(), handling the k= ernel * mode transitions only. */ -static __always_inline irqentry_state_t __enter_from_kernel_mode(struct pt= _regs *regs) +static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *reg= s) { irqentry_state_t ret =3D { .exit_rcu =3D false, @@ -55,13 +55,6 @@ static __always_inline irqentry_state_t __enter_from_ker= nel_mode(struct pt_regs rcu_irq_enter_check_tick(); trace_hardirqs_off_finish(); =20 - return ret; -} - -static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *reg= s) -{ - irqentry_state_t ret =3D __enter_from_kernel_mode(regs); - mte_check_tfsr_entry(); mte_disable_tco_entry(current); =20 --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1729850961447614.5147395312198; Fri, 25 Oct 2024 03:09:21 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.825840.1240161 (Exim 4.92) (envelope-from ) id 1t4HFQ-00013N-6n; Fri, 25 Oct 2024 10:08:48 +0000 Received: by outflank-mailman (output) from mailman id 825840.1240161; Fri, 25 Oct 2024 10:08:48 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFQ-00012d-14; Fri, 25 Oct 2024 10:08:48 +0000 Received: by outflank-mailman (input) for mailman id 825840; Fri, 25 Oct 2024 10:08:47 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFP-0000tn-FC for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:08:47 +0000 Received: from szxga05-in.huawei.com (szxga05-in.huawei.com [45.249.212.191]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 1c286c7a-92b9-11ef-99a3-01e77a169b0f; Fri, 25 Oct 2024 12:08:44 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.88.163]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4XZdkB3WLkz1jvxM; Fri, 25 Oct 2024 18:07:14 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id F313F180043; Fri, 25 Oct 2024 18:08:39 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:38 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: 
List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 1c286c7a-92b9-11ef-99a3-01e77a169b0f From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 05/19] arm64: entry: Remove __exit_to_kernel_mode() Date: Fri, 25 Oct 2024 18:06:46 +0800 Message-ID: <20241025100700.3714552-6-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 1729850962718116600 Content-Type: text/plain; charset="utf-8" __exit_to_kernel_mode() is only called by exit_to_kernel_mode(), remove it. No functional changes. Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index a7fd4d6c7650..137481a3f0fa 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -69,9 +69,11 @@ static noinstr irqentry_state_t enter_from_kernel_mode(s= truct pt_regs *regs) * This is intended to match the logic in irqentry_exit(), handling the ke= rnel * mode transitions only, and with preemption handled elsewhere. */ -static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs, - irqentry_state_t state) +static void noinstr exit_to_kernel_mode(struct pt_regs *regs, + irqentry_state_t state) { + mte_check_tfsr_exit(); + lockdep_assert_irqs_disabled(); =20 if (!regs_irqs_disabled(regs)) { @@ -90,13 +92,6 @@ static __always_inline void __exit_to_kernel_mode(struct= pt_regs *regs, } } =20 -static void noinstr exit_to_kernel_mode(struct pt_regs *regs, - irqentry_state_t state) -{ - mte_check_tfsr_exit(); - __exit_to_kernel_mode(regs, state); -} - /* * Handle IRQ/context state management when entering from user mode. 
* Before this function is called it is not safe to call regular kernel co= de, --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1729850963707754.1144309212227; Fri, 25 Oct 2024 03:09:23 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.825841.1240180 (Exim 4.92) (envelope-from ) id 1t4HFS-0001cw-Jc; Fri, 25 Oct 2024 10:08:50 +0000 Received: by outflank-mailman (output) from mailman id 825841.1240180; Fri, 25 Oct 2024 10:08:50 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFS-0001cn-FT; Fri, 25 Oct 2024 10:08:50 +0000 Received: by outflank-mailman (input) for mailman id 825841; Fri, 25 Oct 2024 10:08:48 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFQ-00014t-FW for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:08:48 +0000 Received: from szxga07-in.huawei.com (szxga07-in.huawei.com [45.249.212.35]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id 1cf8b35f-92b9-11ef-a0bf-8be0dac302b0; Fri, 25 Oct 2024 12:08:45 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.88.214]) by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4XZdkC0sXgz1SDJF; Fri, 25 Oct 2024 18:07:15 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 5F07C1A016C; Fri, 25 Oct 2024 18:08:41 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:39 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 1cf8b35f-92b9-11ef-a0bf-8be0dac302b0 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 06/19] arm64: entry: Move arm64_preempt_schedule_irq() into exit_to_kernel_mode() Date: Fri, 25 Oct 2024 18:06:47 +0800 Message-ID: <20241025100700.3714552-7-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 1729850964739116600 Content-Type: text/plain; charset="utf-8" Move arm64_preempt_schedule_irq() into 
exit_to_kernel_mode(), so not only __el1_irq() but also every time when kernel mode irq return, there is a chance to reschedule. As Mark pointed out, this change will have the following key impact: "We'll preempt even without taking a "real" interrupt. That shouldn't result in preemption that wasn't possible before, but it does change the probability of preempting at certain points, and might have a performance impact, so probably warrants a benchmark." Suggested-by: Mark Rutland Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 88 ++++++++++++++++---------------- 1 file changed, 44 insertions(+), 44 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index 137481a3f0fa..e0380812d71e 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -61,6 +61,48 @@ static noinstr irqentry_state_t enter_from_kernel_mode(s= truct pt_regs *regs) return ret; } =20 +#ifdef CONFIG_PREEMPT_DYNAMIC +DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); +#define need_irq_preemption() \ + (static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) +#else +#define need_irq_preemption() (IS_ENABLED(CONFIG_PREEMPTION)) +#endif + +static void __sched arm64_preempt_schedule_irq(void) +{ + if (!need_irq_preemption()) + return; + + /* + * Note: thread_info::preempt_count includes both thread_info::count + * and thread_info::need_resched, and is not equivalent to + * preempt_count(). + */ + if (READ_ONCE(current_thread_info()->preempt_count) !=3D 0) + return; + + /* + * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC + * priority masking is used the GIC irqchip driver will clear DAIF.IF + * using gic_arch_enable_irqs() for normal IRQs. If anything is set in + * DAIF we must have handled an NMI, so skip preemption. + */ + if (system_uses_irq_prio_masking() && read_sysreg(daif)) + return; + + /* + * Preempting a task from an IRQ means we leave copies of PSTATE + * on the stack. cpufeature's enable calls may modify PSTATE, but + * resuming one of these preempted tasks would undo those changes. + * + * Only allow a task to be preempted once cpufeatures have been + * enabled. + */ + if (system_capabilities_finalized()) + preempt_schedule_irq(); +} + /* * Handle IRQ/context state management when exiting to kernel mode. * After this function returns it is not safe to call regular kernel code, @@ -72,6 +114,8 @@ static noinstr irqentry_state_t enter_from_kernel_mode(s= truct pt_regs *regs) static void noinstr exit_to_kernel_mode(struct pt_regs *regs, irqentry_state_t state) { + arm64_preempt_schedule_irq(); + mte_check_tfsr_exit(); =20 lockdep_assert_irqs_disabled(); @@ -257,48 +301,6 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs = *regs, lockdep_hardirqs_on(CALLER_ADDR0); } =20 -#ifdef CONFIG_PREEMPT_DYNAMIC -DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); -#define need_irq_preemption() \ - (static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) -#else -#define need_irq_preemption() (IS_ENABLED(CONFIG_PREEMPTION)) -#endif - -static void __sched arm64_preempt_schedule_irq(void) -{ - if (!need_irq_preemption()) - return; - - /* - * Note: thread_info::preempt_count includes both thread_info::count - * and thread_info::need_resched, and is not equivalent to - * preempt_count(). 
- */ - if (READ_ONCE(current_thread_info()->preempt_count) !=3D 0) - return; - - /* - * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC - * priority masking is used the GIC irqchip driver will clear DAIF.IF - * using gic_arch_enable_irqs() for normal IRQs. If anything is set in - * DAIF we must have handled an NMI, so skip preemption. - */ - if (system_uses_irq_prio_masking() && read_sysreg(daif)) - return; - - /* - * Preempting a task from an IRQ means we leave copies of PSTATE - * on the stack. cpufeature's enable calls may modify PSTATE, but - * resuming one of these preempted tasks would undo those changes. - * - * Only allow a task to be preempted once cpufeatures have been - * enabled. - */ - if (system_capabilities_finalized()) - preempt_schedule_irq(); -} - static void do_interrupt_handler(struct pt_regs *regs, void (*handler)(struct pt_regs *)) { @@ -567,8 +569,6 @@ static __always_inline void __el1_irq(struct pt_regs *r= egs, do_interrupt_handler(regs, handler); irq_exit_rcu(); =20 - arm64_preempt_schedule_irq(); - exit_to_kernel_mode(regs, state); } static void noinstr el1_interrupt(struct pt_regs *regs, --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [45.249.212.190]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9ADF61F8F08 for ; Fri, 25 Oct 2024 10:08:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.190 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850929; cv=none; b=XXynemGqk3EzNX5NbMwlAlY2Z97kIU7K13LtZQoA1ND1ZfyeVJTlx1M15m8B0+PIH04dwJeevdl4zQG3TLi9pK3kgNUeZZufaPvr4afASkpEhsCMAcOT/rlAAiDCFDa6kwk5TFAHo/fmUFyDx+mIsvIR8foEYKjlgYQjbxcyqMk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850929; c=relaxed/simple; bh=9kqHcB+7k53qLeynfgYZAgp7xh2N7UyG6JIxMziXokc=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=BYJalRdAL5RlMAvX0HkvWgTd2o0uhhkutinCI1g3Sgry7404jxbFPrA/ongisFZDsakDLefY6EFzBImHSx39valZJHQK0xHQGNbtKYU9RxX7Ajn6Py9oSo+48/SowwYxKWyALuQ+oXchEziL6MuXVgIpE7nOcy6G5TwkXjqt83o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.190 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.17]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4XZdkr6FBHz20qdN; Fri, 25 Oct 2024 18:07:48 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id B7E6D1A0188; Fri, 25 Oct 2024 18:08:42 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:41 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 07/19] arm64: entry: Call arm64_preempt_schedule_irq() only if irqs enabled Date: Fri, 25 Oct 2024 18:06:48 +0800 Message-ID: <20241025100700.3714552-8-ruanjinjie@huawei.com> X-Mailer: 
git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" Only if irqs are enabled when the interrupt trapped, there may be a chance to reschedule after the interrupt has been handled, so move arm64_preempt_schedule_irq() into regs_irqs_disabled() check false if block. As Mark pointed out, this change will have the following key impact: "We will not preempt when taking interrupts from a region of kernel code where IRQs are enabled but RCU is not watching, matching the behaviour of the generic entry code. This has the potential to introduce livelock if we can ever have a screaming interrupt in such a region, so we'll need to go figure out whether that's actually a problem. Having this as a separate patch will make it easier to test/bisect for that specifically." Suggested-by: Mark Rutland Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index e0380812d71e..b57f6dc66115 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -114,8 +114,6 @@ static void __sched arm64_preempt_schedule_irq(void) static void noinstr exit_to_kernel_mode(struct pt_regs *regs, irqentry_state_t state) { - arm64_preempt_schedule_irq(); - mte_check_tfsr_exit(); =20 lockdep_assert_irqs_disabled(); @@ -129,6 +127,8 @@ static void noinstr exit_to_kernel_mode(struct pt_regs = *regs, return; } =20 + arm64_preempt_schedule_irq(); + trace_hardirqs_on(); } else { if (state.exit_rcu) --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1729850964323579.1845451341107; Fri, 25 Oct 2024 03:09:24 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.825844.1240199 (Exim 4.92) (envelope-from ) id 1t4HFT-0001vV-Vw; Fri, 25 Oct 2024 10:08:51 +0000 Received: by outflank-mailman (output) from mailman id 825844.1240199; Fri, 25 Oct 2024 10:08:51 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFT-0001tK-KE; Fri, 25 Oct 2024 10:08:51 +0000 Received: by outflank-mailman (input) for mailman id 825844; Fri, 25 Oct 2024 10:08:50 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFS-00014t-4g for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:08:50 +0000 Received: from szxga08-in.huawei.com 
(szxga08-in.huawei.com [45.249.212.255]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id 1ea77fac-92b9-11ef-a0bf-8be0dac302b0; Fri, 25 Oct 2024 12:08:47 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.163.252]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4XZdjW6PJ5z1T95x; Fri, 25 Oct 2024 18:06:39 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 1E0281800E2; Fri, 25 Oct 2024 18:08:44 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:42 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 1ea77fac-92b9-11ef-a0bf-8be0dac302b0 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 08/19] arm64: entry: Rework arm64_preempt_schedule_irq() Date: Fri, 25 Oct 2024 18:06:49 +0800 Message-ID: <20241025100700.3714552-9-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 1729850966694116600 Content-Type: text/plain; charset="utf-8" Rework arm64_preempt_schedule_irq() to check whether it need resched in a check function arm64_irqentry_exit_need_resched(). No functional changes. Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index b57f6dc66115..a3414fb599fa 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -69,10 +69,10 @@ DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_re= sched); #define need_irq_preemption() (IS_ENABLED(CONFIG_PREEMPTION)) #endif =20 -static void __sched arm64_preempt_schedule_irq(void) +static inline bool arm64_irqentry_exit_need_resched(void) { if (!need_irq_preemption()) - return; + return false; =20 /* * Note: thread_info::preempt_count includes both thread_info::count @@ -80,7 +80,7 @@ static void __sched arm64_preempt_schedule_irq(void) * preempt_count(). */ if (READ_ONCE(current_thread_info()->preempt_count) !=3D 0) - return; + return false; =20 /* * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC @@ -89,7 +89,7 @@ static void __sched arm64_preempt_schedule_irq(void) * DAIF we must have handled an NMI, so skip preemption. */ if (system_uses_irq_prio_masking() && read_sysreg(daif)) - return; + return false; =20 /* * Preempting a task from an IRQ means we leave copies of PSTATE @@ -99,8 +99,10 @@ static void __sched arm64_preempt_schedule_irq(void) * Only allow a task to be preempted once cpufeatures have been * enabled. 
*/ - if (system_capabilities_finalized()) - preempt_schedule_irq(); + if (!system_capabilities_finalized()) + return false; + + return true; } =20 /* @@ -127,7 +129,8 @@ static void noinstr exit_to_kernel_mode(struct pt_regs = *regs, return; } =20 - arm64_preempt_schedule_irq(); + if (arm64_irqentry_exit_need_resched()) + preempt_schedule_irq(); =20 trace_hardirqs_on(); } else { --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1729850961487536.8052776414389; Fri, 25 Oct 2024 03:09:21 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.825848.1240245 (Exim 4.92) (envelope-from ) id 1t4HFZ-0003Lg-I2; Fri, 25 Oct 2024 10:08:57 +0000 Received: by outflank-mailman (output) from mailman id 825848.1240245; Fri, 25 Oct 2024 10:08:57 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFZ-0003KH-4a; Fri, 25 Oct 2024 10:08:57 +0000 Received: by outflank-mailman (input) for mailman id 825848; Fri, 25 Oct 2024 10:08:55 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFX-0000tn-Hr for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:08:55 +0000 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 1f83c448-92b9-11ef-99a3-01e77a169b0f; Fri, 25 Oct 2024 12:08:52 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.88.105]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4XZdjh6T3TzpX51; Fri, 25 Oct 2024 18:06:48 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 74D8E14011B; Fri, 25 Oct 2024 18:08:45 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:43 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 1f83c448-92b9-11ef-99a3-01e77a169b0f From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 09/19] arm64: entry: Use preempt_count() and need_resched() helper Date: Fri, 25 Oct 2024 18:06:50 +0800 Message-ID: <20241025100700.3714552-10-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> MIME-Version: 1.0 
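A small, self-contained sketch of the split introduced by the rework above: the predicate only decides whether a reschedule is allowed, and the caller performs preempt_schedule_irq(). The helper names and values below are invented placeholders for the real checks (preempt_count, pseudo-NMI detection, cpufeature state), not the actual kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the real arm64 checks; values are placeholders. */
static int  preempt_count_model;          /* thread_info::preempt_count */
static bool in_nmi_like_context;          /* pNMI taken with DAIF set */
static bool cpu_caps_finalized = true;    /* system_capabilities_finalized() */

/* Decision only: mirrors arm64_irqentry_exit_need_resched() returning bool. */
static bool irqentry_exit_need_resched_model(void)
{
	if (preempt_count_model != 0)
		return false;
	if (in_nmi_like_context)
		return false;
	if (!cpu_caps_finalized)
		return false;
	return true;
}

int main(void)
{
	/* The caller owns the action, keeping the predicate side-effect free. */
	if (irqentry_exit_need_resched_model())
		puts("preempt_schedule_irq()");
	return 0;
}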
Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 1729850962714116600 Content-Type: text/plain; charset="utf-8" The "READ_ONCE(current_thread_info()->preempt_count =3D 0" is equivalent to "preempt_count() =3D=3D 0 && need_resched()", so use these helpers to replace it, which will make it more clear when switch to generic entry. No functional changes. Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index a3414fb599fa..3ea3ab32d232 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -74,14 +74,6 @@ static inline bool arm64_irqentry_exit_need_resched(void) if (!need_irq_preemption()) return false; =20 - /* - * Note: thread_info::preempt_count includes both thread_info::count - * and thread_info::need_resched, and is not equivalent to - * preempt_count(). - */ - if (READ_ONCE(current_thread_info()->preempt_count) !=3D 0) - return false; - /* * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC * priority masking is used the GIC irqchip driver will clear DAIF.IF @@ -129,8 +121,10 @@ static void noinstr exit_to_kernel_mode(struct pt_regs= *regs, return; } =20 - if (arm64_irqentry_exit_need_resched()) - preempt_schedule_irq(); + if (!preempt_count()) { + if (need_resched() && arm64_irqentry_exit_need_resched()) + preempt_schedule_irq(); + } =20 trace_hardirqs_on(); } else { --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [45.249.212.190]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DCDCA2003C5 for ; Fri, 25 Oct 2024 10:08:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.190 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850932; cv=none; b=Vy2mwwJPUq0WNsY5ZgG1Pj7y/fdMFnNp/B3EVPDjoqnAUybwrkcMagti0xekPtbg/AiSRkgE+eDYdxz7R4TmkxfuBfr1BeG2Ulvbj4l+nLJyi0KKeO+jhYv37VQniwEy0g4t1/Dkp/hXDZ3FZnoFRsi0HPGueHFCVA893eSPxOQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850932; c=relaxed/simple; bh=nwRo69/b02TZ0djNQMgVRpNwZSsAQF+pxOwFEiQsxuY=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=BOllEirycKgsNevgnytGWkPOZhoYYp4nA4uHqBFMC6pwLsElpxV/9Gl22L0wd9t9SWJFNDdSue1cmNwX788A4uXccfdOAQhVkkXhgwh5kWJd/qn3o83pPJxsdmOTkGpoX9lfx8rUfz4T+WQa8cfpuC+sT78TF5FEirJryVphwdI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.190 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.88.234]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4XZdkw6p65z20qy6; Fri, 25 Oct 2024 18:07:52 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id CD2451401F1; Fri, 25 Oct 2024 18:08:46 +0800 (CST) Received: from huawei.com (10.90.53.73) by 
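The equivalence used in the patch above can be seen with a tiny model: arm64 packs a 32-bit nesting count together with a 32-bit word that is zero exactly when a reschedule is needed, so testing the packed value against zero is the same as "preempt_count() == 0 && need_resched()". The sketch below uses invented names and plain integers, not the real thread_info layout.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Model: low half is the nesting count, high half is a "no resched needed"
 * word that is 0 when a reschedule is pending (inverted, as on arm64).
 */
static uint64_t pack(uint32_t count, uint32_t no_resched_needed)
{
	return ((uint64_t)no_resched_needed << 32) | count;
}

static bool packed_is_zero(uint32_t count, uint32_t no_resched_needed)
{
	return pack(count, no_resched_needed) == 0;	/* old single-word test */
}

static bool split_check(uint32_t count, uint32_t no_resched_needed)
{
	bool need_resched = (no_resched_needed == 0);

	return count == 0 && need_resched;	/* preempt_count() == 0 && need_resched() */
}

int main(void)
{
	for (uint32_t c = 0; c < 2; c++)
		for (uint32_t n = 0; n < 2; n++)
			printf("count=%u resched_word=%u packed=%d split=%d\n",
			       c, n, packed_is_zero(c, n), split_check(c, n));
	return 0;
}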
kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:45 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 10/19] arm64: entry: preempt_schedule_irq() only if PREEMPTION enabled Date: Fri, 25 Oct 2024 18:06:51 +0800 Message-ID: <20241025100700.3714552-11-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" Whether PREEMPT_DYNAMIC enabled or not, PREEMPTION should be enabled to allow reschedule after an interrupt. Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index 3ea3ab32d232..58d660878c09 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -65,8 +65,6 @@ static noinstr irqentry_state_t enter_from_kernel_mode(st= ruct pt_regs *regs) DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); #define need_irq_preemption() \ (static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) -#else -#define need_irq_preemption() (IS_ENABLED(CONFIG_PREEMPTION)) #endif =20 static inline bool arm64_irqentry_exit_need_resched(void) @@ -121,9 +119,12 @@ static void noinstr exit_to_kernel_mode(struct pt_regs= *regs, return; } =20 - if (!preempt_count()) { - if (need_resched() && arm64_irqentry_exit_need_resched()) - preempt_schedule_irq(); + if (IS_ENABLED(CONFIG_PREEMPTION)) { + if (!preempt_count()) { + if (need_resched() && + arm64_irqentry_exit_need_resched()) + preempt_schedule_irq(); + } } =20 trace_hardirqs_on(); --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga05-in.huawei.com (szxga05-in.huawei.com [45.249.212.191]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A889C201017 for ; Fri, 25 Oct 2024 10:08:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.191 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850933; cv=none; b=pzqCwb6PrE9c5BD4F1jL9x8erB/8JBhI2PFYwf5NbcZo8rE5n24dahpRZSSn0GnTNfULMBNOZ+tuNmqE/fFPIpoOj9gjFHkx7Zg0P6/zD45Tn6P9n4PQbvkLHXlJVfW/9ydSwWgmh9uXD1ZJOBm71U/X2u926wkorecNF0cu12k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850933; c=relaxed/simple; bh=N+SRuvOelmKf4isWCLciC4gqRGgc2Cyj+tD9maRjaf0=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=uJCWgYNSGa6YTRd82ToQ+L0AweDAj8aqj1O08sj4hhkutFTxubBQrVceSf3+Xi6isp364udxakqiLHtHnoqWhB9yLrd2p7J9JaooN0+2Yyn1HgTYlJy9VM/8QsRhDsZ6BRFcQTtKDKRBPrHU0IAjltq3W295hkEOmCjHsCct6Fg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.191 Authentication-Results: 
smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.17]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4XZdfx13mZz1HLR8; Fri, 25 Oct 2024 18:04:25 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 357351A0188; Fri, 25 Oct 2024 18:08:48 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:46 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 11/19] arm64: entry: Extract raw_irqentry_exit_cond_resched() function Date: Fri, 25 Oct 2024 18:06:52 +0800 Message-ID: <20241025100700.3714552-12-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" Extract the arm64 resched logic code to raw_irqentry_exit_cond_resched() function, which makes the code more clear when switch to generic entry. No functional changes. Signed-off-by: Jinjie Ruan --- arch/arm64/include/asm/preempt.h | 1 + arch/arm64/kernel/entry-common.c | 17 ++++++++++------- 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/pree= mpt.h index 0159b625cc7f..d0f93385bd85 100644 --- a/arch/arm64/include/asm/preempt.h +++ b/arch/arm64/include/asm/preempt.h @@ -85,6 +85,7 @@ static inline bool should_resched(int preempt_offset) void preempt_schedule(void); void preempt_schedule_notrace(void); =20 +void raw_irqentry_exit_cond_resched(void); #ifdef CONFIG_PREEMPT_DYNAMIC =20 DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index 58d660878c09..5b7df53cfcf6 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -95,6 +95,14 @@ static inline bool arm64_irqentry_exit_need_resched(void) return true; } =20 +void raw_irqentry_exit_cond_resched(void) +{ + if (!preempt_count()) { + if (need_resched() && arm64_irqentry_exit_need_resched()) + preempt_schedule_irq(); + } +} + /* * Handle IRQ/context state management when exiting to kernel mode. 
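A rough sketch of the shape the kernel-exit tail takes after the two changes above (the IS_ENABLED(CONFIG_PREEMPTION) gate and the extracted raw_irqentry_exit_cond_resched() helper). The macro and helpers are stand-ins, assumed for illustration only.

#include <stdbool.h>
#include <stdio.h>

#define PREEMPTION_ENABLED_MODEL 1	/* stand-in for IS_ENABLED(CONFIG_PREEMPTION) */

static int  preempt_count_model;
static bool need_resched_model = true;
static bool arch_need_resched_model = true;

static void preempt_schedule_irq_model(void)
{
	puts("preempt_schedule_irq()");
}

/* Extracted helper, mirroring raw_irqentry_exit_cond_resched(). */
static void raw_exit_cond_resched_model(void)
{
	if (!preempt_count_model) {
		if (need_resched_model && arch_need_resched_model)
			preempt_schedule_irq_model();
	}
}

/*
 * The caller shrinks to a single gated call; when the gate is compiled out
 * the whole block is dead code the compiler can drop.
 */
static void exit_to_kernel_mode_tail_model(void)
{
	if (PREEMPTION_ENABLED_MODEL)
		raw_exit_cond_resched_model();
}

int main(void)
{
	exit_to_kernel_mode_tail_model();
	return 0;
}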
* After this function returns it is not safe to call regular kernel code, @@ -119,13 +127,8 @@ static void noinstr exit_to_kernel_mode(struct pt_regs= *regs, return; } =20 - if (IS_ENABLED(CONFIG_PREEMPTION)) { - if (!preempt_count()) { - if (need_resched() && - arm64_irqentry_exit_need_resched()) - preempt_schedule_irq(); - } - } + if (IS_ENABLED(CONFIG_PREEMPTION)) + raw_irqentry_exit_cond_resched(); =20 trace_hardirqs_on(); } else { --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 172985096118364.38790056107734; Fri, 25 Oct 2024 03:09:21 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.825851.1240276 (Exim 4.92) (envelope-from ) id 1t4HFe-0004dX-Fd; Fri, 25 Oct 2024 10:09:02 +0000 Received: by outflank-mailman (output) from mailman id 825851.1240276; Fri, 25 Oct 2024 10:09:02 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFe-0004cA-7Y; Fri, 25 Oct 2024 10:09:02 +0000 Received: by outflank-mailman (input) for mailman id 825851; Fri, 25 Oct 2024 10:09:00 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFc-0000tn-EL for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:09:00 +0000 Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 21cdbb47-92b9-11ef-99a3-01e77a169b0f; Fri, 25 Oct 2024 12:08:57 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.162.254]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4XZdjd2lHyz1T7q6; Fri, 25 Oct 2024 18:06:45 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 9417D18010F; Fri, 25 Oct 2024 18:08:49 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:48 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 21cdbb47-92b9-11ef-99a3-01e77a169b0f From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 12/19] arm64: entry: Check dynamic key ahead Date: Fri, 25 Oct 2024 18:06:53 +0800 Message-ID: <20241025100700.3714552-13-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> 
MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 1729850962704116600 Content-Type: text/plain; charset="utf-8" Check dynamic key ahead in raw_irqentry_exit_cond_resched(), which will make arm64_irqentry_exit_need_resched() all about arch-specific. No functional changes. Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/entry-common.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index 5b7df53cfcf6..3b110dcf4fa3 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -63,15 +63,10 @@ static noinstr irqentry_state_t enter_from_kernel_mode(= struct pt_regs *regs) =20 #ifdef CONFIG_PREEMPT_DYNAMIC DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); -#define need_irq_preemption() \ - (static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) #endif =20 static inline bool arm64_irqentry_exit_need_resched(void) { - if (!need_irq_preemption()) - return false; - /* * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC * priority masking is used the GIC irqchip driver will clear DAIF.IF @@ -97,6 +92,11 @@ static inline bool arm64_irqentry_exit_need_resched(void) =20 void raw_irqentry_exit_cond_resched(void) { +#ifdef CONFIG_PREEMPT_DYNAMIC + if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) + return; +#endif + if (!preempt_count()) { if (need_resched() && arm64_irqentry_exit_need_resched()) preempt_schedule_irq(); --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 875032038A2 for ; Fri, 25 Oct 2024 10:08:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.187 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850937; cv=none; b=mlMzHnqxgMIdO7sTlySX07zDSjn1cD3okT5eTMvEkoE0s9XBbUb/leGnx/hHuo99PiJTLsR7tGTyMPUNYxPEd+Rdq8/ns7PWNkm47zP1RagriwSc6qeg4ofjOkNMsriyxYGF6NiKTXfDWwJQmDZv2JYvcSyB7tYz9i5KoGKY/+U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850937; c=relaxed/simple; bh=7evoTh19wJPUUGn+APPEiPtALN4zxV5WuwrZZIwIru0=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=I/zD5UkCdxxAa1vR66g5JNxpVVhZNo0F+/dwafBFwPIEEeoinQ30a5JRmFcVvX/yTbSh4ZjXNy1rad44vYzPgQQSh6uz3zZToBqgV8kW7W1BLmsO0i+0n3LAhTQUhNgstGhnbt2h8iHiJ6JG4eY0H0DqyM/Ab8Oh8IKv0mqdi+s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.187 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.88.194]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4XZdjf346yz10McY; Fri, 25 Oct 2024 18:06:46 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id E7F3D1400E3; Fri, 25 Oct 2024 18:08:50 +0800 (CST) Received: from huawei.com 
(10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:49 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 13/19] arm64: entry: Check dynamic resched when PREEMPT_DYNAMIC enabled Date: Fri, 25 Oct 2024 18:06:54 +0800 Message-ID: <20241025100700.3714552-14-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" Check dynamic resched alone when PREEMPT_DYNAMIC enabled. No functional changes. Signed-off-by: Jinjie Ruan --- arch/arm64/include/asm/preempt.h | 3 +++ arch/arm64/kernel/entry-common.c | 21 +++++++++++---------- 2 files changed, 14 insertions(+), 10 deletions(-) diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/pree= mpt.h index d0f93385bd85..0f0ba250efe8 100644 --- a/arch/arm64/include/asm/preempt.h +++ b/arch/arm64/include/asm/preempt.h @@ -93,11 +93,14 @@ void dynamic_preempt_schedule(void); #define __preempt_schedule() dynamic_preempt_schedule() void dynamic_preempt_schedule_notrace(void); #define __preempt_schedule_notrace() dynamic_preempt_schedule_notrace() +void dynamic_irqentry_exit_cond_resched(void); +#define irqentry_exit_cond_resched() dynamic_irqentry_exit_cond_resched() =20 #else /* CONFIG_PREEMPT_DYNAMIC */ =20 #define __preempt_schedule() preempt_schedule() #define __preempt_schedule_notrace() preempt_schedule_notrace() +#define irqentry_exit_cond_resched() raw_irqentry_exit_cond_resched() =20 #endif /* CONFIG_PREEMPT_DYNAMIC */ #endif /* CONFIG_PREEMPTION */ diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index 3b110dcf4fa3..152216201f84 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -61,10 +61,6 @@ static noinstr irqentry_state_t enter_from_kernel_mode(s= truct pt_regs *regs) return ret; } =20 -#ifdef CONFIG_PREEMPT_DYNAMIC -DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); -#endif - static inline bool arm64_irqentry_exit_need_resched(void) { /* @@ -92,17 +88,22 @@ static inline bool arm64_irqentry_exit_need_resched(voi= d) =20 void raw_irqentry_exit_cond_resched(void) { -#ifdef CONFIG_PREEMPT_DYNAMIC - if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) - return; -#endif - if (!preempt_count()) { if (need_resched() && arm64_irqentry_exit_need_resched()) preempt_schedule_irq(); } } =20 +#ifdef CONFIG_PREEMPT_DYNAMIC +DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); +void dynamic_irqentry_exit_cond_resched(void) +{ + if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) + return; + raw_irqentry_exit_cond_resched(); +} +#endif + /* * Handle IRQ/context state management when exiting to kernel mode. 
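A compile-time/run-time sketch of the dispatch arranged by the last two patches: the dynamic wrapper consults a key first, and irqentry_exit_cond_resched() maps to either the dynamic or the raw variant. A plain global bool stands in for the static key here (the real code uses DEFINE_STATIC_KEY_TRUE and static_branch_unlikely(), which have no userspace equivalent); all *_model names are invented.

#include <stdbool.h>
#include <stdio.h>

#define PREEMPT_DYNAMIC_MODEL 1		/* stand-in for CONFIG_PREEMPT_DYNAMIC */

static void raw_irqentry_exit_cond_resched_model(void)
{
	puts("cond_resched on irq exit");
}

#if PREEMPT_DYNAMIC_MODEL
/* Plain bool in place of sk_dynamic_irqentry_exit_cond_resched. */
static bool dynamic_resched_enabled = true;

static void dynamic_irqentry_exit_cond_resched_model(void)
{
	if (!dynamic_resched_enabled)
		return;
	raw_irqentry_exit_cond_resched_model();
}
# define irqentry_exit_cond_resched_model() dynamic_irqentry_exit_cond_resched_model()
#else
# define irqentry_exit_cond_resched_model() raw_irqentry_exit_cond_resched_model()
#endif

int main(void)
{
	/* Callers always use the one name; the mapping picks the variant. */
	irqentry_exit_cond_resched_model();
	return 0;
}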
* After this function returns it is not safe to call regular kernel code, @@ -128,7 +129,7 @@ static void noinstr exit_to_kernel_mode(struct pt_regs = *regs, } =20 if (IS_ENABLED(CONFIG_PREEMPTION)) - raw_irqentry_exit_cond_resched(); + irqentry_exit_cond_resched(); =20 trace_hardirqs_on(); } else { --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EFEBC204F98 for ; Fri, 25 Oct 2024 10:09:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.189 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850946; cv=none; b=KsQ8QKvfNiQ9YXlD/Bs06/s5Jnpn56ypoakiPWZnAkpj0XOTTUySjbtLLZajAHk4t0rvzmg98dwc6FWNhdal2rIFTiciRdGFX3s9Uxy+M3RzvEY9QLLY3PvbDNJr5O8N20fwakbUg3YVF9mO3iJrgpJTJ7Ah9GVDLccBsyD5GI4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850946; c=relaxed/simple; bh=rL+8drHqW9cOZISA2bW1H+JB6rJ6Ricn9MfPmjA89Dg=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=hORgjKFBaRnADaE5iT5bM8HWauG4m3kZH1EWJrK3OzBEmV6AoF7FxK4osPEaVStaHei+W/P4+guwyz41rlEeDgTLAzmykoY/R9Sq45pKk+oVa3NQDwfy+tSLkMBUWWyOr4nYxFciECdImn7HEIwrp/hQglQhMHusYRHvsMDBUBQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.189 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.252]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4XZdl229wbzQsFJ; Fri, 25 Oct 2024 18:07:58 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 5BF2B1800A5; Fri, 25 Oct 2024 18:08:52 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:50 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 14/19] entry: Split into irq entry and syscall Date: Fri, 25 Oct 2024 18:06:55 +0800 Message-ID: <20241025100700.3714552-15-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" As Mark pointed out, do not try to switch to *all* the generic entry code in one go. The regular entry state management (e.g. enter_from_user_mode() and exit_to_user_mode()) is largely separate from the syscall state management. Move arm64 over to enter_from_user_mode() and exit_to_user_mode() without needing to use any of the generic syscall logic. 
Doing that first, *then* moving over to the generic syscall handling would be much easier to review/test/bisect, and if there are any ABI issues with the syscall handling in particular, it will be easier to handle those in isolation. So split generic entry into irq entry and syscall code, which will make review work easier and switch to generic entry clear. Introdue two configs called GENERIC_SYSCALL and GENERIC_IRQ_ENTRY, which control the irq entry and syscall parts of the generic code respectively. And split the header file irq-entry-common.h from entry-common.h for GENERIC_IRQ_ENTRY. Suggested-by: Mark Rutland Signed-off-by: Jinjie Ruan --- MAINTAINERS | 1 + arch/Kconfig | 8 + include/linux/entry-common.h | 376 +---------------------------- include/linux/irq-entry-common.h | 393 +++++++++++++++++++++++++++++++ kernel/entry/Makefile | 3 +- kernel/entry/common.c | 159 +------------ kernel/entry/syscall-common.c | 159 +++++++++++++ 7 files changed, 565 insertions(+), 534 deletions(-) create mode 100644 include/linux/irq-entry-common.h create mode 100644 kernel/entry/syscall-common.c diff --git a/MAINTAINERS b/MAINTAINERS index 72dce03be648..468d6e1a3228 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9531,6 +9531,7 @@ S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/entry F: include/linux/entry-common.h F: include/linux/entry-kvm.h +F: include/linux/irq-entry-common.h F: kernel/entry/ =20 GENERIC GPIO I2C DRIVER diff --git a/arch/Kconfig b/arch/Kconfig index feb50cfc4bdb..8e9c6f85960e 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -64,8 +64,16 @@ config HOTPLUG_PARALLEL bool select HOTPLUG_SPLIT_STARTUP =20 +config GENERIC_IRQ_ENTRY + bool + +config GENERIC_SYSCALL + bool + config GENERIC_ENTRY bool + select GENERIC_IRQ_ENTRY + select GENERIC_SYSCALL =20 config KPROBES bool "Kprobes" diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h index 1e50cdb83ae5..1ae3143d4b12 100644 --- a/include/linux/entry-common.h +++ b/include/linux/entry-common.h @@ -2,6 +2,7 @@ #ifndef __LINUX_ENTRYCOMMON_H #define __LINUX_ENTRYCOMMON_H =20 +#include #include #include #include @@ -15,14 +16,6 @@ =20 #include =20 -/* - * Define dummy _TIF work flags if not defined by the architecture or for - * disabled functionality. - */ -#ifndef _TIF_PATCH_PENDING -# define _TIF_PATCH_PENDING (0) -#endif - #ifndef _TIF_UPROBE # define _TIF_UPROBE (0) #endif @@ -55,68 +48,6 @@ SYSCALL_WORK_SYSCALL_EXIT_TRAP | \ ARCH_SYSCALL_WORK_EXIT) =20 -/* - * TIF flags handled in exit_to_user_mode_loop() - */ -#ifndef ARCH_EXIT_TO_USER_MODE_WORK -# define ARCH_EXIT_TO_USER_MODE_WORK (0) -#endif - -#define EXIT_TO_USER_MODE_WORK \ - (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | \ - _TIF_NEED_RESCHED | _TIF_PATCH_PENDING | _TIF_NOTIFY_SIGNAL | \ - ARCH_EXIT_TO_USER_MODE_WORK) - -/** - * arch_enter_from_user_mode - Architecture specific sanity check for user= mode regs - * @regs: Pointer to currents pt_regs - * - * Defaults to an empty implementation. Can be replaced by architecture - * specific code. - * - * Invoked from syscall_enter_from_user_mode() in the non-instrumentable - * section. Use __always_inline so the compiler cannot push it out of line - * and make it instrumentable. 
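The Kconfig relationship introduced above can be pictured with a small preprocessor analogy (macro names invented for illustration): selecting the full GENERIC_ENTRY pulls in both halves, while an architecture converting only its interrupt paths would enable just the IRQ-entry half and build only the corresponding objects.

#include <stdio.h>

/* Invented macros mirroring the new Kconfig symbols. */
#define GENERIC_ENTRY_MODEL 1

#if GENERIC_ENTRY_MODEL			/* "select" pulls in both halves */
# define GENERIC_IRQ_ENTRY_MODEL 1
# define GENERIC_SYSCALL_MODEL 1
#endif

int main(void)
{
#if defined(GENERIC_IRQ_ENTRY_MODEL)
	puts("build kernel/entry/common.c (irq entry)");
#endif
#if defined(GENERIC_SYSCALL_MODEL)
	puts("build kernel/entry/syscall-common.c + syscall_user_dispatch.c");
#endif
	return 0;
}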
- */ -static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs= ); - -#ifndef arch_enter_from_user_mode -static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs= ) {} -#endif - -/** - * enter_from_user_mode - Establish state when coming from user mode - * - * Syscall/interrupt entry disables interrupts, but user mode is traced as - * interrupts enabled. Also with NO_HZ_FULL RCU might be idle. - * - * 1) Tell lockdep that interrupts are disabled - * 2) Invoke context tracking if enabled to reactivate RCU - * 3) Trace interrupts off state - * - * Invoked from architecture specific syscall entry code with interrupts - * disabled. The calling code has to be non-instrumentable. When the - * function returns all state is correct and interrupts are still - * disabled. The subsequent functions can be instrumented. - * - * This is invoked when there is architecture specific functionality to be - * done between establishing state and enabling interrupts. The caller must - * enable interrupts before invoking syscall_enter_from_user_mode_work(). - */ -static __always_inline void enter_from_user_mode(struct pt_regs *regs) -{ - arch_enter_from_user_mode(regs); - lockdep_hardirqs_off(CALLER_ADDR0); - - CT_WARN_ON(__ct_state() !=3D CT_STATE_USER); - user_exit_irqoff(); - - instrumentation_begin(); - kmsan_unpoison_entry_regs(regs); - trace_hardirqs_off_finish(); - instrumentation_end(); -} - /** * syscall_enter_from_user_mode_prepare - Establish state and enable inter= rupts * @regs: Pointer to currents pt_regs @@ -201,170 +132,6 @@ static __always_inline long syscall_enter_from_user_m= ode(struct pt_regs *regs, l return ret; } =20 -/** - * local_irq_enable_exit_to_user - Exit to user variant of local_irq_enabl= e() - * @ti_work: Cached TIF flags gathered with interrupts disabled - * - * Defaults to local_irq_enable(). Can be supplied by architecture specific - * code. - */ -static inline void local_irq_enable_exit_to_user(unsigned long ti_work); - -#ifndef local_irq_enable_exit_to_user -static inline void local_irq_enable_exit_to_user(unsigned long ti_work) -{ - local_irq_enable(); -} -#endif - -/** - * local_irq_disable_exit_to_user - Exit to user variant of local_irq_disa= ble() - * - * Defaults to local_irq_disable(). Can be supplied by architecture specif= ic - * code. - */ -static inline void local_irq_disable_exit_to_user(void); - -#ifndef local_irq_disable_exit_to_user -static inline void local_irq_disable_exit_to_user(void) -{ - local_irq_disable(); -} -#endif - -/** - * arch_exit_to_user_mode_work - Architecture specific TIF work for exit - * to user mode. - * @regs: Pointer to currents pt_regs - * @ti_work: Cached TIF flags gathered with interrupts disabled - * - * Invoked from exit_to_user_mode_loop() with interrupt enabled - * - * Defaults to NOOP. Can be supplied by architecture specific code. - */ -static inline void arch_exit_to_user_mode_work(struct pt_regs *regs, - unsigned long ti_work); - -#ifndef arch_exit_to_user_mode_work -static inline void arch_exit_to_user_mode_work(struct pt_regs *regs, - unsigned long ti_work) -{ -} -#endif - -/** - * arch_exit_to_user_mode_prepare - Architecture specific preparation for - * exit to user mode. - * @regs: Pointer to currents pt_regs - * @ti_work: Cached TIF flags gathered with interrupts disabled - * - * Invoked from exit_to_user_mode_prepare() with interrupt disabled as the= last - * function before return. Defaults to NOOP. 
- */ -static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, - unsigned long ti_work); - -#ifndef arch_exit_to_user_mode_prepare -static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, - unsigned long ti_work) -{ -} -#endif - -/** - * arch_exit_to_user_mode - Architecture specific final work before - * exit to user mode. - * - * Invoked from exit_to_user_mode() with interrupt disabled as the last - * function before return. Defaults to NOOP. - * - * This needs to be __always_inline because it is non-instrumentable code - * invoked after context tracking switched to user mode. - * - * An architecture implementation must not do anything complex, no locking - * etc. The main purpose is for speculation mitigations. - */ -static __always_inline void arch_exit_to_user_mode(void); - -#ifndef arch_exit_to_user_mode -static __always_inline void arch_exit_to_user_mode(void) { } -#endif - -/** - * arch_do_signal_or_restart - Architecture specific signal delivery func= tion - * @regs: Pointer to currents pt_regs - * - * Invoked from exit_to_user_mode_loop(). - */ -void arch_do_signal_or_restart(struct pt_regs *regs); - -/** - * exit_to_user_mode_loop - do any pending work before leaving to user spa= ce - */ -unsigned long exit_to_user_mode_loop(struct pt_regs *regs, - unsigned long ti_work); - -/** - * exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required - * @regs: Pointer to pt_regs on entry stack - * - * 1) check that interrupts are disabled - * 2) call tick_nohz_user_enter_prepare() - * 3) call exit_to_user_mode_loop() if any flags from - * EXIT_TO_USER_MODE_WORK are set - * 4) check that interrupts are still disabled - */ -static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs) -{ - unsigned long ti_work; - - lockdep_assert_irqs_disabled(); - - /* Flush pending rcuog wakeup before the last need_resched() check */ - tick_nohz_user_enter_prepare(); - - ti_work =3D read_thread_flags(); - if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK)) - ti_work =3D exit_to_user_mode_loop(regs, ti_work); - - arch_exit_to_user_mode_prepare(regs, ti_work); - - /* Ensure that kernel state is sane for a return to userspace */ - kmap_assert_nomap(); - lockdep_assert_irqs_disabled(); - lockdep_sys_exit(); -} - -/** - * exit_to_user_mode - Fixup state when exiting to user mode - * - * Syscall/interrupt exit enables interrupts, but the kernel state is - * interrupts disabled when this is invoked. Also tell RCU about it. - * - * 1) Trace interrupts on state - * 2) Invoke context tracking if enabled to adjust RCU state - * 3) Invoke architecture specific last minute exit code, e.g. speculation - * mitigations, etc.: arch_exit_to_user_mode() - * 4) Tell lockdep that interrupts are enabled - * - * Invoked from architecture specific code when syscall_exit_to_user_mode() - * is not suitable as the last step before returning to userspace. Must be - * invoked with interrupts disabled and the caller must be - * non-instrumentable. - * The caller has to invoke syscall_exit_to_user_mode_work() before this. 
- */ -static __always_inline void exit_to_user_mode(void) -{ - instrumentation_begin(); - trace_hardirqs_on_prepare(); - lockdep_hardirqs_on_prepare(); - instrumentation_end(); - - user_enter_irqoff(); - arch_exit_to_user_mode(); - lockdep_hardirqs_on(CALLER_ADDR0); -} - /** * syscall_exit_to_user_mode_work - Handle work before returning to user m= ode * @regs: Pointer to currents pt_regs @@ -411,145 +178,4 @@ void syscall_exit_to_user_mode_work(struct pt_regs *r= egs); */ void syscall_exit_to_user_mode(struct pt_regs *regs); =20 -/** - * irqentry_enter_from_user_mode - Establish state before invoking the irq= handler - * @regs: Pointer to currents pt_regs - * - * Invoked from architecture specific entry code with interrupts disabled. - * Can only be called when the interrupt entry came from user mode. The - * calling code must be non-instrumentable. When the function returns all - * state is correct and the subsequent functions can be instrumented. - * - * The function establishes state (lockdep, RCU (context tracking), tracin= g) - */ -void irqentry_enter_from_user_mode(struct pt_regs *regs); - -/** - * irqentry_exit_to_user_mode - Interrupt exit work - * @regs: Pointer to current's pt_regs - * - * Invoked with interrupts disabled and fully valid regs. Returns with all - * work handled, interrupts disabled such that the caller can immediately - * switch to user mode. Called from architecture specific interrupt - * handling code. - * - * The call order is #2 and #3 as described in syscall_exit_to_user_mode(). - * Interrupt exit is not invoking #1 which is the syscall specific one time - * work. - */ -void irqentry_exit_to_user_mode(struct pt_regs *regs); - -#ifndef irqentry_state -/** - * struct irqentry_state - Opaque object for exception state storage - * @exit_rcu: Used exclusively in the irqentry_*() calls; signals whether = the - * exit path has to invoke ct_irq_exit(). - * @lockdep: Used exclusively in the irqentry_nmi_*() calls; ensures that - * lockdep state is restored correctly on exit from nmi. - * - * This opaque object is filled in by the irqentry_*_enter() functions and - * must be passed back into the corresponding irqentry_*_exit() functions - * when the exception is complete. - * - * Callers of irqentry_*_[enter|exit]() must consider this structure opaque - * and all members private. Descriptions of the members are provided to a= id in - * the maintenance of the irqentry_*() functions. - */ -typedef struct irqentry_state { - union { - bool exit_rcu; - bool lockdep; - }; -} irqentry_state_t; -#endif - -/** - * irqentry_enter - Handle state tracking on ordinary interrupt entries - * @regs: Pointer to pt_regs of interrupted context - * - * Invokes: - * - lockdep irqflag state tracking as low level ASM entry disabled - * interrupts. - * - * - Context tracking if the exception hit user mode. - * - * - The hardirq tracer to keep the state consistent as low level ASM - * entry disabled interrupts. - * - * As a precondition, this requires that the entry came from user mode, - * idle, or a kernel context in which RCU is watching. - * - * For kernel mode entries RCU handling is done conditional. If RCU is - * watching then the only RCU requirement is to check whether the tick has - * to be restarted. If RCU is not watching then ct_irq_enter() has to be - * invoked on entry and ct_irq_exit() on exit. 
- * - * Avoiding the ct_irq_enter/exit() calls is an optimization but also - * solves the problem of kernel mode pagefaults which can schedule, which - * is not possible after invoking ct_irq_enter() without undoing it. - * - * For user mode entries irqentry_enter_from_user_mode() is invoked to - * establish the proper context for NOHZ_FULL. Otherwise scheduling on exit - * would not be possible. - * - * Returns: An opaque object that must be passed to idtentry_exit() - */ -irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs); - -/** - * irqentry_exit_cond_resched - Conditionally reschedule on return from in= terrupt - * - * Conditional reschedule with additional sanity checks. - */ -void raw_irqentry_exit_cond_resched(void); -#ifdef CONFIG_PREEMPT_DYNAMIC -#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL) -#define irqentry_exit_cond_resched_dynamic_enabled raw_irqentry_exit_cond_= resched -#define irqentry_exit_cond_resched_dynamic_disabled NULL -DECLARE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_res= ched); -#define irqentry_exit_cond_resched() static_call(irqentry_exit_cond_resche= d)() -#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY) -DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); -void dynamic_irqentry_exit_cond_resched(void); -#define irqentry_exit_cond_resched() dynamic_irqentry_exit_cond_resched() -#endif -#else /* CONFIG_PREEMPT_DYNAMIC */ -#define irqentry_exit_cond_resched() raw_irqentry_exit_cond_resched() -#endif /* CONFIG_PREEMPT_DYNAMIC */ - -/** - * irqentry_exit - Handle return from exception that used irqentry_enter() - * @regs: Pointer to pt_regs (exception entry regs) - * @state: Return value from matching call to irqentry_enter() - * - * Depending on the return target (kernel/user) this runs the necessary - * preemption and work checks if possible and required and returns to - * the caller with interrupts disabled and no further work pending. - * - * This is the last action before returning to the low level ASM code which - * just needs to return to the appropriate context. - * - * Counterpart to irqentry_enter(). - */ -void noinstr irqentry_exit(struct pt_regs *regs, irqentry_state_t state); - -/** - * irqentry_nmi_enter - Handle NMI entry - * @regs: Pointer to currents pt_regs - * - * Similar to irqentry_enter() but taking care of the NMI constraints. - */ -irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs); - -/** - * irqentry_nmi_exit - Handle return from NMI handling - * @regs: Pointer to pt_regs (NMI entry regs) - * @irq_state: Return value from matching call to irqentry_nmi_enter() - * - * Last action before returning to the low level assembly code. - * - * Counterpart to irqentry_nmi_enter(). - */ -void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_= state); - #endif diff --git a/include/linux/irq-entry-common.h b/include/linux/irq-entry-com= mon.h new file mode 100644 index 000000000000..b7d60a18f1a2 --- /dev/null +++ b/include/linux/irq-entry-common.h @@ -0,0 +1,393 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __LINUX_IRQENTRYCOMMON_H +#define __LINUX_IRQENTRYCOMMON_H + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +/* + * Define dummy _TIF work flags if not defined by the architecture or for + * disabled functionality. 
+ */ +#ifndef _TIF_PATCH_PENDING +# define _TIF_PATCH_PENDING (0) +#endif + +/* + * TIF flags handled in exit_to_user_mode_loop() + */ +#ifndef ARCH_EXIT_TO_USER_MODE_WORK +# define ARCH_EXIT_TO_USER_MODE_WORK (0) +#endif + +#define EXIT_TO_USER_MODE_WORK \ + (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | \ + _TIF_NEED_RESCHED | _TIF_PATCH_PENDING | _TIF_NOTIFY_SIGNAL | \ + ARCH_EXIT_TO_USER_MODE_WORK) + +/** + * arch_enter_from_user_mode - Architecture specific sanity check for user= mode regs + * @regs: Pointer to currents pt_regs + * + * Defaults to an empty implementation. Can be replaced by architecture + * specific code. + * + * Invoked from syscall_enter_from_user_mode() in the non-instrumentable + * section. Use __always_inline so the compiler cannot push it out of line + * and make it instrumentable. + */ +static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs= ); + +#ifndef arch_enter_from_user_mode +static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs= ) {} +#endif + +/** + * enter_from_user_mode - Establish state when coming from user mode + * + * Syscall/interrupt entry disables interrupts, but user mode is traced as + * interrupts enabled. Also with NO_HZ_FULL RCU might be idle. + * + * 1) Tell lockdep that interrupts are disabled + * 2) Invoke context tracking if enabled to reactivate RCU + * 3) Trace interrupts off state + * + * Invoked from architecture specific syscall entry code with interrupts + * disabled. The calling code has to be non-instrumentable. When the + * function returns all state is correct and interrupts are still + * disabled. The subsequent functions can be instrumented. + * + * This is invoked when there is architecture specific functionality to be + * done between establishing state and enabling interrupts. The caller must + * enable interrupts before invoking syscall_enter_from_user_mode_work(). + */ +static __always_inline void enter_from_user_mode(struct pt_regs *regs) +{ + arch_enter_from_user_mode(regs); + lockdep_hardirqs_off(CALLER_ADDR0); + + CT_WARN_ON(__ct_state() !=3D CT_STATE_USER); + user_exit_irqoff(); + + instrumentation_begin(); + kmsan_unpoison_entry_regs(regs); + trace_hardirqs_off_finish(); + instrumentation_end(); +} + +/** + * local_irq_enable_exit_to_user - Exit to user variant of local_irq_enabl= e() + * @ti_work: Cached TIF flags gathered with interrupts disabled + * + * Defaults to local_irq_enable(). Can be supplied by architecture specific + * code. + */ +static inline void local_irq_enable_exit_to_user(unsigned long ti_work); + +#ifndef local_irq_enable_exit_to_user +static inline void local_irq_enable_exit_to_user(unsigned long ti_work) +{ + local_irq_enable(); +} +#endif + +/** + * local_irq_disable_exit_to_user - Exit to user variant of local_irq_disa= ble() + * + * Defaults to local_irq_disable(). Can be supplied by architecture specif= ic + * code. + */ +static inline void local_irq_disable_exit_to_user(void); + +#ifndef local_irq_disable_exit_to_user +static inline void local_irq_disable_exit_to_user(void) +{ + local_irq_disable(); +} +#endif + +/** + * arch_exit_to_user_mode_work - Architecture specific TIF work for exit + * to user mode. + * @regs: Pointer to currents pt_regs + * @ti_work: Cached TIF flags gathered with interrupts disabled + * + * Invoked from exit_to_user_mode_loop() with interrupt enabled + * + * Defaults to NOOP. Can be supplied by architecture specific code. 
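The header being introduced here is built around one pattern worth seeing in isolation: a bitmask of pending-work flags (EXIT_TO_USER_MODE_WORK) is read with interrupts disabled, and a loop keeps handling work until nothing in the mask remains. The userspace model below uses invented flag names and a trivial handler; the real loop re-enables interrupts around each handler and re-reads the thread flags each iteration.

#include <stdio.h>

/* Invented flag bits standing in for the _TIF_* work flags. */
#define WORK_SIGPENDING   (1u << 0)
#define WORK_NEED_RESCHED (1u << 1)
#define WORK_NOTIFY       (1u << 2)
#define EXIT_TO_USER_WORK (WORK_SIGPENDING | WORK_NEED_RESCHED | WORK_NOTIFY)

static unsigned int pending = WORK_NEED_RESCHED | WORK_NOTIFY;

static void handle_one(unsigned int work)
{
	/* Handle one item and clear its flag, roughly one loop iteration. */
	if (work & WORK_NEED_RESCHED) {
		puts("schedule()");
		pending &= ~WORK_NEED_RESCHED;
	} else if (work & WORK_NOTIFY) {
		puts("resume_user_mode_work()");
		pending &= ~WORK_NOTIFY;
	} else if (work & WORK_SIGPENDING) {
		puts("handle signals");
		pending &= ~WORK_SIGPENDING;
	}
}

static void exit_to_user_mode_loop_model(void)
{
	unsigned int work = pending;

	while (work & EXIT_TO_USER_WORK) {
		handle_one(work);	/* real code re-enables IRQs around handlers */
		work = pending;		/* re-read flags, like read_thread_flags() */
	}
}

int main(void)
{
	exit_to_user_mode_loop_model();
	return 0;
}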
+ */ +static inline void arch_exit_to_user_mode_work(struct pt_regs *regs, + unsigned long ti_work); + +#ifndef arch_exit_to_user_mode_work +static inline void arch_exit_to_user_mode_work(struct pt_regs *regs, + unsigned long ti_work) +{ +} +#endif + +/** + * arch_exit_to_user_mode_prepare - Architecture specific preparation for + * exit to user mode. + * @regs: Pointer to currents pt_regs + * @ti_work: Cached TIF flags gathered with interrupts disabled + * + * Invoked from exit_to_user_mode_prepare() with interrupt disabled as the= last + * function before return. Defaults to NOOP. + */ +static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, + unsigned long ti_work); + +#ifndef arch_exit_to_user_mode_prepare +static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, + unsigned long ti_work) +{ +} +#endif + +/** + * arch_exit_to_user_mode - Architecture specific final work before + * exit to user mode. + * + * Invoked from exit_to_user_mode() with interrupt disabled as the last + * function before return. Defaults to NOOP. + * + * This needs to be __always_inline because it is non-instrumentable code + * invoked after context tracking switched to user mode. + * + * An architecture implementation must not do anything complex, no locking + * etc. The main purpose is for speculation mitigations. + */ +static __always_inline void arch_exit_to_user_mode(void); + +#ifndef arch_exit_to_user_mode +static __always_inline void arch_exit_to_user_mode(void) { } +#endif + +/** + * arch_do_signal_or_restart - Architecture specific signal delivery func= tion + * @regs: Pointer to currents pt_regs + * + * Invoked from exit_to_user_mode_loop(). + */ +void arch_do_signal_or_restart(struct pt_regs *regs); + +/** + * exit_to_user_mode_loop - do any pending work before leaving to user spa= ce + */ +unsigned long exit_to_user_mode_loop(struct pt_regs *regs, + unsigned long ti_work); + +/** + * exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required + * @regs: Pointer to pt_regs on entry stack + * + * 1) check that interrupts are disabled + * 2) call tick_nohz_user_enter_prepare() + * 3) call exit_to_user_mode_loop() if any flags from + * EXIT_TO_USER_MODE_WORK are set + * 4) check that interrupts are still disabled + */ +static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs) +{ + unsigned long ti_work; + + lockdep_assert_irqs_disabled(); + + /* Flush pending rcuog wakeup before the last need_resched() check */ + tick_nohz_user_enter_prepare(); + + ti_work =3D read_thread_flags(); + if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK)) + ti_work =3D exit_to_user_mode_loop(regs, ti_work); + + arch_exit_to_user_mode_prepare(regs, ti_work); + + /* Ensure that kernel state is sane for a return to userspace */ + kmap_assert_nomap(); + lockdep_assert_irqs_disabled(); + lockdep_sys_exit(); +} + +/** + * exit_to_user_mode - Fixup state when exiting to user mode + * + * Syscall/interrupt exit enables interrupts, but the kernel state is + * interrupts disabled when this is invoked. Also tell RCU about it. + * + * 1) Trace interrupts on state + * 2) Invoke context tracking if enabled to adjust RCU state + * 3) Invoke architecture specific last minute exit code, e.g. speculation + * mitigations, etc.: arch_exit_to_user_mode() + * 4) Tell lockdep that interrupts are enabled + * + * Invoked from architecture specific code when syscall_exit_to_user_mode() + * is not suitable as the last step before returning to userspace. 
Must be + * invoked with interrupts disabled and the caller must be + * non-instrumentable. + * The caller has to invoke syscall_exit_to_user_mode_work() before this. + */ +static __always_inline void exit_to_user_mode(void) +{ + instrumentation_begin(); + trace_hardirqs_on_prepare(); + lockdep_hardirqs_on_prepare(); + instrumentation_end(); + + user_enter_irqoff(); + arch_exit_to_user_mode(); + lockdep_hardirqs_on(CALLER_ADDR0); +} + +/** + * irqentry_enter_from_user_mode - Establish state before invoking the irq= handler + * @regs: Pointer to currents pt_regs + * + * Invoked from architecture specific entry code with interrupts disabled. + * Can only be called when the interrupt entry came from user mode. The + * calling code must be non-instrumentable. When the function returns all + * state is correct and the subsequent functions can be instrumented. + * + * The function establishes state (lockdep, RCU (context tracking), tracin= g) + */ +void irqentry_enter_from_user_mode(struct pt_regs *regs); + +/** + * irqentry_exit_to_user_mode - Interrupt exit work + * @regs: Pointer to current's pt_regs + * + * Invoked with interrupts disabled and fully valid regs. Returns with all + * work handled, interrupts disabled such that the caller can immediately + * switch to user mode. Called from architecture specific interrupt + * handling code. + * + * The call order is #2 and #3 as described in syscall_exit_to_user_mode(). + * Interrupt exit is not invoking #1 which is the syscall specific one time + * work. + */ +void irqentry_exit_to_user_mode(struct pt_regs *regs); + +#ifndef irqentry_state +/** + * struct irqentry_state - Opaque object for exception state storage + * @exit_rcu: Used exclusively in the irqentry_*() calls; signals whether = the + * exit path has to invoke ct_irq_exit(). + * @lockdep: Used exclusively in the irqentry_nmi_*() calls; ensures that + * lockdep state is restored correctly on exit from nmi. + * + * This opaque object is filled in by the irqentry_*_enter() functions and + * must be passed back into the corresponding irqentry_*_exit() functions + * when the exception is complete. + * + * Callers of irqentry_*_[enter|exit]() must consider this structure opaque + * and all members private. Descriptions of the members are provided to a= id in + * the maintenance of the irqentry_*() functions. + */ +typedef struct irqentry_state { + union { + bool exit_rcu; + bool lockdep; + }; +} irqentry_state_t; +#endif + +/** + * irqentry_enter - Handle state tracking on ordinary interrupt entries + * @regs: Pointer to pt_regs of interrupted context + * + * Invokes: + * - lockdep irqflag state tracking as low level ASM entry disabled + * interrupts. + * + * - Context tracking if the exception hit user mode. + * + * - The hardirq tracer to keep the state consistent as low level ASM + * entry disabled interrupts. + * + * As a precondition, this requires that the entry came from user mode, + * idle, or a kernel context in which RCU is watching. + * + * For kernel mode entries RCU handling is done conditional. If RCU is + * watching then the only RCU requirement is to check whether the tick has + * to be restarted. If RCU is not watching then ct_irq_enter() has to be + * invoked on entry and ct_irq_exit() on exit. + * + * Avoiding the ct_irq_enter/exit() calls is an optimization but also + * solves the problem of kernel mode pagefaults which can schedule, which + * is not possible after invoking ct_irq_enter() without undoing it. 
+ * + * For user mode entries irqentry_enter_from_user_mode() is invoked to + * establish the proper context for NOHZ_FULL. Otherwise scheduling on exit + * would not be possible. + * + * Returns: An opaque object that must be passed to idtentry_exit() + */ +irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs); + +/** + * irqentry_exit_cond_resched - Conditionally reschedule on return from in= terrupt + * + * Conditional reschedule with additional sanity checks. + */ +void raw_irqentry_exit_cond_resched(void); +#ifdef CONFIG_PREEMPT_DYNAMIC +#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL) +#define irqentry_exit_cond_resched_dynamic_enabled raw_irqentry_exit_cond_= resched +#define irqentry_exit_cond_resched_dynamic_disabled NULL +DECLARE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_res= ched); +#define irqentry_exit_cond_resched() static_call(irqentry_exit_cond_resche= d)() +#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY) +DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); +void dynamic_irqentry_exit_cond_resched(void); +#define irqentry_exit_cond_resched() dynamic_irqentry_exit_cond_resched() +#endif +#else /* CONFIG_PREEMPT_DYNAMIC */ +#define irqentry_exit_cond_resched() raw_irqentry_exit_cond_resched() +#endif /* CONFIG_PREEMPT_DYNAMIC */ + +/** + * irqentry_exit - Handle return from exception that used irqentry_enter() + * @regs: Pointer to pt_regs (exception entry regs) + * @state: Return value from matching call to irqentry_enter() + * + * Depending on the return target (kernel/user) this runs the necessary + * preemption and work checks if possible and required and returns to + * the caller with interrupts disabled and no further work pending. + * + * This is the last action before returning to the low level ASM code which + * just needs to return to the appropriate context. + * + * Counterpart to irqentry_enter(). + */ +void noinstr irqentry_exit(struct pt_regs *regs, irqentry_state_t state); + +/** + * irqentry_nmi_enter - Handle NMI entry + * @regs: Pointer to currents pt_regs + * + * Similar to irqentry_enter() but taking care of the NMI constraints. + */ +irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs); + +/** + * irqentry_nmi_exit - Handle return from NMI handling + * @regs: Pointer to pt_regs (NMI entry regs) + * @irq_state: Return value from matching call to irqentry_nmi_enter() + * + * Last action before returning to the low level assembly code. + * + * Counterpart to irqentry_nmi_enter(). 
+ */ +void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_= state); + +#endif diff --git a/kernel/entry/Makefile b/kernel/entry/Makefile index 095c775e001e..d38f3a7e7396 100644 --- a/kernel/entry/Makefile +++ b/kernel/entry/Makefile @@ -9,5 +9,6 @@ KCOV_INSTRUMENT :=3D n CFLAGS_REMOVE_common.o =3D -fstack-protector -fstack-protector-strong CFLAGS_common.o +=3D -fno-stack-protector =20 -obj-$(CONFIG_GENERIC_ENTRY) +=3D common.o syscall_user_dispatch.o +obj-$(CONFIG_GENERIC_IRQ_ENTRY) +=3D common.o +obj-$(CONFIG_GENERIC_SYSCALL) +=3D syscall-common.o syscall_user_dispatc= h.o obj-$(CONFIG_KVM_XFER_TO_GUEST_WORK) +=3D kvm.o diff --git a/kernel/entry/common.c b/kernel/entry/common.c index 5b6934e23c21..2ad132c7be05 100644 --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -1,84 +1,14 @@ // SPDX-License-Identifier: GPL-2.0 =20 #include -#include +#include #include #include #include #include #include -#include #include =20 -#include "common.h" - -#define CREATE_TRACE_POINTS -#include - -static inline void syscall_enter_audit(struct pt_regs *regs, long syscall) -{ - if (unlikely(audit_context())) { - unsigned long args[6]; - - syscall_get_arguments(current, regs, args); - audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]); - } -} - -long syscall_trace_enter(struct pt_regs *regs, long syscall, - unsigned long work) -{ - long ret =3D 0; - - /* - * Handle Syscall User Dispatch. This must comes first, since - * the ABI here can be something that doesn't make sense for - * other syscall_work features. - */ - if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) { - if (syscall_user_dispatch(regs)) - return -1L; - } - - /* Handle ptrace */ - if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) { - ret =3D ptrace_report_syscall_entry(regs); - if (ret || (work & SYSCALL_WORK_SYSCALL_EMU)) - return -1L; - } - - /* Do seccomp after ptrace, to catch any tracer changes. */ - if (work & SYSCALL_WORK_SECCOMP) { - ret =3D __secure_computing(NULL); - if (ret =3D=3D -1L) - return ret; - } - - /* Either of the above might have changed the syscall number */ - syscall =3D syscall_get_nr(current, regs); - - if (unlikely(work & SYSCALL_WORK_SYSCALL_TRACEPOINT)) { - trace_sys_enter(regs, syscall); - /* - * Probes or BPF hooks in the tracepoint may have changed the - * system call number as well. - */ - syscall =3D syscall_get_nr(current, regs); - } - - syscall_enter_audit(regs, syscall); - - return ret ? : syscall; -} - -noinstr void syscall_enter_from_user_mode_prepare(struct pt_regs *regs) -{ - enter_from_user_mode(regs); - instrumentation_begin(); - local_irq_enable(); - instrumentation_end(); -} - /* Workaround to allow gradual conversion of architecture code */ void __weak arch_do_signal_or_restart(struct pt_regs *regs) { } =20 @@ -133,93 +63,6 @@ __always_inline unsigned long exit_to_user_mode_loop(st= ruct pt_regs *regs, return ti_work; } =20 -/* - * If SYSCALL_EMU is set, then the only reason to report is when - * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall - * instruction has been already reported in syscall_enter_from_user_mode(). 
- */ -static inline bool report_single_step(unsigned long work) -{ - if (work & SYSCALL_WORK_SYSCALL_EMU) - return false; - - return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP; -} - -static void syscall_exit_work(struct pt_regs *regs, unsigned long work) -{ - bool step; - - /* - * If the syscall was rolled back due to syscall user dispatching, - * then the tracers below are not invoked for the same reason as - * the entry side was not invoked in syscall_trace_enter(): The ABI - * of these syscalls is unknown. - */ - if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) { - if (unlikely(current->syscall_dispatch.on_dispatch)) { - current->syscall_dispatch.on_dispatch =3D false; - return; - } - } - - audit_syscall_exit(regs); - - if (work & SYSCALL_WORK_SYSCALL_TRACEPOINT) - trace_sys_exit(regs, syscall_get_return_value(current, regs)); - - step =3D report_single_step(work); - if (step || work & SYSCALL_WORK_SYSCALL_TRACE) - ptrace_report_syscall_exit(regs, step); -} - -/* - * Syscall specific exit to user mode preparation. Runs with interrupts - * enabled. - */ -static void syscall_exit_to_user_mode_prepare(struct pt_regs *regs) -{ - unsigned long work =3D READ_ONCE(current_thread_info()->syscall_work); - unsigned long nr =3D syscall_get_nr(current, regs); - - CT_WARN_ON(ct_state() !=3D CT_STATE_KERNEL); - - if (IS_ENABLED(CONFIG_PROVE_LOCKING)) { - if (WARN(irqs_disabled(), "syscall %lu left IRQs disabled", nr)) - local_irq_enable(); - } - - rseq_syscall(regs); - - /* - * Do one-time syscall specific work. If these work items are - * enabled, we want to run them exactly once per syscall exit with - * interrupts enabled. - */ - if (unlikely(work & SYSCALL_WORK_EXIT)) - syscall_exit_work(regs, work); -} - -static __always_inline void __syscall_exit_to_user_mode_work(struct pt_reg= s *regs) -{ - syscall_exit_to_user_mode_prepare(regs); - local_irq_disable_exit_to_user(); - exit_to_user_mode_prepare(regs); -} - -void syscall_exit_to_user_mode_work(struct pt_regs *regs) -{ - __syscall_exit_to_user_mode_work(regs); -} - -__visible noinstr void syscall_exit_to_user_mode(struct pt_regs *regs) -{ - instrumentation_begin(); - __syscall_exit_to_user_mode_work(regs); - instrumentation_end(); - exit_to_user_mode(); -} - noinstr void irqentry_enter_from_user_mode(struct pt_regs *regs) { enter_from_user_mode(regs); diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c new file mode 100644 index 000000000000..0eb036986ad4 --- /dev/null +++ b/kernel/entry/syscall-common.c @@ -0,0 +1,159 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include "common.h" + +#define CREATE_TRACE_POINTS +#include + +static inline void syscall_enter_audit(struct pt_regs *regs, long syscall) +{ + if (unlikely(audit_context())) { + unsigned long args[6]; + + syscall_get_arguments(current, regs, args); + audit_syscall_entry(syscall, args[0], args[1], args[2], args[3]); + } +} + +long syscall_trace_enter(struct pt_regs *regs, long syscall, + unsigned long work) +{ + long ret =3D 0; + + /* + * Handle Syscall User Dispatch. This must comes first, since + * the ABI here can be something that doesn't make sense for + * other syscall_work features. 
+ */ + if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) { + if (syscall_user_dispatch(regs)) + return -1L; + } + + /* Handle ptrace */ + if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) { + ret =3D ptrace_report_syscall_entry(regs); + if (ret || (work & SYSCALL_WORK_SYSCALL_EMU)) + return -1L; + } + + /* Do seccomp after ptrace, to catch any tracer changes. */ + if (work & SYSCALL_WORK_SECCOMP) { + ret =3D __secure_computing(NULL); + if (ret =3D=3D -1L) + return ret; + } + + /* Either of the above might have changed the syscall number */ + syscall =3D syscall_get_nr(current, regs); + + if (unlikely(work & SYSCALL_WORK_SYSCALL_TRACEPOINT)) { + trace_sys_enter(regs, syscall); + /* + * Probes or BPF hooks in the tracepoint may have changed the + * system call number as well. + */ + syscall =3D syscall_get_nr(current, regs); + } + + syscall_enter_audit(regs, syscall); + + return ret ? : syscall; +} + +noinstr void syscall_enter_from_user_mode_prepare(struct pt_regs *regs) +{ + enter_from_user_mode(regs); + instrumentation_begin(); + local_irq_enable(); + instrumentation_end(); +} + +/* + * If SYSCALL_EMU is set, then the only reason to report is when + * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall + * instruction has been already reported in syscall_enter_from_user_mode(). + */ +static inline bool report_single_step(unsigned long work) +{ + if (work & SYSCALL_WORK_SYSCALL_EMU) + return false; + + return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP; +} + +static void syscall_exit_work(struct pt_regs *regs, unsigned long work) +{ + bool step; + + /* + * If the syscall was rolled back due to syscall user dispatching, + * then the tracers below are not invoked for the same reason as + * the entry side was not invoked in syscall_trace_enter(): The ABI + * of these syscalls is unknown. + */ + if (work & SYSCALL_WORK_SYSCALL_USER_DISPATCH) { + if (unlikely(current->syscall_dispatch.on_dispatch)) { + current->syscall_dispatch.on_dispatch =3D false; + return; + } + } + + audit_syscall_exit(regs); + + if (work & SYSCALL_WORK_SYSCALL_TRACEPOINT) + trace_sys_exit(regs, syscall_get_return_value(current, regs)); + + step =3D report_single_step(work); + if (step || work & SYSCALL_WORK_SYSCALL_TRACE) + ptrace_report_syscall_exit(regs, step); +} + +/* + * Syscall specific exit to user mode preparation. Runs with interrupts + * enabled. + */ +static void syscall_exit_to_user_mode_prepare(struct pt_regs *regs) +{ + unsigned long work =3D READ_ONCE(current_thread_info()->syscall_work); + unsigned long nr =3D syscall_get_nr(current, regs); + + CT_WARN_ON(ct_state() !=3D CT_STATE_KERNEL); + + if (IS_ENABLED(CONFIG_PROVE_LOCKING)) { + if (WARN(irqs_disabled(), "syscall %lu left IRQs disabled", nr)) + local_irq_enable(); + } + + rseq_syscall(regs); + + /* + * Do one-time syscall specific work. If these work items are + * enabled, we want to run them exactly once per syscall exit with + * interrupts enabled. 
+ */ + if (unlikely(work & SYSCALL_WORK_EXIT)) + syscall_exit_work(regs, work); +} + +static __always_inline void __syscall_exit_to_user_mode_work(struct pt_reg= s *regs) +{ + syscall_exit_to_user_mode_prepare(regs); + local_irq_disable_exit_to_user(); + exit_to_user_mode_prepare(regs); +} + +void syscall_exit_to_user_mode_work(struct pt_regs *regs) +{ + __syscall_exit_to_user_mode_work(regs); +} + +__visible noinstr void syscall_exit_to_user_mode(struct pt_regs *regs) +{ + instrumentation_begin(); + __syscall_exit_to_user_mode_work(regs); + instrumentation_end(); + exit_to_user_mode(); +} --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 65269205136 for ; Fri, 25 Oct 2024 10:08:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.187 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850939; cv=none; b=HJmZLzAmJmZGzQpdpio89N7eWD2iZFZNiplM0dioWA0LyLYXctXLo+n/eCYHvorJy97aJN9DRXA5vh+V4/Rz7EpBphg/D+TsuTAQmJmi2UfrB6UJ86VUGMSHbkDp+UVVQdJ1WSAPKh0BvGyZuC9ifHJT8MCeW2Lsyvod56dYywA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850939; c=relaxed/simple; bh=V4ntC7CeTbEvPdpuCYbztDTs54KnUno16SgF0R0lU78=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=aBs2saDO0WYJuZj3UEWEDIxK94fBAXNhv09sPQ+zbGJOdomD+yIkzz+aQo2wwNXr+I4eGG4Xbbt8amkuxauf6v31cgjCzLMOJXVlTPbfdddqDivg1QrQZYCwN9Wt2OCOkqxvbprJHA3rRao1i6WHpjEt25MaSFynl0i1lq1XHjM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.187 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.162.254]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4XZdkJ0VWlzyTRq; Fri, 25 Oct 2024 18:07:20 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id AB3BF18010F; Fri, 25 Oct 2024 18:08:53 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:52 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 15/19] entry: Add arch irqentry_exit_need_resched() for arm64 Date: Fri, 25 Oct 2024 18:06:56 +0800 Message-ID: <20241025100700.3714552-16-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" As the front patch 6 ~ 13 did, the arm64_preempt_schedule_irq() is same with the irq preempt schedule code 
of generic entry besides those architecture-related logic called arm64_irqentry_exit_need_resched(). So add arch irqentry_exit_need_resched() to support architecture-related need_resched() check logic, which do not affect existing architectures that use generic entry, but support arm64 to use generic irq entry. Suggested-by: Mark Rutland Suggested-by: Kevin Brodsky Suggested-by: Thomas Gleixner Signed-off-by: Jinjie Ruan --- kernel/entry/common.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/kernel/entry/common.c b/kernel/entry/common.c index 2ad132c7be05..0cc117b658b8 100644 --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -143,6 +143,20 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs= *regs) return ret; } =20 +/** + * arch_irqentry_exit_need_resched - Architecture specific need resched fu= nction + * + * Invoked from raw_irqentry_exit_cond_resched() to check if need resched. + * Defaults return true. + * + * The main purpose is to permit arch to skip preempt a task from an IRQ. + */ +static inline bool arch_irqentry_exit_need_resched(void); + +#ifndef arch_irqentry_exit_need_resched +static inline bool arch_irqentry_exit_need_resched(void) { return true; } +#endif + void raw_irqentry_exit_cond_resched(void) { if (!preempt_count()) { @@ -150,7 +164,7 @@ void raw_irqentry_exit_cond_resched(void) rcu_irq_exit_check_preempt(); if (IS_ENABLED(CONFIG_DEBUG_ENTRY)) WARN_ON_ONCE(!on_thread_stack()); - if (need_resched()) + if (need_resched() && arch_irqentry_exit_need_resched()) preempt_schedule_irq(); } } --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E50D620409E for ; Fri, 25 Oct 2024 10:08:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.188 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850943; cv=none; b=oSLPL54Y9hpMIx69LZYbY4jPbnlvPYez3LGLY5Hx82SRwmVA8lIb6RVCmCPq22t1e+B2Vu1+U32729t5AXrXji/TAQpG0jMOaJ86bUpetRgNfRcp32JtvV2MPDfdh/J6vD74EKTJUB8rV3l599pvFdCcgc5fGfk++SSPgx3Co7A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850943; c=relaxed/simple; bh=RGhCADQrOiuDhfJkkwXXiUMy1u7T9lvJvjk7JzZj6co=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=ZkL7I/ALj8lr6tnAg0oMKC7QcRmuhQ9N9+BASyXLk/p2Dh3TPYA0D8MmI0/uUSyezLgOP4W02QLGacfXEEzFqXZoL3t1/d6ERcElyMAnJExcbmwup9qRx/4oWzRjsW/safXntH3rJK+qVIPhOIDCFgSni0V8zFv7J8BwrmbjjBw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.188 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.252]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4XZdjC6xFvzdkNq; Fri, 25 Oct 2024 18:06:23 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 2F0251800A5; Fri, 25 Oct 2024 18:08:55 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:53 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 16/19] arm64: entry: Switch to generic IRQ entry Date: Fri, 25 Oct 2024 18:06:57 +0800 Message-ID: <20241025100700.3714552-17-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" Currently, x86, Riscv, Loongarch use the generic entry. Convert arm64 to use the generic entry infrastructure from kernel/entry/*. The generic entry makes maintainers' work easier and codes more elegant. Switch arm64 to generic IRQ entry first, which removed duplicate 100+ LOC, the next patch will switch arm64 to generic entry completely. Switch to generic entry in two steps according to Mark's suggestion will make it easier to review. The changes are below: - Remove *enter_from/exit_to_kernel_mode(), and wrap with generic irqentry_enter/exit(). Also remove *enter_from/exit_to_user_mode(), and wrap with generic enter_from/exit_to_user_mode(). The front patch 1 ~ 5 try to make it easier to make this switch. And the patch 14 split the generic irq entry and generic syscall to make this patch more single and concentrated in switching to generic IRQ entry. - Remove arm64_enter/exit_nmi() and use generic irqentry_nmi_enter/exit(). - Remove PREEMPT_DYNAMIC code, as generic entry do the same thing if arm64 implement arch_irqentry_exit_need_resched(). The front patch 6 ~ 13 and patch 15 try to make it closer to the generic implementation. 
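For reviewers, the end result of this switch for the EL1 (kernel mode) paths can be
summarised by the following simplified sketch. It only restates what the diff below
does (all function and helper names are taken from the patch itself); instrumentation
annotations and comments are trimmed:

	/*
	 * arm64 keeps thin wrappers; the IRQ/RCU/lockdep state tracking
	 * now lives in the generic irqentry_enter()/irqentry_exit().
	 */
	static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
	{
		irqentry_state_t state = irqentry_enter(regs);	/* generic */

		mte_check_tfsr_entry();
		mte_disable_tco_entry(current);

		return state;
	}

	static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
						irqentry_state_t state)
	{
		mte_check_tfsr_exit();
		irqentry_exit(regs, state);			/* generic */
	}

The conditional preemption formerly done by arm64's private copy of
raw_irqentry_exit_cond_resched() now happens inside the generic irqentry_exit(),
which consults the arch_irqentry_exit_need_resched() hook introduced in the
previous patch.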
Suggested-by: Mark Rutland Signed-off-by: Jinjie Ruan --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/entry-common.h | 64 ++++++ arch/arm64/include/asm/preempt.h | 6 - arch/arm64/include/asm/ptrace.h | 7 - arch/arm64/kernel/entry-common.c | 303 ++++++-------------------- arch/arm64/kernel/signal.c | 3 +- 6 files changed, 130 insertions(+), 254 deletions(-) create mode 100644 arch/arm64/include/asm/entry-common.h diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 232dcade2783..4545017cfd01 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -146,6 +146,7 @@ config ARM64 select GENERIC_CPU_DEVICES select GENERIC_CPU_VULNERABILITIES select GENERIC_EARLY_IOREMAP + select GENERIC_IRQ_ENTRY select GENERIC_IDLE_POLL_SETUP select GENERIC_IOREMAP select GENERIC_IRQ_IPI diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm= /entry-common.h new file mode 100644 index 000000000000..1cc9d966a6c3 --- /dev/null +++ b/arch/arm64/include/asm/entry-common.h @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _ASM_ARM64_ENTRY_COMMON_H +#define _ASM_ARM64_ENTRY_COMMON_H + +#include + +#include +#include +#include +#include + +#define ARCH_EXIT_TO_USER_MODE_WORK (_TIF_MTE_ASYNC_FAULT | _TIF_FOREIGN_F= PSTATE) + +static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *re= gs, + unsigned long ti_work) +{ + if (ti_work & _TIF_MTE_ASYNC_FAULT) { + clear_thread_flag(TIF_MTE_ASYNC_FAULT); + send_sig_fault(SIGSEGV, SEGV_MTEAERR, (void __user *)NULL, current); + } + + if (ti_work & _TIF_FOREIGN_FPSTATE) + fpsimd_restore_current_state(); +} + +#define arch_exit_to_user_mode_work arch_exit_to_user_mode_work + +static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, + unsigned long ti_work) +{ + local_daif_mask(); +} + +#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare + +static inline bool arch_irqentry_exit_need_resched(void) +{ + /* + * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC + * priority masking is used the GIC irqchip driver will clear DAIF.IF + * using gic_arch_enable_irqs() for normal IRQs. If anything is set in + * DAIF we must have handled an NMI, so skip preemption. + */ + if (system_uses_irq_prio_masking() && read_sysreg(daif)) + return false; + + /* + * Preempting a task from an IRQ means we leave copies of PSTATE + * on the stack. cpufeature's enable calls may modify PSTATE, but + * resuming one of these preempted tasks would undo those changes. + * + * Only allow a task to be preempted once cpufeatures have been + * enabled. 
+ */ + if (!system_capabilities_finalized()) + return false; + + return true; +} + +#define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched + +#endif /* _ASM_ARM64_ENTRY_COMMON_H */ diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/pree= mpt.h index 0f0ba250efe8..932ea4b62042 100644 --- a/arch/arm64/include/asm/preempt.h +++ b/arch/arm64/include/asm/preempt.h @@ -2,7 +2,6 @@ #ifndef __ASM_PREEMPT_H #define __ASM_PREEMPT_H =20 -#include #include =20 #define PREEMPT_NEED_RESCHED BIT(32) @@ -85,22 +84,17 @@ static inline bool should_resched(int preempt_offset) void preempt_schedule(void); void preempt_schedule_notrace(void); =20 -void raw_irqentry_exit_cond_resched(void); #ifdef CONFIG_PREEMPT_DYNAMIC =20 -DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); void dynamic_preempt_schedule(void); #define __preempt_schedule() dynamic_preempt_schedule() void dynamic_preempt_schedule_notrace(void); #define __preempt_schedule_notrace() dynamic_preempt_schedule_notrace() -void dynamic_irqentry_exit_cond_resched(void); -#define irqentry_exit_cond_resched() dynamic_irqentry_exit_cond_resched() =20 #else /* CONFIG_PREEMPT_DYNAMIC */ =20 #define __preempt_schedule() preempt_schedule() #define __preempt_schedule_notrace() preempt_schedule_notrace() -#define irqentry_exit_cond_resched() raw_irqentry_exit_cond_resched() =20 #endif /* CONFIG_PREEMPT_DYNAMIC */ #endif /* CONFIG_PREEMPTION */ diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrac= e.h index 5156c0d5fa20..f14c2adc239a 100644 --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -149,13 +149,6 @@ static inline unsigned long pstate_to_compat_psr(const= unsigned long pstate) return psr; } =20 -typedef struct irqentry_state { - union { - bool exit_rcu; - bool lockdep; - }; -} irqentry_state_t; - /* * This struct defines the way the registers are stored on the stack durin= g an * exception. struct user_pt_regs must form a prefix of struct pt_regs. diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-com= mon.c index 152216201f84..55fee0960fca 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -6,6 +6,7 @@ */ =20 #include +#include #include #include #include @@ -38,71 +39,13 @@ */ static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *reg= s) { - irqentry_state_t ret =3D { - .exit_rcu =3D false, - }; - - if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) { - lockdep_hardirqs_off(CALLER_ADDR0); - ct_irq_enter(); - trace_hardirqs_off_finish(); - - ret.exit_rcu =3D true; - return ret; - } - - lockdep_hardirqs_off(CALLER_ADDR0); - rcu_irq_enter_check_tick(); - trace_hardirqs_off_finish(); + irqentry_state_t state =3D irqentry_enter(regs); =20 mte_check_tfsr_entry(); mte_disable_tco_entry(current); =20 - return ret; -} - -static inline bool arm64_irqentry_exit_need_resched(void) -{ - /* - * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC - * priority masking is used the GIC irqchip driver will clear DAIF.IF - * using gic_arch_enable_irqs() for normal IRQs. If anything is set in - * DAIF we must have handled an NMI, so skip preemption. - */ - if (system_uses_irq_prio_masking() && read_sysreg(daif)) - return false; - - /* - * Preempting a task from an IRQ means we leave copies of PSTATE - * on the stack. cpufeature's enable calls may modify PSTATE, but - * resuming one of these preempted tasks would undo those changes. 
- * - * Only allow a task to be preempted once cpufeatures have been - * enabled. - */ - if (!system_capabilities_finalized()) - return false; - - return true; -} - -void raw_irqentry_exit_cond_resched(void) -{ - if (!preempt_count()) { - if (need_resched() && arm64_irqentry_exit_need_resched()) - preempt_schedule_irq(); - } -} - -#ifdef CONFIG_PREEMPT_DYNAMIC -DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); -void dynamic_irqentry_exit_cond_resched(void) -{ - if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) - return; - raw_irqentry_exit_cond_resched(); + return state; } -#endif =20 /* * Handle IRQ/context state management when exiting to kernel mode. @@ -116,26 +59,7 @@ static void noinstr exit_to_kernel_mode(struct pt_regs = *regs, irqentry_state_t state) { mte_check_tfsr_exit(); - - lockdep_assert_irqs_disabled(); - - if (!regs_irqs_disabled(regs)) { - if (state.exit_rcu) { - trace_hardirqs_on_prepare(); - lockdep_hardirqs_on_prepare(); - ct_irq_exit(); - lockdep_hardirqs_on(CALLER_ADDR0); - return; - } - - if (IS_ENABLED(CONFIG_PREEMPTION)) - irqentry_exit_cond_resched(); - - trace_hardirqs_on(); - } else { - if (state.exit_rcu) - ct_irq_exit(); - } + irqentry_exit(regs, state); } =20 /* @@ -143,127 +67,26 @@ static void noinstr exit_to_kernel_mode(struct pt_reg= s *regs, * Before this function is called it is not safe to call regular kernel co= de, * instrumentable code, or any code which may trigger an exception. */ -static __always_inline void enter_from_user_mode(struct pt_regs *regs) +static __always_inline void arm64_enter_from_user_mode(struct pt_regs *reg= s) { - lockdep_hardirqs_off(CALLER_ADDR0); - CT_WARN_ON(ct_state() !=3D CT_STATE_USER); - user_exit_irqoff(); - trace_hardirqs_off_finish(); + enter_from_user_mode(regs); mte_disable_tco_entry(current); } =20 -/* - * Handle IRQ/context state management when exiting to user mode. - * After this function returns it is not safe to call regular kernel code, - * instrumentable code, or any code which may trigger an exception. 
- */ -static __always_inline void __exit_to_user_mode(void) -{ - trace_hardirqs_on_prepare(); - lockdep_hardirqs_on_prepare(); - user_enter_irqoff(); - lockdep_hardirqs_on(CALLER_ADDR0); -} - -static void do_notify_resume(struct pt_regs *regs, unsigned long thread_fl= ags) +static __always_inline void arm64_exit_to_user_mode(struct pt_regs *regs) { - do { - local_irq_enable(); - - if (thread_flags & _TIF_NEED_RESCHED) - schedule(); - - if (thread_flags & _TIF_UPROBE) - uprobe_notify_resume(regs); - - if (thread_flags & _TIF_MTE_ASYNC_FAULT) { - clear_thread_flag(TIF_MTE_ASYNC_FAULT); - send_sig_fault(SIGSEGV, SEGV_MTEAERR, - (void __user *)NULL, current); - } - - if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) - do_signal(regs); - - if (thread_flags & _TIF_NOTIFY_RESUME) - resume_user_mode_work(regs); - - if (thread_flags & _TIF_FOREIGN_FPSTATE) - fpsimd_restore_current_state(); - - local_irq_disable(); - thread_flags =3D read_thread_flags(); - } while (thread_flags & _TIF_WORK_MASK); -} - -static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs) -{ - unsigned long flags; - local_irq_disable(); =20 - flags =3D read_thread_flags(); - if (unlikely(flags & _TIF_WORK_MASK)) - do_notify_resume(regs, flags); - - local_daif_mask(); - - lockdep_sys_exit(); -} - -static __always_inline void exit_to_user_mode(struct pt_regs *regs) -{ + instrumentation_begin(); exit_to_user_mode_prepare(regs); + instrumentation_end(); mte_check_tfsr_exit(); - __exit_to_user_mode(); + exit_to_user_mode(); } =20 asmlinkage void noinstr asm_exit_to_user_mode(struct pt_regs *regs) { - exit_to_user_mode(regs); -} - -/* - * Handle IRQ/context state management when entering an NMI from user/kern= el - * mode. Before this function is called it is not safe to call regular ker= nel - * code, instrumentable code, or any code which may trigger an exception. - */ -static noinstr irqentry_state_t arm64_enter_nmi(struct pt_regs *regs) -{ - irqentry_state_t irq_state; - - irq_state.lockdep =3D lockdep_hardirqs_enabled(); - - __nmi_enter(); - lockdep_hardirqs_off(CALLER_ADDR0); - lockdep_hardirq_enter(); - ct_nmi_enter(); - - trace_hardirqs_off_finish(); - ftrace_nmi_enter(); - - return irq_state; -} - -/* - * Handle IRQ/context state management when exiting an NMI from user/kernel - * mode. After this function returns it is not safe to call regular kernel - * code, instrumentable code, or any code which may trigger an exception. 
- */ -static void noinstr arm64_exit_nmi(struct pt_regs *regs, - irqentry_state_t irq_state) -{ - ftrace_nmi_exit(); - if (irq_state.lockdep) { - trace_hardirqs_on_prepare(); - lockdep_hardirqs_on_prepare(); - } - - ct_nmi_exit(); - lockdep_hardirq_exit(); - if (irq_state.lockdep) - lockdep_hardirqs_on(CALLER_ADDR0); - __nmi_exit(); + arm64_exit_to_user_mode(regs); } =20 /* @@ -322,7 +145,7 @@ extern void (*handle_arch_fiq)(struct pt_regs *); static void noinstr __panic_unhandled(struct pt_regs *regs, const char *ve= ctor, unsigned long esr) { - arm64_enter_nmi(regs); + irqentry_nmi_enter(regs); =20 console_verbose(); =20 @@ -556,10 +379,10 @@ asmlinkage void noinstr el1h_64_sync_handler(struct p= t_regs *regs) static __always_inline void __el1_pnmi(struct pt_regs *regs, void (*handler)(struct pt_regs *)) { - irqentry_state_t state =3D arm64_enter_nmi(regs); + irqentry_state_t state =3D irqentry_nmi_enter(regs); =20 do_interrupt_handler(regs, handler); - arm64_exit_nmi(regs, state); + irqentry_nmi_exit(regs, state); } =20 static __always_inline void __el1_irq(struct pt_regs *regs, @@ -600,19 +423,19 @@ asmlinkage void noinstr el1h_64_error_handler(struct = pt_regs *regs) irqentry_state_t state; =20 local_daif_restore(DAIF_ERRCTX); - state =3D arm64_enter_nmi(regs); + state =3D irqentry_nmi_enter(regs); do_serror(regs, esr); - arm64_exit_nmi(regs, state); + irqentry_nmi_exit(regs, state); } =20 static void noinstr el0_da(struct pt_regs *regs, unsigned long esr) { unsigned long far =3D read_sysreg(far_el1); =20 - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_mem_abort(far, esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr) @@ -627,50 +450,50 @@ static void noinstr el0_ia(struct pt_regs *regs, unsi= gned long esr) if (!is_ttbr0_addr(far)) arm64_apply_bp_hardening(); =20 - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_mem_abort(far, esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_fpsimd_acc(esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_sve_acc(esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_sme_acc(esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_fpsimd_exc(esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_el0_sys(esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr) @@ -680,58 
+503,58 @@ static void noinstr el0_pc(struct pt_regs *regs, unsi= gned long esr) if (!is_ttbr0_addr(instruction_pointer(regs))) arm64_apply_bp_hardening(); =20 - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_sp_pc_abort(far, esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_sp_pc_abort(regs->sp, esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_el0_undef(regs, esr); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_bti(struct pt_regs *regs) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_el0_bti(regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_el0_mops(regs, esr); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_gcs(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_el0_gcs(regs, esr); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); bad_el0_sync(regs, 0, esr); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr) @@ -739,28 +562,28 @@ static void noinstr el0_dbg(struct pt_regs *regs, uns= igned long esr) /* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */ unsigned long far =3D read_sysreg(far_el1); =20 - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); do_debug_exception(far, esr, regs); local_daif_restore(DAIF_PROCCTX); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_svc(struct pt_regs *regs) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); cortex_a76_erratum_1463225_svc_handler(); fp_user_discard(); local_daif_restore(DAIF_PROCCTX); do_el0_svc(regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_el0_fpac(regs, esr); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs) @@ -828,7 +651,7 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_= regs *regs) static void noinstr el0_interrupt(struct pt_regs *regs, void (*handler)(struct pt_regs *)) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); =20 write_sysreg(DAIF_PROCCTX_NOIRQ, daif); =20 @@ -839,7 +662,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs, do_interrupt_handler(regs, handler); irq_exit_rcu(); =20 - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr 
__el0_irq_handler_common(struct pt_regs *regs) @@ -867,13 +690,13 @@ static void noinstr __el0_error_handler_common(struct= pt_regs *regs) unsigned long esr =3D read_sysreg(esr_el1); irqentry_state_t state; =20 - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_ERRCTX); - state =3D arm64_enter_nmi(regs); + state =3D irqentry_nmi_enter(regs); do_serror(regs, esr); - arm64_exit_nmi(regs, state); + irqentry_nmi_exit(regs, state); local_daif_restore(DAIF_PROCCTX); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs) @@ -884,19 +707,19 @@ asmlinkage void noinstr el0t_64_error_handler(struct = pt_regs *regs) #ifdef CONFIG_COMPAT static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); local_daif_restore(DAIF_PROCCTX); do_el0_cp15(esr, regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 static void noinstr el0_svc_compat(struct pt_regs *regs) { - enter_from_user_mode(regs); + arm64_enter_from_user_mode(regs); cortex_a76_erratum_1463225_svc_handler(); local_daif_restore(DAIF_PROCCTX); do_el0_svc_compat(regs); - exit_to_user_mode(regs); + arm64_exit_to_user_mode(regs); } =20 asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs) @@ -970,7 +793,7 @@ asmlinkage void noinstr __noreturn handle_bad_stack(str= uct pt_regs *regs) unsigned long esr =3D read_sysreg(esr_el1); unsigned long far =3D read_sysreg(far_el1); =20 - arm64_enter_nmi(regs); + irqentry_nmi_enter(regs); panic_bad_stack(regs, esr, far); } #endif /* CONFIG_VMAP_STACK */ @@ -1004,9 +827,9 @@ __sdei_handler(struct pt_regs *regs, struct sdei_regis= tered_event *arg) else if (cpu_has_pan()) set_pstate_pan(0); =20 - state =3D arm64_enter_nmi(regs); + state =3D irqentry_nmi_enter(regs); ret =3D do_sdei_event(regs, arg); - arm64_exit_nmi(regs, state); + irqentry_nmi_exit(regs, state); =20 return ret; } diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 2eb2e97a934f..04b20c2f6cda 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -1540,7 +1541,7 @@ static void handle_signal(struct ksignal *ksig, struc= t pt_regs *regs) * the kernel can handle, and then we build all the user-level signal hand= ling * stack-frames in one go after that. 
*/ -void do_signal(struct pt_regs *regs) +void arch_do_signal_or_restart(struct pt_regs *regs) { unsigned long continue_addr =3D 0, restart_addr =3D 0; int retval =3D 0; --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=quarantine dis=quarantine) header.from=huawei.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1729850965856547.1265875906171; Fri, 25 Oct 2024 03:09:25 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.825862.1240310 (Exim 4.92) (envelope-from ) id 1t4HFk-0006Cs-1R; Fri, 25 Oct 2024 10:09:08 +0000 Received: by outflank-mailman (output) from mailman id 825862.1240310; Fri, 25 Oct 2024 10:09:07 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFj-0006CH-Rr; Fri, 25 Oct 2024 10:09:07 +0000 Received: by outflank-mailman (input) for mailman id 825862; Fri, 25 Oct 2024 10:09:06 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t4HFi-00014t-EZ for xen-devel@lists.xenproject.org; Fri, 25 Oct 2024 10:09:06 +0000 Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id 268e3d55-92b9-11ef-a0bf-8be0dac302b0; Fri, 25 Oct 2024 12:09:03 +0200 (CEST) Received: from mail.maildlp.com (unknown [172.19.163.252]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4XZdjm2S6Hz1T8p8; Fri, 25 Oct 2024 18:06:52 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 8ADF81800A5; Fri, 25 Oct 2024 18:08:56 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:54 +0800 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 268e3d55-92b9-11ef-a0bf-8be0dac302b0 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 17/19] entry: Add syscall arch functions to use generic syscall for arm64 Date: Fri, 25 Oct 2024 18:06:58 +0800 Message-ID: <20241025100700.3714552-18-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [10.90.53.73] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) X-ZM-MESSAGEID: 
1729850966771116600 Content-Type: text/plain; charset="utf-8" Add some syscall arch functions to support arm64 to use generic syscall code, which do not affect existing architectures that use generic entry: - arch_pre/post_report_syscall_entry/exit(). Also make syscall_exit_work() not static and move report_single_step() to thread_info.h, which can be used by arm64 later. Suggested-by: Mark Rutland Suggested-by: Kevin Brodsky Suggested-by: Thomas Gleixner Signed-off-by: Jinjie Ruan --- include/linux/entry-common.h | 1 + include/linux/thread_info.h | 13 +++++ kernel/entry/syscall-common.c | 100 ++++++++++++++++++++++++++++++---- 3 files changed, 103 insertions(+), 11 deletions(-) diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h index 1ae3143d4b12..39a2d41af05e 100644 --- a/include/linux/entry-common.h +++ b/include/linux/entry-common.h @@ -178,4 +178,5 @@ void syscall_exit_to_user_mode_work(struct pt_regs *reg= s); */ void syscall_exit_to_user_mode(struct pt_regs *regs); =20 +void syscall_exit_work(struct pt_regs *regs, unsigned long work); #endif diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h index 9ea0b28068f4..062de9666ef3 100644 --- a/include/linux/thread_info.h +++ b/include/linux/thread_info.h @@ -55,6 +55,19 @@ enum syscall_work_bit { #define SYSCALL_WORK_SYSCALL_AUDIT BIT(SYSCALL_WORK_BIT_SYSCALL_AUDIT) #define SYSCALL_WORK_SYSCALL_USER_DISPATCH BIT(SYSCALL_WORK_BIT_SYSCALL_US= ER_DISPATCH) #define SYSCALL_WORK_SYSCALL_EXIT_TRAP BIT(SYSCALL_WORK_BIT_SYSCALL_EXIT_T= RAP) + +/* + * If SYSCALL_EMU is set, then the only reason to report is when + * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall + * instruction has been already reported in syscall_enter_from_user_mode(). + */ +static inline bool report_single_step(unsigned long work) +{ + if (work & SYSCALL_WORK_SYSCALL_EMU) + return false; + + return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP; +} #endif =20 #include diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c index 0eb036986ad4..73f87d09e04e 100644 --- a/kernel/entry/syscall-common.c +++ b/kernel/entry/syscall-common.c @@ -17,6 +17,49 @@ static inline void syscall_enter_audit(struct pt_regs *r= egs, long syscall) } } =20 +/** + * arch_pre_report_syscall_entry - Architecture specific work before + * report_syscall_entry(). + * + * Invoked from syscall_trace_enter() to prepare for ptrace_report_syscall= _entry(). + * Defaults to NOP. + * + * The main purpose is for saving a general purpose register clobbered + * in the tracee. + */ +static inline unsigned long arch_pre_report_syscall_entry(struct pt_regs *= regs); + +#ifndef arch_pre_report_syscall_entry +static inline unsigned long arch_pre_report_syscall_entry(struct pt_regs *= regs) +{ + return 0; +} +#endif + +/** + * arch_post_report_syscall_entry - Architecture specific work after + * report_syscall_entry(). + * + * Invoked from syscall_trace_enter() after calling ptrace_report_syscall_= entry(). + * Defaults to NOP. + * + * The main purpose is for restoring a general purpose register clobbered + * in the trace saved in arch_pre_report_syscall_entry(), also it can + * do something arch-specific according to the return value of + * ptrace_report_syscall_entry(). 
+ */ +static inline void arch_post_report_syscall_entry(struct pt_regs *regs, + unsigned long saved_reg, + long ret); + +#ifndef arch_post_report_syscall_entry +static inline void arch_post_report_syscall_entry(struct pt_regs *regs, + unsigned long saved_reg, + long ret) +{ +} +#endif + long syscall_trace_enter(struct pt_regs *regs, long syscall, unsigned long work) { @@ -34,7 +77,9 @@ long syscall_trace_enter(struct pt_regs *regs, long sysca= ll, =20 /* Handle ptrace */ if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) { + unsigned long saved_reg =3D arch_pre_report_syscall_entry(regs); ret =3D ptrace_report_syscall_entry(regs); + arch_post_report_syscall_entry(regs, saved_reg, ret); if (ret || (work & SYSCALL_WORK_SYSCALL_EMU)) return -1L; } @@ -71,20 +116,50 @@ noinstr void syscall_enter_from_user_mode_prepare(stru= ct pt_regs *regs) instrumentation_end(); } =20 -/* - * If SYSCALL_EMU is set, then the only reason to report is when - * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall - * instruction has been already reported in syscall_enter_from_user_mode(). +/** + * arch_pre_report_syscall_exit - Architecture specific work before + * report_syscall_exit(). + * + * Invoked from syscall_exit_work() to prepare for ptrace_report_syscall_e= xit(). + * Defaults to NOP. + * + * The main purpose is for saving a general purpose register clobbered + * in the trace. */ -static inline bool report_single_step(unsigned long work) -{ - if (work & SYSCALL_WORK_SYSCALL_EMU) - return false; +static inline unsigned long arch_pre_report_syscall_exit(struct pt_regs *r= egs, + unsigned long work); =20 - return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP; +#ifndef arch_pre_report_syscall_exit +static inline unsigned long arch_pre_report_syscall_exit(struct pt_regs *r= egs, + unsigned long work) +{ + return 0; } +#endif + +/** + * arch_post_report_syscall_exit - Architecture specific work after + * report_syscall_exit(). + * + * Invoked from syscall_exit_work() after calling ptrace_report_syscall_ex= it(). + * Defaults to NOP. + * + * The main purpose is for restoring a general purpose register clobbered + * in the trace saved in arch_pre_report_syscall_exit(). 
+ */ +static inline void arch_post_report_syscall_exit(struct pt_regs *regs, + unsigned long saved_reg, + unsigned long work); + +#ifndef arch_post_report_syscall_exit +static inline void arch_post_report_syscall_exit(struct pt_regs *regs, + unsigned long saved_reg, + unsigned long work) +{ +} +#endif =20 -static void syscall_exit_work(struct pt_regs *regs, unsigned long work) +void syscall_exit_work(struct pt_regs *regs, unsigned long work) { bool step; =20 @@ -107,8 +182,11 @@ static void syscall_exit_work(struct pt_regs *regs, un= signed long work) trace_sys_exit(regs, syscall_get_return_value(current, regs)); =20 step =3D report_single_step(work); - if (step || work & SYSCALL_WORK_SYSCALL_TRACE) + if (step || work & SYSCALL_WORK_SYSCALL_TRACE) { + unsigned long saved_reg =3D arch_pre_report_syscall_exit(regs, work); ptrace_report_syscall_exit(regs, step); + arch_post_report_syscall_exit(regs, saved_reg, work); + } } =20 /* --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga06-in.huawei.com (szxga06-in.huawei.com [45.249.212.32]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1B35320515D for ; Fri, 25 Oct 2024 10:08:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.32 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850942; cv=none; b=Vhu0bMMfeoE97/qG7XCNpurZdvvvADIZH1s38d2goPMwEPjmMX5VUsPBMoWmIvRqRgjZrKxM9xxqTunnkKUxlmk06VKynuEtAt3Yupd/8u4v5IN3R/RJkwDZ1Ojd7DRp/AGn3RTQk7lAALpD/mUt5ZORc3tm5k8D8W4syv7yYfI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850942; c=relaxed/simple; bh=Aw3f2mt3bp0IxIe7lEqJxyxXlJMyFkvyan/JtI0fwKY=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=i2mkDo8fkxu+VjyKX1ItbK4J0IhfBPpPxpS5lU5avbwQ/Y09ryMgi5bvCRKuiz0JlSdXl4PR7r7TQNcaSK1ResM6tAhYNAIbUEBziskXQxPK+EgmUc7A3vHc31iRD3ZfcjZ9G6cvRgvs+ZwTNAWgeTXvqUyJfHNa1HwoKDknPDk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.32 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.88.163]) by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4XZdmK4NFSz1ynJN; Fri, 25 Oct 2024 18:09:05 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id E3CE1180043; Fri, 25 Oct 2024 18:08:57 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:56 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 18/19] arm64/ptrace: Split report_syscall() into separate enter and exit functions Date: Fri, 25 Oct 2024 18:06:59 +0800 Message-ID: <20241025100700.3714552-19-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org 
List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" Split report_syscall() to two separate enter and exit functions. So it will be more clear when arm64 switch to generic entry. No functional changes. Suggested-by: Mark Rutland Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/ptrace.c | 29 ++++++++++++++++++++--------- 1 file changed, 20 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 6c1dcfe6d25a..6ea303ab9e22 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -2290,7 +2290,7 @@ enum ptrace_syscall_dir { PTRACE_SYSCALL_EXIT, }; =20 -static void report_syscall(struct pt_regs *regs, enum ptrace_syscall_dir d= ir) +static void report_syscall_enter(struct pt_regs *regs) { int regno; unsigned long saved_reg; @@ -2313,13 +2313,24 @@ static void report_syscall(struct pt_regs *regs, en= um ptrace_syscall_dir dir) */ regno =3D (is_compat_task() ? 12 : 7); saved_reg =3D regs->regs[regno]; - regs->regs[regno] =3D dir; + regs->regs[regno] =3D PTRACE_SYSCALL_ENTER; =20 - if (dir =3D=3D PTRACE_SYSCALL_ENTER) { - if (ptrace_report_syscall_entry(regs)) - forget_syscall(regs); - regs->regs[regno] =3D saved_reg; - } else if (!test_thread_flag(TIF_SINGLESTEP)) { + if (ptrace_report_syscall_entry(regs)) + forget_syscall(regs); + regs->regs[regno] =3D saved_reg; +} + +static void report_syscall_exit(struct pt_regs *regs) +{ + int regno; + unsigned long saved_reg; + + /* See comment for report_syscall_enter() */ + regno =3D (is_compat_task() ? 12 : 7); + saved_reg =3D regs->regs[regno]; + regs->regs[regno] =3D PTRACE_SYSCALL_EXIT; + + if (!test_thread_flag(TIF_SINGLESTEP)) { ptrace_report_syscall_exit(regs, 0); regs->regs[regno] =3D saved_reg; } else { @@ -2339,7 +2350,7 @@ int syscall_trace_enter(struct pt_regs *regs) unsigned long flags =3D read_thread_flags(); =20 if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) { - report_syscall(regs, PTRACE_SYSCALL_ENTER); + report_syscall_enter(regs); if (flags & _TIF_SYSCALL_EMU) return NO_SYSCALL; } @@ -2367,7 +2378,7 @@ void syscall_trace_exit(struct pt_regs *regs) trace_sys_exit(regs, syscall_get_return_value(current, regs)); =20 if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP)) - report_syscall(regs, PTRACE_SYSCALL_EXIT); + report_syscall_exit(regs); =20 rseq_syscall(regs); } --=20 2.34.1 From nobody Wed Dec 4 08:29:59 2024 Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D60B61DD0CB for ; Fri, 25 Oct 2024 10:09:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.255 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850945; cv=none; b=ps3GbdF45ND8q6/P5Mova64ZWEX6pxlnPcP27JOclejzFPhiFmp2sipErsjR6QZPL/xtv5YM7sQEw1iWzftBMXGLiEytmladLplc6lkJC6YKd2pEqi8p4G/slZWJwMA1Cs5S+p+W+0//2zHQQByRP1wyoq7wLFClxGW3FZ9kqPM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729850945; c=relaxed/simple; bh=n0JOFGOTPnC4O0tJjAXj36qW/w2SrVqcqljKADOnDNk=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; 
b=IJpA+76kMDMlfLeTtultlBbyUTzRZ17ivNHy9CubqKJAZCNWNrAiACxF5yr8/8Oi8hs5KwjxC0DvgoUrqJXIIi8hUJcFC4iu++dPaOYuVDflCSHzQ++lnOXMxTPOPwQ6z38gSC0cg253V20E398I2dgG1eugFa6Wm4Guravc+Fg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.255 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.88.194]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4XZdjq0g7Xz1T91t; Fri, 25 Oct 2024 18:06:55 +0800 (CST) Received: from kwepemg200008.china.huawei.com (unknown [7.202.181.35]) by mail.maildlp.com (Postfix) with ESMTPS id 495EF1400E3; Fri, 25 Oct 2024 18:08:59 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemg200008.china.huawei.com (7.202.181.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 25 Oct 2024 18:08:57 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH -next v4 19/19] arm64: entry: Convert to generic entry Date: Fri, 25 Oct 2024 18:07:00 +0800 Message-ID: <20241025100700.3714552-20-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241025100700.3714552-1-ruanjinjie@huawei.com> References: <20241025100700.3714552-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemg200008.china.huawei.com (7.202.181.35) Content-Type: text/plain; charset="utf-8" Currently, x86, Riscv, Loongarch use the generic entry. Convert arm64 to use the generic entry infrastructure from kernel/entry/*. The generic entry makes maintainers' work easier and codes more elegant. The changes are below: - Remove TIF_SYSCALL_* flag, _TIF_WORK_MASK, _TIF_SYSCALL_WORK - Remove syscall_trace_enter/exit() and use generic identical functions. Tested ok with following test cases on Qemu cortex-a53 and HiSilicon Kunpeng-920: - Perf tests. - Different `dynamic preempt` mode switch. - Pseudo NMI tests. - Stress-ng CPU stress test. - MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst and all test cases in tools/testing/selftests/arm64/mte/* (Only Qemu). 
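With this conversion the ptrace syscall reporting goes through the generic path added
in patch 17, with arm64 only providing the arch_pre/post_report_syscall_*() hooks shown
in the diff below. As a reading aid, the entry side of the generic code is roughly the
following simplified excerpt of syscall_trace_enter() (all names are from this series;
seccomp, audit and tracepoint handling omitted):

	/* kernel/entry/syscall-common.c, simplified excerpt */
	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
		/* arm64 hook: save x7 (r12 for compat) and write PTRACE_SYSCALL_ENTER */
		unsigned long saved_reg = arch_pre_report_syscall_entry(regs);

		ret = ptrace_report_syscall_entry(regs);

		/* arm64 hook: forget the syscall on failure, then restore the saved register */
		arch_post_report_syscall_entry(regs, saved_reg, ret);
		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
			return -1L;
	}

The exit side is symmetrical: the generic syscall_exit_work() brackets
ptrace_report_syscall_exit() with arch_pre/post_report_syscall_exit(), which arm64
uses to keep the x7/r12 clobbering ABI and the pseudo-step behaviour described in
the comment in arch_pre_report_syscall_entry().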
Suggested-by: Mark Rutland
Signed-off-by: Jinjie Ruan
---
 arch/arm64/Kconfig                    |   2 +-
 arch/arm64/include/asm/entry-common.h |  85 ++++++++++++++++++++++
 arch/arm64/include/asm/syscall.h      |   6 +-
 arch/arm64/include/asm/thread_info.h  |  23 +-----
 arch/arm64/kernel/ptrace.c            | 101 --------------------------
 arch/arm64/kernel/signal.c            |   2 +-
 arch/arm64/kernel/syscall.c           |  18 +++--
 7 files changed, 103 insertions(+), 134 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4545017cfd01..89d46d0fb18b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -146,7 +146,7 @@ config ARM64
 	select GENERIC_CPU_DEVICES
 	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
-	select GENERIC_IRQ_ENTRY
+	select GENERIC_ENTRY
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IOREMAP
 	select GENERIC_IRQ_IPI
diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm/entry-common.h
index 1cc9d966a6c3..04a31b4fc4fd 100644
--- a/arch/arm64/include/asm/entry-common.h
+++ b/arch/arm64/include/asm/entry-common.h
@@ -10,6 +10,11 @@
 #include
 #include
 
+enum ptrace_syscall_dir {
+	PTRACE_SYSCALL_ENTER = 0,
+	PTRACE_SYSCALL_EXIT,
+};
+
 #define ARCH_EXIT_TO_USER_MODE_WORK (_TIF_MTE_ASYNC_FAULT | _TIF_FOREIGN_FPSTATE)
 
 static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
@@ -61,4 +66,84 @@ static inline bool arch_irqentry_exit_need_resched(void)
 
 #define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched
 
+static inline unsigned long arch_pre_report_syscall_entry(struct pt_regs *regs)
+{
+	unsigned long saved_reg;
+	int regno;
+
+	/*
+	 * We have some ABI weirdness here in the way that we handle syscall
+	 * exit stops because we indicate whether or not the stop has been
+	 * signalled from syscall entry or syscall exit by clobbering a general
+	 * purpose register (ip/r12 for AArch32, x7 for AArch64) in the tracee
+	 * and restoring its old value after the stop. This means that:
+	 *
+	 * - Any writes by the tracer to this register during the stop are
+	 *   ignored/discarded.
+	 *
+	 * - The actual value of the register is not available during the stop,
+	 *   so the tracer cannot save it and restore it later.
+	 *
+	 * - Syscall stops behave differently to seccomp and pseudo-step traps
+	 *   (the latter do not nobble any registers).
+	 */
+	regno = (is_compat_task() ? 12 : 7);
+	saved_reg = regs->regs[regno];
+	regs->regs[regno] = PTRACE_SYSCALL_ENTER;
+
+	return saved_reg;
+}
+
+#define arch_pre_report_syscall_entry arch_pre_report_syscall_entry
+
+static inline void arch_post_report_syscall_entry(struct pt_regs *regs,
+						  unsigned long saved_reg, long ret)
+{
+	int regno = (is_compat_task() ? 12 : 7);
+
+	if (ret)
+		forget_syscall(regs);
+
+	regs->regs[regno] = saved_reg;
+}
+
+#define arch_post_report_syscall_entry arch_post_report_syscall_entry
+
+static inline unsigned long arch_pre_report_syscall_exit(struct pt_regs *regs,
+							  unsigned long work)
+{
+	unsigned long saved_reg;
+	int regno;
+
+	/* See comment for arch_pre_report_syscall_entry() */
+	regno = (is_compat_task() ? 12 : 7);
+	saved_reg = regs->regs[regno];
+	regs->regs[regno] = PTRACE_SYSCALL_EXIT;
+
+	if (report_single_step(work)) {
+		/*
+		 * Signal a pseudo-step exception since we are stepping but
+		 * tracer modifications to the registers may have rewound the
+		 * state machine.
+		 */
+		regs->regs[regno] = saved_reg;
+	}
+
+	return saved_reg;
+}
+
+#define arch_pre_report_syscall_exit arch_pre_report_syscall_exit
+
+static inline void arch_post_report_syscall_exit(struct pt_regs *regs,
+						 unsigned long saved_reg,
+						 unsigned long work)
+{
+	int regno = (is_compat_task() ? 12 : 7);
+
+	if (!report_single_step(work))
+		regs->regs[regno] = saved_reg;
+}
+
+#define arch_post_report_syscall_exit arch_post_report_syscall_exit
+
 #endif /* _ASM_ARM64_ENTRY_COMMON_H */
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index ab8e14b96f68..9891b15da4c3 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -85,7 +85,9 @@ static inline int syscall_get_arch(struct task_struct *task)
 	return AUDIT_ARCH_AARCH64;
 }
 
-int syscall_trace_enter(struct pt_regs *regs);
-void syscall_trace_exit(struct pt_regs *regs);
+static inline bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
+{
+	return false;
+}
 
 #endif /* __ASM_SYSCALL_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 1114c1c3300a..543fdb00d713 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -43,6 +43,7 @@ struct thread_info {
 	void		*scs_sp;
 #endif
 	u32		cpu;
+	unsigned long	syscall_work;	/* SYSCALL_WORK_ flags */
 };
 
 #define thread_saved_pc(tsk)	\
@@ -64,11 +65,6 @@ void arch_setup_new_exec(void);
 #define TIF_UPROBE		4	/* uprobe breakpoint or singlestep */
 #define TIF_MTE_ASYNC_FAULT	5	/* MTE Asynchronous Tag Check Fault */
 #define TIF_NOTIFY_SIGNAL	6	/* signal notifications exist */
-#define TIF_SYSCALL_TRACE	8	/* syscall trace active */
-#define TIF_SYSCALL_AUDIT	9	/* syscall auditing */
-#define TIF_SYSCALL_TRACEPOINT	10	/* syscall tracepoint for ftrace */
-#define TIF_SECCOMP		11	/* syscall secure computing */
-#define TIF_SYSCALL_EMU		12	/* syscall emulation active */
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
 #define TIF_FREEZE		19
 #define TIF_RESTORE_SIGMASK	20
@@ -87,28 +83,13 @@ void arch_setup_new_exec(void);
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_FOREIGN_FPSTATE	(1 << TIF_FOREIGN_FPSTATE)
-#define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
-#define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
-#define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
-#define _TIF_SECCOMP		(1 << TIF_SECCOMP)
-#define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
-#define _TIF_UPROBE		(1 << TIF_UPROBE)
-#define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+#define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 #define _TIF_SVE		(1 << TIF_SVE)
 #define _TIF_MTE_ASYNC_FAULT	(1 << TIF_MTE_ASYNC_FAULT)
 #define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
 #define _TIF_TSC_SIGSEGV	(1 << TIF_TSC_SIGSEGV)
 
-#define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
-				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
-				 _TIF_UPROBE | _TIF_MTE_ASYNC_FAULT | \
-				 _TIF_NOTIFY_SIGNAL)
-
-#define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
-				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
-				 _TIF_SYSCALL_EMU)
-
 #ifdef CONFIG_SHADOW_CALL_STACK
 #define INIT_SCS	\
 	.scs_base	= init_shadow_call_stack,	\
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 6ea303ab9e22..0f642ed4dbe4 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -42,9 +42,6 @@
 #include
 #include
 
-#define CREATE_TRACE_POINTS
-#include
-
 struct pt_regs_offset {
 	const char *name;
 	int offset;
@@ -2285,104 +2282,6 @@ long arch_ptrace(struct task_struct *child, long request,
 	return ptrace_request(child, request, addr, data);
 }
 
-enum ptrace_syscall_dir {
-	PTRACE_SYSCALL_ENTER = 0,
-	PTRACE_SYSCALL_EXIT,
-};
-
-static void report_syscall_enter(struct pt_regs *regs)
-{
-	int regno;
-	unsigned long saved_reg;
-
-	/*
-	 * We have some ABI weirdness here in the way that we handle syscall
-	 * exit stops because we indicate whether or not the stop has been
-	 * signalled from syscall entry or syscall exit by clobbering a general
-	 * purpose register (ip/r12 for AArch32, x7 for AArch64) in the tracee
-	 * and restoring its old value after the stop. This means that:
-	 *
-	 * - Any writes by the tracer to this register during the stop are
-	 *   ignored/discarded.
-	 *
-	 * - The actual value of the register is not available during the stop,
-	 *   so the tracer cannot save it and restore it later.
-	 *
-	 * - Syscall stops behave differently to seccomp and pseudo-step traps
-	 *   (the latter do not nobble any registers).
-	 */
-	regno = (is_compat_task() ? 12 : 7);
-	saved_reg = regs->regs[regno];
-	regs->regs[regno] = PTRACE_SYSCALL_ENTER;
-
-	if (ptrace_report_syscall_entry(regs))
-		forget_syscall(regs);
-	regs->regs[regno] = saved_reg;
-}
-
-static void report_syscall_exit(struct pt_regs *regs)
-{
-	int regno;
-	unsigned long saved_reg;
-
-	/* See comment for report_syscall_enter() */
-	regno = (is_compat_task() ? 12 : 7);
-	saved_reg = regs->regs[regno];
-	regs->regs[regno] = PTRACE_SYSCALL_EXIT;
-
-	if (!test_thread_flag(TIF_SINGLESTEP)) {
-		ptrace_report_syscall_exit(regs, 0);
-		regs->regs[regno] = saved_reg;
-	} else {
-		regs->regs[regno] = saved_reg;
-
-		/*
-		 * Signal a pseudo-step exception since we are stepping but
-		 * tracer modifications to the registers may have rewound the
-		 * state machine.
-		 */
-		ptrace_report_syscall_exit(regs, 1);
-	}
-}
-
-int syscall_trace_enter(struct pt_regs *regs)
-{
-	unsigned long flags = read_thread_flags();
-
-	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
-		report_syscall_enter(regs);
-		if (flags & _TIF_SYSCALL_EMU)
-			return NO_SYSCALL;
-	}
-
-	/* Do the secure computing after ptrace; failures should be fast. */
-	if (secure_computing() == -1)
-		return NO_SYSCALL;
-
-	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
-		trace_sys_enter(regs, regs->syscallno);
-
-	audit_syscall_entry(regs->syscallno, regs->orig_x0, regs->regs[1],
-			    regs->regs[2], regs->regs[3]);
-
-	return regs->syscallno;
-}
-
-void syscall_trace_exit(struct pt_regs *regs)
-{
-	unsigned long flags = read_thread_flags();
-
-	audit_syscall_exit(regs);
-
-	if (flags & _TIF_SYSCALL_TRACEPOINT)
-		trace_sys_exit(regs, syscall_get_return_value(current, regs));
-
-	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
-		report_syscall_exit(regs);
-
-	rseq_syscall(regs);
-}
-
 /*
  * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
  * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 04b20c2f6cda..4965cb80e67e 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -8,8 +8,8 @@
 
 #include
 #include
+#include
 #include
-#include
 #include
 #include
 #include
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index c442fcec6b9e..ea818e3d597b 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -2,6 +2,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -65,14 +66,15 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 	choose_random_kstack_offset(get_random_u16());
 }
 
-static inline bool has_syscall_work(unsigned long flags)
+static inline bool has_syscall_work(unsigned long work)
 {
-	return unlikely(flags & _TIF_SYSCALL_WORK);
+	return unlikely(work & SYSCALL_WORK_ENTER);
 }
 
 static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 			   const syscall_fn_t syscall_table[])
 {
+	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
 	unsigned long flags = read_thread_flags();
 
 	regs->orig_x0 = regs->regs[0];
@@ -106,7 +108,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 		return;
 	}
 
-	if (has_syscall_work(flags)) {
+	if (has_syscall_work(work)) {
 		/*
 		 * The de-facto standard way to skip a system call using ptrace
 		 * is to set the system call to -1 (NO_SYSCALL) and set x0 to a
@@ -124,7 +126,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 		 */
 		if (scno == NO_SYSCALL)
 			syscall_set_return_value(current, regs, -ENOSYS, 0);
-		scno = syscall_trace_enter(regs);
+		scno = syscall_trace_enter(regs, regs->syscallno, work);
 		if (scno == NO_SYSCALL)
 			goto trace_exit;
 	}
@@ -136,14 +138,14 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	 * check again. However, if we were tracing entry, then we always trace
 	 * exit regardless, as the old entry assembly did.
	 */
-	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
-		flags = read_thread_flags();
-		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
+	if (!has_syscall_work(work) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
+		work = READ_ONCE(current_thread_info()->syscall_work);
+		if (!has_syscall_work(work) && !report_single_step(work))
 			return;
 	}
 
 trace_exit:
-	syscall_trace_exit(regs);
+	syscall_exit_work(regs, work);
 }
 
 void do_el0_svc(struct pt_regs *regs)
-- 
2.34.1
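[Editor's note] has_syscall_work() now keys off the generic SYSCALL_WORK_ENTER
mask rather than the removed _TIF_SYSCALL_WORK. For reference, the mainline
generic entry headers compose that mask roughly as below (quoted from
include/linux/thread_info.h at the time of writing; the exact set may differ
in this series):

#define SYSCALL_WORK_ENTER	(SYSCALL_WORK_SECCOMP |			\
				 SYSCALL_WORK_SYSCALL_TRACEPOINT |	\
				 SYSCALL_WORK_SYSCALL_TRACE |		\
				 SYSCALL_WORK_SYSCALL_EMU |		\
				 SYSCALL_WORK_SYSCALL_AUDIT |		\
				 SYSCALL_WORK_SYSCALL_USER_DISPATCH)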