From: Liao Chang
Subject: [PATCH v5 1/2] arm64: New exception mask helpers to manage DAIF, PMR and ALLINT
Date: Thu, 22 May 2025 10:56:57 +0000
Message-ID: <20250522105658.1338331-2-liaochang1@huawei.com>
In-Reply-To: <20250522105658.1338331-1-liaochang1@huawei.com>
References: <20250522105658.1338331-1-liaochang1@huawei.com>

This patch refactors the arm64 exception masking helpers to reduce complexity, with no functional change, particularly in preparation for the upcoming FEAT_NMI support [1]. Currently, managing different hardware contexts (DAIF, PMR, ALLINT) within a single set of exception mask/save/unmask/restore helpers is becoming error-prone and adds conditional branching overhead. To address this problem, this patch introduces:
1. A single 64-bit integer that manages all hardware contexts involved in
masking exceptions, both maskable and non-maskable interrupts. Field 'pmr'
stores a software-generated value that is written to the SYS_ICC_PMR_EL1
register when the kernel uses pseudo-NMI. Field 'daif' stores the value of
PSTATE.DAIF, which indicates whether any exception or interrupt is masked.
Field 'allint' stores PSTATE.A when the kernel uses pseudo-NMI;
alternatively, it stores PSTATE.ALLINT when the kernel uses FEAT_NMI, to
indicate that the arm64 superpriority interrupt (NMI) is masked.

2. Instead of one complicated function trying to handle every situation,
there are now distinct sets of specialized exception mask helpers. Each set
is designed for a specific scenario based on how the kernel is configured.
For example:

- When NMI is not supported (CONFIG_ARM64_PSEUDO_NMI and CONFIG_ARM64_NMI
  [1] are not 'y'), the kernel uses the default daif_handler to manipulate
  DAIF only.

- With pseudo-NMI enabled (CONFIG_ARM64_PSEUDO_NMI=y and
  irqchip.gicv3_pseudo_nmi=1), the kernel uses the pmr_handler to manage
  both PMR and DAIF for interrupt masking. The trickiest part is masking
  the NMIs mentioned above.

- For hardware NMI support (CONFIG_ARM64_NMI=y and pseudo-NMI disabled), a
  new cpu_exception_mask_handler could be added to manage DAIF and ALLINT
  for interrupt masking.

These changes make it obvious what the exception mask helpers do in each
situation, and they make it easier to add support for new CPU features
without increasing code complexity.

With this refactoring, the patch also replaces the local_daif_xxx helpers
used at exception entry and exit with a set of abstract exception mask
helpers.

[1] https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/

Signed-off-by: Liao Chang
CC: Mark Brown
CC: Mark Rutland
CC: Marc Zyngier
CC: Jonathan Cameron
---
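Note: as a quick standalone illustration of item 1 above (this sketch is
not part of the patch; the PSR_*_BIT values match the arm64 uapi headers,
while GIC_PRIO_IRQON below is an assumed stand-in for the kernel constant),
the following userspace program shows how such a union packs PMR, DAIF and
ALLINT state into one flags word and recovers each field independently:

/* Standalone sketch, not part of the patch. */
#include <assert.h>
#include <stdio.h>

#define PSR_F_BIT	0x40UL
#define PSR_I_BIT	0x80UL
#define PSR_A_BIT	0x100UL
#define PSR_D_BIT	0x200UL

#define GIC_PRIO_IRQON	0xe0	/* assumed value, for illustration only */

union cpu_exception_mask {
	unsigned long flags;
	struct {
		unsigned long pmr : 8;		/* SYS_ICC_PMR_EL1 */
		unsigned long daif : 10;	/* PSTATE.DAIF at bits[6-9] */
		unsigned long allint : 14;	/* PSTATE.ALLINT at bits[13] */
	} fields;
};

int main(void)
{
	union cpu_exception_mask m = { .flags = 0 };

	/* Pack three pieces of masking state into one word... */
	m.fields.pmr = GIC_PRIO_IRQON;
	m.fields.daif = PSR_I_BIT | PSR_F_BIT;	/* IRQ and FIQ masked */
	m.fields.allint = 0;

	/* ...and recover them from the single flags value. */
	union cpu_exception_mask copy = { .flags = m.flags };
	assert(copy.fields.pmr == GIC_PRIO_IRQON);
	assert(copy.fields.daif == (PSR_I_BIT | PSR_F_BIT));

	printf("flags=%#lx pmr=%#lx daif=%#lx\n", copy.flags,
	       (unsigned long)copy.fields.pmr,
	       (unsigned long)copy.fields.daif);
	return 0;
}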
 arch/arm64/include/asm/exception_mask.h | 137 ++++++++++++++++++
 arch/arm64/kernel/Makefile              |   2 +-
 arch/arm64/kernel/cpufeature.c          |   3 +
 arch/arm64/kernel/entry-common.c        |  79 +++++------
 arch/arm64/kernel/exception_mask.c      | 177 ++++++++++++++++++++++++
 5 files changed, 358 insertions(+), 40 deletions(-)
 create mode 100644 arch/arm64/include/asm/exception_mask.h
 create mode 100644 arch/arm64/kernel/exception_mask.c

diff --git a/arch/arm64/include/asm/exception_mask.h b/arch/arm64/include/asm/exception_mask.h
new file mode 100644
index 000000000000..46e85e4ac70d
--- /dev/null
+++ b/arch/arm64/include/asm/exception_mask.h
@@ -0,0 +1,137 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2025 Huawei Ltd.
+ */
+#ifndef __ASM_EXCEPTION_MASK_H
+#define __ASM_EXCEPTION_MASK_H
+
+#include
+#include
+
+#define DAIF_PROCCTX		0
+#define DAIF_PROCCTX_NOIRQ	(PSR_I_BIT | PSR_F_BIT)
+#define DAIF_ERRCTX		(PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+#define DAIF_MASK		(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+
+union cpu_exception_mask {
+	unsigned long flags;
+	struct {
+		unsigned long pmr : 8;		// SYS_ICC_PMR_EL1
+		unsigned long daif : 10;	// PSTATE.DAIF at bits[6-9]
+		unsigned long allint : 14;	// PSTATE.ALLINT at bits[13]
+	} fields;
+};
+
+struct cpu_exception_mask_handler {
+	void (*mask)(void);			// mask all exceptions and interrupts
+	unsigned long (*save)(void);		// save exception and interrupt masks
+	void (*restore)(unsigned long flags);	// restore exceptions from the given masks
+};
+
+extern const union cpu_exception_mask procctx;
+extern const union cpu_exception_mask procctx_noirq;
+extern const union cpu_exception_mask errctx;
+extern struct cpu_exception_mask_handler *cpu_exception;
+int set_exception_mask_handler(int type);
+
+/*
+ * The exception masking steps for exception entry and exit:
+ *
+ * [EL0 Sync]
+ *	el0_sync_entry_exception_mask()
+ *	...
+ *	do_resume_notify()
+ *	el0_common_exit_exception_mask()
+ *
+ * [EL0 IRQ & FIQ]
+ *	irq_common_entry_exception_mask()
+ *	...
+ *	do_resume_notify()
+ *	el0_common_exit_exception_mask()
+ *
+ * [EL0 SError]
+ *	serror_entry_exception_mask()
+ *	...
+ *	el0_serror_exit_exception_mask()
+ *	do_resume_notify()
+ *	el0_common_exit_exception_mask()
+ *
+ * [EL1 Sync]
+ *	el1_sync_entry_exception_mask()
+ *	...
+ *	el1_sync_exit_exception_mask()
+ *
+ * [EL1 IRQ & FIQ]
+ *	irq_common_entry_exception_mask()
+ *
+ * [EL1 SError]
+ *	serror_entry_exception_mask()
+ */
+static inline void el0_sync_entry_exception_mask(void)
+{
+	cpu_exception->restore(procctx.flags);
+}
+
+static inline void irq_common_entry_exception_mask(void)
+{
+	/* only mask normal interrupts and NMIs */
+	asm volatile ("msr daifclr, #0xc" : : : "memory");
+}
+
+static inline void serror_entry_exception_mask(void)
+{
+	cpu_exception->restore(errctx.flags);
+}
+
+static inline void el0_serror_exit_exception_mask(void)
+{
+	cpu_exception->restore(procctx.flags);
+}
+
+static inline void el0_common_exit_exception_mask(void)
+{
+	cpu_exception->mask();
+}
+
+static inline void el1_sync_entry_exception_mask(struct pt_regs *regs)
+{
+	union cpu_exception_mask mask;
+
+	mask.fields.pmr = regs->pmr;
+	mask.fields.daif = regs->pstate & DAIF_MASK;
+
+	cpu_exception->restore(mask.flags);
+}
+
+static inline void el1_sync_exit_exception_mask(void)
+{
+	cpu_exception->mask();
+}
+
+static inline unsigned long local_exception_save(void)
+{
+	unsigned long flags;
+
+	flags = cpu_exception->save();
+	cpu_exception->mask();
+
+	return flags;
+}
+
+static inline unsigned long local_exception_save_flags(void)
+{
+	return cpu_exception->save();
+}
+
+/* mask/save/unmask/restore all exceptions, including interrupts. */
+static inline void local_exception_mask(void)
+{
+	cpu_exception->mask();
+}
+
+static inline void local_exception_restore(unsigned long flags)
+{
+	cpu_exception->restore(flags);
+}
+
+#endif
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 71c29a2a2f19..b88c0f2af796 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -34,7 +34,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
 			   cpufeature.o alternative.o cacheinfo.o \
 			   smp.o smp_spin_table.o topology.o smccc-call.o \
 			   syscall.o proton-pack.o idle.o patching.o pi/ \
-			   rsi.o
+			   rsi.o exception_mask.o
 
 obj-$(CONFIG_COMPAT)			+= sys32.o signal32.o \
 					   sys_compat.o
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9c4d6d552b25..792f599f5c4d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -93,6 +93,7 @@
 #include
 #include
 #include
+#include <asm/exception_mask.h>
 
 /* Kernel representation of AT_HWCAP and AT_HWCAP2 */
 static DECLARE_BITMAP(elf_hwcap, MAX_CPU_FEATURES) __read_mostly;
@@ -2290,6 +2291,8 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
 	if (!cpus_have_cap(ARM64_HAS_GIC_CPUIF_SYSREGS))
 		return false;
 
+	set_exception_mask_handler(enable_pseudo_nmi ? 1 : 0);
+
 	return enable_pseudo_nmi;
 }
 
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index b260ddc4d3e9..528d1bfbb5ed 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include <asm/exception_mask.h>
 
 /*
  * Handle IRQ/context state management when entering from kernel mode.
@@ -168,7 +169,7 @@ static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
 	if (unlikely(flags & _TIF_WORK_MASK))
 		do_notify_resume(regs, flags);
 
-	local_daif_mask();
+	el0_common_exit_exception_mask();
 
 	lockdep_sys_exit();
 }
@@ -428,9 +429,9 @@ static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
 	unsigned long far = read_sysreg(far_el1);
 
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	el1_sync_entry_exception_mask(regs);
 	do_mem_abort(far, esr, regs);
-	local_daif_mask();
+	el1_sync_exit_exception_mask();
 	exit_to_kernel_mode(regs);
 }
 
@@ -439,45 +440,45 @@ static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
 	unsigned long far = read_sysreg(far_el1);
 
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	el1_sync_entry_exception_mask(regs);
 	do_sp_pc_abort(far, esr, regs);
-	local_daif_mask();
+	el1_sync_exit_exception_mask();
 	exit_to_kernel_mode(regs);
 }
 
 static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	el1_sync_entry_exception_mask(regs);
 	do_el1_undef(regs, esr);
-	local_daif_mask();
+	el1_sync_exit_exception_mask();
 	exit_to_kernel_mode(regs);
 }
 
 static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	el1_sync_entry_exception_mask(regs);
 	do_el1_bti(regs, esr);
-	local_daif_mask();
+	el1_sync_exit_exception_mask();
 	exit_to_kernel_mode(regs);
 }
 
 static void noinstr el1_gcs(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	el1_sync_entry_exception_mask(regs);
 	do_el1_gcs(regs, esr);
-	local_daif_mask();
+	el1_sync_exit_exception_mask();
 	exit_to_kernel_mode(regs);
 }
 
 static void noinstr el1_mops(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	el1_sync_entry_exception_mask(regs);
 	do_el1_mops(regs, esr);
-	local_daif_mask();
+	el1_sync_exit_exception_mask();
 	exit_to_kernel_mode(regs);
 }
 
@@ -494,9 +495,9 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
 static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	el1_sync_entry_exception_mask(regs);
 	do_el1_fpac(regs, esr);
-	local_daif_mask();
+	el1_sync_exit_exception_mask();
 	exit_to_kernel_mode(regs);
 }
 
@@ -567,7 +568,7 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 static void noinstr el1_interrupt(struct pt_regs *regs,
 				  void (*handler)(struct pt_regs *))
 {
-	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+	irq_common_entry_exception_mask();
 
 	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
 		__el1_pnmi(regs, handler);
@@ -589,7 +590,7 @@ asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
-	local_daif_restore(DAIF_ERRCTX);
+	serror_entry_exception_mask();
 	arm64_enter_nmi(regs);
 	do_serror(regs, esr);
 	arm64_exit_nmi(regs);
@@ -600,7 +601,7 @@ static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
 	unsigned long far = read_sysreg(far_el1);
 
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_mem_abort(far, esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -618,7 +619,7 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
 		arm64_apply_bp_hardening();
 
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_mem_abort(far, esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -626,7 +627,7 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_fpsimd_acc(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -634,7 +635,7 @@ static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_sve_acc(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -642,7 +643,7 @@ static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_sme_acc(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -650,7 +651,7 @@ static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_fpsimd_exc(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -658,7 +659,7 @@ static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_sys(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -671,7 +672,7 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
 		arm64_apply_bp_hardening();
 
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_sp_pc_abort(far, esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -679,7 +680,7 @@ static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_sp_pc_abort(regs->sp, esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -687,7 +688,7 @@ static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_undef(regs, esr);
 	exit_to_user_mode(regs);
 }
@@ -695,7 +696,7 @@ static void noinstr el0_bti(struct pt_regs *regs)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_bti(regs);
 	exit_to_user_mode(regs);
 }
@@ -703,7 +704,7 @@ static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_mops(regs, esr);
 	exit_to_user_mode(regs);
 }
@@ -711,7 +712,7 @@ static void noinstr el0_gcs(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_gcs(regs, esr);
 	exit_to_user_mode(regs);
 }
@@ -719,7 +720,7 @@ static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	bad_el0_sync(regs, 0, esr);
 	exit_to_user_mode(regs);
 }
@@ -731,7 +732,7 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
 
 	enter_from_user_mode(regs);
 	do_debug_exception(far, esr, regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	exit_to_user_mode(regs);
 }
 
@@ -740,7 +741,7 @@ static void noinstr el0_svc(struct pt_regs *regs)
 {
 	enter_from_user_mode(regs);
 	cortex_a76_erratum_1463225_svc_handler();
 	fp_user_discard();
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_svc(regs);
 	exit_to_user_mode(regs);
 }
@@ -748,7 +749,7 @@ static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_fpac(regs, esr);
 	exit_to_user_mode(regs);
 }
@@ -820,7 +821,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
 {
 	enter_from_user_mode(regs);
 
-	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+	irq_common_entry_exception_mask();
 
 	if (regs->pc & BIT(55))
 		arm64_apply_bp_hardening();
@@ -857,11 +858,11 @@ static void noinstr __el0_error_handler_common(struct pt_regs *regs)
 	unsigned long esr = read_sysreg(esr_el1);
 
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_ERRCTX);
+	serror_entry_exception_mask();
 	arm64_enter_nmi(regs);
 	do_serror(regs, esr);
 	arm64_exit_nmi(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_serror_exit_exception_mask();
 	exit_to_user_mode(regs);
 }
 
@@ -874,7 +875,7 @@ asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
 static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_cp15(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -883,7 +884,7 @@ static void noinstr el0_svc_compat(struct pt_regs *regs)
 {
 	enter_from_user_mode(regs);
 	cortex_a76_erratum_1463225_svc_handler();
-	local_daif_restore(DAIF_PROCCTX);
+	el0_sync_entry_exception_mask();
 	do_el0_svc_compat(regs);
 	exit_to_user_mode(regs);
 }
diff --git a/arch/arm64/kernel/exception_mask.c b/arch/arm64/kernel/exception_mask.c
new file mode 100644
index 000000000000..b0f1ec71ba0e
--- /dev/null
+++ b/arch/arm64/kernel/exception_mask.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Exception masking code for DAIF, PMR and ALLINT.
+ *
+ * Copyright (C) 2025 Huawei Ltd.
+ */
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+/* unmask all exceptions including interrupts */
+const union cpu_exception_mask procctx = {
+	.fields.daif = 0,
+	.fields.pmr = GIC_PRIO_IRQON,
+	.fields.allint = 0,
+};
+
+/* only mask normal interrupts */
+const union cpu_exception_mask procctx_noirq = {
+	.fields.daif = DAIF_PROCCTX_NOIRQ,
+	.fields.pmr = GIC_PRIO_IRQOFF,
+	.fields.allint = 0,
+};
+
+/* mask all interrupts including NMI and SError */
+const union cpu_exception_mask errctx = {
+	.fields.daif = DAIF_ERRCTX,
+	.fields.pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET,
+	.fields.allint = 1,
+};
+
+static void daif_exception_mask(void)
+{
+	asm volatile(
+		"msr	daifset, #0xf\n"
+		:
+		:
+		: "memory");
+
+	trace_hardirqs_off();
+}
+
+static unsigned long daif_exception_save(void)
+{
+	return read_sysreg(daif);
+}
+
+static void daif_exception_restore(unsigned long flags)
+{
+	union cpu_exception_mask mask = { .flags = flags };
+	bool irq_disabled = mask.fields.daif & PSR_I_BIT;
+
+	if (!irq_disabled)
+		trace_hardirqs_on();
+
+	write_sysreg(mask.fields.daif, daif);
+
+	if (irq_disabled)
+		trace_hardirqs_off();
+}
+
+static struct cpu_exception_mask_handler daif_handler = {
+	.mask = daif_exception_mask,
+	.save = daif_exception_save,
+	.restore = daif_exception_restore,
+};
+
+static void pmr_exception_mask(void)
+{
+	WARN_ON(system_has_prio_mask_debugging() &&
+		(read_sysreg_s(SYS_ICC_PMR_EL1) == (GIC_PRIO_IRQOFF |
+						    GIC_PRIO_PSR_I_SET)));
+
+	asm volatile(
+		"msr	daifset, #0xf\n"
+		:
+		:
+		: "memory");
+
+	gic_write_pmr(errctx.fields.pmr);
+
+	trace_hardirqs_off();
+}
+
+static unsigned long pmr_exception_save(void)
+{
+	union cpu_exception_mask mask = { .flags = 0UL };
+
+	mask.fields.daif = read_sysreg(daif);
+	mask.fields.allint = mask.fields.daif & PSR_A_BIT;
+
+	/* If IRQs are masked with PMR, reflect it in the daif */
+	if (read_sysreg_s(SYS_ICC_PMR_EL1) != procctx.fields.pmr) {
+		mask.fields.daif |= DAIF_PROCCTX_NOIRQ;
+		mask.fields.pmr = mask.fields.allint ?
+				  errctx.fields.pmr : procctx_noirq.fields.pmr;
+	} else {
+		mask.fields.pmr = procctx.fields.pmr;
+	}
+
+	return mask.flags;
+}
+
+static void pmr_exception_restore(unsigned long flags)
+{
+	union cpu_exception_mask mask = { .flags = flags };
+	bool irq_disabled = (mask.fields.daif & PSR_I_BIT);
+
+	WARN_ON(system_has_prio_mask_debugging() &&
+		(read_sysreg(daif) & (DAIF_PROCCTX_NOIRQ)) != (DAIF_PROCCTX_NOIRQ));
+
+	if (!irq_disabled) {
+		trace_hardirqs_on();
+		gic_write_pmr(mask.fields.pmr);
+		pmr_sync();
+	} else {
+		if (!mask.fields.allint)
+			mask.fields.daif &= ~DAIF_PROCCTX_NOIRQ;
+		/*
+		 * There has been concern that the write to daif
+		 * might be reordered before this write to PMR.
+		 * From the ARM ARM DDI 0487D.a, section D1.7.1
+		 * "Accessing PSTATE fields":
+		 *   Writes to the PSTATE fields have side-effects on
+		 *   various aspects of the PE operation. All of these
+		 *   side-effects are guaranteed:
+		 *   - Not to be visible to earlier instructions in
+		 *     the execution stream.
+		 *   - To be visible to later instructions in the
+		 *     execution stream
+		 *
+		 * Also, writes to PMR are self-synchronizing, so no
+		 * interrupt with a lower priority than PMR is signaled
+		 * to the PE after the write.
+		 *
+		 * So we don't need additional synchronization here.
+		 */
+		gic_write_pmr(mask.fields.pmr);
+	}
+
+	write_sysreg(mask.fields.daif, daif);
+
+	if (irq_disabled)
+		trace_hardirqs_off();
+}
+
+static struct cpu_exception_mask_handler pmr_handler = {
+	.mask = pmr_exception_mask,
+	.save = pmr_exception_save,
+	.restore = pmr_exception_restore,
+};
+
+struct cpu_exception_mask_handler *cpu_exception = &daif_handler;
+
+int set_exception_mask_handler(int type)
+{
+	switch (type) {
+	case 0:
+		cpu_exception = &daif_handler;
+		break;
+	case 1:
+		cpu_exception = &pmr_handler;
+		break;
+	/* case 2: reserved for FEAT_NMI */
+	default:
+		return -EINVAL;
+	}
+	pr_info("Exception mask handlers: %ps %ps %ps\n",
+		cpu_exception->mask,
+		cpu_exception->save,
+		cpu_exception->restore);
+	return 0;
+}
-- 
2.34.1
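Note: the handler-table dispatch that set_exception_mask_handler() selects
between can be illustrated with the following standalone sketch (not part
of the series; the stub handlers model the DAIF register with a plain
variable, whereas the real handlers program PSTATE and, for pseudo-NMI,
the GIC PMR):

/* Standalone sketch, not part of the series. */
#include <stdio.h>

struct cpu_exception_mask_handler {
	void (*mask)(void);
	unsigned long (*save)(void);
	void (*restore)(unsigned long flags);
};

static unsigned long fake_daif;	/* stands in for the DAIF sysreg */

static void daif_mask(void) { fake_daif = 0x3c0; }	/* D, A, I, F set */
static unsigned long daif_save(void) { return fake_daif; }
static void daif_restore(unsigned long flags) { fake_daif = flags; }

static struct cpu_exception_mask_handler daif_handler = {
	.mask = daif_mask, .save = daif_save, .restore = daif_restore,
};

/* A pmr_handler would be selected here when pseudo-NMI is in use. */
static struct cpu_exception_mask_handler *cpu_exception = &daif_handler;

int main(void)
{
	/* The save/mask/restore pattern every caller follows. */
	unsigned long flags = cpu_exception->save();

	cpu_exception->mask();
	/* ... critical section with all exceptions masked ... */
	cpu_exception->restore(flags);

	printf("daif=%#lx\n", fake_daif);
	return 0;
}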
From: Liao Chang
Subject: [PATCH v5 2/2] arm64: Deprecate the old daifflags helpers
Date: Thu, 22 May 2025 10:56:58 +0000
Message-ID: <20250522105658.1338331-3-liaochang1@huawei.com>
In-Reply-To: <20250522105658.1338331-1-liaochang1@huawei.com>
References: <20250522105658.1338331-1-liaochang1@huawei.com>

Replace all of the daifflags helpers used in the arm64 architecture code
with their counterparts defined in exception_mask.h.

Signed-off-by: Liao Chang
CC: Mark Brown
CC: Mark Rutland
CC: Marc Zyngier
CC: Jonathan Cameron
---
 arch/arm64/include/asm/daifflags.h   | 144 ---------------------------
 arch/arm64/include/asm/kvm_host.h    |   2 +-
 arch/arm64/include/asm/mmu_context.h |   2 +-
 arch/arm64/kernel/acpi.c             |  10 +-
 arch/arm64/kernel/debug-monitors.c   |   6 +-
 arch/arm64/kernel/entry-common.c     |   2 +-
 arch/arm64/kernel/hibernate.c        |   6 +-
 arch/arm64/kernel/irq.c              |   4 +-
 arch/arm64/kernel/machine_kexec.c    |   4 +-
 arch/arm64/kernel/probes/kprobes.c   |   2 +-
 arch/arm64/kernel/setup.c            |   4 +-
 arch/arm64/kernel/signal.c           |   2 +-
 arch/arm64/kernel/smp.c              |  10 +-
 arch/arm64/kernel/suspend.c          |   6 +-
 arch/arm64/kernel/traps.c            |   2 +-
 arch/arm64/kvm/hyp/vgic-v3-sr.c      |   4 +-
 arch/arm64/kvm/hyp/vhe/switch.c      |   4 +-
 arch/arm64/mm/fault.c                |   2 +-
 arch/arm64/mm/mmu.c                  |   6 +-
 19 files changed, 39 insertions(+), 183 deletions(-)
 delete mode 100644 arch/arm64/include/asm/daifflags.h

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
deleted file mode 100644
index fbb5c99eb2f9..000000000000
--- a/arch/arm64/include/asm/daifflags.h
+++ /dev/null
@@ -1,144 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2017 ARM Ltd.
- */
-#ifndef __ASM_DAIFFLAGS_H
-#define __ASM_DAIFFLAGS_H
-
-#include
-
-#include
-#include
-#include
-#include
-
-#define DAIF_PROCCTX		0
-#define DAIF_PROCCTX_NOIRQ	(PSR_I_BIT | PSR_F_BIT)
-#define DAIF_ERRCTX		(PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
-#define DAIF_MASK		(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
-
-
-/* mask/save/unmask/restore all exceptions, including interrupts. */
-static inline void local_daif_mask(void)
-{
-	WARN_ON(system_has_prio_mask_debugging() &&
-		(read_sysreg_s(SYS_ICC_PMR_EL1) == (GIC_PRIO_IRQOFF |
-						    GIC_PRIO_PSR_I_SET)));
-
-	asm volatile(
-		"msr	daifset, #0xf		// local_daif_mask\n"
-		:
-		:
-		: "memory");
-
-	/* Don't really care for a dsb here, we don't intend to enable IRQs */
-	if (system_uses_irq_prio_masking())
-		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-
-	trace_hardirqs_off();
-}
-
-static inline unsigned long local_daif_save_flags(void)
-{
-	unsigned long flags;
-
-	flags = read_sysreg(daif);
-
-	if (system_uses_irq_prio_masking()) {
-		/* If IRQs are masked with PMR, reflect it in the flags */
-		if (read_sysreg_s(SYS_ICC_PMR_EL1) != GIC_PRIO_IRQON)
-			flags |= PSR_I_BIT | PSR_F_BIT;
-	}
-
-	return flags;
-}
-
-static inline unsigned long local_daif_save(void)
-{
-	unsigned long flags;
-
-	flags = local_daif_save_flags();
-
-	local_daif_mask();
-
-	return flags;
-}
-
-static inline void local_daif_restore(unsigned long flags)
-{
-	bool irq_disabled = flags & PSR_I_BIT;
-
-	WARN_ON(system_has_prio_mask_debugging() &&
-		(read_sysreg(daif) & (PSR_I_BIT | PSR_F_BIT)) != (PSR_I_BIT | PSR_F_BIT));
-
-	if (!irq_disabled) {
-		trace_hardirqs_on();
-
-		if (system_uses_irq_prio_masking()) {
-			gic_write_pmr(GIC_PRIO_IRQON);
-			pmr_sync();
-		}
-	} else if (system_uses_irq_prio_masking()) {
-		u64 pmr;
-
-		if (!(flags & PSR_A_BIT)) {
-			/*
-			 * If interrupts are disabled but we can take
-			 * asynchronous errors, we can take NMIs
-			 */
-			flags &= ~(PSR_I_BIT | PSR_F_BIT);
-			pmr = GIC_PRIO_IRQOFF;
-		} else {
-			pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
-		}
-
-		/*
-		 * There has been concern that the write to daif
-		 * might be reordered before this write to PMR.
-		 * From the ARM ARM DDI 0487D.a, section D1.7.1
-		 * "Accessing PSTATE fields":
-		 *   Writes to the PSTATE fields have side-effects on
-		 *   various aspects of the PE operation. All of these
-		 *   side-effects are guaranteed:
-		 *   - Not to be visible to earlier instructions in
-		 *     the execution stream.
-		 *   - To be visible to later instructions in the
-		 *     execution stream
-		 *
-		 * Also, writes to PMR are self-synchronizing, so no
-		 * interrupts with a lower priority than PMR is signaled
-		 * to the PE after the write.
-		 *
-		 * So we don't need additional synchronization here.
-		 */
-		gic_write_pmr(pmr);
-	}
-
-	write_sysreg(flags, daif);
-
-	if (irq_disabled)
-		trace_hardirqs_off();
-}
-
-/*
- * Called by synchronous exception handlers to restore the DAIF bits that were
- * modified by taking an exception.
- */
-static inline void local_daif_inherit(struct pt_regs *regs)
-{
-	unsigned long flags = regs->pstate & DAIF_MASK;
-
-	if (interrupts_enabled(regs))
-		trace_hardirqs_on();
-
-	if (system_uses_irq_prio_masking())
-		gic_write_pmr(regs->pmr);
-
-	/*
-	 * We can't use local_daif_restore(regs->pstate) here as
-	 * system_has_prio_mask_debugging() won't restore the I bit if it can
-	 * use the pmr instead.
-	 */
-	write_sysreg(flags, daif);
-}
-#endif
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 08ba91e6fb03..0af825f11efc 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -23,7 +23,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 0dbe3b29049b..6284ce3fd3b4 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -19,7 +19,7 @@
 
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index b9a66fc146c9..3f3a4ff95d5d 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -33,7 +33,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 
 int acpi_noirq = 1;		/* skip ACPI IRQ initialization */
@@ -397,7 +397,7 @@ int apei_claim_sea(struct pt_regs *regs)
 	if (!IS_ENABLED(CONFIG_ACPI_APEI_GHES))
 		return err;
 
-	current_flags = local_daif_save_flags();
+	current_flags = local_exception_save_flags();
 
 	/* current_flags isn't useful here as daif doesn't tell us about pNMI */
 	return_to_irqs_enabled = !irqs_disabled_flags(arch_local_save_flags());
@@ -409,7 +409,7 @@ int apei_claim_sea(struct pt_regs *regs)
 	 * SEA can interrupt SError, mask it and describe this as an NMI so
 	 * that APEI defers the handling.
 	 */
-	local_daif_restore(DAIF_ERRCTX);
+	serror_entry_exception_mask();
 	nmi_enter();
 	err = ghes_notify_sea();
 	nmi_exit();
@@ -420,7 +420,7 @@ int apei_claim_sea(struct pt_regs *regs)
 	 */
 	if (!err) {
 		if (return_to_irqs_enabled) {
-			local_daif_restore(DAIF_PROCCTX_NOIRQ);
+			local_exception_restore(procctx_noirq.flags);
 			__irq_enter();
 			irq_work_run();
 			__irq_exit();
@@ -430,7 +430,7 @@ int apei_claim_sea(struct pt_regs *regs)
 		}
 	}
 
-	local_daif_restore(current_flags);
+	local_exception_restore(current_flags);
 
 	return err;
 }
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 58f047de3e1c..97dc7cafccd6 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -19,7 +19,7 @@
 
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
@@ -37,9 +37,9 @@ u8 debug_monitors_arch(void)
 static void mdscr_write(u32 mdscr)
 {
 	unsigned long flags;
-	flags = local_daif_save();
+	flags = local_exception_save();
 	write_sysreg(mdscr, mdscr_el1);
-	local_daif_restore(flags);
+	local_exception_restore(flags);
 }
 NOKPROBE_SYMBOL(mdscr_write);
 
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 528d1bfbb5ed..221df3937da6 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -16,7 +16,7 @@
 #include
 
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 18749e9a6c2d..cb1c9d00d840 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -20,7 +20,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
@@ -341,7 +341,7 @@ int swsusp_arch_suspend(void)
 		return -EBUSY;
 	}
 
-	flags = local_daif_save();
+	flags = local_exception_save();
 
 	if (__cpu_suspend_enter(&state)) {
 		/* make the crash dump kernel image visible/saveable */
@@ -391,7 +391,7 @@ int swsusp_arch_suspend(void)
 		spectre_v4_enable_mitigation(NULL);
 	}
 
-	local_daif_restore(flags);
+	local_exception_restore(flags);
 
 	return ret;
 }
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 85087e2df564..f92497e9849d 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -20,7 +20,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
@@ -132,6 +132,6 @@ void __init init_IRQ(void)
 		 * the PMR/PSR pair to a consistent state.
 		 */
 		WARN_ON(read_sysreg(daif) & PSR_A_BIT);
-		local_daif_restore(DAIF_PROCCTX_NOIRQ);
+		local_exception_restore(procctx_noirq.flags);
 	}
 }
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 6f121a0164a4..e415d7a74269 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -17,7 +17,7 @@
 
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
@@ -176,7 +176,7 @@ void machine_kexec(struct kimage *kimage)
 
 	pr_info("Bye!\n");
 
-	local_daif_mask();
+	local_exception_mask();
 
 	/*
	 * Both restart and kernel_reloc will shutdown the MMU, disable data
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index d9e462eafb95..9405d8110525 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -23,7 +23,7 @@
 #include
 
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 85104587f849..9e52c4649a43 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -37,7 +37,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
@@ -311,7 +311,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
	 * IRQ and FIQ will be unmasked after the root irqchip has been
	 * detected and initialized.
	 */
-	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	local_exception_restore(procctx_noirq.flags);
 
	/*
	 * TTBR0 is only used for the identity mapping at this stage. Make it
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index a7c37afb4ebe..07c2950c3298 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -21,7 +21,7 @@
 #include
 #include
 
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 3b3f6b56e733..86594601fe37 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -42,7 +42,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
@@ -271,7 +271,7 @@ asmlinkage notrace void secondary_start_kernel(void)
	 * as the root irqchip has already been detected and initialized we can
	 * unmask IRQ and FIQ at the same time.
	 */
-	local_daif_restore(DAIF_PROCCTX);
+	local_exception_restore(procctx.flags);
 
	/*
	 * OK, it's off to the idle thread for us
@@ -378,7 +378,7 @@ void __noreturn cpu_die(void)
 
 	idle_task_exit();
 
-	local_daif_mask();
+	local_exception_mask();
 
 	/* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */
 	cpuhp_ap_report_dead();
@@ -873,7 +873,7 @@ static void __noreturn local_cpu_stop(unsigned int cpu)
 {
 	set_cpu_online(cpu, false);
 
-	local_daif_mask();
+	local_exception_mask();
 	sdei_mask_local_cpu();
 	cpu_park_loop();
 }
@@ -899,7 +899,7 @@ static void __noreturn ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs)
	 * interrupt us. It's better to prevent the NMI and let the IRQ
	 * finish since the pt_regs will be better.
	 */
-	local_daif_mask();
+	local_exception_mask();
 
 	crash_save_cpu(regs, cpu);
 
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index eaaff94329cd..754a56f08e66 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -9,7 +9,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
@@ -122,7 +122,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
	 * hardirqs should be firmly off by now. This really ought to use
	 * something like raw_local_daif_save().
	 */
-	flags = local_daif_save();
+	flags = local_exception_save();
 
	/*
	 * Function graph tracer state gets inconsistent when the kernel
@@ -168,7 +168,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
	 * restored, so from this point onwards, debugging is fully
	 * reenabled if it was enabled when core started shutdown.
	 */
-	local_daif_restore(flags);
+	local_exception_restore(flags);
 
	return ret;
 }
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 529cff825531..62562e9fb4ba 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -33,7 +33,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index ed363aa3027e..2e60f61a094c 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -437,7 +437,7 @@ u64 __vgic_v3_get_gic_config(void)
	 * EL2.
	 */
	if (has_vhe())
-		flags = local_daif_save();
+		flags = local_exception_save();
 
	/*
	 * Table 11-2 "Permitted ICC_SRE_ELx.SRE settings" indicates
@@ -457,7 +457,7 @@ u64 __vgic_v3_get_gic_config(void)
	isb();
 
	if (has_vhe())
-		local_daif_restore(flags);
+		local_exception_restore(flags);
 
	val = (val & ICC_SRE_EL1_SRE) ? 0 : (1ULL << 63);
	val |= read_gicreg(ICH_VTR_EL2);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 731a0378ed13..ea938b0d6920 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -644,7 +644,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
	int ret;
 
-	local_daif_mask();
+	local_exception_mask();
 
	/*
	 * Having IRQs masked via PMR when entering the guest means the GIC
@@ -663,7 +663,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
	 * local_daif_restore() takes care to properly restore PSTATE.DAIF
	 * and the GIC PMR if the host is using IRQ priorities.
	 */
-	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	local_exception_restore(procctx_noirq.flags);
 
	/*
	 * When we exit from the guest we change a number of CPU configuration
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index ec0a337891dd..34441e023b58 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -33,7 +33,7 @@
 #include
 #include
 #include
-#include <asm/daifflags.h>
+#include <asm/exception_mask.h>
 #include
 #include
 #include
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ea6695d53fb9..a712f9b02d58 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1533,7 +1533,7 @@ void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp)
	typedef void (ttbr_replace_func)(phys_addr_t);
	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
	ttbr_replace_func *replace_phys;
-	unsigned long daif;
+	unsigned long flags;
 
	/* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */
	phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp));
@@ -1549,9 +1549,9 @@ void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp)
	 * We really don't want to take *any* exceptions while TTBR1 is
	 * in the process of being replaced so mask everything.
	 */
-	daif = local_daif_save();
+	flags = local_exception_save();
	replace_phys(ttbr1);
-	local_daif_restore(daif);
+	local_exception_restore(flags);
 
	cpu_uninstall_idmap();
 }
-- 
2.34.1
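Note: the subtlest behavior the series preserves is how the save helper
folds PMR-based interrupt masking into the daif view of the flags word, so
callers see "interrupts masked" regardless of which mechanism masked them.
A standalone sketch of that logic follows (not part of the series; the
PSR_*_BIT values match the arm64 uapi headers, and the GIC_PRIO_* values
below are assumed stand-ins for the kernel constants):

/* Standalone sketch, not part of the series. */
#include <stdio.h>

#define PSR_F_BIT		0x40UL
#define PSR_I_BIT		0x80UL
#define PSR_A_BIT		0x100UL
#define DAIF_PROCCTX_NOIRQ	(PSR_I_BIT | PSR_F_BIT)

/* Illustrative stand-ins for the kernel's GIC priority constants. */
#define GIC_PRIO_IRQON		0xe0
#define GIC_PRIO_IRQOFF		0x60
#define GIC_PRIO_PSR_I_SET	0x10

union cpu_exception_mask {
	unsigned long flags;
	struct {
		unsigned long pmr : 8;
		unsigned long daif : 10;
		unsigned long allint : 14;
	} fields;
};

/* Model of pmr_exception_save(): daif/pmr stand in for register values. */
static unsigned long save_flags(unsigned long daif, unsigned long pmr)
{
	union cpu_exception_mask mask = { .flags = 0UL };

	mask.fields.daif = daif;
	mask.fields.allint = daif & PSR_A_BIT;

	/* IRQs masked via PMR are reported as masked in the daif field. */
	if (pmr != GIC_PRIO_IRQON) {
		mask.fields.daif |= DAIF_PROCCTX_NOIRQ;
		mask.fields.pmr = mask.fields.allint ?
				  (GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET) :
				  GIC_PRIO_IRQOFF;
	} else {
		mask.fields.pmr = GIC_PRIO_IRQON;
	}

	return mask.flags;
}

int main(void)
{
	/* IRQs masked only by PMR: PSTATE says "enabled", PMR says "off". */
	union cpu_exception_mask m = {
		.flags = save_flags(0, GIC_PRIO_IRQOFF),
	};

	printf("daif=%#lx pmr=%#lx\n",
	       (unsigned long)m.fields.daif, (unsigned long)m.fields.pmr);
	return 0;
}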