From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/4] riscv: process: fix kernel info leakage
Date: Mon, 3 Oct 2022 18:29:18 +0800
Message-Id: <20221003102921.3973-2-jszhang@kernel.org>
In-Reply-To: <20221003102921.3973-1-jszhang@kernel.org>
References: <20221003102921.3973-1-jszhang@kernel.org>

thread_struct's s[12] may contain random kernel memory content, which
may eventually be leaked to userspace. This is a security hole. Fix it
by clearing the s[12] array in thread_struct during fork. For the
kthread case, clear the s[12] array as well.
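
For illustration only, here is a userspace-compilable sketch of what the
change does conceptually; the struct below merely mirrors the relevant
fields of thread_struct and is not the kernel definition:

  #include <string.h>

  /* simplified stand-in for riscv's thread_struct */
  struct thread_struct_model {
          unsigned long ra;
          unsigned long sp;
          unsigned long s[12];    /* s0..s11, callee-saved registers */
  };

  /*
   * Conceptually what copy_thread() now does first: clear the whole s[]
   * array so the child never inherits stale kernel memory through
   * __switch_to(), whether it becomes a kernel thread or a user fork.
   */
  void model_copy_thread(struct thread_struct_model *child)
  {
          memset(child->s, 0, sizeof(child->s));
  }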
Fixes: 7db91e57a0ac ("RISC-V: Task implementation")
Signed-off-by: Jisheng Zhang
Reviewed-by: Guo Ren
---
 arch/riscv/kernel/process.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index ceb9ebab6558..52002d54b163 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -164,6 +164,8 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 	unsigned long tls = args->tls;
 	struct pt_regs *childregs = task_pt_regs(p);
 
+	memset(&p->thread.s, 0, sizeof(p->thread.s));
+
 	/* p->thread holds context to be restored by __switch_to() */
 	if (unlikely(args->fn)) {
 		/* Kernel thread */
-- 
2.37.2

From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 2/4] riscv: consolidate ret_from_kernel_thread into ret_from_fork
Date: Mon, 3 Oct 2022 18:29:19 +0800
Message-Id: <20221003102921.3973-3-jszhang@kernel.org>
In-Reply-To: <20221003102921.3973-1-jszhang@kernel.org>
References: <20221003102921.3973-1-jszhang@kernel.org>

ret_from_kernel_thread() behaves similarly to ret_from_fork(); the only
difference is whether fn(arg) is called. That can be decided by testing
whether fn is NULL, i.e. whether s0 is 0.
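
As an illustration, the consolidated entry point behaves like the
following C-level sketch (hypothetical names, not the actual assembly
in entry.S):

  /* fn/fn_arg model what copy_thread() leaves in thread.s[0]/s[1] */
  typedef int (*kthread_fn)(void *);

  struct child_ctx {
          kthread_fn fn;     /* NULL (s0 == 0) for a userspace fork */
          void *fn_arg;
  };

  /* stand-in for the tail call to syscall_exit_to_user_mode */
  static void exit_to_user_mode_model(void)
  {
  }

  void ret_from_fork_model(struct child_ctx *ctx)
  {
          if (ctx->fn)                    /* s0 != 0: kernel thread */
                  ctx->fn(ctx->fn_arg);   /* call fn(arg) */
          exit_to_user_mode_model();      /* shared exit path */
  }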
Signed-off-by: Jisheng Zhang
Acked-by: Guo Ren
---
 arch/riscv/kernel/entry.S   | 11 +++--------
 arch/riscv/kernel/process.c |  5 ++---
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 39097c1474a0..d227aca7f9d4 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -323,20 +323,15 @@ END(handle_kernel_stack_overflow)
 
 ENTRY(ret_from_fork)
 	call schedule_tail
-	move a0, sp /* pt_regs */
-	la ra, ret_from_exception
-	tail syscall_exit_to_user_mode
-ENDPROC(ret_from_fork)
-
-ENTRY(ret_from_kernel_thread)
-	call schedule_tail
+	beqz s0, 1f	/* not from kernel thread */
 	/* Call fn(arg) */
 	move a0, s1
 	jalr s0
+1:
 	move a0, sp /* pt_regs */
 	la ra, ret_from_exception
 	tail syscall_exit_to_user_mode
-ENDPROC(ret_from_kernel_thread)
+ENDPROC(ret_from_fork)
 
 
 /*
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 52002d54b163..fdafed185e21 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -34,7 +34,6 @@ EXPORT_SYMBOL(__stack_chk_guard);
 #endif
 
 extern asmlinkage void ret_from_fork(void);
-extern asmlinkage void ret_from_kernel_thread(void);
 
 void arch_cpu_idle(void)
 {
@@ -174,7 +173,6 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 		/* Supervisor/Machine, irqs on: */
 		childregs->status = SR_PP | SR_PIE;
 
-		p->thread.ra = (unsigned long)ret_from_kernel_thread;
 		p->thread.s[0] = (unsigned long)args->fn;
 		p->thread.s[1] = (unsigned long)args->fn_arg;
 	} else {
@@ -184,8 +182,9 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 		if (clone_flags & CLONE_SETTLS)
 			childregs->tp = tls;
 		childregs->a0 = 0; /* Return value of fork() */
-		p->thread.ra = (unsigned long)ret_from_fork;
+		p->thread.s[0] = 0;
 	}
+	p->thread.ra = (unsigned long)ret_from_fork;
 	p->thread.sp = (unsigned long)childregs; /* kernel sp */
 	return 0;
 }
-- 
2.37.2

From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/4] riscv: fix race when vmap stack overflow and remove shadow_stack
Date: Mon, 3 Oct 2022 18:29:20 +0800
Message-Id: <20221003102921.3973-4-jszhang@kernel.org>
In-Reply-To: <20221003102921.3973-1-jszhang@kernel.org>
References: <20221003102921.3973-1-jszhang@kernel.org>

Currently, when a vmap stack overflow is detected, riscv first switches
to the so-called shadow stack, then uses this shadow stack to call
get_overflow_stack() to get the overflow stack. However, there is a
race if two or more harts use the same shadow stack at the same time.

To solve this race, we rely on two facts:

1. The content of the kernel thread pointer, i.e. the "tp" register,
   can still be obtained from the CSR_SCRATCH register, so we can
   clobber tp as long as we restore it from CSR_SCRATCH afterwards.

2. Once a vmap stack overflow happens, a panic follows immediately, so
   there is no performance concern at all. We therefore don't need to
   define the overflow stack as a percpu variable; we can simplify it
   into an array of pointers to allocated pages.

Thus we can use tp as a temporary register to get the cpu id and index
the overflow stack pointer array for each cpu, without any shadow
stack. The race condition is removed as a side effect.

NOTE: we could use a similar mechanism to give each cpu its own shadow
stack to fix the race, but if we can remove the shadow stack usage
entirely, why not.
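
In C terms, the new allocation and lookup are roughly the following
sketch (illustrative only; NR_CPUS_MODEL and the *_model names are made
up here, and the real per-hart lookup is done in assembly with tp as a
scratch register that is later restored from CSR_SCRATCH):

  #include <stdlib.h>

  #define NR_CPUS_MODEL             8     /* stands in for NR_CPUS */
  #define OVERFLOW_STACK_SIZE_MODEL 4096  /* stands in for OVERFLOW_STACK_SIZE */

  /* one pre-allocated overflow stack per possible cpu; each entry
   * points at the top of its stack because the stack grows down */
  static unsigned char *overflow_stack_model[NR_CPUS_MODEL];

  static int alloc_overflow_stacks_model(void)
  {
          for (int cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
                  unsigned char *s = malloc(OVERFLOW_STACK_SIZE_MODEL);

                  if (!s)
                          return -1;
                  overflow_stack_model[cpu] = s + OVERFLOW_STACK_SIZE_MODEL;
          }
          return 0;
  }

  /* what the overflow handler computes before switching sp: index the
   * pointer array with this hart's cpu id (thread_info.cpu) */
  static unsigned char *pick_overflow_sp_model(int cpu)
  {
          return overflow_stack_model[cpu];
  }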
Signed-off-by: Jisheng Zhang
Fixes: 31da94c25aea ("riscv: add VMAP_STACK overflow detection")
---
 arch/riscv/include/asm/asm-prototypes.h |  1 -
 arch/riscv/include/asm/thread_info.h    |  3 --
 arch/riscv/kernel/asm-offsets.c         |  1 +
 arch/riscv/kernel/entry.S               | 56 ++++---------------
 arch/riscv/kernel/traps.c               | 32 +++++++-------
 5 files changed, 28 insertions(+), 65 deletions(-)

diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
index ef386fcf3939..4a06fa0f6493 100644
--- a/arch/riscv/include/asm/asm-prototypes.h
+++ b/arch/riscv/include/asm/asm-prototypes.h
@@ -25,7 +25,6 @@ DECLARE_DO_ERROR_INFO(do_trap_ecall_s);
 DECLARE_DO_ERROR_INFO(do_trap_ecall_m);
 DECLARE_DO_ERROR_INFO(do_trap_break);
 
-asmlinkage unsigned long get_overflow_stack(void);
 asmlinkage void handle_bad_stack(struct pt_regs *regs);
 
 #endif /* _ASM_RISCV_PROTOTYPES_H */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index c970d41dc4c6..26d3de62aab0 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -28,14 +28,11 @@
 
 #define THREAD_SHIFT		(PAGE_SHIFT + THREAD_SIZE_ORDER)
 #define OVERFLOW_STACK_SIZE	SZ_4K
-#define SHADOW_OVERFLOW_STACK_SIZE (1024)
 
 #define IRQ_STACK_SIZE		THREAD_SIZE
 
 #ifndef __ASSEMBLY__
 
-extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
-
 #include <asm/processor.h>
 #include <asm/csr.h>
 
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index df9444397908..62bf3bacc322 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -37,6 +37,7 @@ void asm_offsets(void)
 	OFFSET(TASK_TI_PREEMPT_COUNT, task_struct, thread_info.preempt_count);
 	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
 	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+	OFFSET(TASK_TI_CPU, task_struct, thread_info.cpu);
 
 	OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
 	OFFSET(TASK_THREAD_F1, task_struct, thread.fstate.f[1]);
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index d227aca7f9d4..48ed1df7a792 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -223,54 +223,16 @@ END(ret_from_exception)
 
 #ifdef CONFIG_VMAP_STACK
 ENTRY(handle_kernel_stack_overflow)
-	la sp, shadow_stack
-	addi sp, sp, SHADOW_OVERFLOW_STACK_SIZE
-
-	//save caller register to shadow stack
-	addi sp, sp, -(PT_SIZE_ON_STACK)
-	REG_S x1, PT_RA(sp)
-	REG_S x5, PT_T0(sp)
-	REG_S x6, PT_T1(sp)
-	REG_S x7, PT_T2(sp)
-	REG_S x10, PT_A0(sp)
-	REG_S x11, PT_A1(sp)
-	REG_S x12, PT_A2(sp)
-	REG_S x13, PT_A3(sp)
-	REG_S x14, PT_A4(sp)
-	REG_S x15, PT_A5(sp)
-	REG_S x16, PT_A6(sp)
-	REG_S x17, PT_A7(sp)
-	REG_S x28, PT_T3(sp)
-	REG_S x29, PT_T4(sp)
-	REG_S x30, PT_T5(sp)
-	REG_S x31, PT_T6(sp)
-
-	la ra, restore_caller_reg
-	tail get_overflow_stack
-
-restore_caller_reg:
-	//save per-cpu overflow stack
-	REG_S a0, -8(sp)
-	//restore caller register from shadow_stack
-	REG_L x1, PT_RA(sp)
-	REG_L x5, PT_T0(sp)
-	REG_L x6, PT_T1(sp)
-	REG_L x7, PT_T2(sp)
-	REG_L x10, PT_A0(sp)
-	REG_L x11, PT_A1(sp)
-	REG_L x12, PT_A2(sp)
-	REG_L x13, PT_A3(sp)
-	REG_L x14, PT_A4(sp)
-	REG_L x15, PT_A5(sp)
-	REG_L x16, PT_A6(sp)
-	REG_L x17, PT_A7(sp)
-	REG_L x28, PT_T3(sp)
-	REG_L x29, PT_T4(sp)
-	REG_L x30, PT_T5(sp)
-	REG_L x31, PT_T6(sp)
+	la sp, overflow_stack
+	/* use tp as tmp register since we can restore it from CSR_SCRATCH */
+	REG_L tp, TASK_TI_CPU(tp)
+	slli tp, tp, RISCV_LGPTR
+	add tp, sp, tp
+	REG_L sp, 0(tp)
+
+	/* restore tp */
+	csrr tp, CSR_SCRATCH
 
-	//load per-cpu overflow stack
-	REG_L sp, -8(sp)
 	addi sp, sp, -(PT_SIZE_ON_STACK)
 
 	//save context to overflow stack
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index d20037585c2f..d317429b4097 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -245,23 +245,12 @@ int is_valid_bugaddr(unsigned long pc)
 #endif /* CONFIG_GENERIC_BUG */
 
 #ifdef CONFIG_VMAP_STACK
-static DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)],
-		overflow_stack)__aligned(16);
-/*
- * shadow stack, handled_ kernel_ stack_ overflow(in kernel/entry.S) is used
- * to get per-cpu overflow stack(get_overflow_stack).
- */
-long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE/sizeof(long)];
-asmlinkage unsigned long get_overflow_stack(void)
-{
-	return (unsigned long)this_cpu_ptr(overflow_stack) +
-		OVERFLOW_STACK_SIZE;
-}
+u8 *overflow_stack[NR_CPUS] __ro_after_init __aligned(16);
 
 asmlinkage void handle_bad_stack(struct pt_regs *regs)
 {
 	unsigned long tsk_stk = (unsigned long)current->stack;
-	unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
+	unsigned long ovf_stk = (unsigned long)overflow_stack[raw_smp_processor_id()];
 
 	console_verbose();
 
@@ -269,7 +258,7 @@ asmlinkage void handle_bad_stack(struct pt_regs *regs)
 	pr_emerg("Task stack: [0x%016lx..0x%016lx]\n",
 		 tsk_stk, tsk_stk + THREAD_SIZE);
 	pr_emerg("Overflow stack: [0x%016lx..0x%016lx]\n",
-		 ovf_stk, ovf_stk + OVERFLOW_STACK_SIZE);
+		 ovf_stk - OVERFLOW_STACK_SIZE, ovf_stk);
 
 	__show_regs(regs);
 	panic("Kernel stack overflow");
@@ -277,4 +266,19 @@ asmlinkage void handle_bad_stack(struct pt_regs *regs)
 	for (;;)
 		wait_for_interrupt();
 }
+
+static int __init alloc_overflow_stacks(void)
+{
+	u8 *s;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		s = (u8 *)__get_free_pages(GFP_KERNEL, get_order(OVERFLOW_STACK_SIZE));
+		if (WARN_ON(!s))
+			return -ENOMEM;
+		overflow_stack[cpu] = &s[OVERFLOW_STACK_SIZE];
+	}
+	return 0;
+}
+early_initcall(alloc_overflow_stacks);
 #endif
-- 
2.37.2

From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 4/4] riscv: entry: consolidate general regs saving/restoring
Date: Mon, 3 Oct 2022 18:29:21 +0800
Message-Id: <20221003102921.3973-5-jszhang@kernel.org>
In-Reply-To: <20221003102921.3973-1-jszhang@kernel.org>
References: <20221003102921.3973-1-jszhang@kernel.org>

Consolidate the saving/restoring of the GPs (except zero, ra, sp, gp
and tp) into the save_from_x5_to_x31/restore_from_x5_to_x31 macros.
No functional change intended.

Signed-off-by: Jisheng Zhang
Reviewed-by: Guo Ren
---
 arch/riscv/include/asm/asm.h   | 63 +++++++++++++++++++++++++
 arch/riscv/kernel/entry.S      | 84 ++--------------------------------
 arch/riscv/kernel/mcount-dyn.S | 56 +----------------------
 3 files changed, 68 insertions(+), 135 deletions(-)

diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 1b471ff73178..bf5247aa317d 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -68,6 +68,7 @@
 #endif
 
 #ifdef __ASSEMBLY__
+#include <asm/asm-offsets.h>
 
 /* Common assembly source macros */
 
@@ -80,6 +81,68 @@
 	.endr
 	.endm
 
+	/* save all GPs except zero, ra, sp, gp and tp */
+	.macro save_from_x5_to_x31
+	REG_S x5, PT_T0(sp)
+	REG_S x6, PT_T1(sp)
+	REG_S x7, PT_T2(sp)
+	REG_S x8, PT_S0(sp)
+	REG_S x9, PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+	.endm
+
+	/* restore all GPs except zero, ra, sp, gp and tp */
+	.macro restore_from_x5_to_x31
+	REG_L x5, PT_T0(sp)
+	REG_L x6, PT_T1(sp)
+	REG_L x7, PT_T2(sp)
+	REG_L x8, PT_S0(sp)
+	REG_L x9, PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+	.endm
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_ASM_H */
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 48ed1df7a792..7ba3826dde84 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -41,33 +41,7 @@ _save_context:
 	addi sp, sp, -(PT_SIZE_ON_STACK)
 	REG_S x1, PT_RA(sp)
 	REG_S x3, PT_GP(sp)
-	REG_S x5, PT_T0(sp)
-	REG_S x6, PT_T1(sp)
-	REG_S x7, PT_T2(sp)
-	REG_S x8, PT_S0(sp)
-	REG_S x9, PT_S1(sp)
-	REG_S x10, PT_A0(sp)
-	REG_S x11, PT_A1(sp)
-	REG_S x12, PT_A2(sp)
-	REG_S x13, PT_A3(sp)
-	REG_S x14, PT_A4(sp)
-	REG_S x15, PT_A5(sp)
-	REG_S x16, PT_A6(sp)
-	REG_S x17, PT_A7(sp)
-	REG_S x18, PT_S2(sp)
-	REG_S x19, PT_S3(sp)
-	REG_S x20, PT_S4(sp)
-	REG_S x21, PT_S5(sp)
-	REG_S x22, PT_S6(sp)
-	REG_S x23, PT_S7(sp)
-	REG_S x24, PT_S8(sp)
-	REG_S x25, PT_S9(sp)
-	REG_S x26, PT_S10(sp)
-	REG_S x27, PT_S11(sp)
-	REG_S x28, PT_T3(sp)
-	REG_S x29, PT_T4(sp)
-	REG_S x30, PT_T5(sp)
-	REG_S x31, PT_T6(sp)
+	save_from_x5_to_x31
 
 	/*
 	 * Disable user-mode memory access as it should only be set in the
@@ -184,33 +158,7 @@ ENTRY(ret_from_exception)
 	REG_L x1, PT_RA(sp)
 	REG_L x3, PT_GP(sp)
 	REG_L x4, PT_TP(sp)
-	REG_L x5, PT_T0(sp)
-	REG_L x6, PT_T1(sp)
-	REG_L x7, PT_T2(sp)
-	REG_L x8, PT_S0(sp)
-	REG_L x9, PT_S1(sp)
-	REG_L x10, PT_A0(sp)
-	REG_L x11, PT_A1(sp)
-	REG_L x12, PT_A2(sp)
-	REG_L x13, PT_A3(sp)
-	REG_L x14, PT_A4(sp)
-	REG_L x15, PT_A5(sp)
-	REG_L x16, PT_A6(sp)
-	REG_L x17, PT_A7(sp)
-	REG_L x18, PT_S2(sp)
-	REG_L x19, PT_S3(sp)
-	REG_L x20, PT_S4(sp)
-	REG_L x21, PT_S5(sp)
-	REG_L x22, PT_S6(sp)
-	REG_L x23, PT_S7(sp)
-	REG_L x24, PT_S8(sp)
-	REG_L x25, PT_S9(sp)
-	REG_L x26, PT_S10(sp)
-	REG_L x27, PT_S11(sp)
-	REG_L x28, PT_T3(sp)
-	REG_L x29, PT_T4(sp)
-	REG_L x30, PT_T5(sp)
-	REG_L x31, PT_T6(sp)
+	restore_from_x5_to_x31
 
 	REG_L x2, PT_SP(sp)
 
@@ -238,33 +186,7 @@ ENTRY(handle_kernel_stack_overflow)
 	//save context to overflow stack
 	REG_S x1, PT_RA(sp)
 	REG_S x3, PT_GP(sp)
-	REG_S x5, PT_T0(sp)
-	REG_S x6, PT_T1(sp)
-	REG_S x7, PT_T2(sp)
-	REG_S x8, PT_S0(sp)
-	REG_S x9, PT_S1(sp)
-	REG_S x10, PT_A0(sp)
-	REG_S x11, PT_A1(sp)
-	REG_S x12, PT_A2(sp)
-	REG_S x13, PT_A3(sp)
-	REG_S x14, PT_A4(sp)
-	REG_S x15, PT_A5(sp)
-	REG_S x16, PT_A6(sp)
-	REG_S x17, PT_A7(sp)
-	REG_S x18, PT_S2(sp)
-	REG_S x19, PT_S3(sp)
-	REG_S x20, PT_S4(sp)
-	REG_S x21, PT_S5(sp)
-	REG_S x22, PT_S6(sp)
-	REG_S x23, PT_S7(sp)
-	REG_S x24, PT_S8(sp)
-	REG_S x25, PT_S9(sp)
-	REG_S x26, PT_S10(sp)
-	REG_S x27, PT_S11(sp)
-	REG_S x28, PT_T3(sp)
-	REG_S x29, PT_T4(sp)
-	REG_S x30, PT_T5(sp)
-	REG_S x31, PT_T6(sp)
+	save_from_x5_to_x31
 
 	REG_L s0, TASK_TI_KERNEL_SP(tp)
 	csrr s1, CSR_STATUS
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index d171eca623b6..040d098279a9 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -70,33 +70,7 @@
 	REG_S x2, PT_SP(sp)
 	REG_S x3, PT_GP(sp)
 	REG_S x4, PT_TP(sp)
-	REG_S x5, PT_T0(sp)
-	REG_S x6, PT_T1(sp)
-	REG_S x7, PT_T2(sp)
-	REG_S x8, PT_S0(sp)
-	REG_S x9, PT_S1(sp)
-	REG_S x10, PT_A0(sp)
-	REG_S x11, PT_A1(sp)
-	REG_S x12, PT_A2(sp)
-	REG_S x13, PT_A3(sp)
-	REG_S x14, PT_A4(sp)
-	REG_S x15, PT_A5(sp)
-	REG_S x16, PT_A6(sp)
-	REG_S x17, PT_A7(sp)
-	REG_S x18, PT_S2(sp)
-	REG_S x19, PT_S3(sp)
-	REG_S x20, PT_S4(sp)
-	REG_S x21, PT_S5(sp)
-	REG_S x22, PT_S6(sp)
-	REG_S x23, PT_S7(sp)
-	REG_S x24, PT_S8(sp)
-	REG_S x25, PT_S9(sp)
-	REG_S x26, PT_S10(sp)
-	REG_S x27, PT_S11(sp)
-	REG_S x28, PT_T3(sp)
-	REG_S x29, PT_T4(sp)
-	REG_S x30, PT_T5(sp)
-	REG_S x31, PT_T6(sp)
+	save_from_x5_to_x31
 	.endm
 
 	.macro RESTORE_ALL
@@ -108,33 +82,7 @@
 	REG_L x2, PT_SP(sp)
 	REG_L x3, PT_GP(sp)
 	REG_L x4, PT_TP(sp)
-	REG_L x5, PT_T0(sp)
-	REG_L x6, PT_T1(sp)
-	REG_L x7, PT_T2(sp)
-	REG_L x8, PT_S0(sp)
-	REG_L x9, PT_S1(sp)
-	REG_L x10, PT_A0(sp)
-	REG_L x11, PT_A1(sp)
-	REG_L x12, PT_A2(sp)
-	REG_L x13, PT_A3(sp)
-	REG_L x14, PT_A4(sp)
-	REG_L x15, PT_A5(sp)
-	REG_L x16, PT_A6(sp)
-	REG_L x17, PT_A7(sp)
-	REG_L x18, PT_S2(sp)
-	REG_L x19, PT_S3(sp)
-	REG_L x20, PT_S4(sp)
-	REG_L x21, PT_S5(sp)
-	REG_L x22, PT_S6(sp)
-	REG_L x23, PT_S7(sp)
-	REG_L x24, PT_S8(sp)
-	REG_L x25, PT_S9(sp)
-	REG_L x26, PT_S10(sp)
-	REG_L x27, PT_S11(sp)
-	REG_L x28, PT_T3(sp)
-	REG_L x29, PT_T4(sp)
-	REG_L x30, PT_T5(sp)
-	REG_L x31, PT_T6(sp)
+	restore_from_x5_to_x31
 
 	addi sp, sp, PT_SIZE_ON_STACK
 	addi sp, sp, SZREG
-- 
2.37.2