From: Deepak Gupta
Date: Tue, 29 Oct 2024 16:44:15 -0700
Subject: [PATCH v7 15/32] riscv/shstk: If needed allocate a new shadow stack on clone
Message-Id: <20241029-v5_user_cfi_series-v7-15-2727ce9936cb@rivosinc.com>
References: <20241029-v5_user_cfi_series-v7-0-2727ce9936cb@rivosinc.com>
In-Reply-To: <20241029-v5_user_cfi_series-v7-0-2727ce9936cb@rivosinc.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", Andrew Morton, "Liam R. Howlett", Vlastimil Babka,
 Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley,
 Rob Herring, Krzysztof Kozlowski, Arnd Bergmann, Christian Brauner,
 Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook, Jonathan Corbet,
 Shuah Khan
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-riscv@lists.infradead.org, devicetree@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, alistair.francis@wdc.com,
 richard.henderson@linaro.org, jim.shu@sifive.com, andybnac@gmail.com,
 kito.cheng@sifive.com, charlie@rivosinc.com, atishp@rivosinc.com, evan@rivosinc.com,
 cleger@rivosinc.com, alexghiti@rivosinc.com, samitolvanen@google.com,
 broonie@kernel.org, rick.p.edgecombe@intel.com, Deepak Gupta
X-Mailer: b4 0.14.0

Userspace specifies CLONE_VM to share the address space and spawn a new
thread. `clone` allows userspace to specify a new stack for the new thread,
but there is no way to specify a new shadow stack base address without
changing the API. This patch allocates a new shadow stack whenever CLONE_VM
is given.

In the case of CLONE_VFORK, the parent is suspended until the child
finishes, so the child can use the parent's shadow stack. In the case of
!CLONE_VM, COW kicks in because the entire address space is copied from
parent to child.

`clone3` is extensible and can provide a mechanism through which a shadow
stack could be passed as an input parameter. That is not settled yet and is
being extensively discussed on the mailing list. Once it is settled, this
commit will adapt to it.
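
For illustration only (not part of this patch), below is a minimal
userspace sketch of the three cases the allocation logic distinguishes,
assuming the usual libc mappings: pthread_create() uses clone with
CLONE_VM, fork() does not set CLONE_VM, and vfork() sets
CLONE_VM | CLONE_VFORK. No userspace change is required; the kernel picks
the shadow stack behaviour purely from the clone flags.

/* Illustrative sketch only; build with -pthread. */
#include <pthread.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void *worker(void *arg)
{
	/* CLONE_VM thread: the kernel allocates a fresh shadow stack for it. */
	return NULL;
}

int main(void)
{
	pthread_t t;
	pid_t pid;

	/* pthread_create() -> CLONE_VM: a new shadow stack is allocated. */
	if (pthread_create(&t, NULL, worker, NULL))
		exit(1);
	pthread_join(t, NULL);

	/* fork() -> !CLONE_VM: child gets a COW copy of the parent's shadow stack. */
	pid = fork();
	if (pid == 0)
		_exit(0);
	waitpid(pid, NULL, 0);

	/* vfork() -> CLONE_VFORK: child runs on the parent's shadow stack. */
	pid = vfork();
	if (pid == 0)
		_exit(0);
	waitpid(pid, NULL, 0);

	return 0;
}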
Signed-off-by: Deepak Gupta
---
 arch/riscv/include/asm/mmu_context.h |   7 ++
 arch/riscv/include/asm/usercfi.h     |  25 +++++++
 arch/riscv/kernel/process.c          |   9 +++
 arch/riscv/kernel/usercfi.c          | 121 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 162 insertions(+)

diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
index 7030837adc1a..d4432a46164c 100644
--- a/arch/riscv/include/asm/mmu_context.h
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -35,6 +35,13 @@ static inline int init_new_context(struct task_struct *tsk,
 
 DECLARE_STATIC_KEY_FALSE(use_asid_allocator);
 
+#define deactivate_mm deactivate_mm
+static inline void deactivate_mm(struct task_struct *tsk,
+				 struct mm_struct *mm)
+{
+	shstk_release(tsk);
+}
+
 #include <asm-generic/mmu_context.h>
 
 #endif /* _ASM_RISCV_MMU_CONTEXT_H */
diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
index 4fa201b4fc4e..4da9cbc8f9b5 100644
--- a/arch/riscv/include/asm/usercfi.h
+++ b/arch/riscv/include/asm/usercfi.h
@@ -8,6 +8,9 @@
 #ifndef __ASSEMBLY__
 #include
 
+struct task_struct;
+struct kernel_clone_args;
+
 #ifdef CONFIG_RISCV_USER_CFI
 struct cfi_status {
 	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
@@ -17,6 +20,28 @@ struct cfi_status {
 	unsigned long shdw_stk_size; /* size of shadow stack */
 };
 
+unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+				       const struct kernel_clone_args *args);
+void shstk_release(struct task_struct *tsk);
+void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size);
+unsigned long get_shstk_base(struct task_struct *task, unsigned long *size);
+void set_active_shstk(struct task_struct *task, unsigned long shstk_addr);
+bool is_shstk_enabled(struct task_struct *task);
+
+#else
+
+#define shstk_alloc_thread_stack(tsk, args) 0
+
+#define shstk_release(tsk)
+
+#define get_shstk_base(task, size) 0UL
+
+#define set_shstk_base(task, shstk_addr, size)
+
+#define set_active_shstk(task, shstk_addr)
+
+#define is_shstk_enabled(task) false
+
 #endif /* CONFIG_RISCV_USER_CFI */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index e3142d8a6e28..632c621682f6 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <asm/usercfi.h>
 
 #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
 #include
@@ -206,6 +207,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 	unsigned long clone_flags = args->flags;
 	unsigned long usp = args->stack;
 	unsigned long tls = args->tls;
+	unsigned long ssp = 0;
 	struct pt_regs *childregs = task_pt_regs(p);
 
 	memset(&p->thread.s, 0, sizeof(p->thread.s));
@@ -220,11 +222,18 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 		p->thread.s[0] = (unsigned long)args->fn;
 		p->thread.s[1] = (unsigned long)args->fn_arg;
 	} else {
+		/* Allocate a new shadow stack if needed (the CLONE_VM case requires it). */
+		ssp = shstk_alloc_thread_stack(p, args);
+		if (IS_ERR_VALUE(ssp))
+			return PTR_ERR((void *)ssp);
+
 		*childregs = *(current_pt_regs());
 		/* Turn off status.VS */
 		riscv_v_vstate_off(childregs);
 		if (usp) /* User fork */
 			childregs->sp = usp;
+		/* If needed, set the new ssp */
+		ssp ? set_active_shstk(p, ssp) : 0;
 		if (clone_flags & CLONE_SETTLS)
 			childregs->tp = tls;
 		childregs->a0 = 0; /* Return value of fork() */
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
index 96bb324abafb..6cd166b73316 100644
--- a/arch/riscv/kernel/usercfi.c
+++ b/arch/riscv/kernel/usercfi.c
@@ -19,6 +19,41 @@
 
 #define SHSTK_ENTRY_SIZE	sizeof(void *)
 
+bool is_shstk_enabled(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.ubcfi_en ? true : false;
+}
+
+void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size)
+{
+	task->thread_info.user_cfi_state.shdw_stk_base = shstk_addr;
+	task->thread_info.user_cfi_state.shdw_stk_size = size;
+}
+
+unsigned long get_shstk_base(struct task_struct *task, unsigned long *size)
+{
+	if (size)
+		*size = task->thread_info.user_cfi_state.shdw_stk_size;
+	return task->thread_info.user_cfi_state.shdw_stk_base;
+}
+
+void set_active_shstk(struct task_struct *task, unsigned long shstk_addr)
+{
+	task->thread_info.user_cfi_state.user_shdw_stk = shstk_addr;
+}
+
+/*
+ * If size is 0, then to be compatible with the regular stack we want it to be
+ * as big as the regular stack. Else PAGE_ALIGN it and return it.
+ */
+static unsigned long calc_shstk_size(unsigned long size)
+{
+	if (size)
+		return PAGE_ALIGN(size);
+
+	return PAGE_ALIGN(min_t(unsigned long long, rlimit(RLIMIT_STACK), SZ_4G));
+}
+
 /*
  * Writes on shadow stack can either be `sspush` or `ssamoswap`. `sspush` can happen
  * implicitly on current shadow stack pointed to by CSR_SSP. `ssamoswap` takes pointer to
@@ -143,3 +178,89 @@ SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsi
 
 	return allocate_shadow_stack(addr, aligned_size, size, set_tok);
 }
+
+/*
+ * This gets called during clone/clone3/fork and is needed to allocate a shadow stack for
+ * cases where CLONE_VM is specified and thus a different stack is specified by the user;
+ * we thus need a separate shadow stack too. How a separate shadow stack is specified by
+ * the user is still being debated. Once that's settled, remove this part of the comment.
+ * This function simply returns 0 if shadow stacks are not supported or if a separate shadow
+ * stack allocation is not needed (as in the !CLONE_VM case).
+ */
+unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+				       const struct kernel_clone_args *args)
+{
+	unsigned long addr, size;
+
+	/* If shadow stack is not supported, return 0 */
+	if (!cpu_supports_shadow_stack())
+		return 0;
+
+	/*
+	 * If shadow stack is not enabled on the new thread, skip any
+	 * switch to a new shadow stack.
+	 */
+	if (!is_shstk_enabled(tsk))
+		return 0;
+
+	/*
+	 * For CLONE_VFORK the child will share the parent's shadow stack.
+	 * Set base = 0 and size = 0; this is a special marker to track this state
+	 * so the freeing logic run for the child knows to leave it alone.
+	 */
+	if (args->flags & CLONE_VFORK) {
+		set_shstk_base(tsk, 0, 0);
+		return 0;
+	}
+
+	/*
+	 * For !CLONE_VM the child will use a copy of the parent's shadow
+	 * stack.
+	 */
+	if (!(args->flags & CLONE_VM))
+		return 0;
+
+	/*
+	 * Reaching here means CLONE_VM was specified and thus a separate shadow
+	 * stack is needed for the new cloned thread. Note: the allocation below
+	 * happens using the current mm.
+	 */
+	size = calc_shstk_size(args->stack_size);
+	addr = allocate_shadow_stack(0, size, 0, false);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+
+	set_shstk_base(tsk, addr, size);
+
+	return addr + size;
+}
+
+void shstk_release(struct task_struct *tsk)
+{
+	unsigned long base = 0, size = 0;
+	/* If shadow stack is not supported or not enabled, nothing to release */
+	if (!cpu_supports_shadow_stack() ||
+	    !is_shstk_enabled(tsk))
+		return;
+
+	/*
+	 * When fork() with CLONE_VM fails, the child (tsk) already has a
+	 * shadow stack allocated, and exit_thread() calls this function to
+	 * free it. In this case the parent (current) and the child share
+	 * the same mm struct. Move forward only when they're the same.
+	 */
+	if (!tsk->mm || tsk->mm != current->mm)
+		return;
+
+	/*
+	 * We know shadow stack is enabled but if base is NULL, then
+	 * this task is not managing its own shadow stack (CLONE_VFORK). So
+	 * skip freeing it.
+	 */
+	base = get_shstk_base(tsk, &size);
+	if (!base)
+		return;
+
+	vm_munmap(base, size);
+	set_shstk_base(tsk, 0, 0);
+}
-- 
2.34.1