From nobody Tue Feb 10 13:36:13 2026
From: Deepak Gupta
Date: Tue, 04 Feb 2025 17:21:57 -0800
Subject: [PATCH v9 10/26] riscv/mm: Implement map_shadow_stack() syscall
Message-Id: <20250204-v5_user_cfi_series-v9-10-b37a49c5205c@rivosinc.com>
References: <20250204-v5_user_cfi_series-v9-0-b37a49c5205c@rivosinc.com>
In-Reply-To: <20250204-v5_user_cfi_series-v9-0-b37a49c5205c@rivosinc.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Andrew Morton, "Liam R. Howlett", Vlastimil Babka, Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley, Rob Herring, Krzysztof Kozlowski, Arnd Bergmann, Christian Brauner, Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook, Jonathan Corbet, Shuah Khan, Jann Horn, Conor Dooley
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, devicetree@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, alistair.francis@wdc.com, richard.henderson@linaro.org, jim.shu@sifive.com, andybnac@gmail.com, kito.cheng@sifive.com, charlie@rivosinc.com, atishp@rivosinc.com, evan@rivosinc.com, cleger@rivosinc.com, alexghiti@rivosinc.com, samitolvanen@google.com, broonie@kernel.org, rick.p.edgecombe@intel.com, Deepak Gupta

As discussed extensively in the changelog for the addition of this
syscall on x86 ("x86/shstk: Introduce map_shadow_stack syscall"), the
existing mmap() and madvise() syscalls do not map entirely well onto the
security requirements for shadow stack memory, since they lead to
windows where memory is allocated but not yet protected, or stacks which
are not properly and safely initialised. Instead a new syscall,
map_shadow_stack(), has been defined which allocates and initialises a
shadow stack page.

This patch implements this syscall for riscv. riscv doesn't require the
token to be set up by the kernel because user mode can do that by
itself. However, to provide compatibility and portability with other
architectures, user mode can specify the token set flag.
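(Illustration only, not part of the patch: a minimal userspace sketch of
invoking the new syscall. The syscall number and flag value below are
assumptions standing in for the UAPI definitions added elsewhere in this
series; in practice they come from the kernel headers.)

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Assumed values if the UAPI headers are not available. */
#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453
#endif
#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN (1UL << 0)
#endif

int main(void)
{
	/* Let the kernel pick the address; ask for a restore token. */
	long ret = syscall(__NR_map_shadow_stack, 0UL, 4 * 4096UL,
			   SHADOW_STACK_SET_TOKEN);

	if (ret < 0) {
		perror("map_shadow_stack");
		return 1;
	}

	/* With the token flag set, the return value points at the token slot. */
	printf("shadow stack token slot at %#lx\n", (unsigned long)ret);
	return 0;
}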
Signed-off-by: Deepak Gupta
---
 arch/riscv/kernel/Makefile  |   1 +
 arch/riscv/kernel/usercfi.c | 144 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 145 insertions(+)

diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 8d186bfced45..3a861d320654 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -125,3 +125,4 @@ obj-$(CONFIG_ACPI)		+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)	+= acpi_numa.o
 
 obj-$(CONFIG_GENERIC_CPU_VULNERABILITIES) += bugs.o
+obj-$(CONFIG_RISCV_USER_CFI) += usercfi.o
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
new file mode 100644
index 000000000000..24022809a7b5
--- /dev/null
+++ b/arch/riscv/kernel/usercfi.c
@@ -0,0 +1,144 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2024 Rivos, Inc.
+ * Deepak Gupta
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define SHSTK_ENTRY_SIZE	sizeof(void *)
+
+/*
+ * Writes on shadow stack can either be `sspush` or `ssamoswap`. `sspush` can happen
+ * implicitly on current shadow stack pointed to by CSR_SSP. `ssamoswap` takes pointer to
+ * shadow stack. To keep it simple, we plan to use `ssamoswap` to perform writes on shadow
+ * stack.
+ */
+static noinline unsigned long amo_user_shstk(unsigned long *addr, unsigned long val)
+{
+	/*
+	 * Never expect -1 on shadow stack. Expect return addresses and zero
+	 */
+	unsigned long swap = -1;
+
+	__enable_user_access();
+	asm goto(
+		".option push\n"
+		".option arch, +zicfiss\n"
+		"1: ssamoswap.d %[swap], %[val], %[addr]\n"
+		_ASM_EXTABLE(1b, %l[fault])
+		RISCV_ACQUIRE_BARRIER
+		".option pop\n"
+		: [swap] "=r" (swap), [addr] "+A" (*addr)
+		: [val] "r" (val)
+		: "memory"
+		: fault
+		);
+	__disable_user_access();
+	return swap;
+fault:
+	__disable_user_access();
+	return -1;
+}
+
+/*
+ * Create a restore token on the shadow stack. A token is always XLEN wide
+ * and aligned to XLEN.
+ */
+static int create_rstor_token(unsigned long ssp, unsigned long *token_addr)
+{
+	unsigned long addr;
+
+	/* Token must be aligned */
+	if (!IS_ALIGNED(ssp, SHSTK_ENTRY_SIZE))
+		return -EINVAL;
+
+	/* On RISC-V we're constructing token to be function of address itself */
+	addr = ssp - SHSTK_ENTRY_SIZE;
+
+	if (amo_user_shstk((unsigned long __user *)addr, (unsigned long)ssp) == -1)
+		return -EFAULT;
+
+	if (token_addr)
+		*token_addr = addr;
+
+	return 0;
+}
+
+static unsigned long allocate_shadow_stack(unsigned long addr, unsigned long size,
+					   unsigned long token_offset, bool set_tok)
+{
+	int flags = MAP_ANONYMOUS | MAP_PRIVATE;
+	struct mm_struct *mm = current->mm;
+	unsigned long populate, tok_loc = 0;
+
+	if (addr)
+		flags |= MAP_FIXED_NOREPLACE;
+
+	mmap_write_lock(mm);
+	addr = do_mmap(NULL, addr, size, PROT_READ, flags,
+		       VM_SHADOW_STACK | VM_WRITE, 0, &populate, NULL);
+	mmap_write_unlock(mm);
+
+	if (!set_tok || IS_ERR_VALUE(addr))
+		goto out;
+
+	if (create_rstor_token(addr + token_offset, &tok_loc)) {
+		vm_munmap(addr, size);
+		return -EINVAL;
+	}
+
+	addr = tok_loc;
+
+out:
+	return addr;
+}
+
+SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags)
+{
+	bool set_tok = flags & SHADOW_STACK_SET_TOKEN;
+	unsigned long aligned_size = 0;
+
+	if (!cpu_supports_shadow_stack())
+		return -EOPNOTSUPP;
+
+	/* Anything other than set token should result in invalid param */
+	if (flags & ~SHADOW_STACK_SET_TOKEN)
+		return -EINVAL;
+
+	/*
+	 * Unlike other architectures, on RISC-V the SSP pointer is held in CSR_SSP and is
+	 * available as a CSR in all modes. CSR accesses are performed using a 12-bit index
+	 * programmed in the instruction itself. This provides a static property on register
+	 * programming and writes to the CSR can't be unintentional from the programmer's
+	 * perspective. As long as the programmer has guarded the areas which perform writes
+	 * to CSR_SSP properly, shadow stack pivoting is not possible. Since CSR_SSP is
+	 * writeable by user mode, it can itself set up a shadow stack token subsequent to
+	 * allocation. Although, in order to provide portability with other architectures
+	 * (because `map_shadow_stack` is an arch-agnostic syscall), RISC-V will follow the
+	 * expectation of a token flag in flags and, if provided, set up a token at the base.
+	 */
+
+	/* If there isn't space for a token */
+	if (set_tok && size < SHSTK_ENTRY_SIZE)
+		return -ENOSPC;
+
+	if (addr && (addr & (PAGE_SIZE - 1)))
+		return -EINVAL;
+
+	aligned_size = PAGE_ALIGN(size);
+	if (aligned_size < size)
+		return -EOVERFLOW;
+
+	return allocate_shadow_stack(addr, aligned_size, size, set_tok);
+}

-- 
2.34.1
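(Illustration only, not part of the patch: the layout that
allocate_shadow_stack() produces when SHADOW_STACK_SET_TOKEN is passed.
The base address 0x200000 is a made-up example, and size is assumed to
be a single 4 KiB page, i.e. size == aligned_size == 0x1000.)

	0x200000	base of the VM_SHADOW_STACK mapping (PROT_READ, VM_WRITE)
	...		usable shadow stack entries
	0x200ff8	restore token written via ssamoswap.d, holds 0x201000
	0x201000	end of the mapping (addr + size)

Because token_offset is the caller-supplied size, create_rstor_token()
places the token in the last XLEN-sized slot below addr + size, and the
syscall then returns that token address (tok_loc) rather than the
mapping base.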