From: Clément Léger
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Clément Léger, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Maciej W. Rozycki, David Laight
Subject: [PATCH v2 1/3] riscv: make unsafe user copy routines use existing assembly routines
Date: Mon, 2 Jun 2025 21:39:14 +0200
Message-ID: <20250602193918.868962-2-cleger@rivosinc.com>
In-Reply-To: <20250602193918.868962-1-cleger@rivosinc.com>
References: <20250602193918.868962-1-cleger@rivosinc.com>

From: Alexandre Ghiti

The current implementation is underperforming and, in addition, it
triggers misaligned access traps on platforms that do not handle
misaligned accesses in hardware. Use the existing assembly routines to
solve both problems at once.
Signed-off-by: Alexandre Ghiti
---
 arch/riscv/include/asm/asm-prototypes.h |  2 +-
 arch/riscv/include/asm/uaccess.h        | 33 ++++------------
 arch/riscv/lib/riscv_v_helpers.c        | 11 ++++--
 arch/riscv/lib/uaccess.S                | 50 +++++++++++++++++--------
 arch/riscv/lib/uaccess_vector.S         | 15 ++++++--
 5 files changed, 63 insertions(+), 48 deletions(-)

diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
index cd627ec289f1..5d10edde6d17 100644
--- a/arch/riscv/include/asm/asm-prototypes.h
+++ b/arch/riscv/include/asm/asm-prototypes.h
@@ -12,7 +12,7 @@ long long __ashlti3(long long a, int b);
 #ifdef CONFIG_RISCV_ISA_V

 #ifdef CONFIG_MMU
-asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n);
+asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n, bool enable_sum);
 #endif /* CONFIG_MMU */

 void xor_regs_2_(unsigned long bytes, unsigned long *__restrict p1,
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index 87d01168f80a..046de7ced09c 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -450,35 +450,18 @@ static inline void user_access_restore(unsigned long enabled) { }
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 } while (0)

-#define unsafe_copy_loop(dst, src, len, type, op, label)		\
-	while (len >= sizeof(type)) {					\
-		op(*(type *)(src), (type __user *)(dst), label);	\
-		dst += sizeof(type);					\
-		src += sizeof(type);					\
-		len -= sizeof(type);					\
-	}
+unsigned long __must_check __asm_copy_to_user_sum_enabled(void __user *to,
+	const void *from, unsigned long n);
+unsigned long __must_check __asm_copy_from_user_sum_enabled(void *to,
+	const void __user *from, unsigned long n);

 #define unsafe_copy_to_user(_dst, _src, _len, label)			\
-do {									\
-	char __user *__ucu_dst = (_dst);				\
-	const char *__ucu_src = (_src);					\
-	size_t __ucu_len = (_len);					\
-	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u64, unsafe_put_user, label);	\
-	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u32, unsafe_put_user, label);	\
-	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u16, unsafe_put_user, label);	\
-	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u8, unsafe_put_user, label);	\
-} while (0)
+	if (__asm_copy_to_user_sum_enabled(_dst, _src, _len))		\
+		goto label;

 #define unsafe_copy_from_user(_dst, _src, _len, label)			\
-do {									\
-	char *__ucu_dst = (_dst);					\
-	const char __user *__ucu_src = (_src);				\
-	size_t __ucu_len = (_len);					\
-	unsafe_copy_loop(__ucu_src, __ucu_dst, __ucu_len, u64, unsafe_get_user, label);	\
-	unsafe_copy_loop(__ucu_src, __ucu_dst, __ucu_len, u32, unsafe_get_user, label);	\
-	unsafe_copy_loop(__ucu_src, __ucu_dst, __ucu_len, u16, unsafe_get_user, label);	\
-	unsafe_copy_loop(__ucu_src, __ucu_dst, __ucu_len, u8, unsafe_get_user, label);	\
-} while (0)
+	if (__asm_copy_from_user_sum_enabled(_dst, _src, _len))		\
+		goto label;

 #else /* CONFIG_MMU */
 #include <asm-generic/uaccess.h>
diff --git a/arch/riscv/lib/riscv_v_helpers.c b/arch/riscv/lib/riscv_v_helpers.c
index be38a93cedae..7bbdfc6d4552 100644
--- a/arch/riscv/lib/riscv_v_helpers.c
+++ b/arch/riscv/lib/riscv_v_helpers.c
@@ -16,8 +16,11 @@
 #ifdef CONFIG_MMU
 size_t riscv_v_usercopy_threshold = CONFIG_RISCV_ISA_V_UCOPY_THRESHOLD;
 int __asm_vector_usercopy(void *dst, void *src, size_t n);
+int __asm_vector_usercopy_sum_enabled(void *dst, void *src, size_t n);
 int fallback_scalar_usercopy(void *dst, void *src, size_t n);
-asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n)
+int fallback_scalar_usercopy_sum_enabled(void *dst, void *src, size_t n);
+asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n,
+				     bool enable_sum)
 {
 	size_t remain, copied;

@@ -26,7 +29,8 @@ asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n)
 		goto fallback;

 	kernel_vector_begin();
-	remain = __asm_vector_usercopy(dst, src, n);
+	remain = enable_sum ? __asm_vector_usercopy(dst, src, n) :
+			      __asm_vector_usercopy_sum_enabled(dst, src, n);
 	kernel_vector_end();

 	if (remain) {
@@ -40,6 +44,7 @@ asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n)
 	return remain;

 fallback:
-	return fallback_scalar_usercopy(dst, src, n);
+	return enable_sum ? fallback_scalar_usercopy(dst, src, n) :
+			    fallback_scalar_usercopy_sum_enabled(dst, src, n);
 }
 #endif
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
index 6a9f116bb545..4efea1b3326c 100644
--- a/arch/riscv/lib/uaccess.S
+++ b/arch/riscv/lib/uaccess.S
@@ -17,14 +17,43 @@ SYM_FUNC_START(__asm_copy_to_user)
 	ALTERNATIVE("j fallback_scalar_usercopy", "nop", 0, RISCV_ISA_EXT_ZVE32X, CONFIG_RISCV_ISA_V)
 	REG_L	t0, riscv_v_usercopy_threshold
 	bltu	a2, t0, fallback_scalar_usercopy
-	tail	enter_vector_usercopy
+	li	a3, 1
+	tail	enter_vector_usercopy
 #endif
-SYM_FUNC_START(fallback_scalar_usercopy)
+SYM_FUNC_END(__asm_copy_to_user)
+EXPORT_SYMBOL(__asm_copy_to_user)
+SYM_FUNC_ALIAS(__asm_copy_from_user, __asm_copy_to_user)
+EXPORT_SYMBOL(__asm_copy_from_user)

+SYM_FUNC_START(fallback_scalar_usercopy)
 	/* Enable access to user memory */
-	li	t6, SR_SUM
-	csrs	CSR_STATUS, t6
+	li	t6, SR_SUM
+	csrs	CSR_STATUS, t6
+	mv	t6, ra

+	call	fallback_scalar_usercopy_sum_enabled
+
+	/* Disable access to user memory */
+	mv	ra, t6
+	li	t6, SR_SUM
+	csrc	CSR_STATUS, t6
+	ret
+SYM_FUNC_END(fallback_scalar_usercopy)
+
+SYM_FUNC_START(__asm_copy_to_user_sum_enabled)
+#ifdef CONFIG_RISCV_ISA_V
+	ALTERNATIVE("j fallback_scalar_usercopy_sum_enabled", "nop", 0, RISCV_ISA_EXT_ZVE32X, CONFIG_RISCV_ISA_V)
+	REG_L	t0, riscv_v_usercopy_threshold
+	bltu	a2, t0, fallback_scalar_usercopy_sum_enabled
+	li	a3, 0
+	tail	enter_vector_usercopy
+#endif
+SYM_FUNC_END(__asm_copy_to_user_sum_enabled)
+SYM_FUNC_ALIAS(__asm_copy_from_user_sum_enabled, __asm_copy_to_user_sum_enabled)
+EXPORT_SYMBOL(__asm_copy_from_user_sum_enabled)
+EXPORT_SYMBOL(__asm_copy_to_user_sum_enabled)
+
+SYM_FUNC_START(fallback_scalar_usercopy_sum_enabled)
 	/*
 	 * Save the terminal address which will be used to compute the number
 	 * of bytes copied in case of a fixup exception.
@@ -178,23 +207,12 @@ SYM_FUNC_START(fallback_scalar_usercopy)
 	bltu	a0, t0, 4b	/* t0 - end of dst */

 .Lout_copy_user:
-	/* Disable access to user memory */
-	csrc	CSR_STATUS, t6
 	li	a0, 0
 	ret
-
-	/* Exception fixup code */
 10:
-	/* Disable access to user memory */
-	csrc	CSR_STATUS, t6
 	sub	a0, t5, a0
 	ret
-SYM_FUNC_END(__asm_copy_to_user)
-SYM_FUNC_END(fallback_scalar_usercopy)
-EXPORT_SYMBOL(__asm_copy_to_user)
-SYM_FUNC_ALIAS(__asm_copy_from_user, __asm_copy_to_user)
-EXPORT_SYMBOL(__asm_copy_from_user)
-
+SYM_FUNC_END(fallback_scalar_usercopy_sum_enabled)

 SYM_FUNC_START(__clear_user)

diff --git a/arch/riscv/lib/uaccess_vector.S b/arch/riscv/lib/uaccess_vector.S
index 7c45f26de4f7..03b5560609a2 100644
--- a/arch/riscv/lib/uaccess_vector.S
+++ b/arch/riscv/lib/uaccess_vector.S
@@ -24,7 +24,18 @@ SYM_FUNC_START(__asm_vector_usercopy)
 	/* Enable access to user memory */
 	li	t6, SR_SUM
 	csrs	CSR_STATUS, t6
+	mv	t6, ra

+	call	__asm_vector_usercopy_sum_enabled
+
+	/* Disable access to user memory */
+	mv	ra, t6
+	li	t6, SR_SUM
+	csrc	CSR_STATUS, t6
+	ret
+SYM_FUNC_END(__asm_vector_usercopy)
+
+SYM_FUNC_START(__asm_vector_usercopy_sum_enabled)
 loop:
 	vsetvli	iVL, iNum, e8, ELEM_LMUL_SETTING, ta, ma
 	fixup vle8.v vData, (pSrc), 10f
@@ -36,8 +47,6 @@ loop:

 	/* Exception fixup for vector load is shared with normal exit */
 10:
-	/* Disable access to user memory */
-	csrc	CSR_STATUS, t6
 	mv	a0, iNum
 	ret

@@ -49,4 +58,4 @@ loop:
 	csrr	t2, CSR_VSTART
 	sub	iNum, iNum, t2
 	j	10b
-SYM_FUNC_END(__asm_vector_usercopy)
+SYM_FUNC_END(__asm_vector_usercopy_sum_enabled)
--
2.49.0
From: Clément Léger
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Clément Léger, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Maciej W. Rozycki, David Laight
Subject: [PATCH v2 2/3] riscv: process: use unsigned int instead of unsigned long for put_user()
Date: Mon, 2 Jun 2025 21:39:15 +0200
Message-ID: <20250602193918.868962-3-cleger@rivosinc.com>
In-Reply-To: <20250602193918.868962-1-cleger@rivosinc.com>
References: <20250602193918.868962-1-cleger@rivosinc.com>

The prctl() specification for GET_UNALIGN_CTL states that the value is
returned through an unsigned int * address passed as an unsigned long.
Change the put_user() type to match that, which also avoids a
potentially misaligned access.
Signed-off-by: Clément Léger
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/kernel/process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 15d8f75902f8..9ee6d816b98b 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -57,7 +57,7 @@ int get_unalign_ctl(struct task_struct *tsk, unsigned long adr)
 	if (!unaligned_ctl_available())
 		return -EINVAL;

-	return put_user(tsk->thread.align_ctl, (unsigned long __user *)adr);
+	return put_user(tsk->thread.align_ctl, (unsigned int __user *)adr);
 }

 void __show_regs(struct pt_regs *regs)
--
2.49.0
From: Clément Léger
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Clément Léger, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Maciej W. Rozycki, David Laight
Subject: [PATCH v2 3/3] riscv: uaccess: do not do misaligned accesses in get/put_user()
Date: Mon, 2 Jun 2025 21:39:16 +0200
Message-ID: <20250602193918.868962-4-cleger@rivosinc.com>
In-Reply-To: <20250602193918.868962-1-cleger@rivosinc.com>
References: <20250602193918.868962-1-cleger@rivosinc.com>

A misaligned access to userspace memory traps on platforms where such
accesses are emulated. Recent fixes removed the kernel's ability to
emulate misaligned accesses to userspace memory safely, since interrupts
are now kept disabled the whole time, so such an access would crash the
kernel. This was detected with GET_UNALIGN_CTL(), which did a put_user()
with an unsigned long * address that should have been an unsigned int *.

Re-enabling kernel misaligned access emulation would be somewhat risky
and would also degrade performance. Rather than doing that, avoid
misaligned accesses entirely by using the copy_from/to_user() routines,
which never perform them. This is done only for
!CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, so the extra code is generated
only for that configuration.
Signed-off-by: Clément Léger
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/include/asm/uaccess.h | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index 046de7ced09c..d472da4450e6 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -169,8 +169,19 @@ do {								\

 #endif /* CONFIG_64BIT */

+unsigned long __must_check __asm_copy_to_user_sum_enabled(void __user *to,
+	const void *from, unsigned long n);
+unsigned long __must_check __asm_copy_from_user_sum_enabled(void *to,
+	const void __user *from, unsigned long n);
+
 #define __get_user_nocheck(x, __gu_ptr, label)			\
 do {								\
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&	\
+	    !IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) {	\
+		if (__asm_copy_from_user_sum_enabled(&(x), __gu_ptr, sizeof(*__gu_ptr))) \
+			goto label;					\
+		break;							\
+	}								\
 	switch (sizeof(*__gu_ptr)) {				\
 	case 1:							\
 		__get_user_asm("lb", (x), __gu_ptr, label);	\
@@ -297,6 +308,13 @@ do {								\

 #define __put_user_nocheck(x, __gu_ptr, label)			\
 do {								\
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&	\
+	    !IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) {	\
+		__inttype(x) val = (__inttype(x))x;			\
+		if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(val), sizeof(*__gu_ptr))) \
+			goto label;					\
+		break;							\
+	}								\
 	switch (sizeof(*__gu_ptr)) {				\
 	case 1:							\
 		__put_user_asm("sb", (x), __gu_ptr, label);	\
@@ -450,11 +468,6 @@ static inline void user_access_restore(unsigned long enabled) { }
 	(x) = (__force __typeof__(*(ptr)))__gu_val;		\
 } while (0)

-unsigned long __must_check __asm_copy_to_user_sum_enabled(void __user *to,
-	const void *from, unsigned long n);
-unsigned long __must_check __asm_copy_from_user_sum_enabled(void *to,
-	const void __user *from, unsigned long n);
-
 #define unsafe_copy_to_user(_dst, _src, _len, label)		\
 	if (__asm_copy_to_user_sum_enabled(_dst, _src, _len))	\
 		goto label;
--
2.49.0