From nobody Thu Oct 9 01:16:06 2025
From: Christophe Leroy
To: Michael Ellerman, Nicholas Piggin, Naveen N Rao, Madhavan Srinivasan,
 Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner, Ingo Molnar,
 Peter Zijlstra, Darren Hart, Davidlohr Bueso, Andre Almeida, Andrew Morton,
 David Laight, Dave Hansen, Linus Torvalds
Cc: Christophe Leroy, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH 1/5] uaccess: Add masked_user_{read/write}_access_begin
Date: Sun, 22 Jun 2025 11:52:39 +0200
Message-ID: <6fddae0cf0da15a6521bb847b63324b7a2a067b1.1750585239.git.christophe.leroy@csgroup.eu>

Although masked_user_access_begin() currently seems to be used only for
reading data from userspace, introduce masked_user_read_access_begin()
and masked_user_write_access_begin() in order to match
user_read_access_begin() and user_write_access_begin().

Have them default to masked_user_access_begin() when they are not
defined.

Signed-off-by: Christophe Leroy
---
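
A minimal sketch of the calling pattern the new helpers slot into,
mirroring (roughly) the futex_get_value() hunk below; read_u32_from_user()
is only an illustrative name, not something this patch adds:

	static int read_u32_from_user(u32 *dest, u32 __user *from)
	{
		u32 val;

		if (can_do_masked_user_access())
			from = masked_user_read_access_begin(from);
		else if (!user_read_access_begin(from, sizeof(*from)))
			return -EFAULT;
		/* unsafe_get_user() jumps to Efault on a faulting access */
		unsafe_get_user(val, from, Efault);
		user_read_access_end();
		*dest = val;
		return 0;

	Efault:
		user_read_access_end();
		return -EFAULT;
	}

On architectures that do not provide the masked variants, the new macros
simply fall back to masked_user_access_begin(), so callers can be
converted unconditionally.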
 fs/select.c             | 2 +-
 include/linux/uaccess.h | 8 ++++++++
 kernel/futex/futex.h    | 4 ++--
 lib/strncpy_from_user.c | 2 +-
 lib/strnlen_user.c      | 2 +-
 5 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/fs/select.c b/fs/select.c
index 9fb650d03d52..d8547bedf5eb 100644
--- a/fs/select.c
+++ b/fs/select.c
@@ -777,7 +777,7 @@ static inline int get_sigset_argpack(struct sigset_argpack *to,
 	// the path is hot enough for overhead of copy_from_user() to matter
 	if (from) {
 		if (can_do_masked_user_access())
-			from = masked_user_access_begin(from);
+			from = masked_user_read_access_begin(from);
 		else if (!user_read_access_begin(from, sizeof(*from)))
 			return -EFAULT;
 		unsafe_get_user(to->p, &from->p, Efault);
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 7c06f4795670..682a0cd2fe51 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -41,6 +41,14 @@
 #define mask_user_address(src) (src)
 #endif
 
+#ifndef masked_user_write_access_begin
+#define masked_user_write_access_begin masked_user_access_begin
+#endif
+#ifndef masked_user_read_access_begin
+#define masked_user_read_access_begin masked_user_access_begin
+#endif
+
+
 /*
  * Architectures should provide two primitives (raw_copy_{to,from}_user())
  * and get rid of their private instances of copy_{to,from}_user() and
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index fcd1617212ee..6cfcafa00736 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -305,7 +305,7 @@ static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
 	u32 val;
 
 	if (can_do_masked_user_access())
-		from = masked_user_access_begin(from);
+		from = masked_user_read_access_begin(from);
 	else if (!user_read_access_begin(from, sizeof(*from)))
 		return -EFAULT;
 	unsafe_get_user(val, from, Efault);
@@ -320,7 +320,7 @@ static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
 static __always_inline int futex_put_value(u32 val, u32 __user *to)
 {
 	if (can_do_masked_user_access())
-		to = masked_user_access_begin(to);
+		to = masked_user_read_access_begin(to);
 	else if (!user_read_access_begin(to, sizeof(*to)))
 		return -EFAULT;
 	unsafe_put_user(val, to, Efault);
diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
index 6dc234913dd5..5bb752ff7c61 100644
--- a/lib/strncpy_from_user.c
+++ b/lib/strncpy_from_user.c
@@ -126,7 +126,7 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
 	if (can_do_masked_user_access()) {
 		long retval;
 
-		src = masked_user_access_begin(src);
+		src = masked_user_read_access_begin(src);
 		retval = do_strncpy_from_user(dst, src, count, count);
 		user_read_access_end();
 		return retval;
diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c
index 6e489f9e90f1..4a6574b67f82 100644
--- a/lib/strnlen_user.c
+++ b/lib/strnlen_user.c
@@ -99,7 +99,7 @@ long strnlen_user(const char __user *str, long count)
 	if (can_do_masked_user_access()) {
 		long retval;
 
-		str = masked_user_access_begin(str);
+		str = masked_user_read_access_begin(str);
 		retval = do_strnlen_user(str, count, count);
 		user_read_access_end();
 		return retval;
-- 
2.49.0

From nobody Thu Oct 9 01:16:06 2025
From: Christophe Leroy
Subject: [PATCH 2/5] uaccess: Add speculation barrier to copy_from_user_iter()
Date: Sun, 22 Jun 2025 11:52:40 +0200

The result of "access_ok()" can be mis-speculated, so you can end up
executing speculatively past the check:

	if (access_ok(from, size))
		// Right here

For the same reason as done in copy_from_user() by commit 74e19ef0ff80
("uaccess: Add speculation barrier to copy_from_user()"), add a
speculation barrier to copy_from_user_iter().

See commit 74e19ef0ff80 ("uaccess: Add speculation barrier to
copy_from_user()") for more details.

Signed-off-by: Christophe Leroy
---
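
Sketched in isolation, the check-then-barrier pattern this enforces (a
simplified composite of copy_from_user() and the hunk below;
read_user_block() is an illustrative name, not an existing function):

	/* Returns the number of bytes that could not be copied,
	 * or len when the pointer does not pass access_ok().
	 */
	static unsigned long read_user_block(void *kernel_buf,
					     const void __user *user_ptr,
					     unsigned long len)
	{
		if (!access_ok(user_ptr, len))
			return len;
		/* don't let a mis-predicted access_ok() feed the copy */
		barrier_nospec();
		return raw_copy_from_user(kernel_buf, user_ptr, len);
	}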
 lib/iov_iter.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index f9193f952f49..ebf524a37907 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -50,6 +50,13 @@ size_t copy_from_user_iter(void __user *iter_from, size_t progress,
 	if (should_fail_usercopy())
 		return len;
 	if (access_ok(iter_from, len)) {
+		/*
+		 * Ensure that bad access_ok() speculation will not
+		 * lead to nasty side effects *after* the copy is
+		 * finished:
+		 */
+		barrier_nospec();
+
 		to += progress;
 		instrument_copy_from_user_before(to, iter_from, len);
 		res = raw_copy_from_user(to, iter_from, len);
-- 
2.49.0

From nobody Thu Oct 9 01:16:06 2025
From: Christophe Leroy
Subject: [PATCH 3/5] powerpc: Remove unused size parameter to KUAP enabling/disabling functions
Date: Sun, 22 Jun 2025 11:52:41 +0200
Message-ID: <6b6667bce077c6a55c93142695cb54efbedf1578.1750585239.git.christophe.leroy@csgroup.eu>

Since commit 16132529cee5 ("powerpc/32s: Rework Kernel Userspace
Access Protection") the size parameter is unused on all platforms.

Remove it.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/book3s/32/kup.h     |  2 +-
 arch/powerpc/include/asm/book3s/64/kup.h     |  4 +--
 arch/powerpc/include/asm/kup.h               | 22 ++++++------
 arch/powerpc/include/asm/nohash/32/kup-8xx.h |  2 +-
 arch/powerpc/include/asm/nohash/kup-booke.h  |  2 +-
 arch/powerpc/include/asm/uaccess.h           | 36 ++++++++++----------
 6 files changed, 33 insertions(+), 35 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 4e14a5427a63..8ea68d136152 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -98,7 +98,7 @@ static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 #define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
 
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      u32 size, unsigned long dir)
+					      unsigned long dir)
 {
 	BUILD_BUG_ON(!__builtin_constant_p(dir));
 
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 497a7bd31ecc..853fa2fb12be 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -354,7 +354,7 @@ __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 }
 
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir)
+					      unsigned long dir)
 {
 	unsigned long thread_amr = 0;
 
@@ -384,7 +384,7 @@ static __always_inline unsigned long get_kuap(void)
 static __always_inline void set_kuap(unsigned long value) { }
 
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir)
+					      unsigned long dir)
 {
 }
 
 #endif /* !CONFIG_PPC_KUAP */
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 2bb03d941e3e..4c70be11b99a 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -73,7 +73,7 @@ static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned
  */
 #ifndef CONFIG_PPC_BOOK3S_64
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir) { }
+					      unsigned long dir) { }
 static __always_inline void prevent_user_access(unsigned long dir) { }
 static __always_inline unsigned long prevent_user_access_return(void) { return 0UL; }
 static __always_inline void restore_user_access(unsigned long flags) { }
@@ -132,36 +132,34 @@ static __always_inline void kuap_assert_locked(void)
 	kuap_get_and_assert_locked();
 }
 
-static __always_inline void allow_read_from_user(const void __user *from, unsigned long size)
+static __always_inline void allow_read_from_user(const void __user *from)
 {
 	barrier_nospec();
-	allow_user_access(NULL, from, size, KUAP_READ);
+	allow_user_access(NULL, from, KUAP_READ);
 }
 
-static __always_inline void allow_write_to_user(void __user *to, unsigned long size)
+static __always_inline void allow_write_to_user(void __user *to)
 {
-	allow_user_access(to, NULL, size, KUAP_WRITE);
+	allow_user_access(to, NULL, KUAP_WRITE);
 }
 
-static __always_inline void allow_read_write_user(void __user *to, const void __user *from,
-						  unsigned long size)
+static __always_inline void allow_read_write_user(void __user *to, const void __user *from)
 {
 	barrier_nospec();
-	allow_user_access(to, from, size, KUAP_READ_WRITE);
+	allow_user_access(to, from, KUAP_READ_WRITE);
 }
 
-static __always_inline void prevent_read_from_user(const void __user *from, unsigned long size)
+static __always_inline void prevent_read_from_user(const void __user *from)
 {
 	prevent_user_access(KUAP_READ);
 }
 
-static __always_inline void prevent_write_to_user(void __user *to, unsigned long size)
+static __always_inline void prevent_write_to_user(void __user *to)
 {
 	prevent_user_access(KUAP_WRITE);
 }
 
-static __always_inline void prevent_read_write_user(void __user *to, const void __user *from,
-						     unsigned long size)
+static __always_inline void prevent_read_write_user(void __user *to, const void __user *from)
 {
 	prevent_user_access(KUAP_READ_WRITE);
 }
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 46bc5925e5fd..c2b32b392d41 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -50,7 +50,7 @@ static __always_inline void uaccess_end_8xx(void)
 }
 
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir)
+					      unsigned long dir)
 {
 	uaccess_begin_8xx(MD_APG_INIT);
 }
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 0c7c3258134c..6035d51af3cd 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -74,7 +74,7 @@ static __always_inline void uaccess_end_booke(void)
 }
 
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir)
+					      unsigned long dir)
 {
 	uaccess_begin_booke(current->thread.pid);
 }
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 4f5a46a77fa2..dd5cf325ecde 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -45,14 +45,14 @@ do {								\
 		__label__ __pu_failed;				\
 								\
-		allow_write_to_user(__pu_addr, __pu_size);	\
+		allow_write_to_user(__pu_addr);			\
 		__put_user_size_goto(__pu_val, __pu_addr, __pu_size, __pu_failed); \
-		prevent_write_to_user(__pu_addr, __pu_size);	\
+		prevent_write_to_user(__pu_addr);		\
 		__pu_err = 0;					\
 		break;						\
 								\
 __pu_failed:							\
-		prevent_write_to_user(__pu_addr, __pu_size);	\
+		prevent_write_to_user(__pu_addr);		\
 		__pu_err = -EFAULT;				\
 	} while (0);						\
 								\
@@ -301,9 +301,9 @@ do {								\
 	__typeof__(sizeof(*(ptr))) __gu_size = sizeof(*(ptr));	\
 								\
 	might_fault();						\
-	allow_read_from_user(__gu_addr, __gu_size);		\
+	allow_read_from_user(__gu_addr);			\
 	__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
-	prevent_read_from_user(__gu_addr, __gu_size);		\
+	prevent_read_from_user(__gu_addr);			\
 	(x) = (__typeof__(*(ptr)))__gu_val;			\
 								\
 	__gu_err;						\
@@ -329,9 +329,9 @@ raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
 {
 	unsigned long ret;
 
-	allow_read_write_user(to, from, n);
+	allow_read_write_user(to, from);
 	ret = __copy_tofrom_user(to, from, n);
-	prevent_read_write_user(to, from, n);
+	prevent_read_write_user(to, from);
 	return ret;
 }
 #endif /* __powerpc64__ */
@@ -341,9 +341,9 @@ static inline unsigned long raw_copy_from_user(void *to,
 {
 	unsigned long ret;
 
-	allow_read_from_user(from, n);
+	allow_read_from_user(from);
 	ret = __copy_tofrom_user((__force void __user *)to, from, n);
-	prevent_read_from_user(from, n);
+	prevent_read_from_user(from);
 	return ret;
 }
 
@@ -352,9 +352,9 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	unsigned long ret;
 
-	allow_write_to_user(to, n);
+	allow_write_to_user(to);
 	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
-	prevent_write_to_user(to, n);
+	prevent_write_to_user(to);
 	return ret;
 }
 
@@ -365,9 +365,9 @@ static inline unsigned long __clear_user(void __user *addr, unsigned long size)
 	unsigned long ret;
 
 	might_fault();
-	allow_write_to_user(addr, size);
+	allow_write_to_user(addr);
 	ret = __arch_clear_user(addr, size);
-	prevent_write_to_user(addr, size);
+	prevent_write_to_user(addr);
 	return ret;
 }
 
@@ -395,9 +395,9 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 {
 	if (check_copy_size(from, n, true)) {
 		if (access_ok(to, n)) {
-			allow_write_to_user(to, n);
+			allow_write_to_user(to);
 			n = copy_mc_generic((void __force *)to, from, n);
-			prevent_write_to_user(to, n);
+			prevent_write_to_user(to);
 		}
 	}
 
@@ -415,7 +415,7 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt
 
 	might_fault();
 
-	allow_read_write_user((void __user *)ptr, ptr, len);
+	allow_read_write_user((void __user *)ptr, ptr);
 	return true;
 }
 #define user_access_begin user_access_begin
@@ -431,7 +431,7 @@ user_read_access_begin(const void __user *ptr, size_t len)
 
 	might_fault();
 
-	allow_read_from_user(ptr, len);
+	allow_read_from_user(ptr);
 	return true;
 }
 #define user_read_access_begin user_read_access_begin
@@ -445,7 +445,7 @@ user_write_access_begin(const void __user *ptr, size_t len)
 
 	might_fault();
 
-	allow_write_to_user((void __user *)ptr, len);
+	allow_write_to_user((void __user *)ptr);
 	return true;
 }
 #define user_write_access_begin user_write_access_begin
-- 
2.49.0

From nobody Thu Oct 9 01:16:06 2025
From: Christophe Leroy
Subject: [PATCH 4/5] powerpc: Move barrier_nospec() out of allow_read_{from/write}_user()
Date: Sun, 22 Jun 2025 11:52:42 +0200

Move barrier_nospec() out of allow_read_from_user() and
allow_read_write_user() in order to allow reuse of those functions
when implementing masked user access.

Don't add it back in raw_copy_from_user() as it is already done by
the callers of raw_copy_from_user().

Signed-off-by: Christophe Leroy
---
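
A schematic view of where the barrier ends up after this change (a
simplified composite of the user_read_access_begin() hunk below and the
masked variant added in the next patch, not literal kernel code):

	/* conditional path: access_ok() can be mis-speculated,
	 * so the barrier is now issued explicitly by the caller side
	 */
	might_fault();
	barrier_nospec();
	allow_read_from_user(ptr);

	/* masked path (next patch): reuses allow_read_from_user()
	 * without any barrier, because mask_user_address() already
	 * sanitises the pointer
	 */
	might_fault();
	allow_read_from_user(mask_user_address(ptr));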
 arch/powerpc/include/asm/kup.h     | 2 --
 arch/powerpc/include/asm/uaccess.h | 4 ++++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 4c70be11b99a..4e2c79df4cdb 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -134,7 +134,6 @@ static __always_inline void kuap_assert_locked(void)
 
 static __always_inline void allow_read_from_user(const void __user *from)
 {
-	barrier_nospec();
 	allow_user_access(NULL, from, KUAP_READ);
 }
 
@@ -145,7 +144,6 @@ static __always_inline void allow_write_to_user(void __user *to)
 
 static __always_inline void allow_read_write_user(void __user *to, const void __user *from)
 {
-	barrier_nospec();
 	allow_user_access(to, from, KUAP_READ_WRITE);
 }
 
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index dd5cf325ecde..89d53d4c2236 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -301,6 +301,7 @@ do {								\
 	__typeof__(sizeof(*(ptr))) __gu_size = sizeof(*(ptr));	\
 								\
 	might_fault();						\
+	barrier_nospec();					\
 	allow_read_from_user(__gu_addr);			\
 	__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
 	prevent_read_from_user(__gu_addr);			\
@@ -329,6 +330,7 @@ raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
 {
 	unsigned long ret;
 
+	barrier_nospec();
 	allow_read_write_user(to, from);
 	ret = __copy_tofrom_user(to, from, n);
 	prevent_read_write_user(to, from);
@@ -415,6 +417,7 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt
 
 	might_fault();
 
+	barrier_nospec();
 	allow_read_write_user((void __user *)ptr, ptr);
 	return true;
 }
@@ -431,6 +434,7 @@ user_read_access_begin(const void __user *ptr, size_t len)
 
 	might_fault();
 
+	barrier_nospec();
 	allow_read_from_user(ptr);
 	return true;
 }
-- 
2.49.0

From nobody Thu Oct 9 01:16:06 2025
From: Christophe Leroy
Subject: [PATCH 5/5] powerpc: Implement masked user access
Date: Sun, 22 Jun 2025 11:52:43 +0200
Message-ID: <9dfb66c94941e8f778c4cabbf046af2a301dd963.1750585239.git.christophe.leroy@csgroup.eu>

Masked user access avoids the address/size verification by access_ok().
Although its main purpose is to skip the speculation in the
verification of user address and size, and hence avoid the need for
speculation mitigation, it also has the advantage of reducing the
number of instructions needed, so it also benefits platforms that
don't need speculation mitigation, especially when the size of the
copy is not known at build time.

So implement masked user access on powerpc. The only requirement is
to have a memory gap that faults between the top of user space and
the real start of the kernel area.

On 64-bit platforms it is easy: bit 0 is always 0 for user addresses
and always 1 for kernel addresses, and user addresses stop long before
the end of the area.

On 32 bits it is more tricky. In theory user space can go up to
0xbfffffff while the kernel will usually start at 0xc0000000, so a gap
needs to be added in between. Although in theory a single 4k page
would suffice, it is easier and more efficient to enforce a 128k gap
below the kernel, as it simplifies the masking.

Unlike x86_64, which masks the address to 'all bits set' when the user
address is invalid, here the address is set to an address inside the
gap. This avoids relying on the zero page to catch offset accesses.

e500 has the isel instruction, which allows selecting one value or the
other without a branch, and that instruction is not speculative, so use
it. Although GCC usually generates code using that instruction, it is
safer to use inline assembly to be sure. The result is:

  14:	3d 20 bf fe 	lis     r9,-16386
  18:	7c 03 48 40 	cmplw   r3,r9
  1c:	7c 69 18 5e 	iselgt  r3,r9,r3

On other 32-bit platforms, when kernel space is above 0x80000000 and
user space is below, the logic in mask_user_address_simple() leads to
a 3-instruction sequence:

  14:	7c 69 fe 70 	srawi   r9,r3,31
  18:	7c 63 48 78 	andc    r3,r3,r9
  1c:	51 23 00 00 	rlwimi  r3,r9,0,0,0

This is the default on powerpc 8xx.

When the limit between user space and kernel space is not 0x80000000,
mask_user_address_32() is used and a 6-instruction sequence is
generated:

  24:	54 69 7c 7e 	srwi    r9,r3,17
  28:	21 29 57 ff 	subfic  r9,r9,22527
  2c:	7d 29 fe 70 	srawi   r9,r9,31
  30:	75 2a b0 00 	andis.  r10,r9,45056
  34:	7c 63 48 78 	andc    r3,r3,r9
  38:	7c 63 53 78 	or      r3,r3,r10

The constraint is that TASK_SIZE be aligned to 128K in order to get
the optimal number of instructions.

When CONFIG_PPC_BARRIER_NOSPEC is not defined, fall back on the
test-based masking, as it is quicker than the 6-instruction sequence
but not necessarily quicker than the 3-instruction sequences above.

On 64 bits, the kernel is always above 0x8000000000000000 and user
space is always below, which leads to a 4-instruction sequence:

  80:	7c 69 1b 78 	mr      r9,r3
  84:	7c 63 fe 76 	sradi   r3,r3,63
  88:	7d 29 18 78 	andc    r9,r9,r3
  8c:	79 23 00 4c 	rldimi  r3,r9,0,1

Signed-off-by: Christophe Leroy
---
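
As a sanity check of the 32-bit arithmetic described above, here is a
small stand-alone user-space sketch (not part of the patch) that replays
the two masking formulas, assuming TASK_SIZE is the new default of
0xbffe0000 from the Kconfig hunk below:

	#include <stdio.h>
	#include <stdint.h>

	#define TASK_SIZE	UINT32_C(0xbffe0000)	/* assumed, 128K aligned */

	/* 32-bit replay of mask_user_address_simple() (split at 0x80000000) */
	static uint32_t mask_simple(uint32_t addr)
	{
		uint32_t mask = (uint32_t)((int32_t)addr >> 31);	/* 0 or ~0 */

		return ((addr & ~mask) & 0x7fffffff) | (mask & 0x80000000);
	}

	/* 32-bit replay of mask_user_address_32() (TASK_SIZE aligned to 128K) */
	static uint32_t mask_32(uint32_t addr)
	{
		uint32_t mask = (uint32_t)((int32_t)((TASK_SIZE >> 17) - 1 - (addr >> 17)) >> 31);

		return (addr & ~mask) | (TASK_SIZE & mask);
	}

	int main(void)
	{
		uint32_t user = 0x10002000, kernel = 0xc0000400;

		printf("%#x -> %#x\n", user, mask_32(user));		/* kept as-is */
		printf("%#x -> %#x\n", kernel, mask_32(kernel));	/* clamped to TASK_SIZE, i.e. the faulting gap */
		printf("%#x -> %#x\n", kernel, mask_simple(kernel));	/* clamped to 0x80000000 */
		return 0;
	}

A valid user address comes out unchanged, while any address at or above
the limit is redirected to the start of the gap below the kernel, which
is guaranteed to fault.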
 arch/powerpc/Kconfig               |   2 +-
 arch/powerpc/include/asm/uaccess.h | 100 +++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c3e0cc83f120..c26a39b4504a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -1303,7 +1303,7 @@ config TASK_SIZE
 	hex "Size of user task space" if TASK_SIZE_BOOL
 	default "0x80000000" if PPC_8xx
 	default "0xb0000000" if PPC_BOOK3S_32 && EXECMEM
-	default "0xc0000000"
+	default "0xbffe0000"
 
 config MODULES_SIZE_BOOL
 	bool "Set custom size for modules/execmem area"
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 89d53d4c2236..19743ee80523 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -2,6 +2,8 @@
 #ifndef _ARCH_POWERPC_UACCESS_H
 #define _ARCH_POWERPC_UACCESS_H
 
+#include
+
 #include
 #include
 #include
@@ -455,6 +457,104 @@ user_write_access_begin(const void __user *ptr, size_t len)
 #define user_write_access_begin user_write_access_begin
 #define user_write_access_end prevent_current_write_to_user
 
+/*
+ * Masking the user address is an alternative to a conditional
+ * user_access_begin that can avoid the fencing. This only works
+ * for dense accesses starting at the address.
+ */
+static inline void __user *mask_user_address_simple(const void __user *ptr)
+{
+	unsigned long addr = (unsigned long)ptr;
+	unsigned long mask = (unsigned long)((long)addr >> (BITS_PER_LONG - 1));
+
+	addr = ((addr & ~mask) & (~0UL >> 1)) | (mask & (1UL << (BITS_PER_LONG - 1)));
+
+	return (void __user *)addr;
+}
+
+static inline void __user *mask_user_address_e500(const void __user *ptr)
+{
+	unsigned long addr;
+
+	asm("cmplw %1, %2; iselgt %0, %2, %1" : "=r"(addr) : "r"(ptr), "r"(TASK_SIZE): "cr0");
+
+	return (void __user *)addr;
+}
+
+/* Make sure TASK_SIZE is a multiple of 128K for shifting by 17 to the right */
+static inline void __user *mask_user_address_32(const void __user *ptr)
+{
+	unsigned long addr = (unsigned long)ptr;
+	unsigned long mask = (unsigned long)((long)((TASK_SIZE >> 17) - 1 - (addr >> 17)) >> 31);
+
+	addr = (addr & ~mask) | (TASK_SIZE & mask);
+
+	return (void __user *)addr;
+}
+
+static inline void __user *mask_user_address_fallback(const void __user *ptr)
+{
+	unsigned long addr = (unsigned long)ptr;
+
+	return (void __user *)(addr < TASK_SIZE ? addr : TASK_SIZE);
+}
+
+static inline void __user *mask_user_address(const void __user *ptr)
+{
+#ifdef MODULES_VADDR
+	const unsigned long border = MODULES_VADDR;
+#else
+	const unsigned long border = PAGE_OFFSET;
+#endif
+	BUILD_BUG_ON(TASK_SIZE_MAX & (SZ_128K - 1));
+	BUILD_BUG_ON(TASK_SIZE_MAX + SZ_128K > border);
+	BUILD_BUG_ON(TASK_SIZE_MAX & 0x8000000000000000ULL);
+	BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC64) && !(PAGE_OFFSET & 0x8000000000000000ULL));
+
+	if (IS_ENABLED(CONFIG_PPC64))
+		return mask_user_address_simple(ptr);
+	if (IS_ENABLED(CONFIG_E500))
+		return mask_user_address_e500(ptr);
+	if (TASK_SIZE <= SZ_2G && border >= SZ_2G)
+		return mask_user_address_simple(ptr);
+	if (IS_ENABLED(CONFIG_PPC_BARRIER_NOSPEC))
+		return mask_user_address_32(ptr);
+	return mask_user_address_fallback(ptr);
+}
+
+static inline void __user *masked_user_access_begin(const void __user *p)
+{
+	void __user *ptr = mask_user_address(p);
+
+	might_fault();
+	allow_read_write_user(ptr, ptr);
+
+	return ptr;
+}
+#define masked_user_access_begin masked_user_access_begin
+
+static inline void __user *masked_user_read_access_begin(const void __user *p)
+{
+	void __user *ptr = mask_user_address(p);
+
+	might_fault();
+	allow_read_from_user(ptr);
+
+	return ptr;
+}
+#define masked_user_read_access_begin masked_user_read_access_begin
+
+static inline void __user *masked_user_write_access_begin(const void __user *p)
+{
+	void __user *ptr = mask_user_address(p);
+
+	might_fault();
+	allow_write_to_user(ptr);
+
+	return ptr;
+}
+#define masked_user_write_access_begin masked_user_write_access_begin
+
 #define unsafe_get_user(x, p, e) do {					\
 	__long_type(*(p)) __gu_val;				\
 	__typeof__(*(p)) __user *__gu_addr = (p);		\
-- 
2.49.0