From nobody Sun Feb 8 03:57:30 2026
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Huacai Chen, Madhavan Srinivasan,
	Michael Ellerman, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, Kees Cook,
	"Gustavo A. R. Silva", Arnd Bergmann, Mark Rutland,
	"Jason A. Donenfeld", Ard Biesheuvel, Jeremy Linton, David Laight
Cc: Ryan Roberts, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, loongarch@lists.linux.dev,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-hardening@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH v4 1/3] randomize_kstack: Maintain kstack_offset per task
Date: Mon, 19 Jan 2026 13:01:08 +0000
Message-ID: <20260119130122.1283821-2-ryan.roberts@arm.com>
In-Reply-To: <20260119130122.1283821-1-ryan.roberts@arm.com>
References: <20260119130122.1283821-1-ryan.roberts@arm.com>

kstack_offset was previously maintained per-cpu, but this caused a couple
of issues. So let's instead make it per-task.

Issue 1: add_random_kstack_offset() and choose_random_kstack_offset() were
expected and required to be called with interrupts and preemption disabled
so that they could manipulate per-cpu state. But arm64, loongarch and
risc-v are calling them with interrupts and preemption enabled. I don't
_think_ this causes any functional issues, but it is certainly unexpected
and could lead to manipulating the wrong cpu's state, which could cause a
minor performance degradation due to bouncing the cache lines.
By maintaining the state per-task, those functions can safely be called in
preemptible context.

Issue 2: add_random_kstack_offset() is called before executing the syscall
and expands the stack using a previously chosen random offset.
choose_random_kstack_offset() is called after executing the syscall and
chooses and stores a new random offset for the next syscall. With per-cpu
storage for this offset, an attacker could force cpu migration during the
execution of the syscall and prevent the offset from being updated for the
original cpu, such that it is predictable for the next syscall on that cpu.
By maintaining the state per-task, this problem goes away because the
per-task random offset is updated after the syscall regardless of which cpu
it is executing on.

Fixes: 39218ff4c625 ("stack: Optionally randomize kernel stack offset each syscall")
Closes: https://lore.kernel.org/all/dd8c37bc-795f-4c7a-9086-69e584d8ab24@arm.com/
Cc: stable@vger.kernel.org
Acked-by: Mark Rutland
Signed-off-by: Ryan Roberts
Acked-by: Dave Hansen
Acked-by: Heiko Carstens # s390
---
 include/linux/randomize_kstack.h | 26 +++++++++++++++-----------
 include/linux/sched.h            |  4 ++++
 init/main.c                      |  1 -
 kernel/fork.c                    |  2 ++
 4 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/include/linux/randomize_kstack.h b/include/linux/randomize_kstack.h
index 1d982dbdd0d0..5d3916ca747c 100644
--- a/include/linux/randomize_kstack.h
+++ b/include/linux/randomize_kstack.h
@@ -9,7 +9,6 @@
 
 DECLARE_STATIC_KEY_MAYBE(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
			 randomize_kstack_offset);
-DECLARE_PER_CPU(u32, kstack_offset);
 
 /*
  * Do not use this anywhere else in the kernel. This is used here because
@@ -50,15 +49,14 @@ DECLARE_PER_CPU(u32, kstack_offset);
  * add_random_kstack_offset - Increase stack utilization by previously
  *			      chosen random offset
  *
- * This should be used in the syscall entry path when interrupts and
- * preempt are disabled, and after user registers have been stored to
- * the stack. For testing the resulting entropy, please see:
- * tools/testing/selftests/lkdtm/stack-entropy.sh
+ * This should be used in the syscall entry path after user registers have been
+ * stored to the stack. Preemption may be enabled. For testing the resulting
+ * entropy, please see: tools/testing/selftests/lkdtm/stack-entropy.sh
  */
 #define add_random_kstack_offset() do {					\
	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
				&randomize_kstack_offset)) {		\
-		u32 offset = raw_cpu_read(kstack_offset);		\
+		u32 offset = current->kstack_offset;			\
		u8 *ptr = __kstack_alloca(KSTACK_OFFSET_MAX(offset));	\
		/* Keep allocation even after "ptr" loses scope. */	\
		asm volatile("" :: "r"(ptr) : "memory");		\
@@ -69,9 +67,9 @@ DECLARE_PER_CPU(u32, kstack_offset);
  * choose_random_kstack_offset - Choose the random offset for the next
  *				 add_random_kstack_offset()
  *
- * This should only be used during syscall exit when interrupts and
- * preempt are disabled. This position in the syscall flow is done to
- * frustrate attacks from userspace attempting to learn the next offset:
+ * This should only be used during syscall exit. Preemption may be enabled. This
+ * position in the syscall flow is done to frustrate attacks from userspace
+ * attempting to learn the next offset:
  * - Maximize the timing uncertainty visible from userspace: if the
  *   offset is chosen at syscall entry, userspace has much more control
  *   over the timing between choosing offsets. "How long will we be in
@@ -85,14 +83,20 @@ DECLARE_PER_CPU(u32, kstack_offset);
 #define choose_random_kstack_offset(rand) do {				\
	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
				&randomize_kstack_offset)) {		\
-		u32 offset = raw_cpu_read(kstack_offset);		\
+		u32 offset = current->kstack_offset;			\
		offset = ror32(offset, 5) ^ (rand);			\
-		raw_cpu_write(kstack_offset, offset);			\
+		current->kstack_offset = offset;			\
	}								\
 } while (0)
+
+static inline void random_kstack_task_init(struct task_struct *tsk)
+{
+	tsk->kstack_offset = 0;
+}
 #else	/* CONFIG_RANDOMIZE_KSTACK_OFFSET */
 #define add_random_kstack_offset()		do { } while (0)
 #define choose_random_kstack_offset(rand)	do { } while (0)
+#define random_kstack_task_init(tsk)		do { } while (0)
 #endif	/* CONFIG_RANDOMIZE_KSTACK_OFFSET */
 
 #endif
diff --git a/include/linux/sched.h b/include/linux/sched.h
index da0133524d08..23081a702ecf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1591,6 +1591,10 @@ struct task_struct {
	unsigned long			prev_lowest_stack;
 #endif
 
+#ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
+	u32				kstack_offset;
+#endif
+
 #ifdef CONFIG_X86_MCE
	void __user			*mce_vaddr;
	__u64				mce_kflags;
diff --git a/init/main.c b/init/main.c
index b84818ad9685..27fcbbde933e 100644
--- a/init/main.c
+++ b/init/main.c
@@ -830,7 +830,6 @@ static inline void initcall_debug_enable(void)
 #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
 DEFINE_STATIC_KEY_MAYBE_RO(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
			   randomize_kstack_offset);
-DEFINE_PER_CPU(u32, kstack_offset);
 
 static int __init early_randomize_kstack_offset(char *buf)
 {
diff --git a/kernel/fork.c b/kernel/fork.c
index b1f3915d5f8e..b061e1edbc43 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -95,6 +95,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2231,6 +2232,7 @@ __latent_entropy struct task_struct *copy_process(
	if (retval)
		goto bad_fork_cleanup_io;
 
+	random_kstack_task_init(p);
	stackleak_task_init(p);
 
	if (pid != &init_struct_pid) {
-- 
2.43.0

From nobody Sun Feb 8 03:57:30 2026
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Huacai Chen, Madhavan Srinivasan,
	Michael Ellerman, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, Kees Cook,
	"Gustavo A. R. Silva", Arnd Bergmann, Mark Rutland,
	"Jason A. Donenfeld", Ard Biesheuvel, Jeremy Linton, David Laight
Cc: Ryan Roberts, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, loongarch@lists.linux.dev,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v4 2/3] prandom: Add __always_inline version of prandom_u32_state()
Date: Mon, 19 Jan 2026 13:01:09 +0000
Message-ID: <20260119130122.1283821-3-ryan.roberts@arm.com>
In-Reply-To: <20260119130122.1283821-1-ryan.roberts@arm.com>
References: <20260119130122.1283821-1-ryan.roberts@arm.com>

We will shortly use prandom_u32_state() to implement kstack offset
randomization, and some arches need to call it from non-instrumentable
context. So let's implement prandom_u32_state() as an out-of-line wrapper
around a new __always_inline prandom_u32_state_inline(). kstack offset
randomization will use this new version.

Acked-by: Mark Rutland
Signed-off-by: Ryan Roberts
Acked-by: Dave Hansen
Acked-by: Heiko Carstens # s390
---
 include/linux/prandom.h | 20 ++++++++++++++++++++
 lib/random32.c          |  8 +-------
 2 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/include/linux/prandom.h b/include/linux/prandom.h
index ff7dcc3fa105..801188680a29 100644
--- a/include/linux/prandom.h
+++ b/include/linux/prandom.h
@@ -17,6 +17,26 @@ struct rnd_state {
	__u32 s1, s2, s3, s4;
 };
 
+/**
+ * prandom_u32_state_inline - seeded pseudo-random number generator.
+ * @state: pointer to state structure holding seeded state.
+ *
+ * This is used for pseudo-randomness with no outside seeding.
+ * For more random results, use get_random_u32().
+ * For use only where the out-of-line version, prandom_u32_state(), cannot be
+ * used (e.g. noinstr code).
+ */
+static __always_inline u32 prandom_u32_state_inline(struct rnd_state *state)
+{
+#define TAUSWORTHE(s, a, b, c, d) ((s & c) << d) ^ (((s << a) ^ s) >> b)
+	state->s1 = TAUSWORTHE(state->s1,  6U, 13U, 4294967294U, 18U);
+	state->s2 = TAUSWORTHE(state->s2,  2U, 27U, 4294967288U,  2U);
+	state->s3 = TAUSWORTHE(state->s3, 13U, 21U, 4294967280U,  7U);
+	state->s4 = TAUSWORTHE(state->s4,  3U, 12U, 4294967168U, 13U);
+
+	return (state->s1 ^ state->s2 ^ state->s3 ^ state->s4);
+}
+
 u32 prandom_u32_state(struct rnd_state *state);
 void prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes);
 void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state);
diff --git a/lib/random32.c b/lib/random32.c
index 24e7acd9343f..2a02d82e91bc 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -51,13 +51,7 @@
  */
 u32 prandom_u32_state(struct rnd_state *state)
 {
-#define TAUSWORTHE(s, a, b, c, d) ((s & c) << d) ^ (((s << a) ^ s) >> b)
-	state->s1 = TAUSWORTHE(state->s1,  6U, 13U, 4294967294U, 18U);
-	state->s2 = TAUSWORTHE(state->s2,  2U, 27U, 4294967288U,  2U);
-	state->s3 = TAUSWORTHE(state->s3, 13U, 21U, 4294967280U,  7U);
-	state->s4 = TAUSWORTHE(state->s4,  3U, 12U, 4294967168U, 13U);
-
-	return (state->s1 ^ state->s2 ^ state->s3 ^ state->s4);
+	return prandom_u32_state_inline(state);
 }
 EXPORT_SYMBOL(prandom_u32_state);
 
-- 
2.43.0

From nobody Sun Feb 8 03:57:30 2026
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Huacai Chen, Madhavan Srinivasan,
	Michael Ellerman, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, Kees Cook,
	"Gustavo A. R. Silva", Arnd Bergmann, Mark Rutland,
	"Jason A. Donenfeld", Ard Biesheuvel, Jeremy Linton, David Laight
Cc: Ryan Roberts, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, loongarch@lists.linux.dev,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v4 3/3] randomize_kstack: Unify random source across arches
Date: Mon, 19 Jan 2026 13:01:10 +0000
Message-ID: <20260119130122.1283821-4-ryan.roberts@arm.com>
In-Reply-To: <20260119130122.1283821-1-ryan.roberts@arm.com>
References: <20260119130122.1283821-1-ryan.roberts@arm.com>

Previously different architectures were using random sources of differing
strength and cost to decide the random kstack offset. A number of
architectures (loongarch, powerpc, s390, x86) were using their timestamp
counter, at whatever the frequency happened to be. Other arches (arm64,
riscv) were using entropy from the crng via get_random_u16().

There have been concerns that in some cases the timestamp counters may be
too weak, because they can be easily guessed or influenced by user space.
And get_random_u16() has been shown to be too costly for the level of
protection kstack offset randomization provides.

So let's use a common, architecture-agnostic source of entropy: a per-cpu
prng, seeded at boot time from the crng. This has a few benefits:

- We can remove choose_random_kstack_offset(); that was only there to try
  to make the timestamp counter value a bit harder to influence from user
  space [*].

- The architecture code is simplified. All it has to do now is call
  add_random_kstack_offset() in the syscall path.

- The strength of the randomness can be reasoned about independently of
  the architecture.

- Arches previously using get_random_u16() now have much faster syscall
  paths; see the results below.

[*] Additionally, this gets rid of some redundant work on s390 and x86.
Before this patch, those architectures called choose_random_kstack_offset()
under arch_exit_to_user_mode_prepare(), which is also called for exception
returns to userspace which were *not* syscalls (e.g. regular interrupts).
Getting rid of choose_random_kstack_offset() avoids a small amount of
redundant work for the non-syscall cases.

There have been some claims that a prng may be less strong than the
timestamp counter if not regularly reseeded. But the prng has a period of
about 2^113, so as long as the prng state remains secret, it should not be
possible to guess it. If the prng state can be accessed, we have bigger
problems. Additionally, we are only consuming 6 bits to randomize the
stack, so there are only 64 possible random offsets. I assert that it
would be trivial for an attacker to brute-force this by repeating their
attack and waiting for the random stack offset to be the desired one. The
prng approach seems entirely proportional to this level of protection.

Performance data are provided below. The baseline is v6.18 with rndstack
on for each respective arch. (I)/(R) indicate statistically significant
improvement/regression. The arm64 platform is AWS Graviton3 (m7g.metal); the
x86_64 platform is AWS Sapphire Rapids (m7i.24xlarge):

+-----------------+--------------+---------------+---------------+
| Benchmark       | Result Class | per-task-prng | per-task-prng |
|                 |              | arm64 (metal) | x86_64 (VM)   |
+=================+==============+===============+===============+
| syscall/getpid  | mean (ns)    | (I)  -9.50%   | (I) -17.65%   |
|                 | p99 (ns)     | (I) -59.24%   | (I) -24.41%   |
|                 | p99.9 (ns)   | (I) -59.52%   | (I) -28.52%   |
+-----------------+--------------+---------------+---------------+
| syscall/getppid | mean (ns)    | (I)  -9.52%   | (I) -19.24%   |
|                 | p99 (ns)     | (I) -59.25%   | (I) -25.03%   |
|                 | p99.9 (ns)   | (I) -59.50%   | (I) -28.17%   |
+-----------------+--------------+---------------+---------------+
| syscall/invalid | mean (ns)    | (I) -10.31%   | (I) -18.56%   |
|                 | p99 (ns)     | (I) -60.79%   | (I) -20.06%   |
|                 | p99.9 (ns)   | (I) -61.04%   | (I) -25.04%   |
+-----------------+--------------+---------------+---------------+

I tested an earlier version of this change on x86 bare metal and it showed
a smaller but still significant improvement. The bare metal system wasn't
available this time around so testing was done in a VM instance. I'm
guessing the cost of rdtsc is higher for VMs.

Acked-by: Mark Rutland
Signed-off-by: Ryan Roberts
Acked-by: Dave Hansen
Acked-by: Heiko Carstens # s390
---
 arch/Kconfig                         |  5 ++-
 arch/arm64/kernel/syscall.c          | 11 ------
 arch/loongarch/kernel/syscall.c      | 11 ------
 arch/powerpc/kernel/syscall.c        | 12 -------
 arch/riscv/kernel/traps.c            | 12 -------
 arch/s390/include/asm/entry-common.h |  8 -----
 arch/x86/include/asm/entry-common.h  | 12 -------
 include/linux/randomize_kstack.h     | 52 +++++++++-------------------
 include/linux/sched.h                |  4 ---
 init/main.c                          |  8 +++++
 kernel/fork.c                        |  1 -
 11 files changed, 27 insertions(+), 109 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 31220f512b16..8591fe7b4ac1 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1516,9 +1516,8 @@ config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stack
-	  offset randomization with calls to add_random_kstack_offset()
-	  during syscall entry and choose_random_kstack_offset() during
-	  syscall exit. Careful removal of -fstack-protector-strong and
+	  offset randomization with a call to add_random_kstack_offset()
+	  during syscall entry. Careful removal of -fstack-protector-strong and
	  -fstack-protector should also be applied to the entry code and
	  closely examined, as the artificial stack bump looks like an array
	  to the compiler, so it will attempt to add canary checks regardless
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index c062badd1a56..358ddfbf1401 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -52,17 +52,6 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
	}
 
	syscall_set_return_value(current, regs, 0, ret);
-
-	/*
-	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
-	 * bits. The actual entropy will be further reduced by the compiler
-	 * when applying stack alignment constraints: the AAPCS mandates a
-	 * 16-byte aligned SP at function boundaries, which will remove the
-	 * 4 low bits from any entropy chosen here.
-	 *
-	 * The resulting 6 bits of entropy is seen in SP[9:4].
-	 */
-	choose_random_kstack_offset(get_random_u16());
 }
 
 static inline bool has_syscall_work(unsigned long flags)
diff --git a/arch/loongarch/kernel/syscall.c b/arch/loongarch/kernel/syscall.c
index 1249d82c1cd0..85da7e050d97 100644
--- a/arch/loongarch/kernel/syscall.c
+++ b/arch/loongarch/kernel/syscall.c
@@ -79,16 +79,5 @@ void noinstr __no_stack_protector do_syscall(struct pt_regs *regs)
			   regs->regs[7], regs->regs[8], regs->regs[9]);
	}
 
-	/*
-	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
-	 * bits. The actual entropy will be further reduced by the compiler
-	 * when applying stack alignment constraints: 16-bytes (i.e. 4-bits)
-	 * aligned, which will remove the 4 low bits from any entropy chosen
-	 * here.
-	 *
-	 * The resulting 6 bits of entropy is seen in SP[9:4].
-	 */
-	choose_random_kstack_offset(get_cycles());
-
	syscall_exit_to_user_mode(regs);
 }
diff --git a/arch/powerpc/kernel/syscall.c b/arch/powerpc/kernel/syscall.c
index be159ad4b77b..b3d8b0f9823b 100644
--- a/arch/powerpc/kernel/syscall.c
+++ b/arch/powerpc/kernel/syscall.c
@@ -173,17 +173,5 @@ notrace long system_call_exception(struct pt_regs *regs, unsigned long r0)
	}
 #endif
 
-	/*
-	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
-	 * so the maximum stack offset is 1k bytes (10 bits).
-	 *
-	 * The actual entropy will be further reduced by the compiler when
-	 * applying stack alignment constraints: the powerpc architecture
-	 * may have two kinds of stack alignment (16-bytes and 8-bytes).
-	 *
-	 * So the resulting 6 or 7 bits of entropy is seen in SP[9:4] or SP[9:3].
-	 */
-	choose_random_kstack_offset(mftb());
-
	return ret;
 }
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 47afea4ff1a8..dfb91ed3a243 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -344,18 +344,6 @@ void do_trap_ecall_u(struct pt_regs *regs)
			syscall_handler(regs, syscall);
	}
 
-	/*
-	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
-	 * so the maximum stack offset is 1k bytes (10 bits).
-	 *
-	 * The actual entropy will be further reduced by the compiler when
-	 * applying stack alignment constraints: 16-byte (i.e. 4-bit) aligned
-	 * for RV32I or RV64I.
-	 *
-	 * The resulting 6 bits of entropy is seen in SP[9:4].
-	 */
-	choose_random_kstack_offset(get_random_u16());
-
		syscall_exit_to_user_mode(regs);
	} else {
		irqentry_state_t state = irqentry_nmi_enter(regs);
diff --git a/arch/s390/include/asm/entry-common.h b/arch/s390/include/asm/entry-common.h
index 979af986a8fe..35450a485323 100644
--- a/arch/s390/include/asm/entry-common.h
+++ b/arch/s390/include/asm/entry-common.h
@@ -51,14 +51,6 @@ static __always_inline void arch_exit_to_user_mode(void)
 
 #define arch_exit_to_user_mode arch_exit_to_user_mode
 
-static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
-						  unsigned long ti_work)
-{
-	choose_random_kstack_offset(get_tod_clock_fast());
-}
-
-#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
-
 static __always_inline bool arch_in_rcu_eqs(void)
 {
	if (IS_ENABLED(CONFIG_KVM))
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce3eb6d5fdf9..7535131c711b 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -82,18 +82,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
	current_thread_info()->status &= ~(TS_COMPAT | TS_I386_REGS_POKED);
 #endif
 
-	/*
-	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
-	 * bits. The actual entropy will be further reduced by the compiler
-	 * when applying stack alignment constraints (see cc_stack_align4/8 in
-	 * arch/x86/Makefile), which will remove the 3 (x86_64) or 2 (ia32)
-	 * low bits from any entropy chosen here.
-	 *
-	 * Therefore, final stack offset entropy will be 7 (x86_64) or
-	 * 8 (ia32) bits.
-	 */
-	choose_random_kstack_offset(rdtsc());
-
	/* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */
	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
	    this_cpu_read(x86_ibpb_exit_to_user)) {
diff --git a/include/linux/randomize_kstack.h b/include/linux/randomize_kstack.h
index 5d3916ca747c..eef39701e914 100644
--- a/include/linux/randomize_kstack.h
+++ b/include/linux/randomize_kstack.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 
 DECLARE_STATIC_KEY_MAYBE(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
			 randomize_kstack_offset);
@@ -45,9 +46,22 @@ DECLARE_STATIC_KEY_MAYBE(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
 #define KSTACK_OFFSET_MAX(x)	((x) & 0b1111111100)
 #endif
 
+DECLARE_PER_CPU(struct rnd_state, kstack_rnd_state);
+
+static __always_inline u32 get_kstack_offset(void)
+{
+	struct rnd_state *state;
+	u32 rnd;
+
+	state = &get_cpu_var(kstack_rnd_state);
+	rnd = prandom_u32_state_inline(state);
+	put_cpu_var(kstack_rnd_state);
+
+	return rnd;
+}
+
 /**
- * add_random_kstack_offset - Increase stack utilization by previously
- *			      chosen random offset
+ * add_random_kstack_offset - Increase stack utilization by a random offset.
  *
  * This should be used in the syscall entry path after user registers have been
  * stored to the stack. Preemption may be enabled. For testing the resulting
@@ -56,47 +70,15 @@ DECLARE_STATIC_KEY_MAYBE(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
 #define add_random_kstack_offset() do {					\
	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
				&randomize_kstack_offset)) {		\
-		u32 offset = current->kstack_offset;			\
+		u32 offset = get_kstack_offset();			\
		u8 *ptr = __kstack_alloca(KSTACK_OFFSET_MAX(offset));	\
		/* Keep allocation even after "ptr" loses scope. */	\
		asm volatile("" :: "r"(ptr) : "memory");		\
	}								\
 } while (0)
 
-/**
- * choose_random_kstack_offset - Choose the random offset for the next
- *				 add_random_kstack_offset()
- *
- * This should only be used during syscall exit. Preemption may be enabled. This
- * position in the syscall flow is done to frustrate attacks from userspace
- * attempting to learn the next offset:
- * - Maximize the timing uncertainty visible from userspace: if the
- *   offset is chosen at syscall entry, userspace has much more control
- *   over the timing between choosing offsets. "How long will we be in
- *   kernel mode?" tends to be more difficult to predict than "how long
- *   will we be in user mode?"
- * - Reduce the lifetime of the new offset sitting in memory during
- *   kernel mode execution. Exposure of "thread-local" memory content
- *   (e.g. current, percpu, etc) tends to be easier than arbitrary
- *   location memory exposure.
- */
-#define choose_random_kstack_offset(rand) do {				\
-	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
-				&randomize_kstack_offset)) {		\
-		u32 offset = current->kstack_offset;			\
-		offset = ror32(offset, 5) ^ (rand);			\
-		current->kstack_offset = offset;			\
-	}								\
-} while (0)
-
-static inline void random_kstack_task_init(struct task_struct *tsk)
-{
-	tsk->kstack_offset = 0;
-}
 #else	/* CONFIG_RANDOMIZE_KSTACK_OFFSET */
 #define add_random_kstack_offset()		do { } while (0)
-#define choose_random_kstack_offset(rand)	do { } while (0)
-#define random_kstack_task_init(tsk)		do { } while (0)
 #endif	/* CONFIG_RANDOMIZE_KSTACK_OFFSET */
 
 #endif
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 23081a702ecf..da0133524d08 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1591,10 +1591,6 @@ struct task_struct {
	unsigned long			prev_lowest_stack;
 #endif
 
-#ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
-	u32				kstack_offset;
-#endif
-
 #ifdef CONFIG_X86_MCE
	void __user			*mce_vaddr;
	__u64				mce_kflags;
diff --git a/init/main.c b/init/main.c
index 27fcbbde933e..8626e048095a 100644
--- a/init/main.c
+++ b/init/main.c
@@ -830,6 +830,14 @@ static inline void initcall_debug_enable(void)
 #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
 DEFINE_STATIC_KEY_MAYBE_RO(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
			   randomize_kstack_offset);
+DEFINE_PER_CPU(struct rnd_state, kstack_rnd_state);
+
+static int __init random_kstack_init(void)
+{
+	prandom_seed_full_state(&kstack_rnd_state);
+	return 0;
+}
+late_initcall(random_kstack_init);
 
 static int __init early_randomize_kstack_offset(char *buf)
 {
diff --git a/kernel/fork.c b/kernel/fork.c
index b061e1edbc43..68d9766288fd 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2232,7 +2232,6 @@ __latent_entropy struct task_struct *copy_process(
	if (retval)
		goto bad_fork_cleanup_io;
 
-	random_kstack_task_init(p);
	stackleak_task_init(p);
 
	if (pid != &init_struct_pid) {
-- 
2.43.0
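
For reference, the entropy arithmetic discussed in the 3/3 commit message can
be sanity-checked in isolation. Below is a minimal user-space sketch (not part
of the series; the helper name prandom_u32_step(), the seed values and the
plain 0x3ff mask are illustrative assumptions): it reuses the Tausworthe step
from prandom_u32_state_inline(), caps the result to 10 bits in the spirit of
KSTACK_OFFSET_MAX(), and assumes the 16-byte stack alignment described in the
removed arm64 comment, so only SP[9:4] varies, i.e. 64 distinct offsets.

#include <stdint.h>
#include <stdio.h>

/* Same generator step as prandom_u32_state_inline() in patch 2/3. */
#define TAUSWORTHE(s, a, b, c, d) (((s & c) << d) ^ (((s << a) ^ s) >> b))

struct rnd_state { uint32_t s1, s2, s3, s4; };

static uint32_t prandom_u32_step(struct rnd_state *state)
{
	state->s1 = TAUSWORTHE(state->s1,  6U, 13U, 4294967294U, 18U);
	state->s2 = TAUSWORTHE(state->s2,  2U, 27U, 4294967288U,  2U);
	state->s3 = TAUSWORTHE(state->s3, 13U, 21U, 4294967280U,  7U);
	state->s4 = TAUSWORTHE(state->s4,  3U, 12U, 4294967168U, 13U);
	return state->s1 ^ state->s2 ^ state->s3 ^ state->s4;
}

int main(void)
{
	/* Arbitrary valid seed; the kernel seeds this from the crng at boot. */
	struct rnd_state state = { 0x12345678, 0x9abcdef0, 0xfeedface, 0xcafebabe };
	int seen[64] = { 0 }, distinct = 0;

	for (int i = 0; i < 1000000; i++) {
		uint32_t offset = prandom_u32_step(&state) & 0x3ff; /* 10-bit cap */
		offset &= ~0xfU; /* 16-byte SP alignment drops the low 4 bits */
		if (!seen[offset >> 4]++)
			distinct++;
	}
	printf("distinct offsets: %d\n", distinct); /* expect 64 */
	return 0;
}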