From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>
Hi,
The security improvements for prandom_u32 done in commits c51f8f88d705
from October 2020 and d4150779e60f from May 2022 didn't handle the cases
when prandom_bytes_state() and prandom_u32_state() are used.
Specifically, this weak randomization takes place in three cases:
1. mm/slab.c
2. mm/slab_common.c
3. arch/x86/mm/kaslr.c
The first two invocations (mm/slab.c, mm/slab_common.c) are used to
randomize the slab allocator freelists.
This is done to make sure attackers can’t obtain information on the heap state.
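For illustration, here is a minimal sketch of what this change looks like
for the freelist shuffle (modeled on freelist_randomize() in
mm/slab_common.c; a sketch of the approach, not the literal diff):

#include <linux/random.h>	/* get_random_u32_below() */

/*
 * Hedged sketch: the shuffle drops its private rnd_state and draws
 * uniform indices straight from the core RNG instead.
 */
static void freelist_randomize(unsigned int *list, unsigned int count)
{
	unsigned int rand;
	unsigned int i;

	for (i = 0; i < count; i++)
		list[i] = i;

	/* Fisher-Yates shuffle */
	for (i = count - 1; i > 0; i--) {
		/* uniform in [0, i]; also avoids the old modulo bias */
		rand = get_random_u32_below(i + 1);
		swap(list[i], list[rand]);
	}
}

The same pattern applies to the shuffle in mm/slab.c.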
The last invocation, inside arch/x86/mm/kaslr.c,
randomizes the virtual address space of kernel memory regions.
Hence, we have added the necessary changes to make those randomizations
stronger, replacing the prandom_u32-based PRNG with get_random_u32_below()
in the slab allocator and with siphash in arch/x86/mm/kaslr.c.
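To illustrate the arch/x86/mm/kaslr.c side, a hedged sketch of the
siphash-based replacement follows (the helper names and key derivation
below are assumptions for illustration, not necessarily the exact v6 diff):

#include <linux/siphash.h>

/* sketch: one boot-time key replaces the prandom_seed_state() call */
static siphash_key_t kaslr_key __initdata;

static void __init kaslr_seed_key(void)
{
	kaslr_key.key[0] = kaslr_get_random_long("Memory");
	kaslr_key.key[1] = kaslr_get_random_long("Memory");
}

/* sketch: per-region value replacing prandom_bytes_state() */
static u64 __init kaslr_region_rand(unsigned int i)
{
	return siphash(&i, sizeof(i), &kaslr_key);
}

In the per-region loop this would plug in as
entropy = (kaslr_region_rand(i) % (entropy + 1)) & PUD_MASK;
keeping the existing entropy masking unchanged.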
Changes since v5:
* Fixed coding style issues in mm/slab and mm/slab_common.
* Deleted irrelevant changes which were appended accidentally in
arch/x86/mm/kaslr.
Changes since v4:
* Changed only the arch/x86/mm/kaslr patch.
In particular, we replaced the use of prandom_bytes_state and
prandom_seed_state with siphash inside arch/x86/mm/kaslr.c.
Changes since v3:
* Edited commit messages.
Changes since v2:
* Edited commit message.
* Replaced instances of get_random_u32 with get_random_u32_below
in mm/slab.c, mm/slab_common.c.
Regards,
David Keisar Schmidt (3):
mm/slab: Replace invocation of weak PRNG
mm/slab_common: Replace invocation of weak PRNG
arch/x86/mm/kaslr: use siphash instead of prandom_bytes_state
arch/x86/mm/kaslr.c | 21 +++++++++++++++------
mm/slab.c | 29 +++++++++--------------------
mm/slab_common.c | 11 +++--------
3 files changed, 27 insertions(+), 34 deletions(-)
--
2.37.3
On 4/16/23 19:21, david.keisarschm@mail.huji.ac.il wrote:
> From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>

Hi,

btw, the threading of v5 and v6 seems broken, v4 was fine.

I've added the patches 1+2 to slab tree for 6.5 (too late for 6.4 now):
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/log/?h=slab/for-6.5/prandom

Thanks,
Vlastimil

> [...]