From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, patches@lists.linux.dev, tglx@linutronix.de
Cc: "Jason A. Donenfeld", linux-crypto@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer, Arnd Bergmann, Christian Brauner
Subject: [PATCH v7 1/3] random: add vgetrandom_alloc() syscall
Date: Thu, 24 Nov 2022 17:55:34 +0100
Message-Id: <20221124165536.1631325-2-Jason@zx2c4.com>
In-Reply-To: <20221124165536.1631325-1-Jason@zx2c4.com>
References: <20221124165536.1631325-1-Jason@zx2c4.com>

The vDSO getrandom() works over an opaque per-thread state of an
unexported size, which must be marked as MADV_WIPEONFORK and be
mlock()'d for proper operation. Over time, the nuances of these
allocations may change or grow or even differ based on architectural
features.

The syscall has the signature:

    void *vgetrandom_alloc([inout] unsigned int *num,
                           [out] unsigned int *size_per_each,
                           unsigned int flags);

This takes the desired number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into a state per each thread. (The
`flags` argument is always zero for now.) Libc is expected to allocate a
chunk of these on first use, and then dole them out to threads as
they're created, allocating more when needed. The following commit shows
an example of this, being used in conjunction with the getrandom() vDSO
function.

We very intentionally do *not* leave state allocation for vDSO
getrandom() up to userspace itself, but rather provide this new syscall
for such allocations.
vDSO getrandom() must not store its state in just any old memory
address, but rather just ones that the kernel specially allocates for
it, leaving the particularities of those allocations up to the kernel.

Signed-off-by: Jason A. Donenfeld
---
 MAINTAINERS                             |  1 +
 arch/x86/Kconfig                        |  1 +
 arch/x86/entry/syscalls/syscall_64.tbl  |  1 +
 arch/x86/include/asm/unistd.h           |  1 +
 drivers/char/random.c                   | 59 +++++++++++++++++++++++
 include/uapi/asm-generic/unistd.h       |  7 ++-
 kernel/sys_ni.c                         |  3 ++
 lib/vdso/getrandom.h                    | 23 ++++++++++
 scripts/checksyscalls.sh                |  4 ++
 tools/include/uapi/asm-generic/unistd.h |  7 ++-
 10 files changed, 105 insertions(+), 2 deletions(-)
 create mode 100644 lib/vdso/getrandom.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 256f03904987..843dd6a49538 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17287,6 +17287,7 @@ T:	git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
 S:	Maintained
 F:	drivers/char/random.c
 F:	drivers/virt/vmgenid.c
+F:	lib/vdso/getrandom.h

 RAPIDIO SUBSYSTEM
 M:	Matt Porter

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 67745ceab0db..331e21ba961a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -59,6 +59,7 @@ config X86
 	#
 	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
 	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
+	select ADVISE_SYSCALLS			if X86_64
 	select ARCH_32BIT_OFF_T			if X86_32
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE

diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..0186f173f0e8 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc	sys_vgetrandom_alloc

 #
 # Due to a historical design error, certain syscalls are numbered differently

diff --git a/arch/x86/include/asm/unistd.h b/arch/x86/include/asm/unistd.h
index 761173ccc33c..1bf509eaeff1 100644
--- a/arch/x86/include/asm/unistd.h
+++ b/arch/x86/include/asm/unistd.h
@@ -27,6 +27,7 @@
 # define __ARCH_WANT_COMPAT_SYS_PWRITEV64
 # define __ARCH_WANT_COMPAT_SYS_PREADV64V2
 # define __ARCH_WANT_COMPAT_SYS_PWRITEV64V2
+# define __ARCH_WANT_VGETRANDOM_ALLOC
 # define X32_NR_syscalls	(__NR_x32_syscalls)
 # define IA32_NR_syscalls	(__NR_ia32_syscalls)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a2a18bd3d7d7..71db7b787a60 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -8,6 +8,7 @@
  * into roughly six sections, each with a section header:
  *
  * - Initialization and readiness waiting.
+ * - vDSO support helpers.
  * - Fast key erasure RNG, the "crng".
  * - Entropy accumulation and extraction routines.
  * - Entropy collection routines.
@@ -39,6 +40,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -59,6 +61,7 @@
 #include
 #include
 #include
+#include "../../lib/vdso/getrandom.h"

 /*********************************************************************
  *
@@ -167,6 +170,62 @@ int __cold execute_with_initialized_rng(struct notifier_block *nb)
 		__func__, (void *)_RET_IP_, crng_init)


+
+/********************************************************************
+ *
+ * vDSO support helpers.
+ *
+ * The actual vDSO function is defined over in lib/vdso/getrandom.c,
+ * but this section contains the kernel-mode helpers to support that.
+ *
+ ********************************************************************/
+
+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+/*
+ * The vgetrandom() function in userspace requires an opaque state, which this
+ * function provides to userspace, by mapping a certain number of special pages
+ * into the calling process. It takes a hint as to the number of opaque states
+ * desired, and returns the number of opaque states actually allocated, the
+ * size of each one in bytes, and the address of the first state.
+ */
+SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
+		unsigned int __user *, size_per_each, unsigned int, flags)
+{
+	size_t alloc_size, num_states;
+	unsigned long pages_addr;
+	unsigned int num_hint;
+	int ret;
+
+	if (flags)
+		return -EINVAL;
+
+	if (get_user(num_hint, num))
+		return -EFAULT;
+
+	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / sizeof(struct vgetrandom_state));
+	alloc_size = PAGE_ALIGN(num_states * sizeof(struct vgetrandom_state));
+
+	if (put_user(alloc_size / sizeof(struct vgetrandom_state), num) ||
+	    put_user(sizeof(struct vgetrandom_state), size_per_each))
+		return -EFAULT;
+
+	pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
+			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
+	if (IS_ERR_VALUE(pages_addr))
+		return pages_addr;
+
+	ret = do_madvise(current->mm, pages_addr, alloc_size, MADV_WIPEONFORK);
+	if (ret < 0)
+		goto err_unmap;
+
+	return pages_addr;
+
+err_unmap:
+	vm_munmap(pages_addr, alloc_size);
+	return ret;
+}
+#endif
+
 /*********************************************************************
  *
  * Fast key erasure RNG, the "crng".
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..77b6debe7e18 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,13 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)

+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
+#endif
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452

 /*
  * 32 bit systems traditionally used different

diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 860b2dcf3ac4..f28196cb919b 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -360,6 +360,9 @@ COND_SYSCALL(pkey_free);
 /* memfd_secret */
 COND_SYSCALL(memfd_secret);

+/* random */
+COND_SYSCALL(vgetrandom_alloc);
+
 /*
  * Architecture specific weak syscall entries.
  */

diff --git a/lib/vdso/getrandom.h b/lib/vdso/getrandom.h
new file mode 100644
index 000000000000..c7f727db2aaa
--- /dev/null
+++ b/lib/vdso/getrandom.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld. All Rights Reserved.
+ */
+
+#ifndef _VDSO_LIB_GETRANDOM_H
+#define _VDSO_LIB_GETRANDOM_H
+
+#include
+
+struct vgetrandom_state {
+	union {
+		struct {
+			u8	batch[CHACHA_BLOCK_SIZE * 3 / 2];
+			u32	key[CHACHA_KEY_SIZE / sizeof(u32)];
+		};
+		u8	batch_key[CHACHA_BLOCK_SIZE * 2];
+	};
+	unsigned long	generation;
+	u8		pos;
+};
+
+#endif /* _VDSO_LIB_GETRANDOM_H */

diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index f33e61aca93d..7f7928c6487f 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -44,6 +44,10 @@ cat << EOF
 #define __IGNORE_memfd_secret
 #endif

+#ifndef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __IGNORE_vgetrandom_alloc
+#endif
+
 /* Missing flags argument */
 #define __IGNORE_renameat	/* renameat2 */

diff --git a/tools/include/uapi/asm-generic/unistd.h b/tools/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..77b6debe7e18 100644
--- a/tools/include/uapi/asm-generic/unistd.h
+++ b/tools/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,13 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)

+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
+#endif
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452

 /*
  * 32 bit systems traditionally used different
--
2.38.1

From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, patches@lists.linux.dev, tglx@linutronix.de
Cc: "Jason A. Donenfeld", linux-crypto@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer, Arnd Bergmann, Christian Brauner
Subject: [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation
Date: Thu, 24 Nov 2022 17:55:35 +0100
Message-Id: <20221124165536.1631325-3-Jason@zx2c4.com>
In-Reply-To: <20221124165536.1631325-1-Jason@zx2c4.com>

Provide a generic C vDSO getrandom() implementation, which operates on
an opaque state returned by vgetrandom_alloc() and produces random bytes
the same way as getrandom(). This has the API signature:

    ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);

The return value and the first three arguments are the same as for the
ordinary getrandom() syscall, while the last argument is a pointer to
the opaque allocated state. Were all four arguments passed to the
getrandom() syscall, nothing different would happen, and the functions
would have the exact same behavior.

The actual vDSO RNG algorithm implemented is the same one implemented by
drivers/char/random.c, using the same fast-erasure techniques. Should
the in-kernel implementation change, so too will the vDSO one.

It requires an implementation of ChaCha20 that does not use any stack,
in order to maintain forward secrecy, so this is left as an
architecture-specific fill-in. Stack-less ChaCha20 is an easy algorithm
to implement on a variety of architectures, so this shouldn't be too
onerous.

Initially, the state is keyless, and so the first call makes a
getrandom() syscall to generate that key, and then uses it for
subsequent calls.
By keeping track of a generation counter, it knows when its key is
invalidated and it should fetch a new one using the syscall. Later, more
than just a generation counter might be used.

Since MADV_WIPEONFORK is set on the opaque state, the key and related
state are wiped during a fork(), so secrets don't roll over into new
processes, and the same state doesn't accidentally generate the same
random stream. The generation counter, as well, is always >0, so that
the 0 counter is a useful indication of a fork() or otherwise
uninitialized state.

If the kernel RNG is not yet initialized, then the vDSO always calls the
syscall, because that behavior cannot be emulated in userspace, but
fortunately that state is short-lived and only during early boot. If it
has been initialized, then there is no need to inspect the `flags`
argument, because the behavior does not change post-initialization,
regardless of the `flags` value.

Since the opaque state passed to it is mutated, vDSO getrandom() is not
reentrant when used with the same opaque state, which libc should be
mindful of.

Together with the previous commit that introduces vgetrandom_alloc(),
this functionality is intended to be integrated into libc's thread
management. As an illustrative example, the following code might be used
to do the same outside of libc. All of the static functions are to be
considered implementation private, including the vgetrandom_alloc()
syscall wrapper, which generally shouldn't be exposed outside of libc,
with the non-static vgetrandom() function at the end being the exported
interface. The various pthread-isms are expected to be elided into libc
internals. This per-thread allocation scheme is very naive and does not
shrink; other implementations may choose to be more complex.

  static void *vgetrandom_alloc(unsigned int *num, unsigned int *size_per_each, unsigned int flags)
  {
    long ret = syscall(__NR_vgetrandom_alloc, num, size_per_each, flags);
    return ret == -1 ? NULL : (void *)ret;
  }

  static struct {
    pthread_mutex_t lock;
    void **states;
    size_t len, cap;
  } grnd_allocator = { .lock = PTHREAD_MUTEX_INITIALIZER };

  static void *vgetrandom_get_state(void)
  {
    void *state = NULL;

    pthread_mutex_lock(&grnd_allocator.lock);
    if (!grnd_allocator.len) {
      size_t new_cap;
      unsigned int size_per_each, num = 16; /* Just a hint. Could also be nr_cpus. */
      void *new_block = vgetrandom_alloc(&num, &size_per_each, 0), *new_states;

      if (!new_block)
        goto out;
      new_cap = grnd_allocator.cap + num;
      new_states = reallocarray(grnd_allocator.states, new_cap, sizeof(*grnd_allocator.states));
      if (!new_states) {
        munmap(new_block, num * size_per_each);
        goto out;
      }
      grnd_allocator.cap = new_cap;
      grnd_allocator.states = new_states;

      for (size_t i = 0; i < num; ++i) {
        grnd_allocator.states[i] = new_block;
        new_block += size_per_each;
      }
      grnd_allocator.len = num;
    }
    state = grnd_allocator.states[--grnd_allocator.len];

  out:
    pthread_mutex_unlock(&grnd_allocator.lock);
    return state;
  }

  static void vgetrandom_put_state(void *state)
  {
    if (!state)
      return;
    pthread_mutex_lock(&grnd_allocator.lock);
    grnd_allocator.states[grnd_allocator.len++] = state;
    pthread_mutex_unlock(&grnd_allocator.lock);
  }

  static struct {
    ssize_t (*fn)(void *buf, size_t len, unsigned long flags, void *state);
    pthread_key_t key;
    pthread_once_t initialized;
  } grnd_ctx = { .initialized = PTHREAD_ONCE_INIT };

  static void vgetrandom_init(void)
  {
    if (pthread_key_create(&grnd_ctx.key, vgetrandom_put_state) != 0)
      return;
    grnd_ctx.fn = __vdsosym("LINUX_2.6", "__vdso_getrandom");
  }

  ssize_t vgetrandom(void *buf, size_t len, unsigned long flags)
  {
    void *state;

    pthread_once(&grnd_ctx.initialized, vgetrandom_init);
    if (!grnd_ctx.fn)
      return getrandom(buf, len, flags);
    state = pthread_getspecific(grnd_ctx.key);
    if (!state) {
      state = vgetrandom_get_state();
      if (pthread_setspecific(grnd_ctx.key, state) != 0) {
        vgetrandom_put_state(state);
        state = NULL;
      }
      if (!state)
        return getrandom(buf, len, flags);
    }
    return grnd_ctx.fn(buf, len, flags, state);
  }

Signed-off-by: Jason A. Donenfeld
---
 MAINTAINERS             |   1 +
 drivers/char/random.c   |   9 ++++
 include/vdso/datapage.h |   6 +++
 lib/vdso/Kconfig        |   5 ++
 lib/vdso/getrandom.c    | 114 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 135 insertions(+)
 create mode 100644 lib/vdso/getrandom.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 843dd6a49538..e0aa33f54c57 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17287,6 +17287,7 @@ T:	git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
 S:	Maintained
 F:	drivers/char/random.c
 F:	drivers/virt/vmgenid.c
+F:	lib/vdso/getrandom.c
 F:	lib/vdso/getrandom.h

 RAPIDIO SUBSYSTEM

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 71db7b787a60..35ac2d4d0726 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -61,6 +61,9 @@
 #include
 #include
 #include
+#ifdef CONFIG_HAVE_VDSO_GETRANDOM
+#include
+#endif
 #include "../../lib/vdso/getrandom.h"

 /*********************************************************************
@@ -328,6 +331,9 @@ static void crng_reseed(struct work_struct *work)
 	if (next_gen == ULONG_MAX)
 		++next_gen;
 	WRITE_ONCE(base_crng.generation, next_gen);
+#ifdef CONFIG_HAVE_VDSO_GETRANDOM
+	smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
+#endif
 	if (!static_branch_likely(&crng_is_ready))
 		crng_init = CRNG_READY;
 	spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -778,6 +784,9 @@ static void __cold _credit_init_bits(size_t bits)
 		if (static_key_initialized)
 			execute_in_process_context(crng_set_ready, &set_ready);
 		atomic_notifier_call_chain(&random_ready_notifier, 0, NULL);
+#ifdef CONFIG_HAVE_VDSO_GETRANDOM
+		smp_store_release(&_vdso_rng_data.is_ready, true);
+#endif
 		wake_up_interruptible(&crng_init_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);
 		pr_notice("crng init done\n");

diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
index 73eb622e7663..cbacfd923a5c 100644
--- a/include/vdso/datapage.h
+++ b/include/vdso/datapage.h
@@ -109,6 +109,11 @@ struct vdso_data {
 	struct arch_vdso_data	arch_data;
 };

+struct vdso_rng_data {
+	unsigned long generation;
+	bool is_ready;
+};
+
 /*
  * We use the hidden visibility to prevent the compiler from generating a GOT
  * relocation. Not only is going through a GOT useless (the entry couldn't and
@@ -120,6 +125,7 @@ struct vdso_data {
  */
 extern struct vdso_data _vdso_data[CS_BASES] __attribute__((visibility("hidden")));
 extern struct vdso_data _timens_data[CS_BASES] __attribute__((visibility("hidden")));
+extern struct vdso_rng_data _vdso_rng_data __attribute__((visibility("hidden")));

 /*
  * The generic vDSO implementation requires that gettimeofday.h

diff --git a/lib/vdso/Kconfig b/lib/vdso/Kconfig
index d883ac299508..c35fac664574 100644
--- a/lib/vdso/Kconfig
+++ b/lib/vdso/Kconfig
@@ -30,4 +30,9 @@ config GENERIC_VDSO_TIME_NS
 	  Selected by architectures which support time namespaces in the
 	  VDSO

+config HAVE_VDSO_GETRANDOM
+	bool
+	help
+	  Selected by architectures that support vDSO getrandom().
+
 endif

diff --git a/lib/vdso/getrandom.c b/lib/vdso/getrandom.c
new file mode 100644
index 000000000000..2c4ef5ef212c
--- /dev/null
+++ b/lib/vdso/getrandom.c
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld. All Rights Reserved.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include "getrandom.h"
+
+static void memcpy_and_zero(void *dst, void *src, size_t len)
+{
+#define CASCADE(type) \
+	while (len >= sizeof(type)) { \
+		__put_unaligned_t(type, __get_unaligned_t(type, src), dst); \
+		__put_unaligned_t(type, 0, src); \
+		dst += sizeof(type); \
+		src += sizeof(type); \
+		len -= sizeof(type); \
+	}
+#if IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+#if BITS_PER_LONG == 64
+	CASCADE(u64);
+#endif
+	CASCADE(u32);
+	CASCADE(u16);
+#endif
+	CASCADE(u8);
+#undef CASCADE
+}
+
+static __always_inline ssize_t
+__cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_t len,
+		       unsigned int flags, void *opaque_state)
+{
+	ssize_t ret = min_t(size_t, MAX_RW_COUNT, len);
+	struct vgetrandom_state *state = opaque_state;
+	size_t batch_len, nblocks, orig_len = len;
+	unsigned long current_generation;
+	void *orig_buffer = buffer;
+	u32 counter[2] = { 0 };
+
+	/*
+	 * If the kernel isn't yet initialized, then the various flags might have some effect
+	 * that we can't emulate in userspace, so use the syscall.  Otherwise, the flags have
+	 * no effect, and can continue.
+	 */
+	if (unlikely(!rng_info->is_ready))
+		return getrandom_syscall(orig_buffer, orig_len, flags);
+
+	if (unlikely(!len))
+		return 0;
+
+retry_generation:
+	current_generation = READ_ONCE(rng_info->generation);
+	if (unlikely(state->generation != current_generation)) {
+		/* Write the generation before filling the key, in case there's a fork before. */
+		WRITE_ONCE(state->generation, current_generation);
+		/* If the generation is wrong, the kernel has reseeded, so we should too. */
+		if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key))
+			return getrandom_syscall(orig_buffer, orig_len, flags);
+		/* Set state->pos so that the batch is considered emptied. */
+		state->pos = sizeof(state->batch);
+	}
+
+	len = ret;
+more_batch:
+	/* First use whatever is left from the last call. */
+	batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);
+	if (batch_len) {
+		/* Zero out bytes as they're copied out, to preserve forward secrecy. */
+		memcpy_and_zero(buffer, state->batch + state->pos, batch_len);
+		state->pos += batch_len;
+		buffer += batch_len;
+		len -= batch_len;
+	}
+	if (!len) {
+		/*
+		 * Since rng_info->generation will never be 0, we re-read state->generation,
+		 * rather than using the local current_generation variable, to learn whether
+		 * we forked. Primarily, though, this indicates whether the rng itself has
+		 * reseeded, in which case we should generate a new key and start over.
+		 */
+		if (unlikely(READ_ONCE(state->generation) != READ_ONCE(rng_info->generation))) {
+			buffer = orig_buffer;
+			goto retry_generation;
+		}
+		return ret;
+	}
+
+	/* Generate blocks of rng output directly into the buffer while there's enough left. */
+	nblocks = len / CHACHA_BLOCK_SIZE;
+	if (nblocks) {
+		__arch_chacha20_blocks_nostack(buffer, state->key, counter, nblocks);
+		buffer += nblocks * CHACHA_BLOCK_SIZE;
+		len -= nblocks * CHACHA_BLOCK_SIZE;
+	}
+
+	/* Refill the batch and then overwrite the key, in order to preserve forward secrecy.
+	 */
+	BUILD_BUG_ON(sizeof(state->batch_key) % CHACHA_BLOCK_SIZE != 0);
+	__arch_chacha20_blocks_nostack(state->batch_key, state->key, counter,
+				       sizeof(state->batch_key) / CHACHA_BLOCK_SIZE);
+	state->pos = 0;
+	goto more_batch;
+}
+
+static __always_inline ssize_t
+__cvdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state)
+{
+	return __cvdso_getrandom_data(__arch_get_vdso_rng_data(), buffer, len, flags, opaque_state);
+}
--
2.38.1

From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, patches@lists.linux.dev, tglx@linutronix.de
Cc: "Jason A. Donenfeld", linux-crypto@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer, Arnd Bergmann, Christian Brauner
Subject: [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation
Date: Thu, 24 Nov 2022 17:55:36 +0100
Message-Id: <20221124165536.1631325-4-Jason@zx2c4.com>
In-Reply-To: <20221124165536.1631325-1-Jason@zx2c4.com>

Hook up the generic vDSO implementation to the x86 vDSO data page. Since
the existing vDSO infrastructure is heavily based on the timekeeping
functionality, which works over arrays of bases, a new macro is
introduced for vvars that are not arrays.

The vDSO function requires a ChaCha20 implementation that does not write
to the stack, yet can still do an entire ChaCha20 permutation, so
provide this using SSE2, since this is userland code that must work on
all x86-64 processors.

Signed-off-by: Jason A. Donenfeld
Reviewed-by: Samuel Neves # for vgetrandom-chacha.S
---
 arch/x86/Kconfig                        |   1 +
 arch/x86/entry/vdso/Makefile            |   3 +-
 arch/x86/entry/vdso/vdso.lds.S          |   2 +
 arch/x86/entry/vdso/vgetrandom-chacha.S | 179 ++++++++++++++++++++++++
 arch/x86/entry/vdso/vgetrandom.c        |  18 +++
 arch/x86/include/asm/vdso/getrandom.h   |  49 +++++++
 arch/x86/include/asm/vdso/vsyscall.h    |   2 +
 arch/x86/include/asm/vvar.h             |  16 +++
 8 files changed, 269 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
 create mode 100644 arch/x86/entry/vdso/vgetrandom.c
 create mode 100644 arch/x86/include/asm/vdso/getrandom.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 331e21ba961a..b64b1b1274ae 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -270,6 +270,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HAVE_VDSO_GETRANDOM		if X86_64
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK

diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 3e88b9df8c8f..2de64e52236a 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -27,7 +27,7 @@ VDSO32-$(CONFIG_X86_32)		:= y
 VDSO32-$(CONFIG_IA32_EMULATION)	:= y

 # files to link into the vdso
-vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o
+vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vgetrandom.o vgetrandom-chacha.o
 vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
 vobjs32-y += vdso32/vclock_gettime.o
 vobjs-$(CONFIG_X86_SGX)	+= vsgx.o
@@ -104,6 +104,7 @@ CFLAGS_REMOVE_vclock_gettime.o = -pg
 CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
 CFLAGS_REMOVE_vgetcpu.o = -pg
 CFLAGS_REMOVE_vsgx.o = -pg
+CFLAGS_REMOVE_vgetrandom.o = -pg

 #
 # X32 processes use x32 vDSO to access 64bit kernel data.
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso.lds.S
index 4bf48462fca7..1919cc39277e 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso.lds.S
@@ -28,6 +28,8 @@ VERSION {
 		clock_getres;
 		__vdso_clock_getres;
 		__vdso_sgx_enter_enclave;
+		getrandom;
+		__vdso_getrandom;
 	local: *;
 	};
 }
diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vgetrandom-chacha.S
new file mode 100644
index 000000000000..d1b986be3aa4
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+.section	.rodata.cst16.CONSTANTS, "aM", @progbits, 16
+.align 16
+CONSTANTS:	.octa 0x6b20657479622d323320646e61707865
+.text
+
+/*
+ * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
+ * of blocks of output with a nonce of 0, taking an input key and 8-byte
+ * counter. Importantly does not spill to the stack. Its arguments are:
+ *
+ *	rdi: output bytes
+ *	rsi: 32-byte key input
+ *	rdx: 8-byte counter input/output
+ *	rcx: number of 64-byte blocks to write to output
+ */
+SYM_FUNC_START(chacha20_blocks_nostack)
+
+#define output	%rdi
+#define key	%rsi
+#define counter	%rdx
+#define nblocks	%rcx
+#define i	%al
+#define state0	%xmm0
+#define state1	%xmm1
+#define state2	%xmm2
+#define state3	%xmm3
+#define copy0	%xmm4
+#define copy1	%xmm5
+#define copy2	%xmm6
+#define copy3	%xmm7
+#define temp	%xmm8
+#define one	%xmm9
+
+	/* copy0 = "expand 32-byte k" */
+	movaps		CONSTANTS(%rip),copy0
+	/* copy1,copy2 = key */
+	movdqu		0x00(key),copy1
+	movdqu		0x10(key),copy2
+	/* copy3 = counter || zero nonce */
+	movq		0x00(counter),copy3
+	/* one = 1 || 0 */
+	movq		$1,%rax
+	movq		%rax,one
+
+.Lblock:
+	/* state0,state1,state2,state3 = copy0,copy1,copy2,copy3 */
+	movdqa		copy0,state0
+	movdqa		copy1,state1
+	movdqa		copy2,state2
+	movdqa		copy3,state3
+
+	movb		$10,i
+.Lpermute:
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$16,temp
+	psrld		$16,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$12,temp
+	psrld		$20,state1
+	por		temp,state1
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$8,temp
+	psrld		$24,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$7,temp
+	psrld		$25,state1
+	por		temp,state1
+
+	/* state1 = shuffle32(state1, MASK(0, 3, 2, 1)) */
+	pshufd		$0x39,state1,state1
+	/* state2 = shuffle32(state2, MASK(1, 0, 3, 2)) */
+	pshufd		$0x4e,state2,state2
+	/* state3 = shuffle32(state3, MASK(2, 1, 0, 3)) */
+	pshufd		$0x93,state3,state3
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$16,temp
+	psrld		$16,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$12,temp
+	psrld		$20,state1
+	por		temp,state1
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$8,temp
+	psrld		$24,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$7,temp
+	psrld		$25,state1
+	por		temp,state1
+
+	/* state1 = shuffle32(state1, MASK(2, 1, 0, 3)) */
+	pshufd		$0x93,state1,state1
+	/* state2 = shuffle32(state2, MASK(1, 0, 3, 2)) */
+	pshufd		$0x4e,state2,state2
+	/* state3 = shuffle32(state3, MASK(0, 3, 2, 1)) */
+	pshufd		$0x39,state3,state3
+
+	decb		i
+	jnz		.Lpermute
+
+	/* output0 = state0 + copy0 */
+	paddd		copy0,state0
+	movdqu		state0,0x00(output)
+	/* output1 = state1 + copy1 */
+	paddd		copy1,state1
+	movdqu		state1,0x10(output)
+	/* output2 = state2 + copy2 */
+	paddd		copy2,state2
+	movdqu		state2,0x20(output)
+	/* output3 = state3 + copy3 */
+	paddd		copy3,state3
+	movdqu		state3,0x30(output)
+
+	/* ++copy3.counter */
+	paddq		one,copy3
+
+	/* output += 64, --nblocks */
+	addq		$64,output
+	decq		nblocks
+	jnz		.Lblock
+
+	/* counter = copy3.counter */
+	movq		copy3,0x00(counter)
+
+	/* Zero out all the regs, in case nothing uses these again. */
+	pxor		state0,state0
+	pxor		state1,state1
+	pxor		state2,state2
+	pxor		state3,state3
+	pxor		copy0,copy0
+	pxor		copy1,copy1
+	pxor		copy2,copy2
+	pxor		copy3,copy3
+	pxor		temp,temp
+
+	ret
+SYM_FUNC_END(chacha20_blocks_nostack)
diff --git a/arch/x86/entry/vdso/vgetrandom.c b/arch/x86/entry/vdso/vgetrandom.c
new file mode 100644
index 000000000000..c7a2476d5d8a
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#include
+#include
+
+#include "../../../../lib/vdso/getrandom.c"
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state);
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state)
+{
+	return __cvdso_getrandom(buffer, len, flags, state);
+}
+
+ssize_t getrandom(void *, size_t, unsigned int, void *)
+	__attribute__((weak, alias("__vdso_getrandom")));
diff --git a/arch/x86/include/asm/vdso/getrandom.h b/arch/x86/include/asm/vdso/getrandom.h
new file mode 100644
index 000000000000..099aca58ef20
--- /dev/null
+++ b/arch/x86/include/asm/vdso/getrandom.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#ifndef __ASM_VDSO_GETRANDOM_H
+#define __ASM_VDSO_GETRANDOM_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/unistd.h>
+#include <asm/vvar.h>
+
+static __always_inline ssize_t
+getrandom_syscall(void *buffer, size_t len, unsigned int flags)
+{
+	long ret;
+
+	asm ("syscall" : "=a" (ret) :
+	     "0" (__NR_getrandom), "D" (buffer), "S" (len), "d" (flags) :
+	     "rcx", "r11", "memory");
+
+	return ret;
+}
+
+#define __vdso_rng_data (VVAR(_vdso_rng_data))
+
+static __always_inline const struct vdso_rng_data *__arch_get_vdso_rng_data(void)
+{
+	if (__vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
+		return (void *)&__vdso_rng_data +
+		       ((void *)&__timens_vdso_data - (void *)&__vdso_data);
+	return &__vdso_rng_data;
+}
+
+/*
+ * Generates a given positive number of block of ChaCha20 output with nonce=0,
+ * and does not write to any stack or memory outside of the parameters passed
+ * to it. This way, we don't need to worry about stack data leaking into forked
+ * child processes.
+ */
+static __always_inline void __arch_chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks)
+{
+	extern void chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks);
+	return chacha20_blocks_nostack(dst_bytes, key, counter, nblocks);
+}
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* __ASM_VDSO_GETRANDOM_H */
diff --git a/arch/x86/include/asm/vdso/vsyscall.h b/arch/x86/include/asm/vdso/vsyscall.h
index be199a9b2676..71c56586a22f 100644
--- a/arch/x86/include/asm/vdso/vsyscall.h
+++ b/arch/x86/include/asm/vdso/vsyscall.h
@@ -11,6 +11,8 @@
 #include
 
 DEFINE_VVAR(struct vdso_data, _vdso_data);
+DEFINE_VVAR_SINGLE(struct vdso_rng_data, _vdso_rng_data);
+
 /*
  * Update the vDSO data page to keep in sync with kernel timekeeping.
  */
diff --git a/arch/x86/include/asm/vvar.h b/arch/x86/include/asm/vvar.h
index 183e98e49ab9..9d9af37f7cab 100644
--- a/arch/x86/include/asm/vvar.h
+++ b/arch/x86/include/asm/vvar.h
@@ -26,6 +26,8 @@
  */
 #define DECLARE_VVAR(offset, type, name) \
 	EMIT_VVAR(name, offset)
+#define DECLARE_VVAR_SINGLE(offset, type, name) \
+	EMIT_VVAR(name, offset)
 
 #else
 
@@ -37,6 +39,10 @@ extern char __vvar_page;
 	extern type timens_ ## name[CS_BASES]	\
 	__attribute__((visibility("hidden")));	\
 
+#define DECLARE_VVAR_SINGLE(offset, type, name)	\
+	extern type vvar_ ## name		\
+	__attribute__((visibility("hidden")));	\
+
 #define VVAR(name) (vvar_ ## name)
 #define TIMENS(name) (timens_ ## name)
 
@@ -44,12 +50,22 @@ extern char __vvar_page;
 	type name[CS_BASES]			\
 	__attribute__((section(".vvar_" #name), aligned(16))) __visible
 
+#define DEFINE_VVAR_SINGLE(type, name)		\
+	type name				\
+	__attribute__((section(".vvar_" #name), aligned(16))) __visible
+
 #endif
 
 /* DECLARE_VVAR(offset, type, name) */
 
 DECLARE_VVAR(128, struct vdso_data, _vdso_data)
 
+#if !defined(_SINGLE_DATA)
+#define _SINGLE_DATA
+DECLARE_VVAR_SINGLE(640, struct vdso_rng_data, _vdso_rng_data)
+#endif
+
 #undef DECLARE_VVAR
+#undef DECLARE_VVAR_SINGLE
 
 #endif
-- 
2.38.1