From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 454E4C433EF for ; Fri, 27 May 2022 08:51:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349962AbiE0IvO (ORCPT ); Fri, 27 May 2022 04:51:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57068 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349944AbiE0IvE (ORCPT ); Fri, 27 May 2022 04:51:04 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5507EF136B; Fri, 27 May 2022 01:51:03 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id D75FC61D2B; Fri, 27 May 2022 08:51:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B567CC385B8; Fri, 27 May 2022 08:51:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641462; bh=BghL1surAkgirDQEAlsYzgJhE3Bu1nCy2mEUcXP8XK4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=apqf/ZDgxnW5JAohxoYcEUjkrOpWMmjmAINsWPk4vhlRtbfghtOZnXBAg/K3OrysG gsAc2NToasv6yZzpu2ijlkI/I/mq5AyNOtV/np/xrZ+GwzK6VC6PSgpyaheZAEOXe+ Db0OUipnRxkPkL04jAGSg91DPL5BUjyDdAcYoWXI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Mario Limonciello , Basavaraj Natikar , Jiri Kosina , Mario Limonciello Subject: [PATCH 5.17 001/111] HID: amd_sfh: Add support for sensor discovery Date: Fri, 27 May 2022 10:48:33 +0200 Message-Id: <20220527084819.348712980@linuxfoundation.org> X-Mailer: git-send-email 
2.36.1
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>
User-Agent: quilt/0.66
X-stable: review
X-Patchwork-Hint: ignore
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

From: Basavaraj Natikar

commit b5d7f43e97dabfa04a4be5ff027ce7da119332be upstream.

Sensor discovery status fails in case of broken sensors or platform
not supported. Hence disable driver on failure of sensor discovery.

Signed-off-by: Mario Limonciello
Signed-off-by: Basavaraj Natikar
Signed-off-by: Jiri Kosina
Cc: Mario Limonciello
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/hid/amd-sfh-hid/amd_sfh_client.c | 11 +++++++++++
 drivers/hid/amd-sfh-hid/amd_sfh_pcie.c   |  7 +++++++
 drivers/hid/amd-sfh-hid/amd_sfh_pcie.h   |  4 ++++
 3 files changed, 22 insertions(+)

--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
@@ -227,6 +227,17 @@ int amd_sfh_hid_client_init(struct amd_m
 		dev_dbg(dev, "sid 0x%x status 0x%x\n",
 			cl_data->sensor_idx[i], cl_data->sensor_sts[i]);
 	}
+	if (privdata->mp2_ops->discovery_status &&
+	    privdata->mp2_ops->discovery_status(privdata) == 0) {
+		amd_sfh_hid_client_deinit(privdata);
+		for (i = 0; i < cl_data->num_hid_devices; i++) {
+			devm_kfree(dev, cl_data->feature_report[i]);
+			devm_kfree(dev, in_data->input_report[i]);
+			devm_kfree(dev, cl_data->report_descr[i]);
+		}
+		dev_warn(dev, "Failed to discover, sensors not enabled\n");
+		return -EOPNOTSUPP;
+	}
 	schedule_delayed_work(&cl_data->work_buffer, msecs_to_jiffies(AMD_SFH_IDLE_LOOP));
 	return 0;
 
--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
@@ -130,6 +130,12 @@ static int amd_sfh_irq_init_v2(struct am
 	return 0;
 }
 
+static int amd_sfh_dis_sts_v2(struct amd_mp2_dev *privdata)
+{
+	return (readl(privdata->mmio + AMD_P2C_MSG(1)) &
+		SENSOR_DISCOVERY_STATUS_MASK) >> SENSOR_DISCOVERY_STATUS_SHIFT;
+}
+
 void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor_info info)
 {
 	union sfh_cmd_param cmd_param;
@@ -245,6 +251,7 @@ static const struct amd_mp2_ops amd_sfh_
 	.response = amd_sfh_wait_response_v2,
 	.clear_intr = amd_sfh_clear_intr_v2,
 	.init_intr = amd_sfh_irq_init_v2,
+	.discovery_status = amd_sfh_dis_sts_v2,
 };
 
 static const struct amd_mp2_ops amd_sfh_ops = {
--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
@@ -39,6 +39,9 @@
 
 #define AMD_SFH_IDLE_LOOP	200
 
+#define SENSOR_DISCOVERY_STATUS_MASK	GENMASK(5, 3)
+#define SENSOR_DISCOVERY_STATUS_SHIFT	3
+
 /* SFH Command register */
 union sfh_cmd_base {
 	u32 ul;
@@ -143,5 +146,6 @@ struct amd_mp2_ops {
 	int (*response)(struct amd_mp2_dev *mp2, u8 sid, u32 sensor_sts);
 	void (*clear_intr)(struct amd_mp2_dev *privdata);
 	int (*init_intr)(struct amd_mp2_dev *privdata);
+	int (*discovery_status)(struct amd_mp2_dev *privdata);
 };
 #endif

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Yongkang Jia, Paolo Bonzini, Vegard Nossum
Subject: [PATCH 5.17 002/111] KVM: x86/mmu: fix NULL pointer dereference on guest INVPCID
Date: Fri, 27 May 2022 10:48:34 +0200
Message-Id: <20220527084819.475104331@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: Paolo Bonzini

commit 9f46c187e2e680ecd9de7983e4d081c3391acc76 upstream.

With shadow paging enabled, the INVPCID instruction results in a call
to kvm_mmu_invpcid_gva. If INVPCID is executed with CR0.PG=0, the
invlpg callback is not set and the result is a NULL pointer
dereference. Fix it trivially by checking for mmu->invlpg before every
call.
There are other possibilities:

- check for CR0.PG, because KVM (like all Intel processors after P5)
  flushes guest TLB on CR0.PG changes so that INVPCID/INVLPG are a nop
  with paging disabled

- check for EFER.LMA, because KVM syncs and flushes when switching MMU
  contexts outside of 64-bit mode

All of these are tricky, go for the simple solution.

This is CVE-2022-1789.

Reported-by: Yongkang Jia
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini
[fix conflict due to missing b9e5603c2a3accbadfec570ac501a54431a6bdba]
Signed-off-by: Vegard Nossum
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 arch/x86/kvm/mmu/mmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5416,14 +5416,16 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu
 	uint i;
 
 	if (pcid == kvm_get_active_pcid(vcpu)) {
-		mmu->invlpg(vcpu, gva, mmu->root_hpa);
+		if (mmu->invlpg)
+			mmu->invlpg(vcpu, gva, mmu->root_hpa);
 		tlb_flush = true;
 	}
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
 		if (VALID_PAGE(mmu->prev_roots[i].hpa) &&
 		    pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd)) {
-			mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
+			if (mmu->invlpg)
+				mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
 			tlb_flush = true;
 		}
 	}

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Dominik Brodowski, Eric Biggers, Jean-Philippe Aumasson, "Jason A. Donenfeld"
Subject: [PATCH 5.17 003/111] random: use computational hash for entropy extraction
Date: Fri, 27 May 2022 10:48:35 +0200
Message-Id: <20220527084819.612284512@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 6e8ec2552c7d13991148e551e3325a624d73fac6 upstream.

The current 4096-bit LFSR used for entropy collection had a few
desirable attributes for the context in which it was created.
For example, the state was huge, which meant that /dev/random would be
able to output quite a bit of accumulated entropy before blocking. It
was also, in its time, quite fast at accumulating entropy byte-by-byte,
which matters given the varying contexts in which mix_pool_bytes() is
called. And its diffusion was relatively high, which meant that changes
would ripple across several words of state rather quickly.

However, it also suffers from a few security vulnerabilities. In
particular, inputs learned by an attacker can be undone, but moreover,
if the state of the pool leaks, its contents can be controlled and
entirely zeroed out. I've demonstrated this attack with this SMT2
script, , which Boolector/CaDiCal solves in a matter of seconds on a
single core of my laptop, resulting in little proof of concept C
demonstrators such as . For basically all recent formal models of
RNGs, these attacks represent a significant cryptographic flaw. But
how does this manifest practically? If an attacker has access to the
system to such a degree that he can learn the internal state of the
RNG, arguably there are other lower hanging vulnerabilities --
side-channel, infoleak, or otherwise -- that might have higher
priority. On the other hand, seed files are frequently used on systems
that have a hard time generating much entropy on their own, and these
seed files, being files, often leak or are duplicated and distributed
accidentally, or are even seeded over the Internet intentionally,
where their contents might be recorded or tampered with. Seen this
way, an otherwise quasi-implausible vulnerability is a bit more
practical than initially thought.

Another aspect of the current mix_pool_bytes() function is that, while
its performance was arguably competitive for the time in which it was
created, it's no longer considered so.
This patch improves performance significantly: on a high-end CPU, an
i7-11850H, it improves performance of mix_pool_bytes() by 225%, and on
a low-end CPU, a Cortex-A7, it improves performance by 103%.

This commit replaces the LFSR of mix_pool_bytes() with a
straightforward cryptographic hash function, BLAKE2s, which is already
in use for pool extraction. Universal hashing with a secret seed was
considered too, something along the lines of , but the requirement for
a secret seed makes for a chicken & egg problem. Instead we go with a
formally proven scheme using a computational hash function, described
in sections 5.1, 6.4, and B.1.8 of .

BLAKE2s outputs 256 bits, which should give us an appropriate amount
of min-entropy accumulation, and a wide enough margin of collision
resistance against active attacks.

mix_pool_bytes() becomes a simple call to blake2s_update(), for
accumulation, while the extraction step becomes a blake2s_final() to
generate a seed, with which we can then do a HKDF-like or BLAKE2X-like
expansion, the first part of which we fold back as an init key for
subsequent blake2s_update()s, and the rest we produce to the caller.
This then is provided to our CRNG like usual. In that expansion step,
we make opportunistic use of 32 bytes of RDRAND output, just as
before. We also always reseed the crng with 32 bytes, unconditionally,
or not at all, rather than sometimes with 16 as before, as we don't
win anything by limiting beyond the 16 byte threshold.

Going for a hash function as an entropy collector is a conservative,
proven approach. The result of all this is a much simpler and much
less bespoke construction than what's there now, which not only plugs
a vulnerability but also improves performance considerably.

Cc: Theodore Ts'o
Cc: Dominik Brodowski
Reviewed-by: Eric Biggers
Reviewed-by: Greg Kroah-Hartman
Reviewed-by: Jean-Philippe Aumasson
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 304 +++++++++-------------------------------------
 1 file changed, 55 insertions(+), 249 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -42,61 +42,6 @@
  */
 
 /*
- * (now, with legal B.S. out of the way.....)
- *
- * This routine gathers environmental noise from device drivers, etc.,
- * and returns good random numbers, suitable for cryptographic use.
- * Besides the obvious cryptographic uses, these numbers are also good
- * for seeding TCP sequence numbers, and other places where it is
- * desirable to have numbers which are not only random, but hard to
- * predict by an attacker.
- *
- * Theory of operation
- * ===================
- *
- * Computers are very predictable devices. Hence it is extremely hard
- * to produce truly random numbers on a computer --- as opposed to
- * pseudo-random numbers, which can easily generated by using a
- * algorithm. Unfortunately, it is very easy for attackers to guess
- * the sequence of pseudo-random number generators, and for some
- * applications this is not acceptable. So instead, we must try to
- * gather "environmental noise" from the computer's environment, which
- * must be hard for outside attackers to observe, and use that to
- * generate random numbers. In a Unix environment, this is best done
- * from inside the kernel.
- *
- * Sources of randomness from the environment include inter-keyboard
- * timings, inter-interrupt timings from some interrupts, and other
- * events which are both (a) non-deterministic and (b) hard for an
- * outside observer to measure. Randomness from these sources are
- * added to an "entropy pool", which is mixed using a CRC-like function.
- * This is not cryptographically strong, but it is adequate assuming
- * the randomness is not chosen maliciously, and it is fast enough that
- * the overhead of doing it on every interrupt is very reasonable.
- * As random bytes are mixed into the entropy pool, the routines keep
- * an *estimate* of how many bits of randomness have been stored into
- * the random number generator's internal state.
- *
- * When random bytes are desired, they are obtained by taking the BLAKE2s
- * hash of the contents of the "entropy pool". The BLAKE2s hash avoids
- * exposing the internal state of the entropy pool. It is believed to
- * be computationally infeasible to derive any useful information
- * about the input of BLAKE2s from its output. Even if it is possible to
- * analyze BLAKE2s in some clever way, as long as the amount of data
- * returned from the generator is less than the inherent entropy in
- * the pool, the output data is totally unpredictable. For this
- * reason, the routine decreases its internal estimate of how many
- * bits of "true randomness" are contained in the entropy pool as it
- * outputs random numbers.
- *
- * If this estimate goes to zero, the routine can still generate
- * random numbers; however, an attacker may (at least in theory) be
- * able to infer the future output of the generator from prior
- * outputs. This requires successful cryptanalysis of BLAKE2s, which is
- * not believed to be feasible, but there is a remote possibility.
- * Nonetheless, these numbers should be useful for the vast majority
- * of purposes.
- *
  * Exported interfaces ---- output
  * ===============================
  *
@@ -298,23 +243,6 @@
  *
  *	mknod /dev/random c 1 8
  *	mknod /dev/urandom c 1 9
- *
- * Acknowledgements:
- * =================
- *
- * Ideas for constructing this random number generator were derived
- * from Pretty Good Privacy's random number generator, and from private
- * discussions with Phil Karn. Colin Plumb provided a faster random
- * number generator, which speed up the mixing function of the entropy
- * pool, taken from PGPfone. Dale Worley has also contributed many
- * useful ideas and suggestions to improve this driver.
- *
- * Any flaws in the design are solely my responsibility, and should
- * not be attributed to the Phil, Colin, or any of authors of PGP.
- *
- * Further background information on this topic may be obtained from
- * RFC 1750, "Randomness Recommendations for Security", by Donald
- * Eastlake, Steve Crocker, and Jeff Schiller.
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -358,79 +286,15 @@
 
 /* #define ADD_INTERRUPT_BENCH */
 
-/*
- * If the entropy count falls under this number of bits, then we
- * should wake up processes which are selecting or polling on write
- * access to /dev/random.
- */
-static int random_write_wakeup_bits = 28 * (1 << 5);
-
-/*
- * Originally, we used a primitive polynomial of degree .poolwords
- * over GF(2). The taps for various sizes are defined below. They
- * were chosen to be evenly spaced except for the last tap, which is 1
- * to get the twisting happening as fast as possible.
- *
- * For the purposes of better mixing, we use the CRC-32 polynomial as
- * well to make a (modified) twisted Generalized Feedback Shift
- * Register. (See M. Matsumoto & Y. Kurita, 1992. Twisted GFSR
- * generators. ACM Transactions on Modeling and Computer Simulation
- * 2(3):179-194. Also see M. Matsumoto & Y. Kurita, 1994. Twisted
- * GFSR generators II. ACM Transactions on Modeling and Computer
- * Simulation 4:254-266)
- *
- * Thanks to Colin Plumb for suggesting this.
- *
- * The mixing operation is much less sensitive than the output hash,
- * where we use BLAKE2s. All that we want of mixing operation is that
- * it be a good non-cryptographic hash; i.e. it not produce collisions
- * when fed "random" data of the sort we expect to see. As long as
- * the pool state differs for different inputs, we have preserved the
- * input entropy and done a good job. The fact that an intelligent
- * attacker can construct inputs that will produce controlled
- * alterations to the pool's state is not important because we don't
- * consider such inputs to contribute any randomness. The only
- * property we need with respect to them is that the attacker can't
- * increase his/her knowledge of the pool's state. Since all
- * additions are reversible (knowing the final state and the input,
- * you can reconstruct the initial state), if an attacker has any
- * uncertainty about the initial state, he/she can only shuffle that
- * uncertainty about, but never cause any collisions (which would
- * decrease the uncertainty).
- *
- * Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
- * Videau in their paper, "The Linux Pseudorandom Number Generator
- * Revisited" (see: http://eprint.iacr.org/2012/251.pdf). In their
- * paper, they point out that we are not using a true Twisted GFSR,
- * since Matsumoto & Kurita used a trinomial feedback polynomial (that
- * is, with only three taps, instead of the six that we are using).
- * As a result, the resulting polynomial is neither primitive nor
- * irreducible, and hence does not have a maximal period over
- * GF(2**32). They suggest a slight change to the generator
- * polynomial which improves the resulting TGFSR polynomial to be
- * irreducible, which we have made here.
- */
 enum poolinfo {
-	POOL_WORDS = 128,
-	POOL_WORDMASK = POOL_WORDS - 1,
-	POOL_BYTES = POOL_WORDS * sizeof(u32),
-	POOL_BITS = POOL_BYTES * 8,
+	POOL_BITS = BLAKE2S_HASH_SIZE * 8,
 	POOL_BITSHIFT = ilog2(POOL_BITS),
 
 	/* To allow fractional bits to be tracked, the entropy_count field is
 	 * denominated in units of 1/8th bits. */
 	POOL_ENTROPY_SHIFT = 3,
 #define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)
-	POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT,
-
-	/* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */
-	POOL_TAP1 = 104,
-	POOL_TAP2 = 76,
-	POOL_TAP3 = 51,
-	POOL_TAP4 = 25,
-	POOL_TAP5 = 1,
-
-	EXTRACT_SIZE = BLAKE2S_HASH_SIZE / 2
+	POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT
 };
 
 /*
@@ -438,6 +302,12 @@ enum poolinfo {
  */
 static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
 static struct fasync_struct *fasync;
+/*
+ * If the entropy count falls under this number of bits, then we
+ * should wake up processes which are selecting or polling on write
+ * access to /dev/random.
+ */
+static int random_write_wakeup_bits = POOL_BITS * 3 / 4;
 
 static DEFINE_SPINLOCK(random_ready_list_lock);
 static LIST_HEAD(random_ready_list);
@@ -493,73 +363,31 @@ MODULE_PARM_DESC(ratelimit_disable, "Dis
  *
 **********************************************************************/
 
-static u32 input_pool_data[POOL_WORDS] __latent_entropy;
-
 static struct {
+	struct blake2s_state hash;
 	spinlock_t lock;
-	u16 add_ptr;
-	u16 input_rotate;
 	int entropy_count;
 } input_pool = {
+	.hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE),
+		    BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4,
+		    BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 },
+	.hash.outlen = BLAKE2S_HASH_SIZE,
 	.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
 };
 
-static ssize_t extract_entropy(void *buf, size_t nbytes, int min);
-static ssize_t _extract_entropy(void *buf, size_t nbytes);
+static bool extract_entropy(void *buf, size_t nbytes, int min);
+static void _extract_entropy(void *buf, size_t nbytes);
 
 static void crng_reseed(struct crng_state *crng, bool use_input_pool);
 
-static const u32 twist_table[8] = {
-	0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
-	0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
-
 /*
  * This function adds bytes into the entropy "pool". It does not
  * update the entropy estimate. The caller should call
  * credit_entropy_bits if this is appropriate.
- *
- * The pool is stirred with a primitive polynomial of the appropriate
- * degree, and then twisted. We twist by three bits at a time because
- * it's cheap to do so and helps slightly in the expected case where
- * the entropy is concentrated in the low-order bits.
 */
 static void _mix_pool_bytes(const void *in, int nbytes)
 {
-	unsigned long i;
-	int input_rotate;
-	const u8 *bytes = in;
-	u32 w;
-
-	input_rotate = input_pool.input_rotate;
-	i = input_pool.add_ptr;
-
-	/* mix one byte at a time to simplify size handling and churn faster */
-	while (nbytes--) {
-		w = rol32(*bytes++, input_rotate);
-		i = (i - 1) & POOL_WORDMASK;
-
-		/* XOR in the various taps */
-		w ^= input_pool_data[i];
-		w ^= input_pool_data[(i + POOL_TAP1) & POOL_WORDMASK];
-		w ^= input_pool_data[(i + POOL_TAP2) & POOL_WORDMASK];
-		w ^= input_pool_data[(i + POOL_TAP3) & POOL_WORDMASK];
-		w ^= input_pool_data[(i + POOL_TAP4) & POOL_WORDMASK];
-		w ^= input_pool_data[(i + POOL_TAP5) & POOL_WORDMASK];
-
-		/* Mix the result back in with a twist */
-		input_pool_data[i] = (w >> 3) ^ twist_table[w & 7];
-
-		/*
-		 * Normally, we add 7 bits of rotation to the pool.
-		 * At the beginning of the pool, add an extra 7 bits
-		 * rotation, so that successive passes spread the
-		 * input bits across the pool evenly.
-		 */
-		input_rotate = (input_rotate + (i ? 7 : 14)) & 31;
-	}
-
-	input_pool.input_rotate = input_rotate;
-	input_pool.add_ptr = i;
+	blake2s_update(&input_pool.hash, in, nbytes);
 }
 
 static void __mix_pool_bytes(const void *in, int nbytes)
@@ -953,15 +781,14 @@ static int crng_slow_load(const u8 *cp,
 static void crng_reseed(struct crng_state *crng, bool use_input_pool)
 {
 	unsigned long flags;
-	int i, num;
+	int i;
 	union {
 		u8 block[CHACHA_BLOCK_SIZE];
 		u32 key[8];
 	} buf;
 
 	if (use_input_pool) {
-		num = extract_entropy(&buf, 32, 16);
-		if (num == 0)
+		if (!extract_entropy(&buf, 32, 16))
 			return;
 	} else {
 		_extract_crng(&primary_crng, buf.block);
@@ -1329,74 +1156,48 @@ retry:
 }
 
 /*
- * This function does the actual extraction for extract_entropy.
- *
- * Note: we assume that .poolwords is a multiple of 16 words.
+ * This is an HKDF-like construction for using the hashed collected entropy
+ * as a PRF key, that's then expanded block-by-block.
  */
-static void extract_buf(u8 *out)
+static void _extract_entropy(void *buf, size_t nbytes)
 {
-	struct blake2s_state state __aligned(__alignof__(unsigned long));
-	u8 hash[BLAKE2S_HASH_SIZE];
-	unsigned long *salt;
 	unsigned long flags;
-
-	blake2s_init(&state, sizeof(hash));
-
-	/*
-	 * If we have an architectural hardware random number
-	 * generator, use it for BLAKE2's salt & personal fields.
-	 */
-	for (salt = (unsigned long *)&state.h[4];
-	     salt < (unsigned long *)&state.h[8]; ++salt) {
-		unsigned long v;
-		if (!arch_get_random_long(&v))
-			break;
-		*salt ^= v;
+	u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE];
+	struct {
+		unsigned long rdrand[32 / sizeof(long)];
+		size_t counter;
+	} block;
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(block.rdrand); ++i) {
+		if (!arch_get_random_long(&block.rdrand[i]))
+			block.rdrand[i] = random_get_entropy();
 	}
 
-	/* Generate a hash across the pool */
 	spin_lock_irqsave(&input_pool.lock, flags);
-	blake2s_update(&state, (const u8 *)input_pool_data, POOL_BYTES);
-	blake2s_final(&state, hash); /* final zeros out state */
 
-	/*
-	 * We mix the hash back into the pool to prevent backtracking
-	 * attacks (where the attacker knows the state of the pool
-	 * plus the current outputs, and attempts to find previous
-	 * outputs), unless the hash function can be inverted. By
-	 * mixing at least a hash worth of hash data back, we make
-	 * brute-forcing the feedback as hard as brute-forcing the
-	 * hash.
-	 */
-	__mix_pool_bytes(hash, sizeof(hash));
-	spin_unlock_irqrestore(&input_pool.lock, flags);
+	/* seed = HASHPRF(last_key, entropy_input) */
+	blake2s_final(&input_pool.hash, seed);
 
-	/* Note that EXTRACT_SIZE is half of hash size here, because above
-	 * we've dumped the full length back into mixer. By reducing the
-	 * amount that we emit, we retain a level of forward secrecy.
- */
-	memcpy(out, hash, EXTRACT_SIZE);
-	memzero_explicit(hash, sizeof(hash));
-}
+	/* next_key = HASHPRF(seed, RDRAND || 0) */
+	block.counter = 0;
+	blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed));
+	blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key));
 
-static ssize_t _extract_entropy(void *buf, size_t nbytes)
-{
-	ssize_t ret = 0, i;
-	u8 tmp[EXTRACT_SIZE];
+	spin_unlock_irqrestore(&input_pool.lock, flags);
+	memzero_explicit(next_key, sizeof(next_key));
 
 	while (nbytes) {
-		extract_buf(tmp);
-		i = min_t(int, nbytes, EXTRACT_SIZE);
-		memcpy(buf, tmp, i);
+		i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE);
+		/* output = HASHPRF(seed, RDRAND || ++counter) */
+		++block.counter;
+		blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed));
 		nbytes -= i;
 		buf += i;
-		ret += i;
 	}
 
-	/* Wipe data just returned from memory */
-	memzero_explicit(tmp, sizeof(tmp));
-
-	return ret;
+	memzero_explicit(seed, sizeof(seed));
+	memzero_explicit(&block, sizeof(block));
 }
 
 /*
@@ -1404,13 +1205,18 @@ static ssize_t _extract_entropy(void *bu
  * returns it in a buffer.
  *
  * The min parameter specifies the minimum amount we can pull before
- * failing to avoid races that defeat catastrophic reseeding.
+ * failing to avoid races that defeat catastrophic reseeding. If we
+ * have less than min entropy available, we return false and buf is
+ * not filled.
 */
-static ssize_t extract_entropy(void *buf, size_t nbytes, int min)
+static bool extract_entropy(void *buf, size_t nbytes, int min)
 {
 	trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_);
-	nbytes = account(nbytes, min);
-	return _extract_entropy(buf, nbytes);
+	if (account(nbytes, min)) {
+		_extract_entropy(buf, nbytes);
+		return true;
+	}
+	return false;
 }
 
 #define warn_unseeded_randomness(previous) \
@@ -1674,7 +1480,7 @@ static void __init init_std_data(void)
 	unsigned long rv;
 
 	mix_pool_bytes(&now, sizeof(now));
-	for (i = POOL_BYTES; i > 0; i -= sizeof(rv)) {
+	for (i = BLAKE2S_BLOCK_SIZE; i > 0; i -= sizeof(rv)) {
 		if (!arch_get_random_seed_long(&rv) &&
 		    !arch_get_random_long(&rv))
 			rv = random_get_entropy();

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Eric Biggers, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 004/111] random: simplify entropy debiting
Date: Fri, 27 May 2022 10:48:36 +0200
Message-Id: <20220527084819.740408531@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 9c07f57869e90140080cfc282cc628d123e27704 upstream.

Our pool is 256 bits, and we only ever use all of it or don't use it
at all, which is decided by whether or not it has at least 128 bits in
it. So we can drastically simplify the accounting and cmpxchg loop to
do exactly this. While we're at it, we move the minimum bit size into
a constant so it can be shared between the two places where it
matters.

The reason we want any of this is for the case in which an attacker
has compromised the current state, and then bruteforces small amounts
of entropy added to it. By demanding a particular minimum amount of
entropy be present before reseeding, we make that bruteforcing
difficult.

Note that this rationale no longer includes anything about /dev/random
blocking at the right moment, since /dev/random no longer blocks
(except for at ~boot), but rather uses the crng.
In a former life, /dev/random was different and therefore required a more nuanced account(), but this is no longer. Behaviorally, nothing changes here. This is just a simplification of the code. Cc: Theodore Ts'o Cc: Greg Kroah-Hartman Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 91 +++++++++----------------------------= ----- include/trace/events/random.h | 30 ++----------- 2 files changed, 27 insertions(+), 94 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -289,12 +289,14 @@ enum poolinfo { POOL_BITS =3D BLAKE2S_HASH_SIZE * 8, POOL_BITSHIFT =3D ilog2(POOL_BITS), + POOL_MIN_BITS =3D POOL_BITS / 2, =20 /* To allow fractional bits to be tracked, the entropy_count field is * denominated in units of 1/8th bits. */ POOL_ENTROPY_SHIFT =3D 3, #define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIF= T) - POOL_FRACBITS =3D POOL_BITS << POOL_ENTROPY_SHIFT + POOL_FRACBITS =3D POOL_BITS << POOL_ENTROPY_SHIFT, + POOL_MIN_FRACBITS =3D POOL_MIN_BITS << POOL_ENTROPY_SHIFT }; =20 /* @@ -375,8 +377,7 @@ static struct { .lock =3D __SPIN_LOCK_UNLOCKED(input_pool.lock), }; =20 -static bool extract_entropy(void *buf, size_t nbytes, int min); -static void _extract_entropy(void *buf, size_t nbytes); +static void extract_entropy(void *buf, size_t nbytes); =20 static void crng_reseed(struct crng_state *crng, bool use_input_pool); =20 @@ -467,7 +468,7 @@ static void process_random_ready_list(vo */ static void credit_entropy_bits(int nbits) { - int entropy_count, entropy_bits, orig; + int entropy_count, orig; int nfrac =3D nbits << POOL_ENTROPY_SHIFT; =20 /* Ensure that the multiplication can avoid being 64 bits wide. 
*/ @@ -527,8 +528,7 @@ retry: =20 trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RE= T_IP_); =20 - entropy_bits =3D entropy_count >> POOL_ENTROPY_SHIFT; - if (crng_init < 2 && entropy_bits >=3D 128) + if (crng_init < 2 && entropy_count >=3D POOL_MIN_FRACBITS) crng_reseed(&primary_crng, true); } =20 @@ -618,7 +618,7 @@ static void crng_initialize_secondary(st =20 static void __init crng_initialize_primary(void) { - _extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); + extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); numa_crng_init(); @@ -788,8 +788,17 @@ static void crng_reseed(struct crng_stat } buf; =20 if (use_input_pool) { - if (!extract_entropy(&buf, 32, 16)) - return; + int entropy_count; + do { + entropy_count =3D READ_ONCE(input_pool.entropy_count); + if (entropy_count < POOL_MIN_FRACBITS) + return; + } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) !=3D entro= py_count); + extract_entropy(buf.key, sizeof(buf.key)); + if (random_write_wakeup_bits) { + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); + } } else { _extract_crng(&primary_crng, buf.block); _crng_backtrack_protect(&primary_crng, buf.block, @@ -1115,51 +1124,10 @@ EXPORT_SYMBOL_GPL(add_disk_randomness); *********************************************************************/ =20 /* - * This function decides how many bytes to actually take from the - * given pool, and also debits the entropy count accordingly. - */ -static size_t account(size_t nbytes, int min) -{ - int entropy_count, orig; - size_t ibytes, nfrac; - - BUG_ON(input_pool.entropy_count > POOL_FRACBITS); - - /* Can we pull enough? 
*/ -retry: - entropy_count =3D orig =3D READ_ONCE(input_pool.entropy_count); - if (WARN_ON(entropy_count < 0)) { - pr_warn("negative entropy count: count %d\n", entropy_count); - entropy_count =3D 0; - } - - /* never pull more than available */ - ibytes =3D min_t(size_t, nbytes, entropy_count >> (POOL_ENTROPY_SHIFT + 3= )); - if (ibytes < min) - ibytes =3D 0; - nfrac =3D ibytes << (POOL_ENTROPY_SHIFT + 3); - if ((size_t)entropy_count > nfrac) - entropy_count -=3D nfrac; - else - entropy_count =3D 0; - - if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D orig) - goto retry; - - trace_debit_entropy(8 * ibytes); - if (ibytes && POOL_ENTROPY_BITS() < random_write_wakeup_bits) { - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - } - - return ibytes; -} - -/* * This is an HKDF-like construction for using the hashed collected entropy * as a PRF key, that's then expanded block-by-block. */ -static void _extract_entropy(void *buf, size_t nbytes) +static void extract_entropy(void *buf, size_t nbytes) { unsigned long flags; u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; @@ -1169,6 +1137,8 @@ static void _extract_entropy(void *buf, } block; size_t i; =20 + trace_extract_entropy(nbytes, POOL_ENTROPY_BITS()); + for (i =3D 0; i < ARRAY_SIZE(block.rdrand); ++i) { if (!arch_get_random_long(&block.rdrand[i])) block.rdrand[i] =3D random_get_entropy(); @@ -1200,25 +1170,6 @@ static void _extract_entropy(void *buf, memzero_explicit(&block, sizeof(block)); } =20 -/* - * This function extracts randomness from the "entropy pool", and - * returns it in a buffer. - * - * The min parameter specifies the minimum amount we can pull before - * failing to avoid races that defeat catastrophic reseeding. If we - * have less than min entropy available, we return false and buf is - * not filled. 
- */ -static bool extract_entropy(void *buf, size_t nbytes, int min) -{ - trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_); - if (account(nbytes, min)) { - _extract_entropy(buf, nbytes); - return true; - } - return false; -} - #define warn_unseeded_randomness(previous) \ _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) =20 --- a/include/trace/events/random.h +++ b/include/trace/events/random.h @@ -79,22 +79,6 @@ TRACE_EVENT(credit_entropy_bits, __entry->bits, __entry->entropy_count, (void *)__entry->IP) ); =20 -TRACE_EVENT(debit_entropy, - TP_PROTO(int debit_bits), - - TP_ARGS( debit_bits), - - TP_STRUCT__entry( - __field( int, debit_bits ) - ), - - TP_fast_assign( - __entry->debit_bits =3D debit_bits; - ), - - TP_printk("input pool: debit_bits %d", __entry->debit_bits) -); - TRACE_EVENT(add_input_randomness, TP_PROTO(int input_bits), =20 @@ -161,31 +145,29 @@ DEFINE_EVENT(random__get_random_bytes, g ); =20 DECLARE_EVENT_CLASS(random__extract_entropy, - TP_PROTO(int nbytes, int entropy_count, unsigned long IP), + TP_PROTO(int nbytes, int entropy_count), =20 - TP_ARGS(nbytes, entropy_count, IP), + TP_ARGS(nbytes, entropy_count), =20 TP_STRUCT__entry( __field( int, nbytes ) __field( int, entropy_count ) - __field(unsigned long, IP ) ), =20 TP_fast_assign( __entry->nbytes =3D nbytes; __entry->entropy_count =3D entropy_count; - __entry->IP =3D IP; ), =20 - TP_printk("input pool: nbytes %d entropy_count %d caller %pS", - __entry->nbytes, __entry->entropy_count, (void *)__entry->IP) + TP_printk("input pool: nbytes %d entropy_count %d", + __entry->nbytes, __entry->entropy_count) ); =20 =20 DEFINE_EVENT(random__extract_entropy, extract_entropy, - TP_PROTO(int nbytes, int entropy_count, unsigned long IP), + TP_PROTO(int nbytes, int entropy_count), =20 - TP_ARGS(nbytes, entropy_count, IP) + TP_ARGS(nbytes, entropy_count) ); =20 TRACE_EVENT(urandom_read, From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , Jean-Philippe Aumasson , "Jason A.
Donenfeld" Subject: [PATCH 5.17 005/111] random: use linear min-entropy accumulation crediting Date: Fri, 27 May 2022 10:48:37 +0200 Message-Id: <20220527084819.887147273@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: "Jason A. Donenfeld" commit c570449094844527577c5c914140222cb1893e3f upstream. 30e37ec516ae ("random: account for entropy loss due to overwrites") assumed that adding new entropy to the LFSR pool probabilistically cancelled out old entropy there, so entropy was credited asymptotically, approximating Shannon entropy of independent sources (rather than a stronger min-entropy notion) using 1/8th fractional bits and replacing a constant 2-2/√e term (~0.786938) with 3/4 (0.75) to slightly underestimate it. This wasn't superb, but it was perhaps better than nothing, so that's what was done. Which entropy specifically was being cancelled out and how much precisely each time is hard to tell, though as I showed with the attack code in my previous commit, a motivated adversary with sufficient information can actually cancel out everything. Since we're no longer using an LFSR for entropy accumulation, this probabilistic cancellation is no longer relevant. Rather, we're now using a computational hash function as the accumulator and we've switched to working in the random oracle model, from which we can now revisit the question of min-entropy accumulation, which is done in detail in . Consider a long input bit string that is built by concatenating various smaller independent input bit strings. Each one of these inputs has a designated min-entropy, which is what we're passing to credit_entropy_bits(h).
When we pass the concatenation of these to a random oracle, it means that an adversary trying to receive back the same reply as us would need to become certain about each part of the concatenated bit string we passed in, which means becoming certain about all of those h values. That means we can estimate the accumulation by simply adding up the h values in calls to credit_entropy_bits(h); there's no probabilistic cancellation at play like there was said to be for the LFSR. Incidentally, this is also what other entropy accumulators based on computational hash functions do as well. So this commit replaces credit_entropy_bits(h) with essentially `total =3D min(POOL_BITS, total + h)`, done with a cmpxchg loop as before. What if we're wrong and the above is nonsense? It's not, but let's assume we don't want the actual _behavior_ of the code to change much. Currently that behavior is not extracting from the input pool until it has 128 bits of entropy in it. With the old algorithm, we'd hit that magic 128 number after roughly 256 calls to credit_entropy_bits(1). So, we can retain more or less the old behavior by waiting to extract from the input pool until it hits 256 bits of entropy using the new code. For people concerned about this change, it means that there's not that much practical behavioral change. And for folks actually trying to model the behavior rigorously, it means that we have an even higher margin against attacks. Cc: Theodore Ts'o Cc: Dominik Brodowski Cc: Greg Kroah-Hartman Reviewed-by: Eric Biggers Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 114 ++++++++-------------------------------------= ----- 1 file changed, 20 insertions(+), 94 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -286,17 +286,9 @@ =20 /* #define ADD_INTERRUPT_BENCH */ =20 -enum poolinfo { +enum { POOL_BITS =3D BLAKE2S_HASH_SIZE * 8, - POOL_BITSHIFT =3D ilog2(POOL_BITS), - POOL_MIN_BITS =3D POOL_BITS / 2, - - /* To allow fractional bits to be tracked, the entropy_count field is - * denominated in units of 1/8th bits. */ - POOL_ENTROPY_SHIFT =3D 3, -#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIF= T) - POOL_FRACBITS =3D POOL_BITS << POOL_ENTROPY_SHIFT, - POOL_MIN_FRACBITS =3D POOL_MIN_BITS << POOL_ENTROPY_SHIFT + POOL_MIN_BITS =3D POOL_BITS /* No point in settling for less. */ }; =20 /* @@ -309,7 +301,7 @@ static struct fasync_struct *fasync; * should wake up processes which are selecting or polling on write * access to /dev/random. */ -static int random_write_wakeup_bits =3D POOL_BITS * 3 / 4; +static int random_write_wakeup_bits =3D POOL_MIN_BITS; =20 static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); @@ -469,66 +461,18 @@ static void process_random_ready_list(vo static void credit_entropy_bits(int nbits) { int entropy_count, orig; - int nfrac =3D nbits << POOL_ENTROPY_SHIFT; - - /* Ensure that the multiplication can avoid being 64 bits wide. */ - BUILD_BUG_ON(2 * (POOL_ENTROPY_SHIFT + POOL_BITSHIFT) > 31); =20 if (!nbits) return; =20 -retry: - entropy_count =3D orig =3D READ_ONCE(input_pool.entropy_count); - if (nfrac < 0) { - /* Debit */ - entropy_count +=3D nfrac; - } else { - /* - * Credit: we have to account for the possibility of - * overwriting already present entropy. 
Even in the - * ideal case of pure Shannon entropy, new contributions - * approach the full value asymptotically: - * - * entropy <- entropy + (pool_size - entropy) * - * (1 - exp(-add_entropy/pool_size)) - * - * For add_entropy <=3D pool_size/2 then - * (1 - exp(-add_entropy/pool_size)) >=3D - * (add_entropy/pool_size)*0.7869... - * so we can approximate the exponential with - * 3/4*add_entropy/pool_size and still be on the - * safe side by adding at most pool_size/2 at a time. - * - * The use of pool_size-2 in the while statement is to - * prevent rounding artifacts from making the loop - * arbitrarily long; this limits the loop to log2(pool_size)*2 - * turns no matter how large nbits is. - */ - int pnfrac =3D nfrac; - const int s =3D POOL_BITSHIFT + POOL_ENTROPY_SHIFT + 2; - /* The +2 corresponds to the /4 in the denominator */ - - do { - unsigned int anfrac =3D min(pnfrac, POOL_FRACBITS / 2); - unsigned int add =3D - ((POOL_FRACBITS - entropy_count) * anfrac * 3) >> s; - - entropy_count +=3D add; - pnfrac -=3D anfrac; - } while (unlikely(entropy_count < POOL_FRACBITS - 2 && pnfrac)); - } - - if (WARN_ON(entropy_count < 0)) { - pr_warn("negative entropy/overflow: count %d\n", entropy_count); - entropy_count =3D 0; - } else if (entropy_count > POOL_FRACBITS) - entropy_count =3D POOL_FRACBITS; - if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D orig) - goto retry; + do { + orig =3D READ_ONCE(input_pool.entropy_count); + entropy_count =3D min(POOL_BITS, orig + nbits); + } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D ori= g); =20 - trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RE= T_IP_); + trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_); =20 - if (crng_init < 2 && entropy_count >=3D POOL_MIN_FRACBITS) + if (crng_init < 2 && entropy_count >=3D POOL_MIN_BITS) crng_reseed(&primary_crng, true); } =20 @@ -791,7 +735,7 @@ static void crng_reseed(struct crng_stat int entropy_count; do { entropy_count 
=3D READ_ONCE(input_pool.entropy_count); - if (entropy_count < POOL_MIN_FRACBITS) + if (entropy_count < POOL_MIN_BITS) return; } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) !=3D entro= py_count); extract_entropy(buf.key, sizeof(buf.key)); @@ -1014,7 +958,7 @@ void add_input_randomness(unsigned int t last_value =3D value; add_timer_randomness(&input_timer_state, (type << 4) ^ code ^ (code >> 4) ^ value); - trace_add_input_randomness(POOL_ENTROPY_BITS()); + trace_add_input_randomness(input_pool.entropy_count); } EXPORT_SYMBOL_GPL(add_input_randomness); =20 @@ -1112,7 +1056,7 @@ void add_disk_randomness(struct gendisk return; /* first major is 1, so we get >=3D 0x200 here */ add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); - trace_add_disk_randomness(disk_devt(disk), POOL_ENTROPY_BITS()); + trace_add_disk_randomness(disk_devt(disk), input_pool.entropy_count); } EXPORT_SYMBOL_GPL(add_disk_randomness); #endif @@ -1137,7 +1081,7 @@ static void extract_entropy(void *buf, s } block; size_t i; =20 - trace_extract_entropy(nbytes, POOL_ENTROPY_BITS()); + trace_extract_entropy(nbytes, input_pool.entropy_count); =20 for (i =3D 0; i < ARRAY_SIZE(block.rdrand); ++i) { if (!arch_get_random_long(&block.rdrand[i])) @@ -1486,9 +1430,9 @@ static ssize_t urandom_read_nowarn(struc { int ret; =20 - nbytes =3D min_t(size_t, nbytes, INT_MAX >> (POOL_ENTROPY_SHIFT + 3)); + nbytes =3D min_t(size_t, nbytes, INT_MAX >> 6); ret =3D extract_crng_user(buf, nbytes); - trace_urandom_read(8 * nbytes, 0, POOL_ENTROPY_BITS()); + trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count); return ret; } =20 @@ -1527,7 +1471,7 @@ static __poll_t random_poll(struct file mask =3D 0; if (crng_ready()) mask |=3D EPOLLIN | EPOLLRDNORM; - if (POOL_ENTROPY_BITS() < random_write_wakeup_bits) + if (input_pool.entropy_count < random_write_wakeup_bits) mask |=3D EPOLLOUT | EPOLLWRNORM; return mask; } @@ -1582,8 +1526,7 @@ static long random_ioctl(struct file *f, switch (cmd) { case 
RNDGETENTCNT: /* inherently racy, no point locking */ - ent_count =3D POOL_ENTROPY_BITS(); - if (put_user(ent_count, p)) + if (put_user(input_pool.entropy_count, p)) return -EFAULT; return 0; case RNDADDTOENTCNT: @@ -1734,23 +1677,6 @@ static int proc_do_uuid(struct ctl_table return proc_dostring(&fake_table, write, buffer, lenp, ppos); } =20 -/* - * Return entropy available scaled to integral bits - */ -static int proc_do_entropy(struct ctl_table *table, int write, void *buffe= r, - size_t *lenp, loff_t *ppos) -{ - struct ctl_table fake_table; - int entropy_count; - - entropy_count =3D *(int *)table->data >> POOL_ENTROPY_SHIFT; - - fake_table.data =3D &entropy_count; - fake_table.maxlen =3D sizeof(entropy_count); - - return proc_dointvec(&fake_table, write, buffer, lenp, ppos); -} - static int sysctl_poolsize =3D POOL_BITS; static struct ctl_table random_table[] =3D { { @@ -1762,10 +1688,10 @@ static struct ctl_table random_table[] =3D }, { .procname =3D "entropy_avail", + .data =3D &input_pool.entropy_count, .maxlen =3D sizeof(int), .mode =3D 0444, - .proc_handler =3D proc_do_entropy, - .data =3D &input_pool.entropy_count, + .proc_handler =3D proc_dointvec, }, { .procname =3D "write_wakeup_threshold", @@ -1972,7 +1898,7 @@ void add_hwgenerator_randomness(const ch */ wait_event_interruptible_timeout(random_write_wait, !system_wq || kthread_should_stop() || - POOL_ENTROPY_BITS() <=3D random_write_wakeup_bits, + input_pool.entropy_count <=3D random_write_wakeup_bits, CRNG_RESEED_INTERVAL); mix_pool_bytes(buffer, count); credit_entropy_bits(entropy); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24DAFC433F5 for ; Fri, 27 May 2022 08:59:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240778AbiE0I7H (ORCPT ); 
From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Eric Biggers , Eric Biggers , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 006/111] random: always wake up entropy writers after extraction Date: Fri, 27 May 2022 10:48:38 +0200 Message-Id: <20220527084820.012764833@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 489c7fc44b5740d377e8cfdbf0851036e493af00 upstream.
Now that POOL_BITS =3D=3D POOL_MIN_BITS, we must unconditionally wake up entropy writers after every extraction. Therefore there's no point of write_wakeup_threshold, so we can move it to the dustbin of unused compatibility sysctls. While we're at it, we can fix a small comparison where we were waking up after <=3D min rather than < min. Cc: Theodore Ts'o Suggested-by: Eric Biggers Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- Documentation/admin-guide/sysctl/kernel.rst | 7 ++++- drivers/char/random.c | 33 +++++++++--------------= ----- 2 files changed, 16 insertions(+), 24 deletions(-) --- a/Documentation/admin-guide/sysctl/kernel.rst +++ b/Documentation/admin-guide/sysctl/kernel.rst @@ -1030,14 +1030,17 @@ This is a directory, with the following * ``poolsize``: the entropy pool size, in bits; =20 * ``urandom_min_reseed_secs``: obsolete (used to determine the minimum - number of seconds between urandom pool reseeding). + number of seconds between urandom pool reseeding). This file is + writable for compatibility purposes, but writing to it has no effect + on any RNG behavior. =20 * ``uuid``: a UUID generated every time this is retrieved (this can thus be used to generate UUIDs at will); =20 * ``write_wakeup_threshold``: when the entropy count drops below this (as a number of bits), processes waiting to write to ``/dev/random`` - are woken up. + are woken up. This file is writable for compatibility purposes, but + writing to it has no effect on any RNG behavior. 
=20 If ``drivers/char/random.c`` is built with ``ADD_INTERRUPT_BENCH`` defined, these additional entries are present: --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -296,12 +296,6 @@ enum { */ static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); static struct fasync_struct *fasync; -/* - * If the entropy count falls under this number of bits, then we - * should wake up processes which are selecting or polling on write - * access to /dev/random. - */ -static int random_write_wakeup_bits =3D POOL_MIN_BITS; =20 static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); @@ -739,10 +733,8 @@ static void crng_reseed(struct crng_stat return; } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) !=3D entro= py_count); extract_entropy(buf.key, sizeof(buf.key)); - if (random_write_wakeup_bits) { - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - } + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); } else { _extract_crng(&primary_crng, buf.block); _crng_backtrack_protect(&primary_crng, buf.block, @@ -1471,7 +1463,7 @@ static __poll_t random_poll(struct file mask =3D 0; if (crng_ready()) mask |=3D EPOLLIN | EPOLLRDNORM; - if (input_pool.entropy_count < random_write_wakeup_bits) + if (input_pool.entropy_count < POOL_MIN_BITS) mask |=3D EPOLLOUT | EPOLLWRNORM; return mask; } @@ -1556,7 +1548,7 @@ static long random_ioctl(struct file *f, */ if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (xchg(&input_pool.entropy_count, 0) && random_write_wakeup_bits) { + if (xchg(&input_pool.entropy_count, 0)) { wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); } @@ -1636,9 +1628,9 @@ SYSCALL_DEFINE3(getrandom, char __user * =20 #include =20 -static int min_write_thresh; -static int max_write_thresh =3D POOL_BITS; static int random_min_urandom_seed =3D 60; +static int random_write_wakeup_bits =3D POOL_MIN_BITS; +static int sysctl_poolsize =3D 
POOL_BITS; static char sysctl_bootid[16]; =20 /* @@ -1677,7 +1669,6 @@ static int proc_do_uuid(struct ctl_table return proc_dostring(&fake_table, write, buffer, lenp, ppos); } =20 -static int sysctl_poolsize =3D POOL_BITS; static struct ctl_table random_table[] =3D { { .procname =3D "poolsize", @@ -1698,9 +1689,7 @@ static struct ctl_table random_table[] =3D .data =3D &random_write_wakeup_bits, .maxlen =3D sizeof(int), .mode =3D 0644, - .proc_handler =3D proc_dointvec_minmax, - .extra1 =3D &min_write_thresh, - .extra2 =3D &max_write_thresh, + .proc_handler =3D proc_dointvec, }, { .procname =3D "urandom_min_reseed_secs", @@ -1892,13 +1881,13 @@ void add_hwgenerator_randomness(const ch } =20 /* Throttle writing if we're above the trickle threshold. - * We'll be woken up again once below random_write_wakeup_thresh, - * when the calling thread is about to terminate, or once - * CRNG_RESEED_INTERVAL has lapsed. + * We'll be woken up again once below POOL_MIN_BITS, when + * the calling thread is about to terminate, or once + * CRNG_RESEED_INTERVAL has elapsed. 
*/ wait_event_interruptible_timeout(random_write_wait, !system_wq || kthread_should_stop() || - input_pool.entropy_count <=3D random_write_wakeup_bits, + input_pool.entropy_count < POOL_MIN_BITS, CRNG_RESEED_INTERVAL); mix_pool_bytes(buffer, count); credit_entropy_bits(entropy); From nobody Tue Apr 28 23:18:44 2026 From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org,
Sultan Alsawaf , Eric Biggers , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 007/111] random: make credit_entropy_bits() always safe Date: Fri, 27 May 2022 10:48:39 +0200 Message-Id: <20220527084820.150052298@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit a49c010e61e1938be851f5e49ac219d49b704103 upstream. This is called from various hwgenerator drivers, so rather than having one "safe" version for userspace and one "unsafe" version for the kernel, just make everything safe; the checks are cheap and sensible to have anyway. Reported-by: Sultan Alsawaf Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 29 +++++++++-------------------- 1 file changed, 9 insertions(+), 20 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -447,18 +447,15 @@ static void process_random_ready_list(vo spin_unlock_irqrestore(&random_ready_list_lock, flags); } =20 -/* - * Credit (or debit) the entropy store with n bits of entropy. - * Use credit_entropy_bits_safe() if the value comes from userspace - * or otherwise should be checked for extreme values. 
- */
 static void credit_entropy_bits(int nbits)
 {
 	int entropy_count, orig;
 
-	if (!nbits)
+	if (nbits <= 0)
 		return;
 
+	nbits = min(nbits, POOL_BITS);
+
 	do {
 		orig = READ_ONCE(input_pool.entropy_count);
 		entropy_count = min(POOL_BITS, orig + nbits);
@@ -470,18 +467,6 @@ static void credit_entropy_bits(int nbit
 		crng_reseed(&primary_crng, true);
 }
 
-static int credit_entropy_bits_safe(int nbits)
-{
-	if (nbits < 0)
-		return -EINVAL;
-
-	/* Cap the value to avoid overflows */
-	nbits = min(nbits, POOL_BITS);
-
-	credit_entropy_bits(nbits);
-	return 0;
-}
-
 /*********************************************************************
  *
  * CRNG using CHACHA20
@@ -1526,7 +1511,10 @@ static long random_ioctl(struct file *f,
 			return -EPERM;
 		if (get_user(ent_count, p))
 			return -EFAULT;
-		return credit_entropy_bits_safe(ent_count);
+		if (ent_count < 0)
+			return -EINVAL;
+		credit_entropy_bits(ent_count);
+		return 0;
 	case RNDADDENTROPY:
 		if (!capable(CAP_SYS_ADMIN))
 			return -EPERM;
@@ -1539,7 +1527,8 @@ static long random_ioctl(struct file *f,
 		retval = write_pool((const char __user *)p, size);
 		if (retval < 0)
 			return retval;
-		return credit_entropy_bits_safe(ent_count);
+		credit_entropy_bits(ent_count);
+		return 0;
 	case RNDZAPENTCNT:
 	case RNDCLEARPOOL:
 		/*

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Eric Biggers, "Jason A. Donenfeld"
Subject: [PATCH 5.17 008/111] random: remove use_input_pool parameter from crng_reseed()
Date: Fri, 27 May 2022 10:48:40 +0200
Message-Id: <20220527084820.288878202@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: Eric Biggers

commit 5d58ea3a31cc98b9fa563f6921d3d043bf0103d1 upstream.

The primary_crng is always reseeded from the input_pool, while the NUMA
crngs are always reseeded from the primary_crng.  Remove the redundant
'use_input_pool' parameter from crng_reseed() and just directly check
whether the crng is the primary_crng.

Signed-off-by: Eric Biggers
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -365,7 +365,7 @@ static struct {
 
 static void extract_entropy(void *buf, size_t nbytes);
 
-static void crng_reseed(struct crng_state *crng, bool use_input_pool);
+static void crng_reseed(struct crng_state *crng);
 
 /*
  * This function adds bytes into the entropy "pool".  It does not
@@ -464,7 +464,7 @@ static void credit_entropy_bits(int nbit
 	trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_);
 
 	if (crng_init < 2 && entropy_count >= POOL_MIN_BITS)
-		crng_reseed(&primary_crng, true);
+		crng_reseed(&primary_crng);
 }
 
@@ -701,7 +701,7 @@ static int crng_slow_load(const u8 *cp,
 	return 1;
 }
 
-static void crng_reseed(struct crng_state *crng, bool use_input_pool)
+static void crng_reseed(struct crng_state *crng)
 {
 	unsigned long flags;
 	int i;
@@ -710,7 +710,7 @@ static void crng_reseed(struct crng_stat
 		u32 key[8];
 	} buf;
 
-	if (use_input_pool) {
+	if (crng == &primary_crng) {
 		int entropy_count;
 		do {
 			entropy_count = READ_ONCE(input_pool.entropy_count);
@@ -748,7 +748,7 @@ static void _extract_crng(struct crng_st
 		init_time = READ_ONCE(crng->init_time);
 		if (time_after(READ_ONCE(crng_global_init_time), init_time) ||
 		    time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
-			crng_reseed(crng, crng == &primary_crng);
+			crng_reseed(crng);
 	}
 	spin_lock_irqsave(&crng->lock, flags);
 	chacha20_block(&crng->state[0], out);
@@ -1547,7 +1547,7 @@ static long random_ioctl(struct file *f,
 			return -EPERM;
 		if (crng_init < 2)
 			return -ENODATA;
-		crng_reseed(&primary_crng, true);
+		crng_reseed(&primary_crng);
 		WRITE_ONCE(crng_global_init_time, jiffies - 1);
 		return 0;
 	default:

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sebastian Andrzej Siewior, Dominik Brodowski, Eric Biggers, Andy Lutomirski, Jonathan Neuschäfer, "Jason A. Donenfeld"
Subject: [PATCH 5.17 009/111] random: remove batched entropy locking
Date: Fri, 27 May 2022 10:48:41 +0200
Message-Id: <20220527084820.419381242@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 77760fd7f7ae3dfd03668204e708d1568d75447d upstream.

Rather than use spinlocks to protect batched entropy, we can instead
disable interrupts locally, since we're dealing with per-cpu data, and
manage resets with a basic generation counter. At the same time, we
can't quite do this on PREEMPT_RT, where we still want spinlocks-as-
mutexes semantics. So we use a local_lock_t, which provides the right
behavior for each. Because this is a per-cpu lock, that generation
counter is still doing the necessary CPU-to-CPU communication.

This should improve performance a bit. It will also fix the linked
splat that Jonathan received with a PROVE_RAW_LOCK_NESTING=y.

Reviewed-by: Sebastian Andrzej Siewior
Reviewed-by: Dominik Brodowski
Reviewed-by: Eric Biggers
Suggested-by: Andy Lutomirski
Reported-by: Jonathan Neuschäfer
Tested-by: Jonathan Neuschäfer
Link: https://lore.kernel.org/lkml/YfMa0QgsjCVdRAvJ@latitude/
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 55 +++++++++++++++++++++++++---------------------------
 1 file changed, 28 insertions(+), 27 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1731,13 +1731,16 @@ static int __init random_sysctls_init(vo
 device_initcall(random_sysctls_init);
 #endif	/* CONFIG_SYSCTL */
 
+static atomic_t batch_generation = ATOMIC_INIT(0);
+
 struct batched_entropy {
 	union {
 		u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)];
 		u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
 	};
+	local_lock_t lock;
 	unsigned int position;
-	spinlock_t batch_lock;
+	int generation;
 };
 
@@ -1749,7 +1752,7 @@ struct batched_entropy {
  * point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
-	.batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
+	.lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock)
 };
 
 u64 get_random_u64(void)
@@ -1758,67 +1761,65 @@ u64 get_random_u64(void)
 	unsigned long flags;
 	struct batched_entropy *batch;
 	static void *previous;
+	int next_gen;
 
 	warn_unseeded_randomness(&previous);
 
+	local_lock_irqsave(&batched_entropy_u64.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u64);
-	spin_lock_irqsave(&batch->batch_lock, flags);
-	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+
+	next_gen = atomic_read(&batch_generation);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0 ||
+	    next_gen != batch->generation) {
 		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
+		batch->generation = next_gen;
 	}
+
 	ret = batch->entropy_u64[batch->position++];
-	spin_unlock_irqrestore(&batch->batch_lock, flags);
+	local_unlock_irqrestore(&batched_entropy_u64.lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u64);
 
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
-	.batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
+	.lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock)
 };
+
 u32 get_random_u32(void)
 {
 	u32 ret;
 	unsigned long flags;
 	struct batched_entropy *batch;
 	static void *previous;
+	int next_gen;
 
 	warn_unseeded_randomness(&previous);
 
+	local_lock_irqsave(&batched_entropy_u32.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u32);
-	spin_lock_irqsave(&batch->batch_lock, flags);
-	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+
+	next_gen = atomic_read(&batch_generation);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0 ||
+	    next_gen != batch->generation) {
 		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
+		batch->generation = next_gen;
 	}
+
 	ret = batch->entropy_u32[batch->position++];
-	spin_unlock_irqrestore(&batch->batch_lock, flags);
+	local_unlock_irqrestore(&batched_entropy_u32.lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
 /* It's important to invalidate all potential batched entropy that might
  * be stored before the crng is initialized, which we can do lazily by
- * simply resetting the counter to zero so that it's re-extracted on the
- * next usage. */
+ * bumping the generation counter.
+ */
 static void invalidate_batched_entropy(void)
 {
-	int cpu;
-	unsigned long flags;
-
-	for_each_possible_cpu(cpu) {
-		struct batched_entropy *batched_entropy;
-
-		batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu);
-		spin_lock_irqsave(&batched_entropy->batch_lock, flags);
-		batched_entropy->position = 0;
-		spin_unlock(&batched_entropy->batch_lock);
-
-		batched_entropy = per_cpu_ptr(&batched_entropy_u64, cpu);
-		spin_lock(&batched_entropy->batch_lock);
-		batched_entropy->position = 0;
-		spin_unlock_irqrestore(&batched_entropy->batch_lock, flags);
-	}
+	atomic_inc(&batch_generation);
 }
 
 /**

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski, Eric Biggers, "Jason A. Donenfeld"
Subject: [PATCH 5.17 010/111] random: fix locking in crng_fast_load()
Date: Fri, 27 May 2022 10:48:42 +0200
Message-Id: <20220527084820.568796827@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: Dominik Brodowski

commit 7c2fe2b32bf76441ff5b7a425b384e5f75aa530a upstream.

crng_init is protected by primary_crng->lock, so keep holding that lock
when incrementing crng_init from 0 to 1 in crng_fast_load(). The call to
pr_notice() can wait until the lock is released; this code path cannot
be reached twice, as crng_fast_load() aborts early if crng_init > 0.

Signed-off-by: Dominik Brodowski
Reviewed-by: Eric Biggers
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -647,12 +647,13 @@ static size_t crng_fast_load(const u8 *c
 		p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
 		cp++; crng_init_cnt++; len--; ret++;
 	}
-	spin_unlock_irqrestore(&primary_crng.lock, flags);
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
 		invalidate_batched_entropy();
 		crng_init = 1;
-		pr_notice("fast init done\n");
 	}
+	spin_unlock_irqrestore(&primary_crng.lock, flags);
+	if (crng_init == 1)
+		pr_notice("fast init done\n");
 	return ret;
 }

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Eric Biggers, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 011/111] random: use RDSEED instead of RDRAND in entropy extraction
Date: Fri, 27 May 2022 10:48:43 +0200
Message-Id: <20220527084820.761344978@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 28f425e573e906a4c15f8392cc2b1561ef448595 upstream.

When /dev/random was directly connected with entropy extraction, without
any expansion stage, extract_buf() was called for every 10 bytes of data
read from /dev/random. For that reason, RDRAND was used rather than
RDSEED. At the same time, crng_reseed() was still only called every 5
minutes, so there RDSEED made sense. Those olden days were also a time
when the entropy collector did not use a cryptographic hash function,
which meant most bets were off in terms of real preimage resistance. For
that reason too it didn't matter _that_ much whether RDSEED was mixed in
before or after entropy extraction; both choices were sort of bad.

But now we have a cryptographic hash function at work, and with that we
get real preimage resistance. We also now only call extract_entropy()
every 5 minutes, rather than every 10 bytes. This allows us to do two
important things.

First, we can switch to using RDSEED in extract_entropy(), as Dominik
suggested. Second, we can ensure that RDSEED input always goes into the
cryptographic hash function with other things before being used
directly. This eliminates a category of attacks in which the CPU knows
the current state of the crng and knows that we're going to xor RDSEED
into it, and so it computes a malicious RDSEED. By going through our
hash function, it would require the CPU to compute a preimage on the
fly, which isn't going to happen.

Cc: Theodore Ts'o
Reviewed-by: Eric Biggers
Reviewed-by: Dominik Brodowski
Suggested-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -727,13 +727,8 @@ static void crng_reseed(struct crng_stat
 			CHACHA_KEY_SIZE);
 	}
 	spin_lock_irqsave(&crng->lock, flags);
-	for (i = 0; i < 8; i++) {
-		unsigned long rv;
-		if (!arch_get_random_seed_long(&rv) &&
-		    !arch_get_random_long(&rv))
-			rv = random_get_entropy();
-		crng->state[i + 4] ^= buf.key[i] ^ rv;
-	}
+	for (i = 0; i < 8; i++)
+		crng->state[i + 4] ^= buf.key[i];
 	memzero_explicit(&buf, sizeof(buf));
 	WRITE_ONCE(crng->init_time, jiffies);
 	spin_unlock_irqrestore(&crng->lock, flags);
@@ -1054,16 +1049,17 @@ static void extract_entropy(void *buf, s
 	unsigned long flags;
 	u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE];
 	struct {
-		unsigned long rdrand[32 / sizeof(long)];
+		unsigned long rdseed[32 / sizeof(long)];
 		size_t counter;
 	} block;
 	size_t i;
 
 	trace_extract_entropy(nbytes, input_pool.entropy_count);
 
-	for (i = 0; i < ARRAY_SIZE(block.rdrand); ++i) {
-		if (!arch_get_random_long(&block.rdrand[i]))
-			block.rdrand[i] = random_get_entropy();
+	for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) {
+		if (!arch_get_random_seed_long(&block.rdseed[i]) &&
+		    !arch_get_random_long(&block.rdseed[i]))
+			block.rdseed[i] = random_get_entropy();
 	}
 
 	spin_lock_irqsave(&input_pool.lock, flags);
@@ -1071,7 +1067,7 @@ static void extract_entropy(void *buf, s
 	/* seed = HASHPRF(last_key, entropy_input) */
 	blake2s_final(&input_pool.hash, seed);
 
-	/* next_key = HASHPRF(seed, RDRAND || 0) */
+	/* next_key = HASHPRF(seed, RDSEED || 0) */
 	block.counter = 0;
 	blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed));
 	blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key));
@@ -1081,7 +1077,7 @@ static void extract_entropy(void *buf, s
 
 	while (nbytes) {
 		i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE);
-		/* output = HASHPRF(seed, RDRAND || ++counter) */
+		/* output = HASHPRF(seed, RDSEED || ++counter) */
 		++block.counter;
 		blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed));
 		nbytes -= i;

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Eric Biggers, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 012/111] random: get rid of secondary crngs
Date: Fri, 27 May 2022 10:48:44 +0200
Message-Id: <20220527084820.922972262@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit a9412d510ab9a9ba411fea612903631d2e1f1601 upstream.

As the comment said, this is indeed a "hack". Since it was introduced,
it's been a constant state machine nightmare, with lots of subtle early
boot issues and a wildly complex set of machinery to keep everything in
sync. Rather than continuing to play whack-a-mole with this approach,
this commit simply removes it entirely. This commit is preparation for
"random: use simpler fast key erasure flow on per-cpu keys" in this
series, which introduces a simpler (and faster) mechanism to accomplish
the same thing.

Cc: Theodore Ts'o
Reviewed-by: Eric Biggers
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 227 +++++++++++----------------------------------------
 1 file changed, 54 insertions(+), 173 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -323,14 +323,11 @@ static struct crng_state primary_crng =
  * its value (from 0->1->2).
  */
 static int crng_init = 0;
-static bool crng_need_final_init = false;
 #define crng_ready() (likely(crng_init > 1))
 static int crng_init_cnt = 0;
-static unsigned long crng_global_init_time = 0;
 #define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE)
-static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]);
-static void _crng_backtrack_protect(struct crng_state *crng,
-				    u8 tmp[CHACHA_BLOCK_SIZE], int used);
+static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]);
+static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used);
 static void process_random_ready_list(void);
 static void _get_random_bytes(void *buf, int nbytes);
 
@@ -365,7 +362,7 @@ static struct {
 
 static void extract_entropy(void *buf, size_t nbytes);
 
-static void crng_reseed(struct crng_state *crng);
+static void crng_reseed(void);
 
 /*
  * This function adds bytes into the entropy "pool".  It does not
@@ -464,7 +461,7 @@ static void credit_entropy_bits(int nbit
 	trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_);
 
 	if (crng_init < 2 && entropy_count >= POOL_MIN_BITS)
-		crng_reseed(&primary_crng);
+		crng_reseed();
 }
 
@@ -477,16 +474,7 @@ static void credit_entropy_bits(int nbit
 
 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 
-/*
- * Hack to deal with crazy userspace progams when they are all trying
- * to access /dev/urandom in parallel. The programs are almost
- * certainly doing something terribly wrong, but we'll work around
- * their brain damage.
- */
-static struct crng_state **crng_node_pool __read_mostly;
-
 static void invalidate_batched_entropy(void);
-static void numa_crng_init(void);
 
 static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
 static int __init parse_trust_cpu(char *arg)
@@ -495,24 +483,6 @@ static int __init parse_trust_cpu(char *
 }
 early_param("random.trust_cpu", parse_trust_cpu);
 
-static bool crng_init_try_arch(struct crng_state *crng)
-{
-	int i;
-	bool arch_init = true;
-	unsigned long rv;
-
-	for (i = 4; i < 16; i++) {
-		if (!arch_get_random_seed_long(&rv) &&
-		    !arch_get_random_long(&rv)) {
-			rv = random_get_entropy();
-			arch_init = false;
-		}
-		crng->state[i] ^= rv;
-	}
-
-	return arch_init;
-}
-
 static bool __init crng_init_try_arch_early(void)
 {
 	int i;
@@ -531,100 +501,17 @@ static bool __init crng_init_try_arch_ea
 	return arch_init;
 }
 
-static void crng_initialize_secondary(struct crng_state *crng)
-{
-	chacha_init_consts(crng->state);
-	_get_random_bytes(&crng->state[4], sizeof(u32) * 12);
-	crng_init_try_arch(crng);
-	crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
-}
-
-static void __init crng_initialize_primary(void)
+static void __init crng_initialize(void)
 {
 	extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
 	if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) {
 		invalidate_batched_entropy();
-		numa_crng_init();
 		crng_init = 2;
 		pr_notice("crng init done (trusting CPU's manufacturer)\n");
 	}
 	primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
 }
 
-static void crng_finalize_init(void)
-{
-	if (!system_wq) {
-		/* We can't call numa_crng_init until we have workqueues,
-		 * so mark this for processing later. */
-		crng_need_final_init = true;
-		return;
-	}
-
-	invalidate_batched_entropy();
-	numa_crng_init();
-	crng_init = 2;
-	crng_need_final_init = false;
-	process_random_ready_list();
-	wake_up_interruptible(&crng_init_wait);
-	kill_fasync(&fasync, SIGIO, POLL_IN);
-	pr_notice("crng init done\n");
-	if (unseeded_warning.missed) {
-		pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
-			  unseeded_warning.missed);
-		unseeded_warning.missed = 0;
-	}
-	if (urandom_warning.missed) {
-		pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
-			  urandom_warning.missed);
-		urandom_warning.missed = 0;
-	}
-}
-
-static void do_numa_crng_init(struct work_struct *work)
-{
-	int i;
-	struct crng_state *crng;
-	struct crng_state **pool;
-
-	pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL | __GFP_NOFAIL);
-	for_each_online_node(i) {
-		crng = kmalloc_node(sizeof(struct crng_state),
-				    GFP_KERNEL | __GFP_NOFAIL, i);
-		spin_lock_init(&crng->lock);
-		crng_initialize_secondary(crng);
-		pool[i] = crng;
-	}
-	/* pairs with READ_ONCE() in select_crng() */
-	if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL) {
-		for_each_node(i)
-			kfree(pool[i]);
-		kfree(pool);
-	}
-}
-
-static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init);
-
-static void numa_crng_init(void)
-{
-	if (IS_ENABLED(CONFIG_NUMA))
-		schedule_work(&numa_crng_init_work);
-}
-
-static struct crng_state *select_crng(void)
-{
-	if (IS_ENABLED(CONFIG_NUMA)) {
-		struct crng_state **pool;
-		int nid = numa_node_id();
-
-		/* pairs with cmpxchg_release() in do_numa_crng_init() */
-		pool = READ_ONCE(crng_node_pool);
-		if (pool && pool[nid])
-			return pool[nid];
-	}
-
-	return &primary_crng;
-}
-
 /*
  * crng_fast_load() can be called by code in the interrupt service
  * path.  So we can't afford to dilly-dally. Returns the number of
@@ -702,68 +589,71 @@ static int crng_slow_load(const u8 *cp,
 	return 1;
 }
 
-static void crng_reseed(struct crng_state *crng)
+static void crng_reseed(void)
 {
 	unsigned long flags;
-	int i;
+	int i, entropy_count;
 	union {
 		u8 block[CHACHA_BLOCK_SIZE];
 		u32 key[8];
 	} buf;
 
-	if (crng == &primary_crng) {
-		int entropy_count;
-		do {
-			entropy_count = READ_ONCE(input_pool.entropy_count);
-			if (entropy_count < POOL_MIN_BITS)
-				return;
-		} while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);
-		extract_entropy(buf.key, sizeof(buf.key));
-		wake_up_interruptible(&random_write_wait);
-		kill_fasync(&fasync, SIGIO, POLL_OUT);
-	} else {
-		_extract_crng(&primary_crng, buf.block);
-		_crng_backtrack_protect(&primary_crng, buf.block,
-					CHACHA_KEY_SIZE);
-	}
-	spin_lock_irqsave(&crng->lock, flags);
+	do {
+		entropy_count = READ_ONCE(input_pool.entropy_count);
+		if (entropy_count < POOL_MIN_BITS)
+			return;
+	} while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);
+	extract_entropy(buf.key, sizeof(buf.key));
+	wake_up_interruptible(&random_write_wait);
+	kill_fasync(&fasync, SIGIO, POLL_OUT);
+
+	spin_lock_irqsave(&primary_crng.lock, flags);
 	for (i = 0; i < 8; i++)
-		crng->state[i + 4] ^= buf.key[i];
+		primary_crng.state[i + 4] ^= buf.key[i];
 	memzero_explicit(&buf, sizeof(buf));
-	WRITE_ONCE(crng->init_time, jiffies);
-	spin_unlock_irqrestore(&crng->lock, flags);
-	if (crng == &primary_crng && crng_init < 2)
-		crng_finalize_init();
+	WRITE_ONCE(primary_crng.init_time, jiffies);
+	spin_unlock_irqrestore(&primary_crng.lock, flags);
+	if (crng_init < 2) {
+		invalidate_batched_entropy();
+		crng_init = 2;
+		process_random_ready_list();
+		wake_up_interruptible(&crng_init_wait);
+		kill_fasync(&fasync, SIGIO, POLL_IN);
+		pr_notice("crng init done\n");
+		if (unseeded_warning.missed) {
+			pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
+				  unseeded_warning.missed);
+			unseeded_warning.missed = 0;
+		}
+		if (urandom_warning.missed) {
+			pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
+				  urandom_warning.missed);
+			urandom_warning.missed = 0;
+		}
+	}
 }
 
-static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE])
+static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
 {
 	unsigned long flags, init_time;
 
 	if (crng_ready()) {
-		init_time = READ_ONCE(crng->init_time);
-		if (time_after(READ_ONCE(crng_global_init_time), init_time) ||
-		    time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
-			crng_reseed(crng);
-	}
-	spin_lock_irqsave(&crng->lock, flags);
-	chacha20_block(&crng->state[0], out);
-	if (crng->state[12] == 0)
-		crng->state[13]++;
-	spin_unlock_irqrestore(&crng->lock, flags);
-}
-
-static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
-{
-	_extract_crng(select_crng(), out);
+		init_time = READ_ONCE(primary_crng.init_time);
+		if (time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
+			crng_reseed();
+	}
+	spin_lock_irqsave(&primary_crng.lock, flags);
+	chacha20_block(&primary_crng.state[0], out);
+	if (primary_crng.state[12] == 0)
+		primary_crng.state[13]++;
+	spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
 /*
  * Use the leftover bytes from the CRNG block output (if there is
  * enough) to mutate the CRNG key to provide backtracking protection.
  */
-static void _crng_backtrack_protect(struct crng_state *crng,
-				    u8 tmp[CHACHA_BLOCK_SIZE], int used)
+static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used)
 {
 	unsigned long flags;
 	u32 *s, *d;
@@ -774,17 +664,12 @@ static void _crng_backtrack_protect(stru
 		extract_crng(tmp);
 		used = 0;
 	}
-	spin_lock_irqsave(&crng->lock, flags);
+	spin_lock_irqsave(&primary_crng.lock, flags);
 	s = (u32 *)&tmp[used];
-	d = &crng->state[4];
+	d = &primary_crng.state[4];
 	for (i = 0; i < 8; i++)
 		*d++ ^= *s++;
-	spin_unlock_irqrestore(&crng->lock, flags);
-}
-
-static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used)
-{
-	_crng_backtrack_protect(select_crng(), tmp, used);
+	spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
 static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
@@ -1371,10 +1256,7 @@ static void __init init_std_data(void)
 int __init rand_initialize(void)
 {
 	init_std_data();
-	if (crng_need_final_init)
-		crng_finalize_init();
-	crng_initialize_primary();
-	crng_global_init_time = jiffies;
+	crng_initialize();
 	if (ratelimit_disable) {
 		urandom_warning.interval = 0;
 		unseeded_warning.interval = 0;
@@ -1544,8 +1426,7 @@ static long random_ioctl(struct file *f,
 			return -EPERM;
 		if (crng_init < 2)
 			return -ENODATA;
-		crng_reseed(&primary_crng);
-		WRITE_ONCE(crng_global_init_time, jiffies - 1);
+		crng_reseed();
 		return 0;
 	default:
 		return -EINVAL;

From nobody Tue Apr 28 23:18:44 2026
vger.kernel.org with ESMTP id S1350503AbiE0Iz0 (ORCPT ); Fri, 27 May 2022 04:55:26 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D09015C874; Fri, 27 May 2022 01:54:07 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 694DEB823D9; Fri, 27 May 2022 08:54:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 81FB9C385A9; Fri, 27 May 2022 08:54:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641645; bh=tdJgFI6BSEbMlog94O8U41zIwpESdHgVPKAWwGejSCQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=A4Cd04XJHh9M942xrkkf9uoILUUkQOt/GZLdJ1VI6opp7H0tPMZX08aSOpXOQrq1J 2sc+akQSuPrWk0wjp/53X7eDuoJZghv9X1gxGVlp2M9rgjCtGwOZNhJLPZxj6fjfTY 39AZI6MUdAag8EqLhGMSlB8gPUabkQHnMEnAffcs= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 013/111] random: inline leaves of rand_initialize() Date: Fri, 27 May 2022 10:48:45 +0200 Message-Id: <20220527084821.044236791@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 8566417221fcec51346ec164e920dacb979c6b5f upstream. This is a preparatory commit for the following one. We simply inline the various functions that rand_initialize() calls that have no other callers. 
The compiler was doing this anyway before. Doing this will allow us to reorganize this after. We can then move the trust_cpu and parse_trust_cpu definitions a bit closer to where they're actually used, which makes the code easier to read. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 90 ++++++++++++++++++---------------------------= ----- 1 file changed, 33 insertions(+), 57 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -476,42 +476,6 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init =20 static void invalidate_batched_entropy(void); =20 -static bool trust_cpu __ro_after_init =3D IS_ENABLED(CONFIG_RANDOM_TRUST_C= PU); -static int __init parse_trust_cpu(char *arg) -{ - return kstrtobool(arg, &trust_cpu); -} -early_param("random.trust_cpu", parse_trust_cpu); - -static bool __init crng_init_try_arch_early(void) -{ - int i; - bool arch_init =3D true; - unsigned long rv; - - for (i =3D 4; i < 16; i++) { - if (!arch_get_random_seed_long_early(&rv) && - !arch_get_random_long_early(&rv)) { - rv =3D random_get_entropy(); - arch_init =3D false; - } - primary_crng.state[i] ^=3D rv; - } - - return arch_init; -} - -static void __init crng_initialize(void) -{ - extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); - if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) { - invalidate_batched_entropy(); - crng_init =3D 2; - pr_notice("crng init done (trusting CPU's manufacturer)\n"); - } - primary_crng.init_time =3D jiffies - CRNG_RESEED_INTERVAL - 1; -} - /* * crng_fast_load() can be called by code in the interrupt service * path. So we can't afford to dilly-dally. 
Returns the number of @@ -1220,17 +1184,28 @@ int __must_check get_random_bytes_arch(v } EXPORT_SYMBOL(get_random_bytes_arch); =20 +static bool trust_cpu __ro_after_init =3D IS_ENABLED(CONFIG_RANDOM_TRUST_C= PU); +static int __init parse_trust_cpu(char *arg) +{ + return kstrtobool(arg, &trust_cpu); +} +early_param("random.trust_cpu", parse_trust_cpu); + /* - * init_std_data - initialize pool with system data - * - * This function clears the pool's entropy count and mixes some system - * data into the pool to prepare it for use. The pool is not cleared - * as that can only decrease the entropy in the pool. + * Note that setup_arch() may call add_device_randomness() + * long before we get here. This allows seeding of the pools + * with some platform dependent data very early in the boot + * process. But it limits our options here. We must use + * statically allocated structures that already have all + * initializations complete at compile time. We should also + * take care not to overwrite the precious per platform data + * we were given. */ -static void __init init_std_data(void) +int __init rand_initialize(void) { int i; ktime_t now =3D ktime_get_real(); + bool arch_init =3D true; unsigned long rv; =20 mix_pool_bytes(&now, sizeof(now)); @@ -1241,22 +1216,23 @@ static void __init init_std_data(void) mix_pool_bytes(&rv, sizeof(rv)); } mix_pool_bytes(utsname(), sizeof(*(utsname()))); -} =20 -/* - * Note that setup_arch() may call add_device_randomness() - * long before we get here. This allows seeding of the pools - * with some platform dependent data very early in the boot - * process. But it limits our options here. We must use - * statically allocated structures that already have all - * initializations complete at compile time. We should also - * take care not to overwrite the precious per platform data - * we were given. 
- */ -int __init rand_initialize(void) -{ - init_std_data(); - crng_initialize(); + extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); + for (i =3D 4; i < 16; i++) { + if (!arch_get_random_seed_long_early(&rv) && + !arch_get_random_long_early(&rv)) { + rv =3D random_get_entropy(); + arch_init =3D false; + } + primary_crng.state[i] ^=3D rv; + } + if (arch_init && trust_cpu && crng_init < 2) { + invalidate_batched_entropy(); + crng_init =3D 2; + pr_notice("crng init done (trusting CPU's manufacturer)\n"); + } + primary_crng.init_time =3D jiffies - CRNG_RESEED_INTERVAL - 1; + if (ratelimit_disable) { urandom_warning.interval =3D 0; unseeded_warning.interval =3D 0; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B8F3C433FE for ; Fri, 27 May 2022 08:57:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350241AbiE0I5S (ORCPT ); Fri, 27 May 2022 04:57:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33308 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350572AbiE0Iza (ORCPT ); Fri, 27 May 2022 04:55:30 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 96B8247056; Fri, 27 May 2022 01:54:17 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id BBCAB61C01; Fri, 27 May 2022 08:54:16 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9333BC34100; Fri, 27 May 2022 08:54:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; 
s=korg; t=1653641656; bh=XxK0OtvwNhgUMCOyhriJ/+qNy/K+HjGgOW3wfyCgR3A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FKLC0RGwjyIDUPWq6JW3GtqjKMKHcfmv7J7OwODRZZCahtj78y8+tHgN87zVpmgso +LAgA4kTJA1xnXV7FygevDefFKFww4yBMOoY1Ykt5BfLpW67shetLXpm73Hp5kXxxF ltkoVTsVVZ8hcMNXUZ9u2sXDUjVgV4dUQ84RiSi4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 014/111] random: ensure early RDSEED goes through mixer on init Date: Fri, 27 May 2022 10:48:46 +0200 Message-Id: <20220527084821.184949574@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit a02cf3d0dd77244fd5333ac48d78871de459ae6d upstream. Continuing the reasoning of "random: use RDSEED instead of RDRAND in entropy extraction" from this series, at init time we also don't want to be xoring RDSEED directly into the crng. Instead it's safer to put it into our entropy collector and then re-extract it, so that it goes through a hash function with preimage resistance. As a matter of hygiene, we also order these now so that the RDSEED byte are hashed in first, followed by the bytes that are likely more predictable (e.g. utsname()). Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1208,24 +1208,18 @@ int __init rand_initialize(void) bool arch_init =3D true; unsigned long rv; =20 - mix_pool_bytes(&now, sizeof(now)); for (i =3D BLAKE2S_BLOCK_SIZE; i > 0; i -=3D sizeof(rv)) { - if (!arch_get_random_seed_long(&rv) && - !arch_get_random_long(&rv)) - rv =3D random_get_entropy(); - mix_pool_bytes(&rv, sizeof(rv)); - } - mix_pool_bytes(utsname(), sizeof(*(utsname()))); - - extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); - for (i =3D 4; i < 16; i++) { if (!arch_get_random_seed_long_early(&rv) && !arch_get_random_long_early(&rv)) { rv =3D random_get_entropy(); arch_init =3D false; } - primary_crng.state[i] ^=3D rv; + mix_pool_bytes(&rv, sizeof(rv)); } + mix_pool_bytes(&now, sizeof(now)); + mix_pool_bytes(utsname(), sizeof(*(utsname()))); + + extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); if (arch_init && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); crng_init =3D 2; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78B38C433FE for ; Fri, 27 May 2022 08:58:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350358AbiE0I6i (ORCPT ); Fri, 27 May 2022 04:58:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58734 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350280AbiE0Izy (ORCPT ); Fri, 27 May 2022 04:55:54 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3332F1157D8; 
Fri, 27 May 2022 01:54:26 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id A977DCE238F; Fri, 27 May 2022 08:54:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 78FBFC385B8; Fri, 27 May 2022 08:54:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641663; bh=hwxcHasEV/bb2nOwjqLI3dCmvo+Iqwr+zYuY74pKObQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gDW3ldJ7ifKU7ArWkdyp+x2/D0l932+013/M1FExTuFjLRAWjZl3DZDyDtFYe39V4 ZJNLJaQ7mFUgQDhQA/bXt2CbM1saIDDPBfc54CyWLZHOL1NbZw3NeEfBqGWhQHvouW T7+DGFlGKcnwdc6MqNELcLY0BrNnBdGyUecMFA7Q= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 015/111] random: do not xor RDRAND when writing into /dev/random Date: Fri, 27 May 2022 10:48:47 +0200 Message-Id: <20220527084821.312132171@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 91c2afca290ed3034841c8c8532e69ed9e16cf34 upstream. Continuing the reasoning of "random: ensure early RDSEED goes through mixer on init", we don't want RDRAND interacting with anything without going through the mixer function, as a backdoored CPU could presumably cancel out data during an xor, which it'd have a harder time doing when being forced through a cryptographic hash function. 
There's actually no need at all to be calling RDRAND in write_pool(), because before we extract from the pool, we always do so with 32 bytes of RDSEED hashed in at that stage. Xoring at this stage is needless and introduces a minor liability. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 14 ++------------ 1 file changed, 2 insertions(+), 12 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1305,25 +1305,15 @@ static __poll_t random_poll(struct file static int write_pool(const char __user *buffer, size_t count) { size_t bytes; - u32 t, buf[16]; + u8 buf[BLAKE2S_BLOCK_SIZE]; const char __user *p =3D buffer; =20 while (count > 0) { - int b, i =3D 0; - bytes =3D min(count, sizeof(buf)); - if (copy_from_user(&buf, p, bytes)) + if (copy_from_user(buf, p, bytes)) return -EFAULT; - - for (b =3D bytes; b > 0; b -=3D sizeof(u32), i++) { - if (!arch_get_random_int(&t)) - break; - buf[i] ^=3D t; - } - count -=3D bytes; p +=3D bytes; - mix_pool_bytes(buf, bytes); cond_resched(); } From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BBAA5C4332F for ; Fri, 27 May 2022 09:03:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350300AbiE0JDS (ORCPT ); Fri, 27 May 2022 05:03:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56006 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350518AbiE0JAD (ORCPT ); Fri, 27 May 2022 05:00:03 -0400 Received: from 
ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 942F910274F; Fri, 27 May 2022 01:56:06 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id D5D56B823D9; Fri, 27 May 2022 08:56:04 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EEB60C34100; Fri, 27 May 2022 08:56:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641763; bh=IRIB/nfddzpYszgOCi6TNfrnOf4M5Ta9hww7v3gomiQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=0E4lgKzPGYrm4UkdSJEov2Dqdp2ww1OATHWGkcejLmZgi9mz0UE9nviL4SLxNvFvM IjxxDBLqnIhwnNBh1JliDO/Zn3Lmwwe8q3lEc9a5wum95/Je4LVq5FWDonGNub/B2+ 9+6yG35LkExw2M4crroccHfq17V0XtF6dNE6T1/E= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 016/111] random: absorb fast pool into input pool after fast load Date: Fri, 27 May 2022 10:48:48 +0200 Message-Id: <20220527084821.509701521@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit c30c575db4858f0bbe5e315ff2e529c782f33a1f upstream. During crng_init =3D=3D 0, we never credit entropy in add_interrupt_ randomness(), but instead dump it directly into the primary_crng. 
That's fine, except for the fact that we then wind up throwing away that entropy later when we switch to extracting from the input pool and xoring into (and later in this series overwriting) the primary_crng key. The two other early init sites -- add_hwgenerator_randomness()'s use crng_fast_load() and add_device_ randomness()'s use of crng_slow_load() -- always additionally give their inputs to the input pool. But not add_interrupt_randomness(). This commit fixes that shortcoming by calling mix_pool_bytes() after crng_fast_load() in add_interrupt_randomness(). That's partially verboten on PREEMPT_RT, where it implies taking spinlock_t from an IRQ handler. But this also only happens during early boot and then never again after that. Plus it's a trylock so it has the same considerations as calling crng_fast_load(), which we're already using. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Suggested-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 4 ++++ 1 file changed, 4 insertions(+) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -850,6 +850,10 @@ void add_interrupt_randomness(int irq) crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) { fast_pool->count =3D 0; fast_pool->last =3D now; + if (spin_trylock(&input_pool.lock)) { + _mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + spin_unlock(&input_pool.lock); + } } return; } From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 654EEC38162 for ; Fri, 27 May 2022 09:06:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351435AbiE0JGP (ORCPT ); Fri, 27 May 2022 05:06:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56414 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350257AbiE0I7a (ORCPT ); Fri, 27 May 2022 04:59:30 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4F2ED5BD31; Fri, 27 May 2022 01:55:38 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 47D71CE23D5; Fri, 27 May 2022 08:55:36 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E5B96C385A9; Fri, 27 May 2022 08:55:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641734; bh=746zp27G6htukWBKN0/9hfFS21cQDj4NBfg1PXZYjOE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=j9xN7Rs5ciclNMdHFhKxLH2ROy3Dvyg297LCmbJLkz0qsuy9i3t4zjKfPS3qUnAuA wlMvo8gWSIakCu274bEH4+ffj5LzdJoB5+pQTR+MTVnizxcz2SZmasT9Z79E0MPf+o TGHVqHmnLSkQDJP9GaAKIzuXe3ZnQhH+xAH9dYxg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Sebastian Andrzej Siewior , Jann Horn , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 017/111] random: use simpler fast key erasure flow on per-cpu keys Date: Fri, 27 May 2022 10:48:49 +0200 Message-Id: <20220527084821.718346212@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: "Jason A. Donenfeld" commit 186873c549df11b63e17062f863654e1501e1524 upstream. Rather than the clunky NUMA full ChaCha state system we had prior, this commit is closer to the original "fast key erasure RNG" proposal from , by simply treating ChaCha keys on a per-cpu basis. All entropy is extracted to a base crng key of 32 bytes. This base crng has a birthdate and a generation counter. When we go to take bytes from the crng, we first check if the birthdate is too old; if it is, we reseed per usual. Then we start working on a per-cpu crng. This per-cpu crng makes sure that it has the same generation counter as the base crng. If it doesn't, it does fast key erasure with the base crng key and uses the output as its new per-cpu key, and then updates its local generation counter. Then, using this per-cpu state, we do ordinary fast key erasure. Half of this first block is used to overwrite the per-cpu crng key for the next call -- this is the fast key erasure RNG idea -- and the other half, along with the ChaCha state, is returned to the caller. 
If the caller desires more than this remaining half, it can generate more ChaCha blocks, unlocked, using the now detached ChaCha state that was just returned. Crypto-wise, this is more or less what we were doing before, but this simply makes it more explicit and ensures that we always have backtrack protection by not playing games with a shared block counter. The flow looks like this:

──extract()──► base_crng.key ◄──memcpy()───┐
                   │                        │
                   └──chacha()──────┬─► new_base_key
                                    └─► crngs[n].key ◄──memcpy()───┐
                                               │                   │
                                               └──chacha()───┬─► new_key
                                                             └─► random_bytes
                                                                    │
                                                                    └────►

There are a few hairy details around early init. Just as was done before, prior to having gathered enough entropy, crng_fast_load() and crng_slow_load() dump bytes directly into the base crng, and when we go to take bytes from the crng, in that case, we're doing fast key erasure with the base crng rather than the fast unlocked per-cpu crngs. This is fine as that's only the state of affairs during very early boot; once the crng initializes we never use these paths again. In the process of all this, the APIs into the crng become a bit simpler: we have get_random_bytes(buf, len) and get_random_bytes_user(buf, len), which both do what you'd expect. All of the details of fast key erasure and per-cpu selection happen only in a very short critical section of crng_make_state(), which selects the right per-cpu key, does the fast key erasure, and returns a local state to the caller's stack.
So, we no longer have a need for a separate backtrack function, as this happens all at once here. The API then allows us to extend backtrack protection to batched entropy without really having to do much at all. The result is a bit simpler than before and has fewer foot guns. The init time state machine also gets a lot simpler as we don't need to wait for workqueues to come online and do deferred work. And the multi-core performance should be increased significantly, by virtue of having hardly any locking on the fast path. Cc: Theodore Ts'o Cc: Dominik Brodowski Cc: Sebastian Andrzej Siewior Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 403 ++++++++++++++++++++++++++++-----------------= ----- 1 file changed, 233 insertions(+), 170 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -67,63 +67,19 @@ * Exported interfaces ---- kernel output * -------------------------------------- * - * The primary kernel interface is + * The primary kernel interfaces are: * * void get_random_bytes(void *buf, int nbytes); - * - * This interface will return the requested number of random bytes, - * and place it in the requested buffer. This is equivalent to a - * read from /dev/urandom. - * - * For less critical applications, there are the functions: - * * u32 get_random_u32() * u64 get_random_u64() * unsigned int get_random_int() * unsigned long get_random_long() * - * These are produced by a cryptographic RNG seeded from get_random_bytes, - * and so do not deplete the entropy pool as much. These are recommended - * for most in-kernel operations *if the result is going to be stored in - * the kernel*. - * - * Specifically, the get_random_int() family do not attempt to do - * "anti-backtracking". 
If you capture the state of the kernel (e.g. - * by snapshotting the VM), you can figure out previous get_random_int() - * return values. But if the value is stored in the kernel anyway, - * this is not a problem. - * - * It *is* safe to expose get_random_int() output to attackers (e.g. as - * network cookies); given outputs 1..n, it's not feasible to predict - * outputs 0 or n+1. The only concern is an attacker who breaks into - * the kernel later; the get_random_int() engine is not reseeded as - * often as the get_random_bytes() one. - * - * get_random_bytes() is needed for keys that need to stay secret after - * they are erased from the kernel. For example, any key that will - * be wrapped and stored encrypted. And session encryption keys: we'd - * like to know that after the session is closed and the keys erased, - * the plaintext is unrecoverable to someone who recorded the ciphertext. - * - * But for network ports/cookies, stack canaries, PRNG seeds, address - * space layout randomization, session *authentication* keys, or other - * applications where the sensitive data is stored in the kernel in - * plaintext for as long as it's sensitive, the get_random_int() family - * is just fine. - * - * Consider ASLR. We want to keep the address space secret from an - * outside attacker while the process is running, but once the address - * space is torn down, it's of no use to an attacker any more. And it's - * stored in kernel data structures as long as it's alive, so worrying - * about an attacker's ability to extrapolate it from the get_random_int() - * CRNG is silly. - * - * Even some cryptographic keys are safe to generate with get_random_int(). - * In particular, keys for SipHash are generally fine. Here, knowledge - * of the key authorizes you to do something to a kernel object (inject - * packets to a network connection, or flood a hash table), and the - * key is stored with the object being protected. 
Once it goes away, - * we no longer care if anyone knows the key. + * These interfaces will return the requested number of random bytes + * into the given buffer or as a return value. This is equivalent to a + * read from /dev/urandom. The get_random_{u32,u64,int,long}() family + * of functions may be higher performance for one-off random integers, + * because they do a bit of buffering. * * prandom_u32() * ------------- @@ -300,20 +256,6 @@ static struct fasync_struct *fasync; static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); =20 -struct crng_state { - u32 state[16]; - unsigned long init_time; - spinlock_t lock; -}; - -static struct crng_state primary_crng =3D { - .lock =3D __SPIN_LOCK_UNLOCKED(primary_crng.lock), - .state[0] =3D CHACHA_CONSTANT_EXPA, - .state[1] =3D CHACHA_CONSTANT_ND_3, - .state[2] =3D CHACHA_CONSTANT_2_BY, - .state[3] =3D CHACHA_CONSTANT_TE_K, -}; - /* * crng_init =3D 0 --> Uninitialized * 1 --> Initialized @@ -325,9 +267,6 @@ static struct crng_state primary_crng =3D static int crng_init =3D 0; #define crng_ready() (likely(crng_init > 1)) static int crng_init_cnt =3D 0; -#define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE) -static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]); -static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used); static void process_random_ready_list(void); static void _get_random_bytes(void *buf, int nbytes); =20 @@ -470,7 +409,30 @@ static void credit_entropy_bits(int nbit * *********************************************************************/ =20 -#define CRNG_RESEED_INTERVAL (300 * HZ) +enum { + CRNG_RESEED_INTERVAL =3D 300 * HZ, + CRNG_INIT_CNT_THRESH =3D 2 * CHACHA_KEY_SIZE +}; + +static struct { + u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long)); + unsigned long birth; + unsigned long generation; + spinlock_t lock; +} base_crng =3D { + .lock =3D __SPIN_LOCK_UNLOCKED(base_crng.lock) +}; + +struct crng { + u8 key[CHACHA_KEY_SIZE]; + unsigned long generation; + 
local_lock_t lock; +}; + +static DEFINE_PER_CPU(struct crng, crngs) =3D { + .generation =3D ULONG_MAX, + .lock =3D INIT_LOCAL_LOCK(crngs.lock), +}; =20 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); =20 @@ -487,22 +449,22 @@ static size_t crng_fast_load(const u8 *c u8 *p; size_t ret =3D 0; =20 - if (!spin_trylock_irqsave(&primary_crng.lock, flags)) + if (!spin_trylock_irqsave(&base_crng.lock, flags)) return 0; if (crng_init !=3D 0) { - spin_unlock_irqrestore(&primary_crng.lock, flags); + spin_unlock_irqrestore(&base_crng.lock, flags); return 0; } - p =3D (u8 *)&primary_crng.state[4]; + p =3D base_crng.key; while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { - p[crng_init_cnt % CHACHA_KEY_SIZE] ^=3D *cp; + p[crng_init_cnt % sizeof(base_crng.key)] ^=3D *cp; cp++; crng_init_cnt++; len--; ret++; } if (crng_init_cnt >=3D CRNG_INIT_CNT_THRESH) { invalidate_batched_entropy(); crng_init =3D 1; } - spin_unlock_irqrestore(&primary_crng.lock, flags); + spin_unlock_irqrestore(&base_crng.lock, flags); if (crng_init =3D=3D 1) pr_notice("fast init done\n"); return ret; @@ -527,14 +489,14 @@ static int crng_slow_load(const u8 *cp, unsigned long flags; static u8 lfsr =3D 1; u8 tmp; - unsigned int i, max =3D CHACHA_KEY_SIZE; + unsigned int i, max =3D sizeof(base_crng.key); const u8 *src_buf =3D cp; - u8 *dest_buf =3D (u8 *)&primary_crng.state[4]; + u8 *dest_buf =3D base_crng.key; =20 - if (!spin_trylock_irqsave(&primary_crng.lock, flags)) + if (!spin_trylock_irqsave(&base_crng.lock, flags)) return 0; if (crng_init !=3D 0) { - spin_unlock_irqrestore(&primary_crng.lock, flags); + spin_unlock_irqrestore(&base_crng.lock, flags); return 0; } if (len > max) @@ -545,38 +507,50 @@ static int crng_slow_load(const u8 *cp, lfsr >>=3D 1; if (tmp & 1) lfsr ^=3D 0xE1; - tmp =3D dest_buf[i % CHACHA_KEY_SIZE]; - dest_buf[i % CHACHA_KEY_SIZE] ^=3D src_buf[i % len] ^ lfsr; + tmp =3D dest_buf[i % sizeof(base_crng.key)]; + dest_buf[i % sizeof(base_crng.key)] ^=3D src_buf[i % len] ^ lfsr; lfsr 
+= (tmp << 3) | (tmp >> 5); } - spin_unlock_irqrestore(&primary_crng.lock, flags); + spin_unlock_irqrestore(&base_crng.lock, flags); return 1; } static void crng_reseed(void) { unsigned long flags; - int i, entropy_count; - union { - u8 block[CHACHA_BLOCK_SIZE]; - u32 key[8]; - } buf; + int entropy_count; + unsigned long next_gen; + u8 key[CHACHA_KEY_SIZE]; + /* + * First we make sure we have POOL_MIN_BITS of entropy in the pool, + * and then we drain all of it. Only then can we extract a new key. + */ do { entropy_count = READ_ONCE(input_pool.entropy_count); if (entropy_count < POOL_MIN_BITS) return; } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); - extract_entropy(buf.key, sizeof(buf.key)); + extract_entropy(key, sizeof(key)); wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); - spin_lock_irqsave(&primary_crng.lock, flags); - for (i = 0; i < 8; i++) - primary_crng.state[i + 4] ^= buf.key[i]; - memzero_explicit(&buf, sizeof(buf)); - WRITE_ONCE(primary_crng.init_time, jiffies); - spin_unlock_irqrestore(&primary_crng.lock, flags); + /* + * We copy the new key into the base_crng, overwriting the old one, + * and update the generation counter. We avoid hitting ULONG_MAX, + * because the per-cpu crngs are initialized to ULONG_MAX, so this + * forces new CPUs that come online to always initialize.
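The generation-counter scheme described above can be modeled in a few lines. This is a hypothetical Python sketch, not kernel code: the class names and the BLAKE2s-based mixing are invented stand-ins for illustration. It shows why per-cpu state starts at the ULONG_MAX sentinel and why the base generation skips that value: a CPU that has never initialized can never appear up to date.

```python
import hashlib

SENTINEL = (1 << 64) - 1  # plays the role of ULONG_MAX

class BaseCrng:
    """Stand-in for base_crng: a key plus a generation counter."""
    def __init__(self):
        self.key = bytes(32)
        self.generation = 0

    def reseed(self, entropy: bytes) -> None:
        # Fold new entropy into the key (BLAKE2s is an illustrative mixer).
        self.key = hashlib.blake2s(self.key + entropy).digest()
        self.generation += 1
        if self.generation == SENTINEL:  # never land on the sentinel
            self.generation += 1

class PerCpuCrng:
    """Stand-in for a per-cpu crng that lazily tracks the base."""
    def __init__(self):
        self.key = bytes(32)
        self.generation = SENTINEL  # "never initialized"

    def refresh_if_stale(self, base: BaseCrng) -> bool:
        if self.generation != base.generation:
            # Derive a fresh local key from the base key (stand-in derivation).
            self.key = hashlib.blake2s(base.key, person=b"percpu").digest()
            self.generation = base.generation
            return True
        return False

base = BaseCrng()
cpu = PerCpuCrng()
assert cpu.refresh_if_stale(base)      # sentinel forces the first refresh
assert not cpu.refresh_if_stale(base)  # now up to date
base.reseed(b"fresh entropy")
assert cpu.refresh_if_stale(base)      # reseed bumped the generation
```

The sentinel trick replaces any explicit "is this CPU initialized?" flag: staleness and first use are detected by the same comparison.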
+ */ + spin_lock_irqsave(&base_crng.lock, flags); + memcpy(base_crng.key, key, sizeof(base_crng.key)); + next_gen = base_crng.generation + 1; + if (next_gen == ULONG_MAX) + ++next_gen; + WRITE_ONCE(base_crng.generation, next_gen); + WRITE_ONCE(base_crng.birth, jiffies); + spin_unlock_irqrestore(&base_crng.lock, flags); + memzero_explicit(key, sizeof(key)); + if (crng_init < 2) { invalidate_batched_entropy(); crng_init = 2; @@ -597,77 +571,143 @@ static void crng_reseed(void) } } -static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]) +/* + * The general form here is based on a "fast key erasure RNG" from + * . It generates a ChaCha + * block using the provided key, and then immediately overwrites that + * key with half the block. It returns the resultant ChaCha state to the + * user, along with the second half of the block containing 32 bytes of + * random data that may be used; random_data_len may not be greater than + * 32. + */ +static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], + u32 chacha_state[CHACHA_STATE_WORDS], + u8 *random_data, size_t random_data_len) { - unsigned long flags, init_time; - if (crng_ready()) { - init_time = READ_ONCE(primary_crng.init_time); - if (time_after(jiffies, init_time + CRNG_RESEED_INTERVAL)) - crng_reseed(); - } - spin_lock_irqsave(&primary_crng.lock, flags); - chacha20_block(&primary_crng.state[0], out); - if (primary_crng.state[12] == 0) - primary_crng.state[13]++; - spin_unlock_irqrestore(&primary_crng.lock, flags); + u8 first_block[CHACHA_BLOCK_SIZE]; + BUG_ON(random_data_len > 32); + + chacha_init_consts(chacha_state); + memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE); + memset(&chacha_state[12], 0, sizeof(u32) * 4); + chacha20_block(chacha_state, first_block); + + memcpy(key, first_block, CHACHA_KEY_SIZE); + memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len); + memzero_explicit(first_block, sizeof(first_block)); } /* - * Use the leftover bytes from the CRNG block output (if
there is - * enough) to mutate the CRNG key to provide backtracking protection. + * This function returns a ChaCha state that you may use for generating + * random data. It also returns up to 32 bytes on its own of random data + * that may be used; random_data_len may not be greater than 32. */ -static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used) +static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], + u8 *random_data, size_t random_data_len) { unsigned long flags; - u32 *s, *d; - int i; + struct crng *crng; + + BUG_ON(random_data_len > 32); + + /* + * For the fast path, we check whether we're ready, unlocked first, and + * then re-check once locked later. In the case where we're really not + * ready, we do fast key erasure with the base_crng directly, because + * this is what crng_{fast,slow}_load mutate during early init. + */ + if (unlikely(!crng_ready())) { + bool ready; + + spin_lock_irqsave(&base_crng.lock, flags); + ready = crng_ready(); + if (!ready) + crng_fast_key_erasure(base_crng.key, chacha_state, + random_data, random_data_len); + spin_unlock_irqrestore(&base_crng.lock, flags); + if (!ready) + return; + } + + /* + * If the base_crng is more than 5 minutes old, we reseed, which + * in turn bumps the generation counter that we check below. + */ + if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL))) + crng_reseed(); + + local_lock_irqsave(&crngs.lock, flags); + crng = raw_cpu_ptr(&crngs); + + /* + * If our per-cpu crng is older than the base_crng, then it means + * somebody reseeded the base_crng. In that case, we do fast key + * erasure on the base_crng, and use its output as the new key + * for our per-cpu crng. This brings us up to date with base_crng.
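The fast-key-erasure step itself can be sketched in userspace. Python's standard library has no ChaCha20 block function, so this illustrative model substitutes two counter-separated keyed BLAKE2s calls for the 64-byte block; only the structure matters here: generate one block from the current key, immediately overwrite the key with the first half, and hand at most 32 bytes of the second half to the caller.

```python
import hashlib

KEY_SIZE = 32    # stands in for CHACHA_KEY_SIZE
BLOCK_SIZE = 64  # stands in for CHACHA_BLOCK_SIZE

def fast_key_erasure(key: bytearray, n: int) -> bytes:
    """Overwrite `key` in place, then return up to 32 fresh bytes.

    Two counter-separated keyed BLAKE2s digests emulate one 64-byte
    block (this is NOT the kernel's ChaCha20, just a stand-in PRF).
    """
    assert n <= BLOCK_SIZE - KEY_SIZE
    block = b"".join(
        hashlib.blake2s(bytes([ctr]), key=bytes(key)).digest() for ctr in (0, 1)
    )
    key[:] = block[:KEY_SIZE]  # the old key is erased before output is used
    return block[KEY_SIZE:KEY_SIZE + n]

key = bytearray(KEY_SIZE)  # toy all-zero starting key, for illustration only
out1 = fast_key_erasure(key, 32)
out2 = fast_key_erasure(key, 32)
assert out1 != out2        # the key changed between the two calls
```

Because the key is replaced before any output leaves the function, compromising the current key never reveals previously returned bytes, which is the property the kernel comment calls "fast key erasure".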
+ */ + if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) { + spin_lock(&base_crng.lock); + crng_fast_key_erasure(base_crng.key, chacha_state, + crng->key, sizeof(crng->key)); + crng->generation = base_crng.generation; + spin_unlock(&base_crng.lock); + } + + /* + * Finally, when we've made it this far, our per-cpu crng has an up + * to date key, and we can do fast key erasure with it to produce + * some random data and a ChaCha state for the caller. All other + * branches of this function are "unlikely", so most of the time we + * should wind up here immediately. + */ + crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len); + local_unlock_irqrestore(&crngs.lock, flags); +} + +static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) +{ + bool large_request = nbytes > 256; + ssize_t ret = 0, len; + u32 chacha_state[CHACHA_STATE_WORDS]; + u8 output[CHACHA_BLOCK_SIZE]; + + if (!nbytes) + return 0; - used = round_up(used, sizeof(u32)); - if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) { - extract_crng(tmp); - used = 0; - } - spin_lock_irqsave(&primary_crng.lock, flags); - s = (u32 *)&tmp[used]; - d = &primary_crng.state[4]; - for (i = 0; i < 8; i++) - *d++ ^= *s++; - spin_unlock_irqrestore(&primary_crng.lock, flags); -} - -static ssize_t extract_crng_user(void __user *buf, size_t nbytes) -{ - ssize_t ret = 0, i = CHACHA_BLOCK_SIZE; - u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); - int large_request = (nbytes > 256); + len = min_t(ssize_t, 32, nbytes); + crng_make_state(chacha_state, output, len); + + if (copy_to_user(buf, output, len)) + return -EFAULT; + nbytes -= len; + buf += len; + ret += len; while (nbytes) { if (large_request && need_resched()) { - if (signal_pending(current)) { - if (ret == 0) - ret = -ERESTARTSYS; + if (signal_pending(current)) break; - } schedule(); } - extract_crng(tmp); - i = min_t(int, nbytes, CHACHA_BLOCK_SIZE); - if (copy_to_user(buf, tmp,
i)) { + chacha20_block(chacha_state, output); + if (unlikely(chacha_state[12] =3D=3D 0)) + ++chacha_state[13]; + + len =3D min_t(ssize_t, nbytes, CHACHA_BLOCK_SIZE); + if (copy_to_user(buf, output, len)) { ret =3D -EFAULT; break; } =20 - nbytes -=3D i; - buf +=3D i; - ret +=3D i; + nbytes -=3D len; + buf +=3D len; + ret +=3D len; } - crng_backtrack_protect(tmp, i); - - /* Wipe data just written to memory */ - memzero_explicit(tmp, sizeof(tmp)); =20 + memzero_explicit(chacha_state, sizeof(chacha_state)); + memzero_explicit(output, sizeof(output)); return ret; } =20 @@ -976,23 +1016,36 @@ static void _warn_unseeded_randomness(co */ static void _get_random_bytes(void *buf, int nbytes) { - u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); + u32 chacha_state[CHACHA_STATE_WORDS]; + u8 tmp[CHACHA_BLOCK_SIZE]; + ssize_t len; =20 trace_get_random_bytes(nbytes, _RET_IP_); =20 - while (nbytes >=3D CHACHA_BLOCK_SIZE) { - extract_crng(buf); - buf +=3D CHACHA_BLOCK_SIZE; + if (!nbytes) + return; + + len =3D min_t(ssize_t, 32, nbytes); + crng_make_state(chacha_state, buf, len); + nbytes -=3D len; + buf +=3D len; + + while (nbytes) { + if (nbytes < CHACHA_BLOCK_SIZE) { + chacha20_block(chacha_state, tmp); + memcpy(buf, tmp, nbytes); + memzero_explicit(tmp, sizeof(tmp)); + break; + } + + chacha20_block(chacha_state, buf); + if (unlikely(chacha_state[12] =3D=3D 0)) + ++chacha_state[13]; nbytes -=3D CHACHA_BLOCK_SIZE; + buf +=3D CHACHA_BLOCK_SIZE; } =20 - if (nbytes > 0) { - extract_crng(tmp); - memcpy(buf, tmp, nbytes); - crng_backtrack_protect(tmp, nbytes); - } else - crng_backtrack_protect(tmp, CHACHA_BLOCK_SIZE); - memzero_explicit(tmp, sizeof(tmp)); + memzero_explicit(chacha_state, sizeof(chacha_state)); } =20 void get_random_bytes(void *buf, int nbytes) @@ -1223,13 +1276,12 @@ int __init rand_initialize(void) mix_pool_bytes(&now, sizeof(now)); mix_pool_bytes(utsname(), sizeof(*(utsname()))); =20 - extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); + extract_entropy(base_crng.key, 
sizeof(base_crng.key)); if (arch_init && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); crng_init =3D 2; pr_notice("crng init done (trusting CPU's manufacturer)\n"); } - primary_crng.init_time =3D jiffies - CRNG_RESEED_INTERVAL - 1; =20 if (ratelimit_disable) { urandom_warning.interval =3D 0; @@ -1261,7 +1313,7 @@ static ssize_t urandom_read_nowarn(struc int ret; =20 nbytes =3D min_t(size_t, nbytes, INT_MAX >> 6); - ret =3D extract_crng_user(buf, nbytes); + ret =3D get_random_bytes_user(buf, nbytes); trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count); return ret; } @@ -1577,8 +1629,15 @@ static atomic_t batch_generation =3D ATOMI =20 struct batched_entropy { union { - u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)]; - u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)]; + /* + * We make this 1.5x a ChaCha block, so that we get the + * remaining 32 bytes from fast key erasure, plus one full + * block from the detached ChaCha state. We can increase + * the size of this later if needed so long as we keep the + * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. + */ + u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))]; + u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; }; local_lock_t lock; unsigned int position; @@ -1587,14 +1646,13 @@ struct batched_entropy { =20 /* * Get a random word for internal kernel use only. The quality of the rand= om - * number is good as /dev/urandom, but there is no backtrack protection, w= ith - * the goal of being quite fast and not depleting entropy. In order to ens= ure - * that the randomness provided by this function is okay, the function - * wait_for_random_bytes() should be called and return 0 at least once at = any - * point prior. + * number is good as /dev/urandom. In order to ensure that the randomness + * provided by this function is okay, the function wait_for_random_bytes() + * should be called and return 0 at least once at any point prior. 
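The batching that get_random_u64()/get_random_u32() implement can be sketched as follows. This is a hypothetical Python model (the names and the os.urandom refill are stand-ins; 12 matches CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))): the buffer refills when the position runs off the end or the generation changes, each value is wiped after being handed out, and starting the position at a sentinel past the end forces a fill on first use.

```python
import os

BATCH_WORDS = 12          # CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64)) = 64 * 3 / 16
POSITION_MAX = 2**32 - 1  # like ".position = UINT_MAX" in the patch

class Batch:
    def __init__(self):
        self.entropy = [0] * BATCH_WORDS
        self.position = POSITION_MAX  # past the end: refill on first use
        self.generation = -1

    def get_u64(self, current_generation: int) -> int:
        if self.position >= BATCH_WORDS or self.generation != current_generation:
            # Refill the whole batch at once (os.urandom stands in for
            # the kernel's _get_random_bytes()).
            self.entropy = [int.from_bytes(os.urandom(8), "little")
                            for _ in range(BATCH_WORDS)]
            self.position = 0
            self.generation = current_generation
        ret = self.entropy[self.position]
        self.entropy[self.position] = 0  # wipe the slot we just consumed
        self.position += 1
        return ret

batch = Batch()
batch.get_u64(current_generation=0)
assert batch.position == 1 and batch.entropy[0] == 0  # consumed slot wiped
batch.get_u64(current_generation=1)  # a reseed elsewhere bumped the generation
assert batch.position == 1           # mismatch forced a full refill
```

Wiping each slot as it is consumed is what the patch adds with `batch->entropy_u64[batch->position] = 0`: stale random words no longer linger in per-cpu memory after being returned.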
*/ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) =3D { - .lock =3D INIT_LOCAL_LOCK(batched_entropy_u64.lock) + .lock =3D INIT_LOCAL_LOCK(batched_entropy_u64.lock), + .position =3D UINT_MAX }; =20 u64 get_random_u64(void) @@ -1611,21 +1669,24 @@ u64 get_random_u64(void) batch =3D raw_cpu_ptr(&batched_entropy_u64); =20 next_gen =3D atomic_read(&batch_generation); - if (batch->position % ARRAY_SIZE(batch->entropy_u64) =3D=3D 0 || + if (batch->position >=3D ARRAY_SIZE(batch->entropy_u64) || next_gen !=3D batch->generation) { - extract_crng((u8 *)batch->entropy_u64); + _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); batch->position =3D 0; batch->generation =3D next_gen; } =20 - ret =3D batch->entropy_u64[batch->position++]; + ret =3D batch->entropy_u64[batch->position]; + batch->entropy_u64[batch->position] =3D 0; + ++batch->position; local_unlock_irqrestore(&batched_entropy_u64.lock, flags); return ret; } EXPORT_SYMBOL(get_random_u64); =20 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) =3D { - .lock =3D INIT_LOCAL_LOCK(batched_entropy_u32.lock) + .lock =3D INIT_LOCAL_LOCK(batched_entropy_u32.lock), + .position =3D UINT_MAX }; =20 u32 get_random_u32(void) @@ -1642,14 +1703,16 @@ u32 get_random_u32(void) batch =3D raw_cpu_ptr(&batched_entropy_u32); =20 next_gen =3D atomic_read(&batch_generation); - if (batch->position % ARRAY_SIZE(batch->entropy_u32) =3D=3D 0 || + if (batch->position >=3D ARRAY_SIZE(batch->entropy_u32) || next_gen !=3D batch->generation) { - extract_crng((u8 *)batch->entropy_u32); + _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); batch->position =3D 0; batch->generation =3D next_gen; } =20 - ret =3D batch->entropy_u32[batch->position++]; + ret =3D batch->entropy_u32[batch->position]; + batch->entropy_u32[batch->position] =3D 0; + ++batch->position; local_unlock_irqrestore(&batched_entropy_u32.lock, flags); return ret; } From nobody Tue Apr 28 23:18:44 2026 Return-Path: 
From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A.
Donenfeld" Subject: [PATCH 5.17 018/111] random: use hash function for crng_slow_load() Date: Fri, 27 May 2022 10:48:50 +0200 Message-Id: <20220527084821.857162460@linuxfoundation.org> From: "Jason A. Donenfeld" commit 66e4c2b9541503d721e936cc3898c9f25f4591ff upstream. Since we have a hash function that's really fast, and the goal of crng_slow_load() is reportedly to "touch all of the crng's state", we can just hash the old state together with the new state and call it a day. This way we don't need to reason about another LFSR or worry about various attacks there. This code is only ever used at early boot and then never again. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 40 ++++++++++++++-------------------------- 1 file changed, 14 insertions(+), 26 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -477,42 +477,30 @@ static size_t crng_fast_load(const u8 *c * all), and (2) it doesn't have the performance constraints of * crng_fast_load(). * - * So we do something more comprehensive which is guaranteed to touch - * all of the primary_crng's state, and which uses a LFSR with a - * period of 255 as part of the mixing algorithm.
Finally, we do - * *not* advance crng_init_cnt since buffer we may get may be something - * like a fixed DMI table (for example), which might very well be - * unique to the machine, but is otherwise unvarying. + * So, we simply hash the contents in with the current key. Finally, + * we do *not* advance crng_init_cnt since buffer we may get may be + * something like a fixed DMI table (for example), which might very + * well be unique to the machine, but is otherwise unvarying. */ -static int crng_slow_load(const u8 *cp, size_t len) +static void crng_slow_load(const u8 *cp, size_t len) { unsigned long flags; - static u8 lfsr =3D 1; - u8 tmp; - unsigned int i, max =3D sizeof(base_crng.key); - const u8 *src_buf =3D cp; - u8 *dest_buf =3D base_crng.key; + struct blake2s_state hash; + + blake2s_init(&hash, sizeof(base_crng.key)); =20 if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return 0; + return; if (crng_init !=3D 0) { spin_unlock_irqrestore(&base_crng.lock, flags); - return 0; + return; } - if (len > max) - max =3D len; =20 - for (i =3D 0; i < max; i++) { - tmp =3D lfsr; - lfsr >>=3D 1; - if (tmp & 1) - lfsr ^=3D 0xE1; - tmp =3D dest_buf[i % sizeof(base_crng.key)]; - dest_buf[i % sizeof(base_crng.key)] ^=3D src_buf[i % len] ^ lfsr; - lfsr +=3D (tmp << 3) | (tmp >> 5); - } + blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); + blake2s_update(&hash, cp, len); + blake2s_final(&hash, base_crng.key); + spin_unlock_irqrestore(&base_crng.lock, flags); - return 1; } =20 static void crng_reseed(void) From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC296C433EF for ; Fri, 27 May 2022 09:01:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350453AbiE0JBg (ORCPT ); Fri, 27 May 2022 05:01:36 -0400 Received: 
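The hashed slow-load in the patch above maps directly onto Python's hashlib, which ships BLAKE2s. A minimal sketch of the mixing step (the key size and inputs are illustrative): the current key and the new material are absorbed together, and the digest becomes the new key, so even constant, attacker-visible input such as a fixed DMI table can only add unknown state, never remove it.

```python
import hashlib

KEY_SIZE = 32  # sizeof(base_crng.key)

def crng_slow_load(key: bytes, data: bytes) -> bytes:
    """Mix `data` into `key`: new_key = BLAKE2s-256(old_key || data)."""
    h = hashlib.blake2s(digest_size=KEY_SIZE)
    h.update(key)
    h.update(data)
    return h.digest()

key = bytes(KEY_SIZE)  # toy starting key for demonstration
key = crng_slow_load(key, b"fixed DMI table contents")
key = crng_slow_load(key, b"fixed DMI table contents")  # same input again
# Mixing is history-dependent: the key still changed on both calls.
```

Replacing the hand-rolled LFSR with a real hash removes the need to argue about the mixer's period or bias; the hash's collision resistance does the work.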
From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Jann Horn , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 019/111] random: make more consistent use of integer types Date: Fri, 27 May 2022 10:48:51 +0200 Message-Id: <20220527084822.008140673@linuxfoundation.org> From: "Jason A. Donenfeld" commit 04ec96b768c9dd43946b047c3da60dcc66431370 upstream.
We've been using a flurry of int, unsigned int, size_t, and ssize_t. Let's unify all of this into size_t where it makes sense, as it does in most places, and leave ssize_t for return values with possible errors. In addition, keeping with the convention of other functions in this file, functions that are dealing with raw bytes now take void * consistently instead of a mix of that and u8 *, because much of the time we're actually passing some other structure that is then interpreted as bytes by the function. We also take the opportunity to fix the outdated and incorrect comment in get_random_bytes_arch(). Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 125 ++++++++++++++++++-------------------= ----- include/linux/hw_random.h | 2=20 include/linux/random.h | 10 +-- include/trace/events/random.h | 79 ++++++++++++-------------- 4 files changed, 100 insertions(+), 116 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -69,7 +69,7 @@ * * The primary kernel interfaces are: * - * void get_random_bytes(void *buf, int nbytes); + * void get_random_bytes(void *buf, size_t nbytes); * u32 get_random_u32() * u64 get_random_u64() * unsigned int get_random_int() @@ -97,14 +97,14 @@ * The current exported interfaces for gathering environmental noise * from the devices are: * - * void add_device_randomness(const void *buf, unsigned int size); + * void add_device_randomness(const void *buf, size_t size); * void add_input_randomness(unsigned int type, unsigned int code, * unsigned int value); * void add_interrupt_randomness(int irq); * void add_disk_randomness(struct gendisk *disk); - * void add_hwgenerator_randomness(const char *buffer, size_t count, + * 
void add_hwgenerator_randomness(const void *buffer, size_t count, * size_t entropy); - * void add_bootloader_randomness(const void *buf, unsigned int size); + * void add_bootloader_randomness(const void *buf, size_t size); * * add_device_randomness() is for adding data to the random pool that * is likely to differ between two devices (or possibly even per boot). @@ -268,7 +268,7 @@ static int crng_init =3D 0; #define crng_ready() (likely(crng_init > 1)) static int crng_init_cnt =3D 0; static void process_random_ready_list(void); -static void _get_random_bytes(void *buf, int nbytes); +static void _get_random_bytes(void *buf, size_t nbytes); =20 static struct ratelimit_state unseeded_warning =3D RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); @@ -290,7 +290,7 @@ MODULE_PARM_DESC(ratelimit_disable, "Dis static struct { struct blake2s_state hash; spinlock_t lock; - int entropy_count; + unsigned int entropy_count; } input_pool =3D { .hash.h =3D { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE), BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4, @@ -308,18 +308,12 @@ static void crng_reseed(void); * update the entropy estimate. The caller should call * credit_entropy_bits if this is appropriate. 
*/ -static void _mix_pool_bytes(const void *in, int nbytes) +static void _mix_pool_bytes(const void *in, size_t nbytes) { blake2s_update(&input_pool.hash, in, nbytes); } =20 -static void __mix_pool_bytes(const void *in, int nbytes) -{ - trace_mix_pool_bytes_nolock(nbytes, _RET_IP_); - _mix_pool_bytes(in, nbytes); -} - -static void mix_pool_bytes(const void *in, int nbytes) +static void mix_pool_bytes(const void *in, size_t nbytes) { unsigned long flags; =20 @@ -383,18 +377,18 @@ static void process_random_ready_list(vo spin_unlock_irqrestore(&random_ready_list_lock, flags); } =20 -static void credit_entropy_bits(int nbits) +static void credit_entropy_bits(size_t nbits) { - int entropy_count, orig; + unsigned int entropy_count, orig, add; =20 - if (nbits <=3D 0) + if (!nbits) return; =20 - nbits =3D min(nbits, POOL_BITS); + add =3D min_t(size_t, nbits, POOL_BITS); =20 do { orig =3D READ_ONCE(input_pool.entropy_count); - entropy_count =3D min(POOL_BITS, orig + nbits); + entropy_count =3D min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D ori= g); =20 trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_); @@ -443,10 +437,10 @@ static void invalidate_batched_entropy(v * path. So we can't afford to dilly-dally. Returns the number of * bytes processed from cp. 
*/ -static size_t crng_fast_load(const u8 *cp, size_t len) +static size_t crng_fast_load(const void *cp, size_t len) { unsigned long flags; - u8 *p; + const u8 *src =3D (const u8 *)cp; size_t ret =3D 0; =20 if (!spin_trylock_irqsave(&base_crng.lock, flags)) @@ -455,10 +449,9 @@ static size_t crng_fast_load(const u8 *c spin_unlock_irqrestore(&base_crng.lock, flags); return 0; } - p =3D base_crng.key; while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { - p[crng_init_cnt % sizeof(base_crng.key)] ^=3D *cp; - cp++; crng_init_cnt++; len--; ret++; + base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^=3D *src; + src++; crng_init_cnt++; len--; ret++; } if (crng_init_cnt >=3D CRNG_INIT_CNT_THRESH) { invalidate_batched_entropy(); @@ -482,7 +475,7 @@ static size_t crng_fast_load(const u8 *c * something like a fixed DMI table (for example), which might very * well be unique to the machine, but is otherwise unvarying. */ -static void crng_slow_load(const u8 *cp, size_t len) +static void crng_slow_load(const void *cp, size_t len) { unsigned long flags; struct blake2s_state hash; @@ -656,14 +649,15 @@ static void crng_make_state(u32 chacha_s static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) { bool large_request =3D nbytes > 256; - ssize_t ret =3D 0, len; + ssize_t ret =3D 0; + size_t len; u32 chacha_state[CHACHA_STATE_WORDS]; u8 output[CHACHA_BLOCK_SIZE]; =20 if (!nbytes) return 0; =20 - len =3D min_t(ssize_t, 32, nbytes); + len =3D min_t(size_t, 32, nbytes); crng_make_state(chacha_state, output, len); =20 if (copy_to_user(buf, output, len)) @@ -683,7 +677,7 @@ static ssize_t get_random_bytes_user(voi if (unlikely(chacha_state[12] =3D=3D 0)) ++chacha_state[13]; =20 - len =3D min_t(ssize_t, nbytes, CHACHA_BLOCK_SIZE); + len =3D min_t(size_t, nbytes, CHACHA_BLOCK_SIZE); if (copy_to_user(buf, output, len)) { ret =3D -EFAULT; break; @@ -721,7 +715,7 @@ struct timer_rand_state { * the entropy pool having similar initial state across largely * identical 
devices. */ -void add_device_randomness(const void *buf, unsigned int size) +void add_device_randomness(const void *buf, size_t size) { unsigned long time =3D random_get_entropy() ^ jiffies; unsigned long flags; @@ -749,7 +743,7 @@ static struct timer_rand_state input_tim * keyboard scan codes, and 256 upwards for interrupts. * */ -static void add_timer_randomness(struct timer_rand_state *state, unsigned = num) +static void add_timer_randomness(struct timer_rand_state *state, unsigned = int num) { struct { long jiffies; @@ -793,7 +787,7 @@ static void add_timer_randomness(struct * Round down by 1 bit on general principles, * and limit entropy estimate to 12 bits. */ - credit_entropy_bits(min_t(int, fls(delta >> 1), 11)); + credit_entropy_bits(min_t(unsigned int, fls(delta >> 1), 11)); } =20 void add_input_randomness(unsigned int type, unsigned int code, @@ -874,8 +868,8 @@ void add_interrupt_randomness(int irq) add_interrupt_bench(cycles); =20 if (unlikely(crng_init =3D=3D 0)) { - if ((fast_pool->count >=3D 64) && - crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) { + if (fast_pool->count >=3D 64 && + crng_fast_load(fast_pool->pool, sizeof(fast_pool->pool)) > 0) { fast_pool->count =3D 0; fast_pool->last =3D now; if (spin_trylock(&input_pool.lock)) { @@ -893,7 +887,7 @@ void add_interrupt_randomness(int irq) return; =20 fast_pool->last =3D now; - __mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + _mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); spin_unlock(&input_pool.lock); =20 fast_pool->count =3D 0; @@ -1002,18 +996,18 @@ static void _warn_unseeded_randomness(co * wait_for_random_bytes() should be called and return 0 at least once * at any point prior. 
*/ -static void _get_random_bytes(void *buf, int nbytes) +static void _get_random_bytes(void *buf, size_t nbytes) { u32 chacha_state[CHACHA_STATE_WORDS]; u8 tmp[CHACHA_BLOCK_SIZE]; - ssize_t len; + size_t len; =20 trace_get_random_bytes(nbytes, _RET_IP_); =20 if (!nbytes) return; =20 - len =3D min_t(ssize_t, 32, nbytes); + len =3D min_t(size_t, 32, nbytes); crng_make_state(chacha_state, buf, len); nbytes -=3D len; buf +=3D len; @@ -1036,7 +1030,7 @@ static void _get_random_bytes(void *buf, memzero_explicit(chacha_state, sizeof(chacha_state)); } =20 -void get_random_bytes(void *buf, int nbytes) +void get_random_bytes(void *buf, size_t nbytes) { static void *previous; =20 @@ -1197,25 +1191,19 @@ EXPORT_SYMBOL(del_random_ready_callback) =20 /* * This function will use the architecture-specific hardware random - * number generator if it is available. The arch-specific hw RNG will - * almost certainly be faster than what we can do in software, but it - * is impossible to verify that it is implemented securely (as - * opposed, to, say, the AES encryption of a sequence number using a - * key known by the NSA). So it's useful if we need the speed, but - * only if we're willing to trust the hardware manufacturer not to - * have put in a back door. - * - * Return number of bytes filled in. + * number generator if it is available. It is not recommended for + * use. Use get_random_bytes() instead. It returns the number of + * bytes filled in. 
*/ -int __must_check get_random_bytes_arch(void *buf, int nbytes) +size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) { - int left = nbytes; + size_t left = nbytes; u8 *p = buf; trace_get_random_bytes_arch(left, _RET_IP_); while (left) { unsigned long v; - int chunk = min_t(int, left, sizeof(unsigned long)); + size_t chunk = min_t(size_t, left, sizeof(unsigned long)); if (!arch_get_random_long(&v)) break; @@ -1248,12 +1236,12 @@ early_param("random.trust_cpu", parse_tr */ int __init rand_initialize(void) { - int i; + size_t i; ktime_t now = ktime_get_real(); bool arch_init = true; unsigned long rv; - for (i = BLAKE2S_BLOCK_SIZE; i > 0; i -= sizeof(rv)) { + for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { if (!arch_get_random_seed_long_early(&rv) && !arch_get_random_long_early(&rv)) { rv = random_get_entropy(); @@ -1302,7 +1290,7 @@ static ssize_t urandom_read_nowarn(struc nbytes = min_t(size_t, nbytes, INT_MAX >> 6); ret = get_random_bytes_user(buf, nbytes); - trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count); + trace_urandom_read(nbytes, input_pool.entropy_count); return ret; } @@ -1346,19 +1334,18 @@ static __poll_t random_poll(struct file return mask; } -static int write_pool(const char __user *buffer, size_t count) +static int write_pool(const char __user *ubuf, size_t count) { - size_t bytes; - u8 buf[BLAKE2S_BLOCK_SIZE]; - const char __user *p = buffer; - - while (count > 0) { - bytes = min(count, sizeof(buf)); - if (copy_from_user(buf, p, bytes)) + size_t len; + u8 block[BLAKE2S_BLOCK_SIZE]; + + while (count) { + len = min(count, sizeof(block)); + if (copy_from_user(block, ubuf, len)) return -EFAULT; - count -= bytes; - p += bytes; - mix_pool_bytes(buf, bytes); + count -= len; + ubuf += len; + mix_pool_bytes(block, len); cond_resched(); } @@ -1368,7 +1355,7 @@ static int write_pool(const char __user static ssize_t random_write(struct file *file, const char
__user *buffer, size_t count, loff_t *ppos) { - size_t ret; + int ret; ret = write_pool(buffer, count); if (ret) @@ -1464,8 +1451,6 @@ const struct file_operations urandom_fop SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, flags) { - int ret; - if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE)) return -EINVAL; @@ -1480,6 +1465,8 @@ SYSCALL_DEFINE3(getrandom, char __user * count = INT_MAX; if (!(flags & GRND_INSECURE) && !crng_ready()) { + int ret; + if (flags & GRND_NONBLOCK) return -EAGAIN; ret = wait_for_random_bytes(); @@ -1751,7 +1738,7 @@ unsigned long randomize_page(unsigned lo * Those devices may produce endless random bits and will be throttled * when our pool is full. */ -void add_hwgenerator_randomness(const char *buffer, size_t count, +void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { if (unlikely(crng_init == 0)) { @@ -1782,7 +1769,7 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_random * it would be regarded as device data. * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER. */ -void add_bootloader_randomness(const void *buf, unsigned int size) +void add_bootloader_randomness(const void *buf, size_t size) { if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER)) add_hwgenerator_randomness(buf, size, size * 8); --- a/include/linux/hw_random.h +++ b/include/linux/hw_random.h @@ -61,6 +61,6 @@ extern int devm_hwrng_register(struct de extern void hwrng_unregister(struct hwrng *rng); extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng); /** Feed random bits into the pool.
*/ -extern void add_hwgenerator_randomness(const char *buffer, size_t count, size_t entropy); +extern void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy); #endif /* LINUX_HWRANDOM_H_ */ --- a/include/linux/random.h +++ b/include/linux/random.h @@ -20,8 +20,8 @@ struct random_ready_callback { struct module *owner; }; -extern void add_device_randomness(const void *, unsigned int); -extern void add_bootloader_randomness(const void *, unsigned int); +extern void add_device_randomness(const void *, size_t); +extern void add_bootloader_randomness(const void *, size_t); #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) static inline void add_latent_entropy(void) @@ -37,13 +37,13 @@ extern void add_input_randomness(unsigne unsigned int value) __latent_entropy; extern void add_interrupt_randomness(int irq) __latent_entropy; -extern void get_random_bytes(void *buf, int nbytes); +extern void get_random_bytes(void *buf, size_t nbytes); extern int wait_for_random_bytes(void); extern int __init rand_initialize(void); extern bool rng_is_initialized(void); extern int add_random_ready_callback(struct random_ready_callback *rdy); extern void del_random_ready_callback(struct random_ready_callback *rdy); -extern int __must_check get_random_bytes_arch(void *buf, int nbytes); +extern size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); #ifndef MODULE extern const struct file_operations random_fops, urandom_fops; @@ -87,7 +87,7 @@ static inline unsigned long get_random_c /* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes). * Returns the result of the call to wait_for_random_bytes.
*/ -static inline int get_random_bytes_wait(void *buf, int nbytes) +static inline int get_random_bytes_wait(void *buf, size_t nbytes) { int ret = wait_for_random_bytes(); get_random_bytes(buf, nbytes); --- a/include/trace/events/random.h +++ b/include/trace/events/random.h @@ -9,13 +9,13 @@ #include TRACE_EVENT(add_device_randomness, - TP_PROTO(int bytes, unsigned long IP), + TP_PROTO(size_t bytes, unsigned long IP), TP_ARGS(bytes, IP), TP_STRUCT__entry( - __field( int, bytes ) - __field(unsigned long, IP ) + __field(size_t, bytes ) + __field(unsigned long, IP ) ), TP_fast_assign( @@ -23,18 +23,18 @@ TRACE_EVENT(add_device_randomness, __entry->IP = IP; ), - TP_printk("bytes %d caller %pS", + TP_printk("bytes %zu caller %pS", __entry->bytes, (void *)__entry->IP) ); DECLARE_EVENT_CLASS(random__mix_pool_bytes, - TP_PROTO(int bytes, unsigned long IP), + TP_PROTO(size_t bytes, unsigned long IP), TP_ARGS(bytes, IP), TP_STRUCT__entry( - __field( int, bytes ) - __field(unsigned long, IP ) + __field(size_t, bytes ) + __field(unsigned long, IP ) ), TP_fast_assign( @@ -42,12 +42,12 @@ DECLARE_EVENT_CLASS(random__mix_pool_byt __entry->IP = IP; ), - TP_printk("input pool: bytes %d caller %pS", + TP_printk("input pool: bytes %zu caller %pS", __entry->bytes, (void *)__entry->IP) ); DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes, - TP_PROTO(int bytes, unsigned long IP), + TP_PROTO(size_t bytes, unsigned long IP), TP_ARGS(bytes, IP) ); @@ -59,13 +59,13 @@ DEFINE_EVENT(random__mix_pool_bytes, mix ); TRACE_EVENT(credit_entropy_bits, - TP_PROTO(int bits, int entropy_count, unsigned long IP), + TP_PROTO(size_t bits, size_t entropy_count, unsigned long IP), TP_ARGS(bits, entropy_count, IP), TP_STRUCT__entry( - __field( int, bits ) - __field( int, entropy_count ) + __field(size_t, bits ) + __field(size_t, entropy_count ) __field(unsigned long, IP ) ), @@ -75,34 +75,34 @@ TRACE_EVENT(credit_entropy_bits,
__entry->IP = IP; ), - TP_printk("input pool: bits %d entropy_count %d caller %pS", + TP_printk("input pool: bits %zu entropy_count %zu caller %pS", __entry->bits, __entry->entropy_count, (void *)__entry->IP) ); TRACE_EVENT(add_input_randomness, - TP_PROTO(int input_bits), + TP_PROTO(size_t input_bits), TP_ARGS(input_bits), TP_STRUCT__entry( - __field( int, input_bits ) + __field(size_t, input_bits ) ), TP_fast_assign( __entry->input_bits = input_bits; ), - TP_printk("input_pool_bits %d", __entry->input_bits) + TP_printk("input_pool_bits %zu", __entry->input_bits) ); TRACE_EVENT(add_disk_randomness, - TP_PROTO(dev_t dev, int input_bits), + TP_PROTO(dev_t dev, size_t input_bits), TP_ARGS(dev, input_bits), TP_STRUCT__entry( - __field( dev_t, dev ) - __field( int, input_bits ) + __field(dev_t, dev ) + __field(size_t, input_bits ) ), TP_fast_assign( @@ -110,17 +110,17 @@ TRACE_EVENT(add_disk_randomness, __entry->input_bits = input_bits; ), - TP_printk("dev %d,%d input_pool_bits %d", MAJOR(__entry->dev), + TP_printk("dev %d,%d input_pool_bits %zu", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->input_bits) ); DECLARE_EVENT_CLASS(random__get_random_bytes, - TP_PROTO(int nbytes, unsigned long IP), + TP_PROTO(size_t nbytes, unsigned long IP), TP_ARGS(nbytes, IP), TP_STRUCT__entry( - __field( int, nbytes ) + __field(size_t, nbytes ) __field(unsigned long, IP ) ), TP_fast_assign( @@ -129,29 +129,29 @@ DECLARE_EVENT_CLASS(random__get_random_b __entry->IP = IP; ), - TP_printk("nbytes %d caller %pS", __entry->nbytes, (void *)__entry->IP) + TP_printk("nbytes %zu caller %pS", __entry->nbytes, (void *)__entry->IP) ); DEFINE_EVENT(random__get_random_bytes, get_random_bytes, - TP_PROTO(int nbytes, unsigned long IP), + TP_PROTO(size_t nbytes, unsigned long IP), TP_ARGS(nbytes, IP) ); DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch, - TP_PROTO(int nbytes, unsigned long IP), + TP_PROTO(size_t nbytes,
unsigned long IP), TP_ARGS(nbytes, IP) ); DECLARE_EVENT_CLASS(random__extract_entropy, - TP_PROTO(int nbytes, int entropy_count), + TP_PROTO(size_t nbytes, size_t entropy_count), TP_ARGS(nbytes, entropy_count), TP_STRUCT__entry( - __field( int, nbytes ) - __field( int, entropy_count ) + __field( size_t, nbytes ) + __field( size_t, entropy_count ) ), TP_fast_assign( @@ -159,37 +159,34 @@ DECLARE_EVENT_CLASS(random__extract_entr __entry->entropy_count = entropy_count; ), - TP_printk("input pool: nbytes %d entropy_count %d", + TP_printk("input pool: nbytes %zu entropy_count %zu", __entry->nbytes, __entry->entropy_count) ); DEFINE_EVENT(random__extract_entropy, extract_entropy, - TP_PROTO(int nbytes, int entropy_count), + TP_PROTO(size_t nbytes, size_t entropy_count), TP_ARGS(nbytes, entropy_count) ); TRACE_EVENT(urandom_read, - TP_PROTO(int got_bits, int pool_left, int input_left), + TP_PROTO(size_t nbytes, size_t entropy_count), - TP_ARGS(got_bits, pool_left, input_left), + TP_ARGS(nbytes, entropy_count), TP_STRUCT__entry( - __field( int, got_bits ) - __field( int, pool_left ) - __field( int, input_left ) + __field( size_t, nbytes ) + __field( size_t, entropy_count ) ), TP_fast_assign( - __entry->got_bits = got_bits; - __entry->pool_left = pool_left; - __entry->input_left = input_left; + __entry->nbytes = nbytes; + __entry->entropy_count = entropy_count; ), - TP_printk("got_bits %d nonblocking_pool_entropy_left %d " - "input_entropy_left %d", __entry->got_bits, - __entry->pool_left, __entry->input_left) + TP_printk("reading: nbytes %zu entropy_count %zu", + __entry->nbytes, __entry->entropy_count) ); TRACE_EVENT(prandom_u32, From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id
27149C433EF for ; Fri, 27 May 2022 09:04:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350352AbiE0JE3 (ORCPT ); Fri, 27 May 2022 05:04:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60744 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350461AbiE0I77 (ORCPT ); Fri, 27 May 2022 04:59:59 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E04795D5C0; Fri, 27 May 2022 01:55:57 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 6CC4361CB7; Fri, 27 May 2022 08:55:57 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 47306C385B8; Fri, 27 May 2022 08:55:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641756; bh=Ve6NfPpdS1r05uyDwxztOFOgYNEeL5kzJRoMP9JvtMs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZY4FmL6y8TdecJTQaoJNhliuhozPzUuSd40UHPGC28C4TsmiizEgo11eqf7ax32j+ eVbASixqPL/JkzOB0FM+2zO01u+a8MckK2qpqMAe9CrBX+aO+8UIBXGeeTDaMSdvQq b/rNhxOlG5SdrKzhawc8ywIgbmu3Yyylv/7TVc1w= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Jann Horn , Eric Biggers , "Jason A. 
Donenfeld" Subject: [PATCH 5.17 020/111] random: remove outdated INT_MAX >> 6 check in urandom_read() Date: Fri, 27 May 2022 10:48:52 +0200 Message-Id: <20220527084822.188475049@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 434537ae54ad37e93555de21b6ac8133d6d773a9 upstream. In 79a8468747c5 ("random: check for increase of entropy_count because of signed conversion"), a number of checks were added around what values were passed to account(), because account() was doing fancy fixed point fractional arithmetic, and a user had some ability to pass large values directly into it. One of things in that commit was limiting those values to INT_MAX >> 6. The first >> 3 was for bytes to bits, and the next >> 3 was for bits to 1/8 fractional bits. However, for several years now, urandom reads no longer touch entropy accounting, and so this check serves no purpose. The current flow is: urandom_read_nowarn()-->get_random_bytes_user()-->chacha20_block() Of course, we don't want that size_t to be truncated when adding it into the ssize_t. But we arrive at urandom_read_nowarn() in the first place either via ordinary fops, which limits reads to MAX_RW_COUNT, or via getrandom() which limits reads to INT_MAX. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1286,9 +1286,8 @@ void rand_initialize_disk(struct gendisk static ssize_t urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) { - int ret; + ssize_t ret; - nbytes = min_t(size_t, nbytes, INT_MAX >> 6); ret = get_random_bytes_user(buf, nbytes); trace_urandom_read(nbytes, input_pool.entropy_count); return ret; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D45FC4332F for ; Fri, 27 May 2022 09:01:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350353AbiE0JBX (ORCPT ); Fri, 27 May 2022 05:01:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60738 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350300AbiE0I6j (ORCPT ); Fri, 27 May 2022 04:58:39 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 712F7123884; Fri, 27 May 2022 01:55:06 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 029CEB823DD; Fri, 27 May 2022 08:55:05 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4B078C385A9; Fri, 27 May 2022 08:55:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641703; bh=0ycG1C8R9RfHAaw3cLJ3/vzUv+464neREcYcVWVG6kQ=;
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JPONJSJ9ZVopTqZ/QzVYyk+bCTd3kIPiDGnYdF/3s34v28gii25ivJ0TigzX/MaVB EWqOoH+4xPADhJv1WPdfUkZOsi5T+TtyNqg+QyqPu9ZUWARZpPviaK0nS06APoHAUO bgWuAwfW9sMPiQhFWkOBrmzcfP2jVacIhvGhCjaM= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , Jann Horn , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 021/111] random: zero buffer after reading entropy from userspace Date: Fri, 27 May 2022 10:48:53 +0200 Message-Id: <20220527084822.362277500@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 7b5164fb1279bf0251371848e40bae646b59b3a8 upstream. This buffer may contain entropic data that shouldn't stick around longer than needed, so zero out the temporary buffer at the end of write_pool(). Reviewed-by: Dominik Brodowski Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1336,19 +1336,24 @@ static __poll_t random_poll(struct file static int write_pool(const char __user *ubuf, size_t count) { size_t len; + int ret = 0; u8 block[BLAKE2S_BLOCK_SIZE]; while (count) { len = min(count, sizeof(block)); - if (copy_from_user(block, ubuf, len)) - return -EFAULT; + if (copy_from_user(block, ubuf, len)) { + ret = -EFAULT; + goto out; + } count -= len; ubuf += len; mix_pool_bytes(block, len); cond_resched(); } - return 0; +out: + memzero_explicit(block, sizeof(block)); + return ret; } static ssize_t random_write(struct file *file, const char __user *buffer, From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4C3FFC433F5 for ; Fri, 27 May 2022 09:06:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243015AbiE0JGq (ORCPT ); Fri, 27 May 2022 05:06:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35190 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350224AbiE0I6r (ORCPT ); Fri, 27 May 2022 04:58:47 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 316201238B5; Fri, 27 May 2022 01:55:10 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id E387FB823DD; Fri, 27 May 2022 08:55:08 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2A4C6C385A9; Fri, 27 May 2022 08:55:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641707; bh=nyNsl1FhrSkC5K3QNeXt8djsHyg/84AZ3Q0j/O2PnJo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TI3mt4fLsuENM51eDHjohTwaCgus6Ho/gGt75AwOeSporjjCrby/nRlhIab8aLsIX IZb7PqOZLTq0S8HzzIbGvBGK8jrH9omXgzpAV3A8XJZPc7/GMqViCHdzAjL0IYk0xa 5x/M/qyF4tyFWrJ6bNFoO83F6vK8+5r1cjtKwRQo= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 022/111] random: fix locking for crng_init in crng_reseed() Date: Fri, 27 May 2022 10:48:54 +0200 Message-Id: <20220527084822.577367876@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Dominik Brodowski commit 7191c628fe07b70d3f37de736d173d1b115396ed upstream. crng_init is protected by primary_crng->lock. Therefore, we need to hold this lock when increasing crng_init to 2. As we shouldn't hold this lock for too long, only hold it for those parts which require protection. Signed-off-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -502,6 +502,7 @@ static void crng_reseed(void) int entropy_count; unsigned long next_gen; u8 key[CHACHA_KEY_SIZE]; + bool finalize_init = false; /* * First we make sure we have POOL_MIN_BITS of entropy in the pool, @@ -529,12 +530,14 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - spin_unlock_irqrestore(&base_crng.lock, flags); - memzero_explicit(key, sizeof(key)); - if (crng_init < 2) { invalidate_batched_entropy(); crng_init = 2; + finalize_init = true; + } + spin_unlock_irqrestore(&base_crng.lock, flags); + memzero_explicit(key, sizeof(key)); + if (finalize_init) { process_random_ready_list(); wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35F6CC433EF for ; Fri, 27 May 2022 09:06:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350225AbiE0JGy (ORCPT ); Fri, 27 May 2022 05:06:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35196 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350262AbiE0I6r (ORCPT ); Fri, 27 May 2022 04:58:47 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0DED55640D; Fri, 27 May 2022 01:55:14 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id B5430B823DD; Fri, 27 May 2022 08:55:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EACDBC385A9; Fri, 27 May 2022 08:55:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641711; bh=3Tqro75dduuDCTez7Zs5xe13wkt4FGkHIS0ja0Jgt9E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iRVaHFlYAAjFVvrlHYIwZtr6pgNQa7z0BRYykzrzjkDKSprOdRZ8bi2yZdliU5lnR nKfJf1ErZAo54pZg3THOAq2UG26gu5JiPiKsUcqZXXUcSd3ZV9uopxpUStPh2EYAli cpw3bQd3+7xz/SyQGtWdnmqEtppF5FXqcTM8ng2U= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Eric Biggers , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 023/111] random: tie batched entropy generation to base_crng generation Date: Fri, 27 May 2022 10:48:55 +0200 Message-Id: <20220527084822.751087885@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 0791e8b655cc373718f0f58800fdc625a3447ac5 upstream. Now that we have an explicit base_crng generation counter, we don't need a separate one for batched entropy. Rather, we can just move the generation forward every time we change crng_init state or update the base_crng key. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 29 ++++++++--------------------- 1 file changed, 8 insertions(+), 21 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -430,8 +430,6 @@ static DEFINE_PER_CPU(struct crng, crngs static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); -static void invalidate_batched_entropy(void); - /* * crng_fast_load() can be called by code in the interrupt service * path. So we can't afford to dilly-dally. Returns the number of @@ -454,7 +452,7 @@ static size_t crng_fast_load(const void src++; crng_init_cnt++; len--; ret++; } if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { - invalidate_batched_entropy(); + ++base_crng.generation; crng_init = 1; } spin_unlock_irqrestore(&base_crng.lock, flags); @@ -531,7 +529,6 @@ static void crng_reseed(void) WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); if (crng_init < 2) { - invalidate_batched_entropy(); crng_init = 2; finalize_init = true; } @@ -1256,8 +1253,9 @@ int __init rand_initialize(void) mix_pool_bytes(utsname(), sizeof(*(utsname()))); extract_entropy(base_crng.key, sizeof(base_crng.key)); + ++base_crng.generation; + if (arch_init && trust_cpu && crng_init < 2) { - invalidate_batched_entropy(); crng_init = 2; pr_notice("crng init done (trusting CPU's manufacturer)\n"); } @@ -1607,8 +1605,6 @@ static int __init random_sysctls_init(vo device_initcall(random_sysctls_init); #endif /* CONFIG_SYSCTL */ -static atomic_t batch_generation = ATOMIC_INIT(0); - struct batched_entropy { union { /* @@ -1622,8 +1618,8 @@ struct batched_entropy { u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; }; local_lock_t lock; + unsigned long generation; unsigned int position; - int generation; }; /* @@ -1643,14 +1639,14 @@ u64 get_random_u64(void) unsigned long flags; struct batched_entropy *batch; static void *previous; - int next_gen; + unsigned
long next_gen; warn_unseeded_randomness(&previous); local_lock_irqsave(&batched_entropy_u64.lock, flags); batch = raw_cpu_ptr(&batched_entropy_u64); - next_gen = atomic_read(&batch_generation); + next_gen = READ_ONCE(base_crng.generation); if (batch->position >= ARRAY_SIZE(batch->entropy_u64) || next_gen != batch->generation) { _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); @@ -1677,14 +1673,14 @@ u32 get_random_u32(void) unsigned long flags; struct batched_entropy *batch; static void *previous; - int next_gen; + unsigned long next_gen; warn_unseeded_randomness(&previous); local_lock_irqsave(&batched_entropy_u32.lock, flags); batch = raw_cpu_ptr(&batched_entropy_u32); - next_gen = atomic_read(&batch_generation); + next_gen = READ_ONCE(base_crng.generation); if (batch->position >= ARRAY_SIZE(batch->entropy_u32) || next_gen != batch->generation) { _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); @@ -1700,15 +1696,6 @@ u32 get_random_u32(void) } EXPORT_SYMBOL(get_random_u32); -/* It's important to invalidate all potential batched entropy that might - * be stored before the crng is initialized, which we can do lazily by - * bumping the generation counter. - */ -static void invalidate_batched_entropy(void) -{ - atomic_inc(&batch_generation); -} - /** * randomize_page - Generate a random, page aligned address * @start: The smallest acceptable address the caller will take.
From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D612C433EF for ; Fri, 27 May 2022 09:06:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351553AbiE0JGe (ORCPT ); Fri, 27 May 2022 05:06:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52688 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350272AbiE0I6r (ORCPT ); Fri, 27 May 2022 04:58:47 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 88EF557112; Fri, 27 May 2022 01:55:16 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 1E26061CB7; Fri, 27 May 2022 08:55:16 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F0AD3C385A9; Fri, 27 May 2022 08:55:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641715; bh=pT6zFsldiOjbn+Msv4GLJ981a80hVZ4IhcIP0RG0+aA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TAJsPAvLqGTKxj2M/SzexQJLFXxVreC6T/Uv6yVQ8xMLUuoalpRLvWOMkPR6Qj/w/ SuhCqIOBj2/jLKwmnH3rbBtDyHJIjwzxz/0GY870+gXPxUQO1l6sLGMmwlC1JCoqdO hYYrnAQj2AmylTdMnpxUA1sFTxnJNCIjkGKopev0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Eric Biggers , Dominik Brodowski , "Jason A. 
Donenfeld" Subject: [PATCH 5.17 024/111] random: remove ifdef'd out interrupt bench Date: Fri, 27 May 2022 10:48:56 +0200 Message-Id: <20220527084822.907731130@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 95e6060c20a7f5db60163274c5222a725ac118f9 upstream. With tools like kbench9000 giving more finegrained responses, and this basically never having been used ever since it was initially added, let's just get rid of this. There *is* still work to be done on the interrupt handler, but this really isn't the way it's being developed. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- Documentation/admin-guide/sysctl/kernel.rst | 9 ------ drivers/char/random.c | 40 ---------------------------- 2 files changed, 49 deletions(-) --- a/Documentation/admin-guide/sysctl/kernel.rst +++ b/Documentation/admin-guide/sysctl/kernel.rst @@ -1042,15 +1042,6 @@ This is a directory, with the following are woken up. This file is writable for compatibility purposes, but writing to it has no effect on any RNG behavior. -If ``drivers/char/random.c`` is built with ``ADD_INTERRUPT_BENCH`` -defined, these additional entries are present: - -* ``add_interrupt_avg_cycles``: the average number of cycles between - interrupts used to feed the pool; - -* ``add_interrupt_avg_deviation``: the standard deviation seen on the - number of cycles between interrupts used to feed the pool.
- =20 randomize_va_space =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -240,8 +240,6 @@ #define CREATE_TRACE_POINTS #include =20 -/* #define ADD_INTERRUPT_BENCH */ - enum { POOL_BITS =3D BLAKE2S_HASH_SIZE * 8, POOL_MIN_BITS =3D POOL_BITS /* No point in settling for less. */ @@ -808,27 +806,6 @@ EXPORT_SYMBOL_GPL(add_input_randomness); =20 static DEFINE_PER_CPU(struct fast_pool, irq_randomness); =20 -#ifdef ADD_INTERRUPT_BENCH -static unsigned long avg_cycles, avg_deviation; - -#define AVG_SHIFT 8 /* Exponential average factor k=3D1/256 */ -#define FIXED_1_2 (1 << (AVG_SHIFT - 1)) - -static void add_interrupt_bench(cycles_t start) -{ - long delta =3D random_get_entropy() - start; - - /* Use a weighted moving average */ - delta =3D delta - ((avg_cycles + FIXED_1_2) >> AVG_SHIFT); - avg_cycles +=3D delta; - /* And average deviation */ - delta =3D abs(delta) - ((avg_deviation + FIXED_1_2) >> AVG_SHIFT); - avg_deviation +=3D delta; -} -#else -#define add_interrupt_bench(x) -#endif - static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) { u32 *ptr =3D (u32 *)regs; @@ -865,7 +842,6 @@ void add_interrupt_randomness(int irq) (sizeof(ip) > 4) ? 
ip >> 32 : get_reg(fast_pool, regs); =20 fast_mix(fast_pool); - add_interrupt_bench(cycles); =20 if (unlikely(crng_init =3D=3D 0)) { if (fast_pool->count >=3D 64 && @@ -1574,22 +1550,6 @@ static struct ctl_table random_table[] =3D .mode =3D 0444, .proc_handler =3D proc_do_uuid, }, -#ifdef ADD_INTERRUPT_BENCH - { - .procname =3D "add_interrupt_avg_cycles", - .data =3D &avg_cycles, - .maxlen =3D sizeof(avg_cycles), - .mode =3D 0444, - .proc_handler =3D proc_doulongvec_minmax, - }, - { - .procname =3D "add_interrupt_avg_deviation", - .data =3D &avg_deviation, - .maxlen =3D sizeof(avg_deviation), - .mode =3D 0444, - .proc_handler =3D proc_doulongvec_minmax, - }, -#endif { } }; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD0DDC433FE for ; Fri, 27 May 2022 09:06:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237712AbiE0JGk (ORCPT ); Fri, 27 May 2022 05:06:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55810 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350282AbiE0I6r (ORCPT ); Fri, 27 May 2022 04:58:47 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E588C56C1A; Fri, 27 May 2022 01:55:21 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id A93C3B823D9; Fri, 27 May 2022 08:55:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DEFBAC385B8; Fri, 27 May 2022 08:55:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; 
s=korg; t=1653641719; bh=aKJ6WoDEOhdGQOXt8LBXzcQsK4XbJjpx5d+KyM3odns=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jaRzxr308CHcBqvRm5Z5d7y7pyk8j8GTu5wLcn1zp76o9eyaTF6E+vCVrSyRV8QYT UFgLDmdJTDXZUnnk0op7eGW8luMD5Qe5Y0nGO1qe59BhD+cj8wpvDWHJcqMoZ6c6FP bI67gUNrpHVZ99ZWPkTlXk843s0wb18fFZtLo9Cc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 025/111] random: remove unused tracepoints Date: Fri, 27 May 2022 10:48:57 +0200 Message-Id: <20220527084823.063888450@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 14c174633f349cb41ea90c2c0aaddac157012f74 upstream. These explicit tracepoints aren't really used and show sign of aging. It's work to keep these up to date, and before I attempted to keep them up to date, they weren't up to date, which indicates that they're not really used. These days there are better ways of introspecting anyway. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 30 ----- include/trace/events/random.h | 212 -------------------------------------= ----- lib/random32.c | 2=20 3 files changed, 3 insertions(+), 241 deletions(-) delete mode 100644 include/trace/events/random.h --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -237,9 +237,6 @@ #include #include =20 -#define CREATE_TRACE_POINTS -#include - enum { POOL_BITS =3D BLAKE2S_HASH_SIZE * 8, POOL_MIN_BITS =3D POOL_BITS /* No point in settling for less. */ @@ -315,7 +312,6 @@ static void mix_pool_bytes(const void *i { unsigned long flags; =20 - trace_mix_pool_bytes(nbytes, _RET_IP_); spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(in, nbytes); spin_unlock_irqrestore(&input_pool.lock, flags); @@ -389,8 +385,6 @@ static void credit_entropy_bits(size_t n entropy_count =3D min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D ori= g); =20 - trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_); - if (crng_init < 2 && entropy_count >=3D POOL_MIN_BITS) crng_reseed(); } @@ -721,7 +715,6 @@ void add_device_randomness(const void *b if (!crng_ready() && size) crng_slow_load(buf, size); =20 - trace_add_device_randomness(size, _RET_IP_); spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(buf, size); _mix_pool_bytes(&time, sizeof(time)); @@ -800,7 +793,6 @@ void add_input_randomness(unsigned int t last_value =3D value; add_timer_randomness(&input_timer_state, (type << 4) ^ code ^ (code >> 4) ^ value); - trace_add_input_randomness(input_pool.entropy_count); } EXPORT_SYMBOL_GPL(add_input_randomness); =20 @@ -880,7 +872,6 @@ void add_disk_randomness(struct gendisk return; /* first major is 1, so we get >=3D 0x200 here */ add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); - trace_add_disk_randomness(disk_devt(disk), input_pool.entropy_count); } 
EXPORT_SYMBOL_GPL(add_disk_randomness); #endif @@ -905,8 +896,6 @@ static void extract_entropy(void *buf, s } block; size_t i; =20 - trace_extract_entropy(nbytes, input_pool.entropy_count); - for (i =3D 0; i < ARRAY_SIZE(block.rdseed); ++i) { if (!arch_get_random_seed_long(&block.rdseed[i]) && !arch_get_random_long(&block.rdseed[i])) @@ -978,8 +967,6 @@ static void _get_random_bytes(void *buf, u8 tmp[CHACHA_BLOCK_SIZE]; size_t len; =20 - trace_get_random_bytes(nbytes, _RET_IP_); - if (!nbytes) return; =20 @@ -1176,7 +1163,6 @@ size_t __must_check get_random_bytes_arc size_t left =3D nbytes; u8 *p =3D buf; =20 - trace_get_random_bytes_arch(left, _RET_IP_); while (left) { unsigned long v; size_t chunk =3D min_t(size_t, left, sizeof(unsigned long)); @@ -1260,16 +1246,6 @@ void rand_initialize_disk(struct gendisk } #endif =20 -static ssize_t urandom_read_nowarn(struct file *file, char __user *buf, - size_t nbytes, loff_t *ppos) -{ - ssize_t ret; - - ret =3D get_random_bytes_user(buf, nbytes); - trace_urandom_read(nbytes, input_pool.entropy_count); - return ret; -} - static ssize_t urandom_read(struct file *file, char __user *buf, size_t nb= ytes, loff_t *ppos) { @@ -1282,7 +1258,7 @@ static ssize_t urandom_read(struct file current->comm, nbytes); } =20 - return urandom_read_nowarn(file, buf, nbytes, ppos); + return get_random_bytes_user(buf, nbytes); } =20 static ssize_t random_read(struct file *file, char __user *buf, size_t nby= tes, @@ -1293,7 +1269,7 @@ static ssize_t random_read(struct file * ret =3D wait_for_random_bytes(); if (ret !=3D 0) return ret; - return urandom_read_nowarn(file, buf, nbytes, ppos); + return get_random_bytes_user(buf, nbytes); } =20 static __poll_t random_poll(struct file *file, poll_table *wait) @@ -1454,7 +1430,7 @@ SYSCALL_DEFINE3(getrandom, char __user * if (unlikely(ret)) return ret; } - return urandom_read_nowarn(NULL, buf, count, NULL); + return get_random_bytes_user(buf, count); } =20 
/******************************************************************** --- a/include/trace/events/random.h +++ /dev/null @@ -1,212 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#undef TRACE_SYSTEM -#define TRACE_SYSTEM random - -#if !defined(_TRACE_RANDOM_H) || defined(TRACE_HEADER_MULTI_READ) -#define _TRACE_RANDOM_H - -#include -#include - -TRACE_EVENT(add_device_randomness, - TP_PROTO(size_t bytes, unsigned long IP), - - TP_ARGS(bytes, IP), - - TP_STRUCT__entry( - __field(size_t, bytes ) - __field(unsigned long, IP ) - ), - - TP_fast_assign( - __entry->bytes =3D bytes; - __entry->IP =3D IP; - ), - - TP_printk("bytes %zu caller %pS", - __entry->bytes, (void *)__entry->IP) -); - -DECLARE_EVENT_CLASS(random__mix_pool_bytes, - TP_PROTO(size_t bytes, unsigned long IP), - - TP_ARGS(bytes, IP), - - TP_STRUCT__entry( - __field(size_t, bytes ) - __field(unsigned long, IP ) - ), - - TP_fast_assign( - __entry->bytes =3D bytes; - __entry->IP =3D IP; - ), - - TP_printk("input pool: bytes %zu caller %pS", - __entry->bytes, (void *)__entry->IP) -); - -DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes, - TP_PROTO(size_t bytes, unsigned long IP), - - TP_ARGS(bytes, IP) -); - -DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock, - TP_PROTO(int bytes, unsigned long IP), - - TP_ARGS(bytes, IP) -); - -TRACE_EVENT(credit_entropy_bits, - TP_PROTO(size_t bits, size_t entropy_count, unsigned long IP), - - TP_ARGS(bits, entropy_count, IP), - - TP_STRUCT__entry( - __field(size_t, bits ) - __field(size_t, entropy_count ) - __field(unsigned long, IP ) - ), - - TP_fast_assign( - __entry->bits =3D bits; - __entry->entropy_count =3D entropy_count; - __entry->IP =3D IP; - ), - - TP_printk("input pool: bits %zu entropy_count %zu caller %pS", - __entry->bits, __entry->entropy_count, (void *)__entry->IP) -); - -TRACE_EVENT(add_input_randomness, - TP_PROTO(size_t input_bits), - - TP_ARGS(input_bits), - - TP_STRUCT__entry( - __field(size_t, input_bits ) - ), - - TP_fast_assign( - 
__entry->input_bits =3D input_bits; - ), - - TP_printk("input_pool_bits %zu", __entry->input_bits) -); - -TRACE_EVENT(add_disk_randomness, - TP_PROTO(dev_t dev, size_t input_bits), - - TP_ARGS(dev, input_bits), - - TP_STRUCT__entry( - __field(dev_t, dev ) - __field(size_t, input_bits ) - ), - - TP_fast_assign( - __entry->dev =3D dev; - __entry->input_bits =3D input_bits; - ), - - TP_printk("dev %d,%d input_pool_bits %zu", MAJOR(__entry->dev), - MINOR(__entry->dev), __entry->input_bits) -); - -DECLARE_EVENT_CLASS(random__get_random_bytes, - TP_PROTO(size_t nbytes, unsigned long IP), - - TP_ARGS(nbytes, IP), - - TP_STRUCT__entry( - __field(size_t, nbytes ) - __field(unsigned long, IP ) - ), - - TP_fast_assign( - __entry->nbytes =3D nbytes; - __entry->IP =3D IP; - ), - - TP_printk("nbytes %zu caller %pS", __entry->nbytes, (void *)__entry->IP) -); - -DEFINE_EVENT(random__get_random_bytes, get_random_bytes, - TP_PROTO(size_t nbytes, unsigned long IP), - - TP_ARGS(nbytes, IP) -); - -DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch, - TP_PROTO(size_t nbytes, unsigned long IP), - - TP_ARGS(nbytes, IP) -); - -DECLARE_EVENT_CLASS(random__extract_entropy, - TP_PROTO(size_t nbytes, size_t entropy_count), - - TP_ARGS(nbytes, entropy_count), - - TP_STRUCT__entry( - __field( size_t, nbytes ) - __field( size_t, entropy_count ) - ), - - TP_fast_assign( - __entry->nbytes =3D nbytes; - __entry->entropy_count =3D entropy_count; - ), - - TP_printk("input pool: nbytes %zu entropy_count %zu", - __entry->nbytes, __entry->entropy_count) -); - - -DEFINE_EVENT(random__extract_entropy, extract_entropy, - TP_PROTO(size_t nbytes, size_t entropy_count), - - TP_ARGS(nbytes, entropy_count) -); - -TRACE_EVENT(urandom_read, - TP_PROTO(size_t nbytes, size_t entropy_count), - - TP_ARGS(nbytes, entropy_count), - - TP_STRUCT__entry( - __field( size_t, nbytes ) - __field( size_t, entropy_count ) - ), - - TP_fast_assign( - __entry->nbytes =3D nbytes; - __entry->entropy_count =3D entropy_count; 
- ), - - TP_printk("reading: nbytes %zu entropy_count %zu", - __entry->nbytes, __entry->entropy_count) -); - -TRACE_EVENT(prandom_u32, - - TP_PROTO(unsigned int ret), - - TP_ARGS(ret), - - TP_STRUCT__entry( - __field( unsigned int, ret) - ), - - TP_fast_assign( - __entry->ret =3D ret; - ), - - TP_printk("ret=3D%u" , __entry->ret) -); - -#endif /* _TRACE_RANDOM_H */ - -/* This part must be outside protection */ -#include --- a/lib/random32.c +++ b/lib/random32.c @@ -41,7 +41,6 @@ #include #include #include -#include =20 /** * prandom_u32_state - seeded pseudo-random number generator. @@ -387,7 +386,6 @@ u32 prandom_u32(void) struct siprand_state *state =3D get_cpu_ptr(&net_rand_state); u32 res =3D siprand_u32(state); =20 - trace_prandom_u32(res); put_cpu_ptr(&net_rand_state); return res; } From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A2E6C38A02 for ; Fri, 27 May 2022 09:06:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351495AbiE0JGc (ORCPT ); Fri, 27 May 2022 05:06:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350314AbiE0I7C (ORCPT ); Fri, 27 May 2022 04:59:02 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 13E80579B3; Fri, 27 May 2022 01:55:26 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 7A5D1CE237A; Fri, 27 May 2022 08:55:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 83D59C34100; Fri, 27 
May 2022 08:55:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641722; bh=m1t48f8J1t5GlS2HZ+kgwhOatWjtlZrYDKKdcmlt4Co=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Yo65ibpVD1mcKbt+cWT5Dr/0gbtYpiPtYVi3trLhuwM193y4deJ62PlMbP9ihrPGa YSaX+3B6nYCF0lql0mmdtGjayRTqpEm04Mto0cj9BQnFyRU4wfyg/EOlSGgUckIIc1 bsNi+sMNfEZOl/7d9D76Fy5NRA0gFTF0PhBl19p8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Theodore Tso , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 026/111] random: add proper SPDX header Date: Fri, 27 May 2022 10:48:58 +0200 Message-Id: <20220527084823.206306225@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit a07fdae346c35c6ba286af1c88e0effcfa330bf9 upstream. Convert the current license into the SPDX notation of "(GPL-2.0 OR BSD-3-Clause)". This infers GPL-2.0 from the text "ALTERNATIVELY, this product may be distributed under the terms of the GNU General Public License, in which case the provisions of the GPL are required INSTEAD OF the above restrictions" and it infers BSD-3-Clause from the verbatim BSD 3 clause license in the file. Cc: Thomas Gleixner Cc: Theodore Ts'o Cc: Dominik Brodowski Reviewed-by: Greg Kroah-Hartman Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 37 +------------------------------------ 1 file changed, 1 insertion(+), 36 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1,44 +1,9 @@ +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) /* - * random.c -- A strong random number generator - * * Copyright (C) 2017-2022 Jason A. Donenfeld . All Right= s Reserved. - * * Copyright Matt Mackall , 2003, 2004, 2005 - * * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All * rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, and the entire permission notice in its entirety, - * including the disclaimer of warranties. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. The name of the author may not be used to endorse or promote - * products derived from this software without specific prior - * written permission. - * - * ALTERNATIVELY, this product may be distributed under the terms of - * the GNU General Public License, in which case the provisions of the GPL= are - * required INSTEAD OF the above restrictions. (This clause is - * necessary due to a potential bad interaction between the GPL and - * the restrictions contained in a BSD-style copyright.) - * - * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED - * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES - * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF - * WHICH ARE HEREBY DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR BE - * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT - * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR - * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF - * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE - * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH - * DAMAGE. */ =20 /* From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B6A1C433FE for ; Fri, 27 May 2022 09:01:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350428AbiE0JBa (ORCPT ); Fri, 27 May 2022 05:01:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54810 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350320AbiE0I7D (ORCPT ); Fri, 27 May 2022 04:59:03 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [IPv6:2604:1380:40e1:4800::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D634B59091; Fri, 27 May 2022 01:55:29 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 21CF9CE237A; Fri, 27 May 2022 08:55:28 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 23808C385A9; Fri, 27 May 2022 08:55:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641726; bh=mqr+CmKBFFHkb8QDzclxqHbsHvaX+z4Z1RjSRFRczNg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=mJhz5S5hFWoq/eZf004BVGtij3bVkJ6iMA9p0OLyRmCb+OA3tOdtE9+A8B7wwjiDx nrB+S7FrTYGid4Sue0x6hrbBOsMpcSpdl6yY8zD3P1gr6RWLqCZ5Ej7E5/KYAkbYp3 0ODRzaAQoI0IzxHcnC5ZYaarVy259P2nU++Zq2UQ= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 027/111] random: deobfuscate irq u32/u64 contributions Date: Fri, 27 May 2022 10:48:59 +0200 Message-Id: <20220527084823.344476921@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit b2f408fe403800c91a49f6589d95b6759ce1b30b upstream. In the irq handler, we fill out 16 bytes differently on 32-bit and 64-bit platforms, and for 32-bit vs 64-bit cycle counters, which doesn't always correspond with the bitness of the platform. Whether or not you like this strangeness, it is a matter of fact. But it might not be a fact you well realized until now, because the code that loaded the irq info into 4 32-bit words was quite confusing. Instead, this commit makes everything explicit by having separate (compile-time) branches for 32-bit and 64-bit types. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 49 ++++++++++++++++++++++++++++-----------------= ---- 1 file changed, 28 insertions(+), 21 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -283,7 +283,10 @@ static void mix_pool_bytes(const void *i } =20 struct fast_pool { - u32 pool[4]; + union { + u32 pool32[4]; + u64 pool64[2]; + }; unsigned long last; u16 reg_idx; u8 count; @@ -294,10 +297,10 @@ struct fast_pool { * collector. It's hardcoded for an 128 bit pool and assumes that any * locks that might be needed are taken by the caller. */ -static void fast_mix(struct fast_pool *f) +static void fast_mix(u32 pool[4]) { - u32 a =3D f->pool[0], b =3D f->pool[1]; - u32 c =3D f->pool[2], d =3D f->pool[3]; + u32 a =3D pool[0], b =3D pool[1]; + u32 c =3D pool[2], d =3D pool[3]; =20 a +=3D b; c +=3D d; b =3D rol32(b, 6); d =3D rol32(d, 27); @@ -315,9 +318,8 @@ static void fast_mix(struct fast_pool *f b =3D rol32(b, 16); d =3D rol32(d, 14); d ^=3D a; b ^=3D c; =20 - f->pool[0] =3D a; f->pool[1] =3D b; - f->pool[2] =3D c; f->pool[3] =3D d; - f->count++; + pool[0] =3D a; pool[1] =3D b; + pool[2] =3D c; pool[3] =3D d; } =20 static void process_random_ready_list(void) @@ -784,29 +786,34 @@ void add_interrupt_randomness(int irq) struct pt_regs *regs =3D get_irq_regs(); unsigned long now =3D jiffies; cycles_t cycles =3D random_get_entropy(); - u32 c_high, j_high; - u64 ip; =20 if (cycles =3D=3D 0) cycles =3D get_reg(fast_pool, regs); - c_high =3D (sizeof(cycles) > 4) ? cycles >> 32 : 0; - j_high =3D (sizeof(now) > 4) ? now >> 32 : 0; - fast_pool->pool[0] ^=3D cycles ^ j_high ^ irq; - fast_pool->pool[1] ^=3D now ^ c_high; - ip =3D regs ? instruction_pointer(regs) : _RET_IP_; - fast_pool->pool[2] ^=3D ip; - fast_pool->pool[3] ^=3D - (sizeof(ip) > 4) ? 
ip >> 32 : get_reg(fast_pool, regs); =20 - fast_mix(fast_pool); + if (sizeof(cycles) =3D=3D 8) + fast_pool->pool64[0] ^=3D cycles ^ rol64(now, 32) ^ irq; + else { + fast_pool->pool32[0] ^=3D cycles ^ irq; + fast_pool->pool32[1] ^=3D now; + } + + if (sizeof(unsigned long) =3D=3D 8) + fast_pool->pool64[1] ^=3D regs ? instruction_pointer(regs) : _RET_IP_; + else { + fast_pool->pool32[2] ^=3D regs ? instruction_pointer(regs) : _RET_IP_; + fast_pool->pool32[3] ^=3D get_reg(fast_pool, regs); + } + + fast_mix(fast_pool->pool32); + ++fast_pool->count; =20 if (unlikely(crng_init =3D=3D 0)) { if (fast_pool->count >=3D 64 && - crng_fast_load(fast_pool->pool, sizeof(fast_pool->pool)) > 0) { + crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) { fast_pool->count =3D 0; fast_pool->last =3D now; if (spin_trylock(&input_pool.lock)) { - _mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); spin_unlock(&input_pool.lock); } } @@ -820,7 +827,7 @@ void add_interrupt_randomness(int irq) return; =20 fast_pool->last =3D now; - _mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); spin_unlock(&input_pool.lock); =20 fast_pool->count =3D 0; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 74D74C4707E for ; Fri, 27 May 2022 09:06:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351472AbiE0JGZ (ORCPT ); Fri, 27 May 2022 05:06:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35248 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346004AbiE0I7E (ORCPT ); Fri, 27 May 2022 04:59:04 -0400 Received: from ams.source.kernel.org 
(ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A5FB5A2DE; Fri, 27 May 2022 01:55:33 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id BE612B823DD; Fri, 27 May 2022 08:55:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id ED8E2C385A9; Fri, 27 May 2022 08:55:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641730; bh=ZdWzXFsXEYwjkTdi4Ho7WDOoLXL0Z7KWiEWGYTe0WUI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=AXi/F/Jh685yUy/O8fiXwzhJk1QsKhQohPVHd+/rNPvGeKC8OMHKE8BQfERnEYqV1 3F3WM4CP+y8yMnsig5MFiwHYGJOhPZHEBXoLUpRsYwBE+KrR09kHYrnNoKL9eohI3H 5AiqEK1i8Rd8qgQcnige3Hi2Tc0mx6OrkNQtXa60= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 028/111] random: introduce drain_entropy() helper to declutter crng_reseed() Date: Fri, 27 May 2022 10:49:00 +0200 Message-Id: <20220527084823.472648620@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 246c03dd899164d0186b6d685d6387f228c28d93 upstream. In preparation for separating responsibilities, break out the entropy count management part of crng_reseed() into its own function. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 36 +++++++++++++++++++++++------------- 1 file changed, 23 insertions(+), 13 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -260,6 +260,7 @@ static struct { }; =20 static void extract_entropy(void *buf, size_t nbytes); +static bool drain_entropy(void *buf, size_t nbytes); =20 static void crng_reseed(void); =20 @@ -456,23 +457,13 @@ static void crng_slow_load(const void *c static void crng_reseed(void) { unsigned long flags; - int entropy_count; unsigned long next_gen; u8 key[CHACHA_KEY_SIZE]; bool finalize_init =3D false; =20 - /* - * First we make sure we have POOL_MIN_BITS of entropy in the pool, - * and then we drain all of it. Only then can we extract a new key. - */ - do { - entropy_count =3D READ_ONCE(input_pool.entropy_count); - if (entropy_count < POOL_MIN_BITS) - return; - } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) !=3D entrop= y_count); - extract_entropy(key, sizeof(key)); - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); + /* Only reseed if we can, to prevent brute forcing a small amount of new = bits. */ + if (!drain_entropy(key, sizeof(key))) + return; =20 /* * We copy the new key into the base_crng, overwriting the old one, @@ -900,6 +891,25 @@ static void extract_entropy(void *buf, s memzero_explicit(&block, sizeof(block)); } =20 +/* + * First we make sure we have POOL_MIN_BITS of entropy in the pool, and th= en we + * set the entropy count to zero (but don't actually touch any data). Only= then + * can we extract a new key with extract_entropy(). 
+ */ +static bool drain_entropy(void *buf, size_t nbytes) +{ + unsigned int entropy_count; + do { + entropy_count =3D READ_ONCE(input_pool.entropy_count); + if (entropy_count < POOL_MIN_BITS) + return false; + } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) !=3D entrop= y_count); + extract_entropy(buf, nbytes); + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); + return true; +} + #define warn_unseeded_randomness(previous) \ _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D7EFC433EF for ; Fri, 27 May 2022 09:03:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350376AbiE0JDi (ORCPT ); Fri, 27 May 2022 05:03:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52152 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350733AbiE0JAg (ORCPT ); Fri, 27 May 2022 05:00:36 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47C936B014; Fri, 27 May 2022 01:56:51 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 06365B823DF; Fri, 27 May 2022 08:56:50 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4B443C385B8; Fri, 27 May 2022 08:56:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641808; bh=gRkETQJvr2FxHDr9oXch+XuY+cPLZ3ZZHS7YjlX3Skc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso,
 Dominik Brodowski, Eric Biggers, "Jason A. Donenfeld"
Subject: [PATCH 5.17 029/111] random: remove useless header comment
Date: Fri, 27 May 2022 10:49:01 +0200
Message-Id: <20220527084823.599592472@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 6071a6c0fba2d747742cadcbb3ba26ed756ed73b upstream.

This really adds nothing at all useful.

Cc: Theodore Ts'o
Reviewed-by: Dominik Brodowski
Reviewed-by: Eric Biggers
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 include/linux/random.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -1,9 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/*
- * include/linux/random.h
- *
- * Include file for the random number generator.
- */
+
 #ifndef _LINUX_RANDOM_H
 #define _LINUX_RANDOM_H

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski,
 Eric Biggers, "Jason A. Donenfeld"
Subject: [PATCH 5.17 030/111] random: remove whitespace and reorder includes
Date: Fri, 27 May 2022 10:49:02 +0200
Message-Id: <20220527084823.735530817@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 87e7d5abad0cbc9312dea7f889a57d294c1a5fcc upstream.

This is purely cosmetic. Future work involves figuring out which of
these headers we need and which we don't.

Reviewed-by: Dominik Brodowski
Reviewed-by: Eric Biggers
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -193,11 +193,10 @@
 #include
 #include
 #include
+#include
 #include
 #include
-
 #include
-#include
 #include
 #include
 #include

From nobody Tue Apr 28 23:18:44 2026
(ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9AEAD126999; Fri, 27 May 2022 01:57:21 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 18239B823DE; Fri, 27 May 2022 08:57:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5EDEDC385A9; Fri, 27 May 2022 08:57:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641838; bh=mcPIvvpE6dulN5Xat5qqXk4JwHaVYipAgps6oQ+Ix88=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=C80b6V/BJkaJQtI+gGrHQAo7WBoExznPXY5gAA5nXBNUcb9Z2U97mOS76rxUSXM6u a0Dz0C+rTsUtdIlCzVi9E6IGADy7XaAxB74w5VAnba2SxIrYf2AXvbKzz+8D3n6Vvt A6aIhA55dn8AcW79ph+qv+Lc1AVBjBqC7fcpoiB0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 031/111] random: group initialization wait functions Date: Fri, 27 May 2022 10:49:03 +0200 Message-Id: <20220527084823.886956683@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 5f1bb112006b104b3e2a1e1b39bbb9b2617581e6 upstream. This pulls all of the readiness waiting-focused functions into the first labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 333 +++++++++++++++++++++++++--------------------= ----- 1 file changed, 172 insertions(+), 161 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -201,44 +201,197 @@ #include #include =20 -enum { - POOL_BITS =3D BLAKE2S_HASH_SIZE * 8, - POOL_MIN_BITS =3D POOL_BITS /* No point in settling for less. */ -}; - -/* - * Static global variables - */ -static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); -static struct fasync_struct *fasync; - -static DEFINE_SPINLOCK(random_ready_list_lock); -static LIST_HEAD(random_ready_list); +/********************************************************************* + * + * Initialization and readiness waiting. + * + * Much of the RNG infrastructure is devoted to various dependencies + * being able to wait until the RNG has collected enough entropy and + * is ready for safe consumption. + * + *********************************************************************/ =20 /* * crng_init =3D 0 --> Uninitialized * 1 --> Initialized * 2 --> Initialized from input_pool * - * crng_init is protected by primary_crng->lock, and only increases + * crng_init is protected by base_crng->lock, and only increases * its value (from 0->1->2). */ static int crng_init =3D 0; #define crng_ready() (likely(crng_init > 1)) -static int crng_init_cnt =3D 0; -static void process_random_ready_list(void); -static void _get_random_bytes(void *buf, size_t nbytes); +/* Various types of waiters for crng_init->2 transition. */ +static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); +static struct fasync_struct *fasync; +static DEFINE_SPINLOCK(random_ready_list_lock); +static LIST_HEAD(random_ready_list); =20 +/* Control how we warn userspace. 
*/ static struct ratelimit_state unseeded_warning =3D RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); static struct ratelimit_state urandom_warning =3D RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); - static int ratelimit_disable __read_mostly; - module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"= ); =20 +/* + * Returns whether or not the input pool has been seeded and thus guarante= ed + * to supply cryptographically secure random numbers. This applies to: the + * /dev/urandom device, the get_random_bytes function, and the get_random_= {u32, + * ,u64,int,long} family of functions. + * + * Returns: true if the input pool has been seeded. + * false if the input pool has not been seeded. + */ +bool rng_is_initialized(void) +{ + return crng_ready(); +} +EXPORT_SYMBOL(rng_is_initialized); + +/* Used by wait_for_random_bytes(), and considered an entropy collector, b= elow. */ +static void try_to_generate_entropy(void); + +/* + * Wait for the input pool to be seeded and thus guaranteed to supply + * cryptographically secure random numbers. This applies to: the /dev/uran= dom + * device, the get_random_bytes function, and the get_random_{u32,u64,int,= long} + * family of functions. Using any of these functions without first calling + * this function forfeits the guarantee of security. + * + * Returns: 0 if the input pool has been seeded. + * -ERESTARTSYS if the function was interrupted by a signal. + */ +int wait_for_random_bytes(void) +{ + if (likely(crng_ready())) + return 0; + + do { + int ret; + ret =3D wait_event_interruptible_timeout(crng_init_wait, crng_ready(), H= Z); + if (ret) + return ret > 0 ? 0 : ret; + + try_to_generate_entropy(); + } while (!crng_ready()); + + return 0; +} +EXPORT_SYMBOL(wait_for_random_bytes); + +/* + * Add a callback function that will be invoked when the input + * pool is initialised. 
+ * + * returns: 0 if callback is successfully added + * -EALREADY if pool is already initialised (callback not called) + * -ENOENT if module for callback is not alive + */ +int add_random_ready_callback(struct random_ready_callback *rdy) +{ + struct module *owner; + unsigned long flags; + int err =3D -EALREADY; + + if (crng_ready()) + return err; + + owner =3D rdy->owner; + if (!try_module_get(owner)) + return -ENOENT; + + spin_lock_irqsave(&random_ready_list_lock, flags); + if (crng_ready()) + goto out; + + owner =3D NULL; + + list_add(&rdy->list, &random_ready_list); + err =3D 0; + +out: + spin_unlock_irqrestore(&random_ready_list_lock, flags); + + module_put(owner); + + return err; +} +EXPORT_SYMBOL(add_random_ready_callback); + +/* + * Delete a previously registered readiness callback function. + */ +void del_random_ready_callback(struct random_ready_callback *rdy) +{ + unsigned long flags; + struct module *owner =3D NULL; + + spin_lock_irqsave(&random_ready_list_lock, flags); + if (!list_empty(&rdy->list)) { + list_del_init(&rdy->list); + owner =3D rdy->owner; + } + spin_unlock_irqrestore(&random_ready_list_lock, flags); + + module_put(owner); +} +EXPORT_SYMBOL(del_random_ready_callback); + +static void process_random_ready_list(void) +{ + unsigned long flags; + struct random_ready_callback *rdy, *tmp; + + spin_lock_irqsave(&random_ready_list_lock, flags); + list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) { + struct module *owner =3D rdy->owner; + + list_del_init(&rdy->list); + rdy->func(rdy); + module_put(owner); + } + spin_unlock_irqrestore(&random_ready_list_lock, flags); +} + +#define warn_unseeded_randomness(previous) \ + _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) + +static void _warn_unseeded_randomness(const char *func_name, void *caller,= void **previous) +{ +#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM + const bool print_once =3D false; +#else + static bool print_once __read_mostly; +#endif + + if (print_once || 
crng_ready() || + (previous && (caller =3D=3D READ_ONCE(*previous)))) + return; + WRITE_ONCE(*previous, caller); +#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM + print_once =3D true; +#endif + if (__ratelimit(&unseeded_warning)) + printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init= =3D%d\n", + func_name, caller, crng_init); +} + + +enum { + POOL_BITS =3D BLAKE2S_HASH_SIZE * 8, + POOL_MIN_BITS =3D POOL_BITS /* No point in settling for less. */ +}; + +/* + * Static global variables + */ +static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); + +static int crng_init_cnt =3D 0; + /********************************************************************** * * OS independent entropy store. Here are the functions which handle @@ -322,22 +475,6 @@ static void fast_mix(u32 pool[4]) pool[2] =3D c; pool[3] =3D d; } =20 -static void process_random_ready_list(void) -{ - unsigned long flags; - struct random_ready_callback *rdy, *tmp; - - spin_lock_irqsave(&random_ready_list_lock, flags); - list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) { - struct module *owner =3D rdy->owner; - - list_del_init(&rdy->list); - rdy->func(rdy); - module_put(owner); - } - spin_unlock_irqrestore(&random_ready_list_lock, flags); -} - static void credit_entropy_bits(size_t nbits) { unsigned int entropy_count, orig, add; @@ -387,8 +524,6 @@ static DEFINE_PER_CPU(struct crng, crngs .lock =3D INIT_LOCAL_LOCK(crngs.lock), }; =20 -static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); - /* * crng_fast_load() can be called by code in the interrupt service * path. So we can't afford to dilly-dally. 
Returns the number of @@ -909,29 +1044,6 @@ static bool drain_entropy(void *buf, siz return true; } =20 -#define warn_unseeded_randomness(previous) \ - _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) - -static void _warn_unseeded_randomness(const char *func_name, void *caller,= void **previous) -{ -#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM - const bool print_once =3D false; -#else - static bool print_once __read_mostly; -#endif - - if (print_once || crng_ready() || - (previous && (caller =3D=3D READ_ONCE(*previous)))) - return; - WRITE_ONCE(*previous, caller); -#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM - print_once =3D true; -#endif - if (__ratelimit(&unseeded_warning)) - printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init= =3D%d\n", - func_name, caller, crng_init); -} - /* * This function is the exported kernel interface. It returns some * number of good random numbers, suitable for key generation, seeding @@ -1033,107 +1145,6 @@ static void try_to_generate_entropy(void } =20 /* - * Wait for the urandom pool to be seeded and thus guaranteed to supply - * cryptographically secure random numbers. This applies to: the /dev/uran= dom - * device, the get_random_bytes function, and the get_random_{u32,u64,int,= long} - * family of functions. Using any of these functions without first calling - * this function forfeits the guarantee of security. - * - * Returns: 0 if the urandom pool has been seeded. - * -ERESTARTSYS if the function was interrupted by a signal. - */ -int wait_for_random_bytes(void) -{ - if (likely(crng_ready())) - return 0; - - do { - int ret; - ret =3D wait_event_interruptible_timeout(crng_init_wait, crng_ready(), H= Z); - if (ret) - return ret > 0 ? 0 : ret; - - try_to_generate_entropy(); - } while (!crng_ready()); - - return 0; -} -EXPORT_SYMBOL(wait_for_random_bytes); - -/* - * Returns whether or not the urandom pool has been seeded and thus guaran= teed - * to supply cryptographically secure random numbers. 
This applies to: the - * /dev/urandom device, the get_random_bytes function, and the get_random_= {u32, - * ,u64,int,long} family of functions. - * - * Returns: true if the urandom pool has been seeded. - * false if the urandom pool has not been seeded. - */ -bool rng_is_initialized(void) -{ - return crng_ready(); -} -EXPORT_SYMBOL(rng_is_initialized); - -/* - * Add a callback function that will be invoked when the nonblocking - * pool is initialised. - * - * returns: 0 if callback is successfully added - * -EALREADY if pool is already initialised (callback not called) - * -ENOENT if module for callback is not alive - */ -int add_random_ready_callback(struct random_ready_callback *rdy) -{ - struct module *owner; - unsigned long flags; - int err =3D -EALREADY; - - if (crng_ready()) - return err; - - owner =3D rdy->owner; - if (!try_module_get(owner)) - return -ENOENT; - - spin_lock_irqsave(&random_ready_list_lock, flags); - if (crng_ready()) - goto out; - - owner =3D NULL; - - list_add(&rdy->list, &random_ready_list); - err =3D 0; - -out: - spin_unlock_irqrestore(&random_ready_list_lock, flags); - - module_put(owner); - - return err; -} -EXPORT_SYMBOL(add_random_ready_callback); - -/* - * Delete a previously registered readiness callback function. - */ -void del_random_ready_callback(struct random_ready_callback *rdy) -{ - unsigned long flags; - struct module *owner =3D NULL; - - spin_lock_irqsave(&random_ready_list_lock, flags); - if (!list_empty(&rdy->list)) { - list_del_init(&rdy->list); - owner =3D rdy->owner; - } - spin_unlock_irqrestore(&random_ready_list_lock, flags); - - module_put(owner); -} -EXPORT_SYMBOL(del_random_ready_callback); - -/* * This function will use the architecture-specific hardware random * number generator if it is available. It is not recommended for * use. Use get_random_bytes() instead. 
It returns the number of From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44FACC433EF for ; Fri, 27 May 2022 09:04:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348719AbiE0JEP (ORCPT ); Fri, 27 May 2022 05:04:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52158 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350913AbiE0JAx (ORCPT ); Fri, 27 May 2022 05:00:53 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A502D129EF4; Fri, 27 May 2022 01:57:27 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2AC1361D6F; Fri, 27 May 2022 08:57:27 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C9DE7C385B8; Fri, 27 May 2022 08:57:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641846; bh=O28AUpc8cRD2K80o0YiFrsRJJJSqXRJduiDTxeVyRxU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gdZpSiBHsBfaJHGePvO+AZkfdek4oZTcLB70Snh17EodcPqCmGXb0IGOIacMFOiJw wEBNwRNcBwCMyNvGYGqs79SRHT6AtBb/KGB6o7sECkWkxtVrsA3u+NO3gjkvc5L7+v AbR+E3Zbyz907JLEOBHMEyI2FTXFD76zcFt9XXr8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. 
Donenfeld" Subject: [PATCH 5.17 032/111] random: group crng functions Date: Fri, 27 May 2022 10:49:04 +0200 Message-Id: <20220527084824.023996171@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 3655adc7089da4f8ca74cec8fcef73ea5101430e upstream. This pulls all of the crng-focused functions into the second labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 792 +++++++++++++++++++++++++--------------------= ----- 1 file changed, 410 insertions(+), 382 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -380,122 +380,27 @@ static void _warn_unseeded_randomness(co } =20 =20 -enum { - POOL_BITS =3D BLAKE2S_HASH_SIZE * 8, - POOL_MIN_BITS =3D POOL_BITS /* No point in settling for less. */ -}; - -/* - * Static global variables - */ -static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); - -static int crng_init_cnt =3D 0; - -/********************************************************************** +/********************************************************************* * - * OS independent entropy store. Here are the functions which handle - * storing entropy in an entropy pool. + * Fast key erasure RNG, the "crng". 
* - **********************************************************************/ - -static struct { - struct blake2s_state hash; - spinlock_t lock; - unsigned int entropy_count; -} input_pool =3D { - .hash.h =3D { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE), - BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4, - BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 }, - .hash.outlen =3D BLAKE2S_HASH_SIZE, - .lock =3D __SPIN_LOCK_UNLOCKED(input_pool.lock), -}; - -static void extract_entropy(void *buf, size_t nbytes); -static bool drain_entropy(void *buf, size_t nbytes); - -static void crng_reseed(void); - -/* - * This function adds bytes into the entropy "pool". It does not - * update the entropy estimate. The caller should call - * credit_entropy_bits if this is appropriate. - */ -static void _mix_pool_bytes(const void *in, size_t nbytes) -{ - blake2s_update(&input_pool.hash, in, nbytes); -} - -static void mix_pool_bytes(const void *in, size_t nbytes) -{ - unsigned long flags; - - spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(in, nbytes); - spin_unlock_irqrestore(&input_pool.lock, flags); -} - -struct fast_pool { - union { - u32 pool32[4]; - u64 pool64[2]; - }; - unsigned long last; - u16 reg_idx; - u8 count; -}; - -/* - * This is a fast mixing routine used by the interrupt randomness - * collector. It's hardcoded for an 128 bit pool and assumes that any - * locks that might be needed are taken by the caller. 
- */ -static void fast_mix(u32 pool[4]) -{ - u32 a =3D pool[0], b =3D pool[1]; - u32 c =3D pool[2], d =3D pool[3]; - - a +=3D b; c +=3D d; - b =3D rol32(b, 6); d =3D rol32(d, 27); - d ^=3D a; b ^=3D c; - - a +=3D b; c +=3D d; - b =3D rol32(b, 16); d =3D rol32(d, 14); - d ^=3D a; b ^=3D c; - - a +=3D b; c +=3D d; - b =3D rol32(b, 6); d =3D rol32(d, 27); - d ^=3D a; b ^=3D c; - - a +=3D b; c +=3D d; - b =3D rol32(b, 16); d =3D rol32(d, 14); - d ^=3D a; b ^=3D c; - - pool[0] =3D a; pool[1] =3D b; - pool[2] =3D c; pool[3] =3D d; -} - -static void credit_entropy_bits(size_t nbits) -{ - unsigned int entropy_count, orig, add; - - if (!nbits) - return; - - add =3D min_t(size_t, nbits, POOL_BITS); - - do { - orig =3D READ_ONCE(input_pool.entropy_count); - entropy_count =3D min_t(unsigned int, POOL_BITS, orig + add); - } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D ori= g); - - if (crng_init < 2 && entropy_count >=3D POOL_MIN_BITS) - crng_reseed(); -} - -/********************************************************************* + * These functions expand entropy from the entropy extractor into + * long streams for external consumption using the "fast key erasure" + * RNG described at . + * + * There are a few exported interfaces for use by other drivers: * - * CRNG using CHACHA20 + * void get_random_bytes(void *buf, size_t nbytes) + * u32 get_random_u32() + * u64 get_random_u64() + * unsigned int get_random_int() + * unsigned long get_random_long() + * + * These interfaces will return the requested number of random bytes + * into the given buffer or as a return value. This is equivalent to + * a read from /dev/urandom. The integer family of functions may be + * higher performance for one-off random integers, because they do a + * bit of buffering. 
* *********************************************************************/ =20 @@ -524,70 +429,14 @@ static DEFINE_PER_CPU(struct crng, crngs .lock =3D INIT_LOCAL_LOCK(crngs.lock), }; =20 -/* - * crng_fast_load() can be called by code in the interrupt service - * path. So we can't afford to dilly-dally. Returns the number of - * bytes processed from cp. - */ -static size_t crng_fast_load(const void *cp, size_t len) -{ - unsigned long flags; - const u8 *src =3D (const u8 *)cp; - size_t ret =3D 0; - - if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return 0; - if (crng_init !=3D 0) { - spin_unlock_irqrestore(&base_crng.lock, flags); - return 0; - } - while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { - base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^=3D *src; - src++; crng_init_cnt++; len--; ret++; - } - if (crng_init_cnt >=3D CRNG_INIT_CNT_THRESH) { - ++base_crng.generation; - crng_init =3D 1; - } - spin_unlock_irqrestore(&base_crng.lock, flags); - if (crng_init =3D=3D 1) - pr_notice("fast init done\n"); - return ret; -} +/* Used by crng_reseed() to extract a new seed from the input pool. */ +static bool drain_entropy(void *buf, size_t nbytes); =20 /* - * crng_slow_load() is called by add_device_randomness, which has two - * attributes. (1) We can't trust the buffer passed to it is - * guaranteed to be unpredictable (so it might not have any entropy at - * all), and (2) it doesn't have the performance constraints of - * crng_fast_load(). - * - * So, we simply hash the contents in with the current key. Finally, - * we do *not* advance crng_init_cnt since buffer we may get may be - * something like a fixed DMI table (for example), which might very - * well be unique to the machine, but is otherwise unvarying. + * This extracts a new crng key from the input pool, but only if there is a + * sufficient amount of entropy available, in order to mitigate bruteforci= ng + * of newly added bits. 
*/ -static void crng_slow_load(const void *cp, size_t len) -{ - unsigned long flags; - struct blake2s_state hash; - - blake2s_init(&hash, sizeof(base_crng.key)); - - if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return; - if (crng_init !=3D 0) { - spin_unlock_irqrestore(&base_crng.lock, flags); - return; - } - - blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); - blake2s_update(&hash, cp, len); - blake2s_final(&hash, base_crng.key); - - spin_unlock_irqrestore(&base_crng.lock, flags); -} - static void crng_reseed(void) { unsigned long flags; @@ -637,13 +486,11 @@ static void crng_reseed(void) } =20 /* - * The general form here is based on a "fast key erasure RNG" from - * . It generates a ChaCha - * block using the provided key, and then immediately overwites that - * key with half the block. It returns the resultant ChaCha state to the - * user, along with the second half of the block containing 32 bytes of - * random data that may be used; random_data_len may not be greater than - * 32. + * This generates a ChaCha block using the provided key, and then + * immediately overwites that key with half the block. It returns + * the resultant ChaCha state to the user, along with the second + * half of the block containing 32 bytes of random data that may + * be used; random_data_len may not be greater than 32. */ static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], u32 chacha_state[CHACHA_STATE_WORDS], @@ -730,6 +577,126 @@ static void crng_make_state(u32 chacha_s local_unlock_irqrestore(&crngs.lock, flags); } =20 +/* + * This function is for crng_init =3D=3D 0 only. + * + * crng_fast_load() can be called by code in the interrupt service + * path. So we can't afford to dilly-dally. Returns the number of + * bytes processed from cp. 
+ */ +static size_t crng_fast_load(const void *cp, size_t len) +{ + static int crng_init_cnt =3D 0; + unsigned long flags; + const u8 *src =3D (const u8 *)cp; + size_t ret =3D 0; + + if (!spin_trylock_irqsave(&base_crng.lock, flags)) + return 0; + if (crng_init !=3D 0) { + spin_unlock_irqrestore(&base_crng.lock, flags); + return 0; + } + while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { + base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^=3D *src; + src++; crng_init_cnt++; len--; ret++; + } + if (crng_init_cnt >=3D CRNG_INIT_CNT_THRESH) { + ++base_crng.generation; + crng_init =3D 1; + } + spin_unlock_irqrestore(&base_crng.lock, flags); + if (crng_init =3D=3D 1) + pr_notice("fast init done\n"); + return ret; +} + +/* + * This function is for crng_init =3D=3D 0 only. + * + * crng_slow_load() is called by add_device_randomness, which has two + * attributes. (1) We can't trust the buffer passed to it is + * guaranteed to be unpredictable (so it might not have any entropy at + * all), and (2) it doesn't have the performance constraints of + * crng_fast_load(). + * + * So, we simply hash the contents in with the current key. Finally, + * we do *not* advance crng_init_cnt since buffer we may get may be + * something like a fixed DMI table (for example), which might very + * well be unique to the machine, but is otherwise unvarying. 
+ */ +static void crng_slow_load(const void *cp, size_t len) +{ + unsigned long flags; + struct blake2s_state hash; + + blake2s_init(&hash, sizeof(base_crng.key)); + + if (!spin_trylock_irqsave(&base_crng.lock, flags)) + return; + if (crng_init !=3D 0) { + spin_unlock_irqrestore(&base_crng.lock, flags); + return; + } + + blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); + blake2s_update(&hash, cp, len); + blake2s_final(&hash, base_crng.key); + + spin_unlock_irqrestore(&base_crng.lock, flags); +} + +static void _get_random_bytes(void *buf, size_t nbytes) +{ + u32 chacha_state[CHACHA_STATE_WORDS]; + u8 tmp[CHACHA_BLOCK_SIZE]; + size_t len; + + if (!nbytes) + return; + + len =3D min_t(size_t, 32, nbytes); + crng_make_state(chacha_state, buf, len); + nbytes -=3D len; + buf +=3D len; + + while (nbytes) { + if (nbytes < CHACHA_BLOCK_SIZE) { + chacha20_block(chacha_state, tmp); + memcpy(buf, tmp, nbytes); + memzero_explicit(tmp, sizeof(tmp)); + break; + } + + chacha20_block(chacha_state, buf); + if (unlikely(chacha_state[12] =3D=3D 0)) + ++chacha_state[13]; + nbytes -=3D CHACHA_BLOCK_SIZE; + buf +=3D CHACHA_BLOCK_SIZE; + } + + memzero_explicit(chacha_state, sizeof(chacha_state)); +} + +/* + * This function is the exported kernel interface. It returns some + * number of good random numbers, suitable for key generation, seeding + * TCP sequence numbers, etc. It does not rely on the hardware random + * number generator. For random bytes direct from the hardware RNG + * (when available), use get_random_bytes_arch(). In order to ensure + * that the randomness provided by this function is okay, the function + * wait_for_random_bytes() should be called and return 0 at least once + * at any point prior. 
+ */ +void get_random_bytes(void *buf, size_t nbytes) +{ + static void *previous; + + warn_unseeded_randomness(&previous); + _get_random_bytes(buf, nbytes); +} +EXPORT_SYMBOL(get_random_bytes); + static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) { bool large_request =3D nbytes > 256; @@ -777,6 +744,268 @@ static ssize_t get_random_bytes_user(voi return ret; } =20 +/* + * Batched entropy returns random integers. The quality of the random + * number is good as /dev/urandom. In order to ensure that the randomness + * provided by this function is okay, the function wait_for_random_bytes() + * should be called and return 0 at least once at any point prior. + */ +struct batched_entropy { + union { + /* + * We make this 1.5x a ChaCha block, so that we get the + * remaining 32 bytes from fast key erasure, plus one full + * block from the detached ChaCha state. We can increase + * the size of this later if needed so long as we keep the + * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. 
+ */ + u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))]; + u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; + }; + local_lock_t lock; + unsigned long generation; + unsigned int position; +}; + + +static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) =3D { + .lock =3D INIT_LOCAL_LOCK(batched_entropy_u64.lock), + .position =3D UINT_MAX +}; + +u64 get_random_u64(void) +{ + u64 ret; + unsigned long flags; + struct batched_entropy *batch; + static void *previous; + unsigned long next_gen; + + warn_unseeded_randomness(&previous); + + local_lock_irqsave(&batched_entropy_u64.lock, flags); + batch =3D raw_cpu_ptr(&batched_entropy_u64); + + next_gen =3D READ_ONCE(base_crng.generation); + if (batch->position >=3D ARRAY_SIZE(batch->entropy_u64) || + next_gen !=3D batch->generation) { + _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); + batch->position =3D 0; + batch->generation =3D next_gen; + } + + ret =3D batch->entropy_u64[batch->position]; + batch->entropy_u64[batch->position] =3D 0; + ++batch->position; + local_unlock_irqrestore(&batched_entropy_u64.lock, flags); + return ret; +} +EXPORT_SYMBOL(get_random_u64); + +static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) =3D { + .lock =3D INIT_LOCAL_LOCK(batched_entropy_u32.lock), + .position =3D UINT_MAX +}; + +u32 get_random_u32(void) +{ + u32 ret; + unsigned long flags; + struct batched_entropy *batch; + static void *previous; + unsigned long next_gen; + + warn_unseeded_randomness(&previous); + + local_lock_irqsave(&batched_entropy_u32.lock, flags); + batch =3D raw_cpu_ptr(&batched_entropy_u32); + + next_gen =3D READ_ONCE(base_crng.generation); + if (batch->position >=3D ARRAY_SIZE(batch->entropy_u32) || + next_gen !=3D batch->generation) { + _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); + batch->position =3D 0; + batch->generation =3D next_gen; + } + + ret =3D batch->entropy_u32[batch->position]; + batch->entropy_u32[batch->position] =3D 
0; + ++batch->position; + local_unlock_irqrestore(&batched_entropy_u32.lock, flags); + return ret; +} +EXPORT_SYMBOL(get_random_u32); + +/** + * randomize_page - Generate a random, page aligned address + * @start: The smallest acceptable address the caller will take. + * @range: The size of the area, starting at @start, within which the + * random address must fall. + * + * If @start + @range would overflow, @range is capped. + * + * NOTE: Historical use of randomize_range, which this replaces, presumed that + * @start was already page aligned. We now align it regardless. + * + * Return: A page aligned address within [start, start + range). On error, + * @start is returned. + */ +unsigned long randomize_page(unsigned long start, unsigned long range) +{ + if (!PAGE_ALIGNED(start)) { + range -= PAGE_ALIGN(start) - start; + start = PAGE_ALIGN(start); + } + + if (start > ULONG_MAX - range) + range = ULONG_MAX - start; + + range >>= PAGE_SHIFT; + + if (range == 0) + return start; + + return start + (get_random_long() % range << PAGE_SHIFT); +} + +/* + * This function will use the architecture-specific hardware random + * number generator if it is available. It is not recommended for + * use. Use get_random_bytes() instead. It returns the number of + * bytes filled in. + */ +size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) +{ + size_t left = nbytes; + u8 *p = buf; + + while (left) { + unsigned long v; + size_t chunk = min_t(size_t, left, sizeof(unsigned long)); + + if (!arch_get_random_long(&v)) + break; + + memcpy(p, &v, chunk); + p += chunk; + left -= chunk; + } + + return nbytes - left; +} +EXPORT_SYMBOL(get_random_bytes_arch); + +enum { + POOL_BITS = BLAKE2S_HASH_SIZE * 8, + POOL_MIN_BITS = POOL_BITS /* No point in settling for less.
*/ +}; + +/* + * Static global variables + */ +static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); + +/********************************************************************** + * + * OS independent entropy store. Here are the functions which handle + * storing entropy in an entropy pool. + * + **********************************************************************/ + +static struct { + struct blake2s_state hash; + spinlock_t lock; + unsigned int entropy_count; +} input_pool = { + .hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE), + BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4, + BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 }, + .hash.outlen = BLAKE2S_HASH_SIZE, + .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), +}; + +static void extract_entropy(void *buf, size_t nbytes); +static bool drain_entropy(void *buf, size_t nbytes); + +static void crng_reseed(void); + +/* + * This function adds bytes into the entropy "pool". It does not + * update the entropy estimate. The caller should call + * credit_entropy_bits if this is appropriate. + */ +static void _mix_pool_bytes(const void *in, size_t nbytes) +{ + blake2s_update(&input_pool.hash, in, nbytes); +} + +static void mix_pool_bytes(const void *in, size_t nbytes) +{ + unsigned long flags; + + spin_lock_irqsave(&input_pool.lock, flags); + _mix_pool_bytes(in, nbytes); + spin_unlock_irqrestore(&input_pool.lock, flags); +} + +struct fast_pool { + union { + u32 pool32[4]; + u64 pool64[2]; + }; + unsigned long last; + u16 reg_idx; + u8 count; +}; + +/* + * This is a fast mixing routine used by the interrupt randomness + * collector. It's hardcoded for an 128 bit pool and assumes that any + * locks that might be needed are taken by the caller.
+ */ +static void fast_mix(u32 pool[4]) +{ + u32 a = pool[0], b = pool[1]; + u32 c = pool[2], d = pool[3]; + + a += b; c += d; + b = rol32(b, 6); d = rol32(d, 27); + d ^= a; b ^= c; + + a += b; c += d; + b = rol32(b, 16); d = rol32(d, 14); + d ^= a; b ^= c; + + a += b; c += d; + b = rol32(b, 6); d = rol32(d, 27); + d ^= a; b ^= c; + + a += b; c += d; + b = rol32(b, 16); d = rol32(d, 14); + d ^= a; b ^= c; + + pool[0] = a; pool[1] = b; + pool[2] = c; pool[3] = d; +} + +static void credit_entropy_bits(size_t nbits) +{ + unsigned int entropy_count, orig, add; + + if (!nbits) + return; + + add = min_t(size_t, nbits, POOL_BITS); + + do { + orig = READ_ONCE(input_pool.entropy_count); + entropy_count = min_t(unsigned int, POOL_BITS, orig + add); + } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); + + if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) + crng_reseed(); +} + /********************************************************************* * * Entropy input management @@ -1045,57 +1274,6 @@ static bool drain_entropy(void *buf, siz } /* - * This function is the exported kernel interface. It returns some - * number of good random numbers, suitable for key generation, seeding - * TCP sequence numbers, etc. It does not rely on the hardware random - * number generator. For random bytes direct from the hardware RNG - * (when available), use get_random_bytes_arch(). In order to ensure - * that the randomness provided by this function is okay, the function - * wait_for_random_bytes() should be called and return 0 at least once - * at any point prior.
- */ -static void _get_random_bytes(void *buf, size_t nbytes) -{ - u32 chacha_state[CHACHA_STATE_WORDS]; - u8 tmp[CHACHA_BLOCK_SIZE]; - size_t len; - - if (!nbytes) - return; - - len = min_t(size_t, 32, nbytes); - crng_make_state(chacha_state, buf, len); - nbytes -= len; - buf += len; - - while (nbytes) { - if (nbytes < CHACHA_BLOCK_SIZE) { - chacha20_block(chacha_state, tmp); - memcpy(buf, tmp, nbytes); - memzero_explicit(tmp, sizeof(tmp)); - break; - } - - chacha20_block(chacha_state, buf); - if (unlikely(chacha_state[12] == 0)) - ++chacha_state[13]; - nbytes -= CHACHA_BLOCK_SIZE; - buf += CHACHA_BLOCK_SIZE; - } - - memzero_explicit(chacha_state, sizeof(chacha_state)); -} - -void get_random_bytes(void *buf, size_t nbytes) -{ - static void *previous; - - warn_unseeded_randomness(&previous); - _get_random_bytes(buf, nbytes); -} -EXPORT_SYMBOL(get_random_bytes); - -/* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. Even if the timer is running on another * CPU, the timer activity will be touching the stack of the CPU that is @@ -1144,33 +1322,6 @@ static void try_to_generate_entropy(void mix_pool_bytes(&stack.now, sizeof(stack.now)); } -/* - * This function will use the architecture-specific hardware random - * number generator if it is available. It is not recommended for - * use. Use get_random_bytes() instead. It returns the number of - * bytes filled in.
- */ -size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) -{ - size_t left = nbytes; - u8 *p = buf; - - while (left) { - unsigned long v; - size_t chunk = min_t(size_t, left, sizeof(unsigned long)); - - if (!arch_get_random_long(&v)) - break; - - memcpy(p, &v, chunk); - p += chunk; - left -= chunk; - } - - return nbytes - left; -} -EXPORT_SYMBOL(get_random_bytes_arch); - static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); static int __init parse_trust_cpu(char *arg) { @@ -1533,129 +1684,6 @@ static int __init random_sysctls_init(vo device_initcall(random_sysctls_init); #endif /* CONFIG_SYSCTL */ -struct batched_entropy { - union { - /* - * We make this 1.5x a ChaCha block, so that we get the - * remaining 32 bytes from fast key erasure, plus one full - * block from the detached ChaCha state. We can increase - * the size of this later if needed so long as we keep the - * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. - */ - u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))]; - u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; - }; - local_lock_t lock; - unsigned long generation; - unsigned int position; -}; - -/* - * Get a random word for internal kernel use only. The quality of the random - * number is good as /dev/urandom. In order to ensure that the randomness - * provided by this function is okay, the function wait_for_random_bytes() - * should be called and return 0 at least once at any point prior.
- */ -static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = { - .lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock), - .position = UINT_MAX -}; - -u64 get_random_u64(void) -{ - u64 ret; - unsigned long flags; - struct batched_entropy *batch; - static void *previous; - unsigned long next_gen; - - warn_unseeded_randomness(&previous); - - local_lock_irqsave(&batched_entropy_u64.lock, flags); - batch = raw_cpu_ptr(&batched_entropy_u64); - - next_gen = READ_ONCE(base_crng.generation); - if (batch->position >= ARRAY_SIZE(batch->entropy_u64) || - next_gen != batch->generation) { - _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); - batch->position = 0; - batch->generation = next_gen; - } - - ret = batch->entropy_u64[batch->position]; - batch->entropy_u64[batch->position] = 0; - ++batch->position; - local_unlock_irqrestore(&batched_entropy_u64.lock, flags); - return ret; -} -EXPORT_SYMBOL(get_random_u64); - -static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = { - .lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock), - .position = UINT_MAX -}; - -u32 get_random_u32(void) -{ - u32 ret; - unsigned long flags; - struct batched_entropy *batch; - static void *previous; - unsigned long next_gen; - - warn_unseeded_randomness(&previous); - - local_lock_irqsave(&batched_entropy_u32.lock, flags); - batch = raw_cpu_ptr(&batched_entropy_u32); - - next_gen = READ_ONCE(base_crng.generation); - if (batch->position >= ARRAY_SIZE(batch->entropy_u32) || - next_gen != batch->generation) { - _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); - batch->position = 0; - batch->generation = next_gen; - } - - ret = batch->entropy_u32[batch->position]; - batch->entropy_u32[batch->position] = 0; - ++batch->position; - local_unlock_irqrestore(&batched_entropy_u32.lock, flags); - return ret; -} -EXPORT_SYMBOL(get_random_u32); - -/** - * randomize_page - Generate a random, page aligned address - *
@start: The smallest acceptable address the caller will take. - * @range: The size of the area, starting at @start, within which the - * random address must fall. - * - * If @start + @range would overflow, @range is capped. - * - * NOTE: Historical use of randomize_range, which this replaces, presumed that - * @start was already page aligned. We now align it regardless. - * - * Return: A page aligned address within [start, start + range). On error, - * @start is returned. - */ -unsigned long randomize_page(unsigned long start, unsigned long range) -{ - if (!PAGE_ALIGNED(start)) { - range -= PAGE_ALIGN(start) - start; - start = PAGE_ALIGN(start); - } - - if (start > ULONG_MAX - range) - range = ULONG_MAX - start; - - range >>= PAGE_SHIFT; - - if (range == 0) - return start; - - return start + (get_random_long() % range << PAGE_SHIFT); -} - /* Interface for in-kernel drivers of true hardware RNGs. * Those devices may produce endless random bits and will be throttled * when our pool is full.
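The page-alignment arithmetic in the randomize_page() function quoted above can be modelled in user space. This is a minimal sketch, not kernel code: PAGE_SHIFT is assumed to be 12 (4 KiB pages), and the hypothetical `rnd` parameter stands in for the kernel's get_random_long() so the result is deterministic.

```c
#include <assert.h>
#include <limits.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define IS_PAGE_ALIGNED(x) (((x) & (PAGE_SIZE - 1)) == 0)

/* User-space model of randomize_page(); `rnd` replaces get_random_long(). */
static unsigned long model_randomize_page(unsigned long start,
					  unsigned long range,
					  unsigned long rnd)
{
	/* Align @start up to a page boundary, shrinking @range to match. */
	if (!IS_PAGE_ALIGNED(start)) {
		range -= PAGE_ALIGN(start) - start;
		start = PAGE_ALIGN(start);
	}

	/* Cap @range so start + range cannot overflow. */
	if (start > ULONG_MAX - range)
		range = ULONG_MAX - start;

	/* Work in whole pages from here on. */
	range >>= PAGE_SHIFT;

	if (range == 0)
		return start;

	/* Pick one of the `range` pages and convert back to an address. */
	return start + (rnd % range << PAGE_SHIFT);
}
```

With start = 0x1234 and range = 0x10000, start is aligned up to 0x2000, the range shrinks to 0xf234 (15 whole pages), and rnd = 7 selects the page at 0x2000 + 7 * 0x1000 = 0x9000; the result is always page aligned and within [start, start + range).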
From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B7E6C433F5 for ; Fri, 27 May 2022 09:02:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349940AbiE0JCk (ORCPT ); Fri, 27 May 2022 05:02:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55996 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350599AbiE0JAL (ORCPT ); Fri, 27 May 2022 05:00:11 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B665031DE5; Fri, 27 May 2022 01:56:23 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 2AFB4B823E1; Fri, 27 May 2022 08:56:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 464ADC385B8; Fri, 27 May 2022 08:56:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641770; bh=TKtKwCRmNQngjBUJ2nTn1VXui/HBhuKGLvNVpp+LRBY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BFmavDjENWcVyPyJ+hawRgdedT20x3/5HxJAypujJf9POHO6uYogtHglWFPI4QVcH 3YfTIF+fkw9dXqNw5AkGdtsL2qP5vWkrfGVuVSNZWcIbmrE5A0+rg3PvafNGd3rsab mOxZ5fTu2OD9izNnBuEXXi34CiDNLGT2d0oqfCMU= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Eric Biggers , Dominik Brodowski , "Jason A. 
Donenfeld" Subject: [PATCH 5.17 033/111] random: group entropy extraction functions Date: Fri, 27 May 2022 10:49:05 +0200 Message-Id: <20220527084824.160650233@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit a5ed7cb1a7732ef11959332d507889fbc39ebbb4 upstream. This pulls all of the entropy extraction-focused functions into the third labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 216 +++++++++++++++++++++++++------------------------- 1 file changed, 109 insertions(+), 107 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -895,23 +895,36 @@ size_t __must_check get_random_bytes_arc } EXPORT_SYMBOL(get_random_bytes_arch); + +/********************************************************************** + * + * Entropy accumulation and extraction routines.
+ * + * Callers may add entropy via: + * + * static void mix_pool_bytes(const void *in, size_t nbytes) + * + * After which, if added entropy should be credited: + * + * static void credit_entropy_bits(size_t nbits) + * + * Finally, extract entropy via these two, with the latter one + * setting the entropy count to zero and extracting only if there + * is POOL_MIN_BITS entropy credited prior: + * + * static void extract_entropy(void *buf, size_t nbytes) + * static bool drain_entropy(void *buf, size_t nbytes) + * + **********************************************************************/ + enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */ }; -/* - * Static global variables - */ +/* For notifying userspace should write into /dev/random. */ static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); -/********************************************************************** - * - * OS independent entropy store. Here are the functions which handle - * storing entropy in an entropy pool. - * - **********************************************************************/ - static struct { struct blake2s_state hash; spinlock_t lock; @@ -924,28 +937,106 @@ static struct { .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), }; -static void extract_entropy(void *buf, size_t nbytes); -static bool drain_entropy(void *buf, size_t nbytes); - -static void crng_reseed(void); +static void _mix_pool_bytes(const void *in, size_t nbytes) +{ + blake2s_update(&input_pool.hash, in, nbytes); +} /* * This function adds bytes into the entropy "pool". It does not * update the entropy estimate. The caller should call * credit_entropy_bits if this is appropriate.
*/ -static void _mix_pool_bytes(const void *in, size_t nbytes) +static void mix_pool_bytes(const void *in, size_t nbytes) { - blake2s_update(&input_pool.hash, in, nbytes); + unsigned long flags; + + spin_lock_irqsave(&input_pool.lock, flags); + _mix_pool_bytes(in, nbytes); + spin_unlock_irqrestore(&input_pool.lock, flags); } -static void mix_pool_bytes(const void *in, size_t nbytes) +static void credit_entropy_bits(size_t nbits) +{ + unsigned int entropy_count, orig, add; + + if (!nbits) + return; + + add = min_t(size_t, nbits, POOL_BITS); + + do { + orig = READ_ONCE(input_pool.entropy_count); + entropy_count = min_t(unsigned int, POOL_BITS, orig + add); + } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); + + if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) + crng_reseed(); +} + +/* + * This is an HKDF-like construction for using the hashed collected entropy + * as a PRF key, that's then expanded block-by-block. + */ +static void extract_entropy(void *buf, size_t nbytes) { unsigned long flags; + u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; + struct { + unsigned long rdseed[32 / sizeof(long)]; + size_t counter; + } block; + size_t i; + + for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) { + if (!arch_get_random_seed_long(&block.rdseed[i]) && + !arch_get_random_long(&block.rdseed[i])) + block.rdseed[i] = random_get_entropy(); + } spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(in, nbytes); + + /* seed = HASHPRF(last_key, entropy_input) */ + blake2s_final(&input_pool.hash, seed); + + /* next_key = HASHPRF(seed, RDSEED || 0) */ + block.counter = 0; + blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed)); + blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key)); + spin_unlock_irqrestore(&input_pool.lock, flags); + memzero_explicit(next_key, sizeof(next_key)); + + while (nbytes) { + i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE);
+ /* output = HASHPRF(seed, RDSEED || ++counter) */ + ++block.counter; + blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed)); + nbytes -= i; + buf += i; + } + + memzero_explicit(seed, sizeof(seed)); + memzero_explicit(&block, sizeof(block)); +} + +/* + * First we make sure we have POOL_MIN_BITS of entropy in the pool, and then we + * set the entropy count to zero (but don't actually touch any data). Only then + * can we extract a new key with extract_entropy(). + */ +static bool drain_entropy(void *buf, size_t nbytes) +{ + unsigned int entropy_count; + do { + entropy_count = READ_ONCE(input_pool.entropy_count); + if (entropy_count < POOL_MIN_BITS) + return false; + } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); + extract_entropy(buf, nbytes); + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); + return true; } struct fast_pool { @@ -988,24 +1079,6 @@ static void fast_mix(u32 pool[4]) pool[2] = c; pool[3] = d; } -static void credit_entropy_bits(size_t nbits) -{ - unsigned int entropy_count, orig, add; - - if (!nbits) - return; - - add = min_t(size_t, nbits, POOL_BITS); - - do { - orig = READ_ONCE(input_pool.entropy_count); - entropy_count = min_t(unsigned int, POOL_BITS, orig + add); - } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); - - if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) - crng_reseed(); -} - /********************************************************************* * * Entropy input management @@ -1202,77 +1275,6 @@ void add_disk_randomness(struct gendisk EXPORT_SYMBOL_GPL(add_disk_randomness); #endif -/********************************************************************* - * - * Entropy extraction routines - * - *********************************************************************/ - -/* - * This is an HKDF-like construction for using the hashed collected entropy - * as a PRF key, that's then expanded
block-by-block. - */ -static void extract_entropy(void *buf, size_t nbytes) -{ - unsigned long flags; - u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; - struct { - unsigned long rdseed[32 / sizeof(long)]; - size_t counter; - } block; - size_t i; - - for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) { - if (!arch_get_random_seed_long(&block.rdseed[i]) && - !arch_get_random_long(&block.rdseed[i])) - block.rdseed[i] = random_get_entropy(); - } - - spin_lock_irqsave(&input_pool.lock, flags); - - /* seed = HASHPRF(last_key, entropy_input) */ - blake2s_final(&input_pool.hash, seed); - - /* next_key = HASHPRF(seed, RDSEED || 0) */ - block.counter = 0; - blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed)); - blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key)); - - spin_unlock_irqrestore(&input_pool.lock, flags); - memzero_explicit(next_key, sizeof(next_key)); - - while (nbytes) { - i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE); - /* output = HASHPRF(seed, RDSEED || ++counter) */ - ++block.counter; - blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed)); - nbytes -= i; - buf += i; - } - - memzero_explicit(seed, sizeof(seed)); - memzero_explicit(&block, sizeof(block)); -} - -/* - * First we make sure we have POOL_MIN_BITS of entropy in the pool, and then we - * set the entropy count to zero (but don't actually touch any data). Only then - * can we extract a new key with extract_entropy().
- */ -static bool drain_entropy(void *buf, size_t nbytes) -{ - unsigned int entropy_count; - do { - entropy_count = READ_ONCE(input_pool.entropy_count); - if (entropy_count < POOL_MIN_BITS) - return false; - } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); - extract_entropy(buf, nbytes); - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - return true; -} - /* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. Even if the timer is running on another From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04F94C433FE for ; Fri, 27 May 2022 09:02:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349563AbiE0JC0 (ORCPT ); Fri, 27 May 2022 05:02:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350590AbiE0JAK (ORCPT ); Fri, 27 May 2022 05:00:10 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8043219C33; Fri, 27 May 2022 01:56:22 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id EFF9461CB7; Fri, 27 May 2022 08:56:18 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3C46C385A9; Fri, 27 May 2022 08:56:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641778; bh=m/WayzKQ2sQH0unzhO7ev677XcMs5UCQ4qjoYDgR/EE=;
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=cua3tJmU3FbBRw49DE+ZWVJpg6FhFJBapZY+dCyKqLrWOgiYRl7F9Rbv0pHDv5Rf0 zspcPNEmqR6QsOLF+wTyb+8SQIbRVZGyeJkBYTCjLSqO4DhbAFJW5b10zQEcEAm3on YTchzPJtfebsZGPhLUSUbtss7miNJghWv9C1f+dg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 034/111] random: group entropy collection functions Date: Fri, 27 May 2022 10:49:06 +0200 Message-Id: <20220527084824.283285016@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 92c653cf14400946f376a29b828d6af7e01f38dd upstream. This pulls all of the entropy collection-focused functions into the fourth labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 370 +++++++++++++++++++++++++++----------------------- 1 file changed, 206 insertions(+), 164 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1039,60 +1039,112 @@ static bool drain_entropy(void *buf, siz return true; } -struct fast_pool { - union { - u32 pool32[4]; - u64 pool64[2]; - }; - unsigned long last; - u16 reg_idx; - u8 count; -}; + +/********************************************************************** + * + * Entropy collection routines.
+ * + * The following exported functions are used for pushing entropy into + * the above entropy accumulation routines: + * + * void add_device_randomness(const void *buf, size_t size); + * void add_input_randomness(unsigned int type, unsigned int code, + * unsigned int value); + * void add_disk_randomness(struct gendisk *disk); + * void add_hwgenerator_randomness(const void *buffer, size_t count, + * size_t entropy); + * void add_bootloader_randomness(const void *buf, size_t size); + * void add_interrupt_randomness(int irq); + * + * add_device_randomness() adds data to the input pool that + * is likely to differ between two devices (or possibly even per boot). + * This would be things like MAC addresses or serial numbers, or the + * read-out of the RTC. This does *not* credit any actual entropy to + * the pool, but it initializes the pool to different values for devices + * that might otherwise be identical and have very little entropy + * available to them (particularly common in the embedded world). + * + * add_input_randomness() uses the input layer interrupt timing, as well + * as the event type information from the hardware. + * + * add_disk_randomness() uses what amounts to the seek time of block + * layer request events, on a per-disk_devt basis, as input to the + * entropy pool. Note that high-speed solid state drives with very low + * seek times do not make for good sources of entropy, as their seek + * times are usually fairly consistent. + * + * The above two routines try to estimate how many bits of entropy + * to credit. They do this by keeping track of the first and second + * order deltas of the event timings. + * + * add_hwgenerator_randomness() is for true hardware RNGs, and will credit + * entropy as specified by the caller. If the entropy pool is full it will + * block until more entropy is needed. 
+ * + * add_bootloader_randomness() is the same as add_hwgenerator_randomness() or + * add_device_randomness(), depending on whether or not the configuration + * option CONFIG_RANDOM_TRUST_BOOTLOADER is set. + * + * add_interrupt_randomness() uses the interrupt timing as random + * inputs to the entropy pool. Using the cycle counters and the irq source + * as inputs, it feeds the input pool roughly once a second or after 64 + * interrupts, crediting 1 bit of entropy for whichever comes first. + * + **********************************************************************/ + +static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); +static int __init parse_trust_cpu(char *arg) +{ + return kstrtobool(arg, &trust_cpu); +} +early_param("random.trust_cpu", parse_trust_cpu); /* - * This is a fast mixing routine used by the interrupt randomness - * collector. It's hardcoded for an 128 bit pool and assumes that any - * locks that might be needed are taken by the caller. + * The first collection of entropy occurs at system boot while interrupts + * are still turned off. Here we push in RDSEED, a timestamp, and utsname(). + * Depending on the above configuration knob, RDSEED may be considered + * sufficient for initialization. Note that much earlier setup may already + * have pushed entropy into the input pool by the time we get here.
*/ -static void fast_mix(u32 pool[4]) +int __init rand_initialize(void) { - u32 a = pool[0], b = pool[1]; - u32 c = pool[2], d = pool[3]; - - a += b; c += d; - b = rol32(b, 6); d = rol32(d, 27); - d ^= a; b ^= c; + size_t i; + ktime_t now = ktime_get_real(); + bool arch_init = true; + unsigned long rv; - a += b; c += d; - b = rol32(b, 16); d = rol32(d, 14); - d ^= a; b ^= c; + for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { + if (!arch_get_random_seed_long_early(&rv) && + !arch_get_random_long_early(&rv)) { + rv = random_get_entropy(); + arch_init = false; + } + mix_pool_bytes(&rv, sizeof(rv)); + } + mix_pool_bytes(&now, sizeof(now)); + mix_pool_bytes(utsname(), sizeof(*(utsname()))); - a += b; c += d; - b = rol32(b, 6); d = rol32(d, 27); - d ^= a; b ^= c; + extract_entropy(base_crng.key, sizeof(base_crng.key)); + ++base_crng.generation; - a += b; c += d; - b = rol32(b, 16); d = rol32(d, 14); - d ^= a; b ^= c; + if (arch_init && trust_cpu && crng_init < 2) { + crng_init = 2; + pr_notice("crng init done (trusting CPU's manufacturer)\n"); + } - pool[0] = a; pool[1] = b; - pool[2] = c; pool[3] = d; + if (ratelimit_disable) { + urandom_warning.interval = 0; + unseeded_warning.interval = 0; + } + return 0; } -/********************************************************************* - * - * Entropy input management - * - *********************************************************************/ - /* There is one of these per entropy source */ struct timer_rand_state { cycles_t last_time; long last_delta, last_delta2; }; -#define INIT_TIMER_RAND_STATE { INITIAL_JIFFIES, }; - /* * Add device- or boot-specific data to the input pool to help * initialize it.
@@ -1116,8 +1168,6 @@ void add_device_randomness(const void *b } EXPORT_SYMBOL(add_device_randomness); -static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE; - /* * This function adds entropy to the entropy "pool" by using timing * delays. It uses the timer_rand_state structure to make an estimate @@ -1179,8 +1229,9 @@ void add_input_randomness(unsigned int t unsigned int value) { static unsigned char last_value; + static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES }; - /* ignore autorepeat and the like */ + /* Ignore autorepeat and the like. */ if (value == last_value) return; @@ -1190,6 +1241,119 @@ void add_input_randomness(unsigned int t } EXPORT_SYMBOL_GPL(add_input_randomness); +#ifdef CONFIG_BLOCK +void add_disk_randomness(struct gendisk *disk) +{ + if (!disk || !disk->random) + return; + /* First major is 1, so we get >= 0x200 here. */ + add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); +} +EXPORT_SYMBOL_GPL(add_disk_randomness); + +void rand_initialize_disk(struct gendisk *disk) +{ + struct timer_rand_state *state; + + /* + * If kzalloc returns null, we just won't use that entropy + * source. + */ + state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + state->last_time = INITIAL_JIFFIES; + disk->random = state; + } +} +#endif + +/* + * Interface for in-kernel drivers of true hardware RNGs. + * Those devices may produce endless random bits and will be throttled + * when our pool is full. + */ +void add_hwgenerator_randomness(const void *buffer, size_t count, + size_t entropy) +{ + if (unlikely(crng_init == 0)) { + size_t ret = crng_fast_load(buffer, count); + mix_pool_bytes(buffer, ret); + count -= ret; + buffer += ret; + if (!count || crng_init == 0) + return; + } + + /* + * Throttle writing if we're above the trickle threshold.
+ * We'll be woken up again once below POOL_MIN_BITS, when + * the calling thread is about to terminate, or once + * CRNG_RESEED_INTERVAL has elapsed. + */ + wait_event_interruptible_timeout(random_write_wait, + !system_wq || kthread_should_stop() || + input_pool.entropy_count < POOL_MIN_BITS, + CRNG_RESEED_INTERVAL); + mix_pool_bytes(buffer, count); + credit_entropy_bits(entropy); +} +EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); + +/* + * Handle random seed passed by bootloader. + * If the seed is trustworthy, it would be regarded as hardware RNGs. Othe= rwise + * it would be regarded as device data. + * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER. + */ +void add_bootloader_randomness(const void *buf, size_t size) +{ + if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER)) + add_hwgenerator_randomness(buf, size, size * 8); + else + add_device_randomness(buf, size); +} +EXPORT_SYMBOL_GPL(add_bootloader_randomness); + +struct fast_pool { + union { + u32 pool32[4]; + u64 pool64[2]; + }; + unsigned long last; + u16 reg_idx; + u8 count; +}; + +/* + * This is a fast mixing routine used by the interrupt randomness + * collector. It's hardcoded for an 128 bit pool and assumes that any + * locks that might be needed are taken by the caller. 
+ */ +static void fast_mix(u32 pool[4]) +{ + u32 a =3D pool[0], b =3D pool[1]; + u32 c =3D pool[2], d =3D pool[3]; + + a +=3D b; c +=3D d; + b =3D rol32(b, 6); d =3D rol32(d, 27); + d ^=3D a; b ^=3D c; + + a +=3D b; c +=3D d; + b =3D rol32(b, 16); d =3D rol32(d, 14); + d ^=3D a; b ^=3D c; + + a +=3D b; c +=3D d; + b =3D rol32(b, 6); d =3D rol32(d, 27); + d ^=3D a; b ^=3D c; + + a +=3D b; c +=3D d; + b =3D rol32(b, 16); d =3D rol32(d, 14); + d ^=3D a; b ^=3D c; + + pool[0] =3D a; pool[1] =3D b; + pool[2] =3D c; pool[3] =3D d; +} + static DEFINE_PER_CPU(struct fast_pool, irq_randomness); =20 static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) @@ -1259,22 +1423,11 @@ void add_interrupt_randomness(int irq) =20 fast_pool->count =3D 0; =20 - /* award one bit for the contents of the fast pool */ + /* Award one bit for the contents of the fast pool. */ credit_entropy_bits(1); } EXPORT_SYMBOL_GPL(add_interrupt_randomness); =20 -#ifdef CONFIG_BLOCK -void add_disk_randomness(struct gendisk *disk) -{ - if (!disk || !disk->random) - return; - /* first major is 1, so we get >=3D 0x200 here */ - add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); -} -EXPORT_SYMBOL_GPL(add_disk_randomness); -#endif - /* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. Even if the timer is running on another @@ -1324,73 +1477,6 @@ static void try_to_generate_entropy(void mix_pool_bytes(&stack.now, sizeof(stack.now)); } =20 -static bool trust_cpu __ro_after_init =3D IS_ENABLED(CONFIG_RANDOM_TRUST_C= PU); -static int __init parse_trust_cpu(char *arg) -{ - return kstrtobool(arg, &trust_cpu); -} -early_param("random.trust_cpu", parse_trust_cpu); - -/* - * Note that setup_arch() may call add_device_randomness() - * long before we get here. This allows seeding of the pools - * with some platform dependent data very early in the boot - * process. But it limits our options here. 
We must use - * statically allocated structures that already have all - * initializations complete at compile time. We should also - * take care not to overwrite the precious per platform data - * we were given. - */ -int __init rand_initialize(void) -{ - size_t i; - ktime_t now =3D ktime_get_real(); - bool arch_init =3D true; - unsigned long rv; - - for (i =3D 0; i < BLAKE2S_BLOCK_SIZE; i +=3D sizeof(rv)) { - if (!arch_get_random_seed_long_early(&rv) && - !arch_get_random_long_early(&rv)) { - rv =3D random_get_entropy(); - arch_init =3D false; - } - mix_pool_bytes(&rv, sizeof(rv)); - } - mix_pool_bytes(&now, sizeof(now)); - mix_pool_bytes(utsname(), sizeof(*(utsname()))); - - extract_entropy(base_crng.key, sizeof(base_crng.key)); - ++base_crng.generation; - - if (arch_init && trust_cpu && crng_init < 2) { - crng_init =3D 2; - pr_notice("crng init done (trusting CPU's manufacturer)\n"); - } - - if (ratelimit_disable) { - urandom_warning.interval =3D 0; - unseeded_warning.interval =3D 0; - } - return 0; -} - -#ifdef CONFIG_BLOCK -void rand_initialize_disk(struct gendisk *disk) -{ - struct timer_rand_state *state; - - /* - * If kzalloc returns null, we just won't use that entropy - * source. - */ - state =3D kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); - if (state) { - state->last_time =3D INITIAL_JIFFIES; - disk->random =3D state; - } -} -#endif - static ssize_t urandom_read(struct file *file, char __user *buf, size_t nb= ytes, loff_t *ppos) { @@ -1685,47 +1771,3 @@ static int __init random_sysctls_init(vo } device_initcall(random_sysctls_init); #endif /* CONFIG_SYSCTL */ - -/* Interface for in-kernel drivers of true hardware RNGs. - * Those devices may produce endless random bits and will be throttled - * when our pool is full. 
- */ -void add_hwgenerator_randomness(const void *buffer, size_t count, - size_t entropy) -{ - if (unlikely(crng_init =3D=3D 0)) { - size_t ret =3D crng_fast_load(buffer, count); - mix_pool_bytes(buffer, ret); - count -=3D ret; - buffer +=3D ret; - if (!count || crng_init =3D=3D 0) - return; - } - - /* Throttle writing if we're above the trickle threshold. - * We'll be woken up again once below POOL_MIN_BITS, when - * the calling thread is about to terminate, or once - * CRNG_RESEED_INTERVAL has elapsed. - */ - wait_event_interruptible_timeout(random_write_wait, - !system_wq || kthread_should_stop() || - input_pool.entropy_count < POOL_MIN_BITS, - CRNG_RESEED_INTERVAL); - mix_pool_bytes(buffer, count); - credit_entropy_bits(entropy); -} -EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); - -/* Handle random seed passed by bootloader. - * If the seed is trustworthy, it would be regarded as hardware RNGs. Othe= rwise - * it would be regarded as device data. - * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER. 
- */
-void add_bootloader_randomness(const void *buf, size_t size)
-{
-	if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER))
-		add_hwgenerator_randomness(buf, size, size * 8);
-	else
-		add_device_randomness(buf, size);
-}
-EXPORT_SYMBOL_GPL(add_bootloader_randomness);

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Eric Biggers, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 035/111] random: group userspace read/write functions
Date: Fri, 27 May 2022 10:49:07 +0200
Message-Id: <20220527084824.422536111@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit a6adf8e7a605250b911e94793fd077933709ff9e upstream.

This pulls all of the userspace read/write-focused functions into the
fifth labeled section.

No functional changes.

Cc: Theodore Ts'o
Reviewed-by: Eric Biggers
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 125 ++++++++++++++++++++++++++++++----------------
 1 file changed, 77 insertions(+), 48 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1477,30 +1477,61 @@ static void try_to_generate_entropy(void
 	mix_pool_bytes(&stack.now, sizeof(stack.now));
 }

-static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes,
-			    loff_t *ppos)
+
+/**********************************************************************
+ *
+ * Userspace reader/writer interfaces.
+ *
+ * getrandom(2) is the primary modern interface into the RNG and should
+ * be used in preference to anything else.
+ *
+ * Reading from /dev/random has the same functionality as calling
+ * getrandom(2) with flags=0. In earlier versions, however, it had
+ * vastly different semantics and should therefore be avoided, to
+ * prevent backwards compatibility issues.
+ *
+ * Reading from /dev/urandom has the same functionality as calling
+ * getrandom(2) with flags=GRND_INSECURE. Because it does not block
+ * waiting for the RNG to be ready, it should not be used.
+ *
+ * Writing to either /dev/random or /dev/urandom adds entropy to
+ * the input pool but does not credit it.
+ *
+ * Polling on /dev/random indicates when the RNG is initialized, on
+ * the read side, and when it wants new entropy, on the write side.
+ *
+ * Both /dev/random and /dev/urandom have the same set of ioctls for
+ * adding entropy, getting the entropy count, zeroing the count, and
+ * reseeding the crng.
+ *
+ **********************************************************************/
+
+SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int,
+		flags)
 {
-	static int maxwarn = 10;
+	if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
+		return -EINVAL;

-	if (!crng_ready() && maxwarn > 0) {
-		maxwarn--;
-		if (__ratelimit(&urandom_warning))
-			pr_notice("%s: uninitialized urandom read (%zd bytes read)\n",
-				  current->comm, nbytes);
-	}
+	/*
+	 * Requesting insecure and blocking randomness at the same time makes
+	 * no sense.
+	 */
+	if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
+		return -EINVAL;

-	return get_random_bytes_user(buf, nbytes);
-}
+	if (count > INT_MAX)
+		count = INT_MAX;

-static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes,
-			   loff_t *ppos)
-{
-	int ret;
+	if (!(flags & GRND_INSECURE) && !crng_ready()) {
+		int ret;

-	ret = wait_for_random_bytes();
-	if (ret != 0)
-		return ret;
-	return get_random_bytes_user(buf, nbytes);
+		if (flags & GRND_NONBLOCK)
+			return -EAGAIN;
+		ret = wait_for_random_bytes();
+		if (unlikely(ret))
+			return ret;
+	}
+	return get_random_bytes_user(buf, count);
 }

 static __poll_t random_poll(struct file *file, poll_table *wait)
@@ -1552,6 +1583,32 @@ static ssize_t random_write(struct file
 	return (ssize_t)count;
 }

+static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes,
+			    loff_t *ppos)
+{
+	static int maxwarn = 10;
+
+	if (!crng_ready() && maxwarn > 0) {
+		maxwarn--;
+		if (__ratelimit(&urandom_warning))
+			pr_notice("%s: uninitialized urandom read (%zd bytes read)\n",
+				  current->comm, nbytes);
+	}
+
+	return get_random_bytes_user(buf, nbytes);
+}
+
+static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes,
+			   loff_t *ppos)
+{
+	int ret;
+
+	ret = wait_for_random_bytes();
+	if (ret != 0)
+		return ret;
+	return get_random_bytes_user(buf, nbytes);
+}
+
 static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
 {
 	int size, ent_count;
@@ -1560,7 +1617,7 @@ static long random_ioctl(struct file *f,

 	switch (cmd) {
 	case RNDGETENTCNT:
-		/* inherently racy, no point locking */
+		/* Inherently racy, no point locking. */
 		if (put_user(input_pool.entropy_count, p))
 			return -EFAULT;
 		return 0;
@@ -1636,34 +1693,6 @@ const struct file_operations urandom_fop
 	.llseek = noop_llseek,
 };

-SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int,
-		flags)
-{
-	if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
-		return -EINVAL;
-
-	/*
-	 * Requesting insecure and blocking randomness at the same time makes
-	 * no sense.
-	 */
-	if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
-		return -EINVAL;
-
-	if (count > INT_MAX)
-		count = INT_MAX;
-
-	if (!(flags & GRND_INSECURE) && !crng_ready()) {
-		int ret;
-
-		if (flags & GRND_NONBLOCK)
-			return -EAGAIN;
-		ret = wait_for_random_bytes();
-		if (unlikely(ret))
-			return ret;
-	}
-	return get_random_bytes_user(buf, count);
-}
-
 /********************************************************************
  *
  * Sysctl interface

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 036/111] random: group sysctl functions
Date: Fri, 27 May 2022 10:49:08 +0200
Message-Id: <20220527084824.550347557@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 0deff3c43206c24e746b1410f11125707ad3040e upstream.

This pulls all of the sysctl-focused functions into the sixth labeled
section.

No functional changes.

Cc: Theodore Ts'o
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1693,9 +1693,34 @@ const struct file_operations urandom_fop
 	.llseek = noop_llseek,
 };

+
 /********************************************************************
  *
- * Sysctl interface
+ * Sysctl interface.
+ *
+ * These are partly unused legacy knobs with dummy values to not break
+ * userspace and partly still useful things. They are usually accessible
+ * in /proc/sys/kernel/random/ and are as follows:
+ *
+ * - boot_id - a UUID representing the current boot.
+ *
+ * - uuid - a random UUID, different each time the file is read.
+ *
+ * - poolsize - the number of bits of entropy that the input pool can
+ *   hold, tied to the POOL_BITS constant.
+ *
+ * - entropy_avail - the number of bits of entropy currently in the
+ *   input pool. Always <= poolsize.
+ *
+ * - write_wakeup_threshold - the amount of entropy in the input pool
+ *   below which write polls to /dev/random will unblock, requesting
+ *   more entropy, tied to the POOL_MIN_BITS constant. It is writable
+ *   to avoid breaking old userspaces, but writing to it does not
+ *   change any behavior of the RNG.
+ *
+ * - urandom_min_reseed_secs - fixed to the meaningless value "60".
+ *   It is writable to avoid breaking old userspaces, but writing
+ *   to it does not change any behavior of the RNG.
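[Editor's note: the knobs documented in the comment above can be inspected directly from userspace. A minimal sketch, assuming a Linux system with /proc mounted; the exact values differ per machine, and `uuid` changes on every read while `boot_id` stays fixed for the current boot:]

```shell
#!/bin/sh
# Dump the sysctl knobs described in the section comment above.
d=/proc/sys/kernel/random
for f in boot_id uuid poolsize entropy_avail write_wakeup_threshold urandom_min_reseed_secs; do
    [ -r "$d/$f" ] && printf '%s: %s\n' "$f" "$(cat "$d/$f")"
done
```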
  *
  ********************************************************************/

@@ -1703,8 +1728,8 @@ const struct file_operations urandom_fop

 #include

-static int random_min_urandom_seed = 60;
-static int random_write_wakeup_bits = POOL_MIN_BITS;
+static int sysctl_random_min_urandom_seed = 60;
+static int sysctl_random_write_wakeup_bits = POOL_MIN_BITS;
 static int sysctl_poolsize = POOL_BITS;
 static char sysctl_bootid[16];

@@ -1761,14 +1786,14 @@ static struct ctl_table random_table[] =
 	},
 	{
 		.procname = "write_wakeup_threshold",
-		.data = &random_write_wakeup_bits,
+		.data = &sysctl_random_write_wakeup_bits,
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec,
 	},
 	{
 		.procname = "urandom_min_reseed_secs",
-		.data = &random_min_urandom_seed,
+		.data = &sysctl_random_min_urandom_seed,
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec,
@@ -1799,4 +1824,4 @@ static int __init random_sysctls_init(vo
 	return 0;
 }
 device_initcall(random_sysctls_init);
-#endif /* CONFIG_SYSCTL */
+#endif

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Eric Biggers, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 037/111] random: rewrite header introductory comment
Date: Fri, 27 May 2022 10:49:09 +0200
Message-Id: <20220527084824.699721210@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 5f75d9f3babea8ae0a2d06724656874f41d317f5 upstream.

Now that we've re-documented the various sections, we can remove the
outdated text here and replace it with a high-level overview.

Cc: Theodore Ts'o
Reviewed-by: Eric Biggers
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 179 +++++----------------------------------------
 1 file changed, 19 insertions(+), 160 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2,168 +2,27 @@
 /*
  * Copyright (C) 2017-2022 Jason A. Donenfeld. All Rights Reserved.
  * Copyright Matt Mackall, 2003, 2004, 2005
- * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
- * rights reserved.
- */
-
-/*
- * Exported interfaces ---- output
- * ===============================
- *
- * There are four exported interfaces; two for use within the kernel,
- * and two for use from userspace.
- *
- * Exported interfaces ---- userspace output
- * -----------------------------------------
- *
- * The userspace interfaces are two character devices /dev/random and
- * /dev/urandom. /dev/random is suitable for use when very high
- * quality randomness is desired (for example, for key generation or
- * one-time pads), as it will only return a maximum of the number of
- * bits of randomness (as estimated by the random number generator)
- * contained in the entropy pool.
- *
- * The /dev/urandom device does not have this limit, and will return
- * as many bytes as are requested. As more and more random bytes are
- * requested without giving time for the entropy pool to recharge,
- * this will result in random numbers that are merely cryptographically
- * strong. For many applications, however, this is acceptable.
- *
- * Exported interfaces ---- kernel output
- * --------------------------------------
- *
- * The primary kernel interfaces are:
- *
- *	void get_random_bytes(void *buf, size_t nbytes);
- *	u32 get_random_u32()
- *	u64 get_random_u64()
- *	unsigned int get_random_int()
- *	unsigned long get_random_long()
- *
- * These interfaces will return the requested number of random bytes
- * into the given buffer or as a return value. This is equivalent to a
- * read from /dev/urandom. The get_random_{u32,u64,int,long}() family
- * of functions may be higher performance for one-off random integers,
- * because they do a bit of buffering.
- *
- * prandom_u32()
- * -------------
- *
- * For even weaker applications, see the pseudorandom generator
- * prandom_u32(), prandom_max(), and prandom_bytes(). If the random
- * numbers aren't security-critical at all, these are *far* cheaper.
- * Useful for self-tests, random error simulation, randomized backoffs,
- * and any other application where you trust that nobody is trying to
- * maliciously mess with you by guessing the "random" numbers.
- *
- * Exported interfaces ---- input
- * ==============================
- *
- * The current exported interfaces for gathering environmental noise
- * from the devices are:
- *
- *	void add_device_randomness(const void *buf, size_t size);
- *	void add_input_randomness(unsigned int type, unsigned int code,
- *				  unsigned int value);
- *	void add_interrupt_randomness(int irq);
- *	void add_disk_randomness(struct gendisk *disk);
- *	void add_hwgenerator_randomness(const void *buffer, size_t count,
- *					size_t entropy);
- *	void add_bootloader_randomness(const void *buf, size_t size);
- *
- * add_device_randomness() is for adding data to the random pool that
- * is likely to differ between two devices (or possibly even per boot).
- * This would be things like MAC addresses or serial numbers, or the
- * read-out of the RTC. This does *not* add any actual entropy to the
- * pool, but it initializes the pool to different values for devices
- * that might otherwise be identical and have very little entropy
- * available to them (particularly common in the embedded world).
- *
- * add_input_randomness() uses the input layer interrupt timing, as well as
- * the event type information from the hardware.
- *
- * add_interrupt_randomness() uses the interrupt timing as random
- * inputs to the entropy pool. Using the cycle counters and the irq source
- * as inputs, it feeds the randomness roughly once a second.
- *
- * add_disk_randomness() uses what amounts to the seek time of block
- * layer request events, on a per-disk_devt basis, as input to the
- * entropy pool. Note that high-speed solid state drives with very low
- * seek times do not make for good sources of entropy, as their seek
- * times are usually fairly consistent.
- *
- * All of these routines try to estimate how many bits of randomness a
- * particular randomness source. They do this by keeping track of the
- * first and second order deltas of the event timings.
- *
- * add_hwgenerator_randomness() is for true hardware RNGs, and will credit
- * entropy as specified by the caller. If the entropy pool is full it will
- * block until more entropy is needed.
- *
- * add_bootloader_randomness() is the same as add_hwgenerator_randomness() or
- * add_device_randomness(), depending on whether or not the configuration
- * option CONFIG_RANDOM_TRUST_BOOTLOADER is set.
- *
- * Ensuring unpredictability at system startup
- * ============================================
- *
- * When any operating system starts up, it will go through a sequence
- * of actions that are fairly predictable by an adversary, especially
- * if the start-up does not involve interaction with a human operator.
- * This reduces the actual number of bits of unpredictability in the
- * entropy pool below the value in entropy_count. In order to
- * counteract this effect, it helps to carry information in the
- * entropy pool across shut-downs and start-ups. To do this, put the
- * following lines an appropriate script which is run during the boot
- * sequence:
- *
- *	echo "Initializing random number generator..."
- *	random_seed=/var/run/random-seed
- *	# Carry a random seed from start-up to start-up
- *	# Load and then save the whole entropy pool
- *	if [ -f $random_seed ]; then
- *		cat $random_seed >/dev/urandom
- *	else
- *		touch $random_seed
- *	fi
- *	chmod 600 $random_seed
- *	dd if=/dev/urandom of=$random_seed count=1 bs=512
- *
- * and the following lines in an appropriate script which is run as
- * the system is shutdown:
- *
- *	# Carry a random seed from shut-down to start-up
- *	# Save the whole entropy pool
- *	echo "Saving random seed..."
- *	random_seed=/var/run/random-seed
- *	touch $random_seed
- *	chmod 600 $random_seed
- *	dd if=/dev/urandom of=$random_seed count=1 bs=512
- *
- * For example, on most modern systems using the System V init
- * scripts, such code fragments would be found in
- * /etc/rc.d/init.d/random. On older Linux systems, the correct script
- * location might be in /etc/rcb.d/rc.local or /etc/rc.d/rc.0.
- *
- * Effectively, these commands cause the contents of the entropy pool
- * to be saved at shut-down time and reloaded into the entropy pool at
- * start-up. (The 'dd' in the addition to the bootup script is to
- * make sure that /etc/random-seed is different for every start-up,
- * even if the system crashes without executing rc.0.) Even with
- * complete knowledge of the start-up activities, predicting the state
- * of the entropy pool requires knowledge of the previous history of
- * the system.
- *
- * Configuring the /dev/random driver under Linux
- * ==============================================
+ * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All rights reserved.
  *
- * The /dev/random driver under Linux uses minor numbers 8 and 9 of
- * the /dev/mem major number (#1). So if your system does not have
- * /dev/random and /dev/urandom created already, they can be created
- * by using the commands:
+ * This driver produces cryptographically secure pseudorandom data. It is divided
+ * into roughly six sections, each with a section header:
  *
- *	mknod /dev/random c 1 8
- *	mknod /dev/urandom c 1 9
+ * - Initialization and readiness waiting.
+ * - Fast key erasure RNG, the "crng".
+ * - Entropy accumulation and extraction routines.
+ * - Entropy collection routines.
+ * - Userspace reader/writer interfaces.
+ * - Sysctl interface.
+ *
+ * The high level overview is that there is one input pool, into which
+ * various pieces of data are hashed. Some of that data is then "credited" as
+ * having a certain number of bits of entropy. When enough bits of entropy are
+ * available, the hash is finalized and handed as a key to a stream cipher that
+ * expands it indefinitely for various consumers. This key is periodically
+ * refreshed as the various entropy collectors, described below, add data to the
+ * input pool and credit it. There is currently no Fortuna-like scheduler
+ * involved, which can lead to malicious entropy sources causing a premature
+ * reseed, and the entropy estimates are, at best, conservative guesses.
 */

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Peter Zijlstra, Theodore Tso, Jonathan Neuschäfer, Sebastian Andrzej Siewior, Sultan Alsawaf, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 038/111] random: defer fast pool mixing to worker
Date: Fri, 27 May 2022 10:49:10 +0200
Message-Id: <20220527084824.877637447@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 58340f8e952b613e0ead0bed58b97b05bf4743c5 upstream.

On PREEMPT_RT, it's problematic to take spinlocks from hard irq
handlers. We can fix this by deferring to a workqueue the dumping of
the fast pool into the input pool.

We accomplish this with some careful rules on fast_pool->count:

  - When it's incremented to >= 64, we schedule the work.
  - If the top bit is set, we never schedule the work, even if >= 64.
  - The worker is responsible for setting it back to 0 when it's done.

There are two small issues around using workqueues for this purpose that
we work around.

The first issue is that mix_interrupt_randomness() might be migrated to
another CPU during CPU hotplug. This issue is rectified by checking that
it hasn't been migrated (after disabling irqs). If it has been migrated,
then we set the count to zero, so that when the CPU comes online again,
it can requeue the work. As part of this, we switch to using an
atomic_t, so that the increment in the irq handler doesn't wipe out the
zeroing if the CPU comes back online while this worker is running.

The second issue is that, though relatively minor in effect, we probably
want to make sure we get a consistent view of the pool onto the stack,
in case it's interrupted by an irq while reading. To do this, we don't
reenable irqs until after the copy. There are only 18 instructions
between the cli and sti, so this is a pretty tiny window.
Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Theodore Ts'o Cc: Jonathan Neusch=C3=A4fer Acked-by: Sebastian Andrzej Siewior Reviewed-by: Sultan Alsawaf Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 63 ++++++++++++++++++++++++++++++++++++++-------= ----- 1 file changed, 49 insertions(+), 14 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1178,9 +1178,10 @@ struct fast_pool { u32 pool32[4]; u64 pool64[2]; }; + struct work_struct mix; unsigned long last; + atomic_t count; u16 reg_idx; - u8 count; }; =20 /* @@ -1230,12 +1231,49 @@ static u32 get_reg(struct fast_pool *f, return *ptr; } =20 +static void mix_interrupt_randomness(struct work_struct *work) +{ + struct fast_pool *fast_pool =3D container_of(work, struct fast_pool, mix); + u32 pool[4]; + + /* Check to see if we're running on the wrong CPU due to hotplug. */ + local_irq_disable(); + if (fast_pool !=3D this_cpu_ptr(&irq_randomness)) { + local_irq_enable(); + /* + * If we are unlucky enough to have been moved to another CPU, + * during CPU hotplug while the CPU was shutdown then we set + * our count to zero atomically so that when the CPU comes + * back online, it can enqueue work again. The _release here + * pairs with the atomic_inc_return_acquire in + * add_interrupt_randomness(). + */ + atomic_set_release(&fast_pool->count, 0); + return; + } + + /* + * Copy the pool to the stack so that the mixer always has a + * consistent view, before we reenable irqs again. 
+	 */
+	memcpy(pool, fast_pool->pool32, sizeof(pool));
+	atomic_set(&fast_pool->count, 0);
+	fast_pool->last = jiffies;
+	local_irq_enable();
+
+	mix_pool_bytes(pool, sizeof(pool));
+	credit_entropy_bits(1);
+	memzero_explicit(pool, sizeof(pool));
+}
+
 void add_interrupt_randomness(int irq)
 {
+	enum { MIX_INFLIGHT = 1U << 31 };
 	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
 	struct pt_regs *regs = get_irq_regs();
 	unsigned long now = jiffies;
 	cycles_t cycles = random_get_entropy();
+	unsigned int new_count;
 
 	if (cycles == 0)
 		cycles = get_reg(fast_pool, regs);
@@ -1255,12 +1293,13 @@ void add_interrupt_randomness(int irq)
 	}
 
 	fast_mix(fast_pool->pool32);
-	++fast_pool->count;
+	/* The _acquire here pairs with the atomic_set_release in mix_interrupt_randomness(). */
+	new_count = (unsigned int)atomic_inc_return_acquire(&fast_pool->count);
 
 	if (unlikely(crng_init == 0)) {
-		if (fast_pool->count >= 64 &&
+		if (new_count >= 64 &&
 		    crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) {
-			fast_pool->count = 0;
+			atomic_set(&fast_pool->count, 0);
 			fast_pool->last = now;
 			if (spin_trylock(&input_pool.lock)) {
 				_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
@@ -1270,20 +1309,16 @@ void add_interrupt_randomness(int irq)
 
-	if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ))
+	if (new_count & MIX_INFLIGHT)
 		return;
 
-	if (!spin_trylock(&input_pool.lock))
+	if (new_count < 64 && !time_after(now, fast_pool->last + HZ))
 		return;
 
-	fast_pool->last = now;
-	_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
-	spin_unlock(&input_pool.lock);
-
-	fast_pool->count = 0;
-
-	/* Award one bit for the contents of the fast pool.
*/ - credit_entropy_bits(1); + if (unlikely(!fast_pool->mix.func)) + INIT_WORK(&fast_pool->mix, mix_interrupt_randomness); + atomic_or(MIX_INFLIGHT, &fast_pool->count); + queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix); } EXPORT_SYMBOL_GPL(add_interrupt_randomness); =20 From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D0F4C433EF for ; Fri, 27 May 2022 09:11:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350406AbiE0JEz (ORCPT ); Fri, 27 May 2022 05:04:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52726 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350780AbiE0JAk (ORCPT ); Fri, 27 May 2022 05:00:40 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 32E328A31B; Fri, 27 May 2022 01:57:01 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 82D5E61D7F; Fri, 27 May 2022 08:57:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 47272C385B8; Fri, 27 May 2022 08:56:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653641819; bh=vtFNrDwG4XkEmWxVaHM5wkzf3qnqfc1b2p90edXAux0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=zYaK13xWu+GsSRzJW5mqxUmuzlBjX662+/c5nYZnR8gRTBGuscPWh7vHBZcgkE+Eu vWAkA40+70PbQUrdGkx7KRZ7LEVH6uff1RM3bl61awqpwu8/0qEeapzWPZ73Aioxte 1hSd5aZEm21lRjajVmxs6N1Y+QztmuooQ1n+aJjY= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , 
stable@vger.kernel.org, Eric Biggers , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 039/111] random: do not take pool spinlock at boot Date: Fri, 27 May 2022 10:49:11 +0200 Message-Id: <20220527084825.017037657@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit afba0b80b977b2a8f16234f2acd982f82710ba33 upstream. Since rand_initialize() is run while interrupts are still off and nothing else is running, we don't need to repeatedly take and release the pool spinlock, especially in the RDSEED loop. Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -978,10 +978,10 @@ int __init rand_initialize(void) rv =3D random_get_entropy(); arch_init =3D false; } - mix_pool_bytes(&rv, sizeof(rv)); + _mix_pool_bytes(&rv, sizeof(rv)); } - mix_pool_bytes(&now, sizeof(now)); - mix_pool_bytes(utsname(), sizeof(*(utsname()))); + _mix_pool_bytes(&now, sizeof(now)); + _mix_pool_bytes(utsname(), sizeof(*(utsname()))); =20 extract_entropy(base_crng.key, sizeof(base_crng.key)); ++base_crng.generation; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B31ACC433FE for ; Fri, 27 May 2022 11:40:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350977AbiE0LkW (ORCPT ); Fri, 27 May 2022 07:40:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45632 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351559AbiE0Ljl (ORCPT ); Fri, 27 May 2022 07:39:41 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8921D132A34; Fri, 27 May 2022 04:38:41 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 05F45B824D7; Fri, 27 May 2022 11:38:40 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5AAE6C385A9; Fri, 27 May 2022 11:38:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=linuxfoundation.org; s=korg; t=1653651518; bh=+2Lf1VwOBe85sA9dKLxQZnyNBJt8wLoU6e4Vf6mYvkc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DYK3M1KVUr519+zl/6G10SlJDFGMxOdzhYRK+dcmcGh5yUFZv3lh1LzF2HC1vimem fZI0h1zD9Cd3brQja/g+OQ0h+kPcFM2WJCstio1D8ZBf++U8wknjy9W7+teD3+rdFF x64MBQ1sR1LZwPzveitPvVIIWzTR4zMv0+KHF5/4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 040/111] random: unify early init crng load accounting Date: Fri, 27 May 2022 10:49:12 +0200 Message-Id: <20220527084825.140968745@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit da792c6d5f59a76c10a310c5d4c93428fd18f996 upstream. crng_fast_load() and crng_slow_load() have different semantics: - crng_fast_load() xors and accounts with crng_init_cnt. - crng_slow_load() hashes and doesn't account. However add_hwgenerator_randomness() can afford to hash (it's called from a kthread), and it should account. Additionally, ones that can afford to hash don't need to take a trylock but can take a normal lock. So, we combine these into one function, crng_pre_init_inject(), which allows us to control these in a uniform way. This will make it simpler later to simplify this all down when the time comes for that. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 114 +++++++++++++++++++++++++--------------------= ----- 1 file changed, 59 insertions(+), 55 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -386,7 +386,7 @@ static void crng_make_state(u32 chacha_s * For the fast path, we check whether we're ready, unlocked first, and * then re-check once locked later. In the case where we're really not * ready, we do fast key erasure with the base_crng directly, because - * this is what crng_{fast,slow}_load mutate during early init. + * this is what crng_pre_init_inject() mutates during early init. */ if (unlikely(!crng_ready())) { bool ready; @@ -437,72 +437,75 @@ static void crng_make_state(u32 chacha_s } =20 /* - * This function is for crng_init =3D=3D 0 only. + * This function is for crng_init =3D=3D 0 only. It loads entropy directly + * into the crng's key, without going through the input pool. It is, + * generally speaking, not very safe, but we use this only at early + * boot time when it's better to have something there rather than + * nothing. + * + * There are two paths, a slow one and a fast one. The slow one + * hashes the input along with the current key. The fast one simply + * xors it in, and should only be used from interrupt context. + * + * If account is set, then the crng_init_cnt counter is incremented. + * This shouldn't be set by functions like add_device_randomness(), + * where we can't trust the buffer passed to it is guaranteed to be + * unpredictable (so it might not have any entropy at all). * - * crng_fast_load() can be called by code in the interrupt service - * path. So we can't afford to dilly-dally. Returns the number of - * bytes processed from cp. + * Returns the number of bytes processed from input, which is bounded + * by CRNG_INIT_CNT_THRESH if account is true. 
 */
-static size_t crng_fast_load(const void *cp, size_t len)
+static size_t crng_pre_init_inject(const void *input, size_t len,
+				   bool fast, bool account)
 {
 	static int crng_init_cnt = 0;
 	unsigned long flags;
-	const u8 *src = (const u8 *)cp;
-	size_t ret = 0;
 
-	if (!spin_trylock_irqsave(&base_crng.lock, flags))
-		return 0;
+	if (fast) {
+		if (!spin_trylock_irqsave(&base_crng.lock, flags))
+			return 0;
+	} else {
+		spin_lock_irqsave(&base_crng.lock, flags);
+	}
+
 	if (crng_init != 0) {
 		spin_unlock_irqrestore(&base_crng.lock, flags);
 		return 0;
 	}
-	while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
-		base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^= *src;
-		src++; crng_init_cnt++; len--; ret++;
-	}
-	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
-		++base_crng.generation;
-		crng_init = 1;
-	}
-	spin_unlock_irqrestore(&base_crng.lock, flags);
-	if (crng_init == 1)
-		pr_notice("fast init done\n");
-	return ret;
-}
 
-/*
- * This function is for crng_init == 0 only.
- *
- * crng_slow_load() is called by add_device_randomness, which has two
- * attributes. (1) We can't trust the buffer passed to it is
- * guaranteed to be unpredictable (so it might not have any entropy at
- * all), and (2) it doesn't have the performance constraints of
- * crng_fast_load().
- *
- * So, we simply hash the contents in with the current key. Finally,
- * we do *not* advance crng_init_cnt since buffer we may get may be
- * something like a fixed DMI table (for example), which might very
- * well be unique to the machine, but is otherwise unvarying.
- */
-static void crng_slow_load(const void *cp, size_t len)
-{
-	unsigned long flags;
-	struct blake2s_state hash;
+	if (account)
+		len = min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
 
-	blake2s_init(&hash, sizeof(base_crng.key));
-
-	if (!spin_trylock_irqsave(&base_crng.lock, flags))
-		return;
-	if (crng_init != 0) {
-		spin_unlock_irqrestore(&base_crng.lock, flags);
-		return;
+	if (fast) {
+		const u8 *src = input;
+		size_t i;
+
+		for (i = 0; i < len; ++i)
+			base_crng.key[(crng_init_cnt + i) %
+				      sizeof(base_crng.key)] ^= src[i];
+	} else {
+		struct blake2s_state hash;
+
+		blake2s_init(&hash, sizeof(base_crng.key));
+		blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
+		blake2s_update(&hash, input, len);
+		blake2s_final(&hash, base_crng.key);
+	}
+
+	if (account) {
+		crng_init_cnt += len;
+		if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+			++base_crng.generation;
+			crng_init = 1;
+		}
 	}
 
-	blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
-	blake2s_update(&hash, cp, len);
-	blake2s_final(&hash, base_crng.key);
-	spin_unlock_irqrestore(&base_crng.lock, flags);
+
+	if (crng_init == 1)
+		pr_notice("fast init done\n");
+
+	return len;
 }
 
 static void _get_random_bytes(void *buf, size_t nbytes)
@@ -1018,7 +1021,7 @@ void add_device_randomness(const void *b
 	unsigned long flags;
 
 	if (!crng_ready() && size)
-		crng_slow_load(buf, size);
+		crng_pre_init_inject(buf, size, false, false);
 
 	spin_lock_irqsave(&input_pool.lock, flags);
 	_mix_pool_bytes(buf, size);
@@ -1135,7 +1138,7 @@ void add_hwgenerator_randomness(const vo
 				size_t entropy)
 {
 	if (unlikely(crng_init == 0)) {
-		size_t ret = crng_fast_load(buffer, count);
+		size_t ret = crng_pre_init_inject(buffer, count, false, true);
 		mix_pool_bytes(buffer, ret);
 		count -= ret;
 		buffer += ret;
@@ -1298,7 +1301,8 @@ void add_interrupt_randomness(int irq)
 
 	if (unlikely(crng_init == 0)) {
 		if (new_count >= 64 &&
-		    crng_fast_load(fast_pool->pool32,
sizeof(fast_pool->pool32)) > 0) { + crng_pre_init_inject(fast_pool->pool32, sizeof(fast_pool->pool32), + true, true) > 0) { atomic_set(&fast_pool->count, 0); fast_pool->last =3D now; if (spin_trylock(&input_pool.lock)) { From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7298EC433FE for ; Fri, 27 May 2022 11:38:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234414AbiE0Li0 (ORCPT ); Fri, 27 May 2022 07:38:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45158 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351392AbiE0LiG (ORCPT ); Fri, 27 May 2022 07:38:06 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 10E376EB0F; Fri, 27 May 2022 04:37:51 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 61DA361CE2; Fri, 27 May 2022 11:37:51 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 714B9C3411A; Fri, 27 May 2022 11:37:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651470; bh=0MYS7OwGSsxOj9r4dJeXvJnS30+TdN5SqX5eby8CsFw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=YAoDOc0rTPq4AR7UCkkG6R3zzJeuC5yqZsu8uqtLuwf+2C7tvFhRXPXW1E/rL/1A9 jAZlT/VfTU0bf9Ok82BbPRVeTFX1P3GQVSafRas/godTOOCfMbr80GKE3etKo7b2BD gUxx4pYLqWxsC58tXB8STrvDZFEg/byQVj392XCM= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , Eric Biggers , 
"Jason A. Donenfeld" Subject: [PATCH 5.17 041/111] random: check for crng_init == 0 in add_device_randomness() Date: Fri, 27 May 2022 10:49:13 +0200 Message-Id: <20220527084825.267606569@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 1daf2f387652bf3a7044aea042f5023b3f6b189b upstream. This has no real functional change, as crng_pre_init_inject() (and before that, crng_slow_init()) always checks for =3D=3D 0, not >=3D 2. So correct the outer unlocked change to reflect that. Before this used crng_ready(), which was not correct. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1020,7 +1020,7 @@ void add_device_randomness(const void *b unsigned long time =3D random_get_entropy() ^ jiffies; unsigned long flags; =20 - if (!crng_ready() && size) + if (crng_init =3D=3D 0 && size) crng_pre_init_inject(buf, size, false, false); =20 spin_lock_irqsave(&input_pool.lock, flags); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 858CDC433EF for ; Fri, 27 May 2022 11:39:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351475AbiE0Lji (ORCPT ); Fri, 27 May 2022 07:39:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351478AbiE0Lir (ORCPT ); Fri, 27 May 2022 07:38:47 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EE2451157DB; Fri, 27 May 2022 04:38:23 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 11CADB824D9; Fri, 27 May 2022 11:38:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 526B0C36AF5; Fri, 27 May 2022 11:38:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651500; bh=16LFNup+v25/VvNo3CxgLkxi4CTpG2cxX/qybxkt+Iw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=g5IP7/G5zu3RFbiq9RdD6jAHCmEP9nnUA/UjnCc5zWMDVloGdCsMhBEaRr/wJY2gk sectZcGEgEcUzuFDJDAUG7nQVLR86oWJBO0e3t2uMauyHDt+hVe8XnfHP+YprnMPPN QlGBaERpeIZdQvFss7yuYpyx3J8/JpAlCJI65n6Q= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Matt Mackall , Theodore Tso , Herbert Xu , Eric Biggers , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 042/111] random: pull add_hwgenerator_randomness() declaration into random.h Date: Fri, 27 May 2022 10:49:14 +0200 Message-Id: <20220527084825.418150401@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit b777c38239fec5a528e59f55b379e31b1a187524 upstream. add_hwgenerator_randomness() is a function implemented and documented inside of random.c. It is the way that hardware RNGs push data into it. Therefore, it should be declared in random.h. Otherwise sparse complains with: random.c:1137:6: warning: symbol 'add_hwgenerator_randomness' was not decla= red. Should it be static? The alternative would be to include hw_random.h into random.c, but that wouldn't really be good for anything except slowing down compile time. Cc: Matt Mackall Cc: Theodore Ts'o Acked-by: Herbert Xu Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/hw_random/core.c | 1 + include/linux/hw_random.h | 2 -- include/linux/random.h | 2 ++ 3 files changed, 3 insertions(+), 2 deletions(-) --- a/drivers/char/hw_random/core.c +++ b/drivers/char/hw_random/core.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include --- a/include/linux/hw_random.h +++ b/include/linux/hw_random.h @@ -60,7 +60,5 @@ extern int devm_hwrng_register(struct de /** Unregister a Hardware Random Number Generator driver. */ extern void hwrng_unregister(struct hwrng *rng); extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng); -/** Feed random bits into the pool. */ -extern void add_hwgenerator_randomness(const void *buffer, size_t count, s= ize_t entropy); =20 #endif /* LINUX_HWRANDOM_H_ */ --- a/include/linux/random.h +++ b/include/linux/random.h @@ -32,6 +32,8 @@ static inline void add_latent_entropy(vo extern void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; extern void add_interrupt_randomness(int irq) __latent_entropy; +extern void add_hwgenerator_randomness(const void *buffer, size_t count, + size_t entropy); =20 extern void get_random_bytes(void *buf, size_t nbytes); extern int wait_for_random_bytes(void); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2461C433EF for ; Fri, 27 May 2022 11:40:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351577AbiE0Ljw (ORCPT ); Fri, 27 May 2022 07:39:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45638 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351419AbiE0LjP (ORCPT 
); Fri, 27 May 2022 07:39:15 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EBE37EBE82; Fri, 27 May 2022 04:38:29 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id D86E1B824D6; Fri, 27 May 2022 11:38:27 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4C0FFC385A9; Fri, 27 May 2022 11:38:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651506; bh=aB/IfmTPypZrb7iAtYiks+SNwarQEJGkQSwZ4SEfByI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LVwLUGJv7PbwH+zaaNoBkNvA0ljXvSNuTk7AUK5Bq2pmx4ld511McCa0Upybp2OVZ 2E1fjy3lD998Q9BRIl1nis7cWZobWBDYkmMVixnf5hs/JjfBpEEEU6zxr+tXEfdgbi Fxd0VpBPpspiok4A/BCNm3PkCCxCzN1tNcsvuxiw= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Peter Zijlstra , Theodore Tso , Sultan Alsawaf , Dominik Brodowski , Sebastian Andrzej Siewior , "Jason A. Donenfeld" Subject: [PATCH 5.17 043/111] random: clear fast pool, crng, and batches in cpuhp bring up Date: Fri, 27 May 2022 10:49:15 +0200 Message-Id: <20220527084825.544022635@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 3191dd5a1179ef0fad5a050a1702ae98b6251e8f upstream. 
For the irq randomness fast pool, rather than having to use expensive atomics, which were visibly the most expensive thing in the entire irq handler, simply take care of the extreme edge case of resetting count to zero in the cpuhp online handler, just after workqueues have been reenabled. This simplifies the code a bit and lets us use vanilla variables rather than atomics, and performance should be improved. As well, very early on when the CPU comes up, while interrupts are still disabled, we clear out the per-cpu crng and its batches, so that it always starts with fresh randomness. Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Theodore Ts'o Cc: Sultan Alsawaf Cc: Dominik Brodowski Acked-by: Sebastian Andrzej Siewior Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 62 ++++++++++++++++++++++++++++++++++------= ----- include/linux/cpuhotplug.h | 2 + include/linux/random.h | 5 +++ kernel/cpu.c | 11 +++++++ 4 files changed, 65 insertions(+), 15 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -698,6 +698,25 @@ u32 get_random_u32(void) } EXPORT_SYMBOL(get_random_u32); =20 +#ifdef CONFIG_SMP +/* + * This function is called when the CPU is coming up, with entry + * CPUHP_RANDOM_PREPARE, which comes before CPUHP_WORKQUEUE_PREP. + */ +int random_prepare_cpu(unsigned int cpu) +{ + /* + * When the cpu comes back online, immediately invalidate both + * the per-cpu crng and all batches, so that we serve fresh + * randomness. 
+	 */
+	per_cpu_ptr(&crngs, cpu)->generation = ULONG_MAX;
+	per_cpu_ptr(&batched_entropy_u32, cpu)->position = UINT_MAX;
+	per_cpu_ptr(&batched_entropy_u64, cpu)->position = UINT_MAX;
+	return 0;
+}
+#endif
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start:	The smallest acceptable address the caller will take.
@@ -1183,7 +1202,7 @@ struct fast_pool {
 	};
 	struct work_struct mix;
 	unsigned long last;
-	atomic_t count;
+	unsigned int count;
 	u16 reg_idx;
 };
 
@@ -1219,6 +1238,29 @@ static void fast_mix(u32 pool[4])
 
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness);
 
+#ifdef CONFIG_SMP
+/*
+ * This function is called when the CPU has just come online, with
+ * entry CPUHP_AP_RANDOM_ONLINE, just after CPUHP_AP_WORKQUEUE_ONLINE.
+ */
+int random_online_cpu(unsigned int cpu)
+{
+	/*
+	 * During CPU shutdown and before CPU onlining, add_interrupt_
+	 * randomness() may schedule mix_interrupt_randomness(), and
+	 * set the MIX_INFLIGHT flag. However, because the worker can
+	 * be scheduled on a different CPU during this period, that
+	 * flag will never be cleared. For that reason, we zero out
+	 * the flag here, which runs just after workqueues are onlined
+	 * for the CPU again. This also has the effect of setting the
+	 * irq randomness count to zero so that new accumulated irqs
+	 * are fresh.
+	 */
+	per_cpu_ptr(&irq_randomness, cpu)->count = 0;
+	return 0;
+}
+#endif
+
 static u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
 {
 	u32 *ptr = (u32 *)regs;
@@ -1243,15 +1285,6 @@ static void mix_interrupt_randomness(str
 	local_irq_disable();
 	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
 		local_irq_enable();
-		/*
-		 * If we are unlucky enough to have been moved to another CPU,
-		 * during CPU hotplug while the CPU was shutdown then we set
-		 * our count to zero atomically so that when the CPU comes
-		 * back online, it can enqueue work again.
The _release here - * pairs with the atomic_inc_return_acquire in - * add_interrupt_randomness(). - */ - atomic_set_release(&fast_pool->count, 0); return; } =20 @@ -1260,7 +1293,7 @@ static void mix_interrupt_randomness(str * consistent view, before we reenable irqs again. */ memcpy(pool, fast_pool->pool32, sizeof(pool)); - atomic_set(&fast_pool->count, 0); + fast_pool->count =3D 0; fast_pool->last =3D jiffies; local_irq_enable(); =20 @@ -1296,14 +1329,13 @@ void add_interrupt_randomness(int irq) } =20 fast_mix(fast_pool->pool32); - /* The _acquire here pairs with the atomic_set_release in mix_interrupt_r= andomness(). */ - new_count =3D (unsigned int)atomic_inc_return_acquire(&fast_pool->count); + new_count =3D ++fast_pool->count; =20 if (unlikely(crng_init =3D=3D 0)) { if (new_count >=3D 64 && crng_pre_init_inject(fast_pool->pool32, sizeof(fast_pool->pool32), true, true) > 0) { - atomic_set(&fast_pool->count, 0); + fast_pool->count =3D 0; fast_pool->last =3D now; if (spin_trylock(&input_pool.lock)) { _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); @@ -1321,7 +1353,7 @@ void add_interrupt_randomness(int irq) =20 if (unlikely(!fast_pool->mix.func)) INIT_WORK(&fast_pool->mix, mix_interrupt_randomness); - atomic_or(MIX_INFLIGHT, &fast_pool->count); + fast_pool->count |=3D MIX_INFLIGHT; queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix); } EXPORT_SYMBOL_GPL(add_interrupt_randomness); --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -100,6 +100,7 @@ enum cpuhp_state { CPUHP_AP_ARM_CACHE_B15_RAC_DEAD, CPUHP_PADATA_DEAD, CPUHP_AP_DTPM_CPU_DEAD, + CPUHP_RANDOM_PREPARE, CPUHP_WORKQUEUE_PREP, CPUHP_POWER_NUMA_PREPARE, CPUHP_HRTIMERS_PREPARE, @@ -240,6 +241,7 @@ enum cpuhp_state { CPUHP_AP_PERF_CSKY_ONLINE, CPUHP_AP_WATCHDOG_ONLINE, CPUHP_AP_WORKQUEUE_ONLINE, + CPUHP_AP_RANDOM_ONLINE, CPUHP_AP_RCUTREE_ONLINE, CPUHP_AP_BASE_CACHEINFO_ONLINE, CPUHP_AP_ONLINE_DYN, --- a/include/linux/random.h +++ 
b/include/linux/random.h @@ -156,4 +156,9 @@ static inline bool __init arch_get_rando } #endif =20 +#ifdef CONFIG_SMP +extern int random_prepare_cpu(unsigned int cpu); +extern int random_online_cpu(unsigned int cpu); +#endif + #endif /* _LINUX_RANDOM_H */ --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -34,6 +34,7 @@ #include #include #include +#include =20 #include #define CREATE_TRACE_POINTS @@ -1659,6 +1660,11 @@ static struct cpuhp_step cpuhp_hp_states .startup.single =3D perf_event_init_cpu, .teardown.single =3D perf_event_exit_cpu, }, + [CPUHP_RANDOM_PREPARE] =3D { + .name =3D "random:prepare", + .startup.single =3D random_prepare_cpu, + .teardown.single =3D NULL, + }, [CPUHP_WORKQUEUE_PREP] =3D { .name =3D "workqueue:prepare", .startup.single =3D workqueue_prepare_cpu, @@ -1782,6 +1788,11 @@ static struct cpuhp_step cpuhp_hp_states .startup.single =3D workqueue_online_cpu, .teardown.single =3D workqueue_offline_cpu, }, + [CPUHP_AP_RANDOM_ONLINE] =3D { + .name =3D "random:online", + .startup.single =3D random_online_cpu, + .teardown.single =3D NULL, + }, [CPUHP_AP_RCUTREE_ONLINE] =3D { .name =3D "RCU/tree:online", .startup.single =3D rcutree_online_cpu, From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E261C433FE for ; Fri, 27 May 2022 11:40:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351499AbiE0LkR (ORCPT ); Fri, 27 May 2022 07:40:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45686 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351443AbiE0Ljb (ORCPT ); Fri, 27 May 2022 07:39:31 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 774AA606FD; 
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 044/111] random: round-robin registers as ulong, not u32
Date: Fri, 27 May 2022 10:49:16 +0200
Message-Id: <20220527084825.691156778@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit da3951ebdcd1cb1d5c750e08cd05aee7b0c04d9a upstream.

When the interrupt handler does not have a valid cycle counter, it calls get_reg() to read a register from the irq stack, in round-robin. Currently it does this assuming that registers are 32-bit. This is _probably_ the case, and probably all platforms without cycle counters are in fact 32-bit platforms. But maybe not, and either way, it's not quite correct. This commit fixes that to deal with `unsigned long` rather than `u32`.
Cc: Theodore Ts'o
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1261,15 +1261,15 @@ int random_online_cpu(unsigned int cpu)
 }
 #endif
 
-static u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
+static unsigned long get_reg(struct fast_pool *f, struct pt_regs *regs)
 {
-	u32 *ptr = (u32 *)regs;
+	unsigned long *ptr = (unsigned long *)regs;
 	unsigned int idx;
 
 	if (regs == NULL)
 		return 0;
 	idx = READ_ONCE(f->reg_idx);
-	if (idx >= sizeof(struct pt_regs) / sizeof(u32))
+	if (idx >= sizeof(struct pt_regs) / sizeof(unsigned long))
 		idx = 0;
 	ptr += idx++;
 	WRITE_ONCE(f->reg_idx, idx);

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 045/111] random: only wake up writers after zap if threshold was passed
Date: Fri, 27 May 2022 10:49:17 +0200
Message-Id: <20220527084825.820184840@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit a3f9e8910e1584d7725ef7d5ac870920d42d0bb4 upstream.

The only time that we need to wake up /dev/random writers on RNDCLEARPOOL/RNDZAPPOOL is when we're changing from a value that is greater than or equal to POOL_MIN_BITS to zero, because if we're changing from below POOL_MIN_BITS to zero, the writers are already unblocked.

Cc: Theodore Ts'o
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M.
Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1582,7 +1582,7 @@ static long random_ioctl(struct file *f,
 	 */
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
-	if (xchg(&input_pool.entropy_count, 0)) {
+	if (xchg(&input_pool.entropy_count, 0) >= POOL_MIN_BITS) {
 		wake_up_interruptible(&random_write_wait);
 		kill_fasync(&fasync, SIGIO, POLL_OUT);
 	}

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 046/111] random: cleanup UUID handling
Date: Fri, 27 May 2022 10:49:18 +0200
Message-Id: <20220527084825.965332823@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 64276a9939ff414f2f0db38036cf4e1a0a703394 upstream.

Rather than hard coding various lengths, we can use the right constants. Strings should be `char *` while buffers should be `u8 *`. Rather than have a nonsensical and unused maxlength, just remove it. Finally, use snprintf instead of sprintf, just out of good hygiene.

As well, remove the old comment about returning a binary UUID via the binary sysctl syscall. That syscall was removed from the kernel in 5.5, and actually, the "uuid_strategy" function and related infrastructure for even serving it via the binary sysctl syscall was removed with 894d2491153a ("sysctl drivers: Remove dead binary sysctl support") back in 2.6.33.

Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M.
Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 29 +++++++++++++----------------
 1 file changed, 13 insertions(+), 16 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1661,22 +1661,25 @@ const struct file_operations urandom_fop
 static int sysctl_random_min_urandom_seed = 60;
 static int sysctl_random_write_wakeup_bits = POOL_MIN_BITS;
 static int sysctl_poolsize = POOL_BITS;
-static char sysctl_bootid[16];
+static u8 sysctl_bootid[UUID_SIZE];
 
 /*
  * This function is used to return both the bootid UUID, and random
- * UUID. The difference is in whether table->data is NULL; if it is,
+ * UUID. The difference is in whether table->data is NULL; if it is,
  * then a new UUID is generated and returned to the user.
- *
- * If the user accesses this via the proc interface, the UUID will be
- * returned as an ASCII string in the standard UUID format; if via the
- * sysctl system call, as 16 bytes of binary data.
 */
 static int proc_do_uuid(struct ctl_table *table, int write, void *buffer,
			size_t *lenp, loff_t *ppos)
 {
-	struct ctl_table fake_table;
-	unsigned char buf[64], tmp_uuid[16], *uuid;
+	u8 tmp_uuid[UUID_SIZE], *uuid;
+	char uuid_string[UUID_STRING_LEN + 1];
+	struct ctl_table fake_table = {
+		.data = uuid_string,
+		.maxlen = UUID_STRING_LEN
+	};
+
+	if (write)
+		return -EPERM;
 
 	uuid = table->data;
 	if (!uuid) {
@@ -1691,12 +1694,8 @@ static int proc_do_uuid(struct ctl_table
 		spin_unlock(&bootid_spinlock);
 	}
 
-	sprintf(buf, "%pU", uuid);
-
-	fake_table.data = buf;
-	fake_table.maxlen = sizeof(buf);
-
-	return proc_dostring(&fake_table, write, buffer, lenp, ppos);
+	snprintf(uuid_string, sizeof(uuid_string), "%pU", uuid);
+	return proc_dostring(&fake_table, 0, buffer, lenp, ppos);
 }
 
 static struct ctl_table random_table[] = {
@@ -1731,13 +1730,11 @@ static struct ctl_table random_table[] =
 	{
 		.procname	= "boot_id",
 		.data		= &sysctl_bootid,
-		.maxlen		= 16,
 		.mode		= 0444,
 		.proc_handler	= proc_do_uuid,
 	},
 	{
 		.procname	= "uuid",
-		.maxlen		= 16,
 		.mode		= 0444,
 		.proc_handler	= proc_do_uuid,
 	},

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Theodore Tso, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 047/111] random: unify cycles_t and jiffies usage and types
Date: Fri, 27 May 2022 10:49:19 +0200
Message-Id: <20220527084826.090918026@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit abded93ec1e9692920fe309f07f40bd1035f2940 upstream.

random_get_entropy() returns a cycles_t, not an unsigned long, which is sometimes 64 bits on various 32-bit platforms, including x86. Conversely, jiffies is always unsigned long. This commit fixes things to use cycles_t for fields that use random_get_entropy(), named "cycles", and unsigned long for fields that use jiffies, named "now".
It's also good to mix in a cycles_t and a jiffies in the same way for both add_device_randomness and add_timer_randomness, rather than using xor in one case. Finally, we unify the order of these volatile reads, always reading the more precise cycles counter, and then jiffies, so that the cycle counter is as close to the event as possible.

Cc: Theodore Ts'o
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 56 ++++++++++++++++++++++--------------------------
 1 file changed, 27 insertions(+), 29 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1020,12 +1020,6 @@ int __init rand_initialize(void)
 	return 0;
 }
 
-/* There is one of these per entropy source */
-struct timer_rand_state {
-	cycles_t last_time;
-	long last_delta, last_delta2;
-};
-
 /*
  * Add device- or boot-specific data to the input pool to help
  * initialize it.
@@ -1036,19 +1030,26 @@ struct timer_rand_state {
  */
 void add_device_randomness(const void *buf, size_t size)
 {
-	unsigned long time = random_get_entropy() ^ jiffies;
-	unsigned long flags;
+	cycles_t cycles = random_get_entropy();
+	unsigned long flags, now = jiffies;
 
 	if (crng_init == 0 && size)
 		crng_pre_init_inject(buf, size, false, false);
 
 	spin_lock_irqsave(&input_pool.lock, flags);
+	_mix_pool_bytes(&cycles, sizeof(cycles));
+	_mix_pool_bytes(&now, sizeof(now));
 	_mix_pool_bytes(buf, size);
-	_mix_pool_bytes(&time, sizeof(time));
 	spin_unlock_irqrestore(&input_pool.lock, flags);
 }
 EXPORT_SYMBOL(add_device_randomness);
 
+/* There is one of these per entropy source */
+struct timer_rand_state {
+	unsigned long last_time;
+	long last_delta, last_delta2;
+};
+
 /*
  * This function adds entropy to the entropy "pool" by using timing
  * delays. It uses the timer_rand_state structure to make an estimate
@@ -1057,29 +1058,26 @@ EXPORT_SYMBOL(add_device_randomness);
 * The number "num" is also added to the pool - it should somehow describe
 * the type of event which just happened. This is currently 0-255 for
 * keyboard scan codes, and 256 upwards for interrupts.
- *
 */
 static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
 {
-	struct {
-		long jiffies;
-		unsigned int cycles;
-		unsigned int num;
-	} sample;
+	cycles_t cycles = random_get_entropy();
+	unsigned long flags, now = jiffies;
 	long delta, delta2, delta3;
 
-	sample.jiffies = jiffies;
-	sample.cycles = random_get_entropy();
-	sample.num = num;
-	mix_pool_bytes(&sample, sizeof(sample));
+	spin_lock_irqsave(&input_pool.lock, flags);
+	_mix_pool_bytes(&cycles, sizeof(cycles));
+	_mix_pool_bytes(&now, sizeof(now));
+	_mix_pool_bytes(&num, sizeof(num));
+	spin_unlock_irqrestore(&input_pool.lock, flags);
 
 	/*
 	 * Calculate number of bits of randomness we probably added.
 	 * We take into account the first, second and third-order deltas
 	 * in order to make our estimate.
 	 */
-	delta = sample.jiffies - READ_ONCE(state->last_time);
-	WRITE_ONCE(state->last_time, sample.jiffies);
+	delta = now - READ_ONCE(state->last_time);
+	WRITE_ONCE(state->last_time, now);
 
 	delta2 = delta - READ_ONCE(state->last_delta);
 	WRITE_ONCE(state->last_delta, delta);
@@ -1305,10 +1303,10 @@ static void mix_interrupt_randomness(str
 void add_interrupt_randomness(int irq)
 {
 	enum { MIX_INFLIGHT = 1U << 31 };
+	cycles_t cycles = random_get_entropy();
+	unsigned long now = jiffies;
 	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
 	struct pt_regs *regs = get_irq_regs();
-	unsigned long now = jiffies;
-	cycles_t cycles = random_get_entropy();
 	unsigned int new_count;
 
 	if (cycles == 0)
@@ -1383,28 +1381,28 @@ static void entropy_timer(struct timer_l
 static void try_to_generate_entropy(void)
 {
 	struct {
-		unsigned long now;
+		cycles_t cycles;
 		struct timer_list timer;
 	} stack;
 
-	stack.now = random_get_entropy();
+	stack.cycles = random_get_entropy();
 
 	/* Slow counter - or none. Don't even bother */
-	if (stack.now == random_get_entropy())
+	if (stack.cycles == random_get_entropy())
 		return;
 
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready()) {
 		if (!timer_pending(&stack.timer))
 			mod_timer(&stack.timer, jiffies + 1);
-		mix_pool_bytes(&stack.now, sizeof(stack.now));
+		mix_pool_bytes(&stack.cycles, sizeof(stack.cycles));
 		schedule();
-		stack.now = random_get_entropy();
+		stack.cycles = random_get_entropy();
 	}
 
 	del_timer_sync(&stack.timer);
 	destroy_timer_on_stack(&stack.timer);
-	mix_pool_bytes(&stack.now, sizeof(stack.now));
+	mix_pool_bytes(&stack.cycles, sizeof(stack.cycles));
 }

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sultan Alsawaf, Thomas Gleixner, Peter Zijlstra, Eric Biggers, Theodore Tso, Sebastian Andrzej Siewior, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 048/111] random: do crng pre-init loading in worker rather than irq
Date: Fri, 27 May 2022 10:49:20 +0200
Message-Id: <20220527084826.224814493@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit c2a7de4feb6e09f23af7accc0f882a8fa92e7ae5 upstream.

Taking spinlocks from IRQ context is generally problematic for PREEMPT_RT. That is, in part, why we take trylocks instead. However, a spin_try_lock() is also problematic since another spin_lock() invocation can potentially PI-boost the wrong task, as the spin_try_lock() is invoked from an IRQ-context, so the task on CPU (random task or idle) is not the actual owner.

Additionally, by deferring the crng pre-init loading to the worker, we can use the cryptographic hash function rather than xor, which is perhaps a meaningful difference when considering this data has only been through the relatively weak fast_mix() function.
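To see why the hash path matters, here is a minimal sketch contrasting xor-injection with hash-chaining. FNV-1a stands in for the kernel's BLAKE2s purely for illustration: absorbing the same bytes twice through xor cancels back to the starting key, while a hash chain keeps moving.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Chain input through a hash (FNV-1a here as a toy stand-in for BLAKE2s):
 * the state depends on everything absorbed so far, so repeats don't cancel. */
static uint64_t fnv1a_absorb(uint64_t state, const uint8_t *buf, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		state ^= buf[i];
		state *= 0x100000001b3ULL; /* FNV-1a 64-bit prime */
	}
	return state;
}

/* The removed fast path: xor input into a fixed-size key in place. */
static void xor_absorb(uint8_t key[8], const uint8_t *buf, size_t len)
{
	for (size_t i = 0; i < len; i++)
		key[i % 8] ^= buf[i];
}

int main(void)
{
	const uint8_t input[8] = {1, 2, 3, 4, 5, 6, 7, 8};
	uint8_t key[8] = {0};
	uint64_t h0 = 0xcbf29ce484222325ULL; /* FNV offset basis */

	/* Same input twice: the xor'd key reverts to its starting value. */
	xor_absorb(key, input, 8);
	xor_absorb(key, input, 8);
	for (int i = 0; i < 8; i++)
		assert(key[i] == 0);

	/* The hash chain does not revert. */
	uint64_t h1 = fnv1a_absorb(h0, input, 8);
	uint64_t h2 = fnv1a_absorb(h1, input, 8);
	assert(h1 != h0 && h2 != h1);
	return 0;
}
```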
The biggest downside of this approach is that the pre-init loading is now deferred until later, which means things that need random numbers after interrupts are enabled, but before workqueues are running -- or before this particular worker manages to run -- are going to get into trouble. Hopefully in the real world, this window is rather small, especially since this code won't run until 64 interrupts have occurred.

Cc: Sultan Alsawaf
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Eric Biggers
Cc: Theodore Ts'o
Acked-by: Sebastian Andrzej Siewior
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 65 +++++++++++++++-----------------------------
 1 file changed, 19 insertions(+), 46 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -443,10 +443,6 @@ static void crng_make_state(u32 chacha_s
 * boot time when it's better to have something there rather than
 * nothing.
 *
- * There are two paths, a slow one and a fast one. The slow one
- * hashes the input along with the current key. The fast one simply
- * xors it in, and should only be used from interrupt context.
- *
 * If account is set, then the crng_init_cnt counter is incremented.
 * This shouldn't be set by functions like add_device_randomness(),
 * where we can't trust the buffer passed to it is guaranteed to be
@@ -455,19 +451,15 @@ static void crng_make_state(u32 chacha_s
 * Returns the number of bytes processed from input, which is bounded
 * by CRNG_INIT_CNT_THRESH if account is true.
 */
-static size_t crng_pre_init_inject(const void *input, size_t len,
-				   bool fast, bool account)
+static size_t crng_pre_init_inject(const void *input, size_t len, bool account)
 {
 	static int crng_init_cnt = 0;
+	struct blake2s_state hash;
 	unsigned long flags;
 
-	if (fast) {
-		if (!spin_trylock_irqsave(&base_crng.lock, flags))
-			return 0;
-	} else {
-		spin_lock_irqsave(&base_crng.lock, flags);
-	}
+	blake2s_init(&hash, sizeof(base_crng.key));
 
+	spin_lock_irqsave(&base_crng.lock, flags);
 	if (crng_init != 0) {
 		spin_unlock_irqrestore(&base_crng.lock, flags);
 		return 0;
@@ -476,21 +468,9 @@ static size_t crng_pre_init_inject(const
 	if (account)
 		len = min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
 
-	if (fast) {
-		const u8 *src = input;
-		size_t i;
-
-		for (i = 0; i < len; ++i)
-			base_crng.key[(crng_init_cnt + i) %
-				      sizeof(base_crng.key)] ^= src[i];
-	} else {
-		struct blake2s_state hash;
-
-		blake2s_init(&hash, sizeof(base_crng.key));
-		blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
-		blake2s_update(&hash, input, len);
-		blake2s_final(&hash, base_crng.key);
-	}
+	blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
+	blake2s_update(&hash, input, len);
+	blake2s_final(&hash, base_crng.key);
 
 	if (account) {
 		crng_init_cnt += len;
@@ -1034,7 +1014,7 @@ void add_device_randomness(const void *b
 	unsigned long flags, now = jiffies;
 
 	if (crng_init == 0 && size)
-		crng_pre_init_inject(buf, size, false, false);
+		crng_pre_init_inject(buf, size, false);
 
 	spin_lock_irqsave(&input_pool.lock, flags);
 	_mix_pool_bytes(&cycles, sizeof(cycles));
@@ -1155,7 +1135,7 @@ void add_hwgenerator_randomness(const vo
			       size_t entropy)
 {
 	if (unlikely(crng_init == 0)) {
-		size_t ret = crng_pre_init_inject(buffer, count, false, true);
+		size_t ret = crng_pre_init_inject(buffer, count, true);
 		mix_pool_bytes(buffer, ret);
 		count -= ret;
 		buffer += ret;
@@ -1295,8 +1275,14 @@ static void mix_interrupt_randomness(str
 	fast_pool->last = jiffies;
 	local_irq_enable();
 
-	mix_pool_bytes(pool, sizeof(pool));
-	credit_entropy_bits(1);
+	if (unlikely(crng_init == 0)) {
+		crng_pre_init_inject(pool, sizeof(pool), true);
+		mix_pool_bytes(pool, sizeof(pool));
+	} else {
+		mix_pool_bytes(pool, sizeof(pool));
+		credit_entropy_bits(1);
+	}
+
 	memzero_explicit(pool, sizeof(pool));
 }
 
@@ -1329,24 +1315,11 @@ void add_interrupt_randomness(int irq)
 	fast_mix(fast_pool->pool32);
 	new_count = ++fast_pool->count;
 
-	if (unlikely(crng_init == 0)) {
-		if (new_count >= 64 &&
-		    crng_pre_init_inject(fast_pool->pool32, sizeof(fast_pool->pool32),
-					 true, true) > 0) {
-			fast_pool->count = 0;
-			fast_pool->last = now;
-			if (spin_trylock(&input_pool.lock)) {
-				_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
-				spin_unlock(&input_pool.lock);
-			}
-		}
-		return;
-	}
-
 	if (new_count & MIX_INFLIGHT)
 		return;
 
-	if (new_count < 64 && !time_after(now, fast_pool->last + HZ))
+	if (new_count < 64 && (!time_after(now, fast_pool->last + HZ) ||
+			       unlikely(crng_init == 0)))
 		return;
 
 	if (unlikely(!fast_pool->mix.func))

From nobody Tue Apr 28 23:18:44 2026
smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7516461C3F; Fri, 27 May 2022 11:38:03 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8186CC385A9; Fri, 27 May 2022 11:38:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651482; bh=7HTA8dBismSNtEddUBqn+LQFyLT1UhrR03jQn5QntLw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VAGWAqzwfoo2w7Q6zC4AN5ae45uZ4n8EuUPPn9mAP2OopNVq51LCsHWHgGVShfoIK 3pM5y2Uyj9U8aTQIgxyDCQ19JNONTS2KRVJNMB/FEGbB3OwzD1H6tzv9aD9s66Yq6m ABMZZscVNDgzzFSTjYcsNQUdPSBuaUSL4CrZpDls= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 049/111] random: give sysctl_random_min_urandom_seed a more sensible value Date: Fri, 27 May 2022 10:49:21 +0200 Message-Id: <20220527084826.370322288@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit d0efdf35a6a71d307a250199af6fce122a7c7e11 upstream. This isn't used by anything or anywhere, but we can't delete it due to compatibility. So at least give it the correct value of what it's supposed to be instead of a garbage one. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1619,7 +1619,7 @@ const struct file_operations urandom_fop * to avoid breaking old userspaces, but writing to it does not * change any behavior of the RNG. * - * - urandom_min_reseed_secs - fixed to the meaningless value "60". + * - urandom_min_reseed_secs - fixed to the value CRNG_RESEED_INTERVAL. * It is writable to avoid breaking old userspaces, but writing * to it does not change any behavior of the RNG. * @@ -1629,7 +1629,7 @@ const struct file_operations urandom_fop =20 #include =20 -static int sysctl_random_min_urandom_seed =3D 60; +static int sysctl_random_min_urandom_seed =3D CRNG_RESEED_INTERVAL / HZ; static int sysctl_random_write_wakeup_bits =3D POOL_MIN_BITS; static int sysctl_poolsize =3D POOL_BITS; static u8 sysctl_bootid[UUID_SIZE]; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6D27BC433FE for ; Fri, 27 May 2022 11:39:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351541AbiE0LjN (ORCPT ); Fri, 27 May 2022 07:39:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351542AbiE0LiS (ORCPT ); Fri, 27 May 2022 07:38:18 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AD676131F1D; Fri, 27 May 2022 04:38:12 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No 
client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 35E2861CE7; Fri, 27 May 2022 11:38:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 46974C385A9; Fri, 27 May 2022 11:38:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651491; bh=tRdguKMsCMb3cgWGW47OTk5Q8fgyO7J3GqqlMWdLRDs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KCFRwMZsUmJcR4f8A7+fhzrNGyj6jAn9JSGD0d7NklE6qTzQTPw2wYHxx7ZetR6t4 ROoTQMxSFro7xbqPaLXmgLCdcDMl3AnDrSWjjW7fhG54NKHTPAlxmgDOgrs/cDrBI/ fGz1lUT6k9SNiCK/rpoXuwwXmNeYLa5FvGTgLMgI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 050/111] random: dont let 644 read-only sysctls be written to Date: Fri, 27 May 2022 10:49:22 +0200 Message-Id: <20220527084826.508704032@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 77553cf8f44863b31da242cf24671d76ddb61597 upstream. We leave around these old sysctls for compatibility, and we keep them "writable" for compatibility, but even after writing, we should keep reporting the same value. This is consistent with how userspaces tend to use sysctl_random_write_wakeup_bits, writing to it, and then later reading from it and using the value. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1669,6 +1669,13 @@ static int proc_do_uuid(struct ctl_table return proc_dostring(&fake_table, 0, buffer, lenp, ppos); } =20 +/* The same as proc_dointvec, but writes don't change anything. */ +static int proc_do_rointvec(struct ctl_table *table, int write, void *buff= er, + size_t *lenp, loff_t *ppos) +{ + return write ? 0 : proc_dointvec(table, 0, buffer, lenp, ppos); +} + static struct ctl_table random_table[] =3D { { .procname =3D "poolsize", @@ -1689,14 +1696,14 @@ static struct ctl_table random_table[] =3D .data =3D &sysctl_random_write_wakeup_bits, .maxlen =3D sizeof(int), .mode =3D 0644, - .proc_handler =3D proc_dointvec, + .proc_handler =3D proc_do_rointvec, }, { .procname =3D "urandom_min_reseed_secs", .data =3D &sysctl_random_min_urandom_seed, .maxlen =3D sizeof(int), .mode =3D 0644, - .proc_handler =3D proc_dointvec, + .proc_handler =3D proc_do_rointvec, }, { .procname =3D "boot_id", From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17B62C433EF for ; Fri, 27 May 2022 11:43:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351732AbiE0Lnl (ORCPT ); Fri, 27 May 2022 07:43:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44886 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351550AbiE0Ljv (ORCPT ); Fri, 27 May 2022 07:39:51 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 53BE562BE5; Fri, 27 May 
2022 04:38:44 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id DBD43B82466; Fri, 27 May 2022 11:38:42 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 50A9CC385A9; Fri, 27 May 2022 11:38:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651521; bh=Dx9FTw2hD0zBHYgrCojlnuEo3ur2EIBl1dIfh/vavCw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uwUOlGuI8/MdMlOv6V1e1eYDizXlWKvU6wQ0DVrMnhIr0zbYpWVhdgTG7IPYGW4Zg f8ko8W0nCT4p9Ujt2eYU79Gl298WytG+4bQtdRoZAFxQB9C0hpW9Sq/icctrVTvVC/ 9BAIBUGSlJgPP6rODoZG+hBCbgbe6Dtv3Ptt9j0U= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 051/111] random: replace custom notifier chain with standard one Date: Fri, 27 May 2022 10:49:23 +0200 Message-Id: <20220527084826.670412702@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 5acd35487dc911541672b3ffc322851769c32a56 upstream. We previously rolled our own randomness readiness notifier, which only has two users in the whole kernel. Replace this with a more standard atomic notifier block that serves the same purpose with less code. Also unexport the symbols, because no modules use it, only unconditional builtins. 
The only drawback is that it's possible for a notification handler returning the "stop" code to prevent further processing, but given that there are only two users, and that we're unexporting this anyway, that doesn't seem like a significant drawback for the simplification we receive here. Cc: Greg Kroah-Hartman Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 67 +++++++++++++-------------------------------= ----- include/linux/random.h | 10 ++----- lib/random32.c | 12 +++++--- lib/vsprintf.c | 10 ++++--- 4 files changed, 35 insertions(+), 64 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -83,8 +83,8 @@ static int crng_init =3D 0; /* Various types of waiters for crng_init->2 transition. */ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); static struct fasync_struct *fasync; -static DEFINE_SPINLOCK(random_ready_list_lock); -static LIST_HEAD(random_ready_list); +static DEFINE_SPINLOCK(random_ready_chain_lock); +static RAW_NOTIFIER_HEAD(random_ready_chain); =20 /* Control how we warn userspace. 
*/ static struct ratelimit_state unseeded_warning =3D @@ -147,72 +147,43 @@ EXPORT_SYMBOL(wait_for_random_bytes); * * returns: 0 if callback is successfully added * -EALREADY if pool is already initialised (callback not called) - * -ENOENT if module for callback is not alive */ -int add_random_ready_callback(struct random_ready_callback *rdy) +int register_random_ready_notifier(struct notifier_block *nb) { - struct module *owner; unsigned long flags; - int err =3D -EALREADY; + int ret =3D -EALREADY; =20 if (crng_ready()) - return err; + return ret; =20 - owner =3D rdy->owner; - if (!try_module_get(owner)) - return -ENOENT; - - spin_lock_irqsave(&random_ready_list_lock, flags); - if (crng_ready()) - goto out; - - owner =3D NULL; - - list_add(&rdy->list, &random_ready_list); - err =3D 0; - -out: - spin_unlock_irqrestore(&random_ready_list_lock, flags); - - module_put(owner); - - return err; + spin_lock_irqsave(&random_ready_chain_lock, flags); + if (!crng_ready()) + ret =3D raw_notifier_chain_register(&random_ready_chain, nb); + spin_unlock_irqrestore(&random_ready_chain_lock, flags); + return ret; } -EXPORT_SYMBOL(add_random_ready_callback); =20 /* * Delete a previously registered readiness callback function. 
*/ -void del_random_ready_callback(struct random_ready_callback *rdy) +int unregister_random_ready_notifier(struct notifier_block *nb) { unsigned long flags; - struct module *owner =3D NULL; - - spin_lock_irqsave(&random_ready_list_lock, flags); - if (!list_empty(&rdy->list)) { - list_del_init(&rdy->list); - owner =3D rdy->owner; - } - spin_unlock_irqrestore(&random_ready_list_lock, flags); + int ret; =20 - module_put(owner); + spin_lock_irqsave(&random_ready_chain_lock, flags); + ret =3D raw_notifier_chain_unregister(&random_ready_chain, nb); + spin_unlock_irqrestore(&random_ready_chain_lock, flags); + return ret; } -EXPORT_SYMBOL(del_random_ready_callback); =20 static void process_random_ready_list(void) { unsigned long flags; - struct random_ready_callback *rdy, *tmp; =20 - spin_lock_irqsave(&random_ready_list_lock, flags); - list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) { - struct module *owner =3D rdy->owner; - - list_del_init(&rdy->list); - rdy->func(rdy); - module_put(owner); - } - spin_unlock_irqrestore(&random_ready_list_lock, flags); + spin_lock_irqsave(&random_ready_chain_lock, flags); + raw_notifier_call_chain(&random_ready_chain, 0, NULL); + spin_unlock_irqrestore(&random_ready_chain_lock, flags); } =20 #define warn_unseeded_randomness(previous) \ --- a/include/linux/random.h +++ b/include/linux/random.h @@ -10,11 +10,7 @@ =20 #include =20 -struct random_ready_callback { - struct list_head list; - void (*func)(struct random_ready_callback *rdy); - struct module *owner; -}; +struct notifier_block; =20 extern void add_device_randomness(const void *, size_t); extern void add_bootloader_randomness(const void *, size_t); @@ -39,8 +35,8 @@ extern void get_random_bytes(void *buf, extern int wait_for_random_bytes(void); extern int __init rand_initialize(void); extern bool rng_is_initialized(void); -extern int add_random_ready_callback(struct random_ready_callback *rdy); -extern void del_random_ready_callback(struct random_ready_callback *rdy); 
+extern int register_random_ready_notifier(struct notifier_block *nb); +extern int unregister_random_ready_notifier(struct notifier_block *nb); extern size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); =20 #ifndef MODULE --- a/lib/random32.c +++ b/lib/random32.c @@ -551,9 +551,11 @@ static void prandom_reseed(struct timer_ * To avoid worrying about whether it's safe to delay that interrupt * long enough to seed all CPUs, just schedule an immediate timer event. */ -static void prandom_timer_start(struct random_ready_callback *unused) +static int prandom_timer_start(struct notifier_block *nb, + unsigned long action, void *data) { mod_timer(&seed_timer, jiffies); + return 0; } =20 #ifdef CONFIG_RANDOM32_SELFTEST @@ -617,13 +619,13 @@ core_initcall(prandom32_state_selftest); */ static int __init prandom_init_late(void) { - static struct random_ready_callback random_ready =3D { - .func =3D prandom_timer_start + static struct notifier_block random_ready =3D { + .notifier_call =3D prandom_timer_start }; - int ret =3D add_random_ready_callback(&random_ready); + int ret =3D register_random_ready_notifier(&random_ready); =20 if (ret =3D=3D -EALREADY) { - prandom_timer_start(&random_ready); + prandom_timer_start(&random_ready, 0, NULL); ret =3D 0; } return ret; --- a/lib/vsprintf.c +++ b/lib/vsprintf.c @@ -762,14 +762,16 @@ static void enable_ptr_key_workfn(struct =20 static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn); =20 -static void fill_random_ptr_key(struct random_ready_callback *unused) +static int fill_random_ptr_key(struct notifier_block *nb, + unsigned long action, void *data) { /* This may be in an interrupt handler. 
*/ queue_work(system_unbound_wq, &enable_ptr_key_work); + return 0; } =20 -static struct random_ready_callback random_ready =3D { - .func =3D fill_random_ptr_key +static struct notifier_block random_ready =3D { + .notifier_call =3D fill_random_ptr_key }; =20 static int __init initialize_ptr_random(void) @@ -783,7 +785,7 @@ static int __init initialize_ptr_random( return 0; } =20 - ret =3D add_random_ready_callback(&random_ready); + ret =3D register_random_ready_notifier(&random_ready); if (!ret) { return 0; } else if (ret =3D=3D -EALREADY) { From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3A275C433EF for ; Fri, 27 May 2022 11:42:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236677AbiE0Lm2 (ORCPT ); Fri, 27 May 2022 07:42:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45712 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351684AbiE0LlX (ORCPT ); Fri, 27 May 2022 07:41:23 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 52143132A0A; Fri, 27 May 2022 04:39:49 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 0CE0AB82466; Fri, 27 May 2022 11:39:48 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 74720C385A9; Fri, 27 May 2022 11:39:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651586; bh=+UzSE4L7X3vucOd6BowZ+lRMGHuz9Mj0EHDzCrNu3n4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=i61TiDKneQv1L7J0sM2mRUSd9Js37hIDLkTXCAYnH3O9l7vWocituGLWmLsuvxlxo iPooBLU4HjsH7s1d9ZMRnS7Sk7a8Ty+ts//VTiZchT7/EIYTZIgxjUpabmi6BC6Olp cGoaPe11/jyE18zZHRuBQ7WcsRo8dsrig/Hu7Ht4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Jean-Philippe Aumasson , "Jason A. Donenfeld" Subject: [PATCH 5.17 052/111] random: use SipHash as interrupt entropy accumulator Date: Fri, 27 May 2022 10:49:24 +0200 Message-Id: <20220527084826.817621048@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit f5eab0e2db4f881fb2b62b3fdad5b9be673dd7ae upstream. The current fast_mix() function is a piece of classic mailing list crypto, where it just sort of sprung up by an anonymous author without a lot of real analysis of what precisely it was accomplishing. As an ARX permutation alone, there are some easily searchable differential trails in it, and as a means of preventing malicious interrupts, it completely fails, since it xors new data into the entire state every time. It can't really be analyzed as a random permutation, because it clearly isn't, and it can't be analyzed as an interesting linear algebraic structure either, because it's also not that. There really is very little one can say about it in terms of entropy accumulation. It might diffuse bits, some of the time, maybe, we hope, I guess. But for the most part, it fails to accomplish anything concrete. As a reminder, the simple goal of add_interrupt_randomness() is to simply accumulate entropy until ~64 interrupts have elapsed, and then dump it into the main input pool, which uses a cryptographic hash. 
It would be nice to have something cryptographically strong in the interrupt handler itself, in case a malicious interrupt compromises a per-cpu fast pool within the 64 interrupts / 1 second window, and then inside of that same window somehow can control its return address and cycle counter, even if that's a bit far fetched. However, with a very CPU-limited budget, actually doing that remains an active research project (and perhaps there'll be something useful for Linux to come out of it). And while the abundance of caution would be nice, this isn't *currently* the security model, and we don't yet have a fast enough solution to make it our security model. Plus there's not exactly a pressing need to do that. (And for the avoidance of doubt, the actual cluster of 64 accumulated interrupts still gets dumped into our cryptographically secure input pool.) So, for now we are going to stick with the existing interrupt security model, which assumes that each cluster of 64 interrupt data samples is mostly non-malicious and not colluding with an infoleaker. With this as our goal, we have a few more choices, simply aiming to accumulate entropy, while discarding the least amount of it. We know from that random oracles, instantiated as computational hash functions, make good entropy accumulators and extractors, which is the justification for using BLAKE2s in the main input pool. As mentioned, we don't have that luxury here, but we also don't have the same security model requirements, because we're assuming that there aren't malicious inputs. A pseudorandom function instance can approximately behave like a random oracle, provided that the key is uniformly random. But since we're not concerned with malicious inputs, we can pick a fixed key, which is not secret, knowing that "nature" won't interact with a sufficiently chosen fixed key by accident. 
So we pick a PRF with a fixed initial key, and accumulate into it continuously, dumping the result every 64 interrupts into our cryptographically secure input pool. For this, we make use of SipHash-1-x on 64-bit and HalfSipHash-1-x on 32-bit, which are already in use in the kernel's hsiphash family of functions and achieve the same performance as the function they replace. It would be nice to do two rounds, but we don't exactly have the CPU budget handy for that, and one round alone is already sufficient. As mentioned, we start with a fixed initial key (zeros is fine), and allow SipHash's symmetry breaking constants to turn that into a useful starting point. Also, since we're dumping the result (or half of it on 64-bit so as to tax our hash function the same amount on all platforms) into the cryptographically secure input pool, there's no point in finalizing SipHash's output, since it'll wind up being finalized by something much stronger. This means that all we need to do is use the ordinary round function word-by-word, as normal SipHash does. Simplified, the flow is as follows: Initialize: siphash_state_t state; siphash_init(&state, key=3D{0, 0, 0, 0}); Update (accumulate) on interrupt: siphash_update(&state, interrupt_data_and_timing); Dump into input pool after 64 interrupts: blake2s_update(&input_pool, &state, sizeof(state) / 2); The result of all of this is that the security model is unchanged from before -- we assume non-malicious inputs -- yet we now implement that model with a stronger argument. I would like to emphasize, again, that the purpose of this commit is to improve the existing design, by making it analyzable, without changing any fundamental assumptions. 
There may well be value down the road in changing up the existing design, using something cryptographically strong, or simply using a ring buffer of samples rather than having a fast_mix() at all, or changing which and how much data we collect each interrupt so that we can use something linear, or a variety of other ideas. This commit does not invalidate the potential for those in the future. For example, in the future, if we're able to characterize the data we're collecting on each interrupt, we may be able to inch toward information theoretic accumulators. shows that `s =3D ror32(s, 7) ^ x` and `s =3D ror64(s, 19) ^ x` make very good accumulators for 2-monotone distributions, which would apply to timestamp counters, like random_get_entropy() or jiffies, but would not apply to our current combination of the two values, or to the various function addresses and register values we mix in. Alternatively, shows that max-period linear functions with no non-trivial invariant subspace make good extractors, used in the form `s =3D f(s) ^ x`. However, this only works if the input data is both identical and independent, and obviously a collection of address values and counters fails; so it goes with theoretical papers. Future directions here may involve trying to characterize more precisely what we actually need to collect in the interrupt handler, and building something specific around that. However, as mentioned, the morass of data we're gathering at the interrupt handler presently defies characterization, and so we use SipHash for now, which works well and performs well. Cc: Theodore Ts'o Cc: Greg Kroah-Hartman Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 94 +++++++++++++++++++++++++++++----------------= ----- 1 file changed, 55 insertions(+), 39 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1145,48 +1145,51 @@ void add_bootloader_randomness(const voi EXPORT_SYMBOL_GPL(add_bootloader_randomness); =20 struct fast_pool { - union { - u32 pool32[4]; - u64 pool64[2]; - }; struct work_struct mix; + unsigned long pool[4]; unsigned long last; unsigned int count; u16 reg_idx; }; =20 +static DEFINE_PER_CPU(struct fast_pool, irq_randomness) =3D { +#ifdef CONFIG_64BIT + /* SipHash constants */ + .pool =3D { 0x736f6d6570736575UL, 0x646f72616e646f6dUL, + 0x6c7967656e657261UL, 0x7465646279746573UL } +#else + /* HalfSipHash constants */ + .pool =3D { 0, 0, 0x6c796765U, 0x74656462U } +#endif +}; + /* - * This is a fast mixing routine used by the interrupt randomness - * collector. It's hardcoded for an 128 bit pool and assumes that any - * locks that might be needed are taken by the caller. + * This is [Half]SipHash-1-x, starting from an empty key. Because + * the key is fixed, it assumes that its inputs are non-malicious, + * and therefore this has no security on its own. s represents the + * 128 or 256-bit SipHash state, while v represents a 128-bit input. 
*/ -static void fast_mix(u32 pool[4]) +static void fast_mix(unsigned long s[4], const unsigned long *v) { - u32 a =3D pool[0], b =3D pool[1]; - u32 c =3D pool[2], d =3D pool[3]; - - a +=3D b; c +=3D d; - b =3D rol32(b, 6); d =3D rol32(d, 27); - d ^=3D a; b ^=3D c; - - a +=3D b; c +=3D d; - b =3D rol32(b, 16); d =3D rol32(d, 14); - d ^=3D a; b ^=3D c; - - a +=3D b; c +=3D d; - b =3D rol32(b, 6); d =3D rol32(d, 27); - d ^=3D a; b ^=3D c; - - a +=3D b; c +=3D d; - b =3D rol32(b, 16); d =3D rol32(d, 14); - d ^=3D a; b ^=3D c; + size_t i; =20 - pool[0] =3D a; pool[1] =3D b; - pool[2] =3D c; pool[3] =3D d; + for (i =3D 0; i < 16 / sizeof(long); ++i) { + s[3] ^=3D v[i]; +#ifdef CONFIG_64BIT + s[0] +=3D s[1]; s[1] =3D rol64(s[1], 13); s[1] ^=3D s[0]; s[0] =3D rol64= (s[0], 32); + s[2] +=3D s[3]; s[3] =3D rol64(s[3], 16); s[3] ^=3D s[2]; + s[0] +=3D s[3]; s[3] =3D rol64(s[3], 21); s[3] ^=3D s[0]; + s[2] +=3D s[1]; s[1] =3D rol64(s[1], 17); s[1] ^=3D s[2]; s[2] =3D rol64= (s[2], 32); +#else + s[0] +=3D s[1]; s[1] =3D rol32(s[1], 5); s[1] ^=3D s[0]; s[0] =3D rol32= (s[0], 16); + s[2] +=3D s[3]; s[3] =3D rol32(s[3], 8); s[3] ^=3D s[2]; + s[0] +=3D s[3]; s[3] =3D rol32(s[3], 7); s[3] ^=3D s[0]; + s[2] +=3D s[1]; s[1] =3D rol32(s[1], 13); s[1] ^=3D s[2]; s[2] =3D rol32= (s[2], 16); +#endif + s[0] ^=3D v[i]; + } } =20 -static DEFINE_PER_CPU(struct fast_pool, irq_randomness); - #ifdef CONFIG_SMP /* * This function is called when the CPU has just come online, with @@ -1228,7 +1231,15 @@ static unsigned long get_reg(struct fast static void mix_interrupt_randomness(struct work_struct *work) { struct fast_pool *fast_pool =3D container_of(work, struct fast_pool, mix); - u32 pool[4]; + /* + * The size of the copied stack pool is explicitly 16 bytes so that we + * tax mix_pool_byte()'s compression function the same amount on all + * platforms. This means on 64-bit we copy half the pool into this, + * while on 32-bit we copy all of it. 
The entropy is supposed to be + * sufficiently dispersed between bits that in the sponge-like + * half case, on average we don't wind up "losing" some. + */ + u8 pool[16]; =20 /* Check to see if we're running on the wrong CPU due to hotplug. */ local_irq_disable(); @@ -1241,7 +1252,7 @@ static void mix_interrupt_randomness(str * Copy the pool to the stack so that the mixer always has a * consistent view, before we reenable irqs again. */ - memcpy(pool, fast_pool->pool32, sizeof(pool)); + memcpy(pool, fast_pool->pool, sizeof(pool)); fast_pool->count =3D 0; fast_pool->last =3D jiffies; local_irq_enable(); @@ -1265,25 +1276,30 @@ void add_interrupt_randomness(int irq) struct fast_pool *fast_pool =3D this_cpu_ptr(&irq_randomness); struct pt_regs *regs =3D get_irq_regs(); unsigned int new_count; + union { + u32 u32[4]; + u64 u64[2]; + unsigned long longs[16 / sizeof(long)]; + } irq_data; =20 if (cycles =3D=3D 0) cycles =3D get_reg(fast_pool, regs); =20 if (sizeof(cycles) =3D=3D 8) - fast_pool->pool64[0] ^=3D cycles ^ rol64(now, 32) ^ irq; + irq_data.u64[0] =3D cycles ^ rol64(now, 32) ^ irq; else { - fast_pool->pool32[0] ^=3D cycles ^ irq; - fast_pool->pool32[1] ^=3D now; + irq_data.u32[0] =3D cycles ^ irq; + irq_data.u32[1] =3D now; } =20 if (sizeof(unsigned long) =3D=3D 8) - fast_pool->pool64[1] ^=3D regs ? instruction_pointer(regs) : _RET_IP_; + irq_data.u64[1] =3D regs ? instruction_pointer(regs) : _RET_IP_; else { - fast_pool->pool32[2] ^=3D regs ? instruction_pointer(regs) : _RET_IP_; - fast_pool->pool32[3] ^=3D get_reg(fast_pool, regs); + irq_data.u32[2] =3D regs ? 
instruction_pointer(regs) : _RET_IP_; + irq_data.u32[3] =3D get_reg(fast_pool, regs); } =20 - fast_mix(fast_pool->pool32); + fast_mix(fast_pool->pool, irq_data.longs); new_count =3D ++fast_pool->count; =20 if (new_count & MIX_INFLIGHT) From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E24A7C433EF for ; Fri, 27 May 2022 11:43:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235761AbiE0LnN (ORCPT ); Fri, 27 May 2022 07:43:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351701AbiE0Ll2 (ORCPT ); Fri, 27 May 2022 07:41:28 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7AB9162BE5; Fri, 27 May 2022 04:39:59 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 1628561D19; Fri, 27 May 2022 11:39:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 230C4C385A9; Fri, 27 May 2022 11:39:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651598; bh=ala25M/On/9ldOxK5Rpy89ehIRvhHOgMr0s9fvI87B8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=wvn2n8gv271xr9V0JRLs33v0ebnRGcSJsqUxqr3mZz7kHuzlXagjDdDecT0bivDdw rPn3XSladcETwx22k9CFqRFvNzDnECyzeXABt3W/HOCTVU14GPcGGn4dMpeEh1Zwi9 LN6xJS101r+zigjhalU+gbW0LxTXsuKUvo16pNpM= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. 
Donenfeld" Subject: [PATCH 5.17 053/111] random: make consistent usage of crng_ready() Date: Fri, 27 May 2022 10:49:25 +0200 Message-Id: <20220527084826.965361996@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit a96cfe2d427064325ecbf56df8816c6b871ec285 upstream. Rather than sometimes checking `crng_init < 2`, we should always use the crng_ready() macro, so that should we change anything later, it's consistent. Additionally, that macro already has a likely() around it, which means we don't need to open code our own likely() and unlikely() annotations. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 19 +++++++------------ 1 file changed, 7 insertions(+), 12 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -125,18 +125,13 @@ static void try_to_generate_entropy(void */ int wait_for_random_bytes(void) { - if (likely(crng_ready())) - return 0; - - do { + while (!crng_ready()) { int ret; ret =3D wait_event_interruptible_timeout(crng_init_wait, crng_ready(), H= Z); if (ret) return ret > 0 ? 
0 : ret; - try_to_generate_entropy(); - } while (!crng_ready()); - + } return 0; } EXPORT_SYMBOL(wait_for_random_bytes); @@ -291,7 +286,7 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - if (crng_init < 2) { + if (!crng_ready()) { crng_init =3D 2; finalize_init =3D true; } @@ -359,7 +354,7 @@ static void crng_make_state(u32 chacha_s * ready, we do fast key erasure with the base_crng directly, because * this is what crng_pre_init_inject() mutates during early init. */ - if (unlikely(!crng_ready())) { + if (!crng_ready()) { bool ready; =20 spin_lock_irqsave(&base_crng.lock, flags); @@ -802,7 +797,7 @@ static void credit_entropy_bits(size_t n entropy_count =3D min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D ori= g); =20 - if (crng_init < 2 && entropy_count >=3D POOL_MIN_BITS) + if (!crng_ready() && entropy_count >=3D POOL_MIN_BITS) crng_reseed(); } =20 @@ -959,7 +954,7 @@ int __init rand_initialize(void) extract_entropy(base_crng.key, sizeof(base_crng.key)); ++base_crng.generation; =20 - if (arch_init && trust_cpu && crng_init < 2) { + if (arch_init && trust_cpu && !crng_ready()) { crng_init =3D 2; pr_notice("crng init done (trusting CPU's manufacturer)\n"); } @@ -1548,7 +1543,7 @@ static long random_ioctl(struct file *f, case RNDRESEEDCRNG: if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (crng_init < 2) + if (!crng_ready()) return -ENODATA; crng_reseed(); return 0; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46626C433EF for ; Fri, 27 May 2022 11:43:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351673AbiE0LnJ (ORCPT ); Fri, 27 May 2022 07:43:09 -0400 Received: 
from lindbergh.monkeyblade.net ([23.128.96.19]:44910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351660AbiE0Llg (ORCPT ); Fri, 27 May 2022 07:41:36 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5CD13132A22; Fri, 27 May 2022 04:40:08 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id EDA1361CDB; Fri, 27 May 2022 11:40:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 011FCC385A9; Fri, 27 May 2022 11:40:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651607; bh=MDm/pljiyWy7z5p2QamWBTY2xuk1WtPtJ+TRVpkiheA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pinZ29/uO3IJJhNcJE47Op+svW6/Owhj/JIqJ/SVOOQ3rIlfaMB9soGo/HPE7sstM kPX3bSEmg7IR1siwuSamsa6WTJZ0steAZDMsIgpgeQX3BcLXCWk2WqCsizCdKr5FDY e21Z2B1u8oy4gq6fdPvfHCOuC8quYeD1gz/aDybs= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 054/111] random: reseed more often immediately after booting Date: Fri, 27 May 2022 10:49:26 +0200 Message-Id: <20220527084827.112732987@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 7a7ff644aeaf071d433caffb3b8ea57354b55bd3 upstream. 
In order to chip away at the "premature first" problem, we augment our existing entropy accounting with more frequent reseedings at boot. The idea is that at boot, we're getting entropy from various places, and we're not very sure which of early boot entropy is good and which isn't. Even when we're crediting the entropy, we're still not totally certain that it's any good. Since boot is the one time (aside from a compromise) that we have zero entropy, it's important that we shepherd entropy into the crng fairly often. At the same time, we don't want a "premature next" problem, whereby an attacker can brute force individual bits of added entropy. In lieu of going full-on Fortuna (for now), we can pick a simpler strategy of just reseeding more often during the first 5 minutes after boot. This is still bounded by the 256-bit entropy credit requirement, so we'll skip a reseeding if we haven't reached that, but in case entropy /is/ coming in, this ensures that it makes its way into the crng rather rapidly during these early stages. Ordinarily we reseed if the previous reseeding is 300 seconds old. This commit changes things so that for the first 600 seconds of boot time, we reseed if the previous reseeding is uptime / 2 seconds old. That means that we'll reseed at the very least double the uptime of the previous reseeding. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -336,6 +336,28 @@ static void crng_fast_key_erasure(u8 key } =20 /* + * Return whether the crng seed is considered to be sufficiently + * old that a reseeding might be attempted. 
This happens if the last + * reseeding was CRNG_RESEED_INTERVAL ago, or during early boot, at + * an interval proportional to the uptime. */ +static bool crng_has_old_seed(void) +{ + static bool early_boot = true; + unsigned long interval = CRNG_RESEED_INTERVAL; + + if (unlikely(READ_ONCE(early_boot))) { + time64_t uptime = ktime_get_seconds(); + if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2) + WRITE_ONCE(early_boot, false); + else + interval = max_t(unsigned int, 5 * HZ, + (unsigned int)uptime / 2 * HZ); + } + return time_after(jiffies, READ_ONCE(base_crng.birth) + interval); +} + +/* * This function returns a ChaCha state that you may use for generating * random data. It also returns up to 32 bytes on its own of random data * that may be used; random_data_len may not be greater than 32. @@ -368,10 +390,10 @@ static void crng_make_state(u32 chacha_s } /* - * If the base_crng is more than 5 minutes old, we reseed, which - * in turn bumps the generation counter that we check below. + * If the base_crng is old enough, we try to reseed, which in turn + * bumps the generation counter that we check below.
*/ - if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED= _INTERVAL))) + if (unlikely(crng_has_old_seed())) crng_reseed(); =20 local_lock_irqsave(&crngs.lock, flags); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3256BC433FE for ; Fri, 27 May 2022 11:40:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345158AbiE0Lkv (ORCPT ); Fri, 27 May 2022 07:40:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44912 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351625AbiE0Lj4 (ORCPT ); Fri, 27 May 2022 07:39:56 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 63B0D134E04; Fri, 27 May 2022 04:38:53 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id C6258B82466; Fri, 27 May 2022 11:38:51 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 33E98C385A9; Fri, 27 May 2022 11:38:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651530; bh=mC/9GikzS8f2FfdEMT+0hg7kDAl1vwOz2+EJxxrpi88=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RYDW+L9ZEUclCrq5o40EGGN/EIJF8EyT8p2hPdt4fh/EHeDRDH8M9bxHwPRhc+/yS 4MWUTgrmZWZHV7z+wduTzmaz+Kv7N+tqZBpgMzJjeX++vz/pmnqPsQY6aFC5QyJ2fx 3eWgEQUYN1if8h08iAR157S9mzyprYcdAYo38Aig= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. 
Donenfeld" Subject: [PATCH 5.17 055/111] random: check for signal and try earlier when generating entropy Date: Fri, 27 May 2022 10:49:27 +0200 Message-Id: <20220527084827.253743083@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 3e504d2026eb6c8762cd6040ae57db166516824a upstream. Rather than waiting a full second in an interruptable waiter before trying to generate entropy, try to generate entropy first and wait second. While waiting one second might give an extra second for getting entropy from elsewhere, we're already pretty late in the init process here, and whatever else is generating entropy will still continue to contribute. This has implications on signal handling: we call try_to_generate_entropy() from wait_for_random_bytes(), and wait_for_random_bytes() always uses wait_event_interruptible_timeout() when waiting, since it's called by userspace code in restartable contexts, where signals can pend. Since try_to_generate_entropy() now runs first, if a signal is pending, it's necessary for try_to_generate_entropy() to check for signals, since it won't hit the wait until after try_to_generate_entropy() has returned. And even before this change, when entering a busy loop in try_to_generate_entropy(), we should have been checking to see if any signals are pending, so that a process doesn't get stuck in that loop longer than expected. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -127,10 +127,11 @@ int wait_for_random_bytes(void) { while (!crng_ready()) { int ret; + + try_to_generate_entropy(); ret =3D wait_event_interruptible_timeout(crng_init_wait, crng_ready(), H= Z); if (ret) return ret > 0 ? 0 : ret; - try_to_generate_entropy(); } return 0; } @@ -1369,7 +1370,7 @@ static void try_to_generate_entropy(void return; =20 timer_setup_on_stack(&stack.timer, entropy_timer, 0); - while (!crng_ready()) { + while (!crng_ready() && !signal_pending(current)) { if (!timer_pending(&stack.timer)) mod_timer(&stack.timer, jiffies + 1); mix_pool_bytes(&stack.cycles, sizeof(stack.cycles)); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 358C6C433EF for ; Fri, 27 May 2022 11:40:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351536AbiE0Lke (ORCPT ); Fri, 27 May 2022 07:40:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45462 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351679AbiE0LkA (ORCPT ); Fri, 27 May 2022 07:40:00 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A4D2913B8F6; Fri, 27 May 2022 04:39:01 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3646C61C3F; Fri, 27 May 2022 11:39:00 +0000 (UTC) Received: by 
smtp.kernel.org (Postfix) with ESMTPSA id 40EB8C385A9; Fri, 27 May 2022 11:38:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651539; bh=GkJ2hQVuIH413bDUm/mBPKu7FkevDDIPxj+f5coBLP8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FIcpkWdV/AZgpjFz9ynaZfMtq7aTC9jM5JzfW3fLt7DIfiL35kvMSu0v+oj1KFOiH VnS2XdKvr/71CXB9v/kTL4xSyz4S793Jyq7ZKZ+xPqUZdM/BQwfZxh66AJKY7zpNrF NCRW8pdaWIc5H6XAHCKDysT7UV4SKq0ihhh5wpiI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 056/111] random: skip fast_init if hwrng provides large chunk of entropy Date: Fri, 27 May 2022 10:49:28 +0200 Message-Id: <20220527084827.391877284@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit af704c856e888fb044b058d731d61b46eeec499d upstream. At boot time, EFI calls add_bootloader_randomness(), which in turn calls add_hwgenerator_randomness(). Currently add_hwgenerator_randomness() feeds the first 64 bytes of randomness to the "fast init" non-crypto-grade phase. But if add_hwgenerator_randomness() gets called with more than POOL_MIN_BITS of entropy, there's no point in passing it off to the "fast init" stage, since that's enough entropy to bootstrap the real RNG. The "fast init" stage is just there to provide _something_ in the case where we don't have enough entropy to properly bootstrap the RNG. But if we do have enough entropy to bootstrap the RNG, the current logic doesn't serve a purpose. 
So, in the case where we're passed greater than or equal to POOL_MIN_BITS of entropy, this commit makes us skip the "fast init" phase. Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1123,7 +1123,7 @@ void rand_initialize_disk(struct gendisk void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { - if (unlikely(crng_init =3D=3D 0)) { + if (unlikely(crng_init =3D=3D 0 && entropy < POOL_MIN_BITS)) { size_t ret =3D crng_pre_init_inject(buffer, count, true); mix_pool_bytes(buffer, ret); count -=3D ret; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D3FEC433EF for ; Fri, 27 May 2022 11:41:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244056AbiE0LlM (ORCPT ); Fri, 27 May 2022 07:41:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45654 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351736AbiE0LkD (ORCPT ); Fri, 27 May 2022 07:40:03 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A259613C1D3; Fri, 27 May 2022 04:39:11 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id C3755B824D9; Fri, 27 May 2022 
11:39:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 13BEEC385A9; Fri, 27 May 2022 11:39:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651548; bh=BfqSagvv1GFEPwDcbduwMmLRg/Yr4EQBFhb0nUB/Tow=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=c1Bi6NtzT/2O7p0AXkV9qFGCdWgLiC+2xHA4TsbwwuzWCjsZ60gvR7/ieeMd+vQzZ WQ2uBVeS13GtcvUsxgvaZVAEIDgyfW4cRtcIRYicLH72jsPX4CHP3cwc9l8H/tNzKj XLPXbOoCqO2MMLISfbr2l3dm8cGjLcseCyG6PV3c= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Graham Christensen , Ard Biesheuvel , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 057/111] random: treat bootloader trust toggle the same way as cpu trust toggle Date: Fri, 27 May 2022 10:49:29 +0200 Message-Id: <20220527084827.533540891@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit d97c68d178fbf8aaaf21b69b446f2dfb13909316 upstream. If CONFIG_RANDOM_TRUST_CPU is set, the RNG initializes using RDRAND. But, the user can disable (or enable) this behavior by setting `random.trust_cpu=3D0/1` on the kernel command line. This allows system builders to do reasonable things while avoiding howls from tinfoil hatters. (Or vice versa.) CONFIG_RANDOM_TRUST_BOOTLOADER is basically the same thing, but regards the seed passed via EFI or device tree, which might come from RDRAND or a TPM or somewhere else. In order to allow distros to more easily enable this while avoiding those same howls (or vice versa), this commit adds the corresponding `random.trust_bootloader=3D0/1` toggle. 
Cc: Theodore Ts'o Cc: Graham Christensen Reviewed-by: Ard Biesheuvel Reviewed-by: Dominik Brodowski Link: https://github.com/NixOS/nixpkgs/pull/165355 Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- Documentation/admin-guide/kernel-parameters.txt | 6 ++++++ drivers/char/Kconfig | 3 ++- drivers/char/random.c | 8 +++++++- 3 files changed, 15 insertions(+), 2 deletions(-) --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4355,6 +4355,12 @@ fully seed the kernel's CRNG. Default is controlled by CONFIG_RANDOM_TRUST_CPU. + random.trust_bootloader={on,off} + [KNL] Enable or disable trusting the use of a + seed passed by the bootloader (if available) to + fully seed the kernel's CRNG. Default is controlled + by CONFIG_RANDOM_TRUST_BOOTLOADER. + randomize_kstack_offset= [KNL] Enable or disable kernel stack offset randomization, which provides roughly 5 bits of --- a/drivers/char/Kconfig +++ b/drivers/char/Kconfig @@ -449,6 +449,7 @@ config RANDOM_TRUST_BOOTLOADER device randomness. Say Y here to assume the entropy provided by the booloader is trustworthy so it will be added to the kernel's entropy pool. Otherwise, say N here so it will be regarded as device input that - only mixes the entropy pool. + only mixes the entropy pool. This can also be configured at boot with + "random.trust_bootloader=on/off".
=20 endmenu --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -943,11 +943,17 @@ static bool drain_entropy(void *buf, siz **********************************************************************/ =20 static bool trust_cpu __ro_after_init =3D IS_ENABLED(CONFIG_RANDOM_TRUST_C= PU); +static bool trust_bootloader __ro_after_init =3D IS_ENABLED(CONFIG_RANDOM_= TRUST_BOOTLOADER); static int __init parse_trust_cpu(char *arg) { return kstrtobool(arg, &trust_cpu); } +static int __init parse_trust_bootloader(char *arg) +{ + return kstrtobool(arg, &trust_bootloader); +} early_param("random.trust_cpu", parse_trust_cpu); +early_param("random.trust_bootloader", parse_trust_bootloader); =20 /* * The first collection of entropy occurs at system boot while interrupts @@ -1155,7 +1161,7 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_random */ void add_bootloader_randomness(const void *buf, size_t size) { - if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER)) + if (trust_bootloader) add_hwgenerator_randomness(buf, size, size * 8); else add_device_randomness(buf, size); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 114BCC433EF for ; Fri, 27 May 2022 11:41:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232037AbiE0Llu (ORCPT ); Fri, 27 May 2022 07:41:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44296 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351459AbiE0LkX (ORCPT ); Fri, 27 May 2022 07:40:23 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 14FAC13C0A2; Fri, 27 May 2022 04:39:23 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with 
cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 8C5D0B82466; Fri, 27 May 2022 11:39:21 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DACB9C385A9; Fri, 27 May 2022 11:39:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651560; bh=jyNLySF5vsx/KrVHGxCws/YWkN5WY3oKvFLrahlNFEs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=aqiF1EW5BUjHiBbdz2fNQMErmPpY5KeMj8qmvkBsHGjRcDZCDtmhWgBwuYtRFsFRJ ZQRMAkjLln+w9uSvrY1oZ8LI86ozER55GEu7LYsUjRfBH7SeX9xSRNyJlk+4xCp+F7 fbnu9MqRf0SGqz5zTWwF+Qk6S0gSwkpB0M3KK3CU= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 058/111] random: re-add removed comment about get_random_{u32,u64} reseeding Date: Fri, 27 May 2022 10:49:30 +0200 Message-Id: <20220527084827.705061603@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit dd7aa36e535797926d8eb311da7151919130139d upstream. The comment about get_random_{u32,u64}() not invoking reseeding got added in an unrelated commit, that then was recently reverted by 0313bc278dac ("Revert "random: block in /dev/urandom""). So this adds that little comment snippet back, and improves the wording a bit too. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -224,9 +224,10 @@ static void _warn_unseeded_randomness(co * * These interfaces will return the requested number of random bytes * into the given buffer or as a return value. This is equivalent to - * a read from /dev/urandom. The integer family of functions may be - * higher performance for one-off random integers, because they do a - * bit of buffering. + * a read from /dev/urandom. The u32, u64, int, and long family of + * functions may be higher performance for one-off random integers, + * because they do a bit of buffering and do not invoke reseeding + * until the buffer is emptied. * *********************************************************************/ From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB3B9C433F5 for ; Fri, 27 May 2022 11:41:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351609AbiE0Ll6 (ORCPT ); Fri, 27 May 2022 07:41:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45446 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351574AbiE0Lkr (ORCPT ); Fri, 27 May 2022 07:40:47 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7F55913C1F4; Fri, 27 May 2022 04:39:30 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B9DB461CC4; 
Fri, 27 May 2022 11:39:29 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3772C385A9; Fri, 27 May 2022 11:39:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651569; bh=hnMVFKzJakkFudy48mRUI+VFFoGuT+sdo+aAIP6T100=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jsfso5SSCF12QNUW1Sa2P1qYKa3xf/FKQEUq81nEkiuWkHnoRTUr5pqIHvpmywwwI a+SN+ta2zydQv0ROSbcvPcUoSjEvIh9vVn8D+63BX9nb4Rd8qxogDJaD288BfFJk4o i+9taOR7I4gW55rQb/BhNJUGrXfSvGMrxhhx6CSM= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , Theodore Tso , "Jason A. Donenfeld" Subject: [PATCH 5.17 059/111] random: mix build-time latent entropy into pool at init Date: Fri, 27 May 2022 10:49:31 +0200 Message-Id: <20220527084827.872544791@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 1754abb3e7583c570666fa1e1ee5b317e88c89a0 upstream. Prior, the "input_pool_data" array needed no real initialization, and so it was easy to mark it with __latent_entropy to populate it during compile-time. In switching to using a hash function, this required us to specifically initialize it to some specific state, which means we dropped the __latent_entropy attribute. An unfortunate side effect was this meant the pool was no longer seeded using compile-time random data. In order to bring this back, we declare an array in rand_initialize() with __latent_entropy and call mix_pool_bytes() on that at init, which accomplishes the same thing as before. We make this __initconst, so that it doesn't take up space at runtime after init. 
Fixes: 6e8ec2552c7d ("random: use computational hash for entropy extraction= ") Reviewed-by: Dominik Brodowski Reviewed-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 5 +++++ 1 file changed, 5 insertions(+) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -970,6 +970,11 @@ int __init rand_initialize(void) bool arch_init =3D true; unsigned long rv; =20 +#if defined(LATENT_ENTROPY_PLUGIN) + static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent= _entropy; + _mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed)); +#endif + for (i =3D 0; i < BLAKE2S_BLOCK_SIZE; i +=3D sizeof(rv)) { if (!arch_get_random_seed_long_early(&rv) && !arch_get_random_long_early(&rv)) { From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89DF9C433EF for ; Fri, 27 May 2022 11:42:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351612AbiE0LmO (ORCPT ); Fri, 27 May 2022 07:42:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45654 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351669AbiE0LlG (ORCPT ); Fri, 27 May 2022 07:41:06 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A30EF119041; Fri, 27 May 2022 04:39:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org 
(Postfix) with ESMTPS id 3870DB824D6; Fri, 27 May 2022 11:39:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 98039C385A9; Fri, 27 May 2022 11:39:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651578; bh=1RFClOdMABeJEovmbuytbG4/BNa2AAWG/TRS1Oa10Js=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OeZaHG2/YljLzmxZnsKuRPUNNOyidkmaM+UdB9eWgs/qBa7NNwP71JMRsUOjlWaMj o7dGd9Ab/9Dm7UeitIQBmR8AfOJOkhlz3D9UmAQXCcN3P/WJ1k/3rGi1qJXl26VHw+ BXEr67GQaCln9P5ObKs92OHZoqsLhLd2bYFAGHx0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Jan Varho , "Jason A. Donenfeld" Subject: [PATCH 5.17 060/111] random: do not split fast init input in add_hwgenerator_randomness() Date: Fri, 27 May 2022 10:49:32 +0200 Message-Id: <20220527084828.030101425@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Jan Varho commit 527a9867af29ff89f278d037db704e0ed50fb666 upstream. add_hwgenerator_randomness() tries to only use the required amount of input for fast init, but credits all the entropy, rather than a fraction of it. Since it's hard to determine how much entropy is left over out of a non-unformly random sample, either give it all to fast init or credit it, but don't attempt to do both. In the process, we can clean up the injection code to no longer need to return a value. Signed-off-by: Jan Varho [Jason: expanded commit message] Fixes: 73c7733f122e ("random: do not throw away excess input to crng_fast_l= oad") Cc: stable@vger.kernel.org # 5.17+, requires af704c856e88 Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 23 ++++++----------------- 1 file changed, 6 insertions(+), 17 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -437,11 +437,8 @@ static void crng_make_state(u32 chacha_s * This shouldn't be set by functions like add_device_randomness(), * where we can't trust the buffer passed to it is guaranteed to be * unpredictable (so it might not have any entropy at all). - * - * Returns the number of bytes processed from input, which is bounded - * by CRNG_INIT_CNT_THRESH if account is true. */ -static size_t crng_pre_init_inject(const void *input, size_t len, bool acc= ount) +static void crng_pre_init_inject(const void *input, size_t len, bool accou= nt) { static int crng_init_cnt =3D 0; struct blake2s_state hash; @@ -452,18 +449,15 @@ static size_t crng_pre_init_inject(const spin_lock_irqsave(&base_crng.lock, flags); if (crng_init !=3D 0) { spin_unlock_irqrestore(&base_crng.lock, flags); - return 0; + return; } =20 - if (account) - len =3D min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt); - blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); blake2s_update(&hash, input, len); blake2s_final(&hash, base_crng.key); =20 if (account) { - crng_init_cnt +=3D len; + crng_init_cnt +=3D min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_c= nt); if (crng_init_cnt >=3D CRNG_INIT_CNT_THRESH) { ++base_crng.generation; crng_init =3D 1; @@ -474,8 +468,6 @@ static size_t crng_pre_init_inject(const =20 if (crng_init =3D=3D 1) pr_notice("fast init done\n"); - - return len; } =20 static void _get_random_bytes(void *buf, size_t nbytes) @@ -1136,12 +1128,9 @@ void add_hwgenerator_randomness(const vo size_t entropy) { if (unlikely(crng_init =3D=3D 0 && entropy < POOL_MIN_BITS)) { - size_t ret =3D 
crng_pre_init_inject(buffer, count, true); - mix_pool_bytes(buffer, ret); - count -=3D ret; - buffer +=3D ret; - if (!count || crng_init =3D=3D 0) - return; + crng_pre_init_inject(buffer, count, true); + mix_pool_bytes(buffer, count); + return; } =20 /* From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3404C433F5 for ; Fri, 27 May 2022 11:42:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351712AbiE0Lme (ORCPT ); Fri, 27 May 2022 07:42:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44330 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351604AbiE0LlZ (ORCPT ); Fri, 27 May 2022 07:41:25 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 596E61269A1; Fri, 27 May 2022 04:39:52 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 133CEB824D9; Fri, 27 May 2022 11:39:51 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6CD43C385A9; Fri, 27 May 2022 11:39:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651589; bh=ZE8Oti+fu3e1FxWiLUTqce4e2vXw7qouPFIc2w4kBJs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dP/zqrSNHHYeiWGOVYLv1i+Ldcpa9f1NWUgDEus236UCIJSMV6omFW+TwTlswaMDD 5DEremEd6ObZFH8Ds2bIDq+0f38UGxRfZ9QV5n0UMR4DL2dlb73yfRaUdFj3ismbU4 LNWIh1gGJ2oVCLs224F/VhbnIK9EFcfeqpUtKC4E= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , 
Jann Horn , "Jason A. Donenfeld" Subject: [PATCH 5.17 061/111] random: do not allow user to keep crng key around on stack Date: Fri, 27 May 2022 10:49:33 +0200 Message-Id: <20220527084828.191546873@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit aba120cc101788544aa3e2c30c8da88513892350 upstream. The fast key erasure RNG design relies on the key that's used to be used and then discarded. We do this, making judicious use of memzero_explicit(). However, reads to /dev/urandom and calls to getrandom() involve a copy_to_user(), and userspace can use FUSE or userfaultfd, or make a massive call, dynamically remap memory addresses as it goes, and set the process priority to idle, in order to keep a kernel stack alive indefinitely. By probing /proc/sys/kernel/random/entropy_avail to learn when the crng key is refreshed, a malicious userspace could mount this attack every 5 minutes thereafter, breaking the crng's forward secrecy. In order to fix this, we just overwrite the stack's key with the first 32 bytes of the "free" fast key erasure output. If we're returning <=3D 32 bytes to the user, then we can still return those bytes directly, so that short reads don't become slower. And for long reads, the difference is hopefully lost in the amortization, so it doesn't change much, with that amortization helping variously for medium reads. We don't need to do this for get_random_bytes() and the various kernel-space callers, and later, if we ever switch to always batching, this won't be necessary either, so there's no need to change the API of these functions. 
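[Editor's note: the stack-key erasure described above can be modeled outside the kernel. Below is a minimal sketch, with a toy 64-bit generator standing in for ChaCha20 — `toy_block`, `toy_key`, `toy_read` are hypothetical illustration names, not kernel APIs. The point it demonstrates is the one in the commit message: the working key is replaced with fresh generator output *before* any bytes are handed to the caller, so a stalled copy can only ever observe the next key.]

```c
#include <stdint.h>
#include <string.h>

/* Toy keystream step: NOT ChaCha20, just an LCG stand-in. */
static uint64_t toy_block(uint64_t *state)
{
	return *state = *state * 6364136223846793005ULL
			       + 1442695040888963407ULL;
}

static uint64_t toy_key = 42;	/* stand-in for the crng key */

uint64_t toy_key_value(void)
{
	return toy_key;
}

/*
 * Read one word of "random" output.  As in the patch, the first
 * generator output overwrites the key itself, before the caller
 * sees anything, so even an indefinitely stalled copy_to_user()
 * can no longer leak the key that produced earlier output.
 */
uint64_t toy_read(void)
{
	uint64_t stack_state = toy_key;
	uint64_t out;

	toy_key = toy_block(&stack_state);	/* fast key erasure   */
	out = toy_block(&stack_state);		/* byte(s) for caller */
	/* memzero_explicit() stand-in: wipe the stack copy. */
	memset(&stack_state, 0, sizeof(stack_state));
	return out;
}
```

[The real patch does the same thing by pointing `crng_make_state()`'s output at `&chacha_state[4]`, the key words of the on-stack ChaCha state.]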
Cc: Theodore Ts'o Reviewed-by: Jann Horn Fixes: c92e040d575a ("random: add backtracking protection to the CRNG") Fixes: 186873c549df ("random: use simpler fast key erasure flow on per-cpu = keys") Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 35 +++++++++++++++++++++++------------ 1 file changed, 23 insertions(+), 12 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -532,19 +532,29 @@ static ssize_t get_random_bytes_user(voi if (!nbytes) return 0; =20 - len =3D min_t(size_t, 32, nbytes); - crng_make_state(chacha_state, output, len); - - if (copy_to_user(buf, output, len)) - return -EFAULT; - nbytes -=3D len; - buf +=3D len; - ret +=3D len; + /* + * Immediately overwrite the ChaCha key at index 4 with random + * bytes, in case userspace causes copy_to_user() below to sleep + * forever, so that we still retain forward secrecy in that case. + */ + crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE); + /* + * However, if we're doing a read of len <=3D 32, we don't need to + * use chacha_state after, so we can simply return those bytes to + * the user directly. + */ + if (nbytes <=3D CHACHA_KEY_SIZE) { + ret =3D copy_to_user(buf, &chacha_state[4], nbytes) ? 
-EFAULT : nbytes; + goto out_zero_chacha; + } =20 - while (nbytes) { + do { if (large_request && need_resched()) { - if (signal_pending(current)) + if (signal_pending(current)) { + if (!ret) + ret =3D -ERESTARTSYS; break; + } schedule(); } =20 @@ -561,10 +571,11 @@ static ssize_t get_random_bytes_user(voi nbytes -=3D len; buf +=3D len; ret +=3D len; - } + } while (nbytes); =20 - memzero_explicit(chacha_state, sizeof(chacha_state)); memzero_explicit(output, sizeof(output)); +out_zero_chacha: + memzero_explicit(chacha_state, sizeof(chacha_state)); return ret; } From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B25EC433FE for ; Fri, 27 May 2022 11:49:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351882AbiE0LtM (ORCPT ); Fri, 27 May 2022 07:49:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58508 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351610AbiE0Lon (ORCPT ); Fri, 27 May 2022 07:44:43 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69826AF1D9; Fri, 27 May 2022 04:41:19 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 0478C61CB7; Fri, 27 May 2022 11:41:19 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0F147C385A9; Fri, 27 May 2022 11:41:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651678; bh=OpoznZMK1Ok1ZJg+f5Ff1jh+Gv+HjqOFJILFgEo6XZU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=esRQmetImWxIdQyn6v1tey2w0+ogxUo4I36yzJNMmHVU3mibhsaQWz3WlCwPVSoyN aECkt8bynqE/VVNKwDU5086M06FklIjGAZijMGKQs1P/8uUoNhgdWRYpNjLADBwAIx PxJHLlSdtF/R03aHcjHdm14H/Zqn3JAhWmCxwbNI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Jann Horn , "Jason A. Donenfeld" Subject: [PATCH 5.17 062/111] random: check for signal_pending() outside of need_resched() check Date: Fri, 27 May 2022 10:49:34 +0200 Message-Id: <20220527084828.307791407@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Jann Horn commit 1448769c9cdb69ad65287f4f7ab58bc5f2f5d7ba upstream. signal_pending() checks TIF_NOTIFY_SIGNAL and TIF_SIGPENDING, which signal that the task should bail out of the syscall when possible. This is a separate concept from need_resched(), which checks TIF_NEED_RESCHED, signaling that the task should preempt. In particular, with the current code, the signal_pending() bailout probably won't work reliably. Change this to look like other functions that read lots of data, such as read_zero(). Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Jann Horn Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -549,13 +549,13 @@ static ssize_t get_random_bytes_user(voi } =20 do { - if (large_request && need_resched()) { + if (large_request) { if (signal_pending(current)) { if (!ret) ret =3D -ERESTARTSYS; break; } - schedule(); + cond_resched(); } =20 chacha20_block(chacha_state, output); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B88CC433EF for ; Fri, 27 May 2022 11:49:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351900AbiE0Ls4 (ORCPT ); Fri, 27 May 2022 07:48:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57450 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351935AbiE0LpJ (ORCPT ); Fri, 27 May 2022 07:45:09 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2CA2413C351; Fri, 27 May 2022 04:41:31 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id BD52461C3F; Fri, 27 May 2022 11:41:30 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C766FC385A9; Fri, 27 May 2022 11:41:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651690; bh=5Qvl9MRNPbfZ5zmGTUMKK2Aok2MPBZGiLGgl4WAat1s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=lYwx4WEvQ1SNgCUPKmPlqp283w+Z+TdbnJXAsOeZfxvvRtoncvKizDdFf9ENfhsCr HUsiNZovbwAGvHpv7EGiKgl/PgZWjTEnFfobjhc0z78T37aT4x3hruDaGkEmC0TeHB zoD+Gt0K7LtLxmsdp0HtQlfKnnLEXu1J6xHLuc9o= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Jann Horn , Theodore Tso , "Jason A. Donenfeld" Subject: [PATCH 5.17 063/111] random: check for signals every PAGE_SIZE chunk of /dev/[u]random Date: Fri, 27 May 2022 10:49:35 +0200 Message-Id: <20220527084828.462781628@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit e3c1c4fd9e6d14059ed93ebfe15e1c57793b1a05 upstream. In 1448769c9cdb ("random: check for signal_pending() outside of need_resched() check"), Jann pointed out that we previously were only checking the TIF_NOTIFY_SIGNAL and TIF_SIGPENDING flags if the process had TIF_NEED_RESCHED set, which meant in practice, super long reads to /dev/[u]random would delay signal handling by a long time. I tried this using the below program, and indeed I wasn't able to interrupt a /dev/urandom read until after several megabytes had been read. The bug he fixed has always been there, and so code that reads from /dev/urandom without checking the return value of read() has mostly worked for a long time, for most sizes, not just for <=3D 256. Maybe it makes sense to keep that code working. The reason it was so small prior, ignoring the fact that it didn't work anyway, was likely because /dev/random used to block, and that could happen for pretty large lengths of time while entropy was gathered. 
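[Editor's note: the read_zero()-style loop this patch adopts is easy to model in plain C. A sketch under toy assumptions — `TOY_CHUNK`/`TOY_PAGE` stand in for `CHACHA_BLOCK_SIZE`/`PAGE_SIZE`, and a plain flag stands in for `signal_pending(current)`; `chunked_fill` is an illustrative name, not kernel code. It shows the two guarantees the commit describes: at least one chunk is always delivered, and the signal check fires only on page-sized boundaries.]

```c
#include <stddef.h>

#define TOY_CHUNK 64	/* stands in for CHACHA_BLOCK_SIZE */
#define TOY_PAGE  256	/* stands in for PAGE_SIZE; a multiple of TOY_CHUNK */

static unsigned char toy_buf[4096];

/*
 * Fill up to nbytes of toy_buf in TOY_CHUNK pieces.  Mirrors the
 * patched loop: always deliver at least one chunk, and poll the
 * "signal pending" flag only after every TOY_PAGE bytes produced,
 * so short reads are never interrupted while long reads stay
 * responsive to signals.  Returns the number of bytes produced.
 */
size_t chunked_fill(size_t nbytes, int signal_pending)
{
	size_t ret = 0;

	do {
		size_t len = nbytes < TOY_CHUNK ? nbytes : TOY_CHUNK;

		for (size_t i = 0; i < len; i++)
			toy_buf[ret + i] = (unsigned char)(ret + i);

		ret += len;
		nbytes -= len;

		if (!(ret % TOY_PAGE) && nbytes && signal_pending)
			break;	/* bail out, but only on a page boundary */
	} while (nbytes);

	return ret;
}
```

[With a signal pending, a long request stops at the first page boundary rather than running to completion; a request smaller than a page always completes.]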
But now, it's just a chacha20 call, which is extremely fast and is just operating on pure data, without having to wait for some external event. In that sense, /dev/[u]random is a lot more like /dev/zero. Taking a page out of /dev/zero's read_zero() function, it always returns at least one chunk, and then checks for signals after each chunk. Chunk sizes there are of length PAGE_SIZE. Let's just copy the same thing for /dev/[u]random, and check for signals and cond_resched() for every PAGE_SIZE amount of data. This makes the behavior more consistent with expectations, and should mitigate the impact of Jann's fix for the age-old signal check bug. Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee ---- test program ---- #include #include #include #include static unsigned char x[~0U]; static void handle(int) { } int main(int argc, char *argv[]) { pid_t pid =3D getpid(), child; signal(SIGUSR1, handle); if (!(child =3D fork())) { for (;;) kill(pid, SIGUSR1); } pause(); printf("interrupted after reading %zd bytes\n", getrandom(x, sizeof(x),= 0)); kill(child, SIGTERM); return 0; } Cc: Jann Horn Cc: Theodore Ts'o Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -523,7 +523,6 @@ EXPORT_SYMBOL(get_random_bytes); =20 static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) { - bool large_request =3D nbytes > 256; ssize_t ret =3D 0; size_t len; u32 chacha_state[CHACHA_STATE_WORDS]; @@ -549,15 +548,6 @@ static ssize_t get_random_bytes_user(voi } =20 do { - if (large_request) { - if (signal_pending(current)) { - if (!ret) - ret =3D -ERESTARTSYS; - break; - } - cond_resched(); - } - chacha20_block(chacha_state, output); if (unlikely(chacha_state[12] =3D=3D 0)) ++chacha_state[13]; @@ -571,6 +561,13 @@ static ssize_t get_random_bytes_user(voi nbytes -=3D len; buf +=3D len; ret +=3D len; + + BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE !=3D 0); + if (!(ret % PAGE_SIZE) && nbytes) { + if (signal_pending(current)) + break; + cond_resched(); + } } while (nbytes); =20 memzero_explicit(output, sizeof(output)); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24984C433EF for ; Fri, 27 May 2022 11:47:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351665AbiE0LrQ (ORCPT ); Fri, 27 May 2022 07:47:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55476 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351989AbiE0LpN (ORCPT ); Fri, 27 May 2022 07:45:13 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2F61713C361; Fri, 27 May 2022 04:41:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) 
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 9BCAD61CE7; Fri, 27 May 2022 11:41:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A9E41C34113; Fri, 27 May 2022 11:41:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651699; bh=hZxyEAMbBobh8dF4JKUDHBDC9LPGBRyQuIRVk05DggI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Dbs5vTX4+ch1f9o93wCcZCCxVn4+1ne+iUGeXALGO8yJnCcsuQqIqGTqNbhp//A// zhYxI7LII9at5XZ4hJcx6cxQAizOgwDRVqxdbZHYizJ/fNKalX78ZywEM/5Kx9ibp8 9NhwMd+KwM+0Ktcu0/XinvlfKS/kU7VOw905lGws= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Linus Torvalds , Jann Horn , "Jason A. Donenfeld" Subject: [PATCH 5.17 064/111] random: allow partial reads if later user copies fail Date: Fri, 27 May 2022 10:49:36 +0200 Message-Id: <20220527084828.597913681@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 5209aed5137880fa229746cb521f715e55596460 upstream. Rather than failing entirely if a copy_to_user() fails at some point, instead we should return a partial read for the amount that succeeded prior, unless none succeeded at all, in which case we return -EFAULT as before. This makes it consistent with other reader interfaces. 
For example, the following snippet for /dev/zero outputs "4" followed by "1": int fd; void *x =3D mmap(NULL, 4096, PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1,= 0); assert(x !=3D MAP_FAILED); fd =3D open("/dev/zero", O_RDONLY); assert(fd >=3D 0); printf("%zd\n", read(fd, x, 4)); printf("%zd\n", read(fd, x + 4095, 4)); close(fd); This brings that same standard behavior to the various RNG reader interfaces. While we're at it, we can streamline the loop logic a little bit. Suggested-by: Linus Torvalds Cc: Jann Horn Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -523,8 +523,7 @@ EXPORT_SYMBOL(get_random_bytes); =20 static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) { - ssize_t ret =3D 0; - size_t len; + size_t len, left, ret =3D 0; u32 chacha_state[CHACHA_STATE_WORDS]; u8 output[CHACHA_BLOCK_SIZE]; =20 @@ -543,37 +542,40 @@ static ssize_t get_random_bytes_user(voi * the user directly. */ if (nbytes <=3D CHACHA_KEY_SIZE) { - ret =3D copy_to_user(buf, &chacha_state[4], nbytes) ? 
-EFAULT : nbytes; + ret =3D nbytes - copy_to_user(buf, &chacha_state[4], nbytes); goto out_zero_chacha; } =20 - do { + for (;;) { chacha20_block(chacha_state, output); if (unlikely(chacha_state[12] =3D=3D 0)) ++chacha_state[13]; =20 len =3D min_t(size_t, nbytes, CHACHA_BLOCK_SIZE); - if (copy_to_user(buf, output, len)) { - ret =3D -EFAULT; + left =3D copy_to_user(buf, output, len); + if (left) { + ret +=3D len - left; break; } =20 - nbytes -=3D len; buf +=3D len; ret +=3D len; + nbytes -=3D len; + if (!nbytes) + break; =20 BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE !=3D 0); - if (!(ret % PAGE_SIZE) && nbytes) { + if (ret % PAGE_SIZE =3D=3D 0) { if (signal_pending(current)) break; cond_resched(); } - } while (nbytes); + } =20 memzero_explicit(output, sizeof(output)); out_zero_chacha: memzero_explicit(chacha_state, sizeof(chacha_state)); - return ret; + return ret ? ret : -EFAULT; } =20 /* From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EF9C5C433EF for ; Fri, 27 May 2022 11:43:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229669AbiE0Lno (ORCPT ); Fri, 27 May 2022 07:43:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45690 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351408AbiE0LmI (ORCPT ); Fri, 27 May 2022 07:42:08 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1872C1356AE; Fri, 27 May 2022 04:40:22 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3607361D19; 
Fri, 27 May 2022 11:40:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 42688C34100; Fri, 27 May 2022 11:40:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651621; bh=PAjY9qR4iWsIQjlVzy0NbULhgShv2RAFtgIWOpoQyC0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kISjWCE5IKJBsw7FQcmRXlOAZ+fBULZHpnYqQF8Tamy83FcSHrFjXoXHtxdhO3PTk WXQsj97yQ+yQZbl8SQx747457Om2jY7Vaz1Yo47idUjlDWW+SdIJnp15ahR+8oZDTZ si94o+12kU0Igo0Pn2IgGIxr8I9mdikANYwxd4dQ= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , Theodore Tso , Thomas Gleixner , "Jason A. Donenfeld" Subject: [PATCH 5.17 065/111] random: make random_get_entropy() return an unsigned long Date: Fri, 27 May 2022 10:49:37 +0200 Message-Id: <20220527084828.733939362@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit b0c3e796f24b588b862b61ce235d3c9417dc8983 upstream. Some implementations were returning type `unsigned long`, while others that fell back to get_cycles() were implicitly returning a `cycles_t` or an untyped constant int literal. That makes for weird and confusing code, and basically all code in the kernel already handled it like it was an `unsigned long`. I recently tried to handle it as the largest type it could be, a `cycles_t`, but doing so doesn't really help with much. Instead let's just make random_get_entropy() return an unsigned long all the time. 
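[Editor's note: the payoff of the type change shows up in the interrupt-path packing, where the patch collapses two word-size branches into one. A hypothetical userspace-compilable model — `irq_words`, `pack_irq_data`, and `rol64_32` are illustrative names, with `rol64(w, 32)` open-coded — assuming `cycles` now has type `unsigned long` so a single `sizeof()` test covers both word sizes:]

```c
#include <stdint.h>

union irq_words {
	uint64_t u64[2];
	uint32_t u32[4];
};

static uint64_t rol64_32(uint64_t w)	/* rol64(w, 32) */
{
	return (w << 32) | (w >> 32);
}

/*
 * With cycles an unsigned long (the same width as the other
 * pointer-sized inputs), both halves of the old split branches
 * collapse into one sizeof(unsigned long) test, as in the patch.
 */
void pack_irq_data(union irq_words *d, unsigned long cycles,
		   unsigned long now, unsigned long irq, unsigned long ip)
{
	if (sizeof(unsigned long) == 8) {
		d->u64[0] = cycles ^ rol64_32(now) ^ irq;
		d->u64[1] = ip;
	} else {
		d->u32[0] = (uint32_t)(cycles ^ irq);
		d->u32[1] = (uint32_t)now;
		d->u32[2] = (uint32_t)ip;
		d->u32[3] = 0;	/* get_reg() fallback stand-in */
	}
}
```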
This also matches the commonly used `arch_get_random_long()` function, so now RDRAND and RDTSC return the same sized integer, which means one can fallback to the other more gracefully. Cc: Dominik Brodowski Cc: Theodore Ts'o Acked-by: Thomas Gleixner Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 20 +++++++------------- include/linux/timex.h | 2 +- 2 files changed, 8 insertions(+), 14 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1013,7 +1013,7 @@ int __init rand_initialize(void) */ void add_device_randomness(const void *buf, size_t size) { - cycles_t cycles =3D random_get_entropy(); + unsigned long cycles =3D random_get_entropy(); unsigned long flags, now =3D jiffies; =20 if (crng_init =3D=3D 0 && size) @@ -1044,8 +1044,7 @@ struct timer_rand_state { */ static void add_timer_randomness(struct timer_rand_state *state, unsigned = int num) { - cycles_t cycles =3D random_get_entropy(); - unsigned long flags, now =3D jiffies; + unsigned long cycles =3D random_get_entropy(), now =3D jiffies, flags; long delta, delta2, delta3; =20 spin_lock_irqsave(&input_pool.lock, flags); @@ -1300,8 +1299,7 @@ static void mix_interrupt_randomness(str void add_interrupt_randomness(int irq) { enum { MIX_INFLIGHT =3D 1U << 31 }; - cycles_t cycles =3D random_get_entropy(); - unsigned long now =3D jiffies; + unsigned long cycles =3D random_get_entropy(), now =3D jiffies; struct fast_pool *fast_pool =3D this_cpu_ptr(&irq_randomness); struct pt_regs *regs =3D get_irq_regs(); unsigned int new_count; @@ -1314,16 +1312,12 @@ void add_interrupt_randomness(int irq) if (cycles =3D=3D 0) cycles =3D get_reg(fast_pool, regs); =20 - if (sizeof(cycles) =3D=3D 8) + if (sizeof(unsigned long) =3D=3D 8) { irq_data.u64[0] =3D cycles ^ rol64(now, 32) ^ irq; - else { + 
irq_data.u64[1] =3D regs ? instruction_pointer(regs) : _RET_IP_; + } else { irq_data.u32[0] =3D cycles ^ irq; irq_data.u32[1] =3D now; - } - - if (sizeof(unsigned long) =3D=3D 8) - irq_data.u64[1] =3D regs ? instruction_pointer(regs) : _RET_IP_; - else { irq_data.u32[2] =3D regs ? instruction_pointer(regs) : _RET_IP_; irq_data.u32[3] =3D get_reg(fast_pool, regs); } @@ -1370,7 +1364,7 @@ static void entropy_timer(struct timer_l static void try_to_generate_entropy(void) { struct { - cycles_t cycles; + unsigned long cycles; struct timer_list timer; } stack; =20 --- a/include/linux/timex.h +++ b/include/linux/timex.h @@ -75,7 +75,7 @@ * By default we use get_cycles() for this purpose, but individual * architectures may override this in their asm/timex.h header file. */ -#define random_get_entropy() get_cycles() +#define random_get_entropy() ((unsigned long)get_cycles()) #endif =20 /* From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3539FC433F5 for ; Fri, 27 May 2022 11:44:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351762AbiE0LoB (ORCPT ); Fri, 27 May 2022 07:44:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45658 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351773AbiE0Lms (ORCPT ); Fri, 27 May 2022 07:42:48 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 871ED13C378; Fri, 27 May 2022 04:40:31 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 23EDF61CC4; Fri, 27 May 2022 
11:40:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 30F62C385A9; Fri, 27 May 2022 11:40:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651630; bh=lAC8Rg8DGs9N/DHOReGQjRb/mAJJ4BlH1Icf3YpCBYk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=XOgsfVs9oWp/5OKRefXqJGSOPUz4icYuxeenO1MhZTpozH3Sjw6Ml0iKhyaAtsQTc L71w3NIebOUBWgxWLZwx01zGKkiAbGJk+jYQ1TzWWP4UmVPeD98fC+JiJlIb+Nd89w mc2y4s59tBFFPzOwRCAARg6pgaBckLliDJfMDtSE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Eric Biggers , Eric Biggers , "Jason A. Donenfeld" Subject: [PATCH 5.17 066/111] random: document crng_fast_key_erasure() destination possibility Date: Fri, 27 May 2022 10:49:38 +0200 Message-Id: <20220527084828.878348557@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 8717627d6ac53251ee012c3c7aca392f29f38a42 upstream. This reverts 35a33ff3807d ("random: use memmove instead of memcpy for remaining 32 bytes"), which was made on a totally bogus basis. The thing it was worried about overlapping came from the stack, not from one of its arguments, as Eric pointed out. But the fact that this confusion even happened draws attention to the fact that it's a bit non-obvious that the random_data parameter can alias chacha_state, and in fact should do so when the caller can't rely on the stack being cleared in a timely manner. So this commit documents that. Reported-by: Eric Biggers Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 7 +++++++ 1 file changed, 7 insertions(+) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -318,6 +318,13 @@ static void crng_reseed(void) * the resultant ChaCha state to the user, along with the second * half of the block containing 32 bytes of random data that may * be used; random_data_len may not be greater than 32. + * + * The returned ChaCha state contains within it a copy of the old + * key value, at index 4, so the state should always be zeroed out + * immediately after using in order to maintain forward secrecy. + * If the state cannot be erased in a timely manner, then it is + * safer to set the random_data parameter to &chacha_state[4] so + * that this function overwrites it before returning. */ static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], u32 chacha_state[CHACHA_STATE_WORDS], From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6D4AC433EF for ; Fri, 27 May 2022 11:44:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351816AbiE0LoR (ORCPT ); Fri, 27 May 2022 07:44:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56850 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351743AbiE0LnX (ORCPT ); Fri, 27 May 2022 07:43:23 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 191A013CA35; Fri, 27 May 2022 04:40:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with 
cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 10E8661D29; Fri, 27 May 2022 11:40:40 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1A457C34100; Fri, 27 May 2022 11:40:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651639; bh=g9yd1PyzmAB4C6qyqGY7YgXtwOLALFzGRlSbehuBFGM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KZZGNpquLdX+M0VzOe+ZJV0/rIKQ1aKuFNwukLyasT8WBcE5DbA7cDqYOubSW5drr znGQQI9JkqrfvUQL/uoI3b5DWGhL30XeJNCXYuJeeCmpTOOpeC1ORh62dPQA4Cviii CazuBZCIWKRkFTC/JTbdxbLW3hyTuaoG4qjJ+bLI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Jason A. Donenfeld" Subject: [PATCH 5.17 067/111] random: fix sysctl documentation nits Date: Fri, 27 May 2022 10:49:39 +0200 Message-Id: <20220527084829.025720524@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 069c4ea6871c18bd368f27756e0f91ffb524a788 upstream. A semicolon was missing, and the almost-alphabetical-but-not ordering was confusing, so regroup these by category instead. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- Documentation/admin-guide/sysctl/kernel.rst | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) --- a/Documentation/admin-guide/sysctl/kernel.rst +++ b/Documentation/admin-guide/sysctl/kernel.rst @@ -1025,6 +1025,9 @@ This is a directory, with the following * ``boot_id``: a UUID generated the first time this is retrieved, and unvarying after that; =20 +* ``uuid``: a UUID generated every time this is retrieved (this can + thus be used to generate UUIDs at will); + * ``entropy_avail``: the pool's entropy count, in bits; =20 * ``poolsize``: the entropy pool size, in bits; @@ -1032,10 +1035,7 @@ This is a directory, with the following * ``urandom_min_reseed_secs``: obsolete (used to determine the minimum number of seconds between urandom pool reseeding). This file is writable for compatibility purposes, but writing to it has no effect - on any RNG behavior. - -* ``uuid``: a UUID generated every time this is retrieved (this can - thus be used to generate UUIDs at will); + on any RNG behavior; =20 * ``write_wakeup_threshold``: when the entropy count drops below this (as a number of bits), processes waiting to write to ``/dev/random`` From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3AEDDC433EF for ; Fri, 27 May 2022 11:44:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351780AbiE0Lon (ORCPT ); Fri, 27 May 2022 07:44:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58586 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351698AbiE0Ln7 (ORCPT ); Fri, 27 May 2022 07:43:59 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org 
[145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2729F131F1A; Fri, 27 May 2022 04:40:53 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 967B7B82466; Fri, 27 May 2022 11:40:52 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 07FCFC385A9; Fri, 27 May 2022 11:40:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651651; bh=IrtVp04WXrGF+u/Jx0GCro9bCp6PLkr34Fu9nMRutX8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fw8BhT0q3/ZwVMUrKEB/8zYJnN0o32gx35pO5mTUkJn2VHphXUH40AgAHNCuAbQH6 eu2PgISA4xeuTLHaCXwHrSy74BxeJfTpIb/Kd5uaOXUAUPGvkQfyRynsafIdDakBw1 vDCB5xuaLgjc0kO53Ptbefkqju5DN6P2fPScY+iI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Andrew Morton , Stafford Horne , "Jason A. Donenfeld" Subject: [PATCH 5.17 068/111] init: call time_init() before rand_initialize() Date: Fri, 27 May 2022 10:49:40 +0200 Message-Id: <20220527084829.166442186@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit fe222a6ca2d53c38433cba5d3be62a39099e708e upstream. Currently time_init() is called after rand_initialize(), but rand_initialize() makes use of the timer on various platforms, and sometimes this timer needs to be initialized by time_init() first. 
In order for random_get_entropy() to not return zero during early boot when it's potentially used as an entropy source, reverse the order of these two calls. The block doing random initialization was right before time_init() before, so changing the order shouldn't have any complicated effects. Cc: Andrew Morton Reviewed-by: Stafford Horne Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- init/main.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) --- a/init/main.c +++ b/init/main.c @@ -1035,11 +1035,13 @@ asmlinkage __visible void __init __no_sa softirq_init(); timekeeping_init(); kfence_init(); + time_init(); =20 /* * For best initial stack canary entropy, prepare it after: * - setup_arch() for any UEFI RNG entropy and boot cmdline access * - timekeeping_init() for ktime entropy used in rand_initialize() + * - time_init() for making random_get_entropy() work on some platforms * - rand_initialize() to get any arch-specific entropy like RDRAND * - add_latent_entropy() to get any latent entropy * - adding command line entropy @@ -1049,7 +1051,6 @@ asmlinkage __visible void __init __no_sa add_device_randomness(command_line, strlen(command_line)); boot_init_stack_canary(); =20 - time_init(); perf_event_init(); profile_init(); call_function_init(); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D8CDEC433EF for ; Fri, 27 May 2022 11:46:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352560AbiE0LqH (ORCPT ); Fri, 27 May 2022 07:46:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57130 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351837AbiE0LoB (ORCPT ); Fri, 27 May 2022 07:44:01 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 710B013F43C; Fri, 27 May 2022 04:41:01 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 0D6C361D22; Fri, 27 May 2022 11:41:01 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 12165C385A9; Fri, 27 May 2022 11:40:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651660; bh=IPrm7l7Y4DEnj3K/unR0h1dsLN/bo46PUM9Qs6lbRoE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FDuELd8XK78idI6nsi4egY0lO/SZJmv72FcNmInNAna4dgc3zpjVzk7091MlJlOC8 +HnunhdRAq+ZyNmh+52ZKBTIDWdcP5ogxBfD6H1tcexGKMcZq8KiFNaWA4UIztgp3b FFZvS3Ze/Pn47WfUxaxHeZp3xMWH+xRlywjpWf3U= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Arnd Bergmann , "Jason A. Donenfeld" Subject: [PATCH 5.17 069/111] ia64: define get_cycles macro for arch-override Date: Fri, 27 May 2022 10:49:41 +0200 Message-Id: <20220527084829.303759570@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 57c0900b91d8891ab43f0e6b464d059fda51d102 upstream. 
Itanium defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- arch/ia64/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) --- a/arch/ia64/include/asm/timex.h +++ b/arch/ia64/include/asm/timex.h @@ -39,6 +39,7 @@ get_cycles (void) ret = ia64_getreg(_IA64_REG_AR_ITC); return ret; } +#define get_cycles get_cycles  extern void ia64_cpu_local_tick (void); extern unsigned long long ia64_native_sched_clock (void); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C3B5AC433F5 for ; Fri, 27 May 2022 11:46:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351728AbiE0LqT (ORCPT ); Fri, 27 May 2022 07:46:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57050 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351879AbiE0LoG (ORCPT ); Fri, 27 May 2022 07:44:06 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9C08C13F91A; Fri, 27 May 2022 04:41:10 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384
(256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2535461CF0; Fri, 27 May 2022 11:41:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 33C80C385A9; Fri, 27 May 2022 11:41:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651669; bh=T0hgMG9ClkBYAoRPRZ4y4+hXqRnCKgRJ9WydT4BZvWc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=yvas9Brkr4zZPXribTun0KrWQNNXCd2AprTT/0sgsLLpueA+pdm1QCGELF+lrW9F7 3SvB7dRLJy/btHUmZPukKtvo5EEENgePSgMixcE+lauyWcCDA/nMJEzuzeEIXXBntx zQOcttOKz3nSwG7SJlACtFAIdKv/QYnDzewt7ytE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Arnd Bergmann , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Heiko Carstens , "Jason A. Donenfeld" Subject: [PATCH 5.17 070/111] s390: define get_cycles macro for arch-override Date: Fri, 27 May 2022 10:49:42 +0200 Message-Id: <20220527084829.436055525@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 2e3df523256cb9836de8441e9c791a796759bb3c upstream. S390x defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). 
Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Acked-by: Heiko Carstens Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- arch/s390/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) --- a/arch/s390/include/asm/timex.h +++ b/arch/s390/include/asm/timex.h @@ -201,6 +201,7 @@ static inline cycles_t get_cycles(void) { return (cycles_t) get_tod_clock() >> 2; } +#define get_cycles get_cycles =20 int get_phys_clock(unsigned long *clock); void init_cpu_timer(void); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D7D5C433F5 for ; Fri, 27 May 2022 11:49:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230377AbiE0LtG (ORCPT ); Fri, 27 May 2022 07:49:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57064 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351895AbiE0LpD (ORCPT ); Fri, 27 May 2022 07:45:03 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8664A12FEF1; Fri, 27 May 2022 04:41:22 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E688761CB7; Fri, 27 May 2022 11:41:21 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 01207C385A9; Fri, 27 May 2022 11:41:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651681; bh=S0GE2QBe3vQH8SstAe6gAuiluPqa7/LtbKcsMUsmrik=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mrJhyAqqbHcxsgY8nfx5mdqciGBrDuuP+26bG7sABZobqCs3GPkQdksVsWsuwsroR 6sURFgeoba76yHZfMsf/eTyAFtqMU1PronOsaOmqlxF/XyK+KbgS5fKdrJ7Y8V0gXH zf3PSjlT5GIECwGaO6q6l/FGSBwUWGORGWn/CX7Y= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Arnd Bergmann , Helge Deller , "Jason A. Donenfeld" Subject: [PATCH 5.17 071/111] parisc: define get_cycles macro for arch-override Date: Fri, 27 May 2022 10:49:43 +0200 Message-Id: <20220527084829.568165652@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 8865bbe6ba1120e67f72201b7003a16202cd42be upstream. PA-RISC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Helge Deller Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- arch/parisc/include/asm/timex.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) --- a/arch/parisc/include/asm/timex.h +++ b/arch/parisc/include/asm/timex.h @@ -13,9 +13,10 @@ =20 typedef unsigned long cycles_t; =20 -static inline cycles_t get_cycles (void) +static inline cycles_t get_cycles(void) { return mfctl(16); } +#define get_cycles get_cycles =20 #endif From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B88AC433EF for ; Fri, 27 May 2022 11:51:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352111AbiE0LuE (ORCPT ); Fri, 27 May 2022 07:50:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58518 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352624AbiE0LqL (ORCPT ); Fri, 27 May 2022 07:46:11 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5B1EA149DB8; Fri, 27 May 2022 04:42:57 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 98A7E61D50; Fri, 27 May 2022 11:42:56 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A4DF8C385A9; Fri, 27 May 2022 11:42:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651776; bh=8x2PXet/gVQG8oCK7x0zUPajkdDHPhoto6H8ynW72Sw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=1olzxF7r5w9LQAqBsgycrf7g7U5C7NPMR3z0NIgjRncos8t/DVm2D7bmasOpdsbkC 
DJ31FK51VLaDWNktBE6PP5XDTJp0d+0sVUZL75ScMytrre1hPpggaD7PKSKNwLkKYS E9WjU0qd0lDc66uAiJ/OF+mnDICgjDR9qQbHrE5A= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Arnd Bergmann , Richard Henderson , Ivan Kokshaysky , Matt Turner , "Jason A. Donenfeld" Subject: [PATCH 5.17 072/111] alpha: define get_cycles macro for arch-override Date: Fri, 27 May 2022 10:49:44 +0200 Message-Id: <20220527084829.685787096@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 1097710bc9660e1e588cf2186a35db3d95c4d258 upstream. Alpha defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Richard Henderson Cc: Ivan Kokshaysky Acked-by: Matt Turner Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- arch/alpha/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) --- a/arch/alpha/include/asm/timex.h +++ b/arch/alpha/include/asm/timex.h @@ -28,5 +28,6 @@ static inline cycles_t get_cycles (void) __asm__ __volatile__ ("rpcc %0" : "=r"(ret)); return ret; } +#define get_cycles get_cycles  #endif fo { POOL_BYTES = POOL_WORDS * sizeof(u32), POOL_BITS = POOL_BYTES * 8, POOL_BITSHIFT = ilog2(POOL_WORDS) + 5, - POOL_FRACBITS = POOL_WORDS << (ENTROPY_SHIFT + 5), + POOL_FRACBITS = POOL_WORDS << (POOL_ENTROPY_SHIFT + 5),  /* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */ POOL_TAP1 = 104, @@ -650,7 +650,7 @@ static void process_random_ready_list(vo static void credit_entropy_bits(int nbits) { int entropy_count, entropy_bits, orig; - int nfrac = nbits << ENTROPY_SHIFT; + int nfrac = nbits << POOL_ENTROPY_SHIFT;  if (!nbits) return; @@ -683,7 +683,7 @@ retry: * turns no matter how large nbits is.
*/ int pnfrac = nfrac; - const int s = POOL_BITSHIFT + ENTROPY_SHIFT + 2; + const int s = POOL_BITSHIFT + POOL_ENTROPY_SHIFT + 2; /* The +2 corresponds to the /4 in the denominator */  do { @@ -704,9 +704,9 @@ retry: if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig) goto retry;  - trace_credit_entropy_bits(nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_); + trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RET_IP_);  - entropy_bits = entropy_count >> ENTROPY_SHIFT; + entropy_bits = entropy_count >> POOL_ENTROPY_SHIFT; if (crng_init < 2 && entropy_bits >= 128) crng_reseed(&primary_crng, true); } @@ -1187,7 +1187,7 @@ void add_input_randomness(unsigned int t last_value = value; add_timer_randomness(&input_timer_state, (type << 4) ^ code ^ (code >> 4) ^ value); - trace_add_input_randomness(ENTROPY_BITS()); + trace_add_input_randomness(POOL_ENTROPY_BITS()); } EXPORT_SYMBOL_GPL(add_input_randomness);  @@ -1286,7 +1286,7 @@ void add_disk_randomness(struct gendisk return; /* first major is 1, so we get >= 0x200 here */ add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); - trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS()); + trace_add_disk_randomness(disk_devt(disk), POOL_ENTROPY_BITS()); } EXPORT_SYMBOL_GPL(add_disk_randomness); #endif @@ -1313,7 +1313,7 @@ retry: entropy_count = orig = READ_ONCE(input_pool.entropy_count); ibytes = nbytes; /* never pull more than available */ - have_bytes = entropy_count >> (ENTROPY_SHIFT + 3); + have_bytes = entropy_count >> (POOL_ENTROPY_SHIFT + 3);  if (have_bytes < 0) have_bytes = 0; @@ -1325,7 +1325,7 @@ retry: pr_warn("negative entropy count: count %d\n", entropy_count); entropy_count = 0; } - nfrac = ibytes << (ENTROPY_SHIFT + 3); + nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3); if ((size_t) entropy_count > nfrac) entropy_count -= nfrac; else @@ -1335,7 +1335,7 @@ retry: goto retry;  trace_debit_entropy(8 * ibytes); - 
if (ibytes && ENTROPY_BITS() < random_write_wakeup_bits) { + if (ibytes && POOL_ENTROPY_BITS() < random_write_wakeup_bits) { wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); } @@ -1423,7 +1423,7 @@ static ssize_t _extract_entropy(void *bu */ static ssize_t extract_entropy(void *buf, size_t nbytes, int min) { - trace_extract_entropy(nbytes, ENTROPY_BITS(), _RET_IP_); + trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_); nbytes = account(nbytes, min); return _extract_entropy(buf, nbytes); } @@ -1749,9 +1749,9 @@ urandom_read_nowarn(struct file *file, c { int ret;  - nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3)); + nbytes = min_t(size_t, nbytes, INT_MAX >> (POOL_ENTROPY_SHIFT + 3)); ret = extract_crng_user(buf, nbytes); - trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS()); + trace_urandom_read(8 * nbytes, 0, POOL_ENTROPY_BITS()); return ret; }  @@ -1791,7 +1791,7 @@ random_poll(struct file *file, poll_tabl mask = 0; if (crng_ready()) mask |= EPOLLIN | EPOLLRDNORM; - if (ENTROPY_BITS() < random_write_wakeup_bits) + if (POOL_ENTROPY_BITS() < random_write_wakeup_bits) mask |= EPOLLOUT | EPOLLWRNORM; return mask; } @@ -1847,7 +1847,7 @@ static long random_ioctl(struct file *f, switch (cmd) { case RNDGETENTCNT: /* inherently racy, no point locking */ - ent_count = ENTROPY_BITS(); + ent_count = POOL_ENTROPY_BITS(); if (put_user(ent_count, p)) return -EFAULT; return 0; @@ -2008,7 +2008,7 @@ static int proc_do_entropy(struct ctl_ta struct ctl_table fake_table; int entropy_count;  - entropy_count = *(int *)table->data >> ENTROPY_SHIFT; + entropy_count = *(int *)table->data >> POOL_ENTROPY_SHIFT;  fake_table.data = &entropy_count; fake_table.maxlen = sizeof(entropy_count); @@ -2227,7 +2227,7 @@ void add_hwgenerator_randomness(const ch */ wait_event_interruptible(random_write_wait, !system_wq || kthread_should_stop() || - 
POOL_ENTROPY_BITS() <= random_write_wakeup_bits); mix_pool_bytes(buffer, count); credit_entropy_bits(entropy); } From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E080BC43217 for ; Fri, 27 May 2022 11:51:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352334AbiE0LuU (ORCPT ); Fri, 27 May 2022 07:50:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57120 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241735AbiE0LrJ (ORCPT ); Fri, 27 May 2022 07:47:09 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A83DF13C084; Fri, 27 May 2022 04:43:12 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id A19A961D46; Fri, 27 May 2022 11:43:05 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id ABE40C385A9; Fri, 27 May 2022 11:43:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651785; bh=N4I2vRnlz5HhcFVb3tpYEZeesn0q49B0+WAvsvwec5E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TNwRBDbJWhFPrm25OpprEfkoFfh8nWoifYWUVMFsIo5/LcToHhXKl+rLQtmlsTbMH qc5ZZrjYoSJYVd3lNgIPi0QVU7T9fz/koBPh1dpo4Wda6QQAJLhS2CEcHRpT90gqpM SN76DhnVmXuIENPHb0ILUyuaq6QkSl/6GQa/y/V0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Arnd Bergmann , Benjamin Herrenschmidt , Paul Mackerras , Michael Ellerman , "Jason A. 
Donenfeld" Subject: [PATCH 5.17 073/111] powerpc: define get_cycles macro for arch-override Date: Fri, 27 May 2022 10:49:45 +0200 Message-Id: <20220527084829.811811836@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 408835832158df0357e18e96da7f2d1ed6b80e7f upstream. PowerPC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Acked-by: Michael Ellerman Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- arch/powerpc/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) --- a/arch/powerpc/include/asm/timex.h +++ b/arch/powerpc/include/asm/timex.h @@ -19,6 +19,7 @@ static inline cycles_t get_cycles(void) { return mftb(); } +#define get_cycles get_cycles =20 #endif /* __KERNEL__ */ #endif /* _ASM_POWERPC_TIMEX_H */ From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B07B8C433EF for ; Fri, 27 May 2022 11:47:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351863AbiE0LrB (ORCPT ); Fri, 27 May 2022 07:47:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58544 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352067AbiE0LpU (ORCPT ); Fri, 27 May 2022 07:45:20 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E0199140401; Fri, 27 May 2022 04:41:48 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7D8A661CE7; Fri, 27 May 2022 11:41:48 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8C07BC34100; Fri, 27 May 2022 11:41:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651707; bh=Ca/av4HY0exhaMWqnerAnWuF1QOI4lsHmT1pw78Fx1E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SSbexPcXX9bp2INgPMtzhyo+NeLBQTFI8bEq0iH8G4DVL51riyN5vCMC6Whg4Y37h 7VF4zX0l30BgbjsUM57yHze2vk8wrA/nbjVBPYzZtzCEc4kNzU0W696qODjtO03bsg 
DO2KY6dlbewxPDwMpEH/WxHO1oiVp3GD424gF0PA= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , "Jason A. Donenfeld" , Arnd Bergmann , Theodore Tso Subject: [PATCH 5.17 074/111] timekeeping: Add raw clock fallback for random_get_entropy() Date: Fri, 27 May 2022 10:49:46 +0200 Message-Id: <20220527084829.962361074@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 1366992e16bddd5e2d9a561687f367f9f802e2e4 upstream. The addition of random_get_entropy_fallback() provides access to whichever time source has the highest frequency, which is useful for gathering entropy on platforms without available cycle counters. It's not necessarily as good as being able to quickly access a cycle counter that the CPU has, but it's still something, even when it falls back to being jiffies-based. In the event that a given arch does not define get_cycles(), falling back to the get_cycles() default implementation that returns 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Finally, since random_get_entropy_fallback() is used during extremely early boot when randomizing freelists in mm_init(), it can be called before timekeeping has been initialized. 
In that case there really is nothing we can do; jiffies hasn't even started ticking yet. So just give up and return 0.

Suggested-by: Thomas Gleixner
Signed-off-by: Jason A. Donenfeld
Reviewed-by: Thomas Gleixner
Cc: Arnd Bergmann
Cc: Theodore Ts'o
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 include/linux/timex.h     |  8 ++++++++
 kernel/time/timekeeping.c | 15 +++++++++++++++
 2 files changed, 23 insertions(+)

--- a/include/linux/timex.h
+++ b/include/linux/timex.h
@@ -62,6 +62,8 @@
 #include
 #include
 
+unsigned long random_get_entropy_fallback(void);
+
 #include
 
 #ifndef random_get_entropy
@@ -74,8 +76,14 @@
 *
 * By default we use get_cycles() for this purpose, but individual
 * architectures may override this in their asm/timex.h header file.
+ * If a given arch does not have get_cycles(), then we fallback to
+ * using random_get_entropy_fallback().
 */
+#ifdef get_cycles
 #define random_get_entropy() ((unsigned long)get_cycles())
+#else
+#define random_get_entropy() random_get_entropy_fallback()
+#endif
 #endif
 
 /*
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2380,6 +2381,20 @@ static int timekeeping_validate_timex(co
 	return 0;
 }
 
+/**
+ * random_get_entropy_fallback - Returns the raw clock source value,
+ * used by random.c for platforms with no valid random_get_entropy().
+ */
+unsigned long random_get_entropy_fallback(void)
+{
+	struct tk_read_base *tkr = &tk_core.timekeeper.tkr_mono;
+	struct clocksource *clock = READ_ONCE(tkr->clock);
+
+	if (unlikely(timekeeping_suspended || !clock))
+		return 0;
+	return clock->read(clock);
+}
+EXPORT_SYMBOL_GPL(random_get_entropy_fallback);
 
 /**
  * do_adjtimex() - Accessor function to NTP __do_adjtimex function
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Arnd Bergmann, Geert Uytterhoeven, "Jason A. Donenfeld"
Subject: [PATCH 5.17 075/111] m68k: use fallback for random_get_entropy() instead of zero
Date: Fri, 27 May 2022 10:49:47 +0200
Message-Id: <20220527084830.091696429@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 0f392c95391f2d708b12971a07edaa7973f9eece upstream.

In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time.

Cc: Thomas Gleixner
Cc: Arnd Bergmann
Acked-by: Geert Uytterhoeven
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M.
Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 arch/m68k/include/asm/timex.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/m68k/include/asm/timex.h
+++ b/arch/m68k/include/asm/timex.h
@@ -35,7 +35,7 @@ static inline unsigned long random_get_e
 {
 	if (mach_random_get_entropy)
 		return mach_random_get_entropy();
-	return 0;
+	return random_get_entropy_fallback();
 }
 #define random_get_entropy random_get_entropy
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Arnd Bergmann, Paul Walmsley, Palmer Dabbelt, "Jason A. Donenfeld"
Subject: [PATCH 5.17 076/111] riscv: use fallback for random_get_entropy() instead of zero
Date: Fri, 27 May 2022 10:49:48 +0200
Message-Id: <20220527084830.218540795@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 6d01238623faa9425f820353d2066baf6c9dc872 upstream.

In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time.

Cc: Thomas Gleixner
Cc: Arnd Bergmann
Cc: Paul Walmsley
Acked-by: Palmer Dabbelt
Reviewed-by: Palmer Dabbelt
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M.
Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 arch/riscv/include/asm/timex.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/riscv/include/asm/timex.h
+++ b/arch/riscv/include/asm/timex.h
@@ -41,7 +41,7 @@ static inline u32 get_cycles_hi(void)
 static inline unsigned long random_get_entropy(void)
 {
 	if (unlikely(clint_time_val == NULL))
-		return 0;
+		return random_get_entropy_fallback();
 	return get_cycles();
 }
 #define random_get_entropy() random_get_entropy()
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Arnd Bergmann, "Maciej W. Rozycki", Thomas Bogendoerfer, "Jason A. Donenfeld"
Subject: [PATCH 5.17 077/111] mips: use fallback for random_get_entropy() instead of just c0 random
Date: Fri, 27 May 2022 10:49:49 +0200
Message-Id: <20220527084830.372270403@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 1c99c6a7c3c599a68321b01b9ec243215ede5a68 upstream.

For situations in which we don't have a c0 counter register available, we've been falling back to reading the c0 "random" register, which is usually bounded by the amount of TLB entries and changes every other cycle or so. This means it wraps extremely often.

We can do better by combining this fast-changing counter with a potentially slower-changing counter from random_get_entropy_fallback() in the more significant bits. This commit combines the two, taking into account that the changing bits are in a different bit position depending on the CPU model.

In addition, we previously were falling back to 0 for ancient CPUs that Linux does not support anyway; remove that dead path entirely.

Cc: Thomas Gleixner
Cc: Arnd Bergmann
Tested-by: Maciej W. Rozycki
Acked-by: Thomas Bogendoerfer
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M.
Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 arch/mips/include/asm/timex.h | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

--- a/arch/mips/include/asm/timex.h
+++ b/arch/mips/include/asm/timex.h
@@ -76,25 +76,24 @@ static inline cycles_t get_cycles(void)
 	else
 		return 0;	/* no usable counter */
 }
+#define get_cycles get_cycles
 
 /*
  * Like get_cycles - but where c0_count is not available we desperately
  * use c0_random in an attempt to get at least a little bit of entropy.
- *
- * R6000 and R6000A neither have a count register nor a random register.
- * That leaves no entropy source in the CPU itself.
  */
 static inline unsigned long random_get_entropy(void)
 {
-	unsigned int prid = read_c0_prid();
-	unsigned int imp = prid & PRID_IMP_MASK;
+	unsigned int c0_random;
 
-	if (can_use_mips_counter(prid))
+	if (can_use_mips_counter(read_c0_prid()))
 		return read_c0_count();
-	else if (likely(imp != PRID_IMP_R6000 && imp != PRID_IMP_R6000A))
-		return read_c0_random();
+
+	if (cpu_has_3kex)
+		c0_random = (read_c0_random() >> 8) & 0x3f;
 	else
-		return 0;	/* no usable register */
+		c0_random = read_c0_random() & 0x3f;
+	return (random_get_entropy_fallback() << 6) | (0x3f - c0_random);
 }
 #define random_get_entropy random_get_entropy
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Arnd Bergmann, "Russell King (Oracle)", "Jason A. Donenfeld"
Subject: [PATCH 5.17 078/111] arm: use fallback for random_get_entropy() instead of zero
Date: Fri, 27 May 2022 10:49:50 +0200
Message-Id: <20220527084830.509591474@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit ff8a8f59c99f6a7c656387addc4d9f2247d75077 upstream.

In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do.
Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time.

Cc: Thomas Gleixner
Cc: Arnd Bergmann
Reviewed-by: Russell King (Oracle)
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 arch/arm/include/asm/timex.h | 1 +
 1 file changed, 1 insertion(+)

--- a/arch/arm/include/asm/timex.h
+++ b/arch/arm/include/asm/timex.h
@@ -11,5 +11,6 @@
 
 typedef unsigned long cycles_t;
 #define get_cycles() ({ cycles_t c; read_current_timer(&c) ? 0 : c; })
+#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback())
 
 #endif
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Arnd Bergmann, Dinh Nguyen, "Jason A. Donenfeld"
Subject: [PATCH 5.17 079/111] nios2: use fallback for random_get_entropy() instead of zero
Date: Fri, 27 May 2022 10:49:51 +0200
Message-Id: <20220527084830.661733994@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit c04e72700f2293013dab40208e809369378f224c upstream.

In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time.
Cc: Thomas Gleixner
Cc: Arnd Bergmann
Acked-by: Dinh Nguyen
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 arch/nios2/include/asm/timex.h | 3 +++
 1 file changed, 3 insertions(+)

--- a/arch/nios2/include/asm/timex.h
+++ b/arch/nios2/include/asm/timex.h
@@ -8,5 +8,8 @@
 typedef unsigned long cycles_t;
 
 extern cycles_t get_cycles(void);
+#define get_cycles get_cycles
+
+#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback())
 
 #endif
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Jason A. Donenfeld", Thomas Gleixner, Arnd Bergmann, Borislav Petkov, x86@kernel.org
Subject: [PATCH 5.17 080/111] x86/tsc: Use fallback for random_get_entropy() instead of zero
Date: Fri, 27 May 2022 10:49:52 +0200
Message-Id: <20220527084830.796831598@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 3bd4abc07a267e6a8b33d7f8717136e18f921c53 upstream.

In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is suboptimal. Instead, fallback to calling random_get_entropy_fallback(), which isn't extremely high precision or guaranteed to be entropic, but is certainly better than returning zero all the time.

If CONFIG_X86_TSC=n, then it's possible for the kernel to run on systems without RDTSC, such as 486 and certain 586, so the fallback code is only required for that case. As well, fix up both the new function and the get_cycles() function from which it was derived to use cpu_feature_enabled() rather than boot_cpu_has(), and use !IS_ENABLED() instead of #ifndef.

Signed-off-by: Jason A. Donenfeld
Reviewed-by: Thomas Gleixner
Cc: Thomas Gleixner
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: x86@kernel.org
Signed-off-by: Jason A.
Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 arch/x86/include/asm/timex.h | 9 +++++++++
 arch/x86/include/asm/tsc.h   | 7 +++----
 2 files changed, 12 insertions(+), 4 deletions(-)

--- a/arch/x86/include/asm/timex.h
+++ b/arch/x86/include/asm/timex.h
@@ -5,6 +5,15 @@
 #include
 #include
 
+static inline unsigned long random_get_entropy(void)
+{
+	if (!IS_ENABLED(CONFIG_X86_TSC) &&
+	    !cpu_feature_enabled(X86_FEATURE_TSC))
+		return random_get_entropy_fallback();
+	return rdtsc();
+}
+#define random_get_entropy random_get_entropy
+
 /* Assume we use the PIT time source for the clock tick */
 #define CLOCK_TICK_RATE	PIT_TICK_RATE
 
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -20,13 +20,12 @@ extern void disable_TSC(void);
 
 static inline cycles_t get_cycles(void)
 {
-#ifndef CONFIG_X86_TSC
-	if (!boot_cpu_has(X86_FEATURE_TSC))
+	if (!IS_ENABLED(CONFIG_X86_TSC) &&
+	    !cpu_feature_enabled(X86_FEATURE_TSC))
 		return 0;
-#endif
-
 	return rdtsc();
 }
+#define get_cycles get_cycles

 extern struct system_counterval_t convert_art_to_tsc(u64 art);
 extern struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns);
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Arnd Bergmann, Richard Weinberger, Anton Ivanov, Johannes Berg, "Jason A. Donenfeld"
Subject: [PATCH 5.17 081/111] um: use fallback for random_get_entropy() instead of zero
Date: Fri, 27 May 2022 10:49:53 +0200
Message-Id: <20220527084830.934991691@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 9f13fb0cd11ed2327abff69f6501a2c124c88b5a upstream.

In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do.
Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time.

This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here.

Cc: Thomas Gleixner
Cc: Arnd Bergmann
Cc: Richard Weinberger
Cc: Anton Ivanov
Acked-by: Johannes Berg
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 arch/um/include/asm/timex.h | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

--- a/arch/um/include/asm/timex.h
+++ b/arch/um/include/asm/timex.h
@@ -2,13 +2,8 @@
 #ifndef __UM_TIMEX_H
 #define __UM_TIMEX_H
 
-typedef unsigned long cycles_t;
-
-static inline cycles_t get_cycles (void)
-{
-	return 0;
-}
-
 #define CLOCK_TICK_RATE (HZ)
 
+#include
+
 #endif
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Arnd Bergmann, "David S. Miller", "Jason A. Donenfeld"
Subject: [PATCH 5.17 082/111] sparc: use fallback for random_get_entropy() instead of zero
Date: Fri, 27 May 2022 10:49:54 +0200
Message-Id: <20220527084831.068127467@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit ac9756c79797bb98972736b13cfb239fd2cffb79 upstream.

In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually.
It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: David S. Miller Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- arch/sparc/include/asm/timex_32.h | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) --- a/arch/sparc/include/asm/timex_32.h +++ b/arch/sparc/include/asm/timex_32.h @@ -9,8 +9,6 @@ =20 #define CLOCK_TICK_RATE 1193180 /* Underlying HZ */ =20 -/* XXX Maybe do something better at some point... -DaveM */ -typedef unsigned long cycles_t; -#define get_cycles() (0) +#include =20 #endif From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 988DAC433F5 for ; Fri, 27 May 2022 11:54:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352180AbiE0LyH (ORCPT ); Fri, 27 May 2022 07:54:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40386 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352288AbiE0LuR (ORCPT ); Fri, 27 May 2022 07:50:17 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [IPv6:2604:1380:40e1:4800::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D8D614D79B; Fri, 27 May 2022 04:44:25 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id D7BA7CE250E; Fri, 27 May 2022 11:44:23 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E3322C385A9; Fri, 27 May 2022 11:44:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651862; bh=o4soIyc1vQYJQ67dGq9GBUeAqDilcx63IcfzN7572IA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=yG+DcucXBKGrDWPIAjjVyvdXH+smM+xEQrO2vB8kwgM1OSJl+k4/DLgaf531B1Sg8 07cLUXITHTkxrCHRENQMWJYLpN8Qug/yFijOv/Xnl4oJw5pLGBmugHTe12S8InbWDo Tg104bmm566ENnnRyiql7bkEYjJxEbdLz03NhLr0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Arnd Bergmann , Max Filippov , "Jason A. Donenfeld" Subject: [PATCH 5.17 083/111] xtensa: use fallback for random_get_entropy() instead of zero Date: Fri, 27 May 2022 10:49:55 +0200 Message-Id: <20220527084831.238456575@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit e10e2f58030c5c211d49042a8c2a1b93d40b2ffb upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. 
This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Max Filippov Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- arch/xtensa/include/asm/timex.h | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) --- a/arch/xtensa/include/asm/timex.h +++ b/arch/xtensa/include/asm/timex.h @@ -29,10 +29,6 @@ =20 extern unsigned long ccount_freq; =20 -typedef unsigned long long cycles_t; - -#define get_cycles() (0) - void local_timer_setup(unsigned cpu); =20 /* @@ -59,4 +55,6 @@ static inline void set_linux_timer (unsi xtensa_set_sr(ccompare, SREG_CCOMPARE + LINUX_TIMER); } =20 +#include + #endif /* _XTENSA_TIMEX_H */ From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 43C3FC433F5 for ; Fri, 27 May 2022 11:54:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352060AbiE0LyZ (ORCPT ); Fri, 27 May 2022 07:54:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352383AbiE0LuW (ORCPT ); Fri, 27 May 2022 07:50:22 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AFF1314E2DB; Fri, 27 May 2022 04:44:32 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org 
(Postfix) with ESMTPS id B2E8561D54; Fri, 27 May 2022 11:44:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B53F1C385A9; Fri, 27 May 2022 11:44:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651871; bh=0xUvJZQxRrzvQ7jg/0g8r+Bpw7u9XXvXqVGdMoishvk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Oi+rDkwscUM23OlGWX13un3QN34d/9ozNpPyk4N8nM1NmYyjJiGtksib8W3v1zK8P aDoGkpUs2Tq0eDdreqJIOhzQWVXDvCFNfjVkYphOuAmDMOje6Hgkpf7blC93fkqnjJ AdwQXyzyrFSt6qYCFJb/L9wlzk9ark3wmeZ4/HS4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , "Jason A. Donenfeld" Subject: [PATCH 5.17 084/111] random: insist on random_get_entropy() existing in order to simplify Date: Fri, 27 May 2022 10:49:56 +0200 Message-Id: <20220527084831.400243355@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 4b758eda851eb9336ca86a0041a4d3da55f66511 upstream. All platforms are now guaranteed to provide some value for random_get_entropy(). In case some bug leads to this not being so, we print a warning, because that indicates that something is really very wrong (and likely other things are impacted too). This should never be hit, but it's a good and cheap way of finding out if something ever is problematic. Since we now have viable fallback code for random_get_entropy() on all platforms, which is, in the worst case, not worse than jiffies, we can count on getting the best possible value out of it. That means there's no longer a use for using jiffies as entropy input. 
It also means we no longer have a reason for doing the round-robin register flow in the IRQ handler, which was always of fairly dubious value. Instead we can greatly simplify the IRQ handler inputs and also unify the construction between 64-bits and 32-bits. We now collect the cycle counter and the return address, since those are the two things that matter. Because the return address and the irq number are likely related, to the extent we mix in the irq number, we can just xor it into the top unchanging bytes of the return address, rather than the bottom changing bytes of the cycle counter as before. Then, we can do a fixed 2 rounds of SipHash/HSipHash. Finally, we use the same construction of hashing only half of the [H]SipHash state on 32-bit and 64-bit. We're not actually discarding any entropy, since that entropy is carried through until the next time. And more importantly, it lets us do the same sponge-like construction everywhere. Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 86 +++++++++++++++------------------------------= ----- 1 file changed, 26 insertions(+), 60 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1020,15 +1020,14 @@ int __init rand_initialize(void) */ void add_device_randomness(const void *buf, size_t size) { - unsigned long cycles =3D random_get_entropy(); - unsigned long flags, now =3D jiffies; + unsigned long entropy =3D random_get_entropy(); + unsigned long flags; =20 if (crng_init =3D=3D 0 && size) crng_pre_init_inject(buf, size, false); =20 spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(&cycles, sizeof(cycles)); - _mix_pool_bytes(&now, sizeof(now)); + _mix_pool_bytes(&entropy, sizeof(entropy)); _mix_pool_bytes(buf, size); spin_unlock_irqrestore(&input_pool.lock, flags); } @@ -1051,12 +1050,11 @@ struct timer_rand_state { */ static void add_timer_randomness(struct timer_rand_state *state, unsigned = int num) { - unsigned long cycles =3D random_get_entropy(), now =3D jiffies, flags; + unsigned long entropy =3D random_get_entropy(), now =3D jiffies, flags; long delta, delta2, delta3; =20 spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(&cycles, sizeof(cycles)); - _mix_pool_bytes(&now, sizeof(now)); + _mix_pool_bytes(&entropy, sizeof(entropy)); _mix_pool_bytes(&num, sizeof(num)); spin_unlock_irqrestore(&input_pool.lock, flags); =20 @@ -1184,7 +1182,6 @@ struct fast_pool { unsigned long pool[4]; unsigned long last; unsigned int count; - u16 reg_idx; }; =20 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) =3D { @@ -1202,13 +1199,13 @@ static DEFINE_PER_CPU(struct fast_pool, * This is [Half]SipHash-1-x, starting from an empty key. Because * the key is fixed, it assumes that its inputs are non-malicious, * and therefore this has no security on its own. 
s represents the - * 128 or 256-bit SipHash state, while v represents a 128-bit input. + * four-word SipHash state, while v represents a two-word input. */ -static void fast_mix(unsigned long s[4], const unsigned long *v) +static void fast_mix(unsigned long s[4], const unsigned long v[2]) { size_t i; =20 - for (i =3D 0; i < 16 / sizeof(long); ++i) { + for (i =3D 0; i < 2; ++i) { s[3] ^=3D v[i]; #ifdef CONFIG_64BIT s[0] +=3D s[1]; s[1] =3D rol64(s[1], 13); s[1] ^=3D s[0]; s[0] =3D rol64= (s[0], 32); @@ -1248,33 +1245,17 @@ int random_online_cpu(unsigned int cpu) } #endif =20 -static unsigned long get_reg(struct fast_pool *f, struct pt_regs *regs) -{ - unsigned long *ptr =3D (unsigned long *)regs; - unsigned int idx; - - if (regs =3D=3D NULL) - return 0; - idx =3D READ_ONCE(f->reg_idx); - if (idx >=3D sizeof(struct pt_regs) / sizeof(unsigned long)) - idx =3D 0; - ptr +=3D idx++; - WRITE_ONCE(f->reg_idx, idx); - return *ptr; -} - static void mix_interrupt_randomness(struct work_struct *work) { struct fast_pool *fast_pool =3D container_of(work, struct fast_pool, mix); /* - * The size of the copied stack pool is explicitly 16 bytes so that we - * tax mix_pool_byte()'s compression function the same amount on all - * platforms. This means on 64-bit we copy half the pool into this, - * while on 32-bit we copy all of it. The entropy is supposed to be - * sufficiently dispersed between bits that in the sponge-like - * half case, on average we don't wind up "losing" some. + * The size of the copied stack pool is explicitly 2 longs so that we + * only ever ingest half of the siphash output each time, retaining + * the other half as the next "key" that carries over. The entropy is + * supposed to be sufficiently dispersed between bits so on average + * we don't wind up "losing" some. */ - u8 pool[16]; + unsigned long pool[2]; =20 /* Check to see if we're running on the wrong CPU due to hotplug. 
*/ local_irq_disable(); @@ -1306,36 +1287,21 @@ static void mix_interrupt_randomness(str void add_interrupt_randomness(int irq) { enum { MIX_INFLIGHT =3D 1U << 31 }; - unsigned long cycles =3D random_get_entropy(), now =3D jiffies; + unsigned long entropy =3D random_get_entropy(); struct fast_pool *fast_pool =3D this_cpu_ptr(&irq_randomness); struct pt_regs *regs =3D get_irq_regs(); unsigned int new_count; - union { - u32 u32[4]; - u64 u64[2]; - unsigned long longs[16 / sizeof(long)]; - } irq_data; - - if (cycles =3D=3D 0) - cycles =3D get_reg(fast_pool, regs); - - if (sizeof(unsigned long) =3D=3D 8) { - irq_data.u64[0] =3D cycles ^ rol64(now, 32) ^ irq; - irq_data.u64[1] =3D regs ? instruction_pointer(regs) : _RET_IP_; - } else { - irq_data.u32[0] =3D cycles ^ irq; - irq_data.u32[1] =3D now; - irq_data.u32[2] =3D regs ? instruction_pointer(regs) : _RET_IP_; - irq_data.u32[3] =3D get_reg(fast_pool, regs); - } =20 - fast_mix(fast_pool->pool, irq_data.longs); + fast_mix(fast_pool->pool, (unsigned long[2]){ + entropy, + (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq) + }); new_count =3D ++fast_pool->count; =20 if (new_count & MIX_INFLIGHT) return; =20 - if (new_count < 64 && (!time_after(now, fast_pool->last + HZ) || + if (new_count < 64 && (!time_is_before_jiffies(fast_pool->last + HZ) || unlikely(crng_init =3D=3D 0))) return; =20 @@ -1371,28 +1337,28 @@ static void entropy_timer(struct timer_l static void try_to_generate_entropy(void) { struct { - unsigned long cycles; + unsigned long entropy; struct timer_list timer; } stack; =20 - stack.cycles =3D random_get_entropy(); + stack.entropy =3D random_get_entropy(); =20 /* Slow counter - or none. 
Don't even bother */ - if (stack.cycles =3D=3D random_get_entropy()) + if (stack.entropy =3D=3D random_get_entropy()) return; =20 timer_setup_on_stack(&stack.timer, entropy_timer, 0); while (!crng_ready() && !signal_pending(current)) { if (!timer_pending(&stack.timer)) mod_timer(&stack.timer, jiffies + 1); - mix_pool_bytes(&stack.cycles, sizeof(stack.cycles)); + mix_pool_bytes(&stack.entropy, sizeof(stack.entropy)); schedule(); - stack.cycles =3D random_get_entropy(); + stack.entropy =3D random_get_entropy(); } =20 del_timer_sync(&stack.timer); destroy_timer_on_stack(&stack.timer); - mix_pool_bytes(&stack.cycles, sizeof(stack.cycles)); + mix_pool_bytes(&stack.entropy, sizeof(stack.entropy)); } From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04BAAC433F5 for ; Fri, 27 May 2022 11:54:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345143AbiE0Lyi (ORCPT ); Fri, 27 May 2022 07:54:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41104 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352482AbiE0Lua (ORCPT ); Fri, 27 May 2022 07:50:30 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BDD6013F929; Fri, 27 May 2022 04:44:42 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 64838B824D2; Fri, 27 May 2022 11:44:41 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CE006C385A9; Fri, 27 May 2022 11:44:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=linuxfoundation.org; s=korg; t=1653651880; bh=F6HYdTu8y5D04U97NP8ICb8h5IlyZ/Zzy3gf8MAugFE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RPc6XXyHGlL/IfGYYcTmgyq9RkoxACUvpmidgNE2trGTiYenAaW+ZwLjyUPVxRY/g 52G7VfQdAP1xokfNrIKV42guvYZlbms79WbD/4+1EEyEBHIBKTM8ktV/4vz56AH0Fg IbIj9odeS52XIXCT8Db2Aa11pgP7vIh0BIpfY/Ek= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 085/111] random: do not use batches when !crng_ready() Date: Fri, 27 May 2022 10:49:57 +0200 Message-Id: <20220527084831.533585046@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit cbe89e5a375a51bbb952929b93fa973416fea74e upstream. It's too hard to keep the batches synchronized, and pointless anyway, since in !crng_ready(), we're updating the base_crng key really often, where batching only hurts. So instead, if the crng isn't ready, just call into get_random_bytes(). At this stage nothing is performance critical anyhow. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -465,10 +465,8 @@ static void crng_pre_init_inject(const v =20 if (account) { crng_init_cnt +=3D min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_c= nt); - if (crng_init_cnt >=3D CRNG_INIT_CNT_THRESH) { - ++base_crng.generation; + if (crng_init_cnt >=3D CRNG_INIT_CNT_THRESH) crng_init =3D 1; - } } =20 spin_unlock_irqrestore(&base_crng.lock, flags); @@ -624,6 +622,11 @@ u64 get_random_u64(void) =20 warn_unseeded_randomness(&previous); =20 + if (!crng_ready()) { + _get_random_bytes(&ret, sizeof(ret)); + return ret; + } + local_lock_irqsave(&batched_entropy_u64.lock, flags); batch =3D raw_cpu_ptr(&batched_entropy_u64); =20 @@ -658,6 +661,11 @@ u32 get_random_u32(void) =20 warn_unseeded_randomness(&previous); =20 + if (!crng_ready()) { + _get_random_bytes(&ret, sizeof(ret)); + return ret; + } + local_lock_irqsave(&batched_entropy_u32.lock, flags); batch =3D raw_cpu_ptr(&batched_entropy_u32); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6BC94C433FE for ; Fri, 27 May 2022 11:51:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239403AbiE0Lvt (ORCPT ); Fri, 27 May 2022 07:51:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58516 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351942AbiE0Lrz (ORCPT ); Fri, 27 May 2022 07:47:55 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5521214ACB8; Fri, 27 May 
2022 04:43:31 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id DC94DB8091D; Fri, 27 May 2022 11:43:26 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4D16FC385A9; Fri, 27 May 2022 11:43:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651805; bh=WjN6iPaEemEdOYwuxrw2exSj48k7SUAyeKYjL1IMm6U=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=1a+QTOoylsUCKFAiRRoN5oA8T1mE1KVTeAkenYihgPOcJZ0PoB1tODE3h6dHnYgaO 6a6gLRDcU7/hHPO6a6F2TXGW/47JA3wQX1YIyGPq2pvAnn3ZW7R9sKFzKP7ITQiFEa KWGOhkwUkFISZQ8m7symw24VsRuVDwnzRYylfaT4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 086/111] random: use first 128 bits of input as fast init Date: Fri, 27 May 2022 10:49:58 +0200 Message-Id: <20220527084831.660455927@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 5c3b747ef54fa2a7318776777f6044540d99f721 upstream. Before, the first 64 bytes of input, regardless of how entropic it was, would be used to mutate the crng base key directly, and none of those bytes would be credited as having entropy. Then 256 bits of credited input would be accumulated, and only then would the rng transition from the earlier "fast init" phase into being actually initialized. 
The thinking was that by mixing and matching fast init and real init, an attacker who compromised the fast init state, considered easy to do given how little entropy might be in those first 64 bytes, would then be able to bruteforce bits from the actual initialization. By keeping these separate, bruteforcing became impossible. However, by not crediting potentially creditable bits from those first 64 bytes of input, we delay initialization, and actually make the problem worse, because it means the user is drawing worse random numbers for a longer period of time. Instead, we can take the first 128 bits as fast init, and allow them to be credited, and then hold off on the next 128 bits until they've accumulated. This is still a wide enough margin to prevent bruteforcing the rng state, while still initializing much faster. Then, rather than trying to piecemeal inject into the base crng key at various points, instead just extract from the pool when we need it, for the crng_init=3D=3D0 phase. Performance may even be better for the various inputs here, since there are likely more calls to mix_pool_bytes() then there are to get_random_bytes() during this phase of system execution. Since the preinit injection code is gone, bootloader randomness can then do something significantly more straight forward, removing the weird system_wq hack in hwgenerator randomness. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 146 ++++++++++++++++-----------------------------= ----- 1 file changed, 49 insertions(+), 97 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -231,10 +231,7 @@ static void _warn_unseeded_randomness(co * *********************************************************************/ =20 -enum { - CRNG_RESEED_INTERVAL =3D 300 * HZ, - CRNG_INIT_CNT_THRESH =3D 2 * CHACHA_KEY_SIZE -}; +enum { CRNG_RESEED_INTERVAL =3D 300 * HZ }; =20 static struct { u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long)); @@ -258,6 +255,8 @@ static DEFINE_PER_CPU(struct crng, crngs =20 /* Used by crng_reseed() to extract a new seed from the input pool. */ static bool drain_entropy(void *buf, size_t nbytes); +/* Used by crng_make_state() to extract a new seed when crng_init=3D=3D0. = */ +static void extract_entropy(void *buf, size_t nbytes); =20 /* * This extracts a new crng key from the input pool, but only if there is a @@ -382,17 +381,20 @@ static void crng_make_state(u32 chacha_s /* * For the fast path, we check whether we're ready, unlocked first, and * then re-check once locked later. In the case where we're really not - * ready, we do fast key erasure with the base_crng directly, because - * this is what crng_pre_init_inject() mutates during early init. + * ready, we do fast key erasure with the base_crng directly, extracting + * when crng_init=3D=3D0. 
*/ if (!crng_ready()) { bool ready; =20 spin_lock_irqsave(&base_crng.lock, flags); ready =3D crng_ready(); - if (!ready) + if (!ready) { + if (crng_init =3D=3D 0) + extract_entropy(base_crng.key, sizeof(base_crng.key)); crng_fast_key_erasure(base_crng.key, chacha_state, random_data, random_data_len); + } spin_unlock_irqrestore(&base_crng.lock, flags); if (!ready) return; @@ -433,48 +435,6 @@ static void crng_make_state(u32 chacha_s local_unlock_irqrestore(&crngs.lock, flags); } =20 -/* - * This function is for crng_init =3D=3D 0 only. It loads entropy directly - * into the crng's key, without going through the input pool. It is, - * generally speaking, not very safe, but we use this only at early - * boot time when it's better to have something there rather than - * nothing. - * - * If account is set, then the crng_init_cnt counter is incremented. - * This shouldn't be set by functions like add_device_randomness(), - * where we can't trust the buffer passed to it is guaranteed to be - * unpredictable (so it might not have any entropy at all). 
- */ -static void crng_pre_init_inject(const void *input, size_t len, bool accou= nt) -{ - static int crng_init_cnt =3D 0; - struct blake2s_state hash; - unsigned long flags; - - blake2s_init(&hash, sizeof(base_crng.key)); - - spin_lock_irqsave(&base_crng.lock, flags); - if (crng_init !=3D 0) { - spin_unlock_irqrestore(&base_crng.lock, flags); - return; - } - - blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); - blake2s_update(&hash, input, len); - blake2s_final(&hash, base_crng.key); - - if (account) { - crng_init_cnt +=3D min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_c= nt); - if (crng_init_cnt >=3D CRNG_INIT_CNT_THRESH) - crng_init =3D 1; - } - - spin_unlock_irqrestore(&base_crng.lock, flags); - - if (crng_init =3D=3D 1) - pr_notice("fast init done\n"); -} - static void _get_random_bytes(void *buf, size_t nbytes) { u32 chacha_state[CHACHA_STATE_WORDS]; @@ -787,7 +747,8 @@ EXPORT_SYMBOL(get_random_bytes_arch); =20 enum { POOL_BITS =3D BLAKE2S_HASH_SIZE * 8, - POOL_MIN_BITS =3D POOL_BITS /* No point in settling for less. */ + POOL_MIN_BITS =3D POOL_BITS, /* No point in settling for less. */ + POOL_FAST_INIT_BITS =3D POOL_MIN_BITS / 2 }; =20 /* For notifying userspace should write into /dev/random. */ @@ -824,24 +785,6 @@ static void mix_pool_bytes(const void *i spin_unlock_irqrestore(&input_pool.lock, flags); } =20 -static void credit_entropy_bits(size_t nbits) -{ - unsigned int entropy_count, orig, add; - - if (!nbits) - return; - - add =3D min_t(size_t, nbits, POOL_BITS); - - do { - orig =3D READ_ONCE(input_pool.entropy_count); - entropy_count =3D min_t(unsigned int, POOL_BITS, orig + add); - } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D ori= g); - - if (!crng_ready() && entropy_count >=3D POOL_MIN_BITS) - crng_reseed(); -} - /* * This is an HKDF-like construction for using the hashed collected entropy * as a PRF key, that's then expanded block-by-block. 
@@ -907,6 +850,33 @@ static bool drain_entropy(void *buf, siz return true; } =20 +static void credit_entropy_bits(size_t nbits) +{ + unsigned int entropy_count, orig, add; + unsigned long flags; + + if (!nbits) + return; + + add =3D min_t(size_t, nbits, POOL_BITS); + + do { + orig =3D READ_ONCE(input_pool.entropy_count); + entropy_count =3D min_t(unsigned int, POOL_BITS, orig + add); + } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) !=3D ori= g); + + if (!crng_ready() && entropy_count >=3D POOL_MIN_BITS) + crng_reseed(); + else if (unlikely(crng_init =3D=3D 0 && entropy_count >=3D POOL_FAST_INIT= _BITS)) { + spin_lock_irqsave(&base_crng.lock, flags); + if (crng_init =3D=3D 0) { + extract_entropy(base_crng.key, sizeof(base_crng.key)); + crng_init =3D 1; + } + spin_unlock_irqrestore(&base_crng.lock, flags); + } +} + =20 /********************************************************************** * @@ -949,9 +919,9 @@ static bool drain_entropy(void *buf, siz * entropy as specified by the caller. If the entropy pool is full it will * block until more entropy is needed. * - * add_bootloader_randomness() is the same as add_hwgenerator_randomness()= or - * add_device_randomness(), depending on whether or not the configuration - * option CONFIG_RANDOM_TRUST_BOOTLOADER is set. + * add_bootloader_randomness() is called by bootloader drivers, such as EFI + * and device tree, and credits its input depending on whether or not the + * configuration option CONFIG_RANDOM_TRUST_BOOTLOADER is set. * * add_interrupt_randomness() uses the interrupt timing as random * inputs to the entropy pool. 
Using the cycle counters and the irq source @@ -1031,9 +1001,6 @@ void add_device_randomness(const void *b unsigned long entropy =3D random_get_entropy(); unsigned long flags; =20 - if (crng_init =3D=3D 0 && size) - crng_pre_init_inject(buf, size, false); - spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(&entropy, sizeof(entropy)); _mix_pool_bytes(buf, size); @@ -1149,12 +1116,6 @@ void rand_initialize_disk(struct gendisk void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { - if (unlikely(crng_init =3D=3D 0 && entropy < POOL_MIN_BITS)) { - crng_pre_init_inject(buffer, count, true); - mix_pool_bytes(buffer, count); - return; - } - /* * Throttle writing if we're above the trickle threshold. * We'll be woken up again once below POOL_MIN_BITS, when @@ -1162,7 +1123,7 @@ void add_hwgenerator_randomness(const vo * CRNG_RESEED_INTERVAL has elapsed. */ wait_event_interruptible_timeout(random_write_wait, - !system_wq || kthread_should_stop() || + kthread_should_stop() || input_pool.entropy_count < POOL_MIN_BITS, CRNG_RESEED_INTERVAL); mix_pool_bytes(buffer, count); @@ -1171,17 +1132,14 @@ void add_hwgenerator_randomness(const vo EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); =20 /* - * Handle random seed passed by bootloader. - * If the seed is trustworthy, it would be regarded as hardware RNGs. Othe= rwise - * it would be regarded as device data. - * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER. + * Handle random seed passed by bootloader, and credit it if + * CONFIG_RANDOM_TRUST_BOOTLOADER is set. 
  */
 void add_bootloader_randomness(const void *buf, size_t size)
 {
+	mix_pool_bytes(buf, size);
 	if (trust_bootloader)
-		add_hwgenerator_randomness(buf, size, size * 8);
-	else
-		add_device_randomness(buf, size);
+		credit_entropy_bits(size * 8);
 }
 EXPORT_SYMBOL_GPL(add_bootloader_randomness);
 
@@ -1281,13 +1239,8 @@ static void mix_interrupt_randomness(str
 	fast_pool->last = jiffies;
 	local_irq_enable();
 
-	if (unlikely(crng_init == 0)) {
-		crng_pre_init_inject(pool, sizeof(pool), true);
-		mix_pool_bytes(pool, sizeof(pool));
-	} else {
-		mix_pool_bytes(pool, sizeof(pool));
-		credit_entropy_bits(1);
-	}
+	mix_pool_bytes(pool, sizeof(pool));
+	credit_entropy_bits(1);
 
 	memzero_explicit(pool, sizeof(pool));
 }
@@ -1309,8 +1262,7 @@ void add_interrupt_randomness(int irq)
 	if (new_count & MIX_INFLIGHT)
 		return;
 
-	if (new_count < 64 && (!time_is_before_jiffies(fast_pool->last + HZ) ||
-			       unlikely(crng_init == 0)))
+	if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
 	if (unlikely(!fast_pool->mix.func))

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org, Theodore Tso, Nadia Heninger, Tom Ristenpart, Eric Biggers, "Jason A. Donenfeld"
Subject: [PATCH 5.17 087/111] random: do not pretend to handle premature next security model
Date: Fri, 27 May 2022 10:49:59 +0200
Message-Id: <20220527084831.805844152@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit e85c0fc1d94c52483a603651748d4c76d6aa1c6b upstream.

Per the thread linked below, "premature next" is not considered to be a
realistic threat model, and leads to more serious security problems.

"Premature next" is the scenario in which:

- Attacker compromises the current state of a fully initialized RNG via
  some kind of infoleak.
- New bits of entropy are added directly to the key used to generate the
  /dev/urandom stream, without any buffering or pooling.
- Attacker then, somehow having read access to /dev/urandom, samples RNG
  output and brute forces the individual new bits that were added.
- Result: the RNG never "recovers" from the initial compromise, a
  so-called violation of what academics term "post-compromise security".

The usual solutions to this involve some form of delaying when entropy
gets mixed into the crng. With Fortuna, this involves multiple input
buckets. With what the Linux RNG was trying to do prior, this involves
entropy estimation.

However, by delaying when entropy gets mixed in, it also means that RNG
compromises are extremely dangerous during the window of time before the
RNG has gathered enough entropy, during which time nonces may become
predictable (or repeated), ephemeral keys may not be secret, and so
forth. Moreover, it's unclear how realistic "premature next" is from an
attack perspective, if these attacks even make sense in practice.

Put together -- and discussed in more detail in the thread below --
these constitute grounds for just doing away with the current code that
pretends to handle premature next. I say "pretends" because it wasn't
doing an especially great job at it either; should we change our mind
about this direction, we would probably implement Fortuna to "fix" the
"problem", in which case, removing the pretend solution still makes
sense.

This also reduces the crng reseed period from 5 minutes down to 1
minute. The rationale from the thread might lead us toward reducing that
even further in the future (or even eliminating it), but that remains a
topic of a future commit.

At a high level, this patch changes semantics from:

    Before: Seed for the first time after 256 "bits" of estimated
    entropy have been accumulated since the system booted. Thereafter,
    reseed once every five minutes, but only if 256 new "bits" have been
    accumulated since the last reseeding.

    After: Seed for the first time after 256 "bits" of estimated entropy
    have been accumulated since the system booted. Thereafter, reseed
    once every minute.

Most of this patch is renaming and removing: POOL_MIN_BITS becomes
POOL_INIT_BITS, credit_entropy_bits() becomes credit_init_bits(),
crng_reseed() loses its "force" parameter since it's now always true,
the drain_entropy() function no longer has any use so it's removed,
entropy estimation is skipped if we've already init'd, the various
notifiers for "low on entropy" are now only active prior to init, and
finally, some documentation comments are cleaned up here and there.

Link: https://lore.kernel.org/lkml/YmlMGx6+uigkGiZ0@zx2c4.com/
Cc: Theodore Ts'o
Cc: Nadia Heninger
Cc: Tom Ristenpart
Reviewed-by: Eric Biggers
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 174 +++++++++++++++++-------------------------------
 1 file changed, 62 insertions(+), 112 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -15,14 +15,12 @@
  *   - Sysctl interface.
  *
  * The high level overview is that there is one input pool, into which
- * various pieces of data are hashed. Some of that data is then "credited" as
- * having a certain number of bits of entropy. When enough bits of entropy are
- * available, the hash is finalized and handed as a key to a stream cipher that
- * expands it indefinitely for various consumers. This key is periodically
- * refreshed as the various entropy collectors, described below, add data to the
- * input pool and credit it. There is currently no Fortuna-like scheduler
- * involved, which can lead to malicious entropy sources causing a premature
- * reseed, and the entropy estimates are, at best, conservative guesses.
+ * various pieces of data are hashed. Prior to initialization, some of that
+ * data is then "credited" as having a certain number of bits of entropy.
+ * When enough bits of entropy are available, the hash is finalized and
+ * handed as a key to a stream cipher that expands it indefinitely for
+ * various consumers. This key is periodically refreshed as the various
+ * entropy collectors, described below, add data to the input pool.
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -231,7 +229,10 @@ static void _warn_unseeded_randomness(co
  *
  *********************************************************************/
 
-enum { CRNG_RESEED_INTERVAL = 300 * HZ };
+enum {
+	CRNG_RESEED_START_INTERVAL = HZ,
+	CRNG_RESEED_INTERVAL = 60 * HZ
+};
 
 static struct {
 	u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
@@ -253,16 +254,10 @@ static DEFINE_PER_CPU(struct crng, crngs
 	.lock = INIT_LOCAL_LOCK(crngs.lock),
 };
 
-/* Used by crng_reseed() to extract a new seed from the input pool. */
-static bool drain_entropy(void *buf, size_t nbytes);
-/* Used by crng_make_state() to extract a new seed when crng_init==0. */
+/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
 static void extract_entropy(void *buf, size_t nbytes);
 
-/*
- * This extracts a new crng key from the input pool, but only if there is a
- * sufficient amount of entropy available, in order to mitigate bruteforcing
- * of newly added bits.
- */
+/* This extracts a new crng key from the input pool. */
 static void crng_reseed(void)
 {
 	unsigned long flags;
@@ -270,9 +265,7 @@ static void crng_reseed(void)
 	u8 key[CHACHA_KEY_SIZE];
 	bool finalize_init = false;
 
-	/* Only reseed if we can, to prevent brute forcing a small amount of new bits. */
-	if (!drain_entropy(key, sizeof(key)))
-		return;
+	extract_entropy(key, sizeof(key));
 
 	/*
 	 * We copy the new key into the base_crng, overwriting the old one,
@@ -344,10 +337,10 @@ static void crng_fast_key_erasure(u8 key
 }
 
 /*
- * Return whether the crng seed is considered to be sufficiently
- * old that a reseeding might be attempted. This happens if the last
- * reseeding was CRNG_RESEED_INTERVAL ago, or during early boot, at
- * an interval proportional to the uptime.
+ * Return whether the crng seed is considered to be sufficiently old
+ * that a reseeding is needed. This happens if the last reseeding
+ * was CRNG_RESEED_INTERVAL ago, or during early boot, at an interval
+ * proportional to the uptime.
  */
 static bool crng_has_old_seed(void)
 {
@@ -359,7 +352,7 @@ static bool crng_has_old_seed(void)
 		if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2)
 			WRITE_ONCE(early_boot, false);
 		else
-			interval = max_t(unsigned int, 5 * HZ,
+			interval = max_t(unsigned int, CRNG_RESEED_START_INTERVAL,
 					 (unsigned int)uptime / 2 * HZ);
 	}
 	return time_after(jiffies, READ_ONCE(base_crng.birth) + interval);
@@ -401,8 +394,8 @@ static void crng_make_state(u32 chacha_s
 	}
 
 	/*
-	 * If the base_crng is old enough, we try to reseed, which in turn
-	 * bumps the generation counter that we check below.
+	 * If the base_crng is old enough, we reseed, which in turn bumps the
+	 * generation counter that we check below.
 	 */
 	if (unlikely(crng_has_old_seed()))
 		crng_reseed();
@@ -734,30 +727,24 @@ EXPORT_SYMBOL(get_random_bytes_arch);
  *
  * After which, if added entropy should be credited:
  *
- *	static void credit_entropy_bits(size_t nbits)
+ *	static void credit_init_bits(size_t nbits)
  *
- * Finally, extract entropy via these two, with the latter one
- * setting the entropy count to zero and extracting only if there
- * is POOL_MIN_BITS entropy credited prior:
+ * Finally, extract entropy via:
  *
  *	static void extract_entropy(void *buf, size_t nbytes)
- *	static bool drain_entropy(void *buf, size_t nbytes)
  *
 **********************************************************************/
 
 enum {
 	POOL_BITS = BLAKE2S_HASH_SIZE * 8,
-	POOL_MIN_BITS = POOL_BITS, /* No point in settling for less. */
-	POOL_FAST_INIT_BITS = POOL_MIN_BITS / 2
+	POOL_INIT_BITS = POOL_BITS, /* No point in settling for less. */
+	POOL_FAST_INIT_BITS = POOL_INIT_BITS / 2
 };
 
-/* For notifying userspace should write into /dev/random. */
-static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
-
 static struct {
 	struct blake2s_state hash;
 	spinlock_t lock;
-	unsigned int entropy_count;
+	unsigned int init_bits;
 } input_pool = {
 	.hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE),
 		    BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4,
@@ -772,9 +759,9 @@ static void _mix_pool_bytes(const void *
 }
 
 /*
- * This function adds bytes into the entropy "pool". It does not
- * update the entropy estimate. The caller should call
- * credit_entropy_bits if this is appropriate.
+ * This function adds bytes into the input pool. It does not
+ * update the initialization bit counter; the caller should call
+ * credit_init_bits if this is appropriate.
  */
 static void mix_pool_bytes(const void *in, size_t nbytes)
 {
@@ -831,43 +818,24 @@ static void extract_entropy(void *buf, s
 	memzero_explicit(&block, sizeof(block));
 }
 
-/*
- * First we make sure we have POOL_MIN_BITS of entropy in the pool, and then we
- * set the entropy count to zero (but don't actually touch any data). Only then
- * can we extract a new key with extract_entropy().
- */
-static bool drain_entropy(void *buf, size_t nbytes)
-{
-	unsigned int entropy_count;
-	do {
-		entropy_count = READ_ONCE(input_pool.entropy_count);
-		if (entropy_count < POOL_MIN_BITS)
-			return false;
-	} while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);
-	extract_entropy(buf, nbytes);
-	wake_up_interruptible(&random_write_wait);
-	kill_fasync(&fasync, SIGIO, POLL_OUT);
-	return true;
-}
-
-static void credit_entropy_bits(size_t nbits)
+static void credit_init_bits(size_t nbits)
 {
-	unsigned int entropy_count, orig, add;
+	unsigned int init_bits, orig, add;
 	unsigned long flags;
 
-	if (!nbits)
+	if (crng_ready() || !nbits)
 		return;
 
 	add = min_t(size_t, nbits, POOL_BITS);
 
 	do {
-		orig = READ_ONCE(input_pool.entropy_count);
-		entropy_count = min_t(unsigned int, POOL_BITS, orig + add);
-	} while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig);
+		orig = READ_ONCE(input_pool.init_bits);
+		init_bits = min_t(unsigned int, POOL_BITS, orig + add);
+	} while (cmpxchg(&input_pool.init_bits, orig, init_bits) != orig);
 
-	if (!crng_ready() && entropy_count >= POOL_MIN_BITS)
+	if (!crng_ready() && init_bits >= POOL_INIT_BITS)
 		crng_reseed();
-	else if (unlikely(crng_init == 0 && entropy_count >= POOL_FAST_INIT_BITS)) {
+	else if (unlikely(crng_init == 0 && init_bits >= POOL_FAST_INIT_BITS)) {
 		spin_lock_irqsave(&base_crng.lock, flags);
 		if (crng_init == 0) {
 			extract_entropy(base_crng.key, sizeof(base_crng.key));
@@ -973,13 +941,10 @@ int __init rand_initialize(void)
 	_mix_pool_bytes(&now, sizeof(now));
 	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
 
-	extract_entropy(base_crng.key, sizeof(base_crng.key));
-	++base_crng.generation;
-
-	if (arch_init && trust_cpu && !crng_ready()) {
-		crng_init = 2;
-		pr_notice("crng init done (trusting CPU's manufacturer)\n");
-	}
+	if (crng_ready())
+		crng_reseed();
+	else if (arch_init && trust_cpu)
+		credit_init_bits(BLAKE2S_BLOCK_SIZE * 8);
 
 	if (ratelimit_disable) {
 		urandom_warning.interval = 0;
@@ -1033,6 +998,9 @@ static void add_timer_randomness(struct
 	_mix_pool_bytes(&num, sizeof(num));
 	spin_unlock_irqrestore(&input_pool.lock, flags);
 
+	if (crng_ready())
+		return;
+
 	/*
 	 * Calculate number of bits of randomness we probably added.
 	 * We take into account the first, second and third-order deltas
@@ -1063,7 +1031,7 @@ static void add_timer_randomness(struct
 	 * Round down by 1 bit on general principles,
 	 * and limit entropy estimate to 12 bits.
 	 */
-	credit_entropy_bits(min_t(unsigned int, fls(delta >> 1), 11));
+	credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11));
 }
 
 void add_input_randomness(unsigned int type, unsigned int code,
@@ -1116,18 +1084,15 @@ void rand_initialize_disk(struct gendisk
 void add_hwgenerator_randomness(const void *buffer, size_t count,
 				size_t entropy)
 {
+	mix_pool_bytes(buffer, count);
+	credit_init_bits(entropy);
+
 	/*
-	 * Throttle writing if we're above the trickle threshold.
-	 * We'll be woken up again once below POOL_MIN_BITS, when
-	 * the calling thread is about to terminate, or once
-	 * CRNG_RESEED_INTERVAL has elapsed.
+	 * Throttle writing to once every CRNG_RESEED_INTERVAL, unless
+	 * we're not yet initialized.
	 */
-	wait_event_interruptible_timeout(random_write_wait,
-			kthread_should_stop() ||
-			input_pool.entropy_count < POOL_MIN_BITS,
-			CRNG_RESEED_INTERVAL);
-	mix_pool_bytes(buffer, count);
-	credit_entropy_bits(entropy);
+	if (!kthread_should_stop() && crng_ready())
+		schedule_timeout_interruptible(CRNG_RESEED_INTERVAL);
 }
 EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
 
@@ -1139,7 +1104,7 @@ void add_bootloader_randomness(const voi
 {
 	mix_pool_bytes(buf, size);
 	if (trust_bootloader)
-		credit_entropy_bits(size * 8);
+		credit_init_bits(size * 8);
 }
 EXPORT_SYMBOL_GPL(add_bootloader_randomness);
 
@@ -1240,7 +1205,7 @@ static void mix_interrupt_randomness(str
 	local_irq_enable();
 
 	mix_pool_bytes(pool, sizeof(pool));
-	credit_entropy_bits(1);
+	credit_init_bits(1);
 
 	memzero_explicit(pool, sizeof(pool));
 }
@@ -1287,7 +1252,7 @@ EXPORT_SYMBOL_GPL(add_interrupt_randomne
  */
 static void entropy_timer(struct timer_list *t)
 {
-	credit_entropy_bits(1);
+	credit_init_bits(1);
 }
 
 /*
@@ -1380,16 +1345,8 @@ SYSCALL_DEFINE3(getrandom, char __user *
 
 static __poll_t random_poll(struct file *file, poll_table *wait)
 {
-	__poll_t mask;
-	poll_wait(file, &crng_init_wait, wait);
-	poll_wait(file, &random_write_wait, wait);
-	mask = 0;
-	if (crng_ready())
-		mask |= EPOLLIN | EPOLLRDNORM;
-	if (input_pool.entropy_count < POOL_MIN_BITS)
-		mask |= EPOLLOUT | EPOLLWRNORM;
-	return mask;
+	return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
 }
 
 static int write_pool(const char __user *ubuf, size_t count)
@@ -1462,7 +1419,7 @@ static long random_ioctl(struct file *f,
 	switch (cmd) {
 	case RNDGETENTCNT:
 		/* Inherently racy, no point locking. */
-		if (put_user(input_pool.entropy_count, p))
+		if (put_user(input_pool.init_bits, p))
			return -EFAULT;
 		return 0;
 	case RNDADDTOENTCNT:
@@ -1472,7 +1429,7 @@ static long random_ioctl(struct file *f,
 			return -EFAULT;
 		if (ent_count < 0)
 			return -EINVAL;
-		credit_entropy_bits(ent_count);
+		credit_init_bits(ent_count);
 		return 0;
 	case RNDADDENTROPY:
 		if (!capable(CAP_SYS_ADMIN))
@@ -1486,20 +1443,13 @@ static long random_ioctl(struct file *f,
 		retval = write_pool((const char __user *)p, size);
 		if (retval < 0)
 			return retval;
-		credit_entropy_bits(ent_count);
+		credit_init_bits(ent_count);
 		return 0;
 	case RNDZAPENTCNT:
 	case RNDCLEARPOOL:
-		/*
-		 * Clear the entropy pool counters. We no longer clear
-		 * the entropy pool, as that's silly.
-		 */
+		/* No longer has any effect. */
 		if (!capable(CAP_SYS_ADMIN))
 			return -EPERM;
-		if (xchg(&input_pool.entropy_count, 0) >= POOL_MIN_BITS) {
-			wake_up_interruptible(&random_write_wait);
-			kill_fasync(&fasync, SIGIO, POLL_OUT);
-		}
 		return 0;
 	case RNDRESEEDCRNG:
 		if (!capable(CAP_SYS_ADMIN))
@@ -1558,7 +1508,7 @@ const struct file_operations urandom_fop
  *
  * - write_wakeup_threshold - the amount of entropy in the input pool
  *   below which write polls to /dev/random will unblock, requesting
- *   more entropy, tied to the POOL_MIN_BITS constant. It is writable
+ *   more entropy, tied to the POOL_INIT_BITS constant. It is writable
  *   to avoid breaking old userspaces, but writing to it does not
  *   change any behavior of the RNG.
  *
@@ -1573,7 +1523,7 @@ const struct file_operations urandom_fop
 #include
 
 static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ;
-static int sysctl_random_write_wakeup_bits = POOL_MIN_BITS;
+static int sysctl_random_write_wakeup_bits = POOL_INIT_BITS;
 static int sysctl_poolsize = POOL_BITS;
 static u8 sysctl_bootid[UUID_SIZE];
 
@@ -1629,7 +1579,7 @@ static struct ctl_table random_table[] =
 	},
 	{
 		.procname	= "entropy_avail",
-		.data		= &input_pool.entropy_count,
+		.data		= &input_pool.init_bits,
 		.maxlen		= sizeof(int),
 		.mode		= 0444,
 		.proc_handler	= proc_dointvec,

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org, "Jason A. Donenfeld"
Subject: [PATCH 5.17 088/111] random: order timer entropy functions below interrupt functions
Date: Fri, 27 May 2022 10:50:00 +0200
Message-Id: <20220527084831.939787810@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit a4b5c26b79ffdfcfb816c198f2fc2b1e7b5b580f upstream.

There are no code changes here; this is just a reordering of functions,
so that in subsequent commits, the timer entropy functions can call into
the interrupt ones.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 238 +++++++++++++++++++++++++-------------------------
 1 file changed, 119 insertions(+), 119 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -854,13 +854,13 @@ static void credit_init_bits(size_t nbit
  * the above entropy accumulation routines:
  *
  *	void add_device_randomness(const void *buf, size_t size);
- *	void add_input_randomness(unsigned int type, unsigned int code,
- *				  unsigned int value);
- *	void add_disk_randomness(struct gendisk *disk);
  *	void add_hwgenerator_randomness(const void *buffer, size_t count,
  *					size_t entropy);
  *	void add_bootloader_randomness(const void *buf, size_t size);
  *	void add_interrupt_randomness(int irq);
+ *	void add_input_randomness(unsigned int type, unsigned int code,
+ *				  unsigned int value);
+ *	void add_disk_randomness(struct gendisk *disk);
  *
  * add_device_randomness() adds data to the input pool that
  * is likely to differ between two devices (or possibly even per boot).
@@ -870,19 +870,6 @@ static void credit_init_bits(size_t nbit
  * that might otherwise be identical and have very little entropy
  * available to them (particularly common in the embedded world).
  *
- * add_input_randomness() uses the input layer interrupt timing, as well
- * as the event type information from the hardware.
- *
- * add_disk_randomness() uses what amounts to the seek time of block
- * layer request events, on a per-disk_devt basis, as input to the
- * entropy pool. Note that high-speed solid state drives with very low
- * seek times do not make for good sources of entropy, as their seek
- * times are usually fairly consistent.
- *
- * The above two routines try to estimate how many bits of entropy
- * to credit. They do this by keeping track of the first and second
- * order deltas of the event timings.
- *
 * add_hwgenerator_randomness() is for true hardware RNGs, and will credit
 * entropy as specified by the caller. If the entropy pool is full it will
 * block until more entropy is needed.
@@ -896,6 +883,19 @@ static void credit_init_bits(size_t nbit
 * as inputs, it feeds the input pool roughly once a second or after 64
 * interrupts, crediting 1 bit of entropy for whichever comes first.
 *
+ * add_input_randomness() uses the input layer interrupt timing, as well
+ * as the event type information from the hardware.
+ *
+ * add_disk_randomness() uses what amounts to the seek time of block
+ * layer request events, on a per-disk_devt basis, as input to the
+ * entropy pool. Note that high-speed solid state drives with very low
+ * seek times do not make for good sources of entropy, as their seek
+ * times are usually fairly consistent.
+ *
+ * The last two routines try to estimate how many bits of entropy
+ * to credit. They do this by keeping track of the first and second
+ * order deltas of the event timings.
+ *
 **********************************************************************/
 
 static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
@@ -973,109 +973,6 @@ void add_device_randomness(const void *b
 }
 EXPORT_SYMBOL(add_device_randomness);
 
-/* There is one of these per entropy source */
-struct timer_rand_state {
-	unsigned long last_time;
-	long last_delta, last_delta2;
-};
-
-/*
- * This function adds entropy to the entropy "pool" by using timing
- * delays. It uses the timer_rand_state structure to make an estimate
- * of how many bits of entropy this call has added to the pool.
- *
- * The number "num" is also added to the pool - it should somehow describe
- * the type of event which just happened. This is currently 0-255 for
- * keyboard scan codes, and 256 upwards for interrupts.
- */
-static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
-{
-	unsigned long entropy = random_get_entropy(), now = jiffies, flags;
-	long delta, delta2, delta3;
-
-	spin_lock_irqsave(&input_pool.lock, flags);
-	_mix_pool_bytes(&entropy, sizeof(entropy));
-	_mix_pool_bytes(&num, sizeof(num));
-	spin_unlock_irqrestore(&input_pool.lock, flags);
-
-	if (crng_ready())
-		return;
-
-	/*
-	 * Calculate number of bits of randomness we probably added.
-	 * We take into account the first, second and third-order deltas
-	 * in order to make our estimate.
-	 */
-	delta = now - READ_ONCE(state->last_time);
-	WRITE_ONCE(state->last_time, now);
-
-	delta2 = delta - READ_ONCE(state->last_delta);
-	WRITE_ONCE(state->last_delta, delta);
-
-	delta3 = delta2 - READ_ONCE(state->last_delta2);
-	WRITE_ONCE(state->last_delta2, delta2);
-
-	if (delta < 0)
-		delta = -delta;
-	if (delta2 < 0)
-		delta2 = -delta2;
-	if (delta3 < 0)
-		delta3 = -delta3;
-	if (delta > delta2)
-		delta = delta2;
-	if (delta > delta3)
-		delta = delta3;
-
-	/*
-	 * delta is now minimum absolute delta.
-	 * Round down by 1 bit on general principles,
-	 * and limit entropy estimate to 12 bits.
-	 */
-	credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11));
-}
-
-void add_input_randomness(unsigned int type, unsigned int code,
-			  unsigned int value)
-{
-	static unsigned char last_value;
-	static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES };
-
-	/* Ignore autorepeat and the like. */
-	if (value == last_value)
-		return;
-
-	last_value = value;
-	add_timer_randomness(&input_timer_state,
-			     (type << 4) ^ code ^ (code >> 4) ^ value);
-}
-EXPORT_SYMBOL_GPL(add_input_randomness);
-
-#ifdef CONFIG_BLOCK
-void add_disk_randomness(struct gendisk *disk)
-{
-	if (!disk || !disk->random)
-		return;
-	/* First major is 1, so we get >= 0x200 here. */
-	add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
-}
-EXPORT_SYMBOL_GPL(add_disk_randomness);
-
-void rand_initialize_disk(struct gendisk *disk)
-{
-	struct timer_rand_state *state;
-
-	/*
-	 * If kzalloc returns null, we just won't use that entropy
-	 * source.
-	 */
-	state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
-	if (state) {
-		state->last_time = INITIAL_JIFFIES;
-		disk->random = state;
-	}
-}
-#endif
-
 /*
  * Interface for in-kernel drivers of true hardware RNGs.
  * Those devices may produce endless random bits and will be throttled
@@ -1237,6 +1134,109 @@ void add_interrupt_randomness(int irq)
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
+/* There is one of these per entropy source */
+struct timer_rand_state {
+	unsigned long last_time;
+	long last_delta, last_delta2;
+};
+
+/*
+ * This function adds entropy to the entropy "pool" by using timing
+ * delays. It uses the timer_rand_state structure to make an estimate
+ * of how many bits of entropy this call has added to the pool.
+ *
+ * The number "num" is also added to the pool - it should somehow describe
+ * the type of event which just happened. This is currently 0-255 for
+ * keyboard scan codes, and 256 upwards for interrupts.
+ */
+static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
+{
+	unsigned long entropy = random_get_entropy(), now = jiffies, flags;
+	long delta, delta2, delta3;
+
+	spin_lock_irqsave(&input_pool.lock, flags);
+	_mix_pool_bytes(&entropy, sizeof(entropy));
+	_mix_pool_bytes(&num, sizeof(num));
+	spin_unlock_irqrestore(&input_pool.lock, flags);
+
+	if (crng_ready())
+		return;
+
+	/*
+	 * Calculate number of bits of randomness we probably added.
+	 * We take into account the first, second and third-order deltas
+	 * in order to make our estimate.
+	 */
+	delta = now - READ_ONCE(state->last_time);
+	WRITE_ONCE(state->last_time, now);
+
+	delta2 = delta - READ_ONCE(state->last_delta);
+	WRITE_ONCE(state->last_delta, delta);
+
+	delta3 = delta2 - READ_ONCE(state->last_delta2);
+	WRITE_ONCE(state->last_delta2, delta2);
+
+	if (delta < 0)
+		delta = -delta;
+	if (delta2 < 0)
+		delta2 = -delta2;
+	if (delta3 < 0)
+		delta3 = -delta3;
+	if (delta > delta2)
+		delta = delta2;
+	if (delta > delta3)
+		delta = delta3;
+
+	/*
+	 * delta is now minimum absolute delta.
+	 * Round down by 1 bit on general principles,
+	 * and limit entropy estimate to 12 bits.
+	 */
+	credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11));
+}
+
+void add_input_randomness(unsigned int type, unsigned int code,
+			  unsigned int value)
+{
+	static unsigned char last_value;
+	static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES };
+
+	/* Ignore autorepeat and the like. */
+	if (value == last_value)
+		return;
+
+	last_value = value;
+	add_timer_randomness(&input_timer_state,
+			     (type << 4) ^ code ^ (code >> 4) ^ value);
+}
+EXPORT_SYMBOL_GPL(add_input_randomness);
+
+#ifdef CONFIG_BLOCK
+void add_disk_randomness(struct gendisk *disk)
+{
+	if (!disk || !disk->random)
+		return;
+	/* First major is 1, so we get >= 0x200 here. */
+	add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
+}
+EXPORT_SYMBOL_GPL(add_disk_randomness);
+
+void rand_initialize_disk(struct gendisk *disk)
+{
+	struct timer_rand_state *state;
+
+	/*
+	 * If kzalloc returns null, we just won't use that entropy
+	 * source.
+	 */
+	state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
+	if (state) {
+		state->last_time = INITIAL_JIFFIES;
+		disk->random = state;
+	}
+}
+#endif
+
 /*
  * Each time the timer fires, we expect that we got an unpredictable
  * jump in the cycle counter. Even if the timer is running on another

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org, Thomas Gleixner, Filipe Manana, Peter Zijlstra, Borislav Petkov, Theodore Tso, "Jason A. Donenfeld"
Subject: [PATCH 5.17 089/111] random: do not use input pool from hard IRQs
Date: Fri, 27 May 2022 10:50:01 +0200
Message-Id: <20220527084832.063873145@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit e3e33fc2ea7fcefd0d761db9d6219f83b4248f5c upstream.

Years ago, a separate fast pool was added for interrupts, so that the
cost associated with taking the input pool spinlocks and mixing into it
would be avoided in places where latency is critical. However, one
oversight was that add_input_randomness() and add_disk_randomness()
still sometimes are called directly from the interrupt handler, rather
than being deferred to a thread. This means that some unlucky interrupts
will be caught doing a blake2s_compress() call and potentially spinning
on input_pool.lock, which can also be taken by unprivileged users by
writing into /dev/urandom.

In order to fix this, add_timer_randomness() now checks whether it is
being called from a hard IRQ and if so, just mixes into the per-cpu IRQ
fast pool using fast_mix(), which is much faster and can be done
lock-free. A nice consequence of this, as well, is that it means hard
IRQ context FPU support is likely no longer useful.

The entropy estimation algorithm used by add_timer_randomness() is also
somewhat different than the one used for add_interrupt_randomness(). The
former looks at deltas of deltas of deltas, while the latter just waits
for 64 interrupts for one bit or for one second since the last bit.
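[Note for this stable review only, not part of the patch: the deltas-of-deltas estimator mentioned above can be sketched in plain userspace C. `estimate_bits()` and its open-coded fls() loop are hypothetical stand-ins for the kernel's add_timer_randomness() and fls(); a perfectly regular timer quickly estimates to zero bits.]

```c
#include <assert.h>
#include <stdlib.h>

struct timer_rand_state {
	long last_time;
	long last_delta, last_delta2;
};

/*
 * Bits credited for one timing event, following the scheme the commit
 * message describes: take first-, second- and third-order deltas of the
 * event times, keep the smallest absolute value, round down by one bit
 * (delta >> 1) and cap the estimate at 11 bits.
 */
static unsigned int estimate_bits(struct timer_rand_state *state, long now)
{
	long delta, delta2, delta3;

	delta = now - state->last_time;
	state->last_time = now;

	delta2 = delta - state->last_delta;
	state->last_delta = delta;

	delta3 = delta2 - state->last_delta2;
	state->last_delta2 = delta2;

	delta = labs(delta);
	delta2 = labs(delta2);
	delta3 = labs(delta3);
	if (delta > delta2)
		delta = delta2;
	if (delta > delta3)
		delta = delta3;

	/* fls(x): index of the highest set bit, 0 when x == 0 */
	unsigned int fls = 0;
	for (unsigned long v = (unsigned long)(delta >> 1); v; v >>= 1)
		fls++;
	return fls < 11 ? fls : 11;
}
```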
In order to bridge these, and since add_interrupt_randomness() runs
after an add_timer_randomness() that's called from hard IRQ, we add to
the fast pool credit the related amount, and then subtract one to
account for add_interrupt_randomness()'s contribution. A downside of
this, however, is that the num argument is potentially attacker
controlled, which puts a bit more pressure on the fast_mix() sponge to
do more than it's really intended to do. As a mitigating factor, the
first 96 bits of input aren't attacker controlled (a cycle counter
followed by zeros), which means it's essentially two rounds of siphash
rather than one, which is somewhat better. It's also not that much
different from add_interrupt_randomness()'s use of the irq stack
instruction pointer register.

Cc: Thomas Gleixner
Cc: Filipe Manana
Cc: Peter Zijlstra
Cc: Borislav Petkov
Cc: Theodore Ts'o
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 51 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 36 insertions(+), 15 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1084,6 +1084,7 @@ static void mix_interrupt_randomness(str
	 * we don't wind up "losing" some.
	 */
	unsigned long pool[2];
+	unsigned int count;

	/* Check to see if we're running on the wrong CPU due to hotplug. */
	local_irq_disable();
@@ -1097,12 +1098,13 @@ static void mix_interrupt_randomness(str
	 * consistent view, before we reenable irqs again.
	 */
	memcpy(pool, fast_pool->pool, sizeof(pool));
+	count = fast_pool->count;
	fast_pool->count = 0;
	fast_pool->last = jiffies;
	local_irq_enable();

	mix_pool_bytes(pool, sizeof(pool));
-	credit_init_bits(1);
+	credit_init_bits(max(1u, (count & U16_MAX) / 64));

	memzero_explicit(pool, sizeof(pool));
}
@@ -1142,22 +1144,30 @@ struct timer_rand_state {

 /*
  * This function adds entropy to the entropy "pool" by using timing
- * delays. It uses the timer_rand_state structure to make an estimate
- * of how many bits of entropy this call has added to the pool.
- *
- * The number "num" is also added to the pool - it should somehow describe
- * the type of event which just happened. This is currently 0-255 for
- * keyboard scan codes, and 256 upwards for interrupts.
+ * delays. It uses the timer_rand_state structure to make an estimate
+ * of how many bits of entropy this call has added to the pool. The
+ * value "num" is also added to the pool; it should somehow describe
+ * the type of event that just happened.
 */
static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
{
	unsigned long entropy = random_get_entropy(), now = jiffies, flags;
	long delta, delta2, delta3;
+	unsigned int bits;

-	spin_lock_irqsave(&input_pool.lock, flags);
-	_mix_pool_bytes(&entropy, sizeof(entropy));
-	_mix_pool_bytes(&num, sizeof(num));
-	spin_unlock_irqrestore(&input_pool.lock, flags);
+	/*
+	 * If we're in a hard IRQ, add_interrupt_randomness() will be called
+	 * sometime after, so mix into the fast pool.
+	 */
+	if (in_hardirq()) {
+		fast_mix(this_cpu_ptr(&irq_randomness)->pool,
+			 (unsigned long[2]){ entropy, num });
+	} else {
+		spin_lock_irqsave(&input_pool.lock, flags);
+		_mix_pool_bytes(&entropy, sizeof(entropy));
+		_mix_pool_bytes(&num, sizeof(num));
+		spin_unlock_irqrestore(&input_pool.lock, flags);
+	}

	if (crng_ready())
		return;
@@ -1188,11 +1198,22 @@ static void add_timer_randomness(struct
		delta = delta3;

	/*
-	 * delta is now minimum absolute delta.
-	 * Round down by 1 bit on general principles,
-	 * and limit entropy estimate to 12 bits.
+	 * delta is now minimum absolute delta. Round down by 1 bit
+	 * on general principles, and limit entropy estimate to 11 bits.
+	 */
+	bits = min(fls(delta >> 1), 11);
+
+	/*
+	 * As mentioned above, if we're in a hard IRQ, add_interrupt_randomness()
+	 * will run after this, which uses a different crediting scheme of 1 bit
+	 * per every 64 interrupts. In order to let that function do accounting
+	 * close to the one in this function, we credit a full 64/64 bit per bit,
+	 * and then subtract one to account for the extra one added.
	 */
-	credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11));
+	if (in_hardirq())
+		this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1;
+	else
+		credit_init_bits(bits);
 }

 void add_input_randomness(unsigned int type, unsigned int code,

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Jason A. Donenfeld"
Subject: [PATCH 5.17 090/111] random: help compiler out with fast_mix() by using simpler arguments
Date: Fri, 27 May 2022 10:50:02 +0200
Message-Id: <20220527084832.178099361@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 791332b3cbb080510954a4c152ce02af8832eac9 upstream.

Now that fast_mix() has more than one caller, gcc no longer inlines it.
That's fine. But it also doesn't handle the compound literal argument we
pass it very efficiently, nor does it handle the loop as well as it
could. So just expand the code to spell out this function so that it
generates the same code as it did before. Performance-wise, this now
behaves as it did before the last commit. The difference in actual code
size on x86 is 45 bytes, which is less than a cache line.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 44 +++++++++++++++++++++++---------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1029,25 +1029,30 @@ static DEFINE_PER_CPU(struct fast_pool,
  * and therefore this has no security on its own. s represents the
  * four-word SipHash state, while v represents a two-word input.
  */
-static void fast_mix(unsigned long s[4], const unsigned long v[2])
+static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
 {
-	size_t i;
-
-	for (i = 0; i < 2; ++i) {
-		s[3] ^= v[i];
 #ifdef CONFIG_64BIT
-		s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32);
-		s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2];
-		s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0];
-		s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32);
+#define PERM() do { \
+	s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32); \
+	s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2]; \
+	s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0]; \
+	s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32); \
+} while (0)
 #else
-		s[0] += s[1]; s[1] = rol32(s[1],  5); s[1] ^= s[0]; s[0] = rol32(s[0], 16);
-		s[2] += s[3]; s[3] = rol32(s[3],  8); s[3] ^= s[2];
-		s[0] += s[3]; s[3] = rol32(s[3],  7); s[3] ^= s[0];
-		s[2] += s[1]; s[1] = rol32(s[1], 13); s[1] ^= s[2]; s[2] = rol32(s[2], 16);
+#define PERM() do { \
+	s[0] += s[1]; s[1] = rol32(s[1],  5); s[1] ^= s[0]; s[0] = rol32(s[0], 16); \
+	s[2] += s[3]; s[3] = rol32(s[3],  8); s[3] ^= s[2]; \
+	s[0] += s[3]; s[3] = rol32(s[3],  7); s[3] ^= s[0]; \
+	s[2] += s[1]; s[1] = rol32(s[1], 13); s[1] ^= s[2]; s[2] = rol32(s[2], 16); \
+} while (0)
 #endif
-		s[0] ^= v[i];
-	}
+
+	s[3] ^= v1;
+	PERM();
+	s[0] ^= v1;
+	s[3] ^= v2;
+	PERM();
+	s[0] ^= v2;
 }

 #ifdef CONFIG_SMP
@@ -1117,10 +1122,8 @@ void add_interrupt_randomness(int irq)
	struct pt_regs *regs = get_irq_regs();
	unsigned int new_count;

-	fast_mix(fast_pool->pool, (unsigned long[2]){
-		entropy,
-		(regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq)
-	});
+	fast_mix(fast_pool->pool, entropy,
+		 (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq));
	new_count = ++fast_pool->count;

	if (new_count & MIX_INFLIGHT)
@@ -1160,8 +1163,7 @@ static void add_timer_randomness(struct
	 * sometime after, so mix into the fast pool.
	 */
	if (in_hardirq()) {
-		fast_mix(this_cpu_ptr(&irq_randomness)->pool,
-			 (unsigned long[2]){ entropy, num });
+		fast_mix(this_cpu_ptr(&irq_randomness)->pool, entropy, num);
	} else {
		spin_lock_irqsave(&input_pool.lock, flags);
		_mix_pool_bytes(&entropy, sizeof(entropy));

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Jason A. Donenfeld"
Subject: [PATCH 5.17 091/111] siphash: use one source of truth for siphash permutations
Date: Fri, 27 May 2022 10:50:03 +0200
Message-Id: <20220527084832.319059938@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit e73aaae2fa9024832e1f42e30c787c7baf61d014 upstream.

The SipHash family of permutations is currently used in three places:

- siphash.c itself, used in the ordinary way it was intended.
- random32.c, in a construction from an anonymous contributor.
- random.c, as part of its fast_mix function.

Each one of these places reinvents the wheel with the same C code, same
rotation constants, and same symmetry-breaking constants. This commit
tidies things up a bit by placing macros for the permutations and
constants into siphash.h, where each of the three .c users can access
them. It also leaves a note dissuading more users of them from emerging.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c   | 30 +++++++-----------------------
 include/linux/prandom.h | 23 +++++++----------------
 include/linux/siphash.h | 28 ++++++++++++++++++++++++++++
 lib/siphash.c           | 32 ++++++++++----------------------
 4 files changed, 52 insertions(+), 61 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -51,6 +51,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/siphash.h>
 #include <...>
 #include <...>
 #include <...>
@@ -1014,12 +1015,11 @@ struct fast_pool {

 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 #ifdef CONFIG_64BIT
-	/* SipHash constants */
-	.pool = { 0x736f6d6570736575UL, 0x646f72616e646f6dUL,
-		  0x6c7967656e657261UL, 0x7465646279746573UL }
+#define FASTMIX_PERM SIPHASH_PERMUTATION
+	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
 #else
-	/* HalfSipHash constants */
-	.pool = { 0, 0, 0x6c796765U, 0x74656462U }
+#define FASTMIX_PERM HSIPHASH_PERMUTATION
+	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
 #endif
 };

@@ -1031,27 +1031,11 @@ static DEFINE_PER_CPU(struct fast_pool,
  */
 static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
 {
-#ifdef CONFIG_64BIT
-#define PERM() do { \
-	s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32); \
-	s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2]; \
-	s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0]; \
-	s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32); \
-} while (0)
-#else
-#define PERM() do { \
-	s[0] += s[1]; s[1] = rol32(s[1],  5); s[1] ^= s[0]; s[0] = rol32(s[0], 16); \
-	s[2] += s[3]; s[3] = rol32(s[3],  8); s[3] ^= s[2]; \
-	s[0] += s[3]; s[3] = rol32(s[3],  7); s[3] ^= s[0]; \
-	s[2] += s[1]; s[1] = rol32(s[1], 13); s[1] ^= s[2]; s[2] = rol32(s[2], 16); \
-} while (0)
-#endif
-	s[3] ^= v1;
-	PERM();
+	s[3] ^= v1;
+	FASTMIX_PERM(s[0], s[1], s[2], s[3]);
 	s[0] ^= v1;
 	s[3] ^= v2;
-	PERM();
+	FASTMIX_PERM(s[0], s[1], s[2], s[3]);
 	s[0] ^= v2;
 }

--- a/include/linux/prandom.h
+++ b/include/linux/prandom.h
@@ -10,6 +10,7 @@

 #include <...>
 #include <...>
+#include <linux/siphash.h>

 u32 prandom_u32(void);
 void prandom_bytes(void *buf, size_t nbytes);
@@ -27,15 +28,10 @@ DECLARE_PER_CPU(unsigned long, net_rand_
  * The core SipHash round function. Each line can be executed in
  * parallel given enough CPU resources.
  */
-#define PRND_SIPROUND(v0, v1, v2, v3) ( \
-	v0 += v1, v1 = rol64(v1, 13), v2 += v3, v3 = rol64(v3, 16), \
-	v1 ^= v0, v0 = rol64(v0, 32), v3 ^= v2, \
-	v0 += v3, v3 = rol64(v3, 21), v2 += v1, v1 = rol64(v1, 17), \
-	v3 ^= v0, v1 ^= v2, v2 = rol64(v2, 32) \
-)
+#define PRND_SIPROUND(v0, v1, v2, v3) SIPHASH_PERMUTATION(v0, v1, v2, v3)

-#define PRND_K0 (0x736f6d6570736575 ^ 0x6c7967656e657261)
-#define PRND_K1 (0x646f72616e646f6d ^ 0x7465646279746573)
+#define PRND_K0 (SIPHASH_CONST_0 ^ SIPHASH_CONST_2)
+#define PRND_K1 (SIPHASH_CONST_1 ^ SIPHASH_CONST_3)

 #elif BITS_PER_LONG == 32
 /*
@@ -43,14 +39,9 @@ DECLARE_PER_CPU(unsigned long, net_rand_
  * This is weaker, but 32-bit machines are not used for high-traffic
  * applications, so there is less output for an attacker to analyze.
  */
-#define PRND_SIPROUND(v0, v1, v2, v3) ( \
-	v0 += v1, v1 = rol32(v1,  5), v2 += v3, v3 = rol32(v3,  8), \
-	v1 ^= v0, v0 = rol32(v0, 16), v3 ^= v2, \
-	v0 += v3, v3 = rol32(v3,  7), v2 += v1, v1 = rol32(v1, 13), \
-	v3 ^= v0, v1 ^= v2, v2 = rol32(v2, 16) \
-)
-#define PRND_K0 0x6c796765
-#define PRND_K1 0x74656462
+#define PRND_SIPROUND(v0, v1, v2, v3) HSIPHASH_PERMUTATION(v0, v1, v2, v3)
+#define PRND_K0 (HSIPHASH_CONST_0 ^ HSIPHASH_CONST_2)
+#define PRND_K1 (HSIPHASH_CONST_1 ^ HSIPHASH_CONST_3)

 #else
 #error Unsupported BITS_PER_LONG
--- a/include/linux/siphash.h
+++ b/include/linux/siphash.h
@@ -138,4 +138,32 @@ static inline u32 hsiphash(const void *d
 	return ___hsiphash_aligned(data, len, key);
 }

+/*
+ * These macros expose the raw SipHash and HalfSipHash permutations.
+ * Do not use them directly! If you think you have a use for them,
+ * be sure to CC the maintainer of this file explaining why.
+ */
+
+#define SIPHASH_PERMUTATION(a, b, c, d) ( \
+	(a) += (b), (b) = rol64((b), 13), (b) ^= (a), (a) = rol64((a), 32), \
+	(c) += (d), (d) = rol64((d), 16), (d) ^= (c), \
+	(a) += (d), (d) = rol64((d), 21), (d) ^= (a), \
+	(c) += (b), (b) = rol64((b), 17), (b) ^= (c), (c) = rol64((c), 32))
+
+#define SIPHASH_CONST_0 0x736f6d6570736575ULL
+#define SIPHASH_CONST_1 0x646f72616e646f6dULL
+#define SIPHASH_CONST_2 0x6c7967656e657261ULL
+#define SIPHASH_CONST_3 0x7465646279746573ULL
+
+#define HSIPHASH_PERMUTATION(a, b, c, d) ( \
+	(a) += (b), (b) = rol32((b), 5), (b) ^= (a), (a) = rol32((a), 16), \
+	(c) += (d), (d) = rol32((d), 8), (d) ^= (c), \
+	(a) += (d), (d) = rol32((d), 7), (d) ^= (a), \
+	(c) += (b), (b) = rol32((b), 13), (b) ^= (c), (c) = rol32((c), 16))
+
+#define HSIPHASH_CONST_0 0U
+#define HSIPHASH_CONST_1 0U
+#define HSIPHASH_CONST_2 0x6c796765U
+#define HSIPHASH_CONST_3 0x74656462U
+
 #endif /* _LINUX_SIPHASH_H */
--- a/lib/siphash.c
+++ b/lib/siphash.c
@@ -18,19 +18,13 @@
 #include <...>
 #endif

-#define SIPROUND \
-	do { \
-	v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
-	v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
-	v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
-	v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
-	} while (0)
+#define SIPROUND SIPHASH_PERMUTATION(v0, v1, v2, v3)

 #define PREAMBLE(len) \
-	u64 v0 = 0x736f6d6570736575ULL; \
-	u64 v1 = 0x646f72616e646f6dULL; \
-	u64 v2 = 0x6c7967656e657261ULL; \
-	u64 v3 = 0x7465646279746573ULL; \
+	u64 v0 = SIPHASH_CONST_0; \
+	u64 v1 = SIPHASH_CONST_1; \
+	u64 v2 = SIPHASH_CONST_2; \
+	u64 v3 = SIPHASH_CONST_3; \
 	u64 b = ((u64)(len)) << 56; \
 	v3 ^= key->key[1]; \
 	v2 ^= key->key[0];
@@ -389,19 +383,13 @@ u32 hsiphash_4u32(const u32 first, const
 }
 EXPORT_SYMBOL(hsiphash_4u32);
 #else
-#define HSIPROUND \
-	do { \
-	v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
-	v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
-	v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
-	v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
-	} while (0)
+#define HSIPROUND HSIPHASH_PERMUTATION(v0, v1, v2, v3)

 #define HPREAMBLE(len) \
-	u32 v0 = 0; \
-	u32 v1 = 0; \
-	u32 v2 = 0x6c796765U; \
-	u32 v3 = 0x74656462U; \
+	u32 v0 = HSIPHASH_CONST_0; \
+	u32 v1 = HSIPHASH_CONST_1; \
+	u32 v2 = HSIPHASH_CONST_2; \
+	u32 v3 = HSIPHASH_CONST_3; \
 	u32 b = ((u32)(len)) << 24; \
 	v3 ^= key->key[1]; \
 	v2 ^= key->key[0];

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski, Joe Perches, "Jason A. Donenfeld"
Subject: [PATCH 5.17 092/111] random: use symbolic constants for crng_init states
Date: Fri, 27 May 2022 10:50:04 +0200
Message-Id: <20220527084832.450066928@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit e3d2c5e79a999aa4e7d6f0127e16d3da5a4ff70d upstream.
crng_init represents a state machine, with three states, and various
rules for transitions. For the longest time, we've been managing these
with "0", "1", and "2", and expecting people to figure it out. To make
the code more obvious, replace these with proper enum values
representing the transition, and then redocument what each of these
states mean.

Reviewed-by: Dominik Brodowski
Cc: Joe Perches
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -70,16 +70,16 @@
  *********************************************************************/

 /*
- * crng_init = 0 --> Uninitialized
- *	       1 --> Initialized
- *	       2 --> Initialized from input_pool
- *
  * crng_init is protected by base_crng->lock, and only increases
- * its value (from 0->1->2).
+ * its value (from empty->early->ready).
 */
-static int crng_init = 0;
-#define crng_ready() (likely(crng_init > 1))
-/* Various types of waiters for crng_init->2 transition. */
+static enum {
+	CRNG_EMPTY = 0, /* Little to no entropy collected */
+	CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */
+	CRNG_READY = 2  /* Fully initialized with POOL_READY_BITS collected */
+} crng_init = CRNG_EMPTY;
+#define crng_ready() (likely(crng_init >= CRNG_READY))
+/* Various types of waiters for crng_init->CRNG_READY transition. */
 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct fasync_struct *fasync;
 static DEFINE_SPINLOCK(random_ready_chain_lock);
@@ -282,7 +282,7 @@ static void crng_reseed(void)
 	WRITE_ONCE(base_crng.generation, next_gen);
 	WRITE_ONCE(base_crng.birth, jiffies);
 	if (!crng_ready()) {
-		crng_init = 2;
+		crng_init = CRNG_READY;
 		finalize_init = true;
 	}
 	spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -376,7 +376,7 @@ static void crng_make_state(u32 chacha_s
 	 * For the fast path, we check whether we're ready, unlocked first, and
 	 * then re-check once locked later. In the case where we're really not
 	 * ready, we do fast key erasure with the base_crng directly, extracting
-	 * when crng_init==0.
+	 * when crng_init is CRNG_EMPTY.
 	 */
 	if (!crng_ready()) {
 		bool ready;
@@ -384,7 +384,7 @@ static void crng_make_state(u32 chacha_s
 		spin_lock_irqsave(&base_crng.lock, flags);
 		ready = crng_ready();
 		if (!ready) {
-			if (crng_init == 0)
+			if (crng_init == CRNG_EMPTY)
 				extract_entropy(base_crng.key, sizeof(base_crng.key));
 			crng_fast_key_erasure(base_crng.key, chacha_state,
 					      random_data, random_data_len);
@@ -738,8 +738,8 @@ EXPORT_SYMBOL(get_random_bytes_arch);

 enum {
 	POOL_BITS = BLAKE2S_HASH_SIZE * 8,
-	POOL_INIT_BITS = POOL_BITS, /* No point in settling for less. */
-	POOL_FAST_INIT_BITS = POOL_INIT_BITS / 2
+	POOL_READY_BITS = POOL_BITS, /* When crng_init->CRNG_READY */
+	POOL_EARLY_BITS = POOL_READY_BITS / 2 /* When crng_init->CRNG_EARLY */
 };

 static struct {
@@ -834,13 +834,13 @@ static void credit_init_bits(size_t nbit
 		init_bits = min_t(unsigned int, POOL_BITS, orig + add);
 	} while (cmpxchg(&input_pool.init_bits, orig, init_bits) != orig);

-	if (!crng_ready() && init_bits >= POOL_INIT_BITS)
+	if (!crng_ready() && init_bits >= POOL_READY_BITS)
 		crng_reseed();
-	else if (unlikely(crng_init == 0 && init_bits >= POOL_FAST_INIT_BITS)) {
+	else if (unlikely(crng_init == CRNG_EMPTY && init_bits >= POOL_EARLY_BITS)) {
 		spin_lock_irqsave(&base_crng.lock, flags);
-		if (crng_init == 0) {
+		if (crng_init == CRNG_EMPTY) {
 			extract_entropy(base_crng.key, sizeof(base_crng.key));
-			crng_init = 1;
+			crng_init = CRNG_EARLY;
 		}
 		spin_unlock_irqrestore(&base_crng.lock, flags);
 	}
@@ -1515,7 +1515,7 @@ const struct file_operations urandom_fop
 *
 * - write_wakeup_threshold - the amount of entropy in the input pool
 *   below which write polls to /dev/random will unblock, requesting
-*   more entropy, tied to the POOL_INIT_BITS constant. It is writable
+*   more entropy, tied to the POOL_READY_BITS constant. It is writable
 *   to avoid breaking old userspaces, but writing to it does not
 *   change any behavior of the RNG.
 *
@@ -1530,7 +1530,7 @@ const struct file_operations urandom_fop
 #include <...>

 static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ;
-static int sysctl_random_write_wakeup_bits = POOL_INIT_BITS;
+static int sysctl_random_write_wakeup_bits = POOL_READY_BITS;
 static int sysctl_poolsize = POOL_BITS;
 static u8 sysctl_bootid[UUID_SIZE];

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 093/111] random: avoid initializing twice in credit race
Date: Fri, 27 May 2022 10:50:05 +0200
Message-Id: <20220527084832.577576318@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit fed7ef061686cc813b1f3d8d0edc6c35b4d3537b upstream.

Since all changes of crng_init now go through credit_init_bits(), we can
fix a long standing race in which two concurrent callers of
credit_init_bits() have the new bit count >= some threshold, but are
doing so with crng_init as a lower threshold, checked outside of a lock,
resulting in crng_reseed() or similar being called twice.

In order to fix this, we can use the original cmpxchg value of the bit
count, and only change crng_init when the bit count transitions from
below a threshold to meeting the threshold.

Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -821,7 +821,7 @@ static void extract_entropy(void *buf, s

 static void credit_init_bits(size_t nbits)
 {
-	unsigned int init_bits, orig, add;
+	unsigned int new, orig, add;
 	unsigned long flags;

 	if (crng_ready() || !nbits)
@@ -831,12 +831,12 @@ static void credit_init_bit

 	do {
 		orig = READ_ONCE(input_pool.init_bits);
-		init_bits = min_t(unsigned int, POOL_BITS, orig + add);
-	} while (cmpxchg(&input_pool.init_bits, orig, init_bits) != orig);
+		new = min_t(unsigned int, POOL_BITS, orig + add);
+	} while (cmpxchg(&input_pool.init_bits, orig, new) != orig);

-	if (!crng_ready() && init_bits >= POOL_READY_BITS)
+	if (orig < POOL_READY_BITS && new >= POOL_READY_BITS)
 		crng_reseed();
-	else if (unlikely(crng_init == CRNG_EMPTY && init_bits >= POOL_EARLY_BITS)) {
+	else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) {
 		spin_lock_irqsave(&base_crng.lock, flags);
 		if (crng_init == CRNG_EMPTY) {
 			extract_entropy(base_crng.key, sizeof(base_crng.key));

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 094/111] random: move initialization out of reseeding hot path
Date: Fri, 27 May 2022 10:50:06 +0200
Message-Id: <20220527084832.696256284@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 68c9c8b192c6dae9be6278e98ee44029d5da2d31 upstream.

Initialization happens once -- by way of credit_init_bits() -- and then
it never happens again. Therefore, it doesn't need to be in
crng_reseed(), which is a hot path that is called multiple times. It
also doesn't make sense to have there, as initialization activity is
better associated with initialization routines.
After the prior commit, crng_reseed() now won't be called by multiple concurrent callers, which means that we can safely move the "finalize_init" logic into credit_init_bits() unconditionally. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 42 +++++++++++++++++++----------------------- 1 file changed, 19 insertions(+), 23 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -264,7 +264,6 @@ static void crng_reseed(void) unsigned long flags; unsigned long next_gen; u8 key[CHACHA_KEY_SIZE]; - bool finalize_init =3D false; =20 extract_entropy(key, sizeof(key)); =20 @@ -281,28 +280,10 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - if (!crng_ready()) { + if (!crng_ready()) crng_init =3D CRNG_READY; - finalize_init =3D true; - } spin_unlock_irqrestore(&base_crng.lock, flags); memzero_explicit(key, sizeof(key)); - if (finalize_init) { - process_random_ready_list(); - wake_up_interruptible(&crng_init_wait); - kill_fasync(&fasync, SIGIO, POLL_IN); - pr_notice("crng init done\n"); - if (unseeded_warning.missed) { - pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", - unseeded_warning.missed); - unseeded_warning.missed =3D 0; - } - if (urandom_warning.missed) { - pr_notice("%d urandom warning(s) missed due to ratelimiting\n", - urandom_warning.missed); - urandom_warning.missed =3D 0; - } - } } =20 /* @@ -834,10 +815,25 @@ static void credit_init_bits(size_t nbit new =3D min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.init_bits, orig, new) !=3D orig); =20 - if (orig < POOL_READY_BITS && new >=3D POOL_READY_BITS) - crng_reseed(); - else if (orig < POOL_EARLY_BITS && new >=3D POOL_EARLY_BITS) { + 
if (orig < POOL_READY_BITS && new >=3D POOL_READY_BITS) { + crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */ + process_random_ready_list(); + wake_up_interruptible(&crng_init_wait); + kill_fasync(&fasync, SIGIO, POLL_IN); + pr_notice("crng init done\n"); + if (unseeded_warning.missed) { + pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", + unseeded_warning.missed); + unseeded_warning.missed =3D 0; + } + if (urandom_warning.missed) { + pr_notice("%d urandom warning(s) missed due to ratelimiting\n", + urandom_warning.missed); + urandom_warning.missed =3D 0; + } + } else if (orig < POOL_EARLY_BITS && new >=3D POOL_EARLY_BITS) { spin_lock_irqsave(&base_crng.lock, flags); + /* Check if crng_init is CRNG_EMPTY, to avoid race with crng_reseed(). */ if (crng_init =3D=3D CRNG_EMPTY) { extract_entropy(base_crng.key, sizeof(base_crng.key)); crng_init =3D CRNG_EARLY; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27AE5C3526F for ; Fri, 27 May 2022 11:56:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353097AbiE0L4L (ORCPT ); Fri, 27 May 2022 07:56:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40338 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352856AbiE0Lu4 (ORCPT ); Fri, 27 May 2022 07:50:56 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DF2C7132A16; Fri, 27 May 2022 04:46:10 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 
5323B61CF0; Fri, 27 May 2022 11:46:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5FDD6C385A9; Fri, 27 May 2022 11:46:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651969; bh=iTYQaJYlXPbDVyH9KPjW3gONTItiebfNQm2WzJRyfxY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jKJNUdHwgjemIdw3eNv/qmczeoE5oSIwa860AmTKtBwghK5Ks0u4A6aLUwBD/CXaG JGvXTUR76Ug/No2JCtCIdgWveImYn8AgakCnh5/LihZ3zD/OLCLoT16oCf+yRl7DAt PngUkmp5pS4h3SxwFBKWNNUH67jn1COT4VV84B/o= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 095/111] random: remove ratelimiting for in-kernel unseeded randomness Date: Fri, 27 May 2022 10:50:07 +0200 Message-Id: <20220527084832.819481404@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit cc1e127bfa95b5fb2f9307e7168bf8b2b45b4c5e upstream. The CONFIG_WARN_ALL_UNSEEDED_RANDOM debug option controls whether the kernel warns about all unseeded randomness or just the first instance. There's some complicated rate limiting and comparison to the previous caller, such that even with CONFIG_WARN_ALL_UNSEEDED_RANDOM enabled, developers still don't see all the messages or even an accurate count of how many were missed. This is the result of basically parallel mechanisms aimed at accomplishing more or less the same thing, added at different points in random.c history, which sort of compete with the first-instance-only limiting we have now. 
It turns out, however, that nobody cares about the first unseeded randomness instance of in-kernel users. The same first user has been there for ages now, and nobody is doing anything about it. It isn't even clear that anybody _can_ do anything about it. Most places that can do something about it have switched over to using get_random_bytes_wait() or wait_for_random_bytes(), which is the right thing to do, but there is still much code that needs randomness sometimes during init, and as a general rule, if you're not using one of the _wait functions or the readiness notifier callback, you're bound to be doing it wrong just based on that fact alone. So warning about this same first user that can't easily change is simply not an effective mechanism for anything at all. Users can't do anything about it, as the Kconfig text points out -- the problem isn't in userspace code -- and kernel developers don't or more often can't react to it. Instead, show the warning for all instances when CONFIG_WARN_ALL_UNSEEDED_RANDOM is set, so that developers can debug things as need be, or if it isn't set, don't show a warning at all. At the same time, CONFIG_WARN_ALL_UNSEEDED_RANDOM now implies setting random.ratelimit_disable=3D1 on by default, since if you care about one you probably care about the other too. And we can clean up usage around the related urandom_warning ratelimiter as well (whose behavior isn't changing), so that it properly counts missed messages after the 10 message threshold is reached. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 61 ++++++++++++++-------------------------------= ----- lib/Kconfig.debug | 3 -- 2 files changed, 19 insertions(+), 45 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -86,11 +86,10 @@ static DEFINE_SPINLOCK(random_ready_chai static RAW_NOTIFIER_HEAD(random_ready_chain); =20 /* Control how we warn userspace. */ -static struct ratelimit_state unseeded_warning =3D - RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); static struct ratelimit_state urandom_warning =3D RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); -static int ratelimit_disable __read_mostly; +static int ratelimit_disable __read_mostly =3D + IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM); module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"= ); =20 @@ -181,27 +180,15 @@ static void process_random_ready_list(vo spin_unlock_irqrestore(&random_ready_chain_lock, flags); } =20 -#define warn_unseeded_randomness(previous) \ - _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) +#define warn_unseeded_randomness() \ + _warn_unseeded_randomness(__func__, (void *)_RET_IP_) =20 -static void _warn_unseeded_randomness(const char *func_name, void *caller,= void **previous) +static void _warn_unseeded_randomness(const char *func_name, void *caller) { -#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM - const bool print_once =3D false; -#else - static bool print_once __read_mostly; -#endif - - if (print_once || crng_ready() || - (previous && (caller =3D=3D READ_ONCE(*previous)))) + if (!IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) || crng_ready()) return; - WRITE_ONCE(*previous, caller); -#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM - print_once =3D true; -#endif - if (__ratelimit(&unseeded_warning)) - printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init= 
=3D%d\n", - func_name, caller, crng_init); + printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=3D= %d\n", + func_name, caller, crng_init); } =20 =20 @@ -454,9 +441,7 @@ static void _get_random_bytes(void *buf, */ void get_random_bytes(void *buf, size_t nbytes) { - static void *previous; - - warn_unseeded_randomness(&previous); + warn_unseeded_randomness(); _get_random_bytes(buf, nbytes); } EXPORT_SYMBOL(get_random_bytes); @@ -552,10 +537,9 @@ u64 get_random_u64(void) u64 ret; unsigned long flags; struct batched_entropy *batch; - static void *previous; unsigned long next_gen; =20 - warn_unseeded_randomness(&previous); + warn_unseeded_randomness(); =20 if (!crng_ready()) { _get_random_bytes(&ret, sizeof(ret)); @@ -591,10 +575,9 @@ u32 get_random_u32(void) u32 ret; unsigned long flags; struct batched_entropy *batch; - static void *previous; unsigned long next_gen; =20 - warn_unseeded_randomness(&previous); + warn_unseeded_randomness(); =20 if (!crng_ready()) { _get_random_bytes(&ret, sizeof(ret)); @@ -821,16 +804,9 @@ static void credit_init_bits(size_t nbit wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); pr_notice("crng init done\n"); - if (unseeded_warning.missed) { - pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", - unseeded_warning.missed); - unseeded_warning.missed =3D 0; - } - if (urandom_warning.missed) { + if (urandom_warning.missed) pr_notice("%d urandom warning(s) missed due to ratelimiting\n", urandom_warning.missed); - urandom_warning.missed =3D 0; - } } else if (orig < POOL_EARLY_BITS && new >=3D POOL_EARLY_BITS) { spin_lock_irqsave(&base_crng.lock, flags); /* Check if crng_init is CRNG_EMPTY, to avoid race with crng_reseed(). 
*/ @@ -943,10 +919,6 @@ int __init rand_initialize(void) else if (arch_init && trust_cpu) credit_init_bits(BLAKE2S_BLOCK_SIZE * 8); =20 - if (ratelimit_disable) { - urandom_warning.interval =3D 0; - unseeded_warning.interval =3D 0; - } return 0; } =20 @@ -1392,11 +1364,14 @@ static ssize_t urandom_read(struct file { static int maxwarn =3D 10; =20 - if (!crng_ready() && maxwarn > 0) { - maxwarn--; - if (__ratelimit(&urandom_warning)) + if (!crng_ready()) { + if (!ratelimit_disable && maxwarn <=3D 0) + ++urandom_warning.missed; + else if (ratelimit_disable || __ratelimit(&urandom_warning)) { + --maxwarn; pr_notice("%s: uninitialized urandom read (%zd bytes read)\n", current->comm, nbytes); + } } =20 return get_random_bytes_user(buf, nbytes); --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1566,8 +1566,7 @@ config WARN_ALL_UNSEEDED_RANDOM so architecture maintainers really need to do what they can to get the CRNG seeded sooner after the system is booted. However, since users cannot do anything actionable to - address this, by default the kernel will issue only a single - warning for the first use of unseeded randomness. + address this, by default this option is disabled. =20 Say Y here if you want to receive warnings for all uses of unseeded randomness. 
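[Editor's illustration of the reworked urandom warning accounting in the patch above: up to ten warnings print, and once the cap is hit further would-be warnings are counted as "missed" unless ratelimiting is disabled. A hedged userspace sketch; the __ratelimit() throttling itself is omitted, and all names are illustrative.]

```c
/* Illustrative state mirroring maxwarn and urandom_warning.missed. */
static int maxwarn = 10;
static int ratelimit_disable = 0;
static int missed = 0;

/* Returns 1 if a warning would be printed for this unseeded read. */
int maybe_warn_urandom(int crng_is_ready)
{
    if (crng_is_ready)
        return 0;                 /* seeded: nothing to warn about */
    if (!ratelimit_disable && maxwarn <= 0) {
        ++missed;                 /* cap reached: count the miss */
        return 0;
    }
    --maxwarn;
    return 1;                     /* would pr_notice() here */
}
```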
This will be of use primarily for From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34124C433FE for ; Fri, 27 May 2022 11:56:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352226AbiE0LzP (ORCPT ); Fri, 27 May 2022 07:55:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352602AbiE0Lug (ORCPT ); Fri, 27 May 2022 07:50:36 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 446B9153502; Fri, 27 May 2022 04:44:55 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id B8959B824D2; Fri, 27 May 2022 11:44:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 05C59C385A9; Fri, 27 May 2022 11:44:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651892; bh=wvyHoClL7ERvP595o/9+eYZSIUBBJhySOGYGJJR06BA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Yxqx74MYQMqgKWJR9OaWagir5y+xQoU4LwoiwixmQBSEN1PRi4UjwZQtGCCW1d8eb xbO8Ch+hMtGaFiVp6KuXy1RkvcyPn+LwwgYl8WE7JDHbRo+dRSpZErQu3ycB6m/7My woHpXzV01n6MrBFJMIT8RTTNxzUe2TqoqLdWnTCM= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Jason A. 
Donenfeld" Subject: [PATCH 5.17 096/111] random: use proper jiffies comparison macro Date: Fri, 27 May 2022 10:50:08 +0200 Message-Id: <20220527084832.962007527@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 8a5b8a4a4ceb353b4dd5bafd09e2b15751bcdb51 upstream. This expands to exactly the same code that it replaces, but makes things consistent by using the same macro for jiffy comparisons throughout. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -324,7 +324,7 @@ static bool crng_has_old_seed(void) interval =3D max_t(unsigned int, CRNG_RESEED_START_INTERVAL, (unsigned int)uptime / 2 * HZ); } - return time_after(jiffies, READ_ONCE(base_crng.birth) + interval); + return time_is_before_jiffies(READ_ONCE(base_crng.birth) + interval); } =20 /* From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88B41C433FE for ; Fri, 27 May 2022 11:58:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352193AbiE0L6Q (ORCPT ); Fri, 27 May 2022 07:58:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40022 "EHLO lindbergh.monkeyblade.net" 
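[Editor's illustration of why the jiffies comparison macros in the patch above are wraparound-safe: the unsigned subtraction is reinterpreted as signed, so "a is after b" holds even when the tick counter has wrapped. A userspace model, not the kernel's macro definitions.]

```c
#include <limits.h>

typedef unsigned long jiffies_t;

/*
 * Model of time_after(a, b): true iff a is later than b, assuming the
 * two values are within half the counter range of each other. The
 * kernel's time_is_before_jiffies(b) is time_after(jiffies, b).
 */
int my_time_after(jiffies_t a, jiffies_t b)
{
    return (long)(b - a) < 0;
}
```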
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352691AbiE0Lur (ORCPT ); Fri, 27 May 2022 07:50:47 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [IPv6:2604:1380:40e1:4800::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85C8213C1D8; Fri, 27 May 2022 04:45:04 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id C238DCE2511; Fri, 27 May 2022 11:45:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D4855C385A9; Fri, 27 May 2022 11:45:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651901; bh=pFrOz5QcrMrMDXNOqEP/gGH08/ZDba0RwjiN0I5lLCI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tSk+EcqpJs+HqzPpzhr08NKODAfMS0mcX8VwusGztPhtMhZFHEdBQM48KQTzYkxkx FV0zMqcf5PeTCX61GiG5Em2hOivPtZDu7zRbPqYhoCKKlWubtl1wGh0b6hQoZkeD1P xmm/YBFsJeAO0nDFrZ637aViPjnsUisFLizJWsjg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 097/111] random: handle latent entropy and command line from random_init() Date: Fri, 27 May 2022 10:50:09 +0200 Message-Id: <20220527084833.098608577@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 2f14062bb14b0fcfcc21e6dc7d5b5c0d25966164 upstream. 
Currently, start_kernel() adds latent entropy and the command line to the entropy pool *after* the RNG has been initialized, deferring when it's actually used by things like stack canaries until the next time the pool is seeded. This surely is not intended. Rather than splitting up which entropy gets added where and when between start_kernel() and random_init(), just do everything in random_init(), which should eliminate these kinds of bugs in the future. While we're at it, rename the awkwardly titled "rand_initialize()" to the more standard "random_init()" nomenclature. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 17 ++++++++++------- include/linux/random.h | 16 +++++++--------- init/main.c | 10 +++------- 3 files changed, 20 insertions(+), 23 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -886,12 +886,13 @@ early_param("random.trust_bootloader", p =20 /* * The first collection of entropy occurs at system boot while interrupts - * are still turned off. Here we push in RDSEED, a timestamp, and utsname(= ). - * Depending on the above configuration knob, RDSEED may be considered - * sufficient for initialization. Note that much earlier setup may already - * have pushed entropy into the input pool by the time we get here. + * are still turned off. Here we push in latent entropy, RDSEED, a timesta= mp, + * utsname(), and the command line. Depending on the above configuration k= nob, + * RDSEED may be considered sufficient for initialization. Note that much + * earlier setup may already have pushed entropy into the input pool by the + * time we get here. 
*/ -int __init rand_initialize(void) +int __init random_init(const char *command_line) { size_t i; ktime_t now =3D ktime_get_real(); @@ -913,6 +914,8 @@ int __init rand_initialize(void) } _mix_pool_bytes(&now, sizeof(now)); _mix_pool_bytes(utsname(), sizeof(*(utsname()))); + _mix_pool_bytes(command_line, strlen(command_line)); + add_latent_entropy(); =20 if (crng_ready()) crng_reseed(); @@ -1591,8 +1594,8 @@ static struct ctl_table random_table[] =3D }; =20 /* - * rand_initialize() is called before sysctl_init(), - * so we cannot call register_sysctl_init() in rand_initialize() + * random_init() is called before sysctl_init(), + * so we cannot call register_sysctl_init() in random_init() */ static int __init random_sysctls_init(void) { --- a/include/linux/random.h +++ b/include/linux/random.h @@ -14,26 +14,24 @@ struct notifier_block; =20 extern void add_device_randomness(const void *, size_t); extern void add_bootloader_randomness(const void *, size_t); +extern void add_input_randomness(unsigned int type, unsigned int code, + unsigned int value) __latent_entropy; +extern void add_interrupt_randomness(int irq) __latent_entropy; +extern void add_hwgenerator_randomness(const void *buffer, size_t count, + size_t entropy); =20 #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) static inline void add_latent_entropy(void) { - add_device_randomness((const void *)&latent_entropy, - sizeof(latent_entropy)); + add_device_randomness((const void *)&latent_entropy, sizeof(latent_entrop= y)); } #else static inline void add_latent_entropy(void) {} #endif =20 -extern void add_input_randomness(unsigned int type, unsigned int code, - unsigned int value) __latent_entropy; -extern void add_interrupt_randomness(int irq) __latent_entropy; -extern void add_hwgenerator_randomness(const void *buffer, size_t count, - size_t entropy); - extern void get_random_bytes(void *buf, size_t nbytes); extern int wait_for_random_bytes(void); -extern int __init rand_initialize(void); +extern 
int __init random_init(const char *command_line); extern bool rng_is_initialized(void); extern int register_random_ready_notifier(struct notifier_block *nb); extern int unregister_random_ready_notifier(struct notifier_block *nb); --- a/init/main.c +++ b/init/main.c @@ -1040,15 +1040,11 @@ asmlinkage __visible void __init __no_sa /* * For best initial stack canary entropy, prepare it after: * - setup_arch() for any UEFI RNG entropy and boot cmdline access - * - timekeeping_init() for ktime entropy used in rand_initialize() + * - timekeeping_init() for ktime entropy used in random_init() * - time_init() for making random_get_entropy() work on some platforms - * - rand_initialize() to get any arch-specific entropy like RDRAND - * - add_latent_entropy() to get any latent entropy - * - adding command line entropy + * - random_init() to initialize the RNG from from early entropy sources */ - rand_initialize(); - add_latent_entropy(); - add_device_randomness(command_line, strlen(command_line)); + random_init(command_line); boot_init_stack_canary(); =20 perf_event_init(); From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8437DC433EF for ; Fri, 27 May 2022 11:58:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351997AbiE0L6c (ORCPT ); Fri, 27 May 2022 07:58:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40336 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352756AbiE0Luu (ORCPT ); Fri, 27 May 2022 07:50:50 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ACC669D065; Fri, 27 May 2022 04:45:13 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) 
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 7ECA1B82466; Fri, 27 May 2022 11:45:11 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E4456C385A9; Fri, 27 May 2022 11:45:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651910; bh=uIT1rqjhLQShOLM5dOAiBT3jfc1VpAjwvSp8QCgiBb4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nW0dc19IbK+Thp1F+hIwlJhlgRt85RDlEQUnd6Xu6jEXnh6MweD2jlS4ck43dlTGk dWb4e2mm9BSZ+JfRyqu6aNf5AGOx3xHj8ouq0b7Ea1YlPcMTbWZuYUdfTSsI/jmb2F slppgRuY7kAE+PHtyirZXlwP/6MxIQ966KdHfWN8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 098/111] random: credit architectural init the exact amount Date: Fri, 27 May 2022 10:50:10 +0200 Message-Id: <20220527084833.230360322@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 12e45a2a6308105469968951e6d563e8f4fea187 upstream. RDRAND and RDSEED can fail sometimes, which is fine. We currently initialize the RNG with 512 bits of RDRAND/RDSEED. We only need 256 bits of those to succeed in order to initialize the RNG. Instead of the current "all or nothing" approach, actually credit these contributions the amount that is actually contributed. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -894,9 +894,8 @@ early_param("random.trust_bootloader", p */ int __init random_init(const char *command_line) { - size_t i; ktime_t now =3D ktime_get_real(); - bool arch_init =3D true; + unsigned int i, arch_bytes; unsigned long rv; =20 #if defined(LATENT_ENTROPY_PLUGIN) @@ -904,11 +903,12 @@ int __init random_init(const char *comma _mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed)); #endif =20 - for (i =3D 0; i < BLAKE2S_BLOCK_SIZE; i +=3D sizeof(rv)) { + for (i =3D 0, arch_bytes =3D BLAKE2S_BLOCK_SIZE; + i < BLAKE2S_BLOCK_SIZE; i +=3D sizeof(rv)) { if (!arch_get_random_seed_long_early(&rv) && !arch_get_random_long_early(&rv)) { rv =3D random_get_entropy(); - arch_init =3D false; + arch_bytes -=3D sizeof(rv); } _mix_pool_bytes(&rv, sizeof(rv)); } @@ -919,8 +919,8 @@ int __init random_init(const char *comma =20 if (crng_ready()) crng_reseed(); - else if (arch_init && trust_cpu) - credit_init_bits(BLAKE2S_BLOCK_SIZE * 8); + else if (trust_cpu) + credit_init_bits(arch_bytes * 8); =20 return 0; } From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DAA36C4167E for ; Fri, 27 May 2022 11:56:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352973AbiE0Lzs (ORCPT ); Fri, 27 May 2022 07:55:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40094 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352800AbiE0Lux (ORCPT ); Fri, 27 May 2022 07:50:53 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org 
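[Editor's illustration of the exact-credit loop in the patch above: assume every word of the seed came from the architectural RNG, then subtract the size of each failed fetch. The get_seed callback and stub functions below are stand-ins for arch_get_random_seed_long_early() and friends, not kernel APIs.]

```c
#define SEED_BYTES 64u  /* BLAKE2S_BLOCK_SIZE in the kernel */

/* Returns how many seed bytes actually came from the "arch" source. */
unsigned int count_arch_bytes(int (*get_seed)(unsigned long *rv))
{
    unsigned int i, arch_bytes = SEED_BYTES;
    unsigned long rv;

    for (i = 0; i < SEED_BYTES; i += sizeof(rv)) {
        if (!get_seed(&rv)) {
            rv = 0;                 /* a fallback value would be mixed in */
            arch_bytes -= sizeof(rv);
        }
        (void)rv;                   /* rv would be mixed into the pool */
    }
    return arch_bytes;
}

/* Stubs standing in for an always-working / always-failing arch RNG. */
int seed_ok(unsigned long *rv)   { *rv = 1;  return 1; }
int seed_fail(unsigned long *rv) { (void)rv; return 0; }
```

The caller can then credit arch_bytes * 8 bits instead of the previous all-or-nothing BLAKE2S_BLOCK_SIZE * 8.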
[145.40.73.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 60085132767; Fri, 27 May 2022 04:45:25 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id C1170CE1164; Fri, 27 May 2022 11:45:23 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D7668C34100; Fri, 27 May 2022 11:45:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651922; bh=p0g7pnTg5in2qe6JCETSkgXyTLgyL5g1T3Tge7sAnR8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VOc7BpnyKQQQWmNIoI9n/2aV3PfMKAyq4Gk/UFq3WRG3zubtvbo3uTPFtjwcvt4+H nCfHq6W9HFVOCCNPTKZqbuAAWsz6UhZnQoIRjt0lQeTH+Z8a6A/OK2MMZ2L2L0k5Wg NyWsCn7bi3HgnBULeUG9uuuZw3fhUgbUNTFNn/eU= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Theodore Tso , Sultan Alsawaf , Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 099/111] random: use static branch for crng_ready() Date: Fri, 27 May 2022 10:50:11 +0200 Message-Id: <20220527084833.358046273@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit f5bda35fba615ace70a656d4700423fa6c9bebee upstream. Since crng_ready() is only false briefly during initialization and then forever after becomes true, we don't need to evaluate it after, making it a prime candidate for a static branch. 
One complication, however, is that it changes state in a particular call to credit_init_bits(), which might be made from atomic context, which means we must kick off a workqueue to change the static key. Further complicating things, credit_init_bits() may be called sufficiently early on in system initialization such that system_wq is NULL. Fortunately, there exists the nice function execute_in_process_context(), which will immediately execute the function if !in_interrupt(), and otherwise defer it to a workqueue. During early init, before workqueues are available, in_interrupt() is always false, because interrupts haven't even been enabled yet, which means the function in that case executes immediately. Later on, after workqueues are available, in_interrupt() might be true, but in that case, the work is queued in system_wq and all goes well. Cc: Theodore Ts'o Cc: Sultan Alsawaf Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -77,8 +77,9 @@ static enum { CRNG_EMPTY =3D 0, /* Little to no entropy collected */ CRNG_EARLY =3D 1, /* At least POOL_EARLY_BITS collected */ CRNG_READY =3D 2 /* Fully initialized with POOL_READY_BITS collected */ -} crng_init =3D CRNG_EMPTY; -#define crng_ready() (likely(crng_init >=3D CRNG_READY)) +} crng_init __read_mostly =3D CRNG_EMPTY; +static DEFINE_STATIC_KEY_FALSE(crng_is_ready); +#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= =3D CRNG_READY) /* Various types of waiters for crng_init->CRNG_READY transition. 
*/ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); static struct fasync_struct *fasync; @@ -108,6 +109,11 @@ bool rng_is_initialized(void) } EXPORT_SYMBOL(rng_is_initialized); =20 +static void crng_set_ready(struct work_struct *work) +{ + static_branch_enable(&crng_is_ready); +} + /* Used by wait_for_random_bytes(), and considered an entropy collector, b= elow. */ static void try_to_generate_entropy(void); =20 @@ -267,7 +273,7 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - if (!crng_ready()) + if (!static_branch_likely(&crng_is_ready)) crng_init =3D CRNG_READY; spin_unlock_irqrestore(&base_crng.lock, flags); memzero_explicit(key, sizeof(key)); @@ -785,6 +791,7 @@ static void extract_entropy(void *buf, s =20 static void credit_init_bits(size_t nbits) { + static struct execute_work set_ready; unsigned int new, orig, add; unsigned long flags; =20 @@ -800,6 +807,7 @@ static void credit_init_bits(size_t nbit =20 if (orig < POOL_READY_BITS && new >=3D POOL_READY_BITS) { crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. 
*/ + execute_in_process_context(crng_set_ready, &set_ready); process_random_ready_list(); wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); @@ -1309,7 +1317,7 @@ SYSCALL_DEFINE3(getrandom, char __user * if (count > INT_MAX) count =3D INT_MAX; =20 - if (!(flags & GRND_INSECURE) && !crng_ready()) { + if (!crng_ready() && !(flags & GRND_INSECURE)) { int ret; =20 if (flags & GRND_NONBLOCK) From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5690AC43219 for ; Fri, 27 May 2022 12:01:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349367AbiE0MBc (ORCPT ); Fri, 27 May 2022 08:01:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40116 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352810AbiE0Luy (ORCPT ); Fri, 27 May 2022 07:50:54 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 18754132774; Fri, 27 May 2022 04:45:32 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id A5A5961D5C; Fri, 27 May 2022 11:45:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B582CC385A9; Fri, 27 May 2022 11:45:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651931; bh=FHmPyj2D2tGbC8s1f5us7ysnhTDu4Fn5V+6b2VESaaE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FFVR9/F732TnXiQEPlOPEYHckddX/Sb9yfN8Zzd8uPOEj1dFX/ut5dsqGDokz6Voc /mv8tA3llVkk56JFF5FdpL5BfKQlUMmX+QhxhBD0xuDA4POczYR6mNzHWS7ifkW1ln 
zxxwiFd5myzGuIPnGvUceyEC4ZPh6xMCEPXJpHD4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Jason A. Donenfeld" Subject: [PATCH 5.17 100/111] random: remove extern from functions in header Date: Fri, 27 May 2022 10:50:12 +0200 Message-Id: <20220527084833.491338664@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 7782cfeca7d420e8bb707613d4cfb0f7ff29bb3a upstream. According to the kernel style guide, having `extern` on functions in headers is old school and deprecated, and doesn't add anything. So remove them from random.h, and tidy up the file a little bit too. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M.
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- include/linux/random.h | 71 +++++++++++++++++++-------------------------= ----- 1 file changed, 28 insertions(+), 43 deletions(-) --- a/include/linux/random.h +++ b/include/linux/random.h @@ -12,13 +12,12 @@ =20 struct notifier_block; =20 -extern void add_device_randomness(const void *, size_t); -extern void add_bootloader_randomness(const void *, size_t); -extern void add_input_randomness(unsigned int type, unsigned int code, - unsigned int value) __latent_entropy; -extern void add_interrupt_randomness(int irq) __latent_entropy; -extern void add_hwgenerator_randomness(const void *buffer, size_t count, - size_t entropy); +void add_device_randomness(const void *, size_t); +void add_bootloader_randomness(const void *, size_t); +void add_input_randomness(unsigned int type, unsigned int code, + unsigned int value) __latent_entropy; +void add_interrupt_randomness(int irq) __latent_entropy; +void add_hwgenerator_randomness(const void *buffer, size_t count, size_t e= ntropy); =20 #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) static inline void add_latent_entropy(void) @@ -26,21 +25,11 @@ static inline void add_latent_entropy(vo add_device_randomness((const void *)&latent_entropy, sizeof(latent_entrop= y)); } #else -static inline void add_latent_entropy(void) {} -#endif - -extern void get_random_bytes(void *buf, size_t nbytes); -extern int wait_for_random_bytes(void); -extern int __init random_init(const char *command_line); -extern bool rng_is_initialized(void); -extern int register_random_ready_notifier(struct notifier_block *nb); -extern int unregister_random_ready_notifier(struct notifier_block *nb); -extern size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); - -#ifndef MODULE -extern const struct file_operations random_fops, urandom_fops; +static inline void add_latent_entropy(void) { } #endif =20 +void get_random_bytes(void *buf, 
size_t nbytes); +size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); u32 get_random_u32(void); u64 get_random_u64(void); static inline unsigned int get_random_int(void) @@ -72,11 +61,17 @@ static inline unsigned long get_random_l =20 static inline unsigned long get_random_canary(void) { - unsigned long val =3D get_random_long(); - - return val & CANARY_MASK; + return get_random_long() & CANARY_MASK; } =20 +unsigned long randomize_page(unsigned long start, unsigned long range); + +int __init random_init(const char *command_line); +bool rng_is_initialized(void); +int wait_for_random_bytes(void); +int register_random_ready_notifier(struct notifier_block *nb); +int unregister_random_ready_notifier(struct notifier_block *nb); + /* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbyt= es). * Returns the result of the call to wait_for_random_bytes. */ static inline int get_random_bytes_wait(void *buf, size_t nbytes) @@ -100,8 +95,6 @@ declare_get_random_var_wait(int) declare_get_random_var_wait(long) #undef declare_get_random_var =20 -unsigned long randomize_page(unsigned long start, unsigned long range); - /* * This is designed to be standalone for just prandom * users, but for now we include it from @@ -112,22 +105,10 @@ unsigned long randomize_page(unsigned lo #ifdef CONFIG_ARCH_RANDOM # include #else -static inline bool __must_check arch_get_random_long(unsigned long *v) -{ - return false; -} -static inline bool __must_check arch_get_random_int(unsigned int *v) -{ - return false; -} -static inline bool __must_check arch_get_random_seed_long(unsigned long *v) -{ - return false; -} -static inline bool __must_check arch_get_random_seed_int(unsigned int *v) -{ - return false; -} +static inline bool __must_check arch_get_random_long(unsigned long *v) { r= eturn false; } +static inline bool __must_check arch_get_random_int(unsigned int *v) { ret= urn false; } +static inline bool __must_check arch_get_random_seed_long(unsigned long *v= ) { 
return false; } +static inline bool __must_check arch_get_random_seed_int(unsigned int *v) = { return false; } #endif =20 /* @@ -151,8 +132,12 @@ static inline bool __init arch_get_rando #endif =20 #ifdef CONFIG_SMP -extern int random_prepare_cpu(unsigned int cpu); -extern int random_online_cpu(unsigned int cpu); +int random_prepare_cpu(unsigned int cpu); +int random_online_cpu(unsigned int cpu); +#endif + +#ifndef MODULE +extern const struct file_operations random_fops, urandom_fops; #endif =20 #endif /* _LINUX_RANDOM_H */ From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0BA4C433FE for ; Fri, 27 May 2022 12:01:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234715AbiE0MBW (ORCPT ); Fri, 27 May 2022 08:01:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352831AbiE0Luz (ORCPT ); Fri, 27 May 2022 07:50:55 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8ED131312A5; Fri, 27 May 2022 04:45:42 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 42559B8091D; Fri, 27 May 2022 11:45:41 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8A2A3C385A9; Fri, 27 May 2022 11:45:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651940; bh=JHeNvVQv+nXpZEwiZAVU/EY4i1gnNMnF72ZktDobAGA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=x/OHhXRpL6V73ilUnUuE17rTv1JEbFoMG8jVqErjbgJad+4I5EfoF3EQuXGw1mxow b9MYA+yfUGkA30zhWAGYqfTA6zwsC7QRrSuaExGPTMn/CRIzCF9Acx2utHBmrXiszC RHeJd791ri+RNpA5g6uw2fDpcmZQWV0XAlU1VF1A= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Jason A. Donenfeld" Subject: [PATCH 5.17 101/111] random: use proper return types on get_random_{int,long}_wait() Date: Fri, 27 May 2022 10:50:13 +0200 Message-Id: <20220527084833.622978721@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 7c3a8a1db5e03d02cc0abb3357a84b8b326dfac3 upstream. Before these were returning signed values, but the API is intended to be used with unsigned values. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- include/linux/random.h | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) --- a/include/linux/random.h +++ b/include/linux/random.h @@ -81,18 +81,18 @@ static inline int get_random_bytes_wait( return ret; } =20 -#define declare_get_random_var_wait(var) \ - static inline int get_random_ ## var ## _wait(var *out) { \ +#define declare_get_random_var_wait(name, ret_type) \ + static inline int get_random_ ## name ## _wait(ret_type *out) { \ int ret =3D wait_for_random_bytes(); \ if (unlikely(ret)) \ return ret; \ - *out =3D get_random_ ## var(); \ + *out =3D get_random_ ## name(); \ return 0; \ } -declare_get_random_var_wait(u32) -declare_get_random_var_wait(u64) -declare_get_random_var_wait(int) -declare_get_random_var_wait(long) +declare_get_random_var_wait(u32, u32) +declare_get_random_var_wait(u64, u64) +declare_get_random_var_wait(int, unsigned int) +declare_get_random_var_wait(long, unsigned long) #undef declare_get_random_var =20 /* From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4214BC433FE for ; Fri, 27 May 2022 12:01:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352969AbiE0MBI (ORCPT ); Fri, 27 May 2022 08:01:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41356 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352835AbiE0Luz (ORCPT ); Fri, 27 May 2022 07:50:55 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 19E88126987; Fri, 27 May 2022 04:45:50 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using
TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 90D6761CF0; Fri, 27 May 2022 11:45:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9A715C385A9; Fri, 27 May 2022 11:45:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651949; bh=FO66C+CDYibIoP/G2HHOgHvtDJg+GJa47G67QqYXujg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=R5aTzI6n5wdn4+Ul93J7+l22hkqPZwnrKPVV265WLVo/fhTElNMF9ONMYJN5IMf7a 2klRvil+QSVgqvg+aWXF2wDKiSL5148HLA0xB5e+M/JKP/bSI2ri2eG/QrHzyNT026 5gIrIipXvYlkNWK2GbAAJtWU8eJ5ZpyYWsedUTjw= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Jason A. Donenfeld" Subject: [PATCH 5.17 102/111] random: make consistent use of buf and len Date: Fri, 27 May 2022 10:50:14 +0200 Message-Id: <20220527084833.762922509@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit a19402634c435a4eae226df53c141cdbb9922e7b upstream. The current code was a mix of "nbytes", "count", "size", "buffer", "in", and so forth. Instead, let's clean this up by naming input parameters "buf" (or "ubuf") and "len", so that you always understand that you're reading this variety of function argument. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. 
Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 193 +++++++++++++++++++++++---------------------= ----- include/linux/random.h | 10 +- 2 files changed, 99 insertions(+), 104 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -208,7 +208,7 @@ static void _warn_unseeded_randomness(co * * There are a few exported interfaces for use by other drivers: * - * void get_random_bytes(void *buf, size_t nbytes) + * void get_random_bytes(void *buf, size_t len) * u32 get_random_u32() * u64 get_random_u64() * unsigned int get_random_int() @@ -249,7 +249,7 @@ static DEFINE_PER_CPU(struct crng, crngs }; =20 /* Used by crng_reseed() and crng_make_state() to extract a new seed from = the input pool. */ -static void extract_entropy(void *buf, size_t nbytes); +static void extract_entropy(void *buf, size_t len); =20 /* This extracts a new crng key from the input pool. */ static void crng_reseed(void) @@ -403,24 +403,24 @@ static void crng_make_state(u32 chacha_s local_unlock_irqrestore(&crngs.lock, flags); } =20 -static void _get_random_bytes(void *buf, size_t nbytes) +static void _get_random_bytes(void *buf, size_t len) { u32 chacha_state[CHACHA_STATE_WORDS]; u8 tmp[CHACHA_BLOCK_SIZE]; - size_t len; + size_t first_block_len; =20 - if (!nbytes) + if (!len) return; =20 - len =3D min_t(size_t, 32, nbytes); - crng_make_state(chacha_state, buf, len); - nbytes -=3D len; - buf +=3D len; + first_block_len =3D min_t(size_t, 32, len); + crng_make_state(chacha_state, buf, first_block_len); + len -=3D first_block_len; + buf +=3D first_block_len; =20 - while (nbytes) { - if (nbytes < CHACHA_BLOCK_SIZE) { + while (len) { + if (len < CHACHA_BLOCK_SIZE) { chacha20_block(chacha_state, tmp); - memcpy(buf, tmp, nbytes); + memcpy(buf, tmp, len); memzero_explicit(tmp, sizeof(tmp)); break; } @@ -428,7 +428,7 @@ static void _get_random_bytes(void *buf, chacha20_block(chacha_state, buf); if 
(unlikely(chacha_state[12] =3D=3D 0)) ++chacha_state[13]; - nbytes -=3D CHACHA_BLOCK_SIZE; + len -=3D CHACHA_BLOCK_SIZE; buf +=3D CHACHA_BLOCK_SIZE; } =20 @@ -445,20 +445,20 @@ static void _get_random_bytes(void *buf, * wait_for_random_bytes() should be called and return 0 at least once * at any point prior. */ -void get_random_bytes(void *buf, size_t nbytes) +void get_random_bytes(void *buf, size_t len) { warn_unseeded_randomness(); - _get_random_bytes(buf, nbytes); + _get_random_bytes(buf, len); } EXPORT_SYMBOL(get_random_bytes); =20 -static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) +static ssize_t get_random_bytes_user(void __user *ubuf, size_t len) { - size_t len, left, ret =3D 0; + size_t block_len, left, ret =3D 0; u32 chacha_state[CHACHA_STATE_WORDS]; u8 output[CHACHA_BLOCK_SIZE]; =20 - if (!nbytes) + if (!len) return 0; =20 /* @@ -472,8 +472,8 @@ static ssize_t get_random_bytes_user(voi * use chacha_state after, so we can simply return those bytes to * the user directly. */ - if (nbytes <=3D CHACHA_KEY_SIZE) { - ret =3D nbytes - copy_to_user(buf, &chacha_state[4], nbytes); + if (len <=3D CHACHA_KEY_SIZE) { + ret =3D len - copy_to_user(ubuf, &chacha_state[4], len); goto out_zero_chacha; } =20 @@ -482,17 +482,17 @@ static ssize_t get_random_bytes_user(voi if (unlikely(chacha_state[12] =3D=3D 0)) ++chacha_state[13]; =20 - len =3D min_t(size_t, nbytes, CHACHA_BLOCK_SIZE); - left =3D copy_to_user(buf, output, len); + block_len =3D min_t(size_t, len, CHACHA_BLOCK_SIZE); + left =3D copy_to_user(ubuf, output, block_len); if (left) { - ret +=3D len - left; + ret +=3D block_len - left; break; } =20 - buf +=3D len; - ret +=3D len; - nbytes -=3D len; - if (!nbytes) + ubuf +=3D block_len; + ret +=3D block_len; + len -=3D block_len; + if (!len) break; =20 BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE !=3D 0); @@ -666,24 +666,24 @@ unsigned long randomize_page(unsigned lo * use. Use get_random_bytes() instead. It returns the number of * bytes filled in. 
*/ -size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) +size_t __must_check get_random_bytes_arch(void *buf, size_t len) { - size_t left =3D nbytes; + size_t left =3D len; u8 *p =3D buf; =20 while (left) { unsigned long v; - size_t chunk =3D min_t(size_t, left, sizeof(unsigned long)); + size_t block_len =3D min_t(size_t, left, sizeof(unsigned long)); =20 if (!arch_get_random_long(&v)) break; =20 - memcpy(p, &v, chunk); - p +=3D chunk; - left -=3D chunk; + memcpy(p, &v, block_len); + p +=3D block_len; + left -=3D block_len; } =20 - return nbytes - left; + return len - left; } EXPORT_SYMBOL(get_random_bytes_arch); =20 @@ -694,15 +694,15 @@ EXPORT_SYMBOL(get_random_bytes_arch); * * Callers may add entropy via: * - * static void mix_pool_bytes(const void *in, size_t nbytes) + * static void mix_pool_bytes(const void *buf, size_t len) * * After which, if added entropy should be credited: * - * static void credit_init_bits(size_t nbits) + * static void credit_init_bits(size_t bits) * * Finally, extract entropy via: * - * static void extract_entropy(void *buf, size_t nbytes) + * static void extract_entropy(void *buf, size_t len) * **********************************************************************/ =20 @@ -724,9 +724,9 @@ static struct { .lock =3D __SPIN_LOCK_UNLOCKED(input_pool.lock), }; =20 -static void _mix_pool_bytes(const void *in, size_t nbytes) +static void _mix_pool_bytes(const void *buf, size_t len) { - blake2s_update(&input_pool.hash, in, nbytes); + blake2s_update(&input_pool.hash, buf, len); } =20 /* @@ -734,12 +734,12 @@ static void _mix_pool_bytes(const void * * update the initialization bit counter; the caller should call * credit_init_bits if this is appropriate. 
*/ -static void mix_pool_bytes(const void *in, size_t nbytes) +static void mix_pool_bytes(const void *buf, size_t len) { unsigned long flags; =20 spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(in, nbytes); + _mix_pool_bytes(buf, len); spin_unlock_irqrestore(&input_pool.lock, flags); } =20 @@ -747,7 +747,7 @@ static void mix_pool_bytes(const void *i * This is an HKDF-like construction for using the hashed collected entropy * as a PRF key, that's then expanded block-by-block. */ -static void extract_entropy(void *buf, size_t nbytes) +static void extract_entropy(void *buf, size_t len) { unsigned long flags; u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; @@ -776,12 +776,12 @@ static void extract_entropy(void *buf, s spin_unlock_irqrestore(&input_pool.lock, flags); memzero_explicit(next_key, sizeof(next_key)); =20 - while (nbytes) { - i =3D min_t(size_t, nbytes, BLAKE2S_HASH_SIZE); + while (len) { + i =3D min_t(size_t, len, BLAKE2S_HASH_SIZE); /* output =3D HASHPRF(seed, RDSEED || ++counter) */ ++block.counter; blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed)); - nbytes -=3D i; + len -=3D i; buf +=3D i; } =20 @@ -789,16 +789,16 @@ static void extract_entropy(void *buf, s memzero_explicit(&block, sizeof(block)); } =20 -static void credit_init_bits(size_t nbits) +static void credit_init_bits(size_t bits) { static struct execute_work set_ready; unsigned int new, orig, add; unsigned long flags; =20 - if (crng_ready() || !nbits) + if (crng_ready() || !bits) return; =20 - add =3D min_t(size_t, nbits, POOL_BITS); + add =3D min_t(size_t, bits, POOL_BITS); =20 do { orig =3D READ_ONCE(input_pool.init_bits); @@ -834,13 +834,11 @@ static void credit_init_bits(size_t nbit * The following exported functions are used for pushing entropy into * the above entropy accumulation routines: * - * void add_device_randomness(const void *buf, size_t size); - * void add_hwgenerator_randomness(const void *buffer, size_t count, - * size_t entropy); - * void 
add_bootloader_randomness(const void *buf, size_t size); + * void add_device_randomness(const void *buf, size_t len); + * void add_hwgenerator_randomness(const void *buf, size_t len, size_t ent= ropy); + * void add_bootloader_randomness(const void *buf, size_t len); * void add_interrupt_randomness(int irq); - * void add_input_randomness(unsigned int type, unsigned int code, - * unsigned int value); + * void add_input_randomness(unsigned int type, unsigned int code, unsigne= d int value); * void add_disk_randomness(struct gendisk *disk); * * add_device_randomness() adds data to the input pool that @@ -904,7 +902,7 @@ int __init random_init(const char *comma { ktime_t now =3D ktime_get_real(); unsigned int i, arch_bytes; - unsigned long rv; + unsigned long entropy; =20 #if defined(LATENT_ENTROPY_PLUGIN) static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent= _entropy; @@ -912,13 +910,13 @@ int __init random_init(const char *comma #endif =20 for (i =3D 0, arch_bytes =3D BLAKE2S_BLOCK_SIZE; - i < BLAKE2S_BLOCK_SIZE; i +=3D sizeof(rv)) { - if (!arch_get_random_seed_long_early(&rv) && - !arch_get_random_long_early(&rv)) { - rv =3D random_get_entropy(); - arch_bytes -=3D sizeof(rv); + i < BLAKE2S_BLOCK_SIZE; i +=3D sizeof(entropy)) { + if (!arch_get_random_seed_long_early(&entropy) && + !arch_get_random_long_early(&entropy)) { + entropy =3D random_get_entropy(); + arch_bytes -=3D sizeof(entropy); } - _mix_pool_bytes(&rv, sizeof(rv)); + _mix_pool_bytes(&entropy, sizeof(entropy)); } _mix_pool_bytes(&now, sizeof(now)); _mix_pool_bytes(utsname(), sizeof(*(utsname()))); @@ -941,14 +939,14 @@ int __init random_init(const char *comma * the entropy pool having similar initial state across largely * identical devices. 
*/ -void add_device_randomness(const void *buf, size_t size) +void add_device_randomness(const void *buf, size_t len) { unsigned long entropy =3D random_get_entropy(); unsigned long flags; =20 spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(&entropy, sizeof(entropy)); - _mix_pool_bytes(buf, size); + _mix_pool_bytes(buf, len); spin_unlock_irqrestore(&input_pool.lock, flags); } EXPORT_SYMBOL(add_device_randomness); @@ -958,10 +956,9 @@ EXPORT_SYMBOL(add_device_randomness); * Those devices may produce endless random bits and will be throttled * when our pool is full. */ -void add_hwgenerator_randomness(const void *buffer, size_t count, - size_t entropy) +void add_hwgenerator_randomness(const void *buf, size_t len, size_t entrop= y) { - mix_pool_bytes(buffer, count); + mix_pool_bytes(buf, len); credit_init_bits(entropy); =20 /* @@ -977,11 +974,11 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_random * Handle random seed passed by bootloader, and credit it if * CONFIG_RANDOM_TRUST_BOOTLOADER is set. 
*/ -void add_bootloader_randomness(const void *buf, size_t size) +void add_bootloader_randomness(const void *buf, size_t len) { - mix_pool_bytes(buf, size); + mix_pool_bytes(buf, len); if (trust_bootloader) - credit_init_bits(size * 8); + credit_init_bits(len * 8); } EXPORT_SYMBOL_GPL(add_bootloader_randomness); =20 @@ -1181,8 +1178,7 @@ static void add_timer_randomness(struct credit_init_bits(bits); } =20 -void add_input_randomness(unsigned int type, unsigned int code, - unsigned int value) +void add_input_randomness(unsigned int type, unsigned int code, unsigned i= nt value) { static unsigned char last_value; static struct timer_rand_state input_timer_state =3D { INITIAL_JIFFIES }; @@ -1301,8 +1297,7 @@ static void try_to_generate_entropy(void * **********************************************************************/ =20 -SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, - flags) +SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int,= flags) { if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE)) return -EINVAL; @@ -1314,8 +1309,8 @@ SYSCALL_DEFINE3(getrandom, char __user * if ((flags & (GRND_INSECURE | GRND_RANDOM)) =3D=3D (GRND_INSECURE | GRND_= RANDOM)) return -EINVAL; =20 - if (count > INT_MAX) - count =3D INT_MAX; + if (len > INT_MAX) + len =3D INT_MAX; =20 if (!crng_ready() && !(flags & GRND_INSECURE)) { int ret; @@ -1326,7 +1321,7 @@ SYSCALL_DEFINE3(getrandom, char __user * if (unlikely(ret)) return ret; } - return get_random_bytes_user(buf, count); + return get_random_bytes_user(ubuf, len); } =20 static __poll_t random_poll(struct file *file, poll_table *wait) @@ -1335,21 +1330,21 @@ static __poll_t random_poll(struct file return crng_ready() ? 
EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM; } =20 -static int write_pool(const char __user *ubuf, size_t count) +static int write_pool(const char __user *ubuf, size_t len) { - size_t len; + size_t block_len; int ret =3D 0; u8 block[BLAKE2S_BLOCK_SIZE]; =20 - while (count) { - len =3D min(count, sizeof(block)); - if (copy_from_user(block, ubuf, len)) { + while (len) { + block_len =3D min(len, sizeof(block)); + if (copy_from_user(block, ubuf, block_len)) { ret =3D -EFAULT; goto out; } - count -=3D len; - ubuf +=3D len; - mix_pool_bytes(block, len); + len -=3D block_len; + ubuf +=3D block_len; + mix_pool_bytes(block, block_len); cond_resched(); } =20 @@ -1358,20 +1353,20 @@ out: return ret; } =20 -static ssize_t random_write(struct file *file, const char __user *buffer, - size_t count, loff_t *ppos) +static ssize_t random_write(struct file *file, const char __user *ubuf, + size_t len, loff_t *ppos) { int ret; =20 - ret =3D write_pool(buffer, count); + ret =3D write_pool(ubuf, len); if (ret) return ret; =20 - return (ssize_t)count; + return (ssize_t)len; } =20 -static ssize_t urandom_read(struct file *file, char __user *buf, size_t nb= ytes, - loff_t *ppos) +static ssize_t urandom_read(struct file *file, char __user *ubuf, + size_t len, loff_t *ppos) { static int maxwarn =3D 10; =20 @@ -1381,22 +1376,22 @@ static ssize_t urandom_read(struct file else if (ratelimit_disable || __ratelimit(&urandom_warning)) { --maxwarn; pr_notice("%s: uninitialized urandom read (%zd bytes read)\n", - current->comm, nbytes); + current->comm, len); } } =20 - return get_random_bytes_user(buf, nbytes); + return get_random_bytes_user(ubuf, len); } =20 -static ssize_t random_read(struct file *file, char __user *buf, size_t nby= tes, - loff_t *ppos) +static ssize_t random_read(struct file *file, char __user *ubuf, + size_t len, loff_t *ppos) { int ret; =20 ret =3D wait_for_random_bytes(); if (ret !=3D 0) return ret; - return get_random_bytes_user(buf, nbytes); + return 
	get_random_bytes_user(ubuf, len);
 }

 static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
@@ -1521,7 +1516,7 @@ static u8 sysctl_bootid[UUID_SIZE];
  * UUID. The difference is in whether table->data is NULL; if it is,
  * then a new UUID is generated and returned to the user.
  */
-static int proc_do_uuid(struct ctl_table *table, int write, void *buffer,
+static int proc_do_uuid(struct ctl_table *table, int write, void *buf,
 			size_t *lenp, loff_t *ppos)
 {
 	u8 tmp_uuid[UUID_SIZE], *uuid;
@@ -1548,14 +1543,14 @@ static int proc_do_uuid(struct ctl_table
 	}

 	snprintf(uuid_string, sizeof(uuid_string), "%pU", uuid);
-	return proc_dostring(&fake_table, 0, buffer, lenp, ppos);
+	return proc_dostring(&fake_table, 0, buf, lenp, ppos);
 }

 /* The same as proc_dointvec, but writes don't change anything. */
-static int proc_do_rointvec(struct ctl_table *table, int write, void *buffer,
+static int proc_do_rointvec(struct ctl_table *table, int write, void *buf,
 			    size_t *lenp, loff_t *ppos)
 {
-	return write ? 0 : proc_dointvec(table, 0, buffer, lenp, ppos);
+	return write ? 0 : proc_dointvec(table, 0, buf, lenp, ppos);
 }

 static struct ctl_table random_table[] = {
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -12,12 +12,12 @@

 struct notifier_block;

-void add_device_randomness(const void *, size_t);
-void add_bootloader_randomness(const void *, size_t);
+void add_device_randomness(const void *buf, size_t len);
+void add_bootloader_randomness(const void *buf, size_t len);
 void add_input_randomness(unsigned int type, unsigned int code,
 			  unsigned int value) __latent_entropy;
 void add_interrupt_randomness(int irq) __latent_entropy;
-void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy);
+void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);

 #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
 static inline void add_latent_entropy(void)
@@ -28,8 +28,8 @@ static inline void add_latent_entropy(vo
 static inline void add_latent_entropy(void) { }
 #endif

-void get_random_bytes(void *buf, size_t nbytes);
-size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes);
+void get_random_bytes(void *buf, size_t len);
+size_t __must_check get_random_bytes_arch(void *buf, size_t len);
 u32 get_random_u32(void);
 u64 get_random_u64(void);
 static inline unsigned int get_random_int(void)

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 103/111] random: move initialization functions out of hot pages
Date: Fri, 27 May 2022 10:50:15 +0200
Message-Id: <20220527084833.881967920@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 560181c27b582557d633ecb608110075433383af upstream.

Much of random.c is devoted to initializing the rng and accounting for
when a sufficient amount of entropy has been added. In a perfect world,
this would all happen during init, and so we could mark these functions
as __init.
But in reality, this isn't the case: sometimes the rng only finishes
initializing some seconds after system init is finished. For this
reason, at the moment, a whole host of functions that are only used
relatively close to system init and then never again are intermixed
with functions that are used in hot code all the time. This creates
more cache misses than necessary.

In order to pack the hot code closer together, this commit moves the
initialization functions that can't be marked as __init into
.text.unlikely by way of the __cold attribute.

Of particular note is moving credit_init_bits() into a macro wrapper
that inlines the crng_ready() static branch check. This avoids a
function call to a nop+ret, and most notably prevents extra entropy
arithmetic from being computed in mix_interrupt_randomness().

Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 40 ++++++++++++++++++----------------------
 1 file changed, 18 insertions(+), 22 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -109,7 +109,7 @@ bool rng_is_initialized(void)
 }
 EXPORT_SYMBOL(rng_is_initialized);

-static void crng_set_ready(struct work_struct *work)
+static void __cold crng_set_ready(struct work_struct *work)
 {
 	static_branch_enable(&crng_is_ready);
 }
@@ -148,7 +148,7 @@ EXPORT_SYMBOL(wait_for_random_bytes);
  * returns: 0 if callback is successfully added
  *	    -EALREADY if pool is already initialised (callback not called)
  */
-int register_random_ready_notifier(struct notifier_block *nb)
+int __cold register_random_ready_notifier(struct notifier_block *nb)
 {
 	unsigned long flags;
 	int ret = -EALREADY;
@@ -166,7 +166,7 @@ int register_random_ready_notifier(struc
 /*
  * Delete a previously registered readiness callback function.
  */
-int unregister_random_ready_notifier(struct notifier_block *nb)
+int __cold unregister_random_ready_notifier(struct notifier_block *nb)
 {
 	unsigned long flags;
 	int ret;
@@ -177,7 +177,7 @@ int unregister_random_ready_notifier(str
 	return ret;
 }

-static void process_random_ready_list(void)
+static void __cold process_random_ready_list(void)
 {
 	unsigned long flags;

@@ -187,15 +187,9 @@ static void process_random_ready_list(vo
 }

 #define warn_unseeded_randomness() \
-	_warn_unseeded_randomness(__func__, (void *)_RET_IP_)
-
-static void _warn_unseeded_randomness(const char *func_name, void *caller)
-{
-	if (!IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) || crng_ready())
-		return;
-	printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n",
-			func_name, caller, crng_init);
-}
+	if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \
+		printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \
+				__func__, (void *)_RET_IP_, crng_init)


 /*********************************************************************
@@ -614,7 +608,7 @@ EXPORT_SYMBOL(get_random_u32);
  * This function is called when the CPU is coming up, with entry
  * CPUHP_RANDOM_PREPARE, which comes before CPUHP_WORKQUEUE_PREP.
  */
-int random_prepare_cpu(unsigned int cpu)
+int __cold random_prepare_cpu(unsigned int cpu)
 {
 	/*
 	 * When the cpu comes back online, immediately invalidate both
@@ -789,13 +783,15 @@ static void extract_entropy(void *buf, s
 	memzero_explicit(&block, sizeof(block));
 }

-static void credit_init_bits(size_t bits)
+#define credit_init_bits(bits) if (!crng_ready()) _credit_init_bits(bits)
+
+static void __cold _credit_init_bits(size_t bits)
 {
 	static struct execute_work set_ready;
 	unsigned int new, orig, add;
 	unsigned long flags;

-	if (crng_ready() || !bits)
+	if (!bits)
 		return;

 	add = min_t(size_t, bits, POOL_BITS);
@@ -974,7 +970,7 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_random
  * Handle random seed passed by bootloader, and credit it if
  * CONFIG_RANDOM_TRUST_BOOTLOADER is set.
  */
-void add_bootloader_randomness(const void *buf, size_t len)
+void __cold add_bootloader_randomness(const void *buf, size_t len)
 {
 	mix_pool_bytes(buf, len);
 	if (trust_bootloader)
@@ -1020,7 +1016,7 @@ static void fast_mix(unsigned long s[4],
  * This function is called when the CPU has just come online, with
  * entry CPUHP_AP_RANDOM_ONLINE, just after CPUHP_AP_WORKQUEUE_ONLINE.
  */
-int random_online_cpu(unsigned int cpu)
+int __cold random_online_cpu(unsigned int cpu)
 {
 	/*
 	 * During CPU shutdown and before CPU onlining, add_interrupt_
@@ -1175,7 +1171,7 @@ static void add_timer_randomness(struct
 	if (in_hardirq())
 		this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1;
 	else
-		credit_init_bits(bits);
+		_credit_init_bits(bits);
 }

 void add_input_randomness(unsigned int type, unsigned int code, unsigned int value)
@@ -1203,7 +1199,7 @@ void add_disk_randomness(struct gendisk
 }
 EXPORT_SYMBOL_GPL(add_disk_randomness);

-void rand_initialize_disk(struct gendisk *disk)
+void __cold rand_initialize_disk(struct gendisk *disk)
 {
 	struct timer_rand_state *state;

@@ -1232,7 +1228,7 @@ void rand_initialize_disk(struct gendisk
  *
  * So the re-arming always happens in the entropy loop itself.
  */
-static void entropy_timer(struct timer_list *t)
+static void __cold entropy_timer(struct timer_list *t)
 {
 	credit_init_bits(1);
 }
@@ -1241,7 +1237,7 @@ static void entropy_timer(struct timer_l
 * If we have an actual cycle counter, see if we can
 * generate enough entropy with timing noise
 */
-static void try_to_generate_entropy(void)
+static void __cold try_to_generate_entropy(void)
 {
 	struct {
 		unsigned long entropy;

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andrew Morton, "Jason A. Donenfeld"
Subject: [PATCH 5.17 104/111] random: move randomize_page() into mm where it belongs
Date: Fri, 27 May 2022 10:50:16 +0200
Message-Id: <20220527084833.997496542@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 5ad7dd882e45d7fe432c32e896e2aaa0b21746ea upstream.

randomize_page is an mm function. It is documented like one. It
contains the history of one. It has the naming convention of one. It
looks just like another very similar function in mm,
randomize_stack_top(). And it has always been maintained and updated
by mm people. There is no need for it to be in random.c.
In the "which shape does not look like the other ones" test, pointing
to randomize_page() is correct. So move randomize_page() into mm/util.c,
right next to the similar randomize_stack_top() function.

This commit contains no actual code changes.

Cc: Andrew Morton
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c  | 32 --------------------------------
 include/linux/mm.h     |  1 +
 include/linux/random.h |  2 --
 mm/util.c              | 32 ++++++++++++++++++++++++++++++++
 4 files changed, 33 insertions(+), 34 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -622,38 +622,6 @@ int __cold random_prepare_cpu(unsigned i
 }
 #endif

-/**
- * randomize_page - Generate a random, page aligned address
- * @start:	The smallest acceptable address the caller will take.
- * @range:	The size of the area, starting at @start, within which the
- *		random address must fall.
- *
- * If @start + @range would overflow, @range is capped.
- *
- * NOTE: Historical use of randomize_range, which this replaces, presumed that
- * @start was already page aligned. We now align it regardless.
- *
- * Return: A page aligned address within [start, start + range). On error,
- * @start is returned.
- */
-unsigned long randomize_page(unsigned long start, unsigned long range)
-{
-	if (!PAGE_ALIGNED(start)) {
-		range -= PAGE_ALIGN(start) - start;
-		start = PAGE_ALIGN(start);
-	}
-
-	if (start > ULONG_MAX - range)
-		range = ULONG_MAX - start;
-
-	range >>= PAGE_SHIFT;
-
-	if (range == 0)
-		return start;
-
-	return start + (get_random_long() % range << PAGE_SHIFT);
-}
-
 /*
  * This function will use the architecture-specific hardware random
  * number generator if it is available. It is not recommended for
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2678,6 +2678,7 @@ extern int install_special_mapping(struc
 				   unsigned long flags, struct page **pages);

 unsigned long randomize_stack_top(unsigned long stack_top);
+unsigned long randomize_page(unsigned long start, unsigned long range);

 extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);

--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -64,8 +64,6 @@ static inline unsigned long get_random_c
 	return get_random_long() & CANARY_MASK;
 }

-unsigned long randomize_page(unsigned long start, unsigned long range);
-
 int __init random_init(const char *command_line);
 bool rng_is_initialized(void);
 int wait_for_random_bytes(void);
--- a/mm/util.c
+++ b/mm/util.c
@@ -343,6 +343,38 @@ unsigned long randomize_stack_top(unsign
 #endif
 }

+/**
+ * randomize_page - Generate a random, page aligned address
+ * @start:	The smallest acceptable address the caller will take.
+ * @range:	The size of the area, starting at @start, within which the
+ *		random address must fall.
+ *
+ * If @start + @range would overflow, @range is capped.
+ *
+ * NOTE: Historical use of randomize_range, which this replaces, presumed that
+ * @start was already page aligned. We now align it regardless.
+ *
+ * Return: A page aligned address within [start, start + range). On error,
+ * @start is returned.
+ */
+unsigned long randomize_page(unsigned long start, unsigned long range)
+{
+	if (!PAGE_ALIGNED(start)) {
+		range -= PAGE_ALIGN(start) - start;
+		start = PAGE_ALIGN(start);
+	}
+
+	if (start > ULONG_MAX - range)
+		range = ULONG_MAX - start;
+
+	range >>= PAGE_SHIFT;
+
+	if (range == 0)
+		return start;
+
+	return start + (get_random_long() % range << PAGE_SHIFT);
+}
+
 #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
 unsigned long arch_randomize_brk(struct mm_struct *mm)
 {

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski, "Jason A. Donenfeld"
Subject: [PATCH 5.17 105/111] random: unify batched entropy implementations
Date: Fri, 27 May 2022 10:50:17 +0200
Message-Id: <20220527084834.133562744@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 3092adcef3ffd2ef59634998297ca8358461ebce upstream.

There are currently two separate batched entropy implementations, for
u32 and u64, with nearly identical code, with the goal of avoiding
unaligned memory accesses and letting the buffers be used more
efficiently. Having to maintain these two functions independently is a
bit of a hassle though, considering that they always need to be kept in
sync. This commit factors them out into a type-generic macro, so that
the expansion produces the same code as before, such that diffing the
assembly shows no differences. This will also make it easier in the
future to add u16 and u8 batches.

This was initially tested using an always_inline function and letting
gcc constant fold the type size in, but the code gen was less efficient,
and in general it was more verbose and harder to follow. So this patch
goes with the boring macro solution, similar to what's already done for
the _wait functions in random.h.

Cc: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 145 ++++++++++++++++++-----------------------------
 1 file changed, 54 insertions(+), 91 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -509,99 +509,62 @@ out_zero_chacha:
 * provided by this function is okay, the function wait_for_random_bytes()
 * should be called and return 0 at least once at any point prior.
 */
-struct batched_entropy {
-	union {
-		/*
-		 * We make this 1.5x a ChaCha block, so that we get the
-		 * remaining 32 bytes from fast key erasure, plus one full
-		 * block from the detached ChaCha state. We can increase
-		 * the size of this later if needed so long as we keep the
-		 * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE.
-		 */
-		u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))];
-		u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))];
-	};
-	local_lock_t lock;
-	unsigned long generation;
-	unsigned int position;
-};

+#define DEFINE_BATCHED_ENTROPY(type) \
+struct batch_ ##type { \
+	/* \
+	 * We make this 1.5x a ChaCha block, so that we get the \
+	 * remaining 32 bytes from fast key erasure, plus one full \
+	 * block from the detached ChaCha state. We can increase \
+	 * the size of this later if needed so long as we keep the \
+	 * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. \
+	 */ \
+	type entropy[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(type))]; \
+	local_lock_t lock; \
+	unsigned long generation; \
+	unsigned int position; \
+}; \
+ \
+static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = { \
+	.lock = INIT_LOCAL_LOCK(batched_entropy_ ##type.lock), \
+	.position = UINT_MAX \
+}; \
+ \
+type get_random_ ##type(void) \
+{ \
+	type ret; \
+	unsigned long flags; \
+	struct batch_ ##type *batch; \
+	unsigned long next_gen; \
+ \
+	warn_unseeded_randomness(); \
+ \
+	if (!crng_ready()) { \
+		_get_random_bytes(&ret, sizeof(ret)); \
+		return ret; \
+	} \
+ \
+	local_lock_irqsave(&batched_entropy_ ##type.lock, flags); \
+	batch = raw_cpu_ptr(&batched_entropy_##type); \
+ \
+	next_gen = READ_ONCE(base_crng.generation); \
+	if (batch->position >= ARRAY_SIZE(batch->entropy) || \
+	    next_gen != batch->generation) { \
+		_get_random_bytes(batch->entropy, sizeof(batch->entropy)); \
+		batch->position = 0; \
+		batch->generation = next_gen; \
+	} \
+ \
+	ret = batch->entropy[batch->position]; \
+	batch->entropy[batch->position] = 0; \
+	++batch->position; \
+	local_unlock_irqrestore(&batched_entropy_ ##type.lock, flags); \
+	return ret; \
+} \
+EXPORT_SYMBOL(get_random_ ##type);

-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
-	.lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock),
-	.position = UINT_MAX
-};
-
-u64 get_random_u64(void)
-{
-	u64 ret;
-	unsigned long flags;
-	struct batched_entropy *batch;
-	unsigned long next_gen;
-
-	warn_unseeded_randomness();
-
-	if (!crng_ready()) {
-		_get_random_bytes(&ret, sizeof(ret));
-		return ret;
-	}
-
-	local_lock_irqsave(&batched_entropy_u64.lock, flags);
-	batch = raw_cpu_ptr(&batched_entropy_u64);
-
-	next_gen = READ_ONCE(base_crng.generation);
-	if (batch->position >= ARRAY_SIZE(batch->entropy_u64) ||
-	    next_gen != batch->generation) {
-		_get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64));
-		batch->position = 0;
-		batch->generation = next_gen;
-	}
-
-	ret = batch->entropy_u64[batch->position];
-	batch->entropy_u64[batch->position] = 0;
-	++batch->position;
-	local_unlock_irqrestore(&batched_entropy_u64.lock, flags);
-	return ret;
-}
-EXPORT_SYMBOL(get_random_u64);
-
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
-	.lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock),
-	.position = UINT_MAX
-};
-
-u32 get_random_u32(void)
-{
-	u32 ret;
-	unsigned long flags;
-	struct batched_entropy *batch;
-	unsigned long next_gen;
-
-	warn_unseeded_randomness();
-
-	if (!crng_ready()) {
-		_get_random_bytes(&ret, sizeof(ret));
-		return ret;
-	}
-
-	local_lock_irqsave(&batched_entropy_u32.lock, flags);
-	batch = raw_cpu_ptr(&batched_entropy_u32);
-
-	next_gen = READ_ONCE(base_crng.generation);
-	if (batch->position >= ARRAY_SIZE(batch->entropy_u32) ||
-	    next_gen != batch->generation) {
-		_get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32));
-		batch->position = 0;
-		batch->generation = next_gen;
-	}
-
-	ret = batch->entropy_u32[batch->position];
-	batch->entropy_u32[batch->position] = 0;
-	++batch->position;
-	local_unlock_irqrestore(&batched_entropy_u32.lock, flags);
-	return ret;
-}
-EXPORT_SYMBOL(get_random_u32);
+DEFINE_BATCHED_ENTROPY(u64)
+DEFINE_BATCHED_ENTROPY(u32)

 #ifdef CONFIG_SMP
 /*

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jens Axboe, Al Viro, "Jason A. Donenfeld"
Subject: [PATCH 5.17 106/111] random: convert to using fops->read_iter()
Date: Fri, 27 May 2022 10:50:18 +0200
Message-Id: <20220527084834.270012151@linuxfoundation.org>

From: Jens Axboe

commit 1b388e7765f2eaa137cf5d92b47ef5925ad83ced upstream.

This is a pre-requisite to wiring up splice() again for the random
and urandom drivers. It also allows us to remove the INT_MAX check in
getrandom(), because import_single_range() applies capping internally.
Signed-off-by: Jens Axboe
[Jason: rewrote get_random_bytes_user() to simplify and also incorporate
 additional suggestions from Al.]
Cc: Al Viro
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 66 ++++++++++++++++++++++-----------------------
 1 file changed, 30 insertions(+), 36 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -52,6 +52,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -446,13 +447,13 @@ void get_random_bytes(void *buf, size_t
 }
 EXPORT_SYMBOL(get_random_bytes);

-static ssize_t get_random_bytes_user(void __user *ubuf, size_t len)
+static ssize_t get_random_bytes_user(struct iov_iter *iter)
 {
-	size_t block_len, left, ret = 0;
 	u32 chacha_state[CHACHA_STATE_WORDS];
-	u8 output[CHACHA_BLOCK_SIZE];
+	u8 block[CHACHA_BLOCK_SIZE];
+	size_t ret = 0, copied;

-	if (!len)
+	if (unlikely(!iov_iter_count(iter)))
 		return 0;

 	/*
@@ -466,30 +467,22 @@ static ssize_t get_random_bytes_user(voi
 	 * use chacha_state after, so we can simply return those bytes to
 	 * the user directly.
 	 */
-	if (len <= CHACHA_KEY_SIZE) {
-		ret = len - copy_to_user(ubuf, &chacha_state[4], len);
+	if (iov_iter_count(iter) <= CHACHA_KEY_SIZE) {
+		ret = copy_to_iter(&chacha_state[4], CHACHA_KEY_SIZE, iter);
 		goto out_zero_chacha;
 	}

 	for (;;) {
-		chacha20_block(chacha_state, output);
+		chacha20_block(chacha_state, block);
 		if (unlikely(chacha_state[12] == 0))
 			++chacha_state[13];

-		block_len = min_t(size_t, len, CHACHA_BLOCK_SIZE);
-		left = copy_to_user(ubuf, output, block_len);
-		if (left) {
-			ret += block_len - left;
-			break;
-		}
-
-		ubuf += block_len;
-		ret += block_len;
-		len -= block_len;
-		if (!len)
+		copied = copy_to_iter(block, sizeof(block), iter);
+		ret += copied;
+		if (!iov_iter_count(iter) || copied != sizeof(block))
 			break;

-		BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0);
+		BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
 		if (ret % PAGE_SIZE == 0) {
 			if (signal_pending(current))
 				break;
@@ -497,7 +490,7 @@ static ssize_t get_random_bytes_user(voi
 		}
 	}

-	memzero_explicit(output, sizeof(output));
+	memzero_explicit(block, sizeof(block));
 out_zero_chacha:
 	memzero_explicit(chacha_state, sizeof(chacha_state));
 	return ret ? ret : -EFAULT;
@@ -1226,6 +1219,10 @@ static void __cold try_to_generate_entro

 SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags)
 {
+	struct iov_iter iter;
+	struct iovec iov;
+	int ret;
+
 	if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
 		return -EINVAL;

@@ -1236,19 +1233,18 @@ SYSCALL_DEFINE3(getrandom, char __user *
 	if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
 		return -EINVAL;

-	if (len > INT_MAX)
-		len = INT_MAX;
-
 	if (!crng_ready() && !(flags & GRND_INSECURE)) {
-		int ret;
-
 		if (flags & GRND_NONBLOCK)
 			return -EAGAIN;
 		ret = wait_for_random_bytes();
 		if (unlikely(ret))
 			return ret;
 	}
-	return get_random_bytes_user(ubuf, len);
+
+	ret = import_single_range(READ, ubuf, len, &iov, &iter);
+	if (unlikely(ret))
+		return ret;
+	return get_random_bytes_user(&iter);
 }

 static __poll_t random_poll(struct file *file, poll_table *wait)
@@ -1292,8 +1288,7 @@ static ssize_t random_write(struct file
 	return (ssize_t)len;
 }

-static ssize_t urandom_read(struct file *file, char __user *ubuf,
-			    size_t len, loff_t *ppos)
+static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
 {
 	static int maxwarn = 10;

@@ -1302,23 +1297,22 @@ static ssize_t urandom_read(struct file
 			++urandom_warning.missed;
 		else if (ratelimit_disable || __ratelimit(&urandom_warning)) {
 			--maxwarn;
-			pr_notice("%s: uninitialized urandom read (%zd bytes read)\n",
-				  current->comm, len);
+			pr_notice("%s: uninitialized urandom read (%zu bytes read)\n",
+				  current->comm, iov_iter_count(iter));
 		}
 	}

-	return get_random_bytes_user(ubuf, len);
+	return get_random_bytes_user(iter);
 }

-static ssize_t random_read(struct file *file, char __user *ubuf,
-			   size_t len, loff_t *ppos)
+static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
 {
 	int ret;

 	ret = wait_for_random_bytes();
 	if (ret != 0)
 		return ret;
-	return get_random_bytes_user(ubuf, len);
+	return get_random_bytes_user(iter);
 }

 static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
@@ -1380,7 +1374,7 @@ static int random_fasync(int fd, struct
 }

 const struct file_operations random_fops = {
-	.read = random_read,
+	.read_iter = random_read_iter,
 	.write = random_write,
 	.poll = random_poll,
 	.unlocked_ioctl = random_ioctl,
@@ -1390,7 +1384,7 @@ const struct file_operations random_fops
 };

 const struct file_operations urandom_fops = {
-	.read = urandom_read,
+	.read_iter = urandom_read_iter,
 	.write = random_write,
 	.unlocked_ioctl = random_ioctl,
 	.compat_ioctl = compat_ptr_ioctl,

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jens Axboe, Al Viro, "Jason A. Donenfeld"
Subject: [PATCH 5.17 107/111] random: convert to using fops->write_iter()
Date: Fri, 27 May 2022 10:50:19 +0200
Message-Id: <20220527084834.398838429@linuxfoundation.org>

From: Jens Axboe

commit 22b0a222af4df8ee9bb8e07013ab44da9511b047 upstream.

Now that the read side has been converted to fix a regression with
splice, convert the write side as well to have some symmetry in the
interface used (and help deprecate ->write()).

Signed-off-by: Jens Axboe
[Jason: cleaned up random_ioctl a bit, require full writes in
 RNDADDENTROPY since it's crediting entropy, simplify control flow of
 write_pool(), and incorporate suggestions from Al.]
Cc: Al Viro
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 67 ++++++++++++++++++++++++++---------------------
 1 file changed, 35 insertions(+), 32 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1253,39 +1253,31 @@ static __poll_t random_poll(struct file
 	return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
 }

-static int write_pool(const char __user *ubuf, size_t len)
+static ssize_t write_pool(struct iov_iter *iter)
 {
-	size_t block_len;
-	int ret = 0;
 	u8 block[BLAKE2S_BLOCK_SIZE];
+	ssize_t ret = 0;
+	size_t copied;

-	while (len) {
-		block_len = min(len, sizeof(block));
-		if (copy_from_user(block, ubuf, block_len)) {
-			ret = -EFAULT;
-			goto out;
-		}
-		len -= block_len;
-		ubuf += block_len;
-		mix_pool_bytes(block, block_len);
+	if (unlikely(!iov_iter_count(iter)))
+		return 0;
+
+	for (;;) {
+		copied = copy_from_iter(block, sizeof(block), iter);
+		ret += copied;
+		mix_pool_bytes(block, copied);
+		if (!iov_iter_count(iter) || copied != sizeof(block))
+			break;
 		cond_resched();
 	}

-out:
 	memzero_explicit(block, sizeof(block));
-	return ret;
+	return ret ? ret : -EFAULT;
 }

-static ssize_t random_write(struct file *file, const char __user *ubuf,
-			    size_t len, loff_t *ppos)
+static ssize_t random_write_iter(struct kiocb *kiocb, struct iov_iter *iter)
 {
-	int ret;
-
-	ret = write_pool(ubuf, len);
-	if (ret)
-		return ret;
-
-	return (ssize_t)len;
+	return write_pool(iter);
 }

 static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
@@ -1317,9 +1309,8 @@ static ssize_t random_read_iter(struct k

 static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
 {
-	int size, ent_count;
 	int __user *p = (int __user *)arg;
-	int retval;
+	int ent_count;

 	switch (cmd) {
 	case RNDGETENTCNT:
@@ -1336,20 +1327,32 @@ static long random_ioctl(struct file *f,
 			return -EINVAL;
 		credit_init_bits(ent_count);
 		return 0;
-	case RNDADDENTROPY:
+	case RNDADDENTROPY: {
+		struct iov_iter iter;
+		struct iovec iov;
+		ssize_t ret;
+		int len;
+
 		if (!capable(CAP_SYS_ADMIN))
 			return -EPERM;
 		if (get_user(ent_count, p++))
 			return -EFAULT;
 		if (ent_count < 0)
 			return -EINVAL;
-		if (get_user(size, p++))
+		if (get_user(len, p++))
+			return -EFAULT;
+		ret = import_single_range(WRITE, p,
len, &iov, &iter); + if (unlikely(ret)) + return ret; + ret =3D write_pool(&iter); + if (unlikely(ret < 0)) + return ret; + /* Since we're crediting, enforce that it was all written into the pool.= */ + if (unlikely(ret !=3D len)) return -EFAULT; - retval =3D write_pool((const char __user *)p, size); - if (retval < 0) - return retval; credit_init_bits(ent_count); return 0; + } case RNDZAPENTCNT: case RNDCLEARPOOL: /* No longer has any effect. */ @@ -1375,7 +1378,7 @@ static int random_fasync(int fd, struct =20 const struct file_operations random_fops =3D { .read_iter =3D random_read_iter, - .write =3D random_write, + .write_iter =3D random_write_iter, .poll =3D random_poll, .unlocked_ioctl =3D random_ioctl, .compat_ioctl =3D compat_ptr_ioctl, @@ -1385,7 +1388,7 @@ const struct file_operations random_fops =20 const struct file_operations urandom_fops =3D { .read_iter =3D urandom_read_iter, - .write =3D random_write, + .write_iter =3D random_write_iter, .unlocked_ioctl =3D random_ioctl, .compat_ioctl =3D compat_ptr_ioctl, .fasync =3D random_fasync, From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06CA8C433FE for ; Fri, 27 May 2022 11:57:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349995AbiE0L5R (ORCPT ); Fri, 27 May 2022 07:57:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40292 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353033AbiE0LvJ (ORCPT ); Fri, 27 May 2022 07:51:09 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B70651053F1; Fri, 27 May 2022 04:46:37 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) 
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 162E361D94; Fri, 27 May 2022 11:46:37 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1EB9FC385A9; Fri, 27 May 2022 11:46:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653651996; bh=BlPUSq4NXxaVerrK3wlsEA2VYB0g37GE3yAVYt5V2tU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mCvuL2kXbvte0z9u/b9V9slpPgAi6u7Ruz/dKpDFMhNJluc71NWQkvsdeplKKs0I5 eBuBW4tTOeLk3Hv6A9V9BexBPrMPeBwuUb0qE6kmU68cpttL+SpJ6xLx9XQOX6RTbU nKMw/a4sJPK4gNG81a4TdNt29nDvcF+hLrwKsFOg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Jens Axboe , Al Viro , "Jason A. Donenfeld" Subject: [PATCH 5.17 108/111] random: wire up fops->splice_{read,write}_iter() Date: Fri, 27 May 2022 10:50:20 +0200 Message-Id: <20220527084834.539636961@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Jens Axboe commit 79025e727a846be6fd215ae9cdb654368ac3f9a6 upstream. Now that random/urandom is using {read,write}_iter, we can wire it up to using the generic splice handlers. Fixes: 36e2c7421f02 ("fs: don't allow splice read/write without explicit op= s") Signed-off-by: Jens Axboe [Jason: added the splice_write path. Note that sendfile() and such still does not work for read, though it does for write, because of a file type restriction in splice_direct_to_actor(), which I'll address separately.] Cc: Al Viro Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Justin M. Forbes Tested-by: Linux Kernel Functional Testing Tested-by: Ron Economos Tested-by: Sudip Mukherjee --- drivers/char/random.c | 4 ++++ 1 file changed, 4 insertions(+) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1384,6 +1384,8 @@ const struct file_operations random_fops .compat_ioctl =3D compat_ptr_ioctl, .fasync =3D random_fasync, .llseek =3D noop_llseek, + .splice_read =3D generic_file_splice_read, + .splice_write =3D iter_file_splice_write, }; =20 const struct file_operations urandom_fops =3D { @@ -1393,6 +1395,8 @@ const struct file_operations urandom_fop .compat_ioctl =3D compat_ptr_ioctl, .fasync =3D random_fasync, .llseek =3D noop_llseek, + .splice_read =3D generic_file_splice_read, + .splice_write =3D iter_file_splice_write, }; From nobody Tue Apr 28 23:18:44 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D880C433FE for ; Fri, 27 May 2022 11:57:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346986AbiE0L5m (ORCPT ); Fri, 27 May 2022 07:57:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353110AbiE0LvT (ORCPT ); Fri, 27 May 2022 07:51:19 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BB470140401; Fri, 27 May 2022 04:46:48 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 29D78CE2511; Fri, 27 May 2022 11:46:47 +0000 (UTC) 
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 29971C34100; Fri, 27 May 2022 11:46:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1653652005; bh=trZ59+XpiQPUmzkHj3yrl4lopt/Bhj9dNgOw0Ox5BlY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=n4Sqq8v4T5K79MI8MRzpdbevGqofgxq5pR0qqkIFzh558NI+dMaWL2YjzFH5uYb8Q Aa5KfwfoYvnsf5v/QpeTyJnYD+/Gbx9zANKc0A1av84ocbCwdSqzDSWHq5M70CINQ+ 7hEUjZNgiPTrw+BUAwQWYV7gP2ZHE24iX2E62cMc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dominik Brodowski , "Jason A. Donenfeld" Subject: [PATCH 5.17 109/111] random: check for signals after page of pool writes Date: Fri, 27 May 2022 10:50:21 +0200 Message-Id: <20220527084834.659525831@linuxfoundation.org> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220527084819.133490171@linuxfoundation.org> References: <20220527084819.133490171@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Jason A. Donenfeld" commit 1ce6c8d68f8ac587f54d0a271ac594d3d51f3efb upstream. get_random_bytes_user() checks for signals after producing a PAGE_SIZE worth of output, just like /dev/zero does. write_pool() is doing basically the same work (actually, slightly more expensive), and so should stop to check for signals in the same way. Let's also name it write_pool_user() to match get_random_bytes_user(), so this won't be misused in the future. Before this patch, massive writes to /dev/urandom would tie up the process for an extremely long time and make it unterminatable. After, it can be successfully interrupted. 
The following test program can be used to see this works as intended:

  #include
  #include
  #include
  #include

  static unsigned char x[~0U];

  static void handle(int) { }

  int main(int argc, char *argv[])
  {
          pid_t pid = getpid(), child;
          int fd;

          signal(SIGUSR1, handle);
          if (!(child = fork())) {
                  for (;;)
                          kill(pid, SIGUSR1);
          }
          fd = open("/dev/urandom", O_WRONLY);
          pause();
          printf("interrupted after writing %zd bytes\n", write(fd, x, sizeof(x)));
          close(fd);
          kill(child, SIGTERM);
          return 0;
  }

Result before:

  "interrupted after writing 2147479552 bytes"

Result after:

  "interrupted after writing 4096 bytes"

Cc: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/char/random.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1253,7 +1253,7 @@ static __poll_t random_poll(struct file
 	return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
 }
 
-static ssize_t write_pool(struct iov_iter *iter)
+static ssize_t write_pool_user(struct iov_iter *iter)
 {
 	u8 block[BLAKE2S_BLOCK_SIZE];
 	ssize_t ret = 0;
@@ -1268,7 +1268,13 @@ static ssize_t write_pool(struct iov_ite
 		mix_pool_bytes(block, copied);
 		if (!iov_iter_count(iter) || copied != sizeof(block))
 			break;
-		cond_resched();
+
+		BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
+		if (ret % PAGE_SIZE == 0) {
+			if (signal_pending(current))
+				break;
+			cond_resched();
+		}
 	}
 
 	memzero_explicit(block, sizeof(block));
@@ -1277,7 +1283,7 @@ static ssize_t write_pool(struct iov_ite
 
 static ssize_t random_write_iter(struct kiocb *kiocb, struct iov_iter *iter)
 {
-	return write_pool(iter);
+	return write_pool_user(iter);
 }
 
 static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
@@ -1344,7 +1350,7 @@ static long random_ioctl(struct file *f,
 		ret = import_single_range(WRITE, p, len, &iov, &iter);
 		if (unlikely(ret))
 			return ret;
-		ret = write_pool(&iter);
+		ret = write_pool_user(&iter);
 		if (unlikely(ret < 0))
 			return ret;
 		/* Since we're crediting, enforce that it was all written into the pool. */

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Lorenzo Pieralisi,
 Veronika Kabatova, Aristeu Rozanski, Ard Biesheuvel,
 "Rafael J. Wysocki", dann frazier
Subject: [PATCH 5.17 110/111] ACPI: sysfs: Fix BERT error region memory mapping
Date: Fri, 27 May 2022 10:50:22 +0200
Message-Id: <20220527084834.798679404@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Lorenzo Pieralisi

commit 1bbc21785b7336619fb6a67f1fff5afdaf229acc upstream.

Currently the sysfs interface maps the BERT error region as "memory"
(through acpi_os_map_memory()) in order to copy the error records into
memory buffers through memory operations (eg memory_read_from_buffer()).
The OS cannot detect whether the BERT error region is part of system
RAM or is "device memory" (eg BMC memory), and therefore it cannot
detect which memory attributes the bus to memory supports (and the
corresponding kernel mapping), unless firmware provides the required
information.

The acpi_os_map_memory() arch backend implementation determines the
mapping attributes. On arm64, if the BERT error region is not present in
the EFI memory map, the error region is mapped as device-nGnRnE; this
triggers alignment faults, since memcpy unaligned accesses are not
allowed in device-nGnRnE regions.

The ACPI sysfs code therefore cannot map the BERT error region with
memory semantics by default, but should use a safer default.

Change the sysfs code to map the BERT error region as MMIO (through
acpi_os_map_iomem()) and use the memcpy_fromio() interface to read the
error region into the kernel buffer.

Link: https://lore.kernel.org/linux-arm-kernel/31ffe8fc-f5ee-2858-26c5-0fd8bdd68702@arm.com
Link: https://lore.kernel.org/linux-acpi/CAJZ5v0g+OVbhuUUDrLUCfX_mVqY_e8ubgLTU98=jfjTeb4t+Pw@mail.gmail.com
Signed-off-by: Lorenzo Pieralisi
Tested-by: Veronika Kabatova
Tested-by: Aristeu Rozanski
Acked-by: Ard Biesheuvel
Signed-off-by: Rafael J. Wysocki
Cc: dann frazier
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 drivers/acpi/sysfs.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

--- a/drivers/acpi/sysfs.c
+++ b/drivers/acpi/sysfs.c
@@ -415,19 +415,30 @@ static ssize_t acpi_data_show(struct fil
 			      loff_t offset, size_t count)
 {
 	struct acpi_data_attr *data_attr;
-	void *base;
-	ssize_t rc;
+	void __iomem *base;
+	ssize_t size;
 
 	data_attr = container_of(bin_attr, struct acpi_data_attr, attr);
+	size = data_attr->attr.size;
 
-	base = acpi_os_map_memory(data_attr->addr, data_attr->attr.size);
+	if (offset < 0)
+		return -EINVAL;
+
+	if (offset >= size)
+		return 0;
+
+	if (count > size - offset)
+		count = size - offset;
+
+	base = acpi_os_map_iomem(data_attr->addr, size);
 	if (!base)
 		return -ENOMEM;
-	rc = memory_read_from_buffer(buf, count, &offset, base,
-				     data_attr->attr.size);
-	acpi_os_unmap_memory(base, data_attr->attr.size);
 
-	return rc;
+	memcpy_fromio(buf, base + offset, count);
+
+	acpi_os_unmap_iomem(base, size);
+
+	return count;
 }
 
 static int acpi_bert_data_init(void *th, struct acpi_data_attr *data_attr)

From nobody Tue Apr 28 23:18:44 2026
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Edward Matijevic,
 Takashi Iwai
Subject: [PATCH 5.17 111/111] ALSA: ctxfi: Add SB046x PCI ID
Date: Fri, 27 May 2022 10:50:23 +0200
Message-Id: <20220527084834.912274011@linuxfoundation.org>
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Edward Matijevic

commit 1b073ebb174d0c7109b438e0a5eb4495137803ec upstream.

Adds the PCI ID for X-Fi cards sold under the Platinum and XtremeMusic
names.

Before: snd_ctxfi 0000:05:05.0: chip 20K1 model Unknown (1102:0021) is found
After: snd_ctxfi 0000:05:05.0: chip 20K1 model SB046x (1102:0021) is found

[ This is only about defining the model name string, and the rest is
  handled just like before, as a default unknown device. Edward
  confirmed that the stuff has been working fine -- tiwai ]

Signed-off-by: Edward Matijevic
Cc:
Link: https://lore.kernel.org/r/cae7d1a4-8bd9-7dfe-7427-db7e766f7272@gmail.com
Signed-off-by: Takashi Iwai
Signed-off-by: Greg Kroah-Hartman
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Sudip Mukherjee
---
 sound/pci/ctxfi/ctatc.c      | 2 ++
 sound/pci/ctxfi/cthardware.h | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)

--- a/sound/pci/ctxfi/ctatc.c
+++ b/sound/pci/ctxfi/ctatc.c
@@ -36,6 +36,7 @@
 		 | ((IEC958_AES3_CON_FS_48000) << 24))
 
 static const struct snd_pci_quirk subsys_20k1_list[] = {
+	SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0021, "SB046x", CTSB046X),
 	SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0022, "SB055x", CTSB055X),
 	SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x002f, "SB055x", CTSB055X),
 	SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0029, "SB073x", CTSB073X),
@@ -64,6 +65,7 @@ static const struct snd_pci_quirk subsys
 
 static const char *ct_subsys_name[NUM_CTCARDS] = {
 	/* 20k1 models */
+	[CTSB046X]	= "SB046x",
 	[CTSB055X]	= "SB055x",
 	[CTSB073X]	= "SB073x",
 	[CTUAA]		= "UAA",
--- a/sound/pci/ctxfi/cthardware.h
+++ b/sound/pci/ctxfi/cthardware.h
@@ -26,8 +26,9 @@ enum CHIPTYP {
 
 enum CTCARDS {
 	/* 20k1 models */
+	CTSB046X,
+	CT20K1_MODEL_FIRST = CTSB046X,
 	CTSB055X,
-	CT20K1_MODEL_FIRST = CTSB055X,
 	CTSB073X,
 	CTUAA,
 	CT20K1_UNKNOWN,