Previously, the fast pool was dumped into the main pool periodically in
the fast pool's hard IRQ handler. This worked fine and there weren't
problems with it, until RT came around. Since RT converts spinlocks into
sleeping locks, problems cropped up. Rather than switching to raw
spinlocks, the RT developers preferred we make the transformation from
originally doing:
do_some_stuff()
spin_lock()
do_some_other_stuff()
spin_unlock()
to doing:
do_some_stuff()
queue_work_on(some_other_stuff_worker)
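Concretely, the deferred half of that pattern tends to look something like
the following (an illustrative sketch only, using the placeholder names from
the pseudocode above rather than the actual random.c code):

#include <linux/interrupt.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

static void do_some_other_stuff(struct work_struct *work)
{
	/* Runs later in process context, where a sleeping lock is fine on RT. */
}

static DECLARE_WORK(some_other_stuff_worker, do_some_other_stuff);

static irqreturn_t some_irq_handler(int irq, void *dev)
{
	/* do_some_stuff(): whatever is cheap and safe in hard IRQ context. */

	/* Defer the lock-taking part, pinned to the current CPU. */
	queue_work_on(raw_smp_processor_id(), system_highpri_wq,
		      &some_other_stuff_worker);
	return IRQ_HANDLED;
}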
This is an ordinary pattern done all over the kernel. However, Sherry
noticed a 10% performance regression in qperf TCP over a 40gbps
InfiniBand card. Quoting her message:
> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> base lid: 0x6
> sm lid: 0x1
> state: 4: ACTIVE
> phys state: 5: LinkUp
> rate: 40 Gb/sec (4X QDR)
> link_layer: InfiniBand
>
> Cards are configured with IP addresses on private subnet for IPoIB
> performance testing.
> Regression identified in this bug is in TCP latency in this stack as reported
> by qperf tcp_lat metric:
>
> We have one system listen as a qperf server:
> [root@yourQperfServer ~]# qperf
>
> Have the other system connect to qperf server as a client (in this
> case, it’s X7 server with Mellanox card):
> [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
Rather than incur the scheduling latency from queue_work_on, we can
instead switch to a tasklet, which will run on the same core -- exactly
what we want -- and run during the transition out of hard IRQ context
without additional scheduling latency, with minimal logic in the
enqueuing path.
Hopefully this restores performance from prior to the RT changes.
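For reference, the general shape of that tasklet pattern as a stand-alone
sketch (generic placeholder names; the actual random.c wiring is in the diff
below):

#include <linux/interrupt.h>

/* Bottom half: runs in (hi) softirq context on the CPU that scheduled it. */
static void mix_bottom_half(struct tasklet_struct *t)
{
	/* ... dump the fast pool into the main pool here ... */
}

static DECLARE_TASKLET(mix_tasklet, mix_bottom_half);

static irqreturn_t some_irq_handler(int irq, void *dev)
{
	/* Hard IRQ: do the cheap fast_mix() work, then defer the rest. */
	tasklet_hi_schedule(&mix_tasklet);
	return IRQ_HANDLED;
}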
Reported-by: Sherry Yang <sherry.yang@oracle.com>
Suggested-by: Sultan Alsawaf <sultan@kerneltoast.com>
Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/lkml/YyuREcGAXV9828w5@zx2c4.com/
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
Hi Sherry,
I'm not going to commit to this until I receive your `Tested-by:`, so
please let me know if this fixes the problem. If not, we'll try
something else.
Thanks,
Jason
drivers/char/random.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 520a385c7dab..ad17b36cf977 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -918,13 +918,16 @@ EXPORT_SYMBOL_GPL(unregister_random_vmfork_notifier);
#endif
struct fast_pool {
- struct work_struct mix;
+ struct tasklet_struct mix;
unsigned long pool[4];
unsigned long last;
unsigned int count;
};
+static void mix_interrupt_randomness(struct tasklet_struct *work);
+
static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
+ .mix = { .use_callback = true, .callback = mix_interrupt_randomness },
#ifdef CONFIG_64BIT
#define FASTMIX_PERM SIPHASH_PERMUTATION
.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
@@ -973,7 +976,7 @@ int __cold random_online_cpu(unsigned int cpu)
}
#endif
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct tasklet_struct *work)
{
struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
/*
@@ -1027,10 +1030,8 @@ void add_interrupt_randomness(int irq)
if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
return;
- if (unlikely(!fast_pool->mix.func))
- INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
fast_pool->count |= MIX_INFLIGHT;
- queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+ tasklet_hi_schedule(&fast_pool->mix);
}
EXPORT_SYMBOL_GPL(add_interrupt_randomness);
--
2.37.3
Previously, the fast pool was dumped into the main pool periodically in
the fast pool's hard IRQ handler. This worked fine and there weren't
problems with it, until RT came around. Since RT converts spinlocks into
sleeping locks, problems cropped up. Rather than switching to raw
spinlocks, the RT developers preferred we make the transformation from
originally doing:
do_some_stuff()
spin_lock()
do_some_other_stuff()
spin_unlock()
to doing:
do_some_stuff()
queue_work_on(some_other_stuff_worker)
This is an ordinary pattern done all over the kernel. However, Sherry
noticed a 10% performance regression in qperf TCP over a 40gbps
InfiniBand card. Quoting her message:
> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> base lid: 0x6
> sm lid: 0x1
> state: 4: ACTIVE
> phys state: 5: LinkUp
> rate: 40 Gb/sec (4X QDR)
> link_layer: InfiniBand
>
> Cards are configured with IP addresses on private subnet for IPoIB
> performance testing.
> Regression identified in this bug is in TCP latency in this stack as reported
> by qperf tcp_lat metric:
>
> We have one system listen as a qperf server:
> [root@yourQperfServer ~]# qperf
>
> Have the other system connect to qperf server as a client (in this
> case, it’s X7 server with Mellanox card):
> [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
Rather than incur the scheduling latency from queue_work_on, we can
instead switch to running on the next timer tick, on the same core,
deferrably so. This also batches things a bit more -- once per jiffy --
which is probably okay now that mix_interrupt_randomness() can credit
multiple bits at once. It still puts a bit of pressure on fast_mix(),
but hopefully that's acceptable.
Hopefully this restores performance from prior to the RT changes.
Reported-by: Sherry Yang <sherry.yang@oracle.com>
Reported-by: Paul Webb <paul.x.webb@oracle.com>
Cc: Sherry Yang <sherry.yang@oracle.com>
Cc: Phillip Goerl <phillip.goerl@oracle.com>
Cc: Jack Vogel <jack.vogel@oracle.com>
Cc: Nicky Veitch <nicky.veitch@oracle.com>
Cc: Colm Harrington <colm.harrington@oracle.com>
Cc: Ramanan Govindarajan <ramanan.govindarajan@oracle.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Cc: stable@vger.kernel.org
Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
drivers/char/random.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1cb53495e8f7..08bb46a50802 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -928,17 +928,20 @@ struct fast_pool {
unsigned long pool[4];
unsigned long last;
unsigned int count;
- struct work_struct mix;
+ struct timer_list mix;
};
+static void mix_interrupt_randomness(struct timer_list *work);
+
static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
#ifdef CONFIG_64BIT
#define FASTMIX_PERM SIPHASH_PERMUTATION
- .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
+ .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
#else
#define FASTMIX_PERM HSIPHASH_PERMUTATION
- .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
+ .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
#endif
+ .mix = __TIMER_INITIALIZER(mix_interrupt_randomness, TIMER_DEFERRABLE)
};
/*
@@ -980,7 +983,7 @@ int __cold random_online_cpu(unsigned int cpu)
}
#endif
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct timer_list *work)
{
struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
/*
@@ -1034,10 +1037,11 @@ void add_interrupt_randomness(int irq)
if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
return;
- if (unlikely(!fast_pool->mix.func))
- INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
fast_pool->count |= MIX_INFLIGHT;
- queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+ if (!timer_pending(&fast_pool->mix)) {
+ fast_pool->mix.expires = jiffies;
+ add_timer_on(&fast_pool->mix, raw_smp_processor_id());
+ }
}
EXPORT_SYMBOL_GPL(add_interrupt_randomness);
--
2.37.3
From: Jason A. Donenfeld
> Sent: 26 September 2022 23:05
>
…
> Rather than incur the scheduling latency from queue_work_on, we can
> instead switch to running on the next timer tick, on the same core,
> deferrably so. This also batches things a bit more -- once per jiffy --
> which is probably okay now that mix_interrupt_randomness() can credit
> multiple bits at once. It still puts a bit of pressure on fast_mix(),
> but hopefully that's acceptable.

I thought NOHZ systems didn't take a timer interrupt every 'jiffy'.
If that is true what actually happens?

	David
On Tue, Sep 27, 2022 at 07:41:52AM +0000, David Laight wrote:
> From: Jason A. Donenfeld
> > Sent: 26 September 2022 23:05
…
> I thought NOHZ systems didn't take a timer interrupt every 'jiffy'.
> If that is true what actually happens?

The TIMER_DEFERRABLE part of this patch is a mistake; I'm going to make
that 0. However, since expires==jiffies, there's no difference. It's
still undesirable though.

Jason
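To make the difference being discussed concrete, here is a minimal sketch of
the two initializer variants (illustrative only, not the actual per-CPU
fast_pool initializer from the patch):

#include <linux/timer.h>

static void mix_timer_fn(struct timer_list *t)
{
	/* ... mixing work ... */
}

/* v2: deferrable -- on an idle NOHZ CPU this may not fire until the CPU
 * wakes up for some other reason, which is what David is asking about. */
static struct timer_list mix_deferrable =
	__TIMER_INITIALIZER(mix_timer_fn, TIMER_DEFERRABLE);

/* v3: flags = 0 -- queued with expires = jiffies, it fires on the next
 * tick on that CPU even if the CPU would otherwise have stayed idle. */
static struct timer_list mix_strict =
	__TIMER_INITIALIZER(mix_timer_fn, 0);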
Previously, the fast pool was dumped into the main pool periodically in
the fast pool's hard IRQ handler. This worked fine and there weren't
problems with it, until RT came around. Since RT converts spinlocks into
sleeping locks, problems cropped up. Rather than switching to raw
spinlocks, the RT developers preferred we make the transformation from
originally doing:
do_some_stuff()
spin_lock()
do_some_other_stuff()
spin_unlock()
to doing:
do_some_stuff()
queue_work_on(some_other_stuff_worker)
This is an ordinary pattern done all over the kernel. However, Sherry
noticed a 10% performance regression in qperf TCP over a 40gbps
InfiniBand card. Quoting her message:
> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> base lid: 0x6
> sm lid: 0x1
> state: 4: ACTIVE
> phys state: 5: LinkUp
> rate: 40 Gb/sec (4X QDR)
> link_layer: InfiniBand
>
> Cards are configured with IP addresses on private subnet for IPoIB
> performance testing.
> Regression identified in this bug is in TCP latency in this stack as reported
> by qperf tcp_lat metric:
>
> We have one system listen as a qperf server:
> [root@yourQperfServer ~]# qperf
>
> Have the other system connect to qperf server as a client (in this
> case, it’s X7 server with Mellanox card):
> [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
Rather than incur the scheduling latency from queue_work_on, we can
instead switch to running on the next timer tick, on the same core. This
also batches things a bit more -- once per jiffy -- which is okay now
that mix_interrupt_randomness() can credit multiple bits at once.
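As a rough stand-alone sketch of this shape (generic names; the real per-CPU
fast_pool wiring is in the diff below):

#include <linux/jiffies.h>
#include <linux/smp.h>
#include <linux/timer.h>

/* Runs from the timer softirq on the CPU the timer was armed on. */
static void mix_timer_fn(struct timer_list *t)
{
	/* ... mix the per-CPU fast pool into the input pool ... */
}

static DEFINE_TIMER(mix_timer, mix_timer_fn);

static void schedule_mix_on_this_cpu(void)
{
	/* Arm for the next tick on this CPU; at most one pending at a time. */
	if (!timer_pending(&mix_timer)) {
		mix_timer.expires = jiffies;
		add_timer_on(&mix_timer, raw_smp_processor_id());
	}
}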
Reported-by: Sherry Yang <sherry.yang@oracle.com>
Tested-by: Paul Webb <paul.x.webb@oracle.com>
Cc: Sherry Yang <sherry.yang@oracle.com>
Cc: Phillip Goerl <phillip.goerl@oracle.com>
Cc: Jack Vogel <jack.vogel@oracle.com>
Cc: Nicky Veitch <nicky.veitch@oracle.com>
Cc: Colm Harrington <colm.harrington@oracle.com>
Cc: Ramanan Govindarajan <ramanan.govindarajan@oracle.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Tejun Heo <tj@kernel.org>
Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Cc: stable@vger.kernel.org
Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
drivers/char/random.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index a90d96f4b3bb..e591c6aadca4 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -921,17 +921,20 @@ struct fast_pool {
unsigned long pool[4];
unsigned long last;
unsigned int count;
- struct work_struct mix;
+ struct timer_list mix;
};
+static void mix_interrupt_randomness(struct timer_list *work);
+
static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
#ifdef CONFIG_64BIT
#define FASTMIX_PERM SIPHASH_PERMUTATION
- .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
+ .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
#else
#define FASTMIX_PERM HSIPHASH_PERMUTATION
- .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
+ .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
#endif
+ .mix = __TIMER_INITIALIZER(mix_interrupt_randomness, 0)
};
/*
@@ -973,7 +976,7 @@ int __cold random_online_cpu(unsigned int cpu)
}
#endif
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct timer_list *work)
{
struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
/*
@@ -1027,10 +1030,11 @@ void add_interrupt_randomness(int irq)
if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
return;
- if (unlikely(!fast_pool->mix.func))
- INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
fast_pool->count |= MIX_INFLIGHT;
- queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+ if (!timer_pending(&fast_pool->mix)) {
+ fast_pool->mix.expires = jiffies;
+ add_timer_on(&fast_pool->mix, raw_smp_processor_id());
+ }
}
EXPORT_SYMBOL_GPL(add_interrupt_randomness);
--
2.37.3
On 2022-09-27 12:42:33 [+0200], Jason A. Donenfeld wrote:
…
> This is an ordinary pattern done all over the kernel. However, Sherry
> noticed a 10% performance regression in qperf TCP over a 40gbps
> InfiniBand card. Quoting her message:
>
> > MT27500 Family [ConnectX-3] cards:
> > Infiniband device 'mlx4_0' port 1 status:
…

While looking at the mlx4 driver, it looks like they don't use any NAPI
handling in their interrupt handler which _might_ be the case that they
handle more than 1k interrupts a second. I'm still curious to get that
ACKed from Sherry's side.

Jason, from random's point of view: deferring until 1k interrupts + 1sec
delay is not desired due to low entropy, right?

> Rather than incur the scheduling latency from queue_work_on, we can
> instead switch to running on the next timer tick, on the same core. This
> also batches things a bit more -- once per jiffy -- which is okay now
> that mix_interrupt_randomness() can credit multiple bits at once.

Hmmm. Do you see higher contention on input_pool.lock? Just asking
because if more than one CPU invokes this timer callback aligned, then
they block on the same lock.

Sebastian
Hi Sebastian,

On Wed, Sep 28, 2022 at 02:06:45PM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-09-27 12:42:33 [+0200], Jason A. Donenfeld wrote:
> …
>
> While looking at the mlx4 driver, it looks like they don't use any NAPI
> handling in their interrupt handler which _might_ be the case that they
> handle more than 1k interrupts a second. I'm still curious to get that
> ACKed from Sherry's side.

Are you sure about that? So far as I can tell drivers/net/ethernet/
mellanox/mlx4 has plenty of napi_schedule/napi_enable and such. Or are
you looking at the infiniband driver instead? I don't really know how
these interact.

But yea, if we've got a driver not using NAPI at 40gbps that's obviously
going to be a problem.

> Jason, from random's point of view: deferring until 1k interrupts + 1sec
> delay is not desired due to low entropy, right?

Definitely || is preferable to &&.
…
> Hmmm. Do you see higher contention on input_pool.lock? Just asking
> because if more than one CPU invokes this timer callback aligned, then
> they block on the same lock.

I've been doing various experiments, sending mini patches to Oracle and
having them test this in their rig. So far, it looks like the cost of
the body of the worker itself doesn't matter much, but rather the cost
of the enqueueing function is key. Still investigating though.

It's a bit frustrating, as all I have to work with are results from the
tests, and no perf analysis. It'd be great if an engineer at Oracle was
capable of tackling this interactively, but at the moment it's just me
sending them patches. So we'll see. Getting closer though, albeit very
slowly.

Jason
On 2022-09-28 18:15:46 [+0200], Jason A. Donenfeld wrote:
> Hi Sebastian,

Hi Jason,

> On Wed, Sep 28, 2022 at 02:06:45PM +0200, Sebastian Andrzej Siewior wrote:
> > While looking at the mlx4 driver, it looks like they don't use any NAPI
> > handling in their interrupt handler which _might_ be the case that they
> > handle more than 1k interrupts a second. I'm still curious to get that
> > ACKed from Sherry's side.
>
> Are you sure about that? So far as I can tell drivers/net/ethernet/
> mellanox/mlx4 has plenty of napi_schedule/napi_enable and such. Or are
> you looking at the infiniband driver instead? I don't really know how
> these interact.

I've been looking at mlx4_msi_x_interrupt() and it appears that it
iterates over a ring buffer. I guess that mlx4_cq_completion() will
invoke mlx4_en_rx_irq() which schedules NAPI.

> But yea, if we've got a driver not using NAPI at 40gbps that's obviously
> going to be a problem.

So I'm wondering if we get 1 worker a second which kills the performance
or if we get more than 1k interrupts in less than a second resulting in
more wakeups within a second.

> > Jason, from random's point of view: deferring until 1k interrupts + 1sec
> > delay is not desired due to low entropy, right?
>
> Definitely || is preferable to &&.
…
> I've been doing various experiments, sending mini patches to Oracle and
> having them test this in their rig. So far, it looks like the cost of
> the body of the worker itself doesn't matter much, but rather the cost
> of the enqueueing function is key. Still investigating though.
>
> It's a bit frustrating, as all I have to work with are results from the
> tests, and no perf analysis. It'd be great if an engineer at Oracle was
> capable of tackling this interactively, but at the moment it's just me
> sending them patches. So we'll see. Getting closer though, albeit very
> slowly.

Oh boy. Okay.

> Jason

Sebastian
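For readers following the NAPI tangent above, the usual hand-off a driver
makes in its hard IRQ handler looks roughly like this (hypothetical driver,
not mlx4's actual code):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct mydev_priv {
	struct napi_struct napi;
	/* ... rings, registers, etc. ... */
};

static irqreturn_t mydev_irq(int irq, void *data)
{
	struct mydev_priv *priv = data;

	/* Mask further device interrupts (device-specific, omitted here),
	 * then let the NAPI poll loop drain the ring in softirq context. */
	if (napi_schedule_prep(&priv->napi))
		__napi_schedule(&priv->napi);

	return IRQ_HANDLED;
}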