Two issues related to reweight_entity() were raised; poking at all that got me
these patches.

They're in queue.git/sched/core and I spent most of yesterday staring at traces
trying to find anything wrong. So far, so good.

Please test.
Hello Peter,
On 1/30/2026 3:04 PM, Peter Zijlstra wrote:
> Two issues related to reweight_entity() were raised; poking at all that got me
> these patches.
>
> They're in queue.git/sched/core and I spend most of yesterday staring at traces
> trying to find anything wrong. So far, so good.
>
> Please test.
I put this on top of tip:sched/urgent + tip:sched/core which contains Ingo's
cleanup of removing the union and at some point in the benchmark run I hit:
BUG: kernel NULL pointer dereference, address: 0000000000000051
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD c153802067 P4D c1750e7067 PUD c16067e067 PMD 0
Oops: Oops: 0000 [#1] SMP NOPTI
CPU: 200 UID: 1000 PID: 92850 Comm: schbench Not tainted 6.19.0-rc6-peterz-eevdf-fix+ #4 PREEMPT(full)
Hardware name: ... (Zen4c server)
RIP: 0010:pick_task_fair+0x3c/0x130
Code: ...
RSP: 0000:ff5cc03f25ecfd58 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ff3087d6eb032380 RCX: 00000000056ae402
RDX: fffe78b16a6ed620 RSI: fffe790e92f4c046 RDI: 00027caa24e6c3ee
RBP: 0000000000000000 R08: 0000000000000002 R09: 0000000000000002
R10: 0000086bfb248f00 R11: 0000000000000438 R12: ff3087d6eb032480
R13: ff5cc03f25ecfea0 R14: ff3087d6eb032380 R15: ff3087d6eb032380
FS: 00007f176438a640(0000) GS:ff3087d73d0e2000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000051 CR3: 000000c0d275c048 CR4: 0000000000f71ef0
PKRU: 55555554
Call Trace:
<TASK>
pick_next_task_fair+0x46/0x7b0
? task_tick_fair+0xf1/0x8b0
? perf_event_task_tick+0x5e/0xc0
__pick_next_task+0x41/0x1d0
__schedule+0x26e/0x17a0
? srso_alias_return_thunk+0x5/0xfbef5
? timerqueue_add+0x9f/0xc0
? __hrtimer_run_queues+0x139/0x240
? ktime_get+0x3f/0xf0
? srso_alias_return_thunk+0x5/0xfbef5
? srso_alias_return_thunk+0x5/0xfbef5
? srso_alias_return_thunk+0x5/0xfbef5
? clockevents_program_event+0xaa/0x100
schedule+0x27/0xd0
irqentry_exit+0x2a8/0x610
? srso_alias_return_thunk+0x5/0xfbef5
? __irq_exit_rcu+0x3f/0xf0
asm_sysvec_apic_timer_interrupt+0x1a/0x20
RIP: 0033:0x7f17f2498e58
Code: ...
RSP: 002b:00007f1764389d48 EFLAGS: 00000202
RAX: 0000000000000010 RBX: 00000000000000c8 RCX: 00007f17f24e57f8
RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000000014cd2820
RBP: 0000000014cd2820 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000293 R12: 00007f1764389e00
R13: 00007f176439fed0 R14: 0000000000002a40 R15: 00007f17b414c778
</TASK>
Modules linked in: ...
CR2: 0000000000000051
---[ end trace 0000000000000000 ]---
RIP points to "se->sched_delayed" dereference in pick_task_fair():
$ scripts/faddr2line vmlinux pick_task_fair+0x3c/0x130
pick_task_fair+0x3c/0x130:
pick_next_entity at kernel/sched/fair.c:5648
(inlined by) pick_task_fair at kernel/sched/fair.c:9061
$ sed -n '5645,5651p' kernel/sched/fair.c
struct sched_entity *se;
se = pick_eevdf(cfs_rq);
if (se->sched_delayed) {
dequeue_entities(rq, se, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
/*
* Must not reference @se again, see __block_task().
so pick_eevdf() seemingly returned NULL; something went sideways with the
avg_vruntime calculation, I presume.
I'm rerunning with the PARANOID_AVG feat now.
Just re-running the particular schbench variant hasn't crashed the kernel
in the half hour it has been running so I've re-triggered the same set of
benchmarks to see if flipping PARANOID_AVG makes any difference.
If you have a debug patch somewhere that you would like data on this run
from, please do let me know, else I plan on capturing the rq state at
the time of crash (cfs_rq walk, dumping all the vruntimes of all the
queued entities).
--
Thanks and Regards,
Prateek
On Tue, Feb 03, 2026 at 12:15:56PM +0530, K Prateek Nayak wrote:
> Hello Peter,
>
> On 1/30/2026 3:04 PM, Peter Zijlstra wrote:
> > Two issues related to reweight_entity() were raised; poking at all that got me
> > these patches.
> >
> > They're in queue.git/sched/core and I spend most of yesterday staring at traces
> > trying to find anything wrong. So far, so good.
> >
> > Please test.
>
> I put this on top of tip:sched/urgent + tip:sched/core which contains Ingo's
> cleanup of removing the union and at some point in the benchmark run I hit:
>
> BUG: kernel NULL pointer dereference, address: 0000000000000051

:-(

> so something went sideways with the avg_vruntime calculation I presume.
> I'm rerunning with the PARANOID_AVG feat now.
>
> Just re-running the particular schbench variant hasn't crashed the kernel
> in the half hour it has been running so I've re-triggered the same set of
> benchmarks to see if flipping PARANOID_AVG makes any difference.

If you run with PARANOID_AVG, the condition ends up visible as:

  grep shift /debug/sched/debug

If any of the fields are !0, you tripped an overflow.

Once its !0, you can't get it back to 0 (except perhaps if its cgroup
things, in which case you can destroy and re-create the cgroups I
suppose) other than reboot.

Anyway, if you can reproduce without PARANOID_AVG (or indeed have
tripped overflow) could you share the specific schbench invocation you
used?

I'm not sure I have valuable tracing patches, I just stick random
trace_printk()s in.
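(For readers following along: a rough sketch of what the fields in question
track, going by the avg_vruntime() derivation in mainline kernel/sched/fair.c
and the field names that show up in the traces later in this thread. This is
not the kernel's actual code -- it ignores the weight scaling done by
avg_vruntime_weight() and the sum_shift fallback -- and sketch_avg_vruntime()
is a made-up helper name.)

/*
 * V = v0 + \Sum w_i * (v_i - v0) / \Sum w_i
 *
 * with v0 = cfs_rq->zero_vruntime, the w_i * (v_i - v0) products accumulated
 * in cfs_rq->sum_w_vruntime and the weights in cfs_rq->sum_weight.  Each
 * w_i * (v_i - v0) product is what the PARANOID_AVG path guards with
 * check_mul_overflow(); once it trips, sum_shift goes (and stays) non-zero.
 */
static inline unsigned long long
sketch_avg_vruntime(long long sum_w_vruntime, long long sum_weight,
		    unsigned long long zero_vruntime)
{
	if (!sum_weight)		/* nothing accumulated yet */
		return zero_vruntime;
	return zero_vruntime + sum_w_vruntime / sum_weight;
}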
Hello Peter,
On 2/3/2026 4:41 PM, Peter Zijlstra wrote:
> On Tue, Feb 03, 2026 at 12:15:56PM +0530, K Prateek Nayak wrote:
>> Hello Peter,
>>
>> On 1/30/2026 3:04 PM, Peter Zijlstra wrote:
>>> Two issues related to reweight_entity() were raised; poking at all that got me
>>> these patches.
>>>
>>> They're in queue.git/sched/core and I spend most of yesterday staring at traces
>>> trying to find anything wrong. So far, so good.
>>>
>>> Please test.
>>
>> I put this on top of tip:sched/urgent + tip:sched/core which contains Ingo's
>> cleanup of removing the union and at some point in the benchmark run I hit:
>>
>> BUG: kernel NULL pointer dereference, address: 0000000000000051
>
> :-(
>
>>
>> so something went sideways with the avg_vruntime calculation I presume.
>> I'm rerunning with the PARANOID_AVG feat now.
>>
>> Just re-running the particular schbench variant hasn't crashed the kernel
>> in the half hour it has been running so I've re-triggered the same set of
>> benchmarks to see if flipping PARANOID_AVG makes any difference.
>
> If you run with PARANOID_AVG, the condition ends up visible as:
>
> grep shift /debug/sched/debug
>
> If any of the fields are !0, you tripped an overflow.
Yup I see a few !0 values. Some inching closer to the BUG_ON()
grep "shift.*: [^0]$" /sys/kernel/debug/sched/debug
.sum_shift : 4
.sum_shift : 3
.sum_shift : 5
.sum_shift : 1
.sum_shift : 2
.sum_shift : 3
>
> Once its !0, you can't get it back to 0 (except perhaps if its cgroup
> things, in which case you can destroy and re-create the cgroups I
> suppose) other than reboot.
>
> Anyway, if you can reproduce without PARANOID_AVG (or indeed have
> tripped overflow) could you share the specific schbench invocation you
> used?
This trips when I'm running a (very) old version of schbench at commit
e4aa540 ("Make sure rps isn't zero in auto_rps mode.")
I'm running the following on a 512 CPU server:
#!/bin/bash
DIR=$1
MESSENGERS=1
MAX_ITERS=2
SCHBENCH=./schbench
for i in 1 2 4 8 16 32 64 128 256 512 768 1024;
do
THISDIR=$DIR/$i-workers
if [ ! -d $THISDIR ]
then
mkdir -p $THISDIR
fi
for j in `seq 0 $MAX_ITERS`
do
echo "===== Worker $i : Iter $j ======";
$SCHBENCH -m $MESSENGERS -t $i |& tee $THISDIR/iter-$j.log;
sleep 2
done
done
It fails when running with 768 workers. Standalone runs didn't
fail - I have to run a cumulative runner that runs sched-messaging,
stream, tbench, and netperf first before running schbench :-(
>
> I'm not sure I have valuable tracing patches, I just stick random
> trace_printk()s in.
I'll plop those in and update once I get a log for sum_shift++.
--
Thanks and Regards,
Prateek
On Tue, Feb 03, 2026 at 05:49:16PM +0530, K Prateek Nayak wrote:
> > If any of the fields are !0, you tripped an overflow.
>
> Yup I see a few !0 values. Some inching closer to the BUG_ON()
>
> grep "shift.*: [^0]$" /sys/kernel/debug/sched/debug
> .sum_shift : 4
> .sum_shift : 3
> .sum_shift : 5
> .sum_shift : 1
> .sum_shift : 2
> .sum_shift : 3

Whee. Clearly I have some work to do ;-)

I'll go prod at this. Thanks!
Hello Peter,
On 2/3/2026 5:49 PM, K Prateek Nayak wrote:
>> I'm not sure I have valuable tracing patches, I just stick random
>> trace_printk()s in.
>
> I'll plop those in and update once the I get a log for sum_shift++.
Here is one set of logs:
# schbench enqueue
schbench-103551 [255] ... : place_entity: Placed se: weight(1048576) vruntime(722711379921) vlag(2140867) deadline(722714179921) curr?(0)
schbench-103551 [255] ... : place_entity: Placed on cfs_rq: depth(0) weight(4194304) nr_queued(4) sum_w_vruntime(0) sum_weight(3145728) zero_vruntime(722714445432) sum_shift(0) avg_vruntime(722714056004)
schbench-103551 [255] ... : __enqueue_entity: Enqueue cfs_rq: depth(0) weight(5242880) nr_queued(5) sum_w_vruntime(663820959744) sum_weight(4194304) zero_vruntime(722713520787) sum_shift(0) avg_vruntime(722713520787)
# Couple of reweight while running
schbench-103551 [255] ... : reweight_entity: Reweight before se: weight(3459) vruntime(701806887728588) vlag(0) deadline(701807851411101) curr?(1)
schbench-103551 [255] ... : reweight_entity: Before cfs_rq: depth(-1) weight(3459) nr_queued(1) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701806887728588)
schbench-103551 [255] ... : reweight_entity: Reweight after se: weight(3505) vruntime(701806939248075) vlag(0) deadline(701807839439774) curr?(1)
schbench-103551 [255] ... : reweight_entity: After cfs_rq: depth(-1) weight(3505) nr_queued(1) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701806939248075)
schbench-103551 [255] ... : reweight_entity: Reweight before se: weight(3505) vruntime(701808246440736) vlag(0) deadline(701809202174069) curr?(1)
schbench-103551 [255] ... : reweight_entity: Before cfs_rq: depth(-1) weight(3505) nr_queued(1) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701808246440736)
schbench-103551 [255] ... : reweight_entity: Reweight after se: weight(3513) vruntime(701808246440736) vlag(0) deadline(701809199997619) curr?(1)
schbench-103551 [255] ... : reweight_entity: After cfs_rq: depth(-1) weight(3513) nr_queued(1) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701808246440736)
# put_prev_entity?
schbench-103551 [255] ... : __enqueue_entity: Enqueue cfs_rq: depth(0) weight(5242880) nr_queued(5) sum_w_vruntime(-2130969624576) sum_weight(5242880) zero_vruntime(722714695180) sum_shift(0) avg_vruntime(722714695180)
# set_next_entity?
schbench-103551 [255] ... : __dequeue_entity: Dequeue cfs_rq: depth(0) weight(5242880) nr_queued(5) sum_w_vruntime(0) sum_weight(4194304) zero_vruntime(722715015932) sum_shift(0) avg_vruntime(722715015932)
# More reweight
<...>-102371 [255] ... : reweight_entity: Reweight before se: weight(3513) vruntime(701809611552543) vlag(0) deadline(701810567285876) curr?(1)
<...>-102371 [255] ... : reweight_entity: Before cfs_rq: depth(-1) weight(3513) nr_queued(1) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701809611552543)
<...>-102371 [255] ... : reweight_entity: Reweight after se: weight(3508) vruntime(701809611552543) vlag(0) deadline(701810568648095) curr?(1)
<...>-102371 [255] ... : reweight_entity: After cfs_rq: depth(-1) weight(3508) nr_queued(1) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701809611552543)
<...>-102371 [255] ... : place_entity: Placed se: weight(90891264) vruntime(701808975077099) vlag(24732) deadline(701808975109401) curr?(0)
<...>-102371 [255] ... : place_entity: Placed on cfs_rq: depth(-1) weight(3508) nr_queued(1) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701809615900788)
# Overflow on enqueue
<...>-102371 [255] ... : __enqueue_entity: Overflowed cfs_rq:
<...>-102371 [255] ... : dump_h_overflow_cfs_rq: cfs_rq: depth(0) weight(90894772) nr_queued(2) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701809615900788)
<...>-102371 [255] ... : dump_h_overflow_entity: se: weight(3508) vruntime(701809615900788) slice(2800000) deadline(701810568648095) curr?(1) task?(1) <-------- cfs_rq->curr
<...>-102371 [255] ... : __enqueue_entity: Overflowed se:
<...>-102371 [255] ... : dump_h_overflow_entity: se: weight(90891264) vruntime(701808975077099) slice(2800000) deadline(701808975109401) curr?(0) task?(0) <-------- new se
# Botched attempt at dumping the whole hierarchy
<...>-102371 [255] ... : __enqueue_entity: Overflowed hierarchy from root:
<...>-102371 [255] ... : dump_h_overflow_cfs_rq: cfs_rq: depth(0) weight(90894772) nr_queued(2) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701809615900788)
<...>-102371 [255] ... : dump_h_overflow_entity: se: weight(3508) vruntime(701809615900788) slice(2800000) deadline(701810568648095) curr?(1) task?(1)
<...>-102371 [255] ... : dump_h_overflow_cfs_rq: cfs_rq: depth(1) weight(5242880) nr_queued(5) sum_w_vruntime(0) sum_weight(4194304) zero_vruntime(722715015932) sum_shift(0) avg_vruntime(722715086591)
<...>-102371 [255] ... : dump_h_overflow_entity: se: weight(1048576) vruntime(722715369227) slice(2800000) deadline(722718169227) curr?(1) task?(0)
<...>-102371 [255] ... : dump_h_overflow_entity: se: weight(1048576) vruntime(722713453675) slice(2800000) deadline(722716247576) curr?(1) task?(0)
<...>-102371 [255] ... : dump_h_overflow_entity: se: weight(1048576) vruntime(722713498238) slice(2800000) deadline(722716290797) curr?(1) task?(0)
<...>-102371 [255] ... : dump_h_overflow_entity: se: weight(1048576) vruntime(722716384383) slice(2800000) deadline(722719172114) curr?(1) task?(0)
<...>-102371 [255] ... : dump_h_overflow_entity: se: weight(1048576) vruntime(722716727432) slice(2800000) deadline(722719517387) curr?(1) task?(0)
Attached is the debug patch that can be used to interpret this data further.
The per-CPU padding is just a variable used to indent higher depths when
printing the hierarchy.
--
Thanks and Regards,
Prateek
From 8fe3036b04a3a529dc500a9c880e23bcfd1daa42 Mon Sep 17 00:00:00 2001
From: "Gautham R. Shenoy" <gautham.shenoy@amd.com>
Date: Wed, 4 Feb 2026 10:08:07 +0000
Subject: [PATCH] sched/fair: Debug multiplication overflow
Read the trace buffer when dmesg says:
EEVDF: Overflow (mul)!
EEVDF: Overflow CPU/<X>
per_cpu/cpu<X>/trace should have all the information.
Not-signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
kernel/sched/fair.c | 83 +++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 81 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6f1a86f7969a..15d521f795ff 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -670,6 +670,53 @@ __sum_w_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
cfs_rq->sum_weight += weight;
}
+static DEFINE_PER_CPU(char[100], dump_padding);
+u64 avg_vruntime(struct cfs_rq *cfs_rq);
+
+static void dump_h_overflow_entity(struct sched_entity *se, int depth, bool curr)
+{
+ char *padding = *this_cpu_ptr(&dump_padding);
+
+ padding[depth] = '\0';
+ trace_printk("%sse: weight(%lu) vruntime(%llu) slice(%llu) deadline(%llu) curr?(%d) task?(%d)\n",
+ padding, se->load.weight, se->vruntime, se->slice, se->deadline, curr, !entity_is_task(se));
+}
+
+static void dump_h_overflow_cfs_rq(struct cfs_rq *cfs_rq, int depth, bool rec)
+{
+ struct rb_node *left = rb_first_cached(&cfs_rq->tasks_timeline);
+ char *padding = *this_cpu_ptr(&dump_padding);
+
+ padding[depth] = '\0';
+ trace_printk("%scfs_rq: depth(%d) weight(%lu) nr_queued(%u) sum_w_vruntime(%lld) sum_weight(%llu) zero_vruntime(%llu) sum_shift(%u) avg_vruntime(%llu)\n",
+ padding, depth, cfs_rq->load.weight, cfs_rq->nr_queued, cfs_rq->sum_w_vruntime, cfs_rq->sum_weight, cfs_rq->zero_vruntime, cfs_rq->sum_shift, avg_vruntime(cfs_rq));
+
+ if (cfs_rq->curr)
+ dump_h_overflow_entity(cfs_rq->curr, depth, true);
+
+ while (left) {
+ dump_h_overflow_entity(__node_2_se(left), depth, false);
+ left = rb_next(left);
+ }
+
+ if (rec) {
+ padding[depth] = ' ';
+ if (cfs_rq->curr && !entity_is_task(cfs_rq->curr))
+ dump_h_overflow_cfs_rq(group_cfs_rq(cfs_rq->curr), depth + 1, true);
+
+ left = rb_first_cached(&cfs_rq->tasks_timeline);
+ while (left) {
+ struct sched_entity *se = __node_2_se(left);
+
+ if (!entity_is_task(se))
+ dump_h_overflow_cfs_rq(group_cfs_rq(se), depth + 1, true);
+ left = rb_next(left);
+ }
+ }
+
+ padding[depth] = '\0';
+}
+
static void
sum_w_vruntime_add_paranoid(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
@@ -680,17 +727,32 @@ sum_w_vruntime_add_paranoid(struct cfs_rq *cfs_rq, struct sched_entity *se)
weight = avg_vruntime_weight(cfs_rq, se->load.weight);
key = entity_key(cfs_rq, se);
- if (check_mul_overflow(key, weight, &key))
+ if (check_mul_overflow(key, weight, &key)) {
+ pr_warn("EEVDF: Overflow (mul)!\n");
+ pr_warn("EEVDF: Overflow CPU/%d\n", smp_processor_id());
goto overflow;
+ }
- if (check_add_overflow(cfs_rq->sum_w_vruntime, key, &tmp))
+ if (check_add_overflow(cfs_rq->sum_w_vruntime, key, &tmp)) {
+ pr_warn("EEVDF: Overflow (add)!\n");
+ pr_warn("EEVDF: Overflow CPU/%d\n", smp_processor_id());
goto overflow;
+ }
cfs_rq->sum_w_vruntime = tmp;
cfs_rq->sum_weight += weight;
return;
overflow:
+ trace_printk("Overflowed cfs_rq:\n");
+ dump_h_overflow_cfs_rq(cfs_rq, 0, false);
+
+ trace_printk("Overflowed se:\n");
+ dump_h_overflow_entity(se, 0, se == cfs_rq->curr);
+
+ trace_printk("Overflowed hierarchy from root:\n");
+ dump_h_overflow_cfs_rq(&rq_of(cfs_rq)->cfs, 0, true);
+
/*
* There's gotta be a limit -- if we're still failing at this point
* there's really nothing much to be done about things.
@@ -921,6 +983,8 @@ static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
se->min_slice = se->slice;
rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
__entity_less, &min_vruntime_cb);
+ trace_printk("Enqueue cfs_rq: depth(%d) weight(%lu) nr_queued(%u) sum_w_vruntime(%lld) sum_weight(%llu) zero_vruntime(%llu) sum_shift(%u) avg_vruntime(%llu)\n",
+ se->depth - 1, cfs_rq->load.weight, cfs_rq->nr_queued, cfs_rq->sum_w_vruntime, cfs_rq->sum_weight, cfs_rq->zero_vruntime, cfs_rq->sum_shift, avg_vruntime(cfs_rq));
}
static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
@@ -929,6 +993,8 @@ static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
&min_vruntime_cb);
sum_w_vruntime_sub(cfs_rq, se);
update_zero_vruntime(cfs_rq);
+ trace_printk("Dequeue cfs_rq: depth(%d) weight(%lu) nr_queued(%u) sum_w_vruntime(%lld) sum_weight(%llu) zero_vruntime(%llu) sum_shift(%u) avg_vruntime(%llu)\n",
+ se->depth - 1, cfs_rq->load.weight, cfs_rq->nr_queued, cfs_rq->sum_w_vruntime, cfs_rq->sum_weight, cfs_rq->zero_vruntime, cfs_rq->sum_shift, avg_vruntime(cfs_rq));
}
struct sched_entity *__pick_root_entity(struct cfs_rq *cfs_rq)
@@ -3963,6 +4029,10 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
bool rel_vprot = false;
u64 avruntime = 0;
+ trace_printk("Reweight before se: weight(%lu) vruntime(%llu) vlag(%lld) deadline(%llu) curr?(%d)\n",
+ se->load.weight, se->vruntime, se->vlag, se->deadline, se == cfs_rq->curr);
+ trace_printk("Before cfs_rq: depth(%d) weight(%lu) nr_queued(%u) sum_w_vruntime(%lld) sum_weight(%llu) zero_vruntime(%llu) sum_shift(%u) avg_vruntime(%llu)\n",
+ se->depth - 1, cfs_rq->load.weight, cfs_rq->nr_queued, cfs_rq->sum_w_vruntime, cfs_rq->sum_weight, cfs_rq->zero_vruntime, cfs_rq->sum_shift, avg_vruntime(cfs_rq));
if (se->on_rq) {
/* commit outstanding execution time */
update_curr(cfs_rq);
@@ -3998,6 +4068,10 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
__enqueue_entity(cfs_rq, se);
cfs_rq->nr_queued++;
}
+ trace_printk("Reweight after se: weight(%lu) vruntime(%llu) vlag(%lld) deadline(%llu) curr?(%d)\n",
+ se->load.weight, se->vruntime, se->vlag, se->deadline, se == cfs_rq->curr);
+ trace_printk("After cfs_rq: depth(%d) weight(%lu) nr_queued(%u) sum_w_vruntime(%lld) sum_weight(%llu) zero_vruntime(%llu) sum_shift(%u) avg_vruntime(%llu)\n",
+ se->depth - 1, cfs_rq->load.weight, cfs_rq->nr_queued, cfs_rq->sum_w_vruntime, cfs_rq->sum_weight, cfs_rq->zero_vruntime, cfs_rq->sum_shift, avg_vruntime(cfs_rq));
}
static void reweight_task_fair(struct rq *rq, struct task_struct *p,
@@ -5367,6 +5441,11 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
* EEVDF: vd_i = ve_i + r_i/w_i
*/
se->deadline = se->vruntime + vslice;
+
+ trace_printk("Placed se: weight(%lu) vruntime(%llu) vlag(%lld) deadline(%llu) curr?(%d)\n",
+ se->load.weight, se->vruntime, se->vlag, se->deadline, se == cfs_rq->curr);
+ trace_printk("Placed on cfs_rq: depth(%d) weight(%lu) nr_queued(%u) sum_w_vruntime(%lld) sum_weight(%llu) zero_vruntime(%llu) sum_shift(%u) avg_vruntime(%llu)\n",
+ se->depth - 1, cfs_rq->load.weight, cfs_rq->nr_queued, cfs_rq->sum_w_vruntime, cfs_rq->sum_weight, cfs_rq->zero_vruntime, cfs_rq->sum_shift, avg_vruntime(cfs_rq));
}
static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
--
2.34.1
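(A quick aside on the checks the patch leans on: check_mul_overflow() and
check_add_overflow() in the kernel boil down to the compiler's
__builtin_*_overflow() helpers, so the failure mode can be poked at from user
space. The sketch below uses key/weight magnitudes loosely modelled on the
trace above, ignoring any internal weight scaling, purely for illustration.)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/*
	 * Hypothetical values: a large vruntime - zero_vruntime delta times
	 * an unscaled weight does not fit the 64-bit signed accumulator.
	 */
	int64_t key    = 644044821049LL;
	int64_t weight = 90891264LL;
	int64_t prod, sum = 0;

	if (__builtin_mul_overflow(key, weight, &prod))
		printf("mul overflow: key * weight does not fit in s64\n");
	else if (__builtin_add_overflow(sum, prod, &sum))
		printf("add overflow: sum + prod does not fit in s64\n");
	else
		printf("fits: sum = %lld\n", (long long)sum);

	return 0;
}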
Hi All,
On 2026.02.03 04:19 K Prateek Nayak wrote:
> On 2/3/2026 4:41 PM, Peter Zijlstra wrote:
>> On Tue, Feb 03, 2026 at 12:15:56PM +0530, K Prateek Nayak wrote:
>>> On 1/30/2026 3:04 PM, Peter Zijlstra wrote:
>>>> Two issues related to reweight_entity() were raised; poking at all that got me
>>>> these patches.
>>>>
>>>> They're in queue.git/sched/core and I spend most of yesterday staring at traces
>>>> trying to find anything wrong. So far, so good.
>>>>
>>>> Please test.
>>>
>>> I put this on top of tip:sched/urgent + tip:sched/core which contains Ingo's
>>> cleanup of removing the union and at some point in the benchmark run I hit:
>>>
>>> BUG: kernel NULL pointer dereference, address: 0000000000000051
... snip ...
> This trips when I'm running a (very) old version of schbench at commit
> e4aa540 ("Make sure rps isn't zero in auto_rps mode.")
>
> I'm running the following on a 512 CPU server:
>
> #!/bin/bash
>
> DIR=$1
> MESSENGERS=1
> MAX_ITERS=2
> SCHBENCH=./schbench
>
> for i in 1 2 4 8 16 32 64 128 256 512 768 1024;
> do
> THISDIR=$DIR/$i-workers
> if [ ! -d $THISDIR ]
> then
> mkdir -p $THISDIR
> fi
> for j in `seq 0 $MAX_ITERS`
> do
> echo "===== Worker $i : Iter $j ======";
> $SCHBENCH -m $MESSENGERS -t $i |& tee $THISDIR/iter-$j.log;
> sleep 2
> done
> done
>
> Fails when it is running with 768 workers. Standalone runs didn't
> fail - have to run a cumulative runner that runs sched-messaging,
> stream, tbench, netperf, first before running schbench :-(
Further to my email from the other day, where all was good [1],
I have continued to test, in particular the severe overload conditions
from [2].
Under heavy overload my test computer just hangs. My multiple
ssh sessions eventually terminate. I have left it for many hours, but
have to reset it in the end.
The first time there were no log entries at all, at least that I could
find.
The second time:
kernel: BUG: kernel NULL pointer dereference, address: 0000000000000051
kernel: #PF: supervisor read access in kernel mode
kernel: #PF: error_code(0x0000) - not-present page
kernel: PGD 0 P4D 0
kernel: Oops: Oops: 0000 [#1] SMP NOPTI
kernel: CPU: 11 UID: 1000 PID: 3597 Comm: yes Not tainted 6.19.0-rc1-pz #1 PREEMPT(full)
...
The entire relevant part is attached.
Conditions:
Greater than 12,500 X (yes > /dev/null) tasks
But less than 15,000 X (yes > /dev/null) tasks
I have tested up to 20,000 X (yes > /dev/null) tasks
with previous kernels, including mainline 6.19-rc1.
I would not disagree if you say my operating conditions
are ridiculous.
System:
Processor: Intel(R) Core(TM) i5-10600K CPU @ 4.10GHz, 6 cores 12 CPUs.
CPU frequency scaling driver: intel_pstate; Governor powersave.
HWP: Enabled
[1] https://lore.kernel.org/lkml/000d01dc939e$0fc99fe0$2f5cdfa0$@telus.net/
[2] https://lore.kernel.org/lkml/002401dbb6bd$4527ec00$cf77c400$@telus.net/
... Doug
2026-02-02T16:13:33.017238-08:00 s19 kernel: BUG: kernel NULL pointer dereference, address: 0000000000000051
2026-02-02T16:13:33.017248-08:00 s19 kernel: #PF: supervisor read access in kernel mode
2026-02-02T16:13:33.017249-08:00 s19 kernel: #PF: error_code(0x0000) - not-present page
2026-02-02T16:13:33.017251-08:00 s19 kernel: PGD 0 P4D 0
2026-02-02T16:13:33.017382-08:00 s19 kernel: Oops: Oops: 0000 [#1] SMP NOPTI
2026-02-02T16:13:33.017384-08:00 s19 kernel: CPU: 11 UID: 1000 PID: 3597 Comm: yes Not tainted 6.19.0-rc1-pz #1 PREEMPT(full)
2026-02-02T16:13:33.017385-08:00 s19 kernel: Hardware name: ASUS System Product Name/PRIME Z490-A, BIOS 9902 09/15/2021
2026-02-02T16:13:33.017386-08:00 s19 kernel: RIP: 0010:pick_task_fair+0x3e/0x140
2026-02-02T16:13:33.017386-08:00 s19 kernel: Code: 53 48 89 fb 48 83 ec 08 8b 8f 10 01 00 00 85 c9 0f 84 96 00 00 00 4c 89 ef 45 31 e4 eb 27 66 90 be 01 00 00 00 e8 92 41 ff ff <80> 78 51 00 75 5e 48 85 c0 74 69 48 8b b8 b0 00 00 00 48 85 ff 0f
2026-02-02T16:13:33.017387-08:00 s19 kernel: RSP: 0018:ffffd095863ef7a8 EFLAGS: 00010046
2026-02-02T16:13:33.017388-08:00 s19 kernel: RAX: 0000000000000000 RBX: ffff89298e5b2f40 RCX: 0000000000000000
2026-02-02T16:13:33.017388-08:00 s19 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
2026-02-02T16:13:33.017389-08:00 s19 kernel: RBP: ffffd095863ef7c8 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:13:33.017390-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
2026-02-02T16:13:33.017390-08:00 s19 kernel: R13: ffff89298e5b3040 R14: ffff89298e5b2f40 R15: 0000000000000000
2026-02-02T16:13:33.017391-08:00 s19 kernel: FS: 00007e443ac91740(0000) GS:ffff8929d1bd7000(0000) knlGS:0000000000000000
2026-02-02T16:13:33.017391-08:00 s19 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2026-02-02T16:13:33.017392-08:00 s19 kernel: CR2: 0000000000000051 CR3: 0000000147ece001 CR4: 00000000007726f0
2026-02-02T16:13:33.017392-08:00 s19 kernel: PKRU: 55555554
2026-02-02T16:13:33.017393-08:00 s19 kernel: Call Trace:
2026-02-02T16:13:33.017394-08:00 s19 kernel: <TASK>
2026-02-02T16:13:33.017394-08:00 s19 kernel: pick_next_task_fair+0x37/0x930
2026-02-02T16:13:33.017395-08:00 s19 kernel: __pick_next_task+0x44/0x1e0
2026-02-02T16:13:33.017395-08:00 s19 kernel: __schedule+0x1e6/0x1810
2026-02-02T16:13:33.017396-08:00 s19 kernel: ? aa_file_perm+0x6e/0x5d0
2026-02-02T16:13:33.017398-08:00 s19 kernel: message repeated 2 times: [ ? aa_file_perm+0x6e/0x5d0]
2026-02-02T16:13:33.017399-08:00 s19 kernel: ? show_vpd_pgb2+0x14/0x80
2026-02-02T16:13:33.017402-08:00 s19 kernel: ? aa_file_perm+0x6e/0x5d0
2026-02-02T16:13:33.017402-08:00 s19 kernel: preempt_schedule_irq+0x38/0x70
2026-02-02T16:13:33.017403-08:00 s19 kernel: raw_irqentry_exit_cond_resched+0x31/0x40
2026-02-02T16:13:33.017403-08:00 s19 kernel: irqentry_exit+0x34/0x710
2026-02-02T16:13:33.017404-08:00 s19 kernel: sysvec_apic_timer_interrupt+0x57/0xc0
2026-02-02T16:13:33.017404-08:00 s19 kernel: asm_sysvec_apic_timer_interrupt+0x1b/0x20
2026-02-02T16:13:33.017405-08:00 s19 kernel: RIP: 0010:vfs_write+0x2c1/0x480
2026-02-02T16:13:33.017406-08:00 s19 kernel: Code: 00 00 e8 2e 92 c2 05 49 89 c4 48 3d ef fd ff ff 0f 84 2f 01 00 00 48 85 c0 0f 8e 63 fe ff ff 48 8b 45 a8 49 89 06 41 8b 45 04 <25> 00 00 00 06 3d 00 00 00 02 74 57 49 8b 7d 48 4c 8b 4f 30 49 8b
2026-02-02T16:13:33.017406-08:00 s19 kernel: RSP: 0018:ffffd095863efa80 EFLAGS: 00000206
2026-02-02T16:13:33.017407-08:00 s19 kernel: RAX: 000000000c0c001e RBX: ffff8922879caa00 RCX: ffffd095863efb20
2026-02-02T16:13:33.017407-08:00 s19 kernel: RDX: 0000000000000000 RSI: 0000633d552d74a0 RDI: ffff892287eba840
2026-02-02T16:13:33.017408-08:00 s19 kernel: RBP: ffffd095863efb10 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:13:33.017408-08:00 s19 kernel: R10: 0000000000002000 R11: 0000000000000000 R12: 0000000000002000
2026-02-02T16:13:33.017409-08:00 s19 kernel: R13: ffff892287eba840 R14: ffffd095863efb20 R15: 0000633d552d74a0
2026-02-02T16:13:33.017409-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017410-08:00 s19 kernel: ksys_write+0x71/0xf0
2026-02-02T16:13:33.017410-08:00 s19 kernel: __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017411-08:00 s19 kernel: x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017411-08:00 s19 kernel: do_syscall_64+0x81/0x500
2026-02-02T16:13:33.017412-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:13:33.017413-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:13:33.017414-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017415-08:00 s19 kernel: message repeated 2 times: [ ? ksys_write+0x71/0xf0]
2026-02-02T16:13:33.017416-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017417-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017417-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:13:33.017418-08:00 s19 kernel: ? common_file_perm+0x6b/0x1a0
2026-02-02T16:13:33.017418-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:13:33.017420-08:00 s19 kernel: message repeated 2 times: [ ? vfs_write+0x324/0x480]
2026-02-02T16:13:33.017420-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017421-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017422-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017423-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017423-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:13:33.017424-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:13:33.017429-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:13:33.017430-08:00 s19 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
2026-02-02T16:13:33.017431-08:00 s19 kernel: RIP: 0033:0x7e443ab1c5a4
2026-02-02T16:13:33.017431-08:00 s19 kernel: Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d a5 ea 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89
2026-02-02T16:13:33.017432-08:00 s19 kernel: RSP: 002b:00007ffe3db6c1f8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
2026-02-02T16:13:33.017433-08:00 s19 kernel: RAX: ffffffffffffffda RBX: 0000000000002000 RCX: 00007e443ab1c5a4
2026-02-02T16:13:33.017433-08:00 s19 kernel: RDX: 0000000000002000 RSI: 0000633d552d74a0 RDI: 0000000000000001
2026-02-02T16:13:33.017434-08:00 s19 kernel: RBP: 00007ffe3db6c250 R08: 00007e443ac03b20 R09: 0000000000000000
2026-02-02T16:13:33.017434-08:00 s19 kernel: R10: 0000000000000001 R11: 0000000000000202 R12: 0000000000002000
2026-02-02T16:13:33.017435-08:00 s19 kernel: R13: 0000633d552d74a0 R14: 0000000000002000 R15: 0000000000002000
2026-02-02T16:13:33.017435-08:00 s19 kernel: </TASK>
2026-02-02T16:13:33.017436-08:00 s19 kernel: Modules linked in: tls snd_hda_codec_intelhdmi snd_hda_codec_alc882 snd_hda_codec_realtek_lib qrtr snd_hda_codec_generic snd_hda_intel snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common bridge intel_uncore_frequency snd_sof stp intel_uncore_frequency_common llc snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation cfg80211 snd_soc_sdw_utils snd_soc_acpi soundwire_bus snd_soc_sdca crc8 snd_soc_avs snd_soc_hda_codec snd_hda_ext_core intel_tcc_cooling x86_pkg_temp_thermal snd_hda_codec intel_powerclamp coretemp snd_hda_core snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep binfmt_misc kvm_intel cmdlinepart i915 snd_soc_core spi_nor snd_compress ee1004 mtd mei_hdcp mei_pxp ac97_bus drm_buddy snd_pcm_dmaengine kvm snd_pcm ttm irqbypass
2026-02-02T16:13:33.017437-08:00 s19 kernel: drm_display_helper i2c_i801 eeepc_wmi snd_timer rapl nls_iso8859_1 intel_cstate asus_wmi snd i2c_smbus platform_profile spi_intel_pci cec wmi_bmof i2c_mux mei_me soundcore intel_wmi_thunderbolt sparse_keymap mxm_wmi spi_intel rc_core mei i2c_algo_bit intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec acpi_pad acpi_tad joydev input_leds mac_hid sch_fq_codel dm_multipath msr nvme_fabrics efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b libblake2b raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 linear hid_generic usbhid hid nvme nvme_core nvme_keyring ghash_clmulni_intel nvme_auth intel_lpss_pci ahci igc intel_lpss hkdf libahci idma64 video wmi aesni_intel
2026-02-02T16:13:33.017438-08:00 s19 kernel: CR2: 0000000000000051
2026-02-02T16:13:33.017439-08:00 s19 kernel: ---[ end trace 0000000000000000 ]---
2026-02-02T16:13:33.017439-08:00 s19 kernel: watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 5
2026-02-02T16:13:33.017440-08:00 s19 kernel: Modules linked in: tls snd_hda_codec_intelhdmi snd_hda_codec_alc882 snd_hda_codec_realtek_lib qrtr snd_hda_codec_generic snd_hda_intel snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common bridge intel_uncore_frequency snd_sof stp intel_uncore_frequency_common llc snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation cfg80211 snd_soc_sdw_utils snd_soc_acpi soundwire_bus snd_soc_sdca crc8 snd_soc_avs snd_soc_hda_codec snd_hda_ext_core intel_tcc_cooling x86_pkg_temp_thermal snd_hda_codec intel_powerclamp coretemp snd_hda_core snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep binfmt_misc kvm_intel cmdlinepart i915 snd_soc_core spi_nor snd_compress ee1004 mtd mei_hdcp mei_pxp ac97_bus drm_buddy snd_pcm_dmaengine kvm snd_pcm ttm irqbypass
2026-02-02T16:13:33.017488-08:00 s19 kernel: drm_display_helper i2c_i801 eeepc_wmi snd_timer rapl nls_iso8859_1 intel_cstate asus_wmi snd i2c_smbus platform_profile spi_intel_pci cec wmi_bmof i2c_mux mei_me soundcore intel_wmi_thunderbolt sparse_keymap mxm_wmi spi_intel rc_core mei i2c_algo_bit intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec acpi_pad acpi_tad joydev input_leds mac_hid sch_fq_codel dm_multipath msr nvme_fabrics efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b libblake2b raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 linear hid_generic usbhid hid nvme nvme_core nvme_keyring ghash_clmulni_intel nvme_auth intel_lpss_pci ahci igc intel_lpss hkdf libahci idma64 video wmi aesni_intel
2026-02-02T16:13:33.017488-08:00 s19 kernel: CPU: 5 UID: 1000 PID: 15564 Comm: yes Tainted: G D 6.19.0-rc1-pz #1 PREEMPT(full)
2026-02-02T16:13:33.017489-08:00 s19 kernel: Tainted: [D]=DIE
2026-02-02T16:13:33.017489-08:00 s19 kernel: Hardware name: ASUS System Product Name/PRIME Z490-A, BIOS 9902 09/15/2021
2026-02-02T16:13:33.017490-08:00 s19 kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x81/0x300
2026-02-02T16:13:33.017490-08:00 s19 kernel: Code: 00 00 f0 0f ba 2b 08 0f 92 c2 8b 03 0f b6 d2 c1 e2 08 30 e4 09 d0 3d ff 00 00 00 77 6c 85 c0 74 10 0f b6 03 84 c0 74 09 f3 90 <0f> b6 03 84 c0 75 f7 b8 01 00 00 00 66 89 03 5b 41 5c 41 5d 41 5e
2026-02-02T16:13:33.017491-08:00 s19 kernel: RSP: 0018:ffffd09580294d00 EFLAGS: 00000002
2026-02-02T16:13:33.017492-08:00 s19 kernel: RAX: 0000000000000001 RBX: ffff89298e5b2f88 RCX: 0000000000000000
2026-02-02T16:13:33.017492-08:00 s19 kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff89298e5b2f88
2026-02-02T16:13:33.017493-08:00 s19 kernel: RBP: ffffd09580294d28 R08: ffff89298e5b2f40 R09: 0000000000000000
2026-02-02T16:13:33.017493-08:00 s19 kernel: R10: 000000000000082a R11: 000000000000000c R12: 0000000000000400
2026-02-02T16:13:33.017494-08:00 s19 kernel: R13: ffffd09580294e78 R14: ffffffffbc9dbf40 R15: ffffffffbc9dbf40
2026-02-02T16:13:33.017494-08:00 s19 kernel: FS: 00007b6638d38740(0000) GS:ffff8929d18d7000(0000) knlGS:0000000000000000
2026-02-02T16:13:33.017495-08:00 s19 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2026-02-02T16:13:33.017495-08:00 s19 kernel: CR2: 000073fa8d0ee200 CR3: 00000001ec51c001 CR4: 00000000007726f0
2026-02-02T16:13:33.017496-08:00 s19 kernel: PKRU: 55555554
2026-02-02T16:13:33.017496-08:00 s19 kernel: Call Trace:
2026-02-02T16:13:33.017497-08:00 s19 kernel: <IRQ>
2026-02-02T16:13:33.017497-08:00 s19 kernel: _raw_spin_lock+0x3f/0x60
2026-02-02T16:13:33.017498-08:00 s19 kernel: raw_spin_rq_lock_nested+0x23/0xb0
2026-02-02T16:13:33.017498-08:00 s19 kernel: _raw_spin_rq_lock_irqsave+0x21/0x40
2026-02-02T16:13:33.017499-08:00 s19 kernel: sched_balance_rq+0x705/0x14d0
2026-02-02T16:13:33.017499-08:00 s19 kernel: sched_balance_domains+0x26f/0x3c0
2026-02-02T16:13:33.017500-08:00 s19 kernel: sched_balance_softirq+0x51/0x80
2026-02-02T16:13:33.017500-08:00 s19 kernel: handle_softirqs+0xe7/0x340
2026-02-02T16:13:33.017501-08:00 s19 kernel: __irq_exit_rcu+0x10e/0x130
2026-02-02T16:13:33.017502-08:00 s19 kernel: irq_exit_rcu+0xe/0x20
2026-02-02T16:13:33.017502-08:00 s19 kernel: sysvec_apic_timer_interrupt+0xa0/0xc0
2026-02-02T16:13:33.017503-08:00 s19 kernel: </IRQ>
2026-02-02T16:13:33.017503-08:00 s19 kernel: <TASK>
2026-02-02T16:13:33.017504-08:00 s19 kernel: asm_sysvec_apic_timer_interrupt+0x1b/0x20
2026-02-02T16:13:33.017504-08:00 s19 kernel: RIP: 0010:do_syscall_64+0x47/0x500
2026-02-02T16:13:33.017505-08:00 s19 kernel: Code: 00 00 48 83 c0 0f 25 f8 07 00 00 48 29 c4 48 8d 44 24 0f 48 83 e0 f0 4d 63 ee 0f 1f 44 00 00 0f 1f 44 00 00 fb 0f 1f 44 00 00 <65> 4c 8b 25 c9 d4 43 01 49 8b 54 24 08 f6 c2 bf 0f 85 e8 02 00 00
2026-02-02T16:13:33.017505-08:00 s19 kernel: RSP: 0018:ffffd095ad857ca0 EFLAGS: 00000286
2026-02-02T16:13:33.017506-08:00 s19 kernel: RAX: ffffd095ad857ca0 RBX: ffffd095ad857f48 RCX: 0000000000000000
2026-02-02T16:13:33.017506-08:00 s19 kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffd095ad857f48
2026-02-02T16:13:33.017507-08:00 s19 kernel: RBP: ffffd095ad857f38 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:13:33.017507-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
2026-02-02T16:13:33.017508-08:00 s19 kernel: R13: 0000000000000001 R14: 0000000000000001 R15: 0000000000000000
2026-02-02T16:13:33.017508-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017509-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:13:33.017510-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:13:33.017510-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:13:33.017511-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017511-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017512-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017513-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:13:33.017513-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017514-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017514-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017515-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:13:33.017515-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017516-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017516-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017517-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:13:33.017517-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:13:33.017518-08:00 s19 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
2026-02-02T16:13:33.017519-08:00 s19 kernel: RIP: 0033:0x7b6638b1c5a4
2026-02-02T16:13:33.017519-08:00 s19 kernel: Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d a5 ea 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89
2026-02-02T16:13:33.017520-08:00 s19 kernel: RSP: 002b:00007ffc8d0f7d88 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
2026-02-02T16:13:33.017521-08:00 s19 kernel: RAX: ffffffffffffffda RBX: 0000000000002000 RCX: 00007b6638b1c5a4
2026-02-02T16:13:33.017521-08:00 s19 kernel: RDX: 0000000000002000 RSI: 000056a0858e24a0 RDI: 0000000000000001
2026-02-02T16:13:33.017522-08:00 s19 kernel: RBP: 00007ffc8d0f7de0 R08: 00007b6638c03b20 R09: 0000000000000000
2026-02-02T16:13:33.017522-08:00 s19 kernel: R10: 0000000000000001 R11: 0000000000000202 R12: 0000000000002000
2026-02-02T16:13:33.017523-08:00 s19 kernel: R13: 000056a0858e24a0 R14: 0000000000002000 R15: 0000000000002000
2026-02-02T16:13:33.017523-08:00 s19 kernel: </TASK>
2026-02-02T16:13:33.017524-08:00 s19 kernel: watchdog: CPU11: Watchdog detected hard LOCKUP on cpu 11
2026-02-02T16:13:33.017524-08:00 s19 kernel: Modules linked in: tls snd_hda_codec_intelhdmi snd_hda_codec_alc882 snd_hda_codec_realtek_lib qrtr snd_hda_codec_generic snd_hda_intel snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common bridge intel_uncore_frequency snd_sof stp intel_uncore_frequency_common llc snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation cfg80211 snd_soc_sdw_utils snd_soc_acpi soundwire_bus snd_soc_sdca crc8 snd_soc_avs snd_soc_hda_codec snd_hda_ext_core intel_tcc_cooling x86_pkg_temp_thermal snd_hda_codec intel_powerclamp coretemp snd_hda_core snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep binfmt_misc kvm_intel cmdlinepart i915 snd_soc_core spi_nor snd_compress ee1004 mtd mei_hdcp mei_pxp ac97_bus drm_buddy snd_pcm_dmaengine kvm snd_pcm ttm irqbypass
2026-02-02T16:13:33.017525-08:00 s19 kernel: drm_display_helper i2c_i801 eeepc_wmi snd_timer rapl nls_iso8859_1 intel_cstate asus_wmi snd i2c_smbus platform_profile spi_intel_pci cec wmi_bmof i2c_mux mei_me soundcore intel_wmi_thunderbolt sparse_keymap mxm_wmi spi_intel rc_core mei i2c_algo_bit intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec acpi_pad acpi_tad joydev input_leds mac_hid sch_fq_codel dm_multipath msr nvme_fabrics efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b libblake2b raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 linear hid_generic usbhid hid nvme nvme_core nvme_keyring ghash_clmulni_intel nvme_auth intel_lpss_pci ahci igc intel_lpss hkdf libahci idma64 video wmi aesni_intel
2026-02-02T16:13:33.017526-08:00 s19 kernel: CPU: 11 UID: 1000 PID: 3597 Comm: yes Tainted: G D 6.19.0-rc1-pz #1 PREEMPT(full)
2026-02-02T16:13:33.017526-08:00 s19 kernel: Tainted: [D]=DIE
2026-02-02T16:13:33.017527-08:00 s19 kernel: Hardware name: ASUS System Product Name/PRIME Z490-A, BIOS 9902 09/15/2021
2026-02-02T16:13:33.017527-08:00 s19 kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x238/0x300
2026-02-02T16:13:33.017528-08:00 s19 kernel: Code: 41 c1 e5 10 c1 e0 12 45 89 ee 41 09 c6 44 89 f0 c1 e8 10 66 87 43 02 89 c2 c1 e2 10 81 fa ff ff 00 00 77 51 31 d2 eb 02 f3 90 <8b> 03 66 85 c0 75 f7 44 39 f0 0f 84 91 00 00 00 c6 03 01 48 85 d2
2026-02-02T16:13:33.017529-08:00 s19 kernel: RSP: 0018:ffffd095863eefe0 EFLAGS: 00000002
2026-02-02T16:13:33.017529-08:00 s19 kernel: RAX: 0000000000300101 RBX: ffff89298e5b2f88 RCX: ffffffffbc9dbf40
2026-02-02T16:13:33.017530-08:00 s19 kernel: RDX: 0000000000000000 RSI: 0000000000000101 RDI: ffff89298e5b2f88
2026-02-02T16:13:33.017530-08:00 s19 kernel: RBP: ffffd095863ef008 R08: 0000000000000000 R09: 000000000000000b
2026-02-02T16:13:33.017531-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff89298e5b3e40
2026-02-02T16:13:33.017531-08:00 s19 kernel: R13: 0000000000000000 R14: 0000000000300000 R15: ffff89298e5b2f40
2026-02-02T16:13:33.017585-08:00 s19 kernel: FS: 00007e443ac91740(0000) GS:ffff8929d1bd7000(0000) knlGS:0000000000000000
2026-02-02T16:13:33.017585-08:00 s19 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2026-02-02T16:13:33.017586-08:00 s19 kernel: CR2: 0000000000000051 CR3: 0000000147ece001 CR4: 00000000007726f0
2026-02-02T16:13:33.017586-08:00 s19 kernel: PKRU: 55555554
2026-02-02T16:13:33.017587-08:00 s19 kernel: Call Trace:
2026-02-02T16:13:33.017588-08:00 s19 kernel: <TASK>
2026-02-02T16:13:33.017588-08:00 s19 kernel: _raw_spin_lock+0x3f/0x60
2026-02-02T16:13:33.017589-08:00 s19 kernel: raw_spin_rq_lock_nested+0x23/0xb0
2026-02-02T16:13:33.017589-08:00 s19 kernel: try_to_wake_up+0x272/0x780
2026-02-02T16:13:33.017590-08:00 s19 kernel: wake_up_process+0x15/0x30
2026-02-02T16:13:33.017590-08:00 s19 kernel: kick_pool+0x8c/0x1b0
2026-02-02T16:13:33.017591-08:00 s19 kernel: __queue_work+0x2f9/0x570
2026-02-02T16:13:33.017591-08:00 s19 kernel: queue_work_on+0x77/0x80
2026-02-02T16:13:33.017592-08:00 s19 kernel: drm_fb_helper_damage.part.0+0xb3/0xe0
2026-02-02T16:13:33.017592-08:00 s19 kernel: drm_fb_helper_damage_area+0x31/0x50
2026-02-02T16:13:33.017593-08:00 s19 kernel: intel_fbdev_defio_imageblit+0x2b/0x40 [i915]
2026-02-02T16:13:33.017594-08:00 s19 kernel: soft_cursor+0x198/0x240
2026-02-02T16:13:33.017594-08:00 s19 kernel: bit_cursor+0x3cd/0x670
2026-02-02T16:13:33.017595-08:00 s19 kernel: ? __pfx_bit_cursor+0x10/0x10
2026-02-02T16:13:33.017595-08:00 s19 kernel: fbcon_cursor+0x130/0x1b0
2026-02-02T16:13:33.017596-08:00 s19 kernel: hide_cursor+0x2f/0xd0
2026-02-02T16:13:33.017596-08:00 s19 kernel: vt_console_print+0x475/0x4f0
2026-02-02T16:13:33.017597-08:00 s19 kernel: ? nbcon_get_cpu_emergency_nesting+0xe/0x40
2026-02-02T16:13:33.017597-08:00 s19 kernel: console_flush_one_record+0x2a4/0x3b0
2026-02-02T16:13:33.017598-08:00 s19 kernel: console_unlock+0x7d/0x130
2026-02-02T16:13:33.017598-08:00 s19 kernel: vprintk_emit+0x386/0x3f0
2026-02-02T16:13:33.017599-08:00 s19 kernel: vprintk_default+0x1d/0x30
2026-02-02T16:13:33.017599-08:00 s19 kernel: vprintk+0x18/0x50
2026-02-02T16:13:33.017600-08:00 s19 kernel: _printk+0x5f/0x90
2026-02-02T16:13:33.017600-08:00 s19 kernel: oops_exit+0x26/0x50
2026-02-02T16:13:33.017601-08:00 s19 kernel: oops_end+0x6d/0xf0
2026-02-02T16:13:33.017601-08:00 s19 kernel: page_fault_oops+0x199/0x5c0
2026-02-02T16:13:33.017602-08:00 s19 kernel: ? update_load_avg+0x7b/0x250
2026-02-02T16:13:33.017602-08:00 s19 kernel: do_user_addr_fault+0x4af/0x8d0
2026-02-02T16:13:33.017603-08:00 s19 kernel: ? update_cfs_rq_load_avg+0x2e/0x230
2026-02-02T16:13:33.017603-08:00 s19 kernel: exc_page_fault+0x7f/0x1b0
2026-02-02T16:13:33.017604-08:00 s19 kernel: asm_exc_page_fault+0x27/0x30
2026-02-02T16:13:33.017604-08:00 s19 kernel: RIP: 0010:pick_task_fair+0x3e/0x140
2026-02-02T16:13:33.017605-08:00 s19 kernel: Code: 53 48 89 fb 48 83 ec 08 8b 8f 10 01 00 00 85 c9 0f 84 96 00 00 00 4c 89 ef 45 31 e4 eb 27 66 90 be 01 00 00 00 e8 92 41 ff ff <80> 78 51 00 75 5e 48 85 c0 74 69 48 8b b8 b0 00 00 00 48 85 ff 0f
2026-02-02T16:13:33.017606-08:00 s19 kernel: RSP: 0018:ffffd095863ef7a8 EFLAGS: 00010046
2026-02-02T16:13:33.017606-08:00 s19 kernel: RAX: 0000000000000000 RBX: ffff89298e5b2f40 RCX: 0000000000000000
2026-02-02T16:13:33.017607-08:00 s19 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
2026-02-02T16:13:33.017607-08:00 s19 kernel: RBP: ffffd095863ef7c8 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:13:33.017608-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
2026-02-02T16:13:33.017608-08:00 s19 kernel: R13: ffff89298e5b3040 R14: ffff89298e5b2f40 R15: 0000000000000000
2026-02-02T16:13:33.017609-08:00 s19 kernel: ? pick_task_fair+0x3e/0x140
2026-02-02T16:13:33.017609-08:00 s19 kernel: pick_next_task_fair+0x37/0x930
2026-02-02T16:13:33.017610-08:00 s19 kernel: __pick_next_task+0x44/0x1e0
2026-02-02T16:13:33.017610-08:00 s19 kernel: __schedule+0x1e6/0x1810
2026-02-02T16:13:33.017611-08:00 s19 kernel: ? aa_file_perm+0x6e/0x5d0
2026-02-02T16:13:33.017612-08:00 s19 kernel: message repeated 2 times: [ ? aa_file_perm+0x6e/0x5d0]
2026-02-02T16:13:33.017613-08:00 s19 kernel: ? show_vpd_pgb2+0x14/0x80
2026-02-02T16:13:33.017614-08:00 s19 kernel: ? aa_file_perm+0x6e/0x5d0
2026-02-02T16:13:33.017615-08:00 s19 kernel: preempt_schedule_irq+0x38/0x70
2026-02-02T16:13:33.017615-08:00 s19 kernel: raw_irqentry_exit_cond_resched+0x31/0x40
2026-02-02T16:13:33.017616-08:00 s19 kernel: irqentry_exit+0x34/0x710
2026-02-02T16:13:33.017616-08:00 s19 kernel: sysvec_apic_timer_interrupt+0x57/0xc0
2026-02-02T16:13:33.017617-08:00 s19 kernel: asm_sysvec_apic_timer_interrupt+0x1b/0x20
2026-02-02T16:13:33.017617-08:00 s19 kernel: RIP: 0010:vfs_write+0x2c1/0x480
2026-02-02T16:13:33.017618-08:00 s19 kernel: Code: 00 00 e8 2e 92 c2 05 49 89 c4 48 3d ef fd ff ff 0f 84 2f 01 00 00 48 85 c0 0f 8e 63 fe ff ff 48 8b 45 a8 49 89 06 41 8b 45 04 <25> 00 00 00 06 3d 00 00 00 02 74 57 49 8b 7d 48 4c 8b 4f 30 49 8b
2026-02-02T16:13:33.017618-08:00 s19 kernel: RSP: 0018:ffffd095863efa80 EFLAGS: 00000206
2026-02-02T16:13:33.017619-08:00 s19 kernel: RAX: 000000000c0c001e RBX: ffff8922879caa00 RCX: ffffd095863efb20
2026-02-02T16:13:33.017619-08:00 s19 kernel: RDX: 0000000000000000 RSI: 0000633d552d74a0 RDI: ffff892287eba840
2026-02-02T16:13:33.017620-08:00 s19 kernel: RBP: ffffd095863efb10 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:13:33.017620-08:00 s19 kernel: R10: 0000000000002000 R11: 0000000000000000 R12: 0000000000002000
2026-02-02T16:13:33.017621-08:00 s19 kernel: R13: ffff892287eba840 R14: ffffd095863efb20 R15: 0000633d552d74a0
2026-02-02T16:13:33.017621-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017622-08:00 s19 kernel: ksys_write+0x71/0xf0
2026-02-02T16:13:33.017622-08:00 s19 kernel: __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017623-08:00 s19 kernel: x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017623-08:00 s19 kernel: do_syscall_64+0x81/0x500
2026-02-02T16:13:33.017624-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:13:33.017624-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:13:33.017625-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017627-08:00 s19 kernel: message repeated 2 times: [ ? ksys_write+0x71/0xf0]
2026-02-02T16:13:33.017628-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017629-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017629-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:13:33.017630-08:00 s19 kernel: ? common_file_perm+0x6b/0x1a0
2026-02-02T16:13:33.017630-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:13:33.017631-08:00 s19 kernel: message repeated 2 times: [ ? vfs_write+0x324/0x480]
2026-02-02T16:13:33.017632-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017633-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:13:33.017634-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:13:33.017634-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:13:33.017635-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:13:33.017635-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:13:33.017636-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:13:33.017637-08:00 s19 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
2026-02-02T16:13:33.017637-08:00 s19 kernel: RIP: 0033:0x7e443ab1c5a4
2026-02-02T16:13:33.017638-08:00 s19 kernel: Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d a5 ea 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89
2026-02-02T16:13:33.017639-08:00 s19 kernel: RSP: 002b:00007ffe3db6c1f8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
2026-02-02T16:13:33.017639-08:00 s19 kernel: RAX: ffffffffffffffda RBX: 0000000000002000 RCX: 00007e443ab1c5a4
2026-02-02T16:13:33.017640-08:00 s19 kernel: RDX: 0000000000002000 RSI: 0000633d552d74a0 RDI: 0000000000000001
2026-02-02T16:13:33.017640-08:00 s19 kernel: RBP: 00007ffe3db6c250 R08: 00007e443ac03b20 R09: 0000000000000000
2026-02-02T16:13:33.017641-08:00 s19 kernel: R10: 0000000000000001 R11: 0000000000000202 R12: 0000000000002000
2026-02-02T16:13:33.017641-08:00 s19 kernel: R13: 0000633d552d74a0 R14: 0000000000002000 R15: 0000000000002000
2026-02-02T16:13:33.017642-08:00 s19 kernel: </TASK>
2026-02-02T16:13:33.017642-08:00 s19 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
2026-02-02T16:13:33.017643-08:00 s19 kernel: rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P12753 P6509 P5261 P15769 P8594 P5431 P10107 P14333 P6906 } 61514 jiffies s: 873 root: 0x0/T
2026-02-02T16:13:33.017643-08:00 s19 kernel: rcu: blocking rcu_node structures (internal RCU debug):
2026-02-02T16:15:23.234373-08:00 s19 kernel: watchdog: CPU0: Watchdog detected hard LOCKUP on cpu 0
2026-02-02T16:15:23.234414-08:00 s19 kernel: Modules linked in: tls snd_hda_codec_intelhdmi snd_hda_codec_alc882 snd_hda_codec_realtek_lib qrtr snd_hda_codec_generic snd_hda_intel snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common bridge intel_uncore_frequency snd_sof stp intel_uncore_frequency_common llc snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation cfg80211 snd_soc_sdw_utils snd_soc_acpi soundwire_bus snd_soc_sdca crc8 snd_soc_avs snd_soc_hda_codec snd_hda_ext_core intel_tcc_cooling x86_pkg_temp_thermal snd_hda_codec intel_powerclamp coretemp snd_hda_core snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep binfmt_misc kvm_intel cmdlinepart i915 snd_soc_core spi_nor snd_compress ee1004 mtd mei_hdcp mei_pxp ac97_bus drm_buddy snd_pcm_dmaengine kvm snd_pcm ttm irqbypass
2026-02-02T16:15:23.234418-08:00 s19 kernel: drm_display_helper i2c_i801 eeepc_wmi snd_timer rapl nls_iso8859_1 intel_cstate asus_wmi snd i2c_smbus platform_profile spi_intel_pci cec wmi_bmof i2c_mux mei_me soundcore intel_wmi_thunderbolt sparse_keymap mxm_wmi spi_intel rc_core mei i2c_algo_bit intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec acpi_pad acpi_tad joydev input_leds mac_hid sch_fq_codel dm_multipath msr nvme_fabrics efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b libblake2b raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 linear hid_generic usbhid hid nvme nvme_core nvme_keyring ghash_clmulni_intel nvme_auth intel_lpss_pci ahci igc intel_lpss hkdf libahci idma64 video wmi aesni_intel
2026-02-02T16:15:23.234419-08:00 s19 kernel: CPU: 0 UID: 1000 PID: 4539 Comm: yes Tainted: G D 6.19.0-rc1-pz #1 PREEMPT(full)
2026-02-02T16:15:23.234420-08:00 s19 kernel: Tainted: [D]=DIE
2026-02-02T16:15:23.234420-08:00 s19 kernel: Hardware name: ASUS System Product Name/PRIME Z490-A, BIOS 9902 09/15/2021
2026-02-02T16:15:23.234421-08:00 s19 kernel: RIP: 0010:vprintk_emit+0x30f/0x3f0
2026-02-02T16:15:23.234421-08:00 s19 kernel: Code: 84 d5 00 00 00 65 4c 3b 35 16 52 58 02 0f 84 c7 00 00 00 48 c7 c7 28 6b a5 bc c6 05 05 ad 61 02 01 e8 15 26 16 01 eb 02 f3 90 <44> 0f b6 25 f4 ac 61 02 41 80 fc 01 0f 87 ac ca dc ff 41 83 e4 01
2026-02-02T16:15:23.234422-08:00 s19 kernel: RSP: 0018:ffffd09580003cc0 EFLAGS: 00000002
2026-02-02T16:15:23.234422-08:00 s19 kernel: RAX: 0000000000000000 RBX: 0000000000000035 RCX: 0000000000000000
2026-02-02T16:15:23.234423-08:00 s19 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
2026-02-02T16:15:23.234423-08:00 s19 kernel: RBP: ffffd09580003d18 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:15:23.234424-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
2026-02-02T16:15:23.234425-08:00 s19 kernel: R13: 0000000000000046 R14: ffff8922879caa00 R15: ffff89298e0219c0
2026-02-02T16:15:23.234425-08:00 s19 kernel: FS: 00007a4931c5b740(0000) GS:ffff8929d1657000(0000) knlGS:0000000000000000
2026-02-02T16:15:23.234426-08:00 s19 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2026-02-02T16:15:23.234426-08:00 s19 kernel: CR2: 00005e6c5c0456c8 CR3: 00000001529ed001 CR4: 00000000007726f0
2026-02-02T16:15:23.234427-08:00 s19 kernel: PKRU: 55555554
2026-02-02T16:15:23.234427-08:00 s19 kernel: Call Trace:
2026-02-02T16:15:23.234428-08:00 s19 kernel: <IRQ>
2026-02-02T16:15:23.234428-08:00 s19 kernel: vprintk_default+0x1d/0x30
2026-02-02T16:15:23.234429-08:00 s19 kernel: vprintk+0x18/0x50
2026-02-02T16:15:23.234429-08:00 s19 kernel: _printk+0x5f/0x90
2026-02-02T16:15:23.234430-08:00 s19 kernel: rcu_sched_clock_irq+0xb8a/0x1800
2026-02-02T16:15:23.234430-08:00 s19 kernel: ? update_cfs_rq_load_avg+0x2e/0x230
2026-02-02T16:15:23.234431-08:00 s19 kernel: ? __cgroup_account_cputime_field+0x41/0x80
2026-02-02T16:15:23.234431-08:00 s19 kernel: ? account_system_index_time+0x9a/0xd0
2026-02-02T16:15:23.234432-08:00 s19 kernel: ? tmigr_requires_handle_remote+0x8c/0x110
2026-02-02T16:15:23.234432-08:00 s19 kernel: update_process_times+0x72/0xd0
2026-02-02T16:15:23.234433-08:00 s19 kernel: tick_nohz_handler+0x97/0x1b0
2026-02-02T16:15:23.234433-08:00 s19 kernel: ? __pfx_tick_nohz_handler+0x10/0x10
2026-02-02T16:15:23.234434-08:00 s19 kernel: __hrtimer_run_queues+0x109/0x240
2026-02-02T16:15:23.234434-08:00 s19 kernel: hrtimer_interrupt+0xfd/0x260
2026-02-02T16:15:23.234435-08:00 s19 kernel: __sysvec_apic_timer_interrupt+0x56/0x130
2026-02-02T16:15:23.234436-08:00 s19 kernel: sysvec_apic_timer_interrupt+0x9b/0xc0
2026-02-02T16:15:23.234436-08:00 s19 kernel: </IRQ>
2026-02-02T16:15:23.234437-08:00 s19 kernel: <TASK>
2026-02-02T16:15:23.234437-08:00 s19 kernel: asm_sysvec_apic_timer_interrupt+0x1b/0x20
2026-02-02T16:15:23.234438-08:00 s19 kernel: RIP: 0010:vfs_write+0x2e0/0x480
2026-02-02T16:15:23.234438-08:00 s19 kernel: Code: 48 8b 45 a8 49 89 06 41 8b 45 04 25 00 00 00 06 3d 00 00 00 02 74 57 49 8b 7d 48 4c 8b 4f 30 49 8b 41 28 48 8b 80 b8 03 00 00 <48> 85 c0 74 3f 48 8b 40 08 48 85 c0 74 36 41 0f b7 01 49 8d 75 40
2026-02-02T16:15:23.234439-08:00 s19 kernel: RSP: 0018:ffffd0958814fb90 EFLAGS: 00000206
2026-02-02T16:15:23.234439-08:00 s19 kernel: RAX: ffff8922538a76e0 RBX: ffff892292845400 RCX: ffffd0958814fc30
2026-02-02T16:15:23.234440-08:00 s19 kernel: RDX: 0000000000000000 RSI: 00005ec7f3eb54a0 RDI: ffff8922405ad480
2026-02-02T16:15:23.234440-08:00 s19 kernel: RBP: ffffd0958814fc20 R08: 0000000000000000 R09: ffff892241aba798
2026-02-02T16:15:23.234686-08:00 s19 kernel: R10: 0000000000002000 R11: 0000000000000000 R12: 0000000000002000
2026-02-02T16:15:23.234688-08:00 s19 kernel: R13: ffff8922929dec00 R14: ffffd0958814fc30 R15: 00005ec7f3eb54a0
2026-02-02T16:15:23.234689-08:00 s19 kernel: ksys_write+0x71/0xf0
2026-02-02T16:15:23.234689-08:00 s19 kernel: __x64_sys_write+0x19/0x30
2026-02-02T16:15:23.234690-08:00 s19 kernel: x64_sys_call+0x259/0x26e0
2026-02-02T16:15:23.234690-08:00 s19 kernel: do_syscall_64+0x81/0x500
2026-02-02T16:15:23.234691-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:15:23.234691-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:15:23.234693-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:15:23.234694-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:15:23.234695-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:15:23.234695-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:15:23.234696-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:23.234696-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:15:23.234697-08:00 s19 kernel: ? vfs_write+0x324/0x480
2026-02-02T16:15:23.234698-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:15:23.234698-08:00 s19 kernel: ? ksys_write+0x71/0xf0
2026-02-02T16:15:23.234699-08:00 s19 kernel: ? __x64_sys_write+0x19/0x30
2026-02-02T16:15:23.234700-08:00 s19 kernel: ? x64_sys_call+0x259/0x26e0
2026-02-02T16:15:23.234700-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:23.234701-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:15:23.234701-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:15:23.234702-08:00 s19 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
2026-02-02T16:15:23.234703-08:00 s19 kernel: RIP: 0033:0x7a4931b1c5a4
2026-02-02T16:15:23.234703-08:00 s19 kernel: Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d a5 ea 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89
2026-02-02T16:15:23.234704-08:00 s19 kernel: RSP: 002b:00007ffd835d42d8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
2026-02-02T16:15:23.234704-08:00 s19 kernel: RAX: ffffffffffffffda RBX: 0000000000002000 RCX: 00007a4931b1c5a4
2026-02-02T16:15:23.234705-08:00 s19 kernel: RDX: 0000000000002000 RSI: 00005ec7f3eb54a0 RDI: 0000000000000001
2026-02-02T16:15:23.234706-08:00 s19 kernel: RBP: 00007ffd835d4330 R08: 00007a4931c03b20 R09: 0000000000000000
2026-02-02T16:15:23.234706-08:00 s19 kernel: R10: 0000000000000001 R11: 0000000000000202 R12: 0000000000002000
2026-02-02T16:15:23.234707-08:00 s19 kernel: R13: 00005ec7f3eb54a0 R14: 0000000000002000 R15: 0000000000002000
2026-02-02T16:15:23.234707-08:00 s19 kernel: </TASK>
2026-02-02T16:15:23.234708-08:00 s19 kernel: watchdog: BUG: soft lockup - CPU#6 stuck for 23s! [systemd-network:841]
2026-02-02T16:15:23.234708-08:00 s19 kernel: Modules linked in: tls snd_hda_codec_intelhdmi snd_hda_codec_alc882 snd_hda_codec_realtek_lib qrtr snd_hda_codec_generic snd_hda_intel snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common bridge intel_uncore_frequency snd_sof stp intel_uncore_frequency_common llc snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation cfg80211 snd_soc_sdw_utils snd_soc_acpi soundwire_bus snd_soc_sdca crc8 snd_soc_avs snd_soc_hda_codec snd_hda_ext_core intel_tcc_cooling x86_pkg_temp_thermal snd_hda_codec intel_powerclamp coretemp snd_hda_core snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep binfmt_misc kvm_intel cmdlinepart i915 snd_soc_core spi_nor snd_compress ee1004 mtd mei_hdcp mei_pxp ac97_bus drm_buddy snd_pcm_dmaengine kvm snd_pcm ttm irqbypass
2026-02-02T16:15:23.234742-08:00 s19 kernel: drm_display_helper i2c_i801 eeepc_wmi snd_timer rapl nls_iso8859_1 intel_cstate asus_wmi snd i2c_smbus platform_profile spi_intel_pci cec wmi_bmof i2c_mux mei_me soundcore intel_wmi_thunderbolt sparse_keymap mxm_wmi spi_intel rc_core mei i2c_algo_bit intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec acpi_pad acpi_tad joydev input_leds mac_hid sch_fq_codel dm_multipath msr nvme_fabrics efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b libblake2b raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 linear hid_generic usbhid hid nvme nvme_core nvme_keyring ghash_clmulni_intel nvme_auth intel_lpss_pci ahci igc intel_lpss hkdf libahci idma64 video wmi aesni_intel
2026-02-02T16:15:23.234743-08:00 s19 kernel: CPU: 6 UID: 998 PID: 841 Comm: systemd-network Tainted: G D 6.19.0-rc1-pz #1 PREEMPT(full)
2026-02-02T16:15:23.234743-08:00 s19 kernel: Tainted: [D]=DIE
2026-02-02T16:15:23.234744-08:00 s19 kernel: Hardware name: ASUS System Product Name/PRIME Z490-A, BIOS 9902 09/15/2021
2026-02-02T16:15:23.234744-08:00 s19 kernel: RIP: 0010:smp_call_function_many_cond+0x12c/0x590
2026-02-02T16:15:23.234745-08:00 s19 kernel: Code: 36 4c 63 e0 49 8b 5d 00 49 81 fc 00 20 00 00 0f 83 43 04 00 00 4a 03 1c e5 e0 50 d9 bb 8b 53 08 48 89 de 83 e2 01 74 0a f3 90 <8b> 4e 08 83 e1 01 75 f6 83 c0 01 eb b0 48 83 c4 50 5b 41 5c 41 5d
2026-02-02T16:15:23.234746-08:00 s19 kernel: RSP: 0018:ffffd09582a4b808 EFLAGS: 00000202
2026-02-02T16:15:23.234746-08:00 s19 kernel: RAX: 0000000000000000 RBX: ffff89298e03c900 RCX: 0000000000000001
2026-02-02T16:15:23.234747-08:00 s19 kernel: RDX: 0000000000000001 RSI: ffff89298e03c900 RDI: 0000000000000000
2026-02-02T16:15:23.234747-08:00 s19 kernel: RBP: ffffd09582a4b880 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:15:23.234748-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
2026-02-02T16:15:23.234748-08:00 s19 kernel: R13: ffff89298e334300 R14: 0000000000000006 R15: 0000000000000006
2026-02-02T16:15:23.234749-08:00 s19 kernel: FS: 0000000000000000(0000) GS:ffff8929d1957000(0000) knlGS:0000000000000000
2026-02-02T16:15:23.234749-08:00 s19 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2026-02-02T16:15:23.234750-08:00 s19 kernel: CR2: 000070f72bb90030 CR3: 0000000174642006 CR4: 00000000007726f0
2026-02-02T16:15:23.234750-08:00 s19 kernel: PKRU: 55555554
2026-02-02T16:15:23.234751-08:00 s19 kernel: Call Trace:
2026-02-02T16:15:23.234751-08:00 s19 kernel: <TASK>
2026-02-02T16:15:23.234752-08:00 s19 kernel: ? __pfx_flush_tlb_func+0x10/0x10
2026-02-02T16:15:23.234753-08:00 s19 kernel: on_each_cpu_cond_mask+0x24/0x60
2026-02-02T16:15:23.234753-08:00 s19 kernel: native_flush_tlb_multi+0x73/0x170
2026-02-02T16:15:23.234754-08:00 s19 kernel: flush_tlb_mm_range+0x2d8/0x790
2026-02-02T16:15:23.234754-08:00 s19 kernel: tlb_finish_mmu+0x12f/0x1b0
2026-02-02T16:15:23.234755-08:00 s19 kernel: exit_mmap+0x192/0x3f0
2026-02-02T16:15:23.234755-08:00 s19 kernel: __mmput+0x41/0x150
2026-02-02T16:15:23.234756-08:00 s19 kernel: mmput+0x31/0x40
2026-02-02T16:15:23.234756-08:00 s19 kernel: do_exit+0x276/0xa40
2026-02-02T16:15:23.234757-08:00 s19 kernel: ? collect_signal+0xb0/0x150
2026-02-02T16:15:23.234757-08:00 s19 kernel: do_group_exit+0x34/0x90
2026-02-02T16:15:23.234758-08:00 s19 kernel: get_signal+0x926/0x950
2026-02-02T16:15:23.234758-08:00 s19 kernel: arch_do_signal_or_restart+0x41/0x240
2026-02-02T16:15:23.234759-08:00 s19 kernel: exit_to_user_mode_loop+0x8e/0x520
2026-02-02T16:15:23.234759-08:00 s19 kernel: do_syscall_64+0x2ab/0x500
2026-02-02T16:15:23.234760-08:00 s19 kernel: ? __do_sys_gettid+0x1a/0x30
2026-02-02T16:15:23.234760-08:00 s19 kernel: ? x64_sys_call+0x72d/0x26e0
2026-02-02T16:15:23.234761-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:23.234761-08:00 s19 kernel: ? __x64_sys_close+0x3e/0x90
2026-02-02T16:15:23.234762-08:00 s19 kernel: ? x64_sys_call+0x1b7c/0x26e0
2026-02-02T16:15:23.234763-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:23.234763-08:00 s19 kernel: ? __sys_sendmsg+0x8c/0x100
2026-02-02T16:15:23.234764-08:00 s19 kernel: ? __x64_sys_sendmsg+0x1d/0x30
2026-02-02T16:15:23.234764-08:00 s19 kernel: ? x64_sys_call+0x25cc/0x26e0
2026-02-02T16:15:23.234765-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:23.234765-08:00 s19 kernel: ? __do_sys_gettid+0x1a/0x30
2026-02-02T16:15:23.234766-08:00 s19 kernel: ? x64_sys_call+0x72d/0x26e0
2026-02-02T16:15:23.234766-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:23.234767-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:15:23.234767-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:15:23.234768-08:00 s19 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
2026-02-02T16:15:23.234769-08:00 s19 kernel: RIP: 0033:0x7fca8d52a037
2026-02-02T16:15:23.234769-08:00 s19 kernel: Code: Unable to access opcode bytes at 0x7fca8d52a00d.
2026-02-02T16:15:23.234770-08:00 s19 kernel: RSP: 002b:00007ffd2688e2f8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8
2026-02-02T16:15:23.234770-08:00 s19 kernel: RAX: fffffffffffffffc RBX: 0000000000000022 RCX: 00007fca8d52a037
2026-02-02T16:15:23.234771-08:00 s19 kernel: RDX: 0000000000000022 RSI: 000064ecd6897d00 RDI: 0000000000000005
2026-02-02T16:15:23.234771-08:00 s19 kernel: RBP: 00007ffd2688e410 R08: 0000000000000000 R09: 0000000000000005
2026-02-02T16:15:23.234772-08:00 s19 kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: 000064ecd6897d00
2026-02-02T16:15:23.234772-08:00 s19 kernel: R13: 0000000000000018 R14: 000064ecd6873c70 R15: ffffffffffffffff
2026-02-02T16:15:23.234773-08:00 s19 kernel: </TASK>
2026-02-02T16:15:51.234381-08:00 s19 kernel: watchdog: BUG: soft lockup - CPU#6 stuck for 49s! [systemd-network:841]
2026-02-02T16:15:51.234617-08:00 s19 kernel: Modules linked in: tls snd_hda_codec_intelhdmi snd_hda_codec_alc882 snd_hda_codec_realtek_lib qrtr snd_hda_codec_generic snd_hda_intel snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common bridge intel_uncore_frequency snd_sof stp intel_uncore_frequency_common llc snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation cfg80211 snd_soc_sdw_utils snd_soc_acpi soundwire_bus snd_soc_sdca crc8 snd_soc_avs snd_soc_hda_codec snd_hda_ext_core intel_tcc_cooling x86_pkg_temp_thermal snd_hda_codec intel_powerclamp coretemp snd_hda_core snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep binfmt_misc kvm_intel cmdlinepart i915 snd_soc_core spi_nor snd_compress ee1004 mtd mei_hdcp mei_pxp ac97_bus drm_buddy snd_pcm_dmaengine kvm snd_pcm ttm irqbypass
2026-02-02T16:15:51.234623-08:00 s19 kernel: drm_display_helper i2c_i801 eeepc_wmi snd_timer rapl nls_iso8859_1 intel_cstate asus_wmi snd i2c_smbus platform_profile spi_intel_pci cec wmi_bmof i2c_mux mei_me soundcore intel_wmi_thunderbolt sparse_keymap mxm_wmi spi_intel rc_core mei i2c_algo_bit intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec acpi_pad acpi_tad joydev input_leds mac_hid sch_fq_codel dm_multipath msr nvme_fabrics efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b libblake2b raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 linear hid_generic usbhid hid nvme nvme_core nvme_keyring ghash_clmulni_intel nvme_auth intel_lpss_pci ahci igc intel_lpss hkdf libahci idma64 video wmi aesni_intel
2026-02-02T16:15:51.234624-08:00 s19 kernel: CPU: 6 UID: 998 PID: 841 Comm: systemd-network Tainted: G D L 6.19.0-rc1-pz #1 PREEMPT(full)
2026-02-02T16:15:51.234624-08:00 s19 kernel: Tainted: [D]=DIE, [L]=SOFTLOCKUP
2026-02-02T16:15:51.234625-08:00 s19 kernel: Hardware name: ASUS System Product Name/PRIME Z490-A, BIOS 9902 09/15/2021
2026-02-02T16:15:51.234626-08:00 s19 kernel: RIP: 0010:smp_call_function_many_cond+0x12c/0x590
2026-02-02T16:15:51.234626-08:00 s19 kernel: Code: 36 4c 63 e0 49 8b 5d 00 49 81 fc 00 20 00 00 0f 83 43 04 00 00 4a 03 1c e5 e0 50 d9 bb 8b 53 08 48 89 de 83 e2 01 74 0a f3 90 <8b> 4e 08 83 e1 01 75 f6 83 c0 01 eb b0 48 83 c4 50 5b 41 5c 41 5d
2026-02-02T16:15:51.234627-08:00 s19 kernel: RSP: 0018:ffffd09582a4b808 EFLAGS: 00000202
2026-02-02T16:15:51.234627-08:00 s19 kernel: RAX: 0000000000000000 RBX: ffff89298e03c900 RCX: 0000000000000001
2026-02-02T16:15:51.234628-08:00 s19 kernel: RDX: 0000000000000001 RSI: ffff89298e03c900 RDI: 0000000000000000
2026-02-02T16:15:51.234629-08:00 s19 kernel: RBP: ffffd09582a4b880 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:15:51.234629-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
2026-02-02T16:15:51.234630-08:00 s19 kernel: R13: ffff89298e334300 R14: 0000000000000006 R15: 0000000000000006
2026-02-02T16:15:51.234630-08:00 s19 kernel: FS: 0000000000000000(0000) GS:ffff8929d1957000(0000) knlGS:0000000000000000
2026-02-02T16:15:51.234631-08:00 s19 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2026-02-02T16:15:51.234632-08:00 s19 kernel: CR2: 000070f72bb90030 CR3: 0000000174642006 CR4: 00000000007726f0
2026-02-02T16:15:51.234632-08:00 s19 kernel: PKRU: 55555554
2026-02-02T16:15:51.234633-08:00 s19 kernel: Call Trace:
2026-02-02T16:15:51.234633-08:00 s19 kernel: <TASK>
2026-02-02T16:15:51.234634-08:00 s19 kernel: ? __pfx_flush_tlb_func+0x10/0x10
2026-02-02T16:15:51.234634-08:00 s19 kernel: on_each_cpu_cond_mask+0x24/0x60
2026-02-02T16:15:51.234635-08:00 s19 kernel: native_flush_tlb_multi+0x73/0x170
2026-02-02T16:15:51.234635-08:00 s19 kernel: flush_tlb_mm_range+0x2d8/0x790
2026-02-02T16:15:51.234636-08:00 s19 kernel: tlb_finish_mmu+0x12f/0x1b0
2026-02-02T16:15:51.234636-08:00 s19 kernel: exit_mmap+0x192/0x3f0
2026-02-02T16:15:51.234637-08:00 s19 kernel: __mmput+0x41/0x150
2026-02-02T16:15:51.234637-08:00 s19 kernel: mmput+0x31/0x40
2026-02-02T16:15:51.234638-08:00 s19 kernel: do_exit+0x276/0xa40
2026-02-02T16:15:51.234638-08:00 s19 kernel: ? collect_signal+0xb0/0x150
2026-02-02T16:15:51.234639-08:00 s19 kernel: do_group_exit+0x34/0x90
2026-02-02T16:15:51.234639-08:00 s19 kernel: get_signal+0x926/0x950
2026-02-02T16:15:51.234640-08:00 s19 kernel: arch_do_signal_or_restart+0x41/0x240
2026-02-02T16:15:51.234640-08:00 s19 kernel: exit_to_user_mode_loop+0x8e/0x520
2026-02-02T16:15:51.234641-08:00 s19 kernel: do_syscall_64+0x2ab/0x500
2026-02-02T16:15:51.234641-08:00 s19 kernel: ? __do_sys_gettid+0x1a/0x30
2026-02-02T16:15:51.234642-08:00 s19 kernel: ? x64_sys_call+0x72d/0x26e0
2026-02-02T16:15:51.234642-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:51.234643-08:00 s19 kernel: ? __x64_sys_close+0x3e/0x90
2026-02-02T16:15:51.234643-08:00 s19 kernel: ? x64_sys_call+0x1b7c/0x26e0
2026-02-02T16:15:51.234644-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:51.234644-08:00 s19 kernel: ? __sys_sendmsg+0x8c/0x100
2026-02-02T16:15:51.234645-08:00 s19 kernel: ? __x64_sys_sendmsg+0x1d/0x30
2026-02-02T16:15:51.234646-08:00 s19 kernel: ? x64_sys_call+0x25cc/0x26e0
2026-02-02T16:15:51.234646-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:51.234647-08:00 s19 kernel: ? __do_sys_gettid+0x1a/0x30
2026-02-02T16:15:51.234647-08:00 s19 kernel: ? x64_sys_call+0x72d/0x26e0
2026-02-02T16:15:51.234648-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:15:51.234648-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:15:51.234649-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:15:51.234650-08:00 s19 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
2026-02-02T16:15:51.234651-08:00 s19 kernel: RIP: 0033:0x7fca8d52a037
2026-02-02T16:15:51.234652-08:00 s19 kernel: Code: Unable to access opcode bytes at 0x7fca8d52a00d.
2026-02-02T16:15:51.234652-08:00 s19 kernel: RSP: 002b:00007ffd2688e2f8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8
2026-02-02T16:15:51.234653-08:00 s19 kernel: RAX: fffffffffffffffc RBX: 0000000000000022 RCX: 00007fca8d52a037
2026-02-02T16:15:51.234653-08:00 s19 kernel: RDX: 0000000000000022 RSI: 000064ecd6897d00 RDI: 0000000000000005
2026-02-02T16:15:51.234654-08:00 s19 kernel: RBP: 00007ffd2688e410 R08: 0000000000000000 R09: 0000000000000005
2026-02-02T16:15:51.234654-08:00 s19 kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: 000064ecd6897d00
2026-02-02T16:15:51.234655-08:00 s19 kernel: R13: 0000000000000018 R14: 000064ecd6873c70 R15: ffffffffffffffff
2026-02-02T16:15:51.234656-08:00 s19 kernel: </TASK>
2026-02-02T16:16:19.234349-08:00 s19 kernel: watchdog: BUG: soft lockup - CPU#6 stuck for 75s! [systemd-network:841]
2026-02-02T16:16:19.234390-08:00 s19 kernel: Modules linked in: tls snd_hda_codec_intelhdmi snd_hda_codec_alc882 snd_hda_codec_realtek_lib qrtr snd_hda_codec_generic snd_hda_intel snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common bridge intel_uncore_frequency snd_sof stp intel_uncore_frequency_common llc snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation cfg80211 snd_soc_sdw_utils snd_soc_acpi soundwire_bus snd_soc_sdca crc8 snd_soc_avs snd_soc_hda_codec snd_hda_ext_core intel_tcc_cooling x86_pkg_temp_thermal snd_hda_codec intel_powerclamp coretemp snd_hda_core snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep binfmt_misc kvm_intel cmdlinepart i915 snd_soc_core spi_nor snd_compress ee1004 mtd mei_hdcp mei_pxp ac97_bus drm_buddy snd_pcm_dmaengine kvm snd_pcm ttm irqbypass
2026-02-02T16:16:19.234392-08:00 s19 kernel: drm_display_helper i2c_i801 eeepc_wmi snd_timer rapl nls_iso8859_1 intel_cstate asus_wmi snd i2c_smbus platform_profile spi_intel_pci cec wmi_bmof i2c_mux mei_me soundcore intel_wmi_thunderbolt sparse_keymap mxm_wmi spi_intel rc_core mei i2c_algo_bit intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec acpi_pad acpi_tad joydev input_leds mac_hid sch_fq_codel dm_multipath msr nvme_fabrics efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b libblake2b raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 linear hid_generic usbhid hid nvme nvme_core nvme_keyring ghash_clmulni_intel nvme_auth intel_lpss_pci ahci igc intel_lpss hkdf libahci idma64 video wmi aesni_intel
2026-02-02T16:16:19.234393-08:00 s19 kernel: CPU: 6 UID: 998 PID: 841 Comm: systemd-network Tainted: G D L 6.19.0-rc1-pz #1 PREEMPT(full)
2026-02-02T16:16:19.234393-08:00 s19 kernel: Tainted: [D]=DIE, [L]=SOFTLOCKUP
2026-02-02T16:16:19.234394-08:00 s19 kernel: Hardware name: ASUS System Product Name/PRIME Z490-A, BIOS 9902 09/15/2021
2026-02-02T16:16:19.234394-08:00 s19 kernel: RIP: 0010:smp_call_function_many_cond+0x12c/0x590
2026-02-02T16:16:19.234395-08:00 s19 kernel: Code: 36 4c 63 e0 49 8b 5d 00 49 81 fc 00 20 00 00 0f 83 43 04 00 00 4a 03 1c e5 e0 50 d9 bb 8b 53 08 48 89 de 83 e2 01 74 0a f3 90 <8b> 4e 08 83 e1 01 75 f6 83 c0 01 eb b0 48 83 c4 50 5b 41 5c 41 5d
2026-02-02T16:16:19.234396-08:00 s19 kernel: RSP: 0018:ffffd09582a4b808 EFLAGS: 00000202
2026-02-02T16:16:19.234396-08:00 s19 kernel: RAX: 0000000000000000 RBX: ffff89298e03c900 RCX: 0000000000000001
2026-02-02T16:16:19.234397-08:00 s19 kernel: RDX: 0000000000000001 RSI: ffff89298e03c900 RDI: 0000000000000000
2026-02-02T16:16:19.234398-08:00 s19 kernel: RBP: ffffd09582a4b880 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:16:19.234398-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
2026-02-02T16:16:19.234399-08:00 s19 kernel: R13: ffff89298e334300 R14: 0000000000000006 R15: 0000000000000006
2026-02-02T16:16:19.234399-08:00 s19 kernel: FS: 0000000000000000(0000) GS:ffff8929d1957000(0000) knlGS:0000000000000000
2026-02-02T16:16:19.234400-08:00 s19 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2026-02-02T16:16:19.234400-08:00 s19 kernel: CR2: 000070f72bb90030 CR3: 0000000174642006 CR4: 00000000007726f0
2026-02-02T16:16:19.234401-08:00 s19 kernel: PKRU: 55555554
2026-02-02T16:16:19.234401-08:00 s19 kernel: Call Trace:
2026-02-02T16:16:19.234402-08:00 s19 kernel: <TASK>
2026-02-02T16:16:19.234402-08:00 s19 kernel: ? __pfx_flush_tlb_func+0x10/0x10
2026-02-02T16:16:19.234403-08:00 s19 kernel: on_each_cpu_cond_mask+0x24/0x60
2026-02-02T16:16:19.234404-08:00 s19 kernel: native_flush_tlb_multi+0x73/0x170
2026-02-02T16:16:19.234404-08:00 s19 kernel: flush_tlb_mm_range+0x2d8/0x790
2026-02-02T16:16:19.234405-08:00 s19 kernel: tlb_finish_mmu+0x12f/0x1b0
2026-02-02T16:16:19.234405-08:00 s19 kernel: exit_mmap+0x192/0x3f0
2026-02-02T16:16:19.234406-08:00 s19 kernel: __mmput+0x41/0x150
2026-02-02T16:16:19.234406-08:00 s19 kernel: mmput+0x31/0x40
2026-02-02T16:16:19.234407-08:00 s19 kernel: do_exit+0x276/0xa40
2026-02-02T16:16:19.234407-08:00 s19 kernel: ? collect_signal+0xb0/0x150
2026-02-02T16:16:19.234408-08:00 s19 kernel: do_group_exit+0x34/0x90
2026-02-02T16:16:19.234408-08:00 s19 kernel: get_signal+0x926/0x950
2026-02-02T16:16:19.234409-08:00 s19 kernel: arch_do_signal_or_restart+0x41/0x240
2026-02-02T16:16:19.234409-08:00 s19 kernel: exit_to_user_mode_loop+0x8e/0x520
2026-02-02T16:16:19.234410-08:00 s19 kernel: do_syscall_64+0x2ab/0x500
2026-02-02T16:16:19.234410-08:00 s19 kernel: ? __do_sys_gettid+0x1a/0x30
2026-02-02T16:16:19.234411-08:00 s19 kernel: ? x64_sys_call+0x72d/0x26e0
2026-02-02T16:16:19.234412-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:16:19.234412-08:00 s19 kernel: ? __x64_sys_close+0x3e/0x90
2026-02-02T16:16:19.234413-08:00 s19 kernel: ? x64_sys_call+0x1b7c/0x26e0
2026-02-02T16:16:19.234413-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:16:19.234414-08:00 s19 kernel: ? __sys_sendmsg+0x8c/0x100
2026-02-02T16:16:19.234414-08:00 s19 kernel: ? __x64_sys_sendmsg+0x1d/0x30
2026-02-02T16:16:19.234415-08:00 s19 kernel: ? x64_sys_call+0x25cc/0x26e0
2026-02-02T16:16:19.234415-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:16:19.234416-08:00 s19 kernel: ? __do_sys_gettid+0x1a/0x30
2026-02-02T16:16:19.234416-08:00 s19 kernel: ? x64_sys_call+0x72d/0x26e0
2026-02-02T16:16:19.234417-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:16:19.234417-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:16:19.234418-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:16:19.234419-08:00 s19 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
2026-02-02T16:16:19.234420-08:00 s19 kernel: RIP: 0033:0x7fca8d52a037
2026-02-02T16:16:19.234420-08:00 s19 kernel: Code: Unable to access opcode bytes at 0x7fca8d52a00d.
2026-02-02T16:16:19.234421-08:00 s19 kernel: RSP: 002b:00007ffd2688e2f8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8
2026-02-02T16:16:19.234422-08:00 s19 kernel: RAX: fffffffffffffffc RBX: 0000000000000022 RCX: 00007fca8d52a037
2026-02-02T16:16:19.234422-08:00 s19 kernel: RDX: 0000000000000022 RSI: 000064ecd6897d00 RDI: 0000000000000005
2026-02-02T16:16:19.234423-08:00 s19 kernel: RBP: 00007ffd2688e410 R08: 0000000000000000 R09: 0000000000000005
2026-02-02T16:16:19.234423-08:00 s19 kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: 000064ecd6897d00
2026-02-02T16:16:19.234424-08:00 s19 kernel: R13: 0000000000000018 R14: 000064ecd6873c70 R15: ffffffffffffffff
2026-02-02T16:16:19.234424-08:00 s19 kernel: </TASK>
2026-02-02T16:16:33.241297-08:00 s19 kernel: rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P12753 P6509 P5261 P15769 P8594 P5431 P10107 P14333 P6906 } 241738 jiffies s: 873 root: 0x0/T
2026-02-02T16:16:33.241336-08:00 s19 kernel: rcu: blocking rcu_node structures (internal RCU debug):
2026-02-02T16:16:47.234360-08:00 s19 kernel: watchdog: BUG: soft lockup - CPU#6 stuck for 101s! [systemd-network:841]
2026-02-02T16:16:47.234400-08:00 s19 kernel: Modules linked in: tls snd_hda_codec_intelhdmi snd_hda_codec_alc882 snd_hda_codec_realtek_lib qrtr snd_hda_codec_generic snd_hda_intel snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common bridge intel_uncore_frequency snd_sof stp intel_uncore_frequency_common llc snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation cfg80211 snd_soc_sdw_utils snd_soc_acpi soundwire_bus snd_soc_sdca crc8 snd_soc_avs snd_soc_hda_codec snd_hda_ext_core intel_tcc_cooling x86_pkg_temp_thermal snd_hda_codec intel_powerclamp coretemp snd_hda_core snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep binfmt_misc kvm_intel cmdlinepart i915 snd_soc_core spi_nor snd_compress ee1004 mtd mei_hdcp mei_pxp ac97_bus drm_buddy snd_pcm_dmaengine kvm snd_pcm ttm irqbypass
2026-02-02T16:16:47.234402-08:00 s19 kernel: drm_display_helper i2c_i801 eeepc_wmi snd_timer rapl nls_iso8859_1 intel_cstate asus_wmi snd i2c_smbus platform_profile spi_intel_pci cec wmi_bmof i2c_mux mei_me soundcore intel_wmi_thunderbolt sparse_keymap mxm_wmi spi_intel rc_core mei i2c_algo_bit intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec acpi_pad acpi_tad joydev input_leds mac_hid sch_fq_codel dm_multipath msr nvme_fabrics efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b libblake2b raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 linear hid_generic usbhid hid nvme nvme_core nvme_keyring ghash_clmulni_intel nvme_auth intel_lpss_pci ahci igc intel_lpss hkdf libahci idma64 video wmi aesni_intel
2026-02-02T16:16:47.234403-08:00 s19 kernel: CPU: 6 UID: 998 PID: 841 Comm: systemd-network Tainted: G D L 6.19.0-rc1-pz #1 PREEMPT(full)
2026-02-02T16:16:47.234404-08:00 s19 kernel: Tainted: [D]=DIE, [L]=SOFTLOCKUP
2026-02-02T16:16:47.234405-08:00 s19 kernel: Hardware name: ASUS System Product Name/PRIME Z490-A, BIOS 9902 09/15/2021
2026-02-02T16:16:47.234405-08:00 s19 kernel: RIP: 0010:smp_call_function_many_cond+0x12c/0x590
2026-02-02T16:16:47.234406-08:00 s19 kernel: Code: 36 4c 63 e0 49 8b 5d 00 49 81 fc 00 20 00 00 0f 83 43 04 00 00 4a 03 1c e5 e0 50 d9 bb 8b 53 08 48 89 de 83 e2 01 74 0a f3 90 <8b> 4e 08 83 e1 01 75 f6 83 c0 01 eb b0 48 83 c4 50 5b 41 5c 41 5d
2026-02-02T16:16:47.234407-08:00 s19 kernel: RSP: 0018:ffffd09582a4b808 EFLAGS: 00000202
2026-02-02T16:16:47.234407-08:00 s19 kernel: RAX: 0000000000000000 RBX: ffff89298e03c900 RCX: 0000000000000001
2026-02-02T16:16:47.234408-08:00 s19 kernel: RDX: 0000000000000001 RSI: ffff89298e03c900 RDI: 0000000000000000
2026-02-02T16:16:47.234408-08:00 s19 kernel: RBP: ffffd09582a4b880 R08: 0000000000000000 R09: 0000000000000000
2026-02-02T16:16:47.234409-08:00 s19 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
2026-02-02T16:16:47.234410-08:00 s19 kernel: R13: ffff89298e334300 R14: 0000000000000006 R15: 0000000000000006
2026-02-02T16:16:47.234410-08:00 s19 kernel: FS: 0000000000000000(0000) GS:ffff8929d1957000(0000) knlGS:0000000000000000
2026-02-02T16:16:47.234411-08:00 s19 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2026-02-02T16:16:47.234411-08:00 s19 kernel: CR2: 000070f72bb90030 CR3: 0000000174642006 CR4: 00000000007726f0
2026-02-02T16:16:47.234412-08:00 s19 kernel: PKRU: 55555554
2026-02-02T16:16:47.234412-08:00 s19 kernel: Call Trace:
2026-02-02T16:16:47.234413-08:00 s19 kernel: <TASK>
2026-02-02T16:16:47.234413-08:00 s19 kernel: ? __pfx_flush_tlb_func+0x10/0x10
2026-02-02T16:16:47.234414-08:00 s19 kernel: on_each_cpu_cond_mask+0x24/0x60
2026-02-02T16:16:47.234414-08:00 s19 kernel: native_flush_tlb_multi+0x73/0x170
2026-02-02T16:16:47.234415-08:00 s19 kernel: flush_tlb_mm_range+0x2d8/0x790
2026-02-02T16:16:47.234416-08:00 s19 kernel: tlb_finish_mmu+0x12f/0x1b0
2026-02-02T16:16:47.234416-08:00 s19 kernel: exit_mmap+0x192/0x3f0
2026-02-02T16:16:47.234417-08:00 s19 kernel: __mmput+0x41/0x150
2026-02-02T16:16:47.234417-08:00 s19 kernel: mmput+0x31/0x40
2026-02-02T16:16:47.234418-08:00 s19 kernel: do_exit+0x276/0xa40
2026-02-02T16:16:47.234418-08:00 s19 kernel: ? collect_signal+0xb0/0x150
2026-02-02T16:16:47.234419-08:00 s19 kernel: do_group_exit+0x34/0x90
2026-02-02T16:16:47.234419-08:00 s19 kernel: get_signal+0x926/0x950
2026-02-02T16:16:47.234420-08:00 s19 kernel: arch_do_signal_or_restart+0x41/0x240
2026-02-02T16:16:47.234420-08:00 s19 kernel: exit_to_user_mode_loop+0x8e/0x520
2026-02-02T16:16:47.234421-08:00 s19 kernel: do_syscall_64+0x2ab/0x500
2026-02-02T16:16:47.234421-08:00 s19 kernel: ? __do_sys_gettid+0x1a/0x30
2026-02-02T16:16:47.234422-08:00 s19 kernel: ? x64_sys_call+0x72d/0x26e0
2026-02-02T16:16:47.234422-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:16:47.234423-08:00 s19 kernel: ? __x64_sys_close+0x3e/0x90
2026-02-02T16:16:47.234423-08:00 s19 kernel: ? x64_sys_call+0x1b7c/0x26e0
2026-02-02T16:16:47.234424-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:16:47.234424-08:00 s19 kernel: ? __sys_sendmsg+0x8c/0x100
2026-02-02T16:16:47.234425-08:00 s19 kernel: ? __x64_sys_sendmsg+0x1d/0x30
2026-02-02T16:16:47.234425-08:00 s19 kernel: ? x64_sys_call+0x25cc/0x26e0
2026-02-02T16:16:47.234426-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:16:47.234426-08:00 s19 kernel: ? __do_sys_gettid+0x1a/0x30
2026-02-02T16:16:47.234427-08:00 s19 kernel: ? x64_sys_call+0x72d/0x26e0
2026-02-02T16:16:47.234427-08:00 s19 kernel: ? do_syscall_64+0xbf/0x500
2026-02-02T16:16:47.234431-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:16:47.234432-08:00 s19 kernel: ? clear_bhb_loop+0x30/0x80
2026-02-02T16:16:47.234433-08:00 s19 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
2026-02-02T16:16:47.234434-08:00 s19 kernel: RIP: 0033:0x7fca8d52a037
2026-02-02T16:16:47.234434-08:00 s19 kernel: Code: Unable to access opcode bytes at 0x7fca8d52a00d.
2026-02-02T16:16:47.234435-08:00 s19 kernel: RSP: 002b:00007ffd2688e2f8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8
2026-02-02T16:16:47.234436-08:00 s19 kernel: RAX: fffffffffffffffc RBX: 0000000000000022 RCX: 00007fca8d52a037
2026-02-02T16:16:47.234436-08:00 s19 kernel: RDX: 0000000000000022 RSI: 000064ecd6897d00 RDI: 0000000000000005
2026-02-02T16:16:47.234437-08:00 s19 kernel: RBP: 00007ffd2688e410 R08: 0000000000000000 R09: 0000000000000005
2026-02-02T16:16:47.234437-08:00 s19 kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: 000064ecd6897d00
2026-02-02T16:16:47.234438-08:00 s19 kernel: R13: 0000000000000018 R14: 000064ecd6873c70 R15: ffffffffffffffff
2026-02-02T16:16:47.234438-08:00 s19 kernel: </TASK>
2026-02-02T19:58:31.230818-08:00 s19 kernel: Linux version 6.19.0-rc1-pz (doug@s19) (gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #1 SMP PREEMPT_DYNAMIC Fri Jan 30 11:33:51 PST 2026
2026-02-02T19:58:31.230822-08:00 s19 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.19.0-rc1-pz root=UUID=823b5f2c-38f5-4b05-92f6-6cb77b157c8a ro ipv6.disable=1 consoleblank=300 msr.allow_writes=on cpuidle.governor=teo
Hi Peter,

Thank you for including me on this set of emails. I assume I was copied on
this patch set because I reported an issue a year ago, and I see patch 4 of 4
reverts the fix from that time.

I also note that there is a pending update to patch 4 of 4. I will re-test then.

On 2026.01.30 01:35 Peter Zijlstra wrote:
> Two issues related to reweight_entity() were raised; poking at all that got me
> these patches.
>
> They're in queue.git/sched/core

Thanks. I tried to apply them to kernel 6.19-rc5, but patch 3 of 4 would not
apply. It took me a while to figure out what you meant, but I got there in
the end.

> and I spend most of yesterday staring at traces
> trying to find anything wrong. So far, so good.
>
> Please test.

Happy to.

There were two issues raised a year ago: one was extremely long CPU migration
times under specific conditions, thousands of times longer than reasonable;
the second was similar, but much smaller in magnitude. The second issue was
hidden by the first and only became apparent once the first was fixed. For
more background, readers are referred to the long email thread [1].

Testing of this patch set:

For those that don't want to read further: summary: all good.

The main diagnostic tool used here is turbostat, where the issues show up as
anomalies in the time between samples. The test setup is an otherwise very
idle system with a 100.0% load applied. Command used:

sudo turbostat --quiet --Summary --show Busy%,Bzy_MHz,IRQ,PkgWatt,PkgTmp,TSC_MHz,Time_Of_Day_Seconds,usec --interval 1 --out /dev/shm/turbo.log

The data is post-processed and a histogram of the times between samples is
created, with 1 millisecond per histogram bin.

Step 1: Confirm where we left off a year ago.

The exact same kernel from a year ago, that we ended up happy with, was used:

doug@s19:~/tmp/peterz/6.19/turbo$ cat 613.his
Kernel: 6.13.0-stock gov: powersave HWP: enabled
1.000000, 23195
1.001000, 10897
1.002000, 49
1.003000, 23
1.004000, 21
1.005000, 9
Total: 34194 : Total >= 10 mSec: 0 ( 0.00 percent)

So, over 9 hours, a nominal sample time was never exceeded by more than
5 milliseconds. Very good.

Step 2: Take a baseline sample before this patch set.

Mainline kernel 6.19-rc1 was used:

doug@s19:~/tmp/peterz/6.19/turbo$ cat rc1.his
Kernel: 6.19.0-rc1-stock gov: powersave HWP: enabled
1.000000, 19509
1.001000, 10430
1.002000, 32
1.003000, 19
1.004000, 24
1.005000, 13
1.006000, 9
1.007000, 4
1.008000, 3
1.009000, 4
1.010000, 6
1.011000, 2
1.012000, 1
1.013000, 4
1.014000, 10
1.015000, 10
1.016000, 7
1.017000, 10
1.018000, 20
1.019000, 12
1.020000, 5
1.021000, 3
1.022000, 1
1.023000, 2
1.024000, 2  <<< Clamped. Actually 26 and 25 milliseconds
Total: 30142 : Total >= 10 mSec: 95 ( 0.32 percent)

What!!! Over 8 hours. It seems something has regressed over the last year.
Our threshold of 10 milliseconds is rather arbitrary.

Step 3: This patch set, from Peter's git tree:

doug@s19:~/tmp/peterz/6.19/turbo$ cat 02.his
kernel: 6.19.0-rc1-pz gov: powersave HWP: enabled
1.000000, 19139
1.001000, 9532
1.002000, 19
1.003000, 17
1.004000, 8
1.005000, 3
1.006000, 2
1.009000, 1
Total: 28721 : Total >= 10 mSec: 0 ( 0.00 percent)

Just about 8 hours, and never a time >= our arbitrary threshold of
10 milliseconds. So, good.

I will redo this test with the revised patch 4 of 4 when it is available.

... Doug

[1] https://lore.kernel.org/lkml/005f01db5a44$3bb698e0$b323caa0$@telus.net/
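The exact post-processing script is not shown in the thread; a minimal gawk
sketch along the following lines would reproduce the histogram format above
from the turbostat log. It assumes the Time_Of_Day_Seconds column can be
located by name in the header line, and it counts, as the ">= 10 mSec" total,
intervals at least 10 ms longer than the nominal 1-second sample period:

gawk '
  NR == 1 {                                 # locate the Time_Of_Day_Seconds column in the header
      for (i = 1; i <= NF; i++)
          if ($i == "Time_Of_Day_Seconds")
              col = i
      next
  }
  $col == "Time_Of_Day_Seconds" { next }    # skip any repeated header lines
  {
      t = $col + 0
      if (prev) {
          d = t - prev                      # time between consecutive samples
          hist[sprintf("%.6f", int(d * 1000) / 1000)]++   # 1 ms bins
          total++
          if (d >= 1.010)                   # >= 10 ms over the nominal 1 s interval
              slow++
      }
      prev = t
  }
  END {
      n = asorti(hist, keys)                # gawk-only: sort the bin labels
      for (i = 1; i <= n; i++)
          printf "%s, %d\n", keys[i], hist[keys[i]]
      printf "Total: %d : Total >= 10 mSec: %d ( %.2f percent)\n",
             total, slow, total ? 100.0 * slow / total : 0
  }' /dev/shm/turbo.log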