Hello!
This is v2 of a patch series [3] that addresses two issues affecting
DEADLINE bandwidth accounting during non-destructive changes to root
domains and hotplug operations. The series is based on top of Waiman's
"cgroup/cpuset: Remove redundant rebuild_sched_domains_locked() calls"
series [1], which is now merged into cgroups/for-6.13 (this series is
based on top of that, commit c4c9cebe2fb9). The discussion that
eventually led to these two series can be found at [2].
Waiman reported that v1 still failed to make his test_cpuset_prs.sh
happy, so I had to change both patches a little. It now seems to pass on
my runs.
Patch 01/02 deals with non-destructive root domain changes. With respect
to v1 we now always restore dl_server contributions, considering both
the root domain span and the active cpus mask (otherwise accounting on
the default root domain would end up being incorrect).
Patch 02/02 deals with hotplug. With respect to v1 I added special
casing for when total_bw = 0 (i.e., no DEADLINE tasks to consider) and
for when a root domain is left with no cpus due to hotplug.
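For readers new to this accounting, here is a toy userspace model of the
invariant both patches defend (all names and the struct are illustrative,
not the kernel's): each root domain's dl_bw tracks the sum of the
fair-server contributions of its member CPUs, so a CPU moving between
root domains must give back its contribution to the old domain and bring
it to the new one, while tolerating a source domain already emptied by
hotplug.

```c
#include <stdint.h>

#define FAIR_SERVER_BW 52428u /* ~5% of 1 << 20, the default fair server */

struct rd_model {
    uint64_t total_bw; /* allocated DEADLINE bandwidth in this domain */
    int      cpus;     /* active CPUs attached to this domain */
};

/* Move one CPU's fair-server contribution from 'src' to 'dst'.
 * Loosely mirrors what restoring dl_server contributions on a
 * non-destructive root domain change has to do; the guard on
 * src->cpus models the "domain left with no cpus" special case. */
static void move_cpu(struct rd_model *src, struct rd_model *dst)
{
    if (src->cpus) {
        src->total_bw -= FAIR_SERVER_BW;
        src->cpus--;
    }
    dst->total_bw += FAIR_SERVER_BW;
    dst->cpus++;
}
```

With a 4-CPU default domain and an empty partition, moving one CPU leaves
3 * FAIR_SERVER_BW behind and carries exactly one contribution over.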
In all honesty, I still see intermittent issues that seem, however, to
be related to the dance we do in sched_cpu_deactivate(), where we first
turn everything related to a cpu/rq off and revert that if
cpuset_cpu_inactive() reveals failing DEADLINE checks. But, since these
seem to be orthogonal to the original discussion we started from, I
wanted to send this out as a hopefully meaningful update/improvement
since yesterday. Will continue looking into this.
Please go forth and test/review.
Series also available at
git@github.com:jlelli/linux.git upstream/dl-server-apply
Best,
Juri
[1] https://lore.kernel.org/lkml/20241110025023.664487-1-longman@redhat.com/
[2] https://lore.kernel.org/lkml/20241029225116.3998487-1-joel@joelfernandes.org/
[3] v1 - https://lore.kernel.org/lkml/20241113125724.450249-1-juri.lelli@redhat.com/
Juri Lelli (2):
sched/deadline: Restore dl_server bandwidth on non-destructive root
domain changes
sched/deadline: Correctly account for allocated bandwidth during
hotplug
kernel/sched/core.c | 2 +-
kernel/sched/deadline.c | 65 +++++++++++++++++++++++++++++++++--------
kernel/sched/sched.h | 2 +-
kernel/sched/topology.c | 8 +++--
4 files changed, 60 insertions(+), 17 deletions(-)
--
2.47.0
Thanks Waiman and Phil for the super quick review/test of this v2!
On 14/11/24 14:28, Juri Lelli wrote:
...
> In all honesty, I still see intermittent issues that seem, however, to
> be related to the dance we do in sched_cpu_deactivate(), where we first
> turn everything related to a cpu/rq off and revert that if
> cpuset_cpu_inactive() reveals failing DEADLINE checks. But, since these
> seem to be orthogonal to the original discussion we started from, I
> wanted to send this out as a hopefully meaningful update/improvement
> since yesterday. Will continue looking into this.
About this that I mentioned, it looks like the below cures it (and
hopefully doesn't regress wrt the other 2 patches).
What does everybody think?
---
Subject: [PATCH] sched/deadline: Check bandwidth overflow earlier for hotplug
Currently we check for bandwidth overflow potentially due to hotplug
operations at the end of sched_cpu_deactivate(), after the cpu going
offline has already been removed from scheduling, active_mask, etc.
This can create issues for DEADLINE tasks, as there is a substantial
race window between the start of sched_cpu_deactivate() and the moment
we possibly decide to roll-back the operation if dl_bw_deactivate()
returns failure in cpuset_cpu_inactive(). An example is a throttled
task that sees its replenishment timer firing while the cpu it was
previously running on is considered offline, but before
dl_bw_deactivate() had a chance to say no and roll-back happened.
Fix this by directly calling dl_bw_deactivate() first thing in
sched_cpu_deactivate() and do the required calculation in the former
function considering the cpu passed as an argument as offline already.
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
---
kernel/sched/core.c | 9 +++++----
kernel/sched/deadline.c | 12 ++++++++++--
2 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d1049e784510..43dfb3968eb8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8057,10 +8057,6 @@ static void cpuset_cpu_active(void)
static int cpuset_cpu_inactive(unsigned int cpu)
{
if (!cpuhp_tasks_frozen) {
- int ret = dl_bw_deactivate(cpu);
-
- if (ret)
- return ret;
cpuset_update_active_cpus();
} else {
num_cpus_frozen++;
@@ -8128,6 +8124,11 @@ int sched_cpu_deactivate(unsigned int cpu)
struct rq *rq = cpu_rq(cpu);
int ret;
+ ret = dl_bw_deactivate(cpu);
+
+ if (ret)
+ return ret;
+
/*
* Remove CPU from nohz.idle_cpus_mask to prevent participating in
* load balancing when not active
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 267ea8bacaf6..6e988d4cd787 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3505,6 +3505,13 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
}
break;
case dl_bw_req_deactivate:
+ /*
+ * cpu is not off yet, but we need to do the math by
+ * considering it off already (i.e., what would happen if we
+ * turn cpu off?).
+ */
+ cap -= arch_scale_cpu_capacity(cpu);
+
/*
* cpu is going offline and NORMAL tasks will be moved away
* from it. We can thus discount dl_server bandwidth
@@ -3522,9 +3529,10 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
if (dl_b->total_bw - fair_server_bw > 0) {
/*
* Leaving at least one CPU for DEADLINE tasks seems a
- * wise thing to do.
+ * wise thing to do. As said above, cpu is not offline
+ * yet, so account for that.
*/
- if (dl_bw_cpus(cpu))
+ if (dl_bw_cpus(cpu) - 1)
overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
else
overflow = 1;
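To make the "do the math as if the cpu were already off" idea concrete,
here is a hedged userspace sketch of the deactivate-time check. It is a
simplification, not the kernel code: the real __dl_overflow() derives
the limit from the sched_rt runtime/period sysctls, and every name and
constant below is illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

#define BW_UNIT   (1u << 20) /* fixed-point "1.0" bandwidth */
#define CPU_SCALE 1024u      /* SCHED_CAPACITY_SCALE */

/* Would taking one CPU (of capacity cpu_cap) offline overflow the
 * DEADLINE bandwidth of its root domain? cap/cpus describe the domain
 * *before* the hotplug, so we discount the departing CPU first. */
static bool deactivate_overflows(uint64_t total_bw, uint64_t fair_server_bw,
                                 uint64_t cap, int cpus, uint64_t cpu_cap)
{
    uint64_t limit = (BW_UNIT * 95) / 100; /* e.g. a 95% RT limit */

    cap -= cpu_cap; /* cpu is not off yet: pretend it already is */
    cpus -= 1;

    if (total_bw - fair_server_bw == 0)
        return false;  /* no DEADLINE tasks: nothing left to fit */
    if (cpus == 0)
        return true;   /* leave at least one CPU for DEADLINE tasks */
    /* simplified overflow test: demand vs. scaled remaining capacity */
    return (limit * cap) / CPU_SCALE < total_bw - fair_server_bw;
}
```

For example, unplugging one of four CPUs whose only contributions are the
fair servers fits comfortably, while unplugging the last CPU of a domain
that still carries DEADLINE bandwidth is refused.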
On Thu, Nov 14, 2024 at 04:14:00PM +0000 Juri Lelli wrote:
> Thanks Waiman and Phil for the super quick review/test of this v2!
>
> On 14/11/24 14:28, Juri Lelli wrote:
>
> ...
>
> > In all honesty, I still see intermittent issues that seem, however, to
> > be related to the dance we do in sched_cpu_deactivate(), where we first
> > turn everything related to a cpu/rq off and revert that if
> > cpuset_cpu_inactive() reveals failing DEADLINE checks. But, since these
> > seem to be orthogonal to the original discussion we started from, I
> > wanted to send this out as a hopefully meaningful update/improvement
> > since yesterday. Will continue looking into this.
>
> About this that I mentioned, it looks like the below cures it (and
> hopefully doesn't regress wrt the other 2 patches).
>
> What does everybody think?
>
I think that makes sense. I think it's better not to have that
deadline call buried in the cpuset code as well.
Reviewed-by: Phil Auld <pauld@redhat.com>
> ---
> Subject: [PATCH] sched/deadline: Check bandwidth overflow earlier for hotplug
>
> Currently we check for bandwidth overflow potentially due to hotplug
> operations at the end of sched_cpu_deactivate(), after the cpu going
> offline has already been removed from scheduling, active_mask, etc.
> This can create issues for DEADLINE tasks, as there is a substantial
> race window between the start of sched_cpu_deactivate() and the moment
> we possibly decide to roll-back the operation if dl_bw_deactivate()
> returns failure in cpuset_cpu_inactive(). An example is a throttled
> task that sees its replenishment timer firing while the cpu it was
> previously running on is considered offline, but before
> dl_bw_deactivate() had a chance to say no and roll-back happened.
>
> Fix this by directly calling dl_bw_deactivate() first thing in
> sched_cpu_deactivate() and do the required calculation in the former
> function considering the cpu passed as an argument as offline already.
>
> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
> ---
> kernel/sched/core.c | 9 +++++----
> kernel/sched/deadline.c | 12 ++++++++++--
> 2 files changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d1049e784510..43dfb3968eb8 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -8057,10 +8057,6 @@ static void cpuset_cpu_active(void)
> static int cpuset_cpu_inactive(unsigned int cpu)
> {
> if (!cpuhp_tasks_frozen) {
> - int ret = dl_bw_deactivate(cpu);
> -
> - if (ret)
> - return ret;
> cpuset_update_active_cpus();
> } else {
> num_cpus_frozen++;
> @@ -8128,6 +8124,11 @@ int sched_cpu_deactivate(unsigned int cpu)
> struct rq *rq = cpu_rq(cpu);
> int ret;
>
> + ret = dl_bw_deactivate(cpu);
> +
> + if (ret)
> + return ret;
> +
> /*
> * Remove CPU from nohz.idle_cpus_mask to prevent participating in
> * load balancing when not active
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 267ea8bacaf6..6e988d4cd787 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -3505,6 +3505,13 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
> }
> break;
> case dl_bw_req_deactivate:
> + /*
> + * cpu is not off yet, but we need to do the math by
> + * considering it off already (i.e., what would happen if we
> + * turn cpu off?).
> + */
> + cap -= arch_scale_cpu_capacity(cpu);
> +
> /*
> * cpu is going offline and NORMAL tasks will be moved away
> * from it. We can thus discount dl_server bandwidth
> @@ -3522,9 +3529,10 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
> if (dl_b->total_bw - fair_server_bw > 0) {
> /*
> * Leaving at least one CPU for DEADLINE tasks seems a
> - * wise thing to do.
> + * wise thing to do. As said above, cpu is not offline
> + * yet, so account for that.
> */
> - if (dl_bw_cpus(cpu))
> + if (dl_bw_cpus(cpu) - 1)
> overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
> else
> overflow = 1;
>
--
On 11/14/24 11:14 AM, Juri Lelli wrote:
> Thanks Waiman and Phil for the super quick review/test of this v2!
>
> On 14/11/24 14:28, Juri Lelli wrote:
>
> ...
>
>> In all honesty, I still see intermittent issues that seem, however, to
>> be related to the dance we do in sched_cpu_deactivate(), where we first
>> turn everything related to a cpu/rq off and revert that if
>> cpuset_cpu_inactive() reveals failing DEADLINE checks. But, since these
>> seem to be orthogonal to the original discussion we started from, I
>> wanted to send this out as a hopefully meaningful update/improvement
>> since yesterday. Will continue looking into this.
> About this that I mentioned, it looks like the below cures it (and
> hopefully doesn't regress wrt the other 2 patches).
>
> What does everybody think?
>
> ---
> Subject: [PATCH] sched/deadline: Check bandwidth overflow earlier for hotplug
>
> Currently we check for bandwidth overflow potentially due to hotplug
> operations at the end of sched_cpu_deactivate(), after the cpu going
> offline has already been removed from scheduling, active_mask, etc.
> This can create issues for DEADLINE tasks, as there is a substantial
> race window between the start of sched_cpu_deactivate() and the moment
> we possibly decide to roll-back the operation if dl_bw_deactivate()
> returns failure in cpuset_cpu_inactive(). An example is a throttled
> task that sees its replenishment timer firing while the cpu it was
> previously running on is considered offline, but before
> dl_bw_deactivate() had a chance to say no and roll-back happened.
>
> Fix this by directly calling dl_bw_deactivate() first thing in
> sched_cpu_deactivate() and do the required calculation in the former
> function considering the cpu passed as an argument as offline already.
>
> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
> ---
> kernel/sched/core.c | 9 +++++----
> kernel/sched/deadline.c | 12 ++++++++++--
> 2 files changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d1049e784510..43dfb3968eb8 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -8057,10 +8057,6 @@ static void cpuset_cpu_active(void)
> static int cpuset_cpu_inactive(unsigned int cpu)
> {
> if (!cpuhp_tasks_frozen) {
> - int ret = dl_bw_deactivate(cpu);
> -
> - if (ret)
> - return ret;
> cpuset_update_active_cpus();
> } else {
> num_cpus_frozen++;
> @@ -8128,6 +8124,11 @@ int sched_cpu_deactivate(unsigned int cpu)
> struct rq *rq = cpu_rq(cpu);
> int ret;
>
> + ret = dl_bw_deactivate(cpu);
> +
> + if (ret)
> + return ret;
> +
> /*
> * Remove CPU from nohz.idle_cpus_mask to prevent participating in
> * load balancing when not active
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 267ea8bacaf6..6e988d4cd787 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -3505,6 +3505,13 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
> }
> break;
> case dl_bw_req_deactivate:
> + /*
> + * cpu is not off yet, but we need to do the math by
> + * considering it off already (i.e., what would happen if we
> + * turn cpu off?).
> + */
> + cap -= arch_scale_cpu_capacity(cpu);
> +
> /*
> * cpu is going offline and NORMAL tasks will be moved away
> * from it. We can thus discount dl_server bandwidth
> @@ -3522,9 +3529,10 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
> if (dl_b->total_bw - fair_server_bw > 0) {
> /*
> * Leaving at least one CPU for DEADLINE tasks seems a
> - * wise thing to do.
> + * wise thing to do. As said above, cpu is not offline
> + * yet, so account for that.
> */
> - if (dl_bw_cpus(cpu))
> + if (dl_bw_cpus(cpu) - 1)
> overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
> else
> overflow = 1;
>
I have applied this new patch to my test system and there was no
regression in the test_cpuset_prs.sh test.
Tested-by: Waiman Long <longman@redhat.com>
Currently we check for bandwidth overflow potentially due to hotplug
operations at the end of sched_cpu_deactivate(), after the cpu going
offline has already been removed from scheduling, active_mask, etc.
This can create issues for DEADLINE tasks, as there is a substantial
race window between the start of sched_cpu_deactivate() and the moment
we possibly decide to roll-back the operation if dl_bw_deactivate()
returns failure in cpuset_cpu_inactive(). An example is a throttled
task that sees its replenishment timer firing while the cpu it was
previously running on is considered offline, but before
dl_bw_deactivate() had a chance to say no and roll-back happened.
Fix this by directly calling dl_bw_deactivate() first thing in
sched_cpu_deactivate() and do the required calculation in the former
function considering the cpu passed as an argument as offline already.
By doing so we also simplify sched_cpu_deactivate(), as there is no need
anymore for any kind of roll-back if we fail early.
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
---
Thanks Waiman and Phil for testing and reviewing the scratch version of
this change. I think the below might be better, as we end up with a
clean-up as well.
Please take another look when you/others have time.
---
kernel/sched/core.c | 22 +++++++---------------
kernel/sched/deadline.c | 12 ++++++++++--
2 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d1049e784510..e2c6eacf793e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8054,19 +8054,14 @@ static void cpuset_cpu_active(void)
cpuset_update_active_cpus();
}
-static int cpuset_cpu_inactive(unsigned int cpu)
+static void cpuset_cpu_inactive(unsigned int cpu)
{
if (!cpuhp_tasks_frozen) {
- int ret = dl_bw_deactivate(cpu);
-
- if (ret)
- return ret;
cpuset_update_active_cpus();
} else {
num_cpus_frozen++;
partition_sched_domains(1, NULL, NULL);
}
- return 0;
}
static inline void sched_smt_present_inc(int cpu)
@@ -8128,6 +8123,11 @@ int sched_cpu_deactivate(unsigned int cpu)
struct rq *rq = cpu_rq(cpu);
int ret;
+ ret = dl_bw_deactivate(cpu);
+
+ if (ret)
+ return ret;
+
/*
* Remove CPU from nohz.idle_cpus_mask to prevent participating in
* load balancing when not active
@@ -8173,15 +8173,7 @@ int sched_cpu_deactivate(unsigned int cpu)
return 0;
sched_update_numa(cpu, false);
- ret = cpuset_cpu_inactive(cpu);
- if (ret) {
- sched_smt_present_inc(cpu);
- sched_set_rq_online(rq, cpu);
- balance_push_set(cpu, false);
- set_cpu_active(cpu, true);
- sched_update_numa(cpu, true);
- return ret;
- }
+ cpuset_cpu_inactive(cpu);
sched_domains_numa_masks_clear(cpu);
return 0;
}
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 267ea8bacaf6..6e988d4cd787 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3505,6 +3505,13 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
}
break;
case dl_bw_req_deactivate:
+ /*
+ * cpu is not off yet, but we need to do the math by
+ * considering it off already (i.e., what would happen if we
+ * turn cpu off?).
+ */
+ cap -= arch_scale_cpu_capacity(cpu);
+
/*
* cpu is going offline and NORMAL tasks will be moved away
* from it. We can thus discount dl_server bandwidth
@@ -3522,9 +3529,10 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
if (dl_b->total_bw - fair_server_bw > 0) {
/*
* Leaving at least one CPU for DEADLINE tasks seems a
- * wise thing to do.
+ * wise thing to do. As said above, cpu is not offline
+ * yet, so account for that.
*/
- if (dl_bw_cpus(cpu))
+ if (dl_bw_cpus(cpu) - 1)
overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
else
overflow = 1;
--
2.47.0
Hi Juri,
On 15/11/2024 11:48, Juri Lelli wrote:
> Currently we check for bandwidth overflow potentially due to hotplug
> operations at the end of sched_cpu_deactivate(), after the cpu going
> offline has already been removed from scheduling, active_mask, etc.
> This can create issues for DEADLINE tasks, as there is a substantial
> race window between the start of sched_cpu_deactivate() and the moment
> we possibly decide to roll-back the operation if dl_bw_deactivate()
> returns failure in cpuset_cpu_inactive(). An example is a throttled
> task that sees its replenishment timer firing while the cpu it was
> previously running on is considered offline, but before
> dl_bw_deactivate() had a chance to say no and roll-back happened.
>
> Fix this by directly calling dl_bw_deactivate() first thing in
> sched_cpu_deactivate() and do the required calculation in the former
> function considering the cpu passed as an argument as offline already.
>
> By doing so we also simplify sched_cpu_deactivate(), as there is no need
> anymore for any kind of roll-back if we fail early.
>
> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
> ---
> Thanks Waiman and Phil for testing and reviewing the scratch version of
> this change. I think the below might be better, as we end up with a
> clean-up as well.
>
> Please take another look when you/others have time.
> ---
> kernel/sched/core.c | 22 +++++++---------------
> kernel/sched/deadline.c | 12 ++++++++++--
> 2 files changed, 17 insertions(+), 17 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d1049e784510..e2c6eacf793e 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -8054,19 +8054,14 @@ static void cpuset_cpu_active(void)
> cpuset_update_active_cpus();
> }
>
> -static int cpuset_cpu_inactive(unsigned int cpu)
> +static void cpuset_cpu_inactive(unsigned int cpu)
> {
> if (!cpuhp_tasks_frozen) {
> - int ret = dl_bw_deactivate(cpu);
> -
> - if (ret)
> - return ret;
> cpuset_update_active_cpus();
> } else {
> num_cpus_frozen++;
> partition_sched_domains(1, NULL, NULL);
> }
> - return 0;
> }
>
> static inline void sched_smt_present_inc(int cpu)
> @@ -8128,6 +8123,11 @@ int sched_cpu_deactivate(unsigned int cpu)
> struct rq *rq = cpu_rq(cpu);
> int ret;
>
> + ret = dl_bw_deactivate(cpu);
> +
> + if (ret)
> + return ret;
> +
> /*
> * Remove CPU from nohz.idle_cpus_mask to prevent participating in
> * load balancing when not active
> @@ -8173,15 +8173,7 @@ int sched_cpu_deactivate(unsigned int cpu)
> return 0;
>
> sched_update_numa(cpu, false);
> - ret = cpuset_cpu_inactive(cpu);
> - if (ret) {
> - sched_smt_present_inc(cpu);
> - sched_set_rq_online(rq, cpu);
> - balance_push_set(cpu, false);
> - set_cpu_active(cpu, true);
> - sched_update_numa(cpu, true);
> - return ret;
> - }
> + cpuset_cpu_inactive(cpu);
> sched_domains_numa_masks_clear(cpu);
> return 0;
> }
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 267ea8bacaf6..6e988d4cd787 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -3505,6 +3505,13 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
> }
> break;
> case dl_bw_req_deactivate:
> + /*
> + * cpu is not off yet, but we need to do the math by
> + * considering it off already (i.e., what would happen if we
> + * turn cpu off?).
> + */
> + cap -= arch_scale_cpu_capacity(cpu);
> +
> /*
> * cpu is going offline and NORMAL tasks will be moved away
> * from it. We can thus discount dl_server bandwidth
> @@ -3522,9 +3529,10 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
> if (dl_b->total_bw - fair_server_bw > 0) {
> /*
> * Leaving at least one CPU for DEADLINE tasks seems a
> - * wise thing to do.
> + * wise thing to do. As said above, cpu is not offline
> + * yet, so account for that.
> */
> - if (dl_bw_cpus(cpu))
> + if (dl_bw_cpus(cpu) - 1)
> overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
> else
> overflow = 1;
I have noticed a suspend regression on one of our Tegra boards and
bisect is pointing to this commit. If I revert this on top of -next then
I don't see the issue.
The only messages I see when suspend fails are ...
[ 53.905976] Error taking CPU1 down: -16
[ 53.909887] Non-boot CPUs are not disabled
So far this is only happening on Tegra186 (ARM64). Let me know if you
have any thoughts.
Thanks
Jon
--
nvpublic
Hi Jon,
On 10/01/25 11:52, Jon Hunter wrote:
> Hi Juri,
>
...
> I have noticed a suspend regression on one of our Tegra boards and bisect is
> pointing to this commit. If I revert this on top of -next then I don't see
> the issue.
>
> The only messages I see when suspend fails are ...
>
> [ 53.905976] Error taking CPU1 down: -16
> [ 53.909887] Non-boot CPUs are not disabled
>
> So far this is only happening on Tegra186 (ARM64). Let me know if you have
> any thoughts.
Are you running any DEADLINE task in your configuration?
In any case, could you please repro with the following (as a start)?
It should print additional debugging info on the console.
Thanks!
Juri
---
kernel/sched/deadline.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 62192ac79c30..77736bab1992 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3530,6 +3530,7 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
* dl_servers we can discount, as tasks will be moved out the
* offlined CPUs anyway.
*/
+ printk_deferred("%s: cpu=%d cap=%lu fair_server_bw=%llu total_bw=%llu dl_bw_cpus=%d\n", __func__, cpu, cap, fair_server_bw, dl_b->total_bw, dl_bw_cpus(cpu));
if (dl_b->total_bw - fair_server_bw > 0) {
/*
* Leaving at least one CPU for DEADLINE tasks seems a
Hi Juri,
On 10/01/2025 15:45, Juri Lelli wrote:
> Hi Jon,
>
> On 10/01/25 11:52, Jon Hunter wrote:
>> Hi Juri,
>>
>
> ...
>
>> I have noticed a suspend regression on one of our Tegra boards and bisect is
>> pointing to this commit. If I revert this on top of -next then I don't see
>> the issue.
>>
>> The only messages I see when suspend fails are ...
>>
>> [ 53.905976] Error taking CPU1 down: -16
>> [ 53.909887] Non-boot CPUs are not disabled
>>
>> So far this is only happening on Tegra186 (ARM64). Let me know if you have
>> any thoughts.
>
> Are you running any DEADLINE task in your configuration?
Not that I am aware of.
> In any case, could you please repro with the following (as a start)?
> It should print additional debugging info on the console.
>
> Thanks!
> Juri
>
> ---
> kernel/sched/deadline.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 62192ac79c30..77736bab1992 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -3530,6 +3530,7 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
> * dl_servers we can discount, as tasks will be moved out the
> * offlined CPUs anyway.
> */
> + printk_deferred("%s: cpu=%d cap=%lu fair_server_bw=%llu total_bw=%llu dl_bw_cpus=%d\n", __func__, cpu, cap, fair_server_bw, dl_b->total_bw, dl_bw_cpus(cpu));
> if (dl_b->total_bw - fair_server_bw > 0) {
> /*
> * Leaving at least one CPU for DEADLINE tasks seems a
>
With the above I see the following ...
[ 53.919672] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
[ 53.930608] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
[ 53.941601] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
[ 53.952186] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=2
[ 53.962938] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=1
[ 53.971068] Error taking CPU1 down: -16
[ 53.974912] Non-boot CPUs are not disabled
Thanks
Jon
--
nvpublic
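One quick way to sanity-check the log above (assuming, as the defaults
suggest, that every CPU contributes the same fair_server_bw of 52428,
i.e. roughly 5% of 1 << 20): each reported total_bw should be an exact
multiple of 52428 matching the CPUs still accounted in that root domain.
The first three lines fit 4x, 3x and 2x, but 576708 is 11x, which on a
6-CPU board cannot be a straight per-CPU sum and hints at contributions
being accounted more than once. The multiples can be verified at compile
time:

```c
/* Compile-time sanity check of the reported total_bw values (C11).
 * 52428 is the default fair server bandwidth, to_ratio(1s, 50ms),
 * about 5% of 1 << 20 in fixed-point bandwidth units. */
_Static_assert(209712 == 4 * 52428, "cpu=5 line: 4 CPUs accounted");
_Static_assert(157284 == 3 * 52428, "cpu=4 line: 3 CPUs accounted");
_Static_assert(104856 == 2 * 52428, "cpu=3 line: 2 CPUs accounted");
_Static_assert(576708 == 11 * 52428, "cpu=2 line: 11 contributions?!");
```

The 11x reading is of course only an interpretation of the numbers, not
something the log states directly.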
On 10/01/25 18:40, Jon Hunter wrote:

...

> With the above I see the following ...
>
> [   53.919672] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> [   53.930608] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> [   53.941601] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2

So far so good.

> [   53.952186] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=2

But, this above doesn't sound right.

> [   53.962938] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=1
> [   53.971068] Error taking CPU1 down: -16
> [   53.974912] Non-boot CPUs are not disabled

What is the topology of your board?

Are you using any cpuset configuration for partitioning CPUs?

Also, could you please add sched_debug to the kernel cmdline and enable
CONFIG_SCHED_DEBUG (if not enabled already)? That should print
additional information about scheduling domains in case they get
reconfigured for some reason.

Thanks!
Juri
On 13/01/2025 09:32, Juri Lelli wrote:
> On 10/01/25 18:40, Jon Hunter wrote:
>
> ...
>
>> With the above I see the following ...
>>
>> [   53.919672] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
>> [   53.930608] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
>> [   53.941601] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
>
> So far so good.
>
>> [   53.952186] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=2
>
> But, this above doesn't sound right.
>
>> [   53.962938] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=1
>> [   53.971068] Error taking CPU1 down: -16
>> [   53.974912] Non-boot CPUs are not disabled
>
> What is the topology of your board?
>
> Are you using any cpuset configuration for partitioning CPUs?

I just noticed that by default we do boot this board with 'isolcpus=1-2'.
I see that this is a deprecated cmdline argument now and I must admit I
don't know the history of this for this specific board. It is quite old
now.

Thierry, I am curious if you have this set for Tegra186 or not? Looks
like our BSP (r35 based) sets this by default.

I did try removing this and that does appear to fix it.

Juri, let me know your thoughts.

Thanks!
Jon
--
nvpublic
On 14/01/25 13:52, Jon Hunter wrote:
>
> On 13/01/2025 09:32, Juri Lelli wrote:
> > On 10/01/25 18:40, Jon Hunter wrote:
> >
> > ...
> >
> > > With the above I see the following ...
> > >
> > > [   53.919672] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> > > [   53.930608] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> > > [   53.941601] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
> >
> > So far so good.
> >
> > > [   53.952186] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=2
> >
> > But, this above doesn't sound right.
> >
> > > [   53.962938] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=1
> > > [   53.971068] Error taking CPU1 down: -16
> > > [   53.974912] Non-boot CPUs are not disabled
> >
> > What is the topology of your board?
> >
> > Are you using any cpuset configuration for partitioning CPUs?
>
> I just noticed that by default we do boot this board with 'isolcpus=1-2'. I
> see that this is a deprecated cmdline argument now and I must admit I don't
> know the history of this for this specific board. It is quite old now.
>
> Thierry, I am curious if you have this set for Tegra186 or not? Looks like
> our BSP (r35 based) sets this by default.
>
> I did try removing this and that does appear to fix it.

OK, good.

> Juri, let me know your thoughts.

Thanks for the additional info. I guess I could now try to repro using
isolcpus at boot on systems I have access to (to possibly understand
what the underlying problem is).

Best,
Juri
On 14/01/25 15:02, Juri Lelli wrote:
> On 14/01/25 13:52, Jon Hunter wrote:
> >
> > On 13/01/2025 09:32, Juri Lelli wrote:
> > > On 10/01/25 18:40, Jon Hunter wrote:
> > >
> > > ...
> > >
> > > > With the above I see the following ...
> > > >
> > > > [ 53.919672] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> > > > [ 53.930608] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> > > > [ 53.941601] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
> > >
> > > So far so good.
> > >
> > > > [ 53.952186] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=2
> > >
> > > But, this above doesn't sound right.
> > >
> > > > [ 53.962938] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=1
> > > > [ 53.971068] Error taking CPU1 down: -16
> > > > [ 53.974912] Non-boot CPUs are not disabled
> > >
> > > What is the topology of your board?
> > >
> > > Are you using any cpuset configuration for partitioning CPUs?
> >
> >
> > I just noticed that by default we do boot this board with 'isolcpus=1-2'. I
> > see that this is a deprecated cmdline argument now and I must admit I don't
> > know the history of this for this specific board. It is quite old now.
> >
> > Thierry, I am curious if you have this set for Tegra186 or not? Looks like
> > our BSP (r35 based) sets this by default.
> >
> > I did try removing this and that does appear to fix it.
>
> OK, good.
>
> > Juri, let me know your thoughts.
>
> Thanks for the additional info. I guess I could now try to repro using
> isolcpus at boot on systems I have access to (to possibly understand
> what the underlying problem is).
I think the problem lies in the def_root_domain accounting of dl_servers
(which isolated cpus remain attached to).
Came up with the following, of which I'm not yet fully convinced, but
could you please try it out on top of the debug patch and see how it
does with the original failing setup using isolcpus?
Thanks!
---
kernel/sched/deadline.c | 15 +++++++++++++++
kernel/sched/sched.h | 1 +
kernel/sched/topology.c | 3 +++
3 files changed, 19 insertions(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 77736bab1992..9a47decd099a 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1709,6 +1709,21 @@ void __dl_server_attach_root(struct sched_dl_entity *dl_se, struct rq *rq)
__dl_add(dl_b, new_bw, dl_bw_cpus(cpu));
}
+void __dl_server_detach_root(struct sched_dl_entity *dl_se, struct rq *rq)
+{
+ u64 old_bw = dl_se->dl_bw;
+ int cpu = cpu_of(rq);
+ struct dl_bw *dl_b;
+
+ dl_b = dl_bw_of(cpu_of(rq));
+ guard(raw_spinlock)(&dl_b->lock);
+
+ if (!dl_bw_cpus(cpu))
+ return;
+
+ __dl_sub(dl_b, old_bw, dl_bw_cpus(cpu));
+}
+
int dl_server_apply_params(struct sched_dl_entity *dl_se, u64 runtime, u64 period, bool init)
{
u64 old_bw = init ? 0 : to_ratio(dl_se->dl_period, dl_se->dl_runtime);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 65fa64845d9f..ec0dfd82157e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -395,6 +395,7 @@ extern void dl_server_update_idle_time(struct rq *rq,
struct task_struct *p);
extern void fair_server_init(struct rq *rq);
extern void __dl_server_attach_root(struct sched_dl_entity *dl_se, struct rq *rq);
+extern void __dl_server_detach_root(struct sched_dl_entity *dl_se, struct rq *rq);
extern int dl_server_apply_params(struct sched_dl_entity *dl_se,
u64 runtime, u64 period, bool init);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index da33ec9e94ab..93b08e76a52a 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -495,6 +495,9 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
if (rq->rd) {
old_rd = rq->rd;
+ if (rq->fair_server.dl_server)
+ __dl_server_detach_root(&rq->fair_server, rq);
+
if (cpumask_test_cpu(rq->cpu, old_rd->online))
set_rq_offline(rq);
--
On 15/01/2025 16:10, Juri Lelli wrote:
> On 14/01/25 15:02, Juri Lelli wrote:

...

> I think the problem lies in the def_root_domain accounting of dl_servers
> (which isolated cpus remain attached to).
>
> Came up with the following, of which I'm not yet fully convinced, but
> could you please try it out on top of the debug patch and see how it
> does with the original failing setup using isolcpus?

Thanks I added the change, but suspend is still failing with this ...

[ 210.595431] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
[ 210.606269] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
[ 210.617281] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
[ 210.627205] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2
[ 210.637752] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1
[ 210.645858] Error taking CPU1 down: -16
[ 210.649713] Non-boot CPUs are not disabled

Jon

--
nvpublic
On 16/01/25 13:14, Jon Hunter wrote:
>
> On 15/01/2025 16:10, Juri Lelli wrote:
> > On 14/01/25 15:02, Juri Lelli wrote:
> > > On 14/01/25 13:52, Jon Hunter wrote:
> > > >
> > > > On 13/01/2025 09:32, Juri Lelli wrote:
> > > > > On 10/01/25 18:40, Jon Hunter wrote:
> > > > >
> > > > > ...
> > > > >
> > > > > > With the above I see the following ...
> > > > > >
> > > > > > [ 53.919672] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> > > > > > [ 53.930608] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> > > > > > [ 53.941601] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
> > > > >
> > > > > So far so good.
> > > > >
> > > > > > [ 53.952186] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=2
> > > > >
> > > > > But, this above doesn't sound right.
> > > > >
> > > > > > [ 53.962938] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=1
> > > > > > [ 53.971068] Error taking CPU1 down: -16
> > > > > > [ 53.974912] Non-boot CPUs are not disabled
> > > > >
> > > > > What is the topology of your board?
> > > > >
> > > > > Are you using any cpuset configuration for partitioning CPUs?
> > > >
> > > >
> > > > I just noticed that by default we do boot this board with 'isolcpus=1-2'. I
> > > > see that this is a deprecated cmdline argument now and I must admit I don't
> > > > know the history of this for this specific board. It is quite old now.
> > > >
> > > > Thierry, I am curious if you have this set for Tegra186 or not? Looks like
> > > > our BSP (r35 based) sets this by default.
> > > >
> > > > I did try removing this and that does appear to fix it.
> > >
> > > OK, good.
> > >
> > > > Juri, let me know your thoughts.
> > >
> > > Thanks for the additional info. I guess I could now try to repro using
> > > isolcpus at boot on systems I have access to (to possibly understand
> > > what the underlying problem is).
> >
> > I think the problem lies in the def_root_domain accounting of dl_servers
> > (which isolated cpus remains attached to).
> >
> > Came up with the following, of which I'm not yet fully convinced, but
> > could you please try it out on top of the debug patch and see how it
> > does with the original failing setup using isolcpus?
>
>
> Thanks I added the change, but suspend is still failing with this ...
Thanks!
> [ 210.595431] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> [ 210.606269] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> [ 210.617281] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
> [ 210.627205] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2
> [ 210.637752] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1
^
Different than before but still not what I expected. Looks like there
are conditions/paths I currently cannot replicate on my setup, so more
thinking. Unfortunately I will be out traveling next week, so this
might require a bit of time.
Best,
Juri
Hi Juri,

On 16/01/2025 15:55, Juri Lelli wrote:
> On 16/01/25 13:14, Jon Hunter wrote:

...

> Different than before but still not what I expected. Looks like there
> are conditions/paths I currently cannot replicate on my setup, so more
> thinking. Unfortunately I will be out traveling next week, so this
> might require a bit of time.

I see that this is now in the mainline and our board is still failing to
suspend. Let me know if there is anything else you need me to test.

Thanks
Jon

--
nvpublic
On 03/02/25 11:01, Jon Hunter wrote:
> Hi Juri,
>
> On 16/01/2025 15:55, Juri Lelli wrote:
> > On 16/01/25 13:14, Jon Hunter wrote:
...
> > > [ 210.595431] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> > > [ 210.606269] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> > > [ 210.617281] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
> > > [ 210.627205] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2
> > > [ 210.637752] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1
> > ^
> > Different than before but still not what I expected. Looks like there
> > are conditions/path I currently cannot replicate on my setup, so more
> > thinking. Unfortunately I will be out traveling next week, so this
> > might required a bit of time.
>
>
> I see that this is now in the mainline and our board is still failing to
> suspend. Let me know if there is anything else you need me to test.
Ah, can you actually add 'sched_verbose' to your kernel cmdline? It
should print out additional debug info on the console when domains get
reconfigured by hotplug/suspend, e.g.
dl_bw_manage: cpu=3 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
CPU0 attaching NULL sched-domain.
CPU3 attaching NULL sched-domain.
CPU4 attaching NULL sched-domain.
CPU5 attaching NULL sched-domain.
CPU0 attaching sched-domain(s):
domain-0: span=0,4-5 level=MC
groups: 0:{ span=0 cap=766 }, 4:{ span=4 cap=908 }, 5:{ span=5 cap=989 }
CPU4 attaching sched-domain(s):
domain-0: span=0,4-5 level=MC
groups: 4:{ span=4 cap=908 }, 5:{ span=5 cap=989 }, 0:{ span=0 cap=766 }
CPU5 attaching sched-domain(s):
domain-0: span=0,4-5 level=MC
groups: 5:{ span=5 cap=989 }, 0:{ span=0 cap=766 }, 4:{ span=4 cap=908 }
root domain span: 0,4-5
rd 0,4-5: Checking EAS, CPUs do not have asymmetric capacities
psci: CPU3 killed (polled 0 ms)
Can you please share this information as well if you are able to collect
it (while still running with my last proposed fix)?
Thanks!
Juri
On 05/02/25 07:53, Juri Lelli wrote:
> On 03/02/25 11:01, Jon Hunter wrote:
> > Hi Juri,
> >
> > On 16/01/2025 15:55, Juri Lelli wrote:
> > > On 16/01/25 13:14, Jon Hunter wrote:
>
> ...
>
> > > > [ 210.595431] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> > > > [ 210.606269] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> > > > [ 210.617281] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
> > > > [ 210.627205] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2
> > > > [ 210.637752] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1
> > > ^
> > > Different than before but still not what I expected. Looks like there
> > > are conditions/path I currently cannot replicate on my setup, so more
> > > thinking. Unfortunately I will be out traveling next week, so this
> > > might required a bit of time.
> >
> >
> > I see that this is now in the mainline and our board is still failing to
> > suspend. Let me know if there is anything else you need me to test.
>
> Ah, can you actually add 'sched_verbose' and to your kernel cmdline? It
> should print our additional debug info on the console when domains get
> reconfigured by hotplug/suspends, e.g.
>
> dl_bw_manage: cpu=3 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> CPU0 attaching NULL sched-domain.
> CPU3 attaching NULL sched-domain.
> CPU4 attaching NULL sched-domain.
> CPU5 attaching NULL sched-domain.
> CPU0 attaching sched-domain(s):
> domain-0: span=0,4-5 level=MC
> groups: 0:{ span=0 cap=766 }, 4:{ span=4 cap=908 }, 5:{ span=5 cap=989 }
> CPU4 attaching sched-domain(s):
> domain-0: span=0,4-5 level=MC
> groups: 4:{ span=4 cap=908 }, 5:{ span=5 cap=989 }, 0:{ span=0 cap=766 }
> CPU5 attaching sched-domain(s):
> domain-0: span=0,4-5 level=MC
> groups: 5:{ span=5 cap=989 }, 0:{ span=0 cap=766 }, 4:{ span=4 cap=908 }
> root domain span: 0,4-5
> rd 0,4-5: Checking EAS, CPUs do not have asymmetric capacities
> psci: CPU3 killed (polled 0 ms)
>
> Can you please share this information as well if you are able to collect
> it (while still running with my last proposed fix)?
Also, if you don't mind, add the following on top of the existing
changes.
Just to be sure we don't get out of sync, I pushed current set to
https://github.com/jlelli/linux.git experimental/dl-debug
---
kernel/sched/deadline.c | 2 +-
kernel/sched/topology.c | 5 ++++-
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 9a47decd099a..504ff302299a 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3545,7 +3545,7 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
* dl_servers we can discount, as tasks will be moved out the
* offlined CPUs anyway.
*/
- printk_deferred("%s: cpu=%d cap=%lu fair_server_bw=%llu total_bw=%llu dl_bw_cpus=%d\n", __func__, cpu, cap, fair_server_bw, dl_b->total_bw, dl_bw_cpus(cpu));
+ printk_deferred("%s: cpu=%d cap=%lu fair_server_bw=%llu total_bw=%llu dl_bw_cpus=%d type=%s span=%*pbl\n", __func__, cpu, cap, fair_server_bw, dl_b->total_bw, dl_bw_cpus(cpu), (cpu_rq(cpu)->rd == &def_root_domain) ? "DEF" : "DYN", cpumask_pr_args(cpu_rq(cpu)->rd->span));
if (dl_b->total_bw - fair_server_bw > 0) {
/*
* Leaving at least one CPU for DEADLINE tasks seems a
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 93b08e76a52a..996270cd5bd2 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -137,6 +137,7 @@ static void sched_domain_debug(struct sched_domain *sd, int cpu)
if (!sd) {
printk(KERN_DEBUG "CPU%d attaching NULL sched-domain.\n", cpu);
+ printk(KERN_CONT "span=%*pbl\n", cpumask_pr_args(def_root_domain.span));
return;
}
@@ -2534,8 +2535,10 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
if (has_cluster)
static_branch_inc_cpuslocked(&sched_cluster_active);
- if (rq && sched_debug_verbose)
+ if (rq && sched_debug_verbose) {
pr_info("root domain span: %*pbl\n", cpumask_pr_args(cpu_map));
+ pr_info("default domain span: %*pbl\n", cpumask_pr_args(def_root_domain.span));
+ }
ret = 0;
error:
Hi Juri,
On 05/02/2025 10:12, Juri Lelli wrote:
> On 05/02/25 07:53, Juri Lelli wrote:
>> On 03/02/25 11:01, Jon Hunter wrote:
>>> Hi Juri,
>>>
>>> On 16/01/2025 15:55, Juri Lelli wrote:
>>>> On 16/01/25 13:14, Jon Hunter wrote:
>>
>> ...
>>
>>>>> [ 210.595431] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
>>>>> [ 210.606269] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
>>>>> [ 210.617281] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
>>>>> [ 210.627205] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2
>>>>> [ 210.637752] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1
>>>> ^
>>>> Different than before but still not what I expected. Looks like there
>>>> are conditions/path I currently cannot replicate on my setup, so more
>>>> thinking. Unfortunately I will be out traveling next week, so this
>>>> might required a bit of time.
>>>
>>>
>>> I see that this is now in the mainline and our board is still failing to
>>> suspend. Let me know if there is anything else you need me to test.
>>
>> Ah, can you actually add 'sched_verbose' and to your kernel cmdline? It
>> should print our additional debug info on the console when domains get
>> reconfigured by hotplug/suspends, e.g.
>>
>> dl_bw_manage: cpu=3 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
>> CPU0 attaching NULL sched-domain.
>> CPU3 attaching NULL sched-domain.
>> CPU4 attaching NULL sched-domain.
>> CPU5 attaching NULL sched-domain.
>> CPU0 attaching sched-domain(s):
>> domain-0: span=0,4-5 level=MC
>> groups: 0:{ span=0 cap=766 }, 4:{ span=4 cap=908 }, 5:{ span=5 cap=989 }
>> CPU4 attaching sched-domain(s):
>> domain-0: span=0,4-5 level=MC
>> groups: 4:{ span=4 cap=908 }, 5:{ span=5 cap=989 }, 0:{ span=0 cap=766 }
>> CPU5 attaching sched-domain(s):
>> domain-0: span=0,4-5 level=MC
>> groups: 5:{ span=5 cap=989 }, 0:{ span=0 cap=766 }, 4:{ span=4 cap=908 }
>> root domain span: 0,4-5
>> rd 0,4-5: Checking EAS, CPUs do not have asymmetric capacities
>> psci: CPU3 killed (polled 0 ms)
>>
>> Can you please share this information as well if you are able to collect
>> it (while still running with my last proposed fix)?
>
> Also, if you don't mind, add the following on top of the existing
> changes.
>
> Just to be sure we don't get out of sync, I pushed current set to
>
> https://github.com/jlelli/linux.git experimental/dl-debug
Thanks! That did make it easier :-)
Here is what I see ...
[ 53.823979] PM: suspend entry (deep)
[ 53.827715] Filesystems sync: 0.000 seconds
[ 53.832859] Freezing user space processes
[ 53.838132] Freezing user space processes completed (elapsed 0.001 seconds)
[ 53.845118] OOM killer disabled.
[ 53.848348] Freezing remaining freezable tasks
[ 53.853884] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
[ 53.900686] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 53.918492] dwc-eth-dwmac 2490000.ethernet eth0: Link is Down
[ 53.962316] Disabling non-boot CPUs ...
[ 53.966192] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0,3-5
[ 53.966231] CPU0 attaching NULL sched-domain.
[ 53.980574] span=1-2
[ 53.982767] CPU3 attaching NULL sched-domain.
[ 53.987119] span=0-2
[ 53.989309] CPU4 attaching NULL sched-domain.
[ 53.993662] span=0-3
[ 53.995853] CPU5 attaching NULL sched-domain.
[ 54.000206] span=0-4
[ 54.002433] CPU0 attaching sched-domain(s):
[ 54.006614] domain-0: span=0,3-4 level=MC
[ 54.010711] groups: 0:{ span=0 cap=1022 }, 3:{ span=3 cap=1022 }, 4:{ span=4 }
[ 54.018126] CPU3 attaching sched-domain(s):
[ 54.022307] domain-0: span=0,3-4 level=MC
[ 54.026404] groups: 3:{ span=3 cap=1022 }, 4:{ span=4 }, 0:{ span=0 cap=1023 }
[ 54.033821] CPU4 attaching sched-domain(s):
[ 54.038001] domain-0: span=0,3-4 level=MC
[ 54.042098] groups: 4:{ span=4 }, 0:{ span=0 cap=1023 }, 3:{ span=3 cap=1022 }
[ 54.049508] root domain span: 0,3-4
[ 54.052997] default domain span: 1-2,5
[ 54.056756] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
[ 54.064688] psci: CPU5 killed (polled 0 ms)
[ 54.069495] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3 type=DYN span=0,3-4
[ 54.069547] CPU0 attaching NULL sched-domain.
[ 54.083910] span=1-2,5
[ 54.086277] CPU3 attaching NULL sched-domain.
[ 54.090633] span=0-2,5
[ 54.092999] CPU4 attaching NULL sched-domain.
[ 54.097351] span=0-3,5
[ 54.099756] CPU0 attaching sched-domain(s):
[ 54.103941] domain-0: span=0,3 level=MC
[ 54.107865] groups: 0:{ span=0 }, 3:{ span=3 cap=1023 }
[ 54.113279] CPU3 attaching sched-domain(s):
[ 54.117459] domain-0: span=0,3 level=MC
[ 54.121382] groups: 3:{ span=3 cap=1023 }, 0:{ span=0 }
[ 54.126793] root domain span: 0,3
[ 54.130109] default domain span: 1-2,4-5
[ 54.134040] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
[ 54.141597] psci: CPU4 killed (polled 0 ms)
[ 54.146819] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DYN span=0,3
[ 54.156727] CPU0 attaching NULL sched-domain.
[ 54.161086] span=1-2,4-5
[ 54.163632] CPU3 attaching NULL sched-domain.
[ 54.167988] span=0-2,4-5
[ 54.170553] CPU0 attaching NULL sched-domain.
[ 54.174909] span=0-5
[ 54.177096] root domain span: 0
[ 54.180239] default domain span: 1-5
[ 54.183821] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 54.191728] psci: CPU3 killed (polled 4 ms)
[ 54.196389] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2 type=DEF span=1-5
[ 54.196518] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 54.214816] psci: CPU2 killed (polled 0 ms)
[ 54.219411] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1 type=DEF span=1-5
[ 54.219493] Error taking CPU1 down: -16
[ 54.232948] Non-boot CPUs are not disabled
[ 54.237046] Enabling non-boot CPUs ...
[ 54.241216] Detected PIPT I-cache on CPU2
[ 54.245258] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU2: 0x0000009444c004
[ 54.256744] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU2: 0x00000010305116
[ 54.268954] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU2: 0x00000003001066
[ 54.280865] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
[ 54.288270] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 54.295061] CPU2 is up
[ 54.297599] Detected PIPT I-cache on CPU3
[ 54.301642] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
[ 54.308419] CPU0 attaching NULL sched-domain.
[ 54.312786] span=1-5
[ 54.315031] CPU0 attaching sched-domain(s):
[ 54.319220] domain-0: span=0,3 level=MC
[ 54.323145] groups: 0:{ span=0 }, 3:{ span=3 cap=1016 }
[ 54.328564] CPU3 attaching sched-domain(s):
[ 54.332746] domain-0: span=0,3 level=MC
[ 54.336671] groups: 3:{ span=3 cap=1016 }, 0:{ span=0 }
[ 54.342080] root domain span: 0,3
[ 54.345405] default domain span: 1-2,4-5
[ 54.349338] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
[ 54.356122] CPU3 is up
[ 54.358649] Detected PIPT I-cache on CPU4
[ 54.362677] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
[ 54.369399] CPU0 attaching NULL sched-domain.
[ 54.373767] span=1-2,4-5
[ 54.376310] CPU3 attaching NULL sched-domain.
[ 54.380667] span=0-2,4-5
[ 54.383251] CPU0 attaching sched-domain(s):
[ 54.387439] domain-0: span=0,3-4 level=MC
[ 54.391538] groups: 0:{ span=0 }, 3:{ span=3 cap=1021 }, 4:{ span=4 }
[ 54.398173] CPU3 attaching sched-domain(s):
[ 54.402356] domain-0: span=0,3-4 level=MC
[ 54.406456] groups: 3:{ span=3 cap=1021 }, 4:{ span=4 }, 0:{ span=0 }
[ 54.413090] CPU4 attaching sched-domain(s):
[ 54.417271] domain-0: span=0,3-4 level=MC
[ 54.421373] groups: 4:{ span=4 }, 0:{ span=0 }, 3:{ span=3 cap=1021 }
[ 54.428005] root domain span: 0,3-4
[ 54.431503] default domain span: 1-2,5
[ 54.435259] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
[ 54.442287] CPU4 is up
[ 54.444821] Detected PIPT I-cache on CPU5
[ 54.448848] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
[ 54.455574] CPU0 attaching NULL sched-domain.
[ 54.459950] span=1-2,5
[ 54.462315] CPU3 attaching NULL sched-domain.
[ 54.466674] span=0-2,5
[ 54.469042] CPU4 attaching NULL sched-domain.
[ 54.473401] span=0-3,5
[ 54.475812] CPU0 attaching sched-domain(s):
[ 54.480000] domain-0: span=0,3-5 level=MC
[ 54.484099] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
[ 54.491171] CPU3 attaching sched-domain(s):
[ 54.495352] domain-0: span=0,3-5 level=MC
[ 54.499452] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }
[ 54.506519] CPU4 attaching sched-domain(s):
[ 54.510703] domain-0: span=0,3-5 level=MC
[ 54.514800] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }, 3:{ span=3 }
[ 54.521869] CPU5 attaching sched-domain(s):
[ 54.526050] domain-0: span=0,3-5 level=MC
[ 54.530150] groups: 5:{ span=5 }, 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 54.537217] root domain span: 0,3-5
[ 54.540716] default domain span: 1-2
[ 54.544303] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 54.551393] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 54.558281] CPU5 is up
[ 54.568000] dwc-eth-dwmac 2490000.ethernet eth0: configuring for phy/rgmii link mode
[ 55.585391] dwc-eth-dwmac 2490000.ethernet: Failed to reset the dma
[ 55.591664] dwc-eth-dwmac 2490000.ethernet eth0: stmmac_hw_setup: DMA engine initialization failed
[ 55.600905] dwc-eth-dwmac 2490000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 55.615015] usb-conn-gpio 3520000.padctl:ports:usb2-0:connector: repeated role: device
[ 55.617967] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 55.655665] OOM killer enabled.
[ 55.658813] Restarting tasks ... done.
[ 55.664082] random: crng reseeded on system resumption
[ 55.664403] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 55.674862] PM: suspend exit
--
nvpublic
On 05/02/25 16:56, Jon Hunter wrote:

...

> Thanks! That did make it easier :-)
>
> Here is what I see ...

Thanks!

Still different from what I can repro over here, so, unfortunately, I
had to add additional debug printks. Pushed to the same branch/repo.

Could I ask for another run with it? Please also share the complete
dmesg from boot, as I would need to check debug output when CPUs are
first onlined.

Best,
Juri
On 06/02/2025 09:29, Juri Lelli wrote:
> On 05/02/25 16:56, Jon Hunter wrote:
>
> ...
>
>> Thanks! That did make it easier :-)
>>
>> Here is what I see ...
>
> Thanks!
>
> Still different from what I can repro over here, so, unfortunately, I
> had to add additional debug printks. Pushed to the same branch/repo.
>
> Could I ask for another run with it? Please also share the complete
> dmesg from boot, as I would need to check debug output when CPUs are
> first onlined.
Yes, no problem. Attached is the complete log.
Thanks!
Jon
--
nvpublic
U-Boot 2020.04-g6b630d64fd (Feb 19 2021 - 08:38:59 -0800)
SoC: tegra186
Model: NVIDIA P2771-0000-500
Board: NVIDIA P2771-0000
DRAM: 7.8 GiB
MMC: sdhci@3400000: 1, sdhci@3460000: 0
Loading Environment from MMC... *** Warning - bad CRC, using default environment
In: serial
Out: serial
Err: serial
Net:
Warning: ethernet@2490000 using MAC address from ROM
eth0: ethernet@2490000
Hit any key to stop autoboot: 2 1 0
MMC: no card present
switch to partitions #0, OK
mmc0(part 0) is current device
Scanning mmc 0:1...
Found /boot/extlinux/extlinux.conf
Retrieving file: /boot/extlinux/extlinux.conf
489 bytes read in 18 ms (26.4 KiB/s)
1: primary kernel
Retrieving file: /boot/initrd
7236840 bytes read in 187 ms (36.9 MiB/s)
Retrieving file: /boot/Image
47976960 bytes read in 1147 ms (39.9 MiB/s)
append: earlycon console=ttyS0,115200n8 fw_devlink=on root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81104/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2 video=tegrafb no_console_suspend=1 nvdumper_reserved=0x2772e0000 gpt rootfs.slot_suffix= usbcore.old_scheme_first=1 tegraid=18.1.2.0.0 maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.2031647.1 vpr_resize bl_prof_dataptr=0x10000@0x275840000 sdhci_tegra.en_boot_part_access=1 no_console_suspend root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81104/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2
Retrieving file: /boot/dtb/tegra186-p2771-0000.dtb
108349 bytes read in 21 ms (4.9 MiB/s)
## Flattened Device Tree blob at 88400000
Booting using the fdt blob at 0x88400000
Using Device Tree in place at 0000000088400000, end 000000008841d73c
copying carveout for /host1x@13e00000/display-hub@15200000/display@15200000...
copying carveout for /host1x@13e00000/display-hub@15200000/display@15210000...
copying carveout for /host1x@13e00000/display-hub@15200000/display@15220000...
DT node /trusty missing in source; can't copy status
DT node /reserved-memory/fb0_carveout missing in source; can't copy
DT node /reserved-memory/fb1_carveout missing in source; can't copy
DT node /reserved-memory/fb2_carveout missing in source; can't copy
DT node /reserved-memory/ramoops_carveout missing in source; can't copy
DT node /reserved-memory/vpr-carveout missing in source; can't copy
Starting kernel ...
[ 0.000000] Booting Linux on physical CPU 0x0000000100 [0x411fd073]
[ 0.000000] Linux version 6.13.0-rc6-next-20250110-00004-g85aea528c849 (jonathanh@goldfinger) (aarch64-linux-gcc.br_real (Buildroot 2022.08) 11.3.0, GNU ld (GNU Binutils) 2.38) #1 SMP PREEMPT Thu Feb 6 05:58:56 PST 2025
[ 0.000000] Machine model: NVIDIA Jetson TX2 Developer Kit
[ 0.000000] printk: debug: ignoring loglevel setting.
[ 0.000000] efi: UEFI not found.
[ 0.000000] OF: reserved mem: Reserved memory: unsupported node format, ignoring
[ 0.000000] earlycon: uart0 at MMIO 0x0000000003100000 (options '115200n8')
[ 0.000000] printk: legacy bootconsole [uart0] enabled
[ 0.000000] OF: reserved mem: Reserved memory: unsupported node format, ignoring
[ 0.000000] NUMA: Faking a node at [mem 0x0000000080000000-0x00000002771fffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x274db08c0-0x274db2eff]
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x0000000080000000-0x00000000ffffffff]
[ 0.000000] DMA32 empty
[ 0.000000] Normal [mem 0x0000000100000000-0x00000002771fffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x0000000080000000-0x00000000efffffff]
[ 0.000000] node 0: [mem 0x00000000f0200000-0x00000002757fffff]
[ 0.000000] node 0: [mem 0x0000000275e00000-0x0000000275ffffff]
[ 0.000000] node 0: [mem 0x0000000276600000-0x00000002767fffff]
[ 0.000000] node 0: [mem 0x0000000277000000-0x00000002771fffff]
[ 0.000000] Initmem setup node 0 [mem 0x0000000080000000-0x00000002771fffff]
[ 0.000000] On node 0, zone DMA: 512 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 1536 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 1536 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 2048 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 3584 pages in unavailable ranges
[ 0.000000] cma: Reserved 32 MiB at 0x00000000fe000000 on node -1
[ 0.000000] psci: probing for conduit method from DT.
[ 0.000000] psci: PSCIv1.0 detected in firmware.
[ 0.000000] psci: Using standard PSCI v0.2 function IDs
[ 0.000000] psci: MIGRATE_INFO_TYPE not supported.
[ 0.000000] psci: SMC Calling Convention v1.1
[ 0.000000] percpu: Embedded 25 pages/cpu s61592 r8192 d32616 u102400
[ 0.000000] pcpu-alloc: s61592 r8192 d32616 u102400 alloc=25*4096
[ 0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5
[ 0.000000] Detected PIPT I-cache on CPU0
[ 0.000000] CPU features: detected: Spectre-v2
[ 0.000000] CPU features: detected: Spectre-BHB
[ 0.000000] CPU features: detected: ARM erratum 1742098
[ 0.000000] CPU features: detected: ARM errata 1165522, 1319367, or 1530923
[ 0.000000] alternatives: applying boot alternatives
[ 0.000000] Kernel command line: earlycon console=ttyS0,115200n8 fw_devlink=on root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81104/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2 video=tegrafb no_console_suspend=1 nvdumper_reserved=0x2772e0000 gpt rootfs.slot_suffix= usbcore.old_scheme_first=1 tegraid=18.1.2.0.0 maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.2031647.1 vpr_resize bl_prof_dataptr=0x10000@0x275840000 sdhci_tegra.en_boot_part_access=1
[ 0.000000] Unknown kernel command line parameters "netdevwait vpr_resize nvdumper_reserved=0x2772e0000 tegraid=18.1.2.0.0 bl_prof_dataptr=0x10000@0x275840000", will be passed to user space.
[ 0.000000] printk: log buffer data + meta data: 131072 + 458752 = 589824 bytes
[ 0.000000] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[ 0.000000] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[ 0.000000] Fallback order for Node 0: 0
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 2055168
[ 0.000000] Policy zone: Normal
[ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[ 0.000000] software IO TLB: area num 8.
[ 0.000000] software IO TLB: mapped [mem 0x00000000fa000000-0x00000000fe000000] (64MB)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
[ 0.000000] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 0.000000] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 0.000000] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
[ 0.000000] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
[ 0.000000] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
[ 0.000000] rq_attach_root: cpu=5 old_span=NULL new_span=0-4
[ 0.000000] rcu: Preemptible hierarchical RCU implementation.
[ 0.000000] rcu: RCU event tracing is enabled.
[ 0.000000] rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=6.
[ 0.000000] Trampoline variant of Tasks RCU enabled.
[ 0.000000] Tracing variant of Tasks RCU enabled.
[ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
[ 0.000000] RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=6.
[ 0.000000] RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=6.
[ 0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[ 0.000000] Root IRQ handler: gic_handle_irq
[ 0.000000] GIC: Using split EOI/Deactivate mode
[ 0.000000] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[ 0.000000] arch_timer: cp15 timer(s) running at 31.25MHz (phys).
[ 0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0xe6a171046, max_idle_ns: 881590405314 ns
[ 0.000000] sched_clock: 56 bits at 31MHz, resolution 32ns, wraps every 4398046511088ns
[ 0.008831] Console: colour dummy device 80x25
[ 0.013494] printk: legacy console [tty0] enabled
[ 0.018421] printk: legacy bootconsole [uart0] disabled
[ 0.023949] Calibrating delay loop (skipped), value calculated using timer frequency.. 62.50 BogoMIPS (lpj=125000)
[ 0.023965] pid_max: default: 32768 minimum: 301
[ 0.024013] LSM: initializing lsm=capability
[ 0.024115] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[ 0.024143] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[ 0.024656] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0-5 type=DEF
[ 0.035968] rcu: Hierarchical SRCU implementation.
[ 0.035979] rcu: Max phase no-delay instances is 1000.
[ 0.036161] Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
[ 0.038482] Tegra Revision: A02 SKU: 220 CPU Process: 0 SoC Process: 0
[ 0.040133] EFI services will not be available.
[ 0.040366] smp: Bringing up secondary CPUs ...
[ 0.048932] CPU features: detected: Kernel page table isolation (KPTI)
[ 0.048969] Detected PIPT I-cache on CPU1
[ 0.048985] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU1: 0x0000009444c004
[ 0.049006] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU1: 0x00000010305116
[ 0.049037] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU1: 0x00000003001066
[ 0.049074] CPU features: Unsupported CPU feature variation detected.
[ 0.049264] CPU1: Booted secondary processor 0x0000000000 [0x4e0f0030]
[ 0.049331] __dl_add: cpus=1 tsk_bw=52428 total_bw=104856 span=0-5 type=DEF
[ 0.052684] Detected PIPT I-cache on CPU2
[ 0.052705] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU2: 0x0000009444c004
[ 0.052726] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU2: 0x00000010305116
[ 0.052754] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU2: 0x00000003001066
[ 0.052922] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
[ 0.052982] __dl_add: cpus=2 tsk_bw=52428 total_bw=157284 span=0-5 type=DEF
[ 0.060457] Detected PIPT I-cache on CPU3
[ 0.060554] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
[ 0.060579] __dl_add: cpus=3 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
[ 0.068476] Detected PIPT I-cache on CPU4
[ 0.068539] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
[ 0.068560] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 0.069093] Detected PIPT I-cache on CPU5
[ 0.069154] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
[ 0.069177] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 0.069254] smp: Brought up 1 node, 6 CPUs
[ 0.069289] SMP: Total of 6 processors activated.
[ 0.069296] CPU: All CPU(s) started at EL2
[ 0.069308] CPU features: detected: 32-bit EL0 Support
[ 0.069315] CPU features: detected: 32-bit EL1 Support
[ 0.069323] CPU features: detected: CRC32 instructions
[ 0.069432] alternatives: applying system-wide alternatives
[ 0.077906] CPU0 attaching sched-domain(s):
[ 0.077926] domain-0: span=0,3-5 level=MC
[ 0.077940] groups: 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
[ 0.077982] __dl_sub: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 0.077988] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 0.077996] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 0.078000] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 0.078004] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 0.078009] CPU3 attaching sched-domain(s):
[ 0.078036] domain-0: span=0,3-5 level=MC
[ 0.078046] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }
[ 0.078084] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 0.078088] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 0.078093] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 0.078096] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 0.078100] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 0.078104] CPU4 attaching sched-domain(s):
[ 0.078130] domain-0: span=0,3-5 level=MC
[ 0.078140] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }
[ 0.078177] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 0.078181] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 0.078186] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 0.078189] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 0.078193] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 0.078197] CPU5 attaching sched-domain(s):
[ 0.078224] domain-0: span=0,3-5 level=MC
[ 0.078234] groups: 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }
[ 0.078271] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-2,5 type=DEF
[ 0.078276] __dl_server_detach_root: cpu=5 rd_span=1-2,5 total_bw=104856
[ 0.078280] rq_attach_root: cpu=5 old_span=NULL new_span=0,3-4
[ 0.078283] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 0.078287] __dl_server_attach_root: cpu=5 rd_span=0,3-5 total_bw=209712
[ 0.078291] root domain span: 0,3-5
[ 0.078317] default domain span: 1-2
[ 0.078381] Memory: 7902468K/8220672K available (17856K kernel code, 5188K rwdata, 12720K rodata, 10944K init, 1132K bss, 280192K reserved, 32768K cma-reserved)
[ 0.079457] devtmpfs: initialized
[ 0.093855] DMA-API: preallocated 65536 debug entries
[ 0.093878] DMA-API: debugging enabled by kernel config
[ 0.093892] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[ 0.093913] futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
[ 0.094302] 20752 pages in range for non-PLT usage
[ 0.094310] 512272 pages in range for PLT usage
[ 0.094455] pinctrl core: initialized pinctrl subsystem
[ 0.096807] DMI not present or invalid.
[ 0.098904] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[ 0.099692] DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
[ 0.099901] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[ 0.100200] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[ 0.100246] audit: initializing netlink subsys (disabled)
[ 0.100369] audit: type=2000 audit(0.084:1): state=initialized audit_enabled=0 res=1
[ 0.101986] thermal_sys: Registered thermal governor 'step_wise'
[ 0.101993] thermal_sys: Registered thermal governor 'power_allocator'
[ 0.102057] cpuidle: using governor menu
[ 0.102285] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
[ 0.102487] ASID allocator initialised with 32768 entries
[ 0.104478] Serial: AMBA PL011 UART driver
[ 0.112565] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 0.112597] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 0.112619] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 0.112640] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 0.112660] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 0.112679] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 0.112699] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 0.112718] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 0.112737] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 0.112756] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 0.112775] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 0.112794] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 0.112813] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 0.112833] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 0.112852] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 0.112872] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 0.112891] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 0.112910] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 0.112930] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 0.112950] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 0.112970] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 0.112990] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 0.113010] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 0.113030] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 0.113050] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 0.113069] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 0.113089] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 0.113109] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 0.113129] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 0.113185] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113246] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113306] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113366] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113425] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113485] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113546] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113605] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113664] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113722] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113781] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113858] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113926] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.113989] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114052] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114116] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114177] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114238] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114300] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114358] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114416] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114475] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114535] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114594] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114654] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114712] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114770] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114847] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.114931] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.127363] /memory-controller@2c00000/external-memory-controller@2c60000: Fixed dependency cycle(s) with /bpmp
[ 0.127572] /bpmp: Fixed dependency cycle(s) with /memory-controller@2c00000/external-memory-controller@2c60000
[ 0.131733] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[ 0.131749] HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
[ 0.131759] HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
[ 0.131768] HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
[ 0.131776] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[ 0.131783] HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
[ 0.131791] HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
[ 0.131798] HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
[ 0.133071] ACPI: Interpreter disabled.
[ 0.135046] iommu: Default domain type: Translated
[ 0.135062] iommu: DMA domain TLB invalidation policy: strict mode
[ 0.135554] SCSI subsystem initialized
[ 0.135701] libata version 3.00 loaded.
[ 0.135831] usbcore: registered new interface driver usbfs
[ 0.135855] usbcore: registered new interface driver hub
[ 0.135884] usbcore: registered new device driver usb
[ 0.136443] pps_core: LinuxPPS API ver. 1 registered
[ 0.136452] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[ 0.136467] PTP clock support registered
[ 0.136539] EDAC MC: Ver: 3.0.0
[ 0.137044] scmi_core: SCMI protocol bus registered
[ 0.137686] FPGA manager framework
[ 0.137750] Advanced Linux Sound Architecture Driver Initialized.
[ 0.138406] vgaarb: loaded
[ 0.138781] clocksource: Switched to clocksource arch_sys_counter
[ 0.138940] VFS: Disk quotas dquot_6.6.0
[ 0.138960] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.139115] pnp: PnP ACPI: disabled
[ 0.144305] NET: Registered PF_INET protocol family
[ 0.144510] IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[ 0.148379] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
[ 0.148472] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[ 0.148493] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
[ 0.148810] TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
[ 0.149958] TCP: Hash tables configured (established 65536 bind 65536)
[ 0.150038] UDP hash table entries: 4096 (order: 6, 262144 bytes, linear)
[ 0.150246] UDP-Lite hash table entries: 4096 (order: 6, 262144 bytes, linear)
[ 0.150538] NET: Registered PF_UNIX/PF_LOCAL protocol family
[ 0.150879] RPC: Registered named UNIX socket transport module.
[ 0.150894] RPC: Registered udp transport module.
[ 0.150901] RPC: Registered tcp transport module.
[ 0.150907] RPC: Registered tcp-with-tls transport module.
[ 0.150913] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 0.150927] PCI: CLS 0 bytes, default 64
[ 0.151086] Unpacking initramfs...
[ 0.157201] kvm [1]: nv: 566 coarse grained trap handlers
[ 0.157514] kvm [1]: IPA Size Limit: 40 bits
[ 0.159012] kvm [1]: vgic interrupt IRQ9
[ 0.159078] kvm [1]: Hyp nVHE mode initialized successfully
[ 0.160326] Initialise system trusted keyrings
[ 0.160473] workingset: timestamp_bits=42 max_order=21 bucket_order=0
[ 0.160684] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 0.160874] NFS: Registering the id_resolver key type
[ 0.160898] Key type id_resolver registered
[ 0.160905] Key type id_legacy registered
[ 0.160924] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 0.160933] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[ 0.161046] 9p: Installing v9fs 9p2000 file system support
[ 0.193130] Key type asymmetric registered
[ 0.193156] Asymmetric key parser 'x509' registered
[ 0.193221] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 245)
[ 0.193234] io scheduler mq-deadline registered
[ 0.193242] io scheduler kyber registered
[ 0.193272] io scheduler bfq registered
[ 0.202619] ledtrig-cpu: registered to indicate activity on CPUs
[ 0.224906] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 0.227527] msm_serial: driver initialized
[ 0.227775] SuperH (H)SCI(F) driver initialized
[ 0.227890] STM32 USART driver initialized
[ 0.230733] arm-smmu 12000000.iommu: probing hardware configuration...
[ 0.230750] arm-smmu 12000000.iommu: SMMUv2 with:
[ 0.230760] arm-smmu 12000000.iommu: stage 1 translation
[ 0.230769] arm-smmu 12000000.iommu: stage 2 translation
[ 0.230798] arm-smmu 12000000.iommu: nested translation
[ 0.230807] arm-smmu 12000000.iommu: stream matching with 128 register groups
[ 0.230819] arm-smmu 12000000.iommu: 64 context banks (0 stage-2 only)
[ 0.230831] arm-smmu 12000000.iommu: Supported page sizes: 0x61311000
[ 0.230839] arm-smmu 12000000.iommu: Stage-1: 48-bit VA -> 48-bit IPA
[ 0.230848] arm-smmu 12000000.iommu: Stage-2: 48-bit IPA -> 48-bit PA
[ 0.230889] arm-smmu 12000000.iommu: preserved 0 boot mappings
[ 0.236091] loop: module loaded
[ 0.236840] megasas: 07.727.03.00-rc1
[ 0.242082] tun: Universal TUN/TAP device driver, 1.6
[ 0.242761] thunder_xcv, ver 1.0
[ 0.242829] thunder_bgx, ver 1.0
[ 0.242852] nicpf, ver 1.0
[ 0.243681] hns3: Hisilicon Ethernet Network Driver for Hip08 Family - version
[ 0.243693] hns3: Copyright (c) 2017 Huawei Corporation.
[ 0.243725] hclge is initializing
[ 0.243749] e1000: Intel(R) PRO/1000 Network Driver
[ 0.243757] e1000: Copyright (c) 1999-2006 Intel Corporation.
[ 0.243779] e1000e: Intel(R) PRO/1000 Network Driver
[ 0.243786] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[ 0.243806] igb: Intel(R) Gigabit Ethernet Network Driver
[ 0.243813] igb: Copyright (c) 2007-2014 Intel Corporation.
[ 0.243834] igbvf: Intel(R) Gigabit Virtual Function Network Driver
[ 0.243842] igbvf: Copyright (c) 2009 - 2012 Intel Corporation.
[ 0.244068] sky2: driver version 1.30
[ 0.245909] usbcore: registered new device driver r8152-cfgselector
[ 0.245934] usbcore: registered new interface driver r8152
[ 0.246205] VFIO - User Level meta-driver version: 0.3
[ 0.248341] usbcore: registered new interface driver usb-storage
[ 0.250604] i2c_dev: i2c /dev entries driver
[ 0.256181] sdhci: Secure Digital Host Controller Interface driver
[ 0.256198] sdhci: Copyright(c) Pierre Ossman
[ 0.256733] Synopsys Designware Multimedia Card Interface Driver
[ 0.257419] sdhci-pltfm: SDHCI platform and OF driver helper
[ 0.259570] tegra-bpmp bpmp: Adding to iommu group 0
[ 0.260080] tegra-bpmp bpmp: firmware: 91572a54614f84d0fd0c270beec2c56f
[ 0.261833] /bpmp/i2c/pmic@3c: Fixed dependency cycle(s) with /bpmp/i2c/pmic@3c/pinmux
[ 0.263132] max77620 0-003c: PMIC Version OTP:0x45 and ES:0x8
[ 0.270384] VDD_DDR_1V1_PMIC: Bringing 1125000uV into 1100000-1100000uV
[ 0.280836] VDD_RTC: Bringing 800000uV into 1000000-1000000uV
[ 0.281894] VDDIO_SDMMC3_AP: Bringing 1800000uV into 2800000-2800000uV
[ 0.283539] VDD_HDMI_1V05: Bringing 1000000uV into 1050000-1050000uV
[ 0.284321] VDD_PEX_1V05: Bringing 1000000uV into 1050000-1050000uV
[ 0.361935] Freeing initrd memory: 7064K
[ 0.405555] max77686-rtc max77620-rtc: registered as rtc0
[ 0.423316] max77686-rtc max77620-rtc: setting system clock to 2021-09-04T12:46:38 UTC (1630759598)
[ 0.553383] clocksource: tsc: mask: 0xffffffffffffff max_cycles: 0xe6a171046, max_idle_ns: 881590405314 ns
[ 0.553408] clocksource: osc: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 49772407460 ns
[ 0.553419] clocksource: usec: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275 ns
[ 0.553864] usbcore: registered new interface driver usbhid
[ 0.553875] usbhid: USB HID core driver
[ 0.557319] hw perfevents: enabled with armv8_cortex_a57 PMU driver, 7 (0,8000003f) counters available
[ 0.557877] hw perfevents: enabled with armv8_nvidia_denver PMU driver, 7 (0,8000003f) counters available
[ 0.562517] NET: Registered PF_PACKET protocol family
[ 0.562585] 9pnet: Installing 9P2000 support
[ 0.562632] Key type dns_resolver registered
[ 0.569821] registered taskstats version 1
[ 0.569948] Loading compiled-in X.509 certificates
[ 0.574916] Demotion targets for Node 0: null
[ 0.594968] tegra-pcie 10003000.pcie: Adding to iommu group 1
[ 0.595269] tegra-pcie 10003000.pcie: host bridge /pcie@10003000 ranges:
[ 0.595302] tegra-pcie 10003000.pcie: MEM 0x0010000000..0x0010001fff -> 0x0010000000
[ 0.595322] tegra-pcie 10003000.pcie: MEM 0x0010004000..0x0010004fff -> 0x0010004000
[ 0.595340] tegra-pcie 10003000.pcie: IO 0x0050000000..0x005000ffff -> 0x0000000000
[ 0.595360] tegra-pcie 10003000.pcie: MEM 0x0050100000..0x0057ffffff -> 0x0050100000
[ 0.595375] tegra-pcie 10003000.pcie: MEM 0x0058000000..0x007fffffff -> 0x0058000000
[ 0.595461] tegra-pcie 10003000.pcie: 4x1, 1x1 configuration
[ 0.596858] tegra-pcie 10003000.pcie: probing port 0, using 4 lanes
[ 1.809377] tegra-pcie 10003000.pcie: link 0 down, ignoring
[ 1.809833] tegra-pcie 10003000.pcie: PCI host bridge to bus 0000:00
[ 1.809851] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 1.809862] pci_bus 0000:00: root bus resource [mem 0x10000000-0x10001fff]
[ 1.809872] pci_bus 0000:00: root bus resource [mem 0x10004000-0x10004fff]
[ 1.809882] pci_bus 0000:00: root bus resource [io 0x0000-0xffff]
[ 1.809891] pci_bus 0000:00: root bus resource [mem 0x50100000-0x57ffffff]
[ 1.809900] pci_bus 0000:00: root bus resource [mem 0x58000000-0x7fffffff pref]
[ 1.813485] pci_bus 0000:00: resource 4 [mem 0x10000000-0x10001fff]
[ 1.813501] pci_bus 0000:00: resource 5 [mem 0x10004000-0x10004fff]
[ 1.813510] pci_bus 0000:00: resource 6 [io 0x0000-0xffff]
[ 1.813520] pci_bus 0000:00: resource 7 [mem 0x50100000-0x57ffffff]
[ 1.813530] pci_bus 0000:00: resource 8 [mem 0x58000000-0x7fffffff pref]
[ 1.814536] tegra-gpcdma 2600000.dma-controller: Adding to iommu group 2
[ 1.816404] tegra-gpcdma 2600000.dma-controller: GPC DMA driver register 31 channels
[ 1.818669] printk: legacy console [ttyS0] disabled
[ 1.818875] 3100000.serial: ttyS0 at MMIO 0x3100000 (irq = 23, base_baud = 25500000) is a Tegra
[ 1.818913] printk: legacy console [ttyS0] enabled
[ 4.532136] dwc-eth-dwmac 2490000.ethernet: Adding to iommu group 3
[ 4.550952] dwc-eth-dwmac 2490000.ethernet: User ID: 0x10, Synopsys ID: 0x41
[ 4.558018] dwc-eth-dwmac 2490000.ethernet: DWMAC4/5
[ 4.563086] dwc-eth-dwmac 2490000.ethernet: DMA HW capability register supported
[ 4.570481] dwc-eth-dwmac 2490000.ethernet: RX Checksum Offload Engine supported
[ 4.577874] dwc-eth-dwmac 2490000.ethernet: TX Checksum insertion supported
[ 4.584833] dwc-eth-dwmac 2490000.ethernet: Wake-Up On Lan supported
[ 4.591220] dwc-eth-dwmac 2490000.ethernet: TSO supported
[ 4.596622] dwc-eth-dwmac 2490000.ethernet: Enable RX Mitigation via HW Watchdog Timer
[ 4.604536] dwc-eth-dwmac 2490000.ethernet: Enabled L3L4 Flow TC (entries=8)
[ 4.611584] dwc-eth-dwmac 2490000.ethernet: Enabled RFS Flow TC (entries=10)
[ 4.618630] dwc-eth-dwmac 2490000.ethernet: TSO feature enabled
[ 4.624548] dwc-eth-dwmac 2490000.ethernet: Using 40/40 bits DMA host/device width
[ 4.632879] irq: IRQ73: trimming hierarchy from :pmc@c360000
[ 4.643185] tegra_rtc c2a0000.rtc: registered as rtc1
[ 4.648256] tegra_rtc c2a0000.rtc: Tegra internal Real Time Clock
[ 4.656930] irq: IRQ76: trimming hierarchy from :pmc@c360000
[ 4.662832] pca953x 1-0074: using no AI
[ 4.669780] irq: IRQ77: trimming hierarchy from :pmc@c360000
[ 4.675586] pca953x 1-0077: using no AI
[ 4.694391] cpufreq: cpufreq_online: CPU0: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
[ 4.705324] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.705332] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.705338] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.705343] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.705347] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.705351] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.745754] dl_clear_root_domain: span=1-2 type=DEF
[ 4.745760] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.745765] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.745823] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 4.745845] __dl_sub: cpus=4 tsk_bw=104857 total_bw=209712 span=0,3-5 type=DYN
[ 4.745924] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.745930] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.745936] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.745940] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.745945] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.745949] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.819431] dl_clear_root_domain: span=1-2 type=DEF
[ 4.819437] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.819441] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.819483] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 4.819489] __dl_add: cpus=2 tsk_bw=104857 total_bw=209713 span=1-2 type=DEF
[ 4.819500] __dl_sub: cpus=4 tsk_bw=104857 total_bw=209712 span=0,3-5 type=DYN
[ 4.819553] cpufreq: cpufreq_online: CPU3: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
[ 4.870505] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.870511] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.870517] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.870521] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.870525] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.870529] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.910866] dl_clear_root_domain: span=1-2 type=DEF
[ 4.910870] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.910874] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.910915] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 4.910920] __dl_add: cpus=2 tsk_bw=104857 total_bw=209713 span=1-2 type=DEF
[ 4.910925] __dl_add: cpus=2 tsk_bw=104857 total_bw=314570 span=1-2 type=DEF
[ 4.910982] cpufreq: cpufreq_online: CPU4: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
[ 4.961784] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.961790] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.961794] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.961798] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.961802] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.961806] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 5.002124] dl_clear_root_domain: span=1-2 type=DEF
[ 5.002127] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 5.002129] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 5.002156] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 5.002159] __dl_add: cpus=2 tsk_bw=104857 total_bw=209713 span=1-2 type=DEF
[ 5.002162] __dl_add: cpus=2 tsk_bw=104857 total_bw=314570 span=1-2 type=DEF
[ 5.002165] __dl_add: cpus=4 tsk_bw=104857 total_bw=419426 span=0,3-5 type=DYN
[ 5.002206] cpufreq: cpufreq_online: CPU5: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
[ 5.060213] dl_clear_root_domain: span=0,3-5 type=DYN
[ 5.060219] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 5.060223] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 5.060227] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 5.060231] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 5.060235] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 5.100556] dl_clear_root_domain: span=1-2 type=DEF
[ 5.100560] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 5.100563] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 5.100588] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 5.100592] __dl_add: cpus=2 tsk_bw=104857 total_bw=209713 span=1-2 type=DEF
[ 5.100595] __dl_add: cpus=2 tsk_bw=104857 total_bw=314570 span=1-2 type=DEF
[ 5.100598] __dl_add: cpus=4 tsk_bw=104857 total_bw=419426 span=0,3-5 type=DYN
[ 5.100601] __dl_add: cpus=4 tsk_bw=104857 total_bw=524283 span=0,3-5 type=DYN
[ 5.102795] dl_clear_root_domain: span=0,3-5 type=DYN
[ 5.102803] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 5.102806] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 5.102809] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 5.102812] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 5.102816] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 5.103447] sdhci-tegra 3440000.mmc: Adding to iommu group 4
[ 5.107652] sdhci-tegra 3460000.mmc: Adding to iommu group 5
[ 5.107664] dl_clear_root_domain: span=1-2 type=DEF
[ 5.107667] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 5.107670] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 5.107722] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 5.107726] __dl_add: cpus=2 tsk_bw=104857 total_bw=209713 span=1-2 type=DEF
[ 5.107729] __dl_add: cpus=2 tsk_bw=104857 total_bw=314570 span=1-2 type=DEF
[ 5.107732] __dl_add: cpus=4 tsk_bw=104857 total_bw=419426 span=0,3-5 type=DYN
[ 5.107735] __dl_add: cpus=4 tsk_bw=104857 total_bw=524283 span=0,3-5 type=DYN
[ 5.107738] __dl_add: cpus=4 tsk_bw=104857 total_bw=629140 span=0,3-5 type=DYN
[ 5.271913] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.272912] irq: IRQ86: trimming hierarchy from :pmc@c360000
[ 5.279468] mmc0: CQHCI version 5.10
[ 5.284580] tegra-xusb 3530000.usb: Adding to iommu group 6
[ 5.295841] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 5.303161] tegra-xusb 3530000.usb: xHCI Host Controller
[ 5.306788] mmc2: SDHCI controller on 3440000.mmc [3440000.mmc] using ADMA 64-bit
[ 5.308498] tegra-xusb 3530000.usb: new USB bus registered, assigned bus number 1
[ 5.318798] mmc0: SDHCI controller on 3460000.mmc [3460000.mmc] using ADMA 64-bit
[ 5.324125] tegra-xusb 3530000.usb: hcc params 0x0184fd25 hci version 0x100 quirks 0x0000000000000810
[ 5.340179] tegra-xusb 3530000.usb: irq 87, io mem 0x03530000
[ 5.345573] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.346075] tegra-xusb 3530000.usb: xHCI Host Controller
[ 5.356734] tegra-xusb 3530000.usb: new USB bus registered, assigned bus number 2
[ 5.364230] tegra-xusb 3530000.usb: Host supports USB 3.0 SuperSpeed
[ 5.370991] hub 1-0:1.0: USB hub found
[ 5.374760] hub 1-0:1.0: 4 ports detected
[ 5.379350] hub 2-0:1.0: USB hub found
[ 5.383126] hub 2-0:1.0: 3 ports detected
[ 5.390972] sdhci-tegra 3400000.mmc: Adding to iommu group 7
[ 5.397045] irq: IRQ90: trimming hierarchy from :interrupt-controller@3881000
[ 5.404571] irq: IRQ92: trimming hierarchy from :pmc@c360000
[ 5.404619] sdhci-tegra 3400000.mmc: Got CD GPIO
[ 5.410634] irq: IRQ93: trimming hierarchy from :pmc@c360000
[ 5.414870] sdhci-tegra 3400000.mmc: Got WP GPIO
[ 5.425564] input: gpio-keys as /devices/platform/gpio-keys/input/input0
[ 5.429998] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.472881] irq: IRQ94: trimming hierarchy from :pmc@c360000
[ 5.479047] mmc1: SDHCI controller on 3400000.mmc [3400000.mmc] using ADMA 64-bit
[ 5.479843] mmc0: Command Queue Engine enabled
[ 5.487500] dwc-eth-dwmac 2490000.ethernet eth0: Register MEM_TYPE_PAGE_POOL RxQ-0
[ 5.491067] mmc0: new HS400 MMC card at address 0001
[ 5.504362] mmcblk0: mmc0:0001 032G34 29.1 GiB
[ 5.505619] dwc-eth-dwmac 2490000.ethernet eth0: PHY [stmmac-0:00] driver [Broadcom BCM89610] (irq=73)
[ 5.514609] mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16 p17 p18 p19 p20 p21 p22 p23 p24 p25 p26 p27 p28 p29 p30 p31 p32 p33
[ 5.518651] dwmac4: Master AXI performs any burst length
[ 5.534026] mmcblk0boot0: mmc0:0001 032G34 4.00 MiB
[ 5.536830] dwc-eth-dwmac 2490000.ethernet eth0: No Safety Features support found
[ 5.542403] mmcblk0boot1: mmc0:0001 032G34 4.00 MiB
[ 5.553912] dwc-eth-dwmac 2490000.ethernet eth0: IEEE 1588-2008 Advanced Timestamp supported
[ 5.554510] mmcblk0rpmb: mmc0:0001 032G34 4.00 MiB, chardev (234:0)
[ 5.562485] dwc-eth-dwmac 2490000.ethernet eth0: registered PTP clock
[ 5.575721] dwc-eth-dwmac 2490000.ethernet eth0: configuring for phy/rgmii link mode
[ 5.589983] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 8.537008] dwc-eth-dwmac 2490000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 8.558803] IP-Config: Complete:
[ 8.562037] device=eth0, hwaddr=00:04:4b:8c:56:1e, ipaddr=192.168.99.2, mask=255.255.255.0, gw=192.168.99.1
[ 8.572228] host=192.168.99.2, domain=, nis-domain=(none)
[ 8.578068] bootserver=192.168.99.1, rootserver=192.168.99.1, rootpath=
[ 8.578205] clk: Disabling unused clocks
[ 8.610251] PM: genpd: Disabling unused power domains
[ 8.615374] ALSA device list:
[ 8.618349] No soundcards found.
[ 8.626496] Freeing unused kernel memory: 10944K
[ 8.631222] Run /init as init process
[ 8.634902] with arguments:
[ 8.637867] /init
[ 8.640148] netdevwait
[ 8.642894] vpr_resize
[ 8.645601] with environment:
[ 8.648766] HOME=/
[ 8.651148] TERM=linux
[ 8.653853] nvdumper_reserved=0x2772e0000
[ 8.658227] tegraid=18.1.2.0.0
[ 8.661644] bl_prof_dataptr=0x10000@0x275840000
[ 8.698856] Root device found: nfs
[ 8.709669] Ethernet interface: eth0
[ 8.719529] IP Address: 192.168.99.2
[ 8.786703] Rootfs mounted over nfs
[ 8.814750] Switching from initrd to actual rootfs
[ 9.078708] systemd[1]: System time before build time, advancing clock.
[ 9.193278] NET: Registered PF_INET6 protocol family
[ 9.199880] Segment Routing with IPv6
[ 9.203599] In-situ OAM (IOAM) with IPv6
[ 9.244192] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
[ 9.265876] systemd[1]: Detected architecture arm64.
[ 9.307997] systemd[1]: Set hostname to <tegra-ubuntu>.
[ 10.998857] random: crng init done
[ 11.002524] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[ 11.010680] systemd[1]: Reached target Remote File Systems.
[ 11.016357] systemd[1]: Reached target Swap.
[ 11.021073] systemd[1]: Created slice User and Session Slice.
[ 11.027180] systemd[1]: Created slice System Slice.
[ 11.032271] systemd[1]: Listening on udev Kernel Socket.
[ 11.037874] systemd[1]: Listening on Journal Audit Socket.
[ 12.024420] systemd-journald[175]: Received request to flush runtime journal from PID 1
[ 12.842603] tegra-host1x 13e00000.host1x: Adding to iommu group 8
[ 12.873046] tegra-hda 3510000.hda: Adding to iommu group 9
[ 12.881284] host1x-context host1x-ctx.0: Adding to iommu group 10
[ 12.889278] tegra-xudc 3550000.usb: Adding to iommu group 11
[ 12.896595] host1x-context host1x-ctx.1: Adding to iommu group 12
[ 12.903147] host1x-context host1x-ctx.2: Adding to iommu group 13
[ 12.910107] host1x-context host1x-ctx.3: Adding to iommu group 14
[ 12.919804] host1x-context host1x-ctx.4: Adding to iommu group 15
[ 12.928982] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 12.942682] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 12.955958] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 12.962563] host1x-context host1x-ctx.5: Adding to iommu group 16
[ 12.967337] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 12.983759] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 12.995477] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 12.998684] host1x-context host1x-ctx.6: Adding to iommu group 17
[ 13.006433] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 13.023283] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 13.034212] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 13.034993] host1x-context host1x-ctx.7: Adding to iommu group 18
[ 13.045005] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 13.061889] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 13.073294] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 13.084083] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 13.094875] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 13.107994] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 13.118663] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 13.129322] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 13.141105] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 13.171322] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 13.182837] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 13.197545] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 13.208529] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 13.219540] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 13.230935] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 13.241750] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 13.253269] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 13.264477] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 13.275361] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 13.286100] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 13.297217] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.308313] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.319186] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.330033] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.340836] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.351615] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.362502] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.373246] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.384254] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.395070] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.405855] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.416657] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.427353] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.438235] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.448986] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.459712] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.470569] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.481317] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.492021] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.502982] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.513803] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.524840] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.535816] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.546608] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.558578] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.569298] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.580270] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.591306] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.602523] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.615735] tegra-audio-graph-card sound: Adding to iommu group 19
[ 13.618353] at24 6-0050: 256 byte 24c02 EEPROM, read-only
[ 13.629298] at24 6-0057: 256 byte 24c02 EEPROM, read-only
[ 13.630409] input: NVIDIA Jetson TX2 HDA HDMI/DP,pcm=3 as /devices/platform/3510000.hda/sound/card0/input1
[ 13.647552] gic 2a41000.interrupt-controller: GIC IRQ controller registered
[ 13.650175] input: NVIDIA Jetson TX2 HDA HDMI/DP,pcm=7 as /devices/platform/3510000.hda/sound/card0/input2
[ 13.654672] tegra-aconnect aconnect@2900000: Tegra ACONNECT bus registered
[ 13.704210] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 13.704442] tegra-adma 2930000.dma-controller: Tegra210 ADMA driver registered 32 channels
[ 13.715128] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.735447] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 13.746197] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.763606] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 13.775203] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.788245] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 13.799032] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.812091] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 13.822874] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.835942] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 13.846702] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.859634] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 13.870368] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.883233] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 13.894093] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.907306] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 13.918020] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.931142] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 13.942057] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.954340] tegra-dc 15200000.display: Adding to iommu group 20
[ 13.960546] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 13.971264] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.983437] tegra-dc 15210000.display: Adding to iommu group 20
[ 13.989547] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 14.000272] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.011427] tegra-dc 15220000.display: Adding to iommu group 20
[ 14.017743] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 14.028601] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.042294] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 14.053197] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.067251] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 14.078150] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.092208] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 14.103014] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.114168] tegra-vic 15340000.vic: Adding to iommu group 21
[ 14.120137] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 14.130905] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.142420] irq: IRQ138: trimming hierarchy from :pmc@c360000
[ 14.142733] tegra-nvdec 15480000.nvdec: Adding to iommu group 22
[ 14.154513] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 14.165954] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.179306] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 14.190134] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.203525] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 14.213113] [drm] Initialized tegra 1.0.0 for drm on minor 0
[ 14.214359] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.221413] drm drm: [drm] Cannot find any crtc or sizes
[ 14.237200] drm drm: [drm] Cannot find any crtc or sizes
[ 14.238115] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 14.242700] drm drm: [drm] Cannot find any crtc or sizes
[ 14.253358] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.271787] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 14.282578] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.295614] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 14.306447] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.319422] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 14.331390] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.345521] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 14.356236] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.368967] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 14.379677] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.392509] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 14.403455] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.416531] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 14.427489] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 14.441064] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 14.451852] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
Ubuntu 18.04.6 LTS tegra-ubuntu ttyS0
tegra-ubuntu login: ubuntu (automatic login)
[ 17.659143] dl_clear_root_domain: span=0,3-5 type=DYN
[ 17.659155] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 17.659160] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 17.659163] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 17.659166] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 17.659170] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 17.700077] dl_clear_root_domain: span=1-2 type=DEF
[ 17.700085] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 17.700089] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 17.700137] __dl_add: cpus=2 tsk_bw=104857 total_bw=209713 span=1-2 type=DEF
[ 17.700140] __dl_add: cpus=2 tsk_bw=104857 total_bw=314570 span=1-2 type=DEF
[ 17.700144] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 17.700147] __dl_add: cpus=4 tsk_bw=104857 total_bw=419426 span=0,3-5 type=DYN
[ 17.700151] __dl_add: cpus=4 tsk_bw=104857 total_bw=524283 span=0,3-5 type=DYN
[ 17.763061] dl_clear_root_domain: span=0,3-5 type=DYN
[ 17.763069] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 17.763073] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 17.763076] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 17.763079] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 17.763082] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 17.803505] dl_clear_root_domain: span=1-2 type=DEF
[ 17.803510] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 17.803514] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 17.803558] __dl_add: cpus=2 tsk_bw=104857 total_bw=209713 span=1-2 type=DEF
[ 17.803562] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 17.803566] __dl_add: cpus=4 tsk_bw=104857 total_bw=419426 span=0,3-5 type=DYN
[ 17.803569] __dl_add: cpus=4 tsk_bw=104857 total_bw=524283 span=0,3-5 type=DYN
[ 17.852171] dl_clear_root_domain: span=0,3-5 type=DYN
[ 17.852179] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 17.852182] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 17.852185] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 17.852188] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 17.852191] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 17.893063] dl_clear_root_domain: span=1-2 type=DEF
[ 17.893072] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 17.893075] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 17.893126] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 17.893130] __dl_add: cpus=4 tsk_bw=104857 total_bw=419426 span=0,3-5 type=DYN
[ 17.893135] __dl_add: cpus=4 tsk_bw=104857 total_bw=524283 span=0,3-5 type=DYN
[ 17.939091] dl_clear_root_domain: span=0,3-5 type=DYN
[ 17.939103] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 17.939108] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 17.939111] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 17.939113] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 17.939117] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 17.983943] dl_clear_root_domain: span=1-2 type=DEF
[ 17.983953] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 17.983957] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 17.984004] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 17.984008] __dl_add: cpus=4 tsk_bw=104857 total_bw=419426 span=0,3-5 type=DYN
[ 18.027142] dl_clear_root_domain: span=0,3-5 type=DYN
[ 18.027153] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 18.027158] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 18.027161] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 18.027164] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 18.027167] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 18.069111] dl_clear_root_domain: span=1-2 type=DEF
[ 18.069122] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 18.069127] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 18.069187] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
[ 18.099209] dl_clear_root_domain: span=0,3-5 type=DYN
[ 18.099221] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 18.099224] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 18.099227] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 18.099230] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 18.099234] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 18.140009] dl_clear_root_domain: span=1-2 type=DEF
[ 18.140017] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 18.140021] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 6.13.0-rc6-next-20250110-00004-g85aea528c849 aarch64)
ubuntu@tegra-ubuntu:~$ [ 24.555002] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 3507000.sata
[ 24.564290] tegra-mc 2c00000.memory-controller: sync_state() pending due to 3507000.sata
[ 24.572453] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 15380000.nvjpg
[ 24.581849] tegra-mc 2c00000.memory-controller: sync_state() pending due to 15380000.nvjpg
[ 24.590112] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 154c0000.nvenc
[ 24.599498] tegra-mc 2c00000.memory-controller: sync_state() pending due to 154c0000.nvenc
[ 39.914848] VDD_RTC: disabling
[ 54.684299] PM: suspend entry (deep)
[ 54.687994] Filesystems sync: 0.000 seconds
[ 54.693120] Freezing user space processes
[ 54.698410] Freezing user space processes completed (elapsed 0.001 seconds)
[ 54.705399] OOM killer disabled.
[ 54.708627] Freezing remaining freezable tasks
[ 54.714159] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
[ 54.759994] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 54.777739] dwc-eth-dwmac 2490000.ethernet eth0: Link is Down
[ 54.821506] Disabling non-boot CPUs ...
[ 54.825385] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0,3-5
[ 54.825423] CPU0 attaching NULL sched-domain.
[ 54.839768] span=1-2
[ 54.841952] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 54.841956] __dl_server_detach_root: cpu=0 rd_span=0,3-5 total_bw=157284
[ 54.841964] rq_attach_root: cpu=0 old_span=NULL new_span=1-2
[ 54.841968] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DEF
[ 54.841972] __dl_server_attach_root: cpu=0 rd_span=0-2 total_bw=157284
[ 54.841975] CPU3 attaching NULL sched-domain.
[ 54.879275] span=0-2
[ 54.881458] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=3-5 type=DYN
[ 54.881461] __dl_server_detach_root: cpu=3 rd_span=3-5 total_bw=104856
[ 54.881469] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
[ 54.881472] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-3 type=DEF
[ 54.881474] __dl_server_attach_root: cpu=3 rd_span=0-3 total_bw=209712
[ 54.881477] CPU4 attaching NULL sched-domain.
[ 54.918434] span=0-3
[ 54.920622] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=4-5 type=DYN
[ 54.920626] __dl_server_detach_root: cpu=4 rd_span=4-5 total_bw=52428
[ 54.920633] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
[ 54.920636] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-4 type=DEF
[ 54.920639] __dl_server_attach_root: cpu=4 rd_span=0-4 total_bw=262140
[ 54.920641] CPU5 attaching NULL sched-domain.
[ 54.957421] span=0-4
[ 54.959612] rq_attach_root: cpu=5 old_span= new_span=0-4
[ 54.959616] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 54.959619] __dl_server_attach_root: cpu=5 rd_span=0-5 total_bw=314568
[ 54.959673] CPU0 attaching sched-domain(s):
[ 54.982639] domain-0: span=0,3-4 level=MC
[ 54.986738] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 54.992586] __dl_sub: cpus=5 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 54.992590] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 54.992597] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 54.992600] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 54.992602] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 54.992606] CPU3 attaching sched-domain(s):
[ 55.028600] domain-0: span=0,3-4 level=MC
[ 55.032699] groups: 3:{ span=3 }, 4:{ span=4 }, 0:{ span=0 }
[ 55.038544] __dl_sub: cpus=4 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 55.038548] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 55.038554] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 55.038556] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 55.038559] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 55.038562] CPU4 attaching sched-domain(s):
[ 55.075165] domain-0: span=0,3-4 level=MC
[ 55.079266] groups: 4:{ span=4 }, 0:{ span=0 }, 3:{ span=3 }
[ 55.085111] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 55.085115] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 55.085120] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 55.085123] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 55.085126] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 55.085130] root domain span: 0,3-4
[ 55.122254] default domain span: 1-2,5
[ 55.126012] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
[ 55.133836] psci: CPU5 killed (polled 0 ms)
[ 55.138763] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3 type=DYN span=0,3-4
[ 55.148810] CPU0 attaching NULL sched-domain.
[ 55.153167] span=1-2,5
[ 55.155530] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3-4 type=DYN
[ 55.155534] __dl_server_detach_root: cpu=0 rd_span=0,3-4 total_bw=104856
[ 55.155541] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,5
[ 55.155545] __dl_add: cpus=3 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DEF
[ 55.155548] __dl_server_attach_root: cpu=0 rd_span=0-2,5 total_bw=209712
[ 55.155551] CPU3 attaching NULL sched-domain.
[ 55.193350] span=0-2,5
[ 55.195709] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=3-4 type=DYN
[ 55.195712] __dl_server_detach_root: cpu=3 rd_span=3-4 total_bw=52428
[ 55.195719] rq_attach_root: cpu=3 old_span=NULL new_span=0-2,5
[ 55.195722] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-3,5 type=DEF
[ 55.195724] __dl_server_attach_root: cpu=3 rd_span=0-3,5 total_bw=262140
[ 55.195727] CPU4 attaching NULL sched-domain.
[ 55.233005] span=0-3,5
[ 55.235365] rq_attach_root: cpu=4 old_span= new_span=0-3,5
[ 55.235369] __dl_add: cpus=4 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 55.235372] __dl_server_attach_root: cpu=4 rd_span=0-5 total_bw=314568
[ 55.235415] CPU0 attaching sched-domain(s):
[ 55.258539] domain-0: span=0,3 level=MC
[ 55.262464] groups: 0:{ span=0 }, 3:{ span=3 }
[ 55.267091] __dl_sub: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 55.267095] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 55.267102] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 55.267104] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 55.267107] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 55.267110] CPU3 attaching sched-domain(s):
[ 55.303088] domain-0: span=0,3 level=MC
[ 55.307010] groups: 3:{ span=3 }, 0:{ span=0 }
[ 55.311635] __dl_sub: cpus=3 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 55.311638] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 55.311644] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 55.311646] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 55.311650] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 55.311655] root domain span: 0,3
[ 55.347392] default domain span: 1-2,4-5
[ 55.351325] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
[ 55.359476] psci: CPU4 killed (polled 4 ms)
[ 55.364219] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DYN span=0,3
[ 55.364258] CPU0 attaching NULL sched-domain.
[ 55.378437] span=1-2,4-5
[ 55.380974] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=0,3 type=DYN
[ 55.380979] __dl_server_detach_root: cpu=0 rd_span=0,3 total_bw=52428
[ 55.380985] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,4-5
[ 55.380989] __dl_add: cpus=3 tsk_bw=52428 total_bw=262140 span=0-2,4-5 type=DEF
[ 55.380992] __dl_server_attach_root: cpu=0 rd_span=0-2,4-5 total_bw=262140
[ 55.380995] CPU3 attaching NULL sched-domain.
[ 55.418794] span=0-2,4-5
[ 55.421322] rq_attach_root: cpu=3 old_span= new_span=0-2,4-5
[ 55.421326] __dl_add: cpus=3 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 55.421329] __dl_server_attach_root: cpu=3 rd_span=0-5 total_bw=314568
[ 55.421366] CPU0 attaching NULL sched-domain.
[ 55.444834] span=0-5
[ 55.447020] __dl_sub: cpus=3 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 55.447024] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 55.447030] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 55.447033] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 55.447035] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 55.447038] root domain span: 0
[ 55.481974] default domain span: 1-5
[ 55.485558] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 55.493435] psci: CPU3 killed (polled 0 ms)
[ 55.498466] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2 type=DEF span=1-5
[ 55.498599] dl_clear_root_domain: span=0 type=DYN
[ 55.498609] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 55.498621] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 55.528320] psci: CPU2 killed (polled 0 ms)
[ 55.532764] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1 type=DEF span=1-5
[ 55.532841] Error taking CPU1 down: -16
[ 55.546253] Non-boot CPUs are not disabled
[ 55.550379] Enabling non-boot CPUs ...
[ 55.554546] Detected PIPT I-cache on CPU2
[ 55.558586] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU2: 0x0000009444c004
[ 55.570082] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU2: 0x00000010305116
[ 55.582282] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU2: 0x00000003001066
[ 55.594181] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
[ 55.601508] dl_clear_root_domain: span=0 type=DYN
[ 55.601519] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 55.601532] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 55.619723] CPU2 is up
[ 55.622262] Detected PIPT I-cache on CPU3
[ 55.626304] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
[ 55.633073] CPU0 attaching NULL sched-domain.
[ 55.637447] span=1-5
[ 55.639639] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=0 type=DYN
[ 55.639643] __dl_server_detach_root: cpu=0 rd_span=0 total_bw=0
[ 55.639658] rq_attach_root: cpu=0 old_span= new_span=1-5
[ 55.639662] __dl_add: cpus=4 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 55.639665] __dl_server_attach_root: cpu=0 rd_span=0-5 total_bw=314568
[ 55.639709] CPU0 attaching sched-domain(s):
[ 55.674929] domain-0: span=0,3 level=MC
[ 55.678853] groups: 0:{ span=0 }, 3:{ span=3 }
[ 55.683482] __dl_sub: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 55.683486] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 55.683492] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 55.683494] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 55.683497] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 55.683501] CPU3 attaching sched-domain(s):
[ 55.719492] domain-0: span=0,3 level=MC
[ 55.723416] groups: 3:{ span=3 }, 0:{ span=0 }
[ 55.728042] __dl_sub: cpus=3 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 55.728045] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 55.728048] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 55.728050] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 55.728053] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 55.728055] root domain span: 0,3
[ 55.763795] default domain span: 1-2,4-5
[ 55.767728] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
[ 55.774521] CPU3 is up
[ 55.777057] Detected PIPT I-cache on CPU4
[ 55.781088] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
[ 55.787830] CPU0 attaching NULL sched-domain.
[ 55.792197] span=1-2,4-5
[ 55.794728] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=0,3 type=DYN
[ 55.794732] __dl_server_detach_root: cpu=0 rd_span=0,3 total_bw=52428
[ 55.794741] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,4-5
[ 55.794744] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-2,4-5 type=DEF
[ 55.794747] __dl_server_attach_root: cpu=0 rd_span=0-2,4-5 total_bw=262140
[ 55.794750] CPU3 attaching NULL sched-domain.
[ 55.832569] span=0-2,4-5
[ 55.835106] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=3 type=DYN
[ 55.835110] __dl_server_detach_root: cpu=3 rd_span=3 total_bw=0
[ 55.835117] rq_attach_root: cpu=3 old_span= new_span=0-2,4-5
[ 55.835121] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 55.835123] __dl_server_attach_root: cpu=3 rd_span=0-5 total_bw=314568
[ 55.835168] CPU0 attaching sched-domain(s):
[ 55.870732] domain-0: span=0,3-4 level=MC
[ 55.874834] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 55.880678] __dl_sub: cpus=5 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 55.880682] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 55.880688] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 55.880691] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 55.880693] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 55.880697] CPU3 attaching sched-domain(s):
[ 55.916691] domain-0: span=0,3-4 level=MC
[ 55.920791] groups: 3:{ span=3 }, 4:{ span=4 }, 0:{ span=0 }
[ 55.926634] __dl_sub: cpus=4 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 55.926638] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 55.926644] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 55.926647] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 55.926650] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 55.926653] CPU4 attaching sched-domain(s):
[ 55.963253] domain-0: span=0,3-4 level=MC
[ 55.967351] groups: 4:{ span=4 }, 0:{ span=0 }, 3:{ span=3 }
[ 55.973195] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 55.973199] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 55.973202] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 55.973205] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 55.973207] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 55.973212] root domain span: 0,3-4
[ 56.010345] default domain span: 1-2,5
[ 56.014102] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
[ 56.021128] CPU4 is up
[ 56.023663] Detected PIPT I-cache on CPU5
[ 56.027691] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
[ 56.034425] CPU0 attaching NULL sched-domain.
[ 56.038789] span=1-2,5
[ 56.041146] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=0,3-4 type=DYN
[ 56.041150] __dl_server_detach_root: cpu=0 rd_span=0,3-4 total_bw=104856
[ 56.041158] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,5
[ 56.041161] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DEF
[ 56.041164] __dl_server_attach_root: cpu=0 rd_span=0-2,5 total_bw=209712
[ 56.041167] CPU3 attaching NULL sched-domain.
[ 56.078970] span=0-2,5
[ 56.081325] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=3-4 type=DYN
[ 56.081328] __dl_server_detach_root: cpu=3 rd_span=3-4 total_bw=52428
[ 56.081334] rq_attach_root: cpu=3 old_span=NULL new_span=0-2,5
[ 56.081336] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-3,5 type=DEF
[ 56.081339] __dl_server_attach_root: cpu=3 rd_span=0-3,5 total_bw=262140
[ 56.081342] CPU4 attaching NULL sched-domain.
[ 56.118619] span=0-3,5
[ 56.120978] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=4 type=DYN
[ 56.120981] __dl_server_detach_root: cpu=4 rd_span=4 total_bw=0
[ 56.120989] rq_attach_root: cpu=4 old_span= new_span=0-3,5
[ 56.120991] __dl_add: cpus=6 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 56.120994] __dl_server_attach_root: cpu=4 rd_span=0-5 total_bw=314568
[ 56.121042] CPU0 attaching sched-domain(s):
[ 56.156417] domain-0: span=0,3-5 level=MC
[ 56.160516] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
[ 56.167578] __dl_sub: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 56.167582] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 56.167587] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 56.167590] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 56.167592] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 56.167596] CPU3 attaching sched-domain(s):
[ 56.203573] domain-0: span=0,3-5 level=MC
[ 56.207668] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }
[ 56.214730] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 56.214734] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 56.214740] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 56.214742] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 56.214745] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 56.214749] CPU4 attaching sched-domain(s):
[ 56.251331] domain-0: span=0,3-5 level=MC
[ 56.255426] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }, 3:{ span=3 }
[ 56.262488] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 56.262492] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 56.262497] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 56.262499] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 56.262502] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 56.262506] CPU5 attaching sched-domain(s):
[ 56.300303] domain-0: span=0,3-5 level=MC
[ 56.304398] groups: 5:{ span=5 }, 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 56.311458] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-2,5 type=DEF
[ 56.311461] __dl_server_detach_root: cpu=5 rd_span=1-2,5 total_bw=104856
[ 56.311464] rq_attach_root: cpu=5 old_span=NULL new_span=0,3-4
[ 56.311467] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 56.311469] __dl_server_attach_root: cpu=5 rd_span=0,3-5 total_bw=209712
[ 56.311474] root domain span: 0,3-5
[ 56.348430] default domain span: 1-2
[ 56.352018] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 56.359094] dl_clear_root_domain: span=0,3-5 type=DYN
[ 56.359097] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 56.359101] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 56.359104] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 56.359106] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 56.359110] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 56.399452] dl_clear_root_domain: span=1-2 type=DEF
[ 56.399455] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 56.399458] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 56.399473] CPU5 is up
[ 56.428022] dwc-eth-dwmac 2490000.ethernet eth0: configuring for phy/rgmii link mode
[ 56.435924] dwmac4: Master AXI performs any burst length
[ 56.441270] dwc-eth-dwmac 2490000.ethernet eth0: No Safety Features support found
[ 56.448780] dwc-eth-dwmac 2490000.ethernet eth0: IEEE 1588-2008 Advanced Timestamp supported
[ 56.457517] dwc-eth-dwmac 2490000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 56.471929] usb-conn-gpio 3520000.padctl:ports:usb2-0:connector: repeated role: device
[ 56.474558] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 56.511542] OOM killer enabled.
[ 56.514690] Restarting tasks ... done.
[ 56.519645] random: crng reseeded on system resumption
[ 56.525153] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 56.530729] PM: suspend exit
[ 56.582681] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 56.644287] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 56.713508] VDDIO_SDMMC3_AP: voltage operation not allowed
On 07/02/25 10:38, Jon Hunter wrote:
>
> On 06/02/2025 09:29, Juri Lelli wrote:
> > On 05/02/25 16:56, Jon Hunter wrote:
> >
> > ...
> >
> > > Thanks! That did make it easier :-)
> > >
> > > Here is what I see ...
> >
> > Thanks!
> >
> > Still different from what I can repro over here, so, unfortunately, I
> > had to add additional debug printks. Pushed to the same branch/repo.
> >
> > Could I ask for another run with it? Please also share the complete
> > dmesg from boot, as I would need to check debug output when CPUs are
> > first onlined.
>
>
> Yes no problem. Attached is the complete log.
Great, thanks!
...
> [ 0.000000] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 0.000000] rq_attach_root: cpu=1 old_span=NULL new_span=0
> [ 0.000000] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
> [ 0.000000] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
> [ 0.000000] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
> [ 0.000000] rq_attach_root: cpu=5 old_span=NULL new_span=0-4
...
> [ 0.000000] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 0.000000] rq_attach_root: cpu=1 old_span=NULL new_span=0
> [ 0.000000] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
> [ 0.000000] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
> [ 0.000000] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
> [ 0.000000] rq_attach_root: cpu=5 old_span=NULL new_span=0-4
...
> [ 0.040366] smp: Bringing up secondary CPUs ...
> [ 0.048932] CPU features: detected: Kernel page table isolation (KPTI)
> [ 0.048969] Detected PIPT I-cache on CPU1
> [ 0.048985] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU1: 0x0000009444c004
> [ 0.049006] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU1: 0x00000010305116
> [ 0.049037] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU1: 0x00000003001066
> [ 0.049074] CPU features: Unsupported CPU feature variation detected.
> [ 0.049264] CPU1: Booted secondary processor 0x0000000000 [0x4e0f0030]
> [ 0.049331] __dl_add: cpus=1 tsk_bw=52428 total_bw=104856 span=0-5 type=DEF
> [ 0.052684] Detected PIPT I-cache on CPU2
> [ 0.052705] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU2: 0x0000009444c004
> [ 0.052726] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU2: 0x00000010305116
> [ 0.052754] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU2: 0x00000003001066
> [ 0.052922] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
> [ 0.052982] __dl_add: cpus=2 tsk_bw=52428 total_bw=157284 span=0-5 type=DEF
> [ 0.060457] Detected PIPT I-cache on CPU3
> [ 0.060554] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
> [ 0.060579] __dl_add: cpus=3 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
> [ 0.068476] Detected PIPT I-cache on CPU4
> [ 0.068539] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
> [ 0.068560] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
> [ 0.069093] Detected PIPT I-cache on CPU5
> [ 0.069154] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
> [ 0.069177] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
> [ 0.069254] smp: Brought up 1 node, 6 CPUs
> [ 0.069289] SMP: Total of 6 processors activated.
> [ 0.069296] CPU: All CPU(s) started at EL2
> [ 0.069308] CPU features: detected: 32-bit EL0 Support
> [ 0.069315] CPU features: detected: 32-bit EL1 Support
> [ 0.069323] CPU features: detected: CRC32 instructions
> [ 0.069432] alternatives: applying system-wide alternatives
> [ 0.077906] CPU0 attaching sched-domain(s):
> [ 0.077926] domain-0: span=0,3-5 level=MC
> [ 0.077940] groups: 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
> [ 0.077982] __dl_sub: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
> [ 0.077988] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
> [ 0.077996] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 0.078000] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
> [ 0.078004] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
> [ 0.078009] CPU3 attaching sched-domain(s):
> [ 0.078036] domain-0: span=0,3-5 level=MC
> [ 0.078046] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }
> [ 0.078084] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
> [ 0.078088] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
> [ 0.078093] rq_attach_root: cpu=3 old_span=NULL new_span=0
> [ 0.078096] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
> [ 0.078100] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
> [ 0.078104] CPU4 attaching sched-domain(s):
> [ 0.078130] domain-0: span=0,3-5 level=MC
> [ 0.078140] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }
> [ 0.078177] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
> [ 0.078181] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
> [ 0.078186] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
> [ 0.078189] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
> [ 0.078193] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
> [ 0.078197] CPU5 attaching sched-domain(s):
> [ 0.078224] domain-0: span=0,3-5 level=MC
> [ 0.078234] groups: 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }
> [ 0.078271] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-2,5 type=DEF
> [ 0.078276] __dl_server_detach_root: cpu=5 rd_span=1-2,5 total_bw=104856
> [ 0.078280] rq_attach_root: cpu=5 old_span=NULL new_span=0,3-4
> [ 0.078283] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 0.078287] __dl_server_attach_root: cpu=5 rd_span=0,3-5 total_bw=209712
> [ 0.078291] root domain span: 0,3-5
> [ 0.078317] default domain span: 1-2
Up until here it looks alright: CPUs 1-2 are left on the DEF root domain
since they are isolated; the rest are on a single dynamic domain. The
dl_server bandwidth also sums up correctly.
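For readers following the arithmetic: the recurring 52428 is the per-CPU fair dl_server contribution, i.e. a 5% (50 ms / 1 s) reservation expressed in the kernel's 2^20 fixed-point scale. A minimal sketch of that math (the to_ratio() helper mirrors the one in kernel/sched/sched.h; treating 50 ms / 1 s as the fair-server default is an assumption):

```python
# 2^20 fixed-point bandwidth math, mirroring the kernel's to_ratio()
# (BW_SHIFT = 20). The 50 ms / 1 s dl_server default is an assumption.
BW_SHIFT = 20

def to_ratio(period_ns: int, runtime_ns: int) -> int:
    """Return runtime/period scaled to 2^BW_SHIFT, truncating like C."""
    return (runtime_ns << BW_SHIFT) // period_ns

fair_server_bw = to_ratio(1_000_000_000, 50_000_000)  # 5% per CPU
dyn_total = 4 * fair_server_bw  # CPUs 0,3-5 on the dynamic root domain
def_total = 2 * fair_server_bw  # isolated CPUs 1-2 on the DEF domain

print(fair_server_bw, dyn_total, def_total)  # 52428 209712 104856
```

These are exactly the total_bw values the dl_clear_root_domain/__dl_add lines above settle on for the two domains.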
...
> [ 4.694391] cpufreq: cpufreq_online: CPU0: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
Of course, I didn't have cpufreq in my virt env! :)
> [ 4.705324] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 4.705332] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 4.705338] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 4.705343] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 4.705347] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 4.705351] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 4.745754] dl_clear_root_domain: span=1-2 type=DEF
> [ 4.745760] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 4.745765] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [ 4.745823] __dl_add: cpus=4 tsk_bw=104857 total_bw=314569 span=0,3-5 type=DYN
> [ 4.745845] __dl_sub: cpus=4 tsk_bw=104857 total_bw=209712 span=0,3-5 type=DYN
The above already doesn't make much sense to me, and I still need to
understand what is going on. The rest is also not that easy to follow,
so...
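One aid when eyeballing these sums: in the 2^20 fixed-point scale each conversion truncates, so a single larger reservation is not always the sum of smaller ones. If the 104857 entries correspond to a 10% (100 ms / 1 s) reservation — a guess from the numbers, not something the log states — the odd-looking totals line up:

```python
# Truncating 2^20 fixed-point conversion, as in the kernel's to_ratio().
BW_SHIFT = 20

def to_ratio(period_ns: int, runtime_ns: int) -> int:
    return (runtime_ns << BW_SHIFT) // period_ns

five_pct = to_ratio(1_000_000_000, 50_000_000)    # 52428
ten_pct = to_ratio(1_000_000_000, 100_000_000)    # 104857

# Two 5% reservations differ from one 10% by a single unit of rounding:
print(2 * five_pct, ten_pct)                 # 104856 104857
# which is why 4*52428 + 104857 gives the 314569 seen on the DYN domain,
# one more than the 6*52428 = 314568 seen when only fair servers remain.
print(4 * five_pct + ten_pct, 6 * five_pct)  # 314569 314568
```

So a 314569 vs 314568 discrepancy is rounding, not lost bandwidth; the puzzling part is which path triggers the add/sub, not the values themselves.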
I thought maybe we could try switching to ftrace.
Pushed additional debug changes to the repo (please update).
The idea would be to boot with something like "ftrace=nop
trace_buf_size=50K" added to the kernel cmdline, then, right after boot,
collect the trace buffer with
# cat /sys/kernel/debug/tracing/trace > trace.out
and also collect dmesg as you did already. Going over the two side by
side should hopefully provide more information on what is actually
triggering the total_bw add/sub calls (I enabled stack traces). I don't
think it's necessary just yet to collect tracing info across suspend
events, as accounting already seems broken right after boot. :/
Again, I really appreciate the help with debugging this!
And Dietmar, thanks for starting to look into this as well! Of course,
feel free to suggest different approaches to debugging this. :)
Best,
Juri
On 07/02/2025 11:38, Jon Hunter wrote:
>
> On 06/02/2025 09:29, Juri Lelli wrote:
>> On 05/02/25 16:56, Jon Hunter wrote:
>>
>> ...
>>
>>> Thanks! That did make it easier :-)
>>>
>>> Here is what I see ...
>>
>> Thanks!
>>
>> Still different from what I can repro over here, so, unfortunately, I
>> had to add additional debug printks. Pushed to the same branch/repo.
>>
>> Could I ask for another run with it? Please also share the complete
>> dmesg from boot, as I would need to check debug output when CPUs are
>> first onlined.
So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
isol CPUs?
This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
[...]
On 07/02/2025 13:38, Dietmar Eggemann wrote:

[...]

> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
> isol CPUs?

I believe that 1-2 are the denvers (even though they are listed as 0-1
in device-tree).

> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]

Yes I think it is similar to this.

Thanks!
Jon

--
nvpublic
On 2/7/25 14:04, Jon Hunter wrote:
>
>
> On 07/02/2025 13:38, Dietmar Eggemann wrote:
>> On 07/02/2025 11:38, Jon Hunter wrote:
>>>
>>> On 06/02/2025 09:29, Juri Lelli wrote:
>>>> On 05/02/25 16:56, Jon Hunter wrote:
>>>>
>>>> ...
>>>>
>>>>> Thanks! That did make it easier :-)
>>>>>
>>>>> Here is what I see ...
>>>>
>>>> Thanks!
>>>>
>>>> Still different from what I can repro over here, so, unfortunately, I
>>>> had to add additional debug printks. Pushed to the same branch/repo.
>>>>
>>>> Could I ask for another run with it? Please also share the complete
>>>> dmesg from boot, as I would need to check debug output when CPUs are
>>>> first onlined.
>>
>> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
>> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
>> isol CPUs?
>
> I believe that 1-2 are the denvers (even thought they are listed as 0-1 in device-tree).
Interesting, I have yet to reproduce this with equal capacities in isolcpus.
Maybe I didn't try hard enough yet.
>
>> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
>
> Yes I think it is similar to this.
>
> Thanks!
> Jon
>
I could reproduce that on a different LLLLbb with isolcpus=3,4 (Lb) and
the offlining order:
echo 0 > /sys/devices/system/cpu/cpu5/online
echo 0 > /sys/devices/system/cpu/cpu1/online
echo 0 > /sys/devices/system/cpu/cpu3/online
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu4/online
while the following offlining order succeeds:
echo 0 > /sys/devices/system/cpu/cpu5/online
echo 0 > /sys/devices/system/cpu/cpu4/online
echo 0 > /sys/devices/system/cpu/cpu1/online
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu3/online
(Both orders offline an isolcpus CPU last, and both keep CPU0 online)
The issue only triggers with sugov DL threads (I guess that's obvious, but
just to mention it).
I'll investigate some more later but wanted to share for now.
A log just to ensure we're looking at the same thing (this is just 6.14-rc1
with Juri's printk):
Successful offlining:
# echo 0 > /sys/devices/system/cpu/cpu5/online
[ 37.063862] dl_bw_manage: cpu=5 cap=1143 fair_server_bw=52428 total_bw=314569 dl_bw_cpus=4
[ 37.070925] CPU0 attaching NULL sched-domain.
[ 37.071323] CPU1 attaching NULL sched-domain.
[ 37.071743] CPU2 attaching NULL sched-domain.
[ 37.072135] CPU5 attaching NULL sched-domain.
[ 37.072618] CPU0 attaching sched-domain(s):
[ 37.073008] domain-0: span=0-2 level=MC
[ 37.073370] groups: 0:{ span=0 cap=379 }, 1:{ span=1 cap=380 }, 2:{ span=2 cap=381 }
[ 37.074131] CPU1 attaching sched-domain(s):
[ 37.074503] domain-0: span=0-2 level=MC
[ 37.074871] groups: 1:{ span=1 cap=380 }, 2:{ span=2 cap=381 }, 0:{ span=0 cap=379 }
[ 37.075614] CPU2 attaching sched-domain(s):
[ 37.075998] domain-0: span=0-2 level=MC
[ 37.076354] groups: 2:{ span=2 cap=381 }, 0:{ span=0 cap=379 }, 1:{ span=1 cap=380 }
[ 37.077108] root domain span: 0-2
[ 37.077425] rd 0-2: Checking EAS, CPUs do not have asymmetric capacities
[ 37.078028] sched_energy_set: stopping EAS
[ 37.086645] psci: CPU5 killed (polled 1 ms)
# echo 0 > /sys/devices/system/cpu/cpu4/online
[ 40.357879] dl_bw_manage: cpu=4 cap=1024 fair_server_bw=52428 total_bw=209713 dl_bw_cpus=2
[ 40.367705] rd 0-2: Checking EAS, CPUs do not have asymmetric capacities
[ 40.379707] psci: CPU4 killed (polled 1 ms)
[ 40.380449] rd 0-2: Checking EAS, CPUs do not have asymmetric capacities
# echo 0 > /sys/devices/system/cpu/cpu1/online
[ 43.285829] dl_bw_manage: cpu=1 cap=762 fair_server_bw=52428 total_bw=262141 dl_bw_cpus=3
[ 43.295728] CPU0 attaching NULL sched-domain.
[ 43.296139] CPU1 attaching NULL sched-domain.
[ 43.296535] CPU2 attaching NULL sched-domain.
[ 43.297116] CPU0 attaching sched-domain(s):
[ 43.297496] domain-0: span=0,2 level=MC
[ 43.297893] groups: 0:{ span=0 cap=380 }, 2:{ span=2 cap=381 }
[ 43.298491] CPU2 attaching sched-domain(s):
[ 43.298891] domain-0: span=0,2 level=MC
[ 43.299257] groups: 2:{ span=2 cap=381 }, 0:{ span=0 cap=380 }
[ 43.299866] root domain span: 0,2
[ 43.300203] rd 0,2: Checking EAS, CPUs do not have asymmetric capacities
[ 43.315715] psci: CPU1 killed (polled 1 ms)
# echo 0 > /sys/devices/system/cpu/cpu2/online
[ 46.975824] dl_bw_manage: cpu=2 cap=381 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
[ 46.981728] CPU0 attaching NULL sched-domain.
[ 46.982138] CPU2 attaching NULL sched-domain.
[ 46.982649] CPU0 attaching NULL sched-domain.
[ 46.983078] root domain span: 0
[ 46.983399] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 46.993699] psci: CPU2 killed (polled 1 ms)
# echo 0 > /sys/devices/system/cpu/cpu3/online
[ 49.858842] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=52428 dl_bw_cpus=1
[ 49.865730] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 49.877688] psci: CPU3 killed (polled 2 ms)
#
Failed offlining:
# echo 0 > /sys/devices/system/cpu/cpu5/online
[ 29.596992] dl_bw_manage: cpu=5 cap=1143 fair_server_bw=52428 total_bw=314569 dl_bw_cpus=4
[ 29.607022] CPU0 attaching NULL sched-domain.
[ 29.607419] CPU1 attaching NULL sched-domain.
[ 29.607838] CPU2 attaching NULL sched-domain.
[ 29.608228] CPU5 attaching NULL sched-domain.
[ 29.608732] CPU0 attaching sched-domain(s):
[ 29.609124] domain-0: span=0-2 level=MC
[ 29.609485] groups: 0:{ span=0 cap=380 }, 1:{ span=1 cap=381 }, 2:{ span=2 cap=381 }
[ 29.610245] CPU1 attaching sched-domain(s):
[ 29.610617] domain-0: span=0-2 level=MC
[ 29.610986] groups: 1:{ span=1 cap=381 }, 2:{ span=2 cap=381 }, 0:{ span=0 cap=380 }
[ 29.611731] CPU2 attaching sched-domain(s):
[ 29.612122] domain-0: span=0-2 level=MC
[ 29.612478] groups: 2:{ span=2 cap=381 }, 0:{ span=0 cap=380 }, 1:{ span=1 cap=381 }
[ 29.613230] root domain span: 0-2
[ 29.613547] rd 0-2: Checking EAS, CPUs do not have asymmetric capacities
[ 29.614152] sched_energy_set: stopping EAS
[ 29.629987] psci: CPU5 killed (polled 0 ms)
# echo 0 > /sys/devices/system/cpu/cpu1/online
[ 32.945954] dl_bw_manage: cpu=1 cap=762 fair_server_bw=52428 total_bw=262141 dl_bw_cpus=3
[ 32.955858] CPU0 attaching NULL sched-domain.
[ 32.956269] CPU1 attaching NULL sched-domain.
[ 32.956662] CPU2 attaching NULL sched-domain.
[ 32.957244] CPU0 attaching sched-domain(s):
[ 32.957624] domain-0: span=0,2 level=MC
[ 32.958021] groups: 0:{ span=0 cap=380 }, 2:{ span=2 cap=381 }
[ 32.958617] CPU2 attaching sched-domain(s):
[ 32.959015] domain-0: span=0,2 level=MC
[ 32.959381] groups: 2:{ span=2 cap=381 }, 0:{ span=0 cap=380 }
[ 32.959993] root domain span: 0,2
[ 32.960330] rd 0,2: Checking EAS, CPUs do not have asymmetric capacities
[ 32.972841] psci: CPU1 killed (polled 1 ms)
# echo 0 > /sys/devices/system/cpu/cpu3/online
[ 35.921962] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=314570 dl_bw_cpus=2
[ 35.927945] rd 0,2: Checking EAS, CPUs do not have asymmetric capacities
[ 35.935828] psci: CPU3 killed (polled 1 ms)
# echo 0 > /sys/devices/system/cpu/cpu2/online
[ 38.999952] dl_bw_manage: cpu=2 cap=381 fair_server_bw=52428 total_bw=209713 dl_bw_cpus=2
[ 39.005003] CPU0 attaching NULL sched-domain.
[ 39.005412] CPU2 attaching NULL sched-domain.
[ 39.005969] CPU0 attaching NULL sched-domain.
[ 39.006370] root domain span: 0
[ 39.006687] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 39.016825] psci: CPU2 killed (polled 1 ms)
# echo 0 > /sys/devices/system/cpu/cpu4/online
sh: write error: Device or resource busy
# [ 42.936031] dl_bw_manage: cpu=4 cap=0 fair_server_bw=52428 total_bw=157285 dl_bw_cpus=1
cat /sys/devices/system/cpu/cpu*/cpu_capacity
381
381
381
381
1024
1024
Hi Christian,

Thanks for taking a look as well.

On 07/02/25 15:55, Christian Loehle wrote:

[...]

> The issue only triggers with sugov DL threads (I guess that's obvious, but
> just to mention it).

It wasn't obvious to me at first :). So thanks for confirming.

> I'll investigate some more later but wanted to share for now.

So, problem actually is that I am not yet sure what we should do with
sugovs' bandwidth wrt root domain accounting. W/o isolation it's all
good, as it gets accounted for correctly on the dynamic domains sugov
tasks can run on. But with isolation and sugov affected_cpus that cross
isolation domains (e.g., one BIG one little), we can get into troubles
not knowing if sugov contribution should fall on the DEF or DYN domain.

Hummm, need to think more about it.

Thanks,
Juri
On 10/02/2025 18:09, Juri Lelli wrote:
> Hi Christian,
>
> Thanks for taking a look as well.
>
> On 07/02/25 15:55, Christian Loehle wrote:
>> On 2/7/25 14:04, Jon Hunter wrote:
>>>
>>>
>>> On 07/02/2025 13:38, Dietmar Eggemann wrote:
>>>> On 07/02/2025 11:38, Jon Hunter wrote:
>>>>>
>>>>> On 06/02/2025 09:29, Juri Lelli wrote:
>>>>>> On 05/02/25 16:56, Jon Hunter wrote:
>>>>>>
>>>>>> ...
>>>>>>
>>>>>>> Thanks! That did make it easier :-)
>>>>>>>
>>>>>>> Here is what I see ...
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> Still different from what I can repro over here, so, unfortunately, I
>>>>>> had to add additional debug printks. Pushed to the same branch/repo.
>>>>>>
>>>>>> Could I ask for another run with it? Please also share the complete
>>>>>> dmesg from boot, as I would need to check debug output when CPUs are
>>>>>> first onlined.
>>>>
>>>> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
>>>> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
>>>> isol CPUs?
>>>
>>> I believe that 1-2 are the denvers (even thought they are listed as 0-1 in device-tree).
>>
>> Interesting, I have yet to reproduce this with equal capacities in isolcpus.
>> Maybe I didn't try hard enough yet.
>>
>>>
>>>> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
>>>
>>> Yes I think it is similar to this.
>>>
>>> Thanks!
>>> Jon
>>>
>>
>> I could reproduce that on a different LLLLbb with isolcpus=3,4 (Lb) and
>> the offlining order:
>> echo 0 > /sys/devices/system/cpu/cpu5/online
>> echo 0 > /sys/devices/system/cpu/cpu1/online
>> echo 0 > /sys/devices/system/cpu/cpu3/online
>> echo 0 > /sys/devices/system/cpu/cpu2/online
>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>
>> while the following offlining order succeeds:
>> echo 0 > /sys/devices/system/cpu/cpu5/online
>> echo 0 > /sys/devices/system/cpu/cpu4/online
>> echo 0 > /sys/devices/system/cpu/cpu1/online
>> echo 0 > /sys/devices/system/cpu/cpu2/online
>> echo 0 > /sys/devices/system/cpu/cpu3/online
>> (Both offline an isolcpus last, both have CPU0 online)
>>
Could reproduce on Juno-r0:
0 1 2 3 4 5
L b b L L L
      ^^^
isol = [3-4] so both L
echo 0 > /sys/devices/system/cpu/cpu1/online
echo 0 > /sys/devices/system/cpu/cpu4/online
echo 0 > /sys/devices/system/cpu/cpu5/online
echo 0 > /sys/devices/system/cpu/cpu2/online - isol
echo 0 > /sys/devices/system/cpu/cpu3/online - isol
>> The issue only triggers with sugov DL threads (I guess that's obvious, but
>> just to mention it).
IMHO, it doesn't have to be a sugov DL task. Any DL task will do.
// on a 2. shell:
# chrt -d -T 5000000 -D 10000000 -P 16666666 -p 0 $$
# ps -eTo comm,pid,class | grep DLN
bash 1243 DLN
5000000/16666666 = 0.3, 0.3 << 10 = 307 (task util, bandwidth requirement)
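The fixed-point arithmetic above can be reproduced with a short sketch
(an illustrative sketch, not kernel code; I'm assuming it mirrors the
kernel's to_ratio() helper, with BW_SHIFT=20 for DEADLINE bandwidth and
the <<10 capacity scale used for the "task util" figure of 307):

```python
BW_SHIFT = 20              # deadline bandwidth fixed point (kernel BW_SHIFT)
SCHED_CAPACITY_SHIFT = 10  # capacity/utilization fixed point (1024 == one CPU)

def to_ratio(period, runtime, shift):
    # integer (runtime << shift) / period, as the kernel's div64_u64 does
    return (runtime << shift) // period

# values from the chrt invocation above: runtime 5ms, period ~16.67ms
runtime_ns, period_ns = 5_000_000, 16_666_666

task_util = to_ratio(period_ns, runtime_ns, SCHED_CAPACITY_SHIFT)
task_bw = to_ratio(period_ns, runtime_ns, BW_SHIFT)

print(task_util)  # 307    (the "0.3 << 10" above)
print(task_bw)    # 314572 (same ratio in BW_SHIFT=20 units)
```

The same helper with runtime 50ms / period 1s gives the fair_server_bw=52428
(5%) value showing up in the dl_bw_manage logs.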
> It wasn't obvious to me at first :). So thanks for confirming.
>
>> I'll investigate some more later but wanted to share for now.
>
> So, problem actually is that I am not yet sure what we should do with
> sugovs' bandwidth wrt root domain accounting. W/o isolation it's all
> good, as it gets accounted for correctly on the dynamic domains sugov
> tasks can run on. But with isolation and sugov affected_cpus that cross
> isolation domains (e.g., one BIG one little), we can get into troubles
> not knowing if sugov contribution should fall on the DEF or DYN domain.
# echo 0 > /sys/devices/system/cpu/cpu1/online
[ 87.402722] __dl_bw_capacity() mask=0-2,5 cap=2940
[ 87.407551] dl_bw_cpus() cpu=1 rd->span=0-2,5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4
[ 87.416019] dl_bw_manage: cpu=1 cap=1916 fair_server_bw=52428 total_bw=524284 dl_bw_cpus=4 type=DYN span=0-2,5
# echo 0 > /sys/devices/system/cpu/cpu2/online
[ 95.562270] __dl_bw_capacity() mask=0,2,5 cap=1916
[ 95.567091] dl_bw_cpus() cpu=2 rd->span=0,2,5 cpu_active_mask=0,2-5 cpumask_weight(rd->span)=3
[ 95.575735] dl_bw_manage: cpu=2 cap=892 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3 type=DYN span=0,2,5
# echo 0 > /sys/devices/system/cpu/cpu5/online
[ 100.573131] __dl_bw_capacity() mask=0,5 cap=892
[ 100.577713] dl_bw_cpus() cpu=5 rd->span=0,5 cpu_active_mask=0,3-5 cpumask_weight(rd->span)=2
[ 100.586186] dl_bw_manage: cpu=5 cap=446 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DYN span=0,5
# echo 0 > /sys/devices/system/cpu/cpu3/online
[ 110.232755] __dl_bw_capacity() mask=1-5 cap=892
[ 110.237333] dl_bw_cpus() cpu=6 rd->span=1-5 cpu_active_mask=0,3-4 cpus=2
[ 110.244064] dl_bw_manage: cpu=3 cap=446 fair_server_bw=52428 total_bw=419428 dl_bw_cpus=2 type=DEF span=1-5
# echo 0 > /sys/devices/system/cpu/cpu4/online
[ 175.870273] __dl_bw_capacity() mask=1-5 cap=446
[ 175.874850] dl_bw_cpus() cpu=6 rd->span=1-5 cpu_active_mask=0,4 cpus=1
[ 175.881407] dl_bw_manage: cpu=4 cap=0 fair_server_bw=52428 total_bw=367000 dl_bw_cpus=1 type=DEF span=1-5
^^^^^ ^^^^^^^^
w/o cpu4 cap is 0! cpu0 is not part of it
...
[ 175.897600] dl_bw_manage() cpu=4 cap=0 overflow=1 return=-16
^^^^^^^^^^ -EBUSY
-bash: echo: write error: Device or resource busy
sched_cpu_deactivate()
  dl_bw_deactivate(cpu)
    dl_bw_manage(dl_bw_req_deactivate, cpu, 0);
      return overflow ? -EBUSY : 0;
Looks like in DEF there is no CPU capacity left but we still have 1 DLN
task with a bandwidth requirement of 307.
On 11/02/25 09:36, Dietmar Eggemann wrote:

[...]

> IMHO, it doesn't have to be a sugov DL task. Any DL task will do.

OK, but in this case we actually want to fail. If we have allocated
bandwidth for an actual DL task (not a dl server or a 'fake' sugov), we
don't want to inadvertently leave it w/o bandwidth by turning CPUs off.
On 11/02/2025 10:21, Juri Lelli wrote:

[...]

> OK, but in this case we actually want to fail. If we have allocated
> bandwidth for an actual DL task (not a dl server or a 'fake' sugov), we
> don't want to inadvertently leave it w/o bandwidth by turning CPUs off.

Obviously ... ;-)

Same platform w/ isol = [2-3] with slow switching CPUfreq driver to
force having 'sugov' tasks.

# ps2 | grep DLN
 95  95 S 140 0 - DLN sugov:0
 96  96 S 140 0 - DLN sugov:1

# taskset -p 95; taskset -p 96
pid 95's current affinity mask: 39
pid 96's current affinity mask: 6

offline order: CPU1 -> 4 -> 5 -> 3 -> 2

...

pid 95's current affinity mask: 1
pid 96's current affinity mask: 4

root@juno:~# echo 0 > /sys/devices/system/cpu/cpu2/online
[  227.673757] dl_bw_cpus() cpu=6 rd->span=1-5 cpu_active_mask=0,2 cpus=1
[  227.680329] dl_bw_cpus() cpu=6 rd->span=1-5 cpu_active_mask=0,2 cpus=1
[  227.686882] dl_bw_manage: cpu=2 cap=0 fair_server_bw=52428 total_bw=157285 dl_bw_cpus=1 type=DEF span=1-5
[  227.686900] dl_bw_cpus() cpu=6 rd->span=1-5 cpu_active_mask=0,2 cpus=1
[  227.703066] dl_bw_manage() cpu=2 cap=0 overflow=1 return=-16
-bash: echo: write error: Device or resource busy

So it seems 'sugov:1' getting in the way here.

pid 95's current affinity mask: 1
pid 96's current affinity mask: 5

Looks like it's not a 'bL' issue but rather one with '>=2 CPU frequency
policies' and slow-switching CPUfreq drivers.
On 2/10/25 17:09, Juri Lelli wrote:
> Hi Christian,
>
> Thanks for taking a look as well.
>
> On 07/02/25 15:55, Christian Loehle wrote:
>> On 2/7/25 14:04, Jon Hunter wrote:
>>>
>>>
>>> On 07/02/2025 13:38, Dietmar Eggemann wrote:
>>>> On 07/02/2025 11:38, Jon Hunter wrote:
>>>>>
>>>>> On 06/02/2025 09:29, Juri Lelli wrote:
>>>>>> On 05/02/25 16:56, Jon Hunter wrote:
>>>>>>
>>>>>> ...
>>>>>>
>>>>>>> Thanks! That did make it easier :-)
>>>>>>>
>>>>>>> Here is what I see ...
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> Still different from what I can repro over here, so, unfortunately, I
>>>>>> had to add additional debug printks. Pushed to the same branch/repo.
>>>>>>
>>>>>> Could I ask for another run with it? Please also share the complete
>>>>>> dmesg from boot, as I would need to check debug output when CPUs are
>>>>>> first onlined.
>>>>
>>>> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
>>>> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
>>>> isol CPUs?
>>>
>>> I believe that 1-2 are the denvers (even thought they are listed as 0-1 in device-tree).
>>
>> Interesting, I have yet to reproduce this with equal capacities in isolcpus.
>> Maybe I didn't try hard enough yet.
>>
>>>
>>>> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
>>>
>>> Yes I think it is similar to this.
>>>
>>> Thanks!
>>> Jon
>>>
>>
>> I could reproduce that on a different LLLLbb with isolcpus=3,4 (Lb) and
>> the offlining order:
>> echo 0 > /sys/devices/system/cpu/cpu5/online
>> echo 0 > /sys/devices/system/cpu/cpu1/online
>> echo 0 > /sys/devices/system/cpu/cpu3/online
>> echo 0 > /sys/devices/system/cpu/cpu2/online
>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>
>> while the following offlining order succeeds:
>> echo 0 > /sys/devices/system/cpu/cpu5/online
>> echo 0 > /sys/devices/system/cpu/cpu4/online
>> echo 0 > /sys/devices/system/cpu/cpu1/online
>> echo 0 > /sys/devices/system/cpu/cpu2/online
>> echo 0 > /sys/devices/system/cpu/cpu3/online
>> (Both offline an isolcpus last, both have CPU0 online)
>>
>> The issue only triggers with sugov DL threads (I guess that's obvious, but
>> just to mention it).
>
> It wasn't obvious to me at first :). So thanks for confirming.
>
>> I'll investigate some more later but wanted to share for now.
>
> So, problem actually is that I am not yet sure what we should do with
> sugovs' bandwidth wrt root domain accounting. W/o isolation it's all
> good, as it gets accounted for correctly on the dynamic domains sugov
> tasks can run on. But with isolation and sugov affected_cpus that cross
> isolation domains (e.g., one BIG one little), we can get into troubles
> not knowing if sugov contribution should fall on the DEF or DYN domain.
>
> Hummm, need to think more about it.
That is indeed tricky.
I would've found it super appealing to always just have sugov DL tasks
activate on this_cpu and not have to worry about all this. But then you
have contention amongst the CPUs of a cluster, and there are energy
improvements from always having little cores handle all sugov DL tasks,
even for the big CPUs. That's why I introduced
commit 93940fbdc468 ("cpufreq/schedutil: Only bind threads if needed"),
but that really doesn't make this any easier.
On 11/02/25 10:15, Christian Loehle wrote:
> On 2/10/25 17:09, Juri Lelli wrote:
> > Hi Christian,
> >
> > Thanks for taking a look as well.
> >
> > On 07/02/25 15:55, Christian Loehle wrote:
> >> On 2/7/25 14:04, Jon Hunter wrote:
> >>>
> >>>
> >>> On 07/02/2025 13:38, Dietmar Eggemann wrote:
> >>>> On 07/02/2025 11:38, Jon Hunter wrote:
> >>>>>
> >>>>> On 06/02/2025 09:29, Juri Lelli wrote:
> >>>>>> On 05/02/25 16:56, Jon Hunter wrote:
> >>>>>>
> >>>>>> ...
> >>>>>>
> >>>>>>> Thanks! That did make it easier :-)
> >>>>>>>
> >>>>>>> Here is what I see ...
> >>>>>>
> >>>>>> Thanks!
> >>>>>>
> >>>>>> Still different from what I can repro over here, so, unfortunately, I
> >>>>>> had to add additional debug printks. Pushed to the same branch/repo.
> >>>>>>
> >>>>>> Could I ask for another run with it? Please also share the complete
> >>>>>> dmesg from boot, as I would need to check debug output when CPUs are
> >>>>>> first onlined.
> >>>>
> >>>> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
> >>>> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
> >>>> isol CPUs?
> >>>
> >>> I believe that 1-2 are the denvers (even thought they are listed as 0-1 in device-tree).
> >>
> >> Interesting, I have yet to reproduce this with equal capacities in isolcpus.
> >> Maybe I didn't try hard enough yet.
> >>
> >>>
> >>>> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
> >>>
> >>> Yes I think it is similar to this.
> >>>
> >>> Thanks!
> >>> Jon
> >>>
> >>
> >> I could reproduce that on a different LLLLbb with isolcpus=3,4 (Lb) and
> >> the offlining order:
> >> echo 0 > /sys/devices/system/cpu/cpu5/online
> >> echo 0 > /sys/devices/system/cpu/cpu1/online
> >> echo 0 > /sys/devices/system/cpu/cpu3/online
> >> echo 0 > /sys/devices/system/cpu/cpu2/online
> >> echo 0 > /sys/devices/system/cpu/cpu4/online
> >>
> >> while the following offlining order succeeds:
> >> echo 0 > /sys/devices/system/cpu/cpu5/online
> >> echo 0 > /sys/devices/system/cpu/cpu4/online
> >> echo 0 > /sys/devices/system/cpu/cpu1/online
> >> echo 0 > /sys/devices/system/cpu/cpu2/online
> >> echo 0 > /sys/devices/system/cpu/cpu3/online
> >> (Both offline an isolcpus last, both have CPU0 online)
> >>
> >> The issue only triggers with sugov DL threads (I guess that's obvious, but
> >> just to mention it).
> >
> > It wasn't obvious to me at first :). So thanks for confirming.
> >
> >> I'll investigate some more later but wanted to share for now.
> >
> > So, the problem actually is that I am not yet sure what we should do with
> > sugovs' bandwidth wrt root domain accounting. W/o isolation it's all
> > good, as it gets accounted for correctly on the dynamic domains sugov
> > tasks can run on. But with isolation, and with sugov affected_cpus that cross
> > isolation domains (e.g., one BIG one little), we can get into trouble
> > not knowing if the sugov contribution should fall on the DEF or DYN domain.
> >
> > Hummm, need to think more about it.
>
> That is indeed tricky.
> I would've found it super appealing to always just have sugov DL tasks activate
> on this_cpu and not have to worry about all this, but then you have contention
> amongst CPUs of a cluster and there are energy improvements from always
> > having little cores handle all sugov DL tasks, even for the big CPUs;
> that's why I introduced
> commit 93940fbdc468 ("cpufreq/schedutil: Only bind threads if needed")
> but that really doesn't make this any easier.
What if we just ignore them consistently? We already do that for
admission control, so maybe we can do that when rebuilding domains as
well (until we maybe find a better way to deal with them).
Does the following make any difference?
---
kernel/sched/deadline.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b254d878789d..8f7420e0c9d6 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2995,7 +2995,7 @@ void dl_add_task_root_domain(struct task_struct *p)
struct dl_bw *dl_b;
raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
- if (!dl_task(p)) {
+ if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
return;
}
On 11/02/2025 10:42, Juri Lelli wrote:
> On 11/02/25 10:15, Christian Loehle wrote:
>> On 2/10/25 17:09, Juri Lelli wrote:
>>> Hi Christian,
>>>
>>> Thanks for taking a look as well.
>>>
>>> On 07/02/25 15:55, Christian Loehle wrote:
>>>> On 2/7/25 14:04, Jon Hunter wrote:
>>>>>
>>>>>
>>>>> On 07/02/2025 13:38, Dietmar Eggemann wrote:
>>>>>> On 07/02/2025 11:38, Jon Hunter wrote:
>>>>>>>
>>>>>>> On 06/02/2025 09:29, Juri Lelli wrote:
>>>>>>>> On 05/02/25 16:56, Jon Hunter wrote:
>>>>>>>>
>>>>>>>> ...
>>>>>>>>
>>>>>>>>> Thanks! That did make it easier :-)
>>>>>>>>>
>>>>>>>>> Here is what I see ...
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>>
>>>>>>>> Still different from what I can repro over here, so, unfortunately, I
>>>>>>>> had to add additional debug printks. Pushed to the same branch/repo.
>>>>>>>>
>>>>>>>> Could I ask for another run with it? Please also share the complete
>>>>>>>> dmesg from boot, as I would need to check debug output when CPUs are
>>>>>>>> first onlined.
>>>>>>
>>>>>> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
>>>>>> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
>>>>>> isol CPUs?
>>>>>
>>>>> I believe that 1-2 are the denvers (even though they are listed as 0-1 in device-tree).
>>>>
>>>> Interesting, I have yet to reproduce this with equal capacities in isolcpus.
>>>> Maybe I didn't try hard enough yet.
>>>>
>>>>>
>>>>>> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
>>>>>
>>>>> Yes I think it is similar to this.
>>>>>
>>>>> Thanks!
>>>>> Jon
>>>>>
>>>>
>>>> I could reproduce that on a different LLLLbb with isolcpus=3,4 (Lb) and
>>>> the offlining order:
>>>> echo 0 > /sys/devices/system/cpu/cpu5/online
>>>> echo 0 > /sys/devices/system/cpu/cpu1/online
>>>> echo 0 > /sys/devices/system/cpu/cpu3/online
>>>> echo 0 > /sys/devices/system/cpu/cpu2/online
>>>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>>>
>>>> while the following offlining order succeeds:
>>>> echo 0 > /sys/devices/system/cpu/cpu5/online
>>>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>>> echo 0 > /sys/devices/system/cpu/cpu1/online
>>>> echo 0 > /sys/devices/system/cpu/cpu2/online
>>>> echo 0 > /sys/devices/system/cpu/cpu3/online
>>>> (Both offline an isolcpus last, both have CPU0 online)
>>>>
>>>> The issue only triggers with sugov DL threads (I guess that's obvious, but
>>>> just to mention it).
>>>
>>> It wasn't obvious to me at first :). So thanks for confirming.
>>>
>>>> I'll investigate some more later but wanted to share for now.
>>>
>>> So, the problem actually is that I am not yet sure what we should do with
>>> sugovs' bandwidth wrt root domain accounting. W/o isolation it's all
>>> good, as it gets accounted for correctly on the dynamic domains sugov
>>> tasks can run on. But with isolation, and with sugov affected_cpus that cross
>>> isolation domains (e.g., one BIG one little), we can get into trouble
>>> not knowing if the sugov contribution should fall on the DEF or DYN domain.
>>>
>>> Hummm, need to think more about it.
>>
>> That is indeed tricky.
>> I would've found it super appealing to always just have sugov DL tasks activate
>> on this_cpu and not have to worry about all this, but then you have contention
>> amongst CPUs of a cluster and there are energy improvements from always
>> having little cores handle all sugov DL tasks, even for the big CPUs;
>> that's why I introduced
>> commit 93940fbdc468 ("cpufreq/schedutil: Only bind threads if needed")
>> but that really doesn't make this any easier.
>
> What if we just ignore them consistently? We already do that for
> admission control, so maybe we can do that when rebuilding domains as
> well (until we maybe find a better way to deal with them).
>
> Does the following make any difference?
>
> ---
> kernel/sched/deadline.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index b254d878789d..8f7420e0c9d6 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2995,7 +2995,7 @@ void dl_add_task_root_domain(struct task_struct *p)
> struct dl_bw *dl_b;
>
> raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
> - if (!dl_task(p)) {
> + if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
> raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
> return;
> }
>
I have tested this on top of v6.14-rc2, but this is still not resolving
the issue for me :-(
Jon
--
nvpublic
On 12/02/25 23:01, Jon Hunter wrote:
>
> On 11/02/2025 10:42, Juri Lelli wrote:
> > On 11/02/25 10:15, Christian Loehle wrote:
> > > On 2/10/25 17:09, Juri Lelli wrote:
> > > > Hi Christian,
> > > >
> > > > Thanks for taking a look as well.
> > > >
> > > > On 07/02/25 15:55, Christian Loehle wrote:
> > > > > On 2/7/25 14:04, Jon Hunter wrote:
> > > > > >
> > > > > >
> > > > > > On 07/02/2025 13:38, Dietmar Eggemann wrote:
> > > > > > > On 07/02/2025 11:38, Jon Hunter wrote:
> > > > > > > >
> > > > > > > > On 06/02/2025 09:29, Juri Lelli wrote:
> > > > > > > > > On 05/02/25 16:56, Jon Hunter wrote:
> > > > > > > > >
> > > > > > > > > ...
> > > > > > > > >
> > > > > > > > > > Thanks! That did make it easier :-)
> > > > > > > > > >
> > > > > > > > > > Here is what I see ...
> > > > > > > > >
> > > > > > > > > Thanks!
> > > > > > > > >
> > > > > > > > > Still different from what I can repro over here, so, unfortunately, I
> > > > > > > > > had to add additional debug printks. Pushed to the same branch/repo.
> > > > > > > > >
> > > > > > > > > Could I ask for another run with it? Please also share the complete
> > > > > > > > > dmesg from boot, as I would need to check debug output when CPUs are
> > > > > > > > > first onlined.
> > > > > > >
> > > > > > > So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
> > > > > > > A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
> > > > > > > isol CPUs?
> > > > > >
> > > > > > I believe that 1-2 are the denvers (even though they are listed as 0-1 in device-tree).
> > > > >
> > > > > Interesting, I have yet to reproduce this with equal capacities in isolcpus.
> > > > > Maybe I didn't try hard enough yet.
> > > > >
> > > > > >
> > > > > > > This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
> > > > > >
> > > > > > Yes I think it is similar to this.
> > > > > >
> > > > > > Thanks!
> > > > > > Jon
> > > > > >
> > > > >
> > > > > I could reproduce that on a different LLLLbb with isolcpus=3,4 (Lb) and
> > > > > the offlining order:
> > > > > echo 0 > /sys/devices/system/cpu/cpu5/online
> > > > > echo 0 > /sys/devices/system/cpu/cpu1/online
> > > > > echo 0 > /sys/devices/system/cpu/cpu3/online
> > > > > echo 0 > /sys/devices/system/cpu/cpu2/online
> > > > > echo 0 > /sys/devices/system/cpu/cpu4/online
> > > > >
> > > > > while the following offlining order succeeds:
> > > > > echo 0 > /sys/devices/system/cpu/cpu5/online
> > > > > echo 0 > /sys/devices/system/cpu/cpu4/online
> > > > > echo 0 > /sys/devices/system/cpu/cpu1/online
> > > > > echo 0 > /sys/devices/system/cpu/cpu2/online
> > > > > echo 0 > /sys/devices/system/cpu/cpu3/online
> > > > > (Both offline an isolcpus last, both have CPU0 online)
> > > > >
> > > > > The issue only triggers with sugov DL threads (I guess that's obvious, but
> > > > > just to mention it).
> > > >
> > > > It wasn't obvious to me at first :). So thanks for confirming.
> > > >
> > > > > I'll investigate some more later but wanted to share for now.
> > > >
> > > > So, the problem actually is that I am not yet sure what we should do with
> > > > sugovs' bandwidth wrt root domain accounting. W/o isolation it's all
> > > > good, as it gets accounted for correctly on the dynamic domains sugov
> > > > tasks can run on. But with isolation, and with sugov affected_cpus that cross
> > > > isolation domains (e.g., one BIG one little), we can get into trouble
> > > > not knowing if the sugov contribution should fall on the DEF or DYN domain.
> > > >
> > > > Hummm, need to think more about it.
> > >
> > > That is indeed tricky.
> > > I would've found it super appealing to always just have sugov DL tasks activate
> > > on this_cpu and not have to worry about all this, but then you have contention
> > > amongst CPUs of a cluster and there are energy improvements from always
> > > having little cores handle all sugov DL tasks, even for the big CPUs;
> > > that's why I introduced
> > > commit 93940fbdc468 ("cpufreq/schedutil: Only bind threads if needed")
> > > but that really doesn't make this any easier.
> >
> > What if we just ignore them consistently? We already do that for
> > admission control, so maybe we can do that when rebuilding domains as
> > well (until we maybe find a better way to deal with them).
> >
> > Does the following make any difference?
> >
> > ---
> > kernel/sched/deadline.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > index b254d878789d..8f7420e0c9d6 100644
> > --- a/kernel/sched/deadline.c
> > +++ b/kernel/sched/deadline.c
> > @@ -2995,7 +2995,7 @@ void dl_add_task_root_domain(struct task_struct *p)
> > struct dl_bw *dl_b;
> > raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
> > - if (!dl_task(p)) {
> > + if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
> > raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
> > return;
> > }
> >
>
> I have tested this on top of v6.14-rc2, but this is still not resolving the
> issue for me :-(
Thanks for testing.
Was the testing done with the full stack of changes I proposed so far? I
believe we still have to fix the accounting of dl_servers for the DEF
root domain (there is a patch that should do that).
I updated the branch with the full set. In case it still fails, could
you please collect dmesg and tracing output as I suggested and share?
Best,
Juri
On 13/02/2025 06:16, Juri Lelli wrote:
> On 12/02/25 23:01, Jon Hunter wrote:
>>
>> On 11/02/2025 10:42, Juri Lelli wrote:
>>> On 11/02/25 10:15, Christian Loehle wrote:
>>>> On 2/10/25 17:09, Juri Lelli wrote:
>>>>> Hi Christian,
>>>>>
>>>>> Thanks for taking a look as well.
>>>>>
>>>>> On 07/02/25 15:55, Christian Loehle wrote:
>>>>>> On 2/7/25 14:04, Jon Hunter wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 07/02/2025 13:38, Dietmar Eggemann wrote:
>>>>>>>> On 07/02/2025 11:38, Jon Hunter wrote:
>>>>>>>>>
>>>>>>>>> On 06/02/2025 09:29, Juri Lelli wrote:
>>>>>>>>>> On 05/02/25 16:56, Jon Hunter wrote:
>>>>>>>>>>
>>>>>>>>>> ...
>>>>>>>>>>
>>>>>>>>>>> Thanks! That did make it easier :-)
>>>>>>>>>>>
>>>>>>>>>>> Here is what I see ...
>>>>>>>>>>
>>>>>>>>>> Thanks!
>>>>>>>>>>
>>>>>>>>>> Still different from what I can repro over here, so, unfortunately, I
>>>>>>>>>> had to add additional debug printks. Pushed to the same branch/repo.
>>>>>>>>>>
>>>>>>>>>> Could I ask for another run with it? Please also share the complete
>>>>>>>>>> dmesg from boot, as I would need to check debug output when CPUs are
>>>>>>>>>> first onlined.
>>>>>>>>
>>>>>>>> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
>>>>>>>> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
>>>>>>>> isol CPUs?
>>>>>>>
>>>>>>> I believe that 1-2 are the denvers (even though they are listed as 0-1 in device-tree).
>>>>>>
>>>>>> Interesting, I have yet to reproduce this with equal capacities in isolcpus.
>>>>>> Maybe I didn't try hard enough yet.
>>>>>>
>>>>>>>
>>>>>>>> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
>>>>>>>
>>>>>>> Yes I think it is similar to this.
>>>>>>>
>>>>>>> Thanks!
>>>>>>> Jon
>>>>>>>
>>>>>>
>>>>>> I could reproduce that on a different LLLLbb with isolcpus=3,4 (Lb) and
>>>>>> the offlining order:
>>>>>> echo 0 > /sys/devices/system/cpu/cpu5/online
>>>>>> echo 0 > /sys/devices/system/cpu/cpu1/online
>>>>>> echo 0 > /sys/devices/system/cpu/cpu3/online
>>>>>> echo 0 > /sys/devices/system/cpu/cpu2/online
>>>>>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>>>>>
>>>>>> while the following offlining order succeeds:
>>>>>> echo 0 > /sys/devices/system/cpu/cpu5/online
>>>>>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>>>>> echo 0 > /sys/devices/system/cpu/cpu1/online
>>>>>> echo 0 > /sys/devices/system/cpu/cpu2/online
>>>>>> echo 0 > /sys/devices/system/cpu/cpu3/online
>>>>>> (Both offline an isolcpus last, both have CPU0 online)
>>>>>>
>>>>>> The issue only triggers with sugov DL threads (I guess that's obvious, but
>>>>>> just to mention it).
>>>>>
>>>>> It wasn't obvious to me at first :). So thanks for confirming.
>>>>>
>>>>>> I'll investigate some more later but wanted to share for now.
>>>>>
>>>>> So, the problem actually is that I am not yet sure what we should do with
>>>>> sugovs' bandwidth wrt root domain accounting. W/o isolation it's all
>>>>> good, as it gets accounted for correctly on the dynamic domains sugov
>>>>> tasks can run on. But with isolation, and with sugov affected_cpus that cross
>>>>> isolation domains (e.g., one BIG one little), we can get into trouble
>>>>> not knowing if the sugov contribution should fall on the DEF or DYN domain.
>>>>>
>>>>> Hummm, need to think more about it.
>>>>
>>>> That is indeed tricky.
>>>> I would've found it super appealing to always just have sugov DL tasks activate
>>>> on this_cpu and not have to worry about all this, but then you have contention
>>>> amongst CPUs of a cluster and there are energy improvements from always
>>>> having little cores handle all sugov DL tasks, even for the big CPUs;
>>>> that's why I introduced
>>>> commit 93940fbdc468 ("cpufreq/schedutil: Only bind threads if needed")
>>>> but that really doesn't make this any easier.
>>>
>>> What if we just ignore them consistently? We already do that for
>>> admission control, so maybe we can do that when rebuilding domains as
>>> well (until we maybe find a better way to deal with them).
>>>
>>> Does the following make any difference?
>>>
>>> ---
>>> kernel/sched/deadline.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>> index b254d878789d..8f7420e0c9d6 100644
>>> --- a/kernel/sched/deadline.c
>>> +++ b/kernel/sched/deadline.c
>>> @@ -2995,7 +2995,7 @@ void dl_add_task_root_domain(struct task_struct *p)
>>> struct dl_bw *dl_b;
>>> raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
>>> - if (!dl_task(p)) {
>>> + if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
>>> raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
>>> return;
>>> }
>>>
>>
>> I have tested this on top of v6.14-rc2, but this is still not resolving the
>> issue for me :-(
>
> Thanks for testing.
>
> Was the testing done with the full stack of changes I proposed so far? I
> believe we still have to fix the accounting of dl_servers for the DEF
> root domain (there is a patch that should do that).
>
> I updated the branch with the full set. In case it still fails, could
> you please collect dmesg and tracing output as I suggested and share?
Ah no it was not! OK, let me test the latest branch now.
Thanks
Jon
--
nvpublic
On 13/02/2025 09:53, Jon Hunter wrote:
>
> On 13/02/2025 06:16, Juri Lelli wrote:
>> On 12/02/25 23:01, Jon Hunter wrote:
>>>
>>> On 11/02/2025 10:42, Juri Lelli wrote:
>>>> On 11/02/25 10:15, Christian Loehle wrote:
>>>>> On 2/10/25 17:09, Juri Lelli wrote:
>>>>>> Hi Christian,
>>>>>>
>>>>>> Thanks for taking a look as well.
>>>>>>
>>>>>> On 07/02/25 15:55, Christian Loehle wrote:
>>>>>>> On 2/7/25 14:04, Jon Hunter wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 07/02/2025 13:38, Dietmar Eggemann wrote:
>>>>>>>>> On 07/02/2025 11:38, Jon Hunter wrote:
>>>>>>>>>>
>>>>>>>>>> On 06/02/2025 09:29, Juri Lelli wrote:
>>>>>>>>>>> On 05/02/25 16:56, Jon Hunter wrote:
>>>>>>>>>>>
>>>>>>>>>>> ...
>>>>>>>>>>>
>>>>>>>>>>>> Thanks! That did make it easier :-)
>>>>>>>>>>>>
>>>>>>>>>>>> Here is what I see ...
>>>>>>>>>>>
>>>>>>>>>>> Thanks!
>>>>>>>>>>>
>>>>>>>>>>> Still different from what I can repro over here, so,
>>>>>>>>>>> unfortunately, I
>>>>>>>>>>> had to add additional debug printks. Pushed to the same
>>>>>>>>>>> branch/repo.
>>>>>>>>>>>
>>>>>>>>>>> Could I ask for another run with it? Please also share the
>>>>>>>>>>> complete
>>>>>>>>>>> dmesg from boot, as I would need to check debug output when
>>>>>>>>>>> CPUs are
>>>>>>>>>>> first onlined.
>>>>>>>>>
>>>>>>>>> So you have a system with 2 big and 4 LITTLE CPUs (Denver0
>>>>>>>>> Denver1 A57_0
>>>>>>>>> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and
>>>>>>>>> A57_0) are
>>>>>>>>> isol CPUs?
>>>>>>>>
>>>>>>>> I believe that 1-2 are the denvers (even though they are listed
>>>>>>>> as 0-1 in device-tree).
>>>>>>>
>>>>>>> Interesting, I have yet to reproduce this with equal capacities
>>>>>>> in isolcpus.
>>>>>>> Maybe I didn't try hard enough yet.
>>>>>>>
>>>>>>>>
>>>>>>>>> This should be easy to set up for me on my Juno-r0 [A53 A57 A57
>>>>>>>>> A53 A53 A53]
>>>>>>>>
>>>>>>>> Yes I think it is similar to this.
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>> I could reproduce that on a different LLLLbb with isolcpus=3,4
>>>>>>> (Lb) and
>>>>>>> the offlining order:
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu5/online
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu1/online
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu3/online
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu2/online
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>>>>>>
>>>>>>> while the following offlining order succeeds:
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu5/online
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu1/online
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu2/online
>>>>>>> echo 0 > /sys/devices/system/cpu/cpu3/online
>>>>>>> (Both offline an isolcpus last, both have CPU0 online)
>>>>>>>
>>>>>>> The issue only triggers with sugov DL threads (I guess that's
>>>>>>> obvious, but
>>>>>>> just to mention it).
>>>>>>
>>>>>> It wasn't obvious to me at first :). So thanks for confirming.
>>>>>>
>>>>>>> I'll investigate some more later but wanted to share for now.
>>>>>>
>>>>>> So, the problem actually is that I am not yet sure what we should do
>>>>>> with sugovs' bandwidth wrt root domain accounting. W/o isolation it's
>>>>>> all good, as it gets accounted for correctly on the dynamic domains
>>>>>> sugov tasks can run on. But with isolation, and with sugov
>>>>>> affected_cpus that cross isolation domains (e.g., one BIG one little),
>>>>>> we can get into trouble not knowing if the sugov contribution should
>>>>>> fall on the DEF or DYN domain.
>>>>>>
>>>>>> Hummm, need to think more about it.
>>>>>
>>>>> That is indeed tricky.
>>>>> I would've found it super appealing to always just have sugov DL
>>>>> tasks activate
>>>>> on this_cpu and not have to worry about all this, but then you have
>>>>> contention
>>>>> amongst CPUs of a cluster and there are energy improvements from
>>>>> always
>>>>> having little cores handle all sugov DL tasks, even for the big CPUs;
>>>>> that's why I introduced
>>>>> commit 93940fbdc468 ("cpufreq/schedutil: Only bind threads if needed")
>>>>> but that really doesn't make this any easier.
>>>>
>>>> What if we just ignore them consistently? We already do that for
>>>> admission control, so maybe we can do that when rebuilding domains as
>>>> well (until we maybe find a better way to deal with them).
>>>>
>>>> Does the following make any difference?
>>>>
>>>> ---
>>>> kernel/sched/deadline.c | 2 +-
>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>>> index b254d878789d..8f7420e0c9d6 100644
>>>> --- a/kernel/sched/deadline.c
>>>> +++ b/kernel/sched/deadline.c
>>>> @@ -2995,7 +2995,7 @@ void dl_add_task_root_domain(struct
>>>> task_struct *p)
>>>> struct dl_bw *dl_b;
>>>> raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
>>>> - if (!dl_task(p)) {
>>>> + if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
>>>> raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
>>>> return;
>>>> }
>>>>
>>>
>>> I have tested this on top of v6.14-rc2, but this is still not
>>> resolving the
>>> issue for me :-(
>>
>> Thanks for testing.
>>
>> Was the testing done with the full stack of changes I proposed so far? I
>> believe we still have to fix the accounting of dl_servers for the DEF
>> root domain (there is a patch that should do that).
>>
>> I updated the branch with the full set. In case it still fails, could
>> you please collect dmesg and tracing output as I suggested and share?
>
>
> Ah no it was not! OK, let me test the latest branch now.
Sorry for the delay, the day got away from me. However, it is still not
working :-(
Console log is attached.
Jon
--
nvpublic
U-Boot 2020.04-g6b630d64fd (Feb 19 2021 - 08:38:59 -0800)
SoC: tegra186
Model: NVIDIA P2771-0000-500
Board: NVIDIA P2771-0000
DRAM: 7.8 GiB
MMC: sdhci@3400000: 1, sdhci@3460000: 0
Loading Environment from MMC... *** Warning - bad CRC, using default environment
In: serial
Out: serial
Err: serial
Net:
Warning: ethernet@2490000 using MAC address from ROM
eth0: ethernet@2490000
Hit any key to stop autoboot: 2 1 0
MMC: no card present
switch to partitions #0, OK
mmc0(part 0) is current device
Scanning mmc 0:1...
Found /boot/extlinux/extlinux.conf
Retrieving file: /boot/extlinux/extlinux.conf
489 bytes read in 17 ms (27.3 KiB/s)
1: primary kernel
Retrieving file: /boot/initrd
7236840 bytes read in 187 ms (36.9 MiB/s)
Retrieving file: /boot/Image
47976960 bytes read in 1147 ms (39.9 MiB/s)
append: earlycon console=ttyS0,115200n8 fw_devlink=on root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81104/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2 video=tegrafb no_console_suspend=1 nvdumper_reserved=0x2772e0000 gpt rootfs.slot_suffix= usbcore.old_scheme_first=1 tegraid=18.1.2.0.0 maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.2031647.1 vpr_resize bl_prof_dataptr=0x10000@0x275840000 sdhci_tegra.en_boot_part_access=1 no_console_suspend root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81104/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2
Retrieving file: /boot/dtb/tegra186-p2771-0000.dtb
108349 bytes read in 21 ms (4.9 MiB/s)
## Flattened Device Tree blob at 88400000
Booting using the fdt blob at 0x88400000
Using Device Tree in place at 0000000088400000, end 000000008841d73c
copying carveout for /host1x@13e00000/display-hub@15200000/display@15200000...
copying carveout for /host1x@13e00000/display-hub@15200000/display@15210000...
copying carveout for /host1x@13e00000/display-hub@15200000/display@15220000...
DT node /trusty missing in source; can't copy status
DT node /reserved-memory/fb0_carveout missing in source; can't copy
DT node /reserved-memory/fb1_carveout missing in source; can't copy
DT node /reserved-memory/fb2_carveout missing in source; can't copy
DT node /reserved-memory/ramoops_carveout missing in source; can't copy
DT node /reserved-memory/vpr-carveout missing in source; can't copy
Starting kernel ...
[ 0.000000] Booting Linux on physical CPU 0x0000000100 [0x411fd073]
[ 0.000000] Linux version 6.13.0-rc6-next-20250110-00006-g8af20d375c86 (jonathanh@goldfinger) (aarch64-linux-gcc.br_real (Buildroot 2022.08) 11.3.0, GNU ld (GNU Binutils) 2.38) #2 SMP PREEMPT Fri Feb 14 01:41:10 PST 2025
[ 0.000000] Machine model: NVIDIA Jetson TX2 Developer Kit
[ 0.000000] printk: debug: ignoring loglevel setting.
[ 0.000000] efi: UEFI not found.
[ 0.000000] OF: reserved mem: Reserved memory: unsupported node format, ignoring
[ 0.000000] earlycon: uart0 at MMIO 0x0000000003100000 (options '115200n8')
[ 0.000000] printk: legacy bootconsole [uart0] enabled
[ 0.000000] OF: reserved mem: Reserved memory: unsupported node format, ignoring
[ 0.000000] NUMA: Faking a node at [mem 0x0000000080000000-0x00000002771fffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x274db08c0-0x274db2eff]
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x0000000080000000-0x00000000ffffffff]
[ 0.000000] DMA32 empty
[ 0.000000] Normal [mem 0x0000000100000000-0x00000002771fffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x0000000080000000-0x00000000efffffff]
[ 0.000000] node 0: [mem 0x00000000f0200000-0x00000002757fffff]
[ 0.000000] node 0: [mem 0x0000000275e00000-0x0000000275ffffff]
[ 0.000000] node 0: [mem 0x0000000276600000-0x00000002767fffff]
[ 0.000000] node 0: [mem 0x0000000277000000-0x00000002771fffff]
[ 0.000000] Initmem setup node 0 [mem 0x0000000080000000-0x00000002771fffff]
[ 0.000000] On node 0, zone DMA: 512 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 1536 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 1536 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 2048 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 3584 pages in unavailable ranges
[ 0.000000] cma: Reserved 32 MiB at 0x00000000fe000000 on node -1
[ 0.000000] psci: probing for conduit method from DT.
[ 0.000000] psci: PSCIv1.0 detected in firmware.
[ 0.000000] psci: Using standard PSCI v0.2 function IDs
[ 0.000000] psci: MIGRATE_INFO_TYPE not supported.
[ 0.000000] psci: SMC Calling Convention v1.1
[ 0.000000] percpu: Embedded 25 pages/cpu s61592 r8192 d32616 u102400
[ 0.000000] pcpu-alloc: s61592 r8192 d32616 u102400 alloc=25*4096
[ 0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5
[ 0.000000] Detected PIPT I-cache on CPU0
[ 0.000000] CPU features: detected: Spectre-v2
[ 0.000000] CPU features: detected: Spectre-BHB
[ 0.000000] CPU features: detected: ARM erratum 1742098
[ 0.000000] CPU features: detected: ARM errata 1165522, 1319367, or 1530923
[ 0.000000] alternatives: applying boot alternatives
[ 0.000000] Kernel command line: earlycon console=ttyS0,115200n8 fw_devlink=on root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81104/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2 video=tegrafb no_console_suspend=1 nvdumper_reserved=0x2772e0000 gpt rootfs.slot_suffix= usbcore.old_scheme_first=1 tegraid=18.1.2.0.0 maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.2031647.1 vpr_resize bl_prof_dataptr=0x10000@0x275840000 sdhci_tegra.en_boot_part_access=1
[ 0.000000] Unknown kernel command line parameters "netdevwait vpr_resize nvdumper_reserved=0x2772e0000 tegraid=18.1.2.0.0 bl_prof_dataptr=0x10000@0x275840000", will be passed to user space.
[ 0.000000] printk: log buffer data + meta data: 131072 + 458752 = 589824 bytes
[ 0.000000] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[ 0.000000] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[ 0.000000] Fallback order for Node 0: 0
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 2055168
[ 0.000000] Policy zone: Normal
[ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[ 0.000000] software IO TLB: area num 8.
[ 0.000000] software IO TLB: mapped [mem 0x00000000fa000000-0x00000000fe000000] (64MB)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
[ 0.000000] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 0.000000] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 0.000000] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
[ 0.000000] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
[ 0.000000] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
[ 0.000000] rq_attach_root: cpu=5 old_span=NULL new_span=0-4
[ 0.000000] rcu: Preemptible hierarchical RCU implementation.
[ 0.000000] rcu: RCU event tracing is enabled.
[ 0.000000] rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=6.
[ 0.000000] Trampoline variant of Tasks RCU enabled.
[ 0.000000] Tracing variant of Tasks RCU enabled.
[ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
[ 0.000000] RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=6.
[ 0.000000] RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=6.
[ 0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[ 0.000000] Root IRQ handler: gic_handle_irq
[ 0.000000] GIC: Using split EOI/Deactivate mode
[ 0.000000] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[ 0.000000] arch_timer: cp15 timer(s) running at 31.25MHz (phys).
[ 0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0xe6a171046, max_idle_ns: 881590405314 ns
[ 0.000000] sched_clock: 56 bits at 31MHz, resolution 32ns, wraps every 4398046511088ns
[ 0.008834] Console: colour dummy device 80x25
[ 0.013495] printk: legacy console [tty0] enabled
[ 0.018425] printk: legacy bootconsole [uart0] disabled
[ 0.000000] Unknown kernel command line parameters "netdevwait vpr_resize nvdumper_reserved=0x2772e0000 tegraid=18.1.2.0.0 bl_prof_dataptr=0x10000@0x275840000", will be passed to user space.
[ 0.000000] printk: log buffer data + meta data: 131072 + 458752 = 589824 bytes
[ 0.000000] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[ 0.000000] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[ 0.000000] Fallback order for Node 0: 0
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 2055168
[ 0.000000] Policy zone: Normal
[ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[ 0.000000] software IO TLB: area num 8.
[ 0.000000] software IO TLB: mapped [mem 0x00000000fa000000-0x00000000fe000000] (64MB)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
[ 0.000000] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 0.000000] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 0.000000] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
[ 0.000000] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
[ 0.000000] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
[ 0.000000] rq_attach_root: cpu=5 old_span=NULL new_span=0-4
[ 0.000000] rcu: Preemptible hierarchical RCU implementation.
[ 0.000000] rcu: RCU event tracing is enabled.
[ 0.000000] rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=6.
[ 0.000000] Trampoline variant of Tasks RCU enabled.
[ 0.000000] Tracing variant of Tasks RCU enabled.
[ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
[ 0.000000] RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=6.
[ 0.000000] RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=6.
[ 0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[ 0.000000] Root IRQ handler: gic_handle_irq
[ 0.000000] GIC: Using split EOI/Deactivate mode
[ 0.000000] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[ 0.000000] arch_timer: cp15 timer(s) running at 31.25MHz (phys).
[ 0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0xe6a171046, max_idle_ns: 881590405314 ns
[ 0.000000] sched_clock: 56 bits at 31MHz, resolution 32ns, wraps every 4398046511088ns
[ 0.008834] Console: colour dummy device 80x25
[ 0.013495] printk: legacy console [tty0] enabled
[ 0.018425] printk: legacy bootconsole [uart0] disabled
[ 0.023954] Calibrating delay loop (skipped), value calculated using timer frequency.. 62.50 BogoMIPS (lpj=125000)
[ 0.023970] pid_max: default: 32768 minimum: 301
[ 0.024018] LSM: initializing lsm=capability
[ 0.024122] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[ 0.024149] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[ 0.024665] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0-5 type=DEF
[ 0.028220] rcu: Hierarchical SRCU implementation.
[ 0.028231] rcu: Max phase no-delay instances is 1000.
[ 0.028415] Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
[ 0.034204] Tegra Revision: A02 SKU: 220 CPU Process: 0 SoC Process: 0
[ 0.035793] EFI services will not be available.
[ 0.039973] smp: Bringing up secondary CPUs ...
[ 0.048898] CPU features: detected: Kernel page table isolation (KPTI)
[ 0.048935] Detected PIPT I-cache on CPU1
[ 0.048951] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU1: 0x0000009444c004
[ 0.048973] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU1: 0x00000010305116
[ 0.049003] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU1: 0x00000003001066
[ 0.049055] CPU features: Unsupported CPU feature variation detected.
[ 0.049237] CPU1: Booted secondary processor 0x0000000000 [0x4e0f0030]
[ 0.049311] __dl_add: cpus=1 tsk_bw=52428 total_bw=104856 span=0-5 type=DEF
[ 0.060516] Detected PIPT I-cache on CPU2
[ 0.060536] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU2: 0x0000009444c004
[ 0.060556] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU2: 0x00000010305116
[ 0.060582] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU2: 0x00000003001066
[ 0.060738] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
[ 0.060792] __dl_add: cpus=2 tsk_bw=52428 total_bw=157284 span=0-5 type=DEF
[ 0.068381] Detected PIPT I-cache on CPU3
[ 0.068475] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
[ 0.068501] __dl_add: cpus=3 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
[ 0.076341] Detected PIPT I-cache on CPU4
[ 0.076406] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
[ 0.076430] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 0.076974] Detected PIPT I-cache on CPU5
[ 0.077039] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
[ 0.077064] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 0.077141] smp: Brought up 1 node, 6 CPUs
[ 0.077177] SMP: Total of 6 processors activated.
[ 0.077184] CPU: All CPU(s) started at EL2
[ 0.077196] CPU features: detected: 32-bit EL0 Support
[ 0.077203] CPU features: detected: 32-bit EL1 Support
[ 0.077211] CPU features: detected: CRC32 instructions
[ 0.077300] alternatives: applying system-wide alternatives
[ 0.085706] CPU0 attaching sched-domain(s):
[ 0.085726] domain-0: span=0,3-5 level=MC
[ 0.085741] groups: 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
[ 0.085782] __dl_sub: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 0.085789] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 0.085796] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 0.085801] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 0.085805] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 0.085809] CPU3 attaching sched-domain(s):
[ 0.085836] domain-0: span=0,3-5 level=MC
[ 0.085846] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }
[ 0.085885] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 0.085889] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 0.085894] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 0.085897] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 0.085900] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 0.085904] CPU4 attaching sched-domain(s):
[ 0.085930] domain-0: span=0,3-5 level=MC
[ 0.085940] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }
[ 0.085977] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 0.085981] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 0.085985] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 0.085989] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 0.085993] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 0.085996] CPU5 attaching sched-domain(s):
[ 0.086023] domain-0: span=0,3-5 level=MC
[ 0.086033] groups: 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }
[ 0.086070] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-2,5 type=DEF
[ 0.086075] __dl_server_detach_root: cpu=5 rd_span=1-2,5 total_bw=104856
[ 0.086079] rq_attach_root: cpu=5 old_span=NULL new_span=0,3-4
[ 0.086082] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 0.086085] __dl_server_attach_root: cpu=5 rd_span=0,3-5 total_bw=209712
[ 0.086089] root domain span: 0,3-5
[ 0.086114] default domain span: 1-2
[ 0.086186] Memory: 7902468K/8220672K available (17856K kernel code, 5188K rwdata, 12720K rodata, 10944K init, 1132K bss, 280192K reserved, 32768K cma-reserved)
[ 0.087272] devtmpfs: initialized
[ 0.101456] DMA-API: preallocated 65536 debug entries
[ 0.101479] DMA-API: debugging enabled by kernel config
[ 0.101493] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[ 0.101511] futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
[ 0.101922] 20752 pages in range for non-PLT usage
[ 0.101932] 512272 pages in range for PLT usage
[ 0.102079] pinctrl core: initialized pinctrl subsystem
[ 0.104447] DMI not present or invalid.
[ 0.106550] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[ 0.107337] DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
[ 0.107543] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[ 0.107843] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[ 0.107888] audit: initializing netlink subsys (disabled)
[ 0.108009] audit: type=2000 audit(0.092:1): state=initialized audit_enabled=0 res=1
[ 0.109592] thermal_sys: Registered thermal governor 'step_wise'
[ 0.109599] thermal_sys: Registered thermal governor 'power_allocator'
[ 0.109730] cpuidle: using governor menu
[ 0.109959] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
[ 0.110136] ASID allocator initialised with 32768 entries
[ 0.112141] Serial: AMBA PL011 UART driver
[ 0.120112] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 0.120146] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 0.120169] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 0.120190] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 0.120209] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 0.120229] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 0.120248] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 0.120267] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 0.120286] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 0.120305] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 0.120324] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 0.120343] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 0.120362] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 0.120381] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 0.120401] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 0.120420] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 0.120439] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 0.120458] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 0.120478] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 0.120498] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 0.120517] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 0.120537] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 0.120556] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 0.120576] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 0.120596] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 0.120617] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 0.120637] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 0.120657] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 0.120676] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 0.120732] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.120792] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.120852] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.120912] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.120972] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121032] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121091] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121150] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121208] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121266] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121324] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121387] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121449] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121512] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121574] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121637] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121707] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121772] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121834] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121893] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.121952] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.122011] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.122070] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.122131] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.122191] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.122249] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.122308] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.122384] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.122467] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.134941] /memory-controller@2c00000/external-memory-controller@2c60000: Fixed dependency cycle(s) with /bpmp
[ 0.135149] /bpmp: Fixed dependency cycle(s) with /memory-controller@2c00000/external-memory-controller@2c60000
[ 0.139315] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[ 0.139331] HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
[ 0.139340] HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
[ 0.139348] HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
[ 0.139356] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[ 0.139363] HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
[ 0.139371] HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
[ 0.139378] HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
[ 0.140652] ACPI: Interpreter disabled.
[ 0.142642] iommu: Default domain type: Translated
[ 0.142657] iommu: DMA domain TLB invalidation policy: strict mode
[ 0.143152] SCSI subsystem initialized
[ 0.143260] libata version 3.00 loaded.
[ 0.143391] usbcore: registered new interface driver usbfs
[ 0.143415] usbcore: registered new interface driver hub
[ 0.143444] usbcore: registered new device driver usb
[ 0.144003] pps_core: LinuxPPS API ver. 1 registered
[ 0.144013] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[ 0.144028] PTP clock support registered
[ 0.144101] EDAC MC: Ver: 3.0.0
[ 0.144611] scmi_core: SCMI protocol bus registered
[ 0.145250] FPGA manager framework
[ 0.145313] Advanced Linux Sound Architecture Driver Initialized.
[ 0.145960] vgaarb: loaded
[ 0.146338] clocksource: Switched to clocksource arch_sys_counter
[ 0.146500] VFS: Disk quotas dquot_6.6.0
[ 0.146521] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.146677] pnp: PnP ACPI: disabled
[ 0.151902] NET: Registered PF_INET protocol family
[ 0.152117] IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[ 0.156006] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
[ 0.156093] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[ 0.156114] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
[ 0.156432] TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
[ 0.157579] TCP: Hash tables configured (established 65536 bind 65536)
[ 0.157657] UDP hash table entries: 4096 (order: 6, 262144 bytes, linear)
[ 0.157867] UDP-Lite hash table entries: 4096 (order: 6, 262144 bytes, linear)
[ 0.158151] NET: Registered PF_UNIX/PF_LOCAL protocol family
[ 0.158507] RPC: Registered named UNIX socket transport module.
[ 0.158524] RPC: Registered udp transport module.
[ 0.158530] RPC: Registered tcp transport module.
[ 0.158536] RPC: Registered tcp-with-tls transport module.
[ 0.158542] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 0.158556] PCI: CLS 0 bytes, default 64
[ 0.158669] Unpacking initramfs...
[ 0.165065] kvm [1]: nv: 566 coarse grained trap handlers
[ 0.165381] kvm [1]: IPA Size Limit: 40 bits
[ 0.166885] kvm [1]: vgic interrupt IRQ9
[ 0.166950] kvm [1]: Hyp nVHE mode initialized successfully
[ 0.168282] Initialise system trusted keyrings
[ 0.168443] workingset: timestamp_bits=42 max_order=21 bucket_order=0
[ 0.168677] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 0.168876] NFS: Registering the id_resolver key type
[ 0.168917] Key type id_resolver registered
[ 0.168925] Key type id_legacy registered
[ 0.168943] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 0.168952] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[ 0.169063] 9p: Installing v9fs 9p2000 file system support
[ 0.200858] Key type asymmetric registered
[ 0.200884] Asymmetric key parser 'x509' registered
[ 0.200949] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 245)
[ 0.200960] io scheduler mq-deadline registered
[ 0.200968] io scheduler kyber registered
[ 0.200996] io scheduler bfq registered
[ 0.210189] ledtrig-cpu: registered to indicate activity on CPUs
[ 0.232581] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 0.235210] msm_serial: driver initialized
[ 0.235453] SuperH (H)SCI(F) driver initialized
[ 0.235567] STM32 USART driver initialized
[ 0.238436] arm-smmu 12000000.iommu: probing hardware configuration...
[ 0.238455] arm-smmu 12000000.iommu: SMMUv2 with:
[ 0.238465] arm-smmu 12000000.iommu: stage 1 translation
[ 0.238473] arm-smmu 12000000.iommu: stage 2 translation
[ 0.238481] arm-smmu 12000000.iommu: nested translation
[ 0.238489] arm-smmu 12000000.iommu: stream matching with 128 register groups
[ 0.238500] arm-smmu 12000000.iommu: 64 context banks (0 stage-2 only)
[ 0.238518] arm-smmu 12000000.iommu: Supported page sizes: 0x61311000
[ 0.238528] arm-smmu 12000000.iommu: Stage-1: 48-bit VA -> 48-bit IPA
[ 0.238537] arm-smmu 12000000.iommu: Stage-2: 48-bit IPA -> 48-bit PA
[ 0.238575] arm-smmu 12000000.iommu: preserved 0 boot mappings
[ 0.243740] loop: module loaded
[ 0.244499] megasas: 07.727.03.00-rc1
[ 0.249780] tun: Universal TUN/TAP device driver, 1.6
[ 0.250492] thunder_xcv, ver 1.0
[ 0.250520] thunder_bgx, ver 1.0
[ 0.250541] nicpf, ver 1.0
[ 0.251370] hns3: Hisilicon Ethernet Network Driver for Hip08 Family - version
[ 0.251382] hns3: Copyright (c) 2017 Huawei Corporation.
[ 0.251414] hclge is initializing
[ 0.251444] e1000: Intel(R) PRO/1000 Network Driver
[ 0.251453] e1000: Copyright (c) 1999-2006 Intel Corporation.
[ 0.251476] e1000e: Intel(R) PRO/1000 Network Driver
[ 0.251483] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[ 0.251503] igb: Intel(R) Gigabit Ethernet Network Driver
[ 0.251511] igb: Copyright (c) 2007-2014 Intel Corporation.
[ 0.251534] igbvf: Intel(R) Gigabit Virtual Function Network Driver
[ 0.251542] igbvf: Copyright (c) 2009 - 2012 Intel Corporation.
[ 0.251765] sky2: driver version 1.30
[ 0.253605] usbcore: registered new device driver r8152-cfgselector
[ 0.253629] usbcore: registered new interface driver r8152
[ 0.253894] VFIO - User Level meta-driver version: 0.3
[ 0.256056] usbcore: registered new interface driver usb-storage
[ 0.258274] i2c_dev: i2c /dev entries driver
[ 0.263723] sdhci: Secure Digital Host Controller Interface driver
[ 0.263739] sdhci: Copyright(c) Pierre Ossman
[ 0.264258] Synopsys Designware Multimedia Card Interface Driver
[ 0.264926] sdhci-pltfm: SDHCI platform and OF driver helper
[ 0.267041] tegra-bpmp bpmp: Adding to iommu group 0
[ 0.267554] tegra-bpmp bpmp: firmware: 91572a54614f84d0fd0c270beec2c56f
[ 0.269206] /bpmp/i2c/pmic@3c: Fixed dependency cycle(s) with /bpmp/i2c/pmic@3c/pinmux
[ 0.270535] max77620 0-003c: PMIC Version OTP:0x45 and ES:0x8
[ 0.277815] VDD_DDR_1V1_PMIC: Bringing 1125000uV into 1100000-1100000uV
[ 0.288013] VDD_RTC: Bringing 800000uV into 1000000-1000000uV
[ 0.289064] VDDIO_SDMMC3_AP: Bringing 1800000uV into 2800000-2800000uV
[ 0.290657] VDD_HDMI_1V05: Bringing 1000000uV into 1050000-1050000uV
[ 0.291432] VDD_PEX_1V05: Bringing 1000000uV into 1050000-1050000uV
[ 0.371701] Freeing initrd memory: 7064K
[ 0.412833] max77686-rtc max77620-rtc: registered as rtc0
[ 0.445402] max77686-rtc max77620-rtc: setting system clock to 2021-09-12T08:21:06 UTC (1631434866)
[ 0.574812] clocksource: tsc: mask: 0xffffffffffffff max_cycles: 0xe6a171046, max_idle_ns: 881590405314 ns
[ 0.574836] clocksource: osc: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 49772407460 ns
[ 0.574848] clocksource: usec: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275 ns
[ 0.575307] usbcore: registered new interface driver usbhid
[ 0.575320] usbhid: USB HID core driver
[ 0.578844] hw perfevents: enabled with armv8_cortex_a57 PMU driver, 7 (0,8000003f) counters available
[ 0.579367] hw perfevents: enabled with armv8_nvidia_denver PMU driver, 7 (0,8000003f) counters available
[ 0.584040] NET: Registered PF_PACKET protocol family
[ 0.584108] 9pnet: Installing 9P2000 support
[ 0.584154] Key type dns_resolver registered
[ 0.591139] registered taskstats version 1
[ 0.591270] Loading compiled-in X.509 certificates
[ 0.596136] Demotion targets for Node 0: null
[ 0.616764] tegra-pcie 10003000.pcie: Adding to iommu group 1
[ 0.617062] tegra-pcie 10003000.pcie: host bridge /pcie@10003000 ranges:
[ 0.617094] tegra-pcie 10003000.pcie: MEM 0x0010000000..0x0010001fff -> 0x0010000000
[ 0.617115] tegra-pcie 10003000.pcie: MEM 0x0010004000..0x0010004fff -> 0x0010004000
[ 0.617134] tegra-pcie 10003000.pcie: IO 0x0050000000..0x005000ffff -> 0x0000000000
[ 0.617155] tegra-pcie 10003000.pcie: MEM 0x0050100000..0x0057ffffff -> 0x0050100000
[ 0.617170] tegra-pcie 10003000.pcie: MEM 0x0058000000..0x007fffffff -> 0x0058000000
[ 0.617241] tegra-pcie 10003000.pcie: 4x1, 1x1 configuration
[ 0.618670] tegra-pcie 10003000.pcie: probing port 0, using 4 lanes
[ 1.831045] tegra-pcie 10003000.pcie: link 0 down, ignoring
[ 1.831468] tegra-pcie 10003000.pcie: PCI host bridge to bus 0000:00
[ 1.831486] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 1.831497] pci_bus 0000:00: root bus resource [mem 0x10000000-0x10001fff]
[ 1.831507] pci_bus 0000:00: root bus resource [mem 0x10004000-0x10004fff]
[ 1.831516] pci_bus 0000:00: root bus resource [io 0x0000-0xffff]
[ 1.831526] pci_bus 0000:00: root bus resource [mem 0x50100000-0x57ffffff]
[ 1.831535] pci_bus 0000:00: root bus resource [mem 0x58000000-0x7fffffff pref]
[ 1.835154] pci_bus 0000:00: resource 4 [mem 0x10000000-0x10001fff]
[ 1.835168] pci_bus 0000:00: resource 5 [mem 0x10004000-0x10004fff]
[ 1.835177] pci_bus 0000:00: resource 6 [io 0x0000-0xffff]
[ 1.835186] pci_bus 0000:00: resource 7 [mem 0x50100000-0x57ffffff]
[ 1.835195] pci_bus 0000:00: resource 8 [mem 0x58000000-0x7fffffff pref]
[ 1.836160] tegra-gpcdma 2600000.dma-controller: Adding to iommu group 2
[ 1.838020] tegra-gpcdma 2600000.dma-controller: GPC DMA driver register 31 channels
[ 1.840668] printk: legacy console [ttyS0] disabled
[ 1.840851] 3100000.serial: ttyS0 at MMIO 0x3100000 (irq = 23, base_baud = 25500000) is a Tegra
[ 1.840888] printk: legacy console [ttyS0] enabled
[ 4.554535] dwc-eth-dwmac 2490000.ethernet: Adding to iommu group 3
[ 4.574296] dwc-eth-dwmac 2490000.ethernet: User ID: 0x10, Synopsys ID: 0x41
[ 4.581367] dwc-eth-dwmac 2490000.ethernet: DWMAC4/5
[ 4.586428] dwc-eth-dwmac 2490000.ethernet: DMA HW capability register supported
[ 4.593823] dwc-eth-dwmac 2490000.ethernet: RX Checksum Offload Engine supported
[ 4.601215] dwc-eth-dwmac 2490000.ethernet: TX Checksum insertion supported
[ 4.608174] dwc-eth-dwmac 2490000.ethernet: Wake-Up On Lan supported
[ 4.614560] dwc-eth-dwmac 2490000.ethernet: TSO supported
[ 4.619961] dwc-eth-dwmac 2490000.ethernet: Enable RX Mitigation via HW Watchdog Timer
[ 4.627879] dwc-eth-dwmac 2490000.ethernet: Enabled L3L4 Flow TC (entries=8)
[ 4.634926] dwc-eth-dwmac 2490000.ethernet: Enabled RFS Flow TC (entries=10)
[ 4.641971] dwc-eth-dwmac 2490000.ethernet: TSO feature enabled
[ 4.647890] dwc-eth-dwmac 2490000.ethernet: Using 40/40 bits DMA host/device width
[ 4.656210] irq: IRQ73: trimming hierarchy from :pmc@c360000
[ 4.666520] tegra_rtc c2a0000.rtc: registered as rtc1
[ 4.671587] tegra_rtc c2a0000.rtc: Tegra internal Real Time Clock
[ 4.680293] irq: IRQ76: trimming hierarchy from :pmc@c360000
[ 4.686187] pca953x 1-0074: using no AI
[ 4.693156] irq: IRQ77: trimming hierarchy from :pmc@c360000
[ 4.698964] pca953x 1-0077: using no AI
[ 4.717856] cpufreq: cpufreq_online: CPU0: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
[ 4.728789] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.728796] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.728802] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.728806] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.728810] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.728815] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.769218] dl_clear_root_domain: span=1-2 type=DEF
[ 4.769222] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.769227] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.769301] __dl_sub: cpus=4 tsk_bw=104857 total_bw=104855 span=0,3-5 type=DYN
[ 4.769377] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.769382] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.769387] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.769392] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.769396] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.769400] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.835665] dl_clear_root_domain: span=1-2 type=DEF
[ 4.835669] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.835673] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.835733] __dl_sub: cpus=4 tsk_bw=104857 total_bw=104855 span=0,3-5 type=DYN
[ 4.835784] cpufreq: cpufreq_online: CPU3: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
[ 4.872499] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.872504] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.872509] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.872513] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.872517] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.872521] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.912870] dl_clear_root_domain: span=1-2 type=DEF
[ 4.912874] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.912879] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.912973] cpufreq: cpufreq_online: CPU4: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
[ 4.942474] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.942478] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.942483] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.942487] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.942491] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.942495] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.982819] dl_clear_root_domain: span=1-2 type=DEF
[ 4.982821] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.982824] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.982889] cpufreq: cpufreq_online: CPU5: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
[ 5.012384] dl_clear_root_domain: span=0,3-5 type=DYN
[ 5.012388] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 5.012393] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 5.012397] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 5.012401] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 5.012405] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 5.052728] dl_clear_root_domain: span=1-2 type=DEF
[ 5.052730] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 5.052733] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 5.054374] dl_clear_root_domain: span=0,3-5 type=DYN
[ 5.054380] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 5.054383] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 5.054386] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 5.054389] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 5.054392] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 5.060615] sdhci-tegra 3440000.mmc: Adding to iommu group 4
[ 5.066085] dl_clear_root_domain: span=1-2 type=DEF
[ 5.066090] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 5.066092] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 5.138051] sdhci-tegra 3460000.mmc: Adding to iommu group 5
[ 5.147537] irq: IRQ86: trimming hierarchy from :pmc@c360000
[ 5.154626] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.154912] tegra-xusb 3530000.usb: Adding to iommu group 6
[ 5.166038] mmc0: CQHCI version 5.10
[ 5.169337] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 5.176962] tegra-xusb 3530000.usb: xHCI Host Controller
[ 5.182293] tegra-xusb 3530000.usb: new USB bus registered, assigned bus number 1
[ 5.190459] tegra-xusb 3530000.usb: hcc params 0x0184fd25 hci version 0x100 quirks 0x0000000000000810
[ 5.199696] tegra-xusb 3530000.usb: irq 87, io mem 0x03530000
[ 5.203765] mmc2: SDHCI controller on 3440000.mmc [3440000.mmc] using ADMA 64-bit
[ 5.205567] tegra-xusb 3530000.usb: xHCI Host Controller
[ 5.218382] tegra-xusb 3530000.usb: new USB bus registered, assigned bus number 2
[ 5.226005] tegra-xusb 3530000.usb: Host supports USB 3.0 SuperSpeed
[ 5.226344] mmc0: SDHCI controller on 3460000.mmc [3460000.mmc] using ADMA 64-bit
[ 5.232717] hub 1-0:1.0: USB hub found
[ 5.243750] hub 1-0:1.0: 4 ports detected
[ 5.248670] hub 2-0:1.0: USB hub found
[ 5.252560] hub 2-0:1.0: 3 ports detected
[ 5.261080] sdhci-tegra 3400000.mmc: Adding to iommu group 7
[ 5.267401] irq: IRQ90: trimming hierarchy from :interrupt-controller@3881000
[ 5.271261] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.274632] irq: IRQ92: trimming hierarchy from :pmc@c360000
[ 5.274679] sdhci-tegra 3400000.mmc: Got CD GPIO
[ 5.274697] sdhci-tegra 3400000.mmc: Got WP GPIO
[ 5.294990] irq: IRQ93: trimming hierarchy from :pmc@c360000
[ 5.300737] input: gpio-keys as /devices/platform/gpio-keys/input/input0
[ 5.326442] irq: IRQ94: trimming hierarchy from :pmc@c360000
[ 5.332398] mmc1: SDHCI controller on 3400000.mmc [3400000.mmc] using ADMA 64-bit
[ 5.340678] dwc-eth-dwmac 2490000.ethernet eth0: Register MEM_TYPE_PAGE_POOL RxQ-0
[ 5.346514] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.351792] dwc-eth-dwmac 2490000.ethernet eth0: PHY [stmmac-0:00] driver [Broadcom BCM89610] (irq=73)
[ 5.363537] dwmac4: Master AXI performs any burst length
[ 5.368898] dwc-eth-dwmac 2490000.ethernet eth0: No Safety Features support found
[ 5.373265] mmc0: Command Queue Engine enabled
[ 5.376493] dwc-eth-dwmac 2490000.ethernet eth0: IEEE 1588-2008 Advanced Timestamp supported
[ 5.380826] mmc0: new HS400 MMC card at address 0001
[ 5.381126] mmcblk0: mmc0:0001 032G34 29.1 GiB
[ 5.389484] dwc-eth-dwmac 2490000.ethernet eth0: registered PTP clock
[ 5.398904] mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16 p17 p18 p19 p20 p21 p22 p23 p24 p25 p26 p27 p28 p29 p30 p31 p32 p33
[ 5.415656] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.418104] dwc-eth-dwmac 2490000.ethernet eth0: configuring for phy/rgmii link mode
[ 5.420319] mmcblk0boot0: mmc0:0001 032G34 4.00 MiB
[ 5.420885] mmcblk0boot1: mmc0:0001 032G34 4.00 MiB
[ 5.421430] mmcblk0rpmb: mmc0:0001 032G34 4.00 MiB, chardev (234:0)
[ 8.353605] dwc-eth-dwmac 2490000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 8.374360] IP-Config: Complete:
[ 8.377593] device=eth0, hwaddr=00:04:4b:8c:56:1e, ipaddr=192.168.99.2, mask=255.255.255.0, gw=192.168.99.1
[ 8.387782] host=192.168.99.2, domain=, nis-domain=(none)
[ 8.393622] bootserver=192.168.99.1, rootserver=192.168.99.1, rootpath=
[ 8.393763] clk: Disabling unused clocks
[ 8.426139] PM: genpd: Disabling unused power domains
[ 8.431247] ALSA device list:
[ 8.434220] No soundcards found.
[ 8.442314] Freeing unused kernel memory: 10944K
[ 8.447046] Run /init as init process
[ 8.450754] with arguments:
[ 8.453723] /init
[ 8.456023] netdevwait
[ 8.458755] vpr_resize
[ 8.461461] with environment:
[ 8.464618] HOME=/
[ 8.466997] TERM=linux
[ 8.469702] nvdumper_reserved=0x2772e0000
[ 8.474074] tegraid=18.1.2.0.0
[ 8.477482] bl_prof_dataptr=0x10000@0x275840000
[ 8.512409] Root device found: nfs
[ 8.522850] Ethernet interface: eth0
[ 8.533245] IP Address: 192.168.99.2
[ 8.602033] Rootfs mounted over nfs
[ 8.628923] Switching from initrd to actual rootfs
[ 8.902833] systemd[1]: System time before build time, advancing clock.
[ 9.016335] NET: Registered PF_INET6 protocol family
[ 9.022936] Segment Routing with IPv6
[ 9.026651] In-situ OAM (IOAM) with IPv6
[ 9.067213] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
[ 9.088927] systemd[1]: Detected architecture arm64.
[ 9.138815] systemd[1]: Set hostname to <tegra-ubuntu>.
[ 10.766341] random: crng init done
[ 10.769946] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[ 10.778041] systemd[1]: Reached target Swap.
[ 10.782775] systemd[1]: Created slice User and Session Slice.
[ 10.788882] systemd[1]: Created slice System Slice.
[ 10.793959] systemd[1]: Listening on udev Kernel Socket.
[ 10.799380] systemd[1]: Reached target Slices.
[ 10.804101] systemd[1]: Listening on Journal Audit Socket.
[ 10.980980] systemd-journald[186]: Received request to flush runtime journal from PID 1
[ 11.575974] tegra-host1x 13e00000.host1x: Adding to iommu group 8
[ 11.587412] host1x-context host1x-ctx.0: Adding to iommu group 9
[ 11.604261] host1x-context host1x-ctx.1: Adding to iommu group 10
[ 11.613009] host1x-context host1x-ctx.2: Adding to iommu group 11
[ 11.619798] host1x-context host1x-ctx.3: Adding to iommu group 12
[ 11.626766] host1x-context host1x-ctx.4: Adding to iommu group 13
[ 11.636361] host1x-context host1x-ctx.5: Adding to iommu group 14
[ 11.643694] host1x-context host1x-ctx.6: Adding to iommu group 15
[ 11.650284] host1x-context host1x-ctx.7: Adding to iommu group 16
[ 11.650639] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 11.717498] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 11.729626] tegra-xudc 3550000.usb: Adding to iommu group 17
[ 11.737831] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 11.754728] tegra-hda 3510000.hda: Adding to iommu group 18
[ 11.760495] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 11.760516] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 11.760526] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 11.760535] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 11.760544] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 11.760553] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 11.760563] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 11.760571] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 11.760580] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 11.760589] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 11.760599] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 11.760608] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 11.760617] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 11.760629] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 11.760640] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 11.760652] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 11.760661] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 11.760671] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 11.760679] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 11.760688] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 11.760697] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 11.760706] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 11.760715] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 11.760723] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 11.760733] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 11.760741] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 11.770153] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.050303] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.061711] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.073683] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.084567] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.095376] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.106171] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.116935] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127699] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127739] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127768] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127800] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127837] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127864] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127893] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127928] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127958] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.127993] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128024] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128052] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128083] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128109] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128136] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128162] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128187] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128212] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128243] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128289] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.128426] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.187825] at24 6-0050: 256 byte 24c02 EEPROM, read-only
[ 12.202707] gic 2a41000.interrupt-controller: GIC IRQ controller registered
[ 12.227796] input: NVIDIA Jetson TX2 HDA HDMI/DP,pcm=3 as /devices/platform/3510000.hda/sound/card0/input1
[ 12.237140] tegra-aconnect aconnect@2900000: Tegra ACONNECT bus registered
[ 12.247827] at24 6-0057: 256 byte 24c02 EEPROM, read-only
[ 12.248726] input: NVIDIA Jetson TX2 HDA HDMI/DP,pcm=7 as /devices/platform/3510000.hda/sound/card0/input2
[ 12.325840] tegra-audio-graph-card sound: Adding to iommu group 19
[ 12.410152] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 12.413627] tegra-adma 2930000.dma-controller: Tegra210 ADMA driver registered 32 channels
[ 12.420911] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.441603] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 12.452395] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.458295] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 12.473745] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.487357] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 12.498099] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.510830] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 12.521576] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.535876] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 12.546620] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.560332] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 12.571093] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.584003] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 12.594807] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.608227] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 12.618956] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.632741] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 12.643479] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.656716] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 12.667688] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.681516] tegra-dc 15200000.display: Adding to iommu group 20
[ 12.687696] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 12.698439] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.711376] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 12.722106] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.733516] tegra-dc 15210000.display: Adding to iommu group 20
[ 12.740263] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 12.750984] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.762247] tegra-dc 15220000.display: Adding to iommu group 20
[ 12.768283] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 12.779080] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.793474] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 12.804318] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.817936] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 12.828983] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.842893] irq: IRQ138: trimming hierarchy from :pmc@c360000
[ 12.842930] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 12.859571] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.873471] tegra-vic 15340000.vic: Adding to iommu group 21
[ 12.879476] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 12.890397] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.903399] tegra-nvdec 15480000.nvdec: Adding to iommu group 22
[ 12.910316] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 12.921242] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.934631] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 12.943990] [drm] Initialized tegra 1.0.0 for drm on minor 0
[ 12.945471] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.951870] drm drm: [drm] Cannot find any crtc or sizes
[ 12.967572] drm drm: [drm] Cannot find any crtc or sizes
[ 12.969201] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 12.978416] drm drm: [drm] Cannot find any crtc or sizes
[ 12.983721] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.002610] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 13.013471] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.027335] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 13.039273] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.053351] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 13.064098] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.076821] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 13.087539] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.100397] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 13.111352] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.124438] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 13.135401] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.148561] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 13.159350] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
Ubuntu 18.04.6 LTS tegra-ubuntu ttyS0
tegra-ubuntu login: ubuntu (automatic login)
[ 16.698922] dl_clear_root_domain: span=0,3-5 type=DYN
[ 16.698933] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 16.698941] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 16.698946] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 16.698951] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 16.698956] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 16.739375] dl_clear_root_domain: span=1-2 type=DEF
[ 16.739382] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 16.739386] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 16.758528] dl_clear_root_domain: span=0,3-5 type=DYN
[ 16.758536] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 16.758541] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 16.758544] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 16.758548] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 16.758551] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 16.799668] dl_clear_root_domain: span=1-2 type=DEF
[ 16.799676] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 16.799680] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 16.814674] dl_clear_root_domain: span=0,3-5 type=DYN
[ 16.814681] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 16.814686] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 16.814689] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 16.814692] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 16.814696] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 16.860445] dl_clear_root_domain: span=1-2 type=DEF
[ 16.860450] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 16.860454] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 16.879557] dl_clear_root_domain: span=0,3-5 type=DYN
[ 16.879564] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 16.879569] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 16.879572] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 16.879575] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 16.879578] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 16.934775] dl_clear_root_domain: span=1-2 type=DEF
[ 16.934781] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 16.934784] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 16.959842] dl_clear_root_domain: span=0,3-5 type=DYN
[ 16.959853] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 16.959861] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 16.959868] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 16.959873] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 16.959879] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 17.013809] dl_clear_root_domain: span=1-2 type=DEF
[ 17.013817] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 17.013822] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 17.026473] dl_clear_root_domain: span=0,3-5 type=DYN
[ 17.026480] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 17.026485] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 17.026488] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 17.026491] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 17.026495] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 17.088060] dl_clear_root_domain: span=1-2 type=DEF
[ 17.088066] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 17.088071] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 6.13.0-rc6-next-20250110-00006-g8af20d375c86 aarch64)
 * Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/pro
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@tegra-ubuntu:~$
[ 23.274543] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 3507000.sata
[ 23.283807] tegra-mc 2c00000.memory-controller: sync_state() pending due to 3507000.sata
[ 23.291981] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 15380000.nvjpg
[ 23.301396] tegra-mc 2c00000.memory-controller: sync_state() pending due to 15380000.nvjpg
[ 23.309681] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 154c0000.nvenc
[ 23.319088] tegra-mc 2c00000.memory-controller: sync_state() pending due to 154c0000.nvenc
[ 39.914396] VDD_RTC: disabling
[ 57.260269] PM: suspend entry (deep)
[ 57.264169] Filesystems sync: 0.000 seconds
[ 57.269287] Freezing user space processes
[ 57.274395] Freezing user space processes completed (elapsed 0.000 seconds)
[ 57.281379] OOM killer disabled.
[ 57.284609] Freezing remaining freezable tasks
[ 57.290150] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
[ 57.335619] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 57.353364] dwc-eth-dwmac 2490000.ethernet eth0: Link is Down
[ 57.397022] Disabling non-boot CPUs ...
[ 57.400904] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0,3-5
[ 57.400949] CPU0 attaching NULL sched-domain.
[ 57.415298] span=1-2
[ 57.417483] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 57.417487] __dl_server_detach_root: cpu=0 rd_span=0,3-5 total_bw=157284
[ 57.417496] rq_attach_root: cpu=0 old_span=NULL new_span=1-2
[ 57.417501] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DEF
[ 57.417504] __dl_server_attach_root: cpu=0 rd_span=0-2 total_bw=157284
[ 57.417507] CPU3 attaching NULL sched-domain.
[ 57.454804] span=0-2
[ 57.456987] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=3-5 type=DYN
[ 57.456990] __dl_server_detach_root: cpu=3 rd_span=3-5 total_bw=104856
[ 57.456998] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
[ 57.457000] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-3 type=DEF
[ 57.457003] __dl_server_attach_root: cpu=3 rd_span=0-3 total_bw=209712
[ 57.457006] CPU4 attaching NULL sched-domain.
[ 57.493964] span=0-3
[ 57.496152] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=4-5 type=DYN
[ 57.496156] __dl_server_detach_root: cpu=4 rd_span=4-5 total_bw=52428
[ 57.496162] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
[ 57.496165] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-4 type=DEF
[ 57.496168] __dl_server_attach_root: cpu=4 rd_span=0-4 total_bw=262140
[ 57.496171] CPU5 attaching NULL sched-domain.
[ 57.532952] span=0-4
[ 57.535143] rq_attach_root: cpu=5 old_span= new_span=0-4
[ 57.535147] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 57.535149] __dl_server_attach_root: cpu=5 rd_span=0-5 total_bw=314568
[ 57.535211] CPU0 attaching sched-domain(s):
[ 57.558178] domain-0: span=0,3-4 level=MC
[ 57.562276] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 57.568126] __dl_sub: cpus=5 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 57.568129] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 57.568136] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 57.568139] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 57.568142] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 57.568145] CPU3 attaching sched-domain(s):
[ 57.604141] domain-0: span=0,3-4 level=MC
[ 57.608242] groups: 3:{ span=3 }, 4:{ span=4 }, 0:{ span=0 }
[ 57.614088] __dl_sub: cpus=4 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 57.614091] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 57.614098] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 57.614100] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 57.614103] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 57.614106] CPU4 attaching sched-domain(s):
[ 57.650710] domain-0: span=0,3-4 level=MC
[ 57.654812] groups: 4:{ span=4 }, 0:{ span=0 }, 3:{ span=3 }
[ 57.660656] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 57.660660] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 57.660666] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 57.660669] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 57.660671] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 57.660675] root domain span: 0,3-4
[ 57.697801] default domain span: 1-2,5
[ 57.701560] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
[ 57.709917] psci: CPU5 killed (polled 0 ms)
[ 57.714734] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3 type=DYN span=0,3-4
[ 57.714773] CPU0 attaching NULL sched-domain.
[ 57.729120] span=1-2,5
[ 57.731483] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3-4 type=DYN
[ 57.731488] __dl_server_detach_root: cpu=0 rd_span=0,3-4 total_bw=104856
[ 57.731496] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,5
[ 57.731499] __dl_add: cpus=3 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DEF
[ 57.731503] __dl_server_attach_root: cpu=0 rd_span=0-2,5 total_bw=209712
[ 57.731506] CPU3 attaching NULL sched-domain.
[ 57.769309] span=0-2,5
[ 57.771670] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=3-4 type=DYN
[ 57.771673] __dl_server_detach_root: cpu=3 rd_span=3-4 total_bw=52428
[ 57.771680] rq_attach_root: cpu=3 old_span=NULL new_span=0-2,5
[ 57.771682] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-3,5 type=DEF
[ 57.771685] __dl_server_attach_root: cpu=3 rd_span=0-3,5 total_bw=262140
[ 57.771688] CPU4 attaching NULL sched-domain.
[ 57.808967] span=0-3,5
[ 57.811327] rq_attach_root: cpu=4 old_span= new_span=0-3,5
[ 57.811331] __dl_add: cpus=4 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 57.811334] __dl_server_attach_root: cpu=4 rd_span=0-5 total_bw=314568
[ 57.811378] CPU0 attaching sched-domain(s):
[ 57.834511] domain-0: span=0,3 level=MC
[ 57.838437] groups: 0:{ span=0 }, 3:{ span=3 }
[ 57.843067] __dl_sub: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 57.843070] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 57.843075] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 57.843078] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 57.843080] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 57.843083] CPU3 attaching sched-domain(s):
[ 57.879064] domain-0: span=0,3 level=MC
[ 57.882987] groups: 3:{ span=3 }, 0:{ span=0 }
[ 57.887613] __dl_sub: cpus=3 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 57.887617] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 57.887622] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 57.887625] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 57.887628] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 57.887632] root domain span: 0,3
[ 57.923352] default domain span: 1-2,4-5
[ 57.927282] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
[ 57.934554] psci: CPU4 killed (polled 0 ms)
[ 57.939539] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DYN span=0,3
[ 57.939579] CPU0 attaching NULL sched-domain.
[ 57.953763] span=1-2,4-5
[ 57.956301] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=0,3 type=DYN
[ 57.956305] __dl_server_detach_root: cpu=0 rd_span=0,3 total_bw=52428
[ 57.956313] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,4-5
[ 57.956317] __dl_add: cpus=3 tsk_bw=52428 total_bw=262140 span=0-2,4-5 type=DEF
[ 57.956320] __dl_server_attach_root: cpu=0 rd_span=0-2,4-5 total_bw=262140
[ 57.956322] CPU3 attaching NULL sched-domain.
[ 57.994121] span=0-2,4-5
[ 57.996656] rq_attach_root: cpu=3 old_span= new_span=0-2,4-5
[ 57.996660] __dl_add: cpus=3 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 57.996663] __dl_server_attach_root: cpu=3 rd_span=0-5 total_bw=314568
[ 57.996700] CPU0 attaching NULL sched-domain.
[ 58.020170] span=0-5
[ 58.022357] __dl_sub: cpus=3 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 58.022361] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 58.022367] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 58.022370] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 58.022372] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 58.022375] root domain span: 0
[ 58.057313] default domain span: 1-5
[ 58.060900] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 58.068835] psci: CPU3 killed (polled 0 ms)
[ 58.073751] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2 type=DEF span=1-5
[ 58.073882] dl_clear_root_domain: span=0 type=DYN
[ 58.073895] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 58.073909] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 58.103900] psci: CPU2 killed (polled 0 ms)
[ 58.108365] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1 type=DEF span=1-5
[ 58.108466] Error taking CPU1 down: -16
[ 58.121881] Non-boot CPUs are not disabled
[ 58.126007] Enabling non-boot CPUs ...
[ 58.130263] Detected PIPT I-cache on CPU2
[ 58.134300] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU2: 0x0000009444c004
[ 58.145808] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU2: 0x00000010305116
[ 58.158044] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU2: 0x00000003001066
[ 58.169980] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
[ 58.177584] dl_clear_root_domain: span=0 type=DYN
[ 58.177600] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 58.177616] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 58.195968] CPU2 is up
[ 58.198522] Detected PIPT I-cache on CPU3
[ 58.202566] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
[ 58.209359] CPU0 attaching NULL sched-domain.
[ 58.213728] span=1-5
[ 58.215920] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=0 type=DYN
[ 58.215924] __dl_server_detach_root: cpu=0 rd_span=0 total_bw=0
[ 58.215938] rq_attach_root: cpu=0 old_span= new_span=1-5
[ 58.215942] __dl_add: cpus=4 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 58.215945] __dl_server_attach_root: cpu=0 rd_span=0-5 total_bw=314568
[ 58.215989] CPU0 attaching sched-domain(s):
[ 58.251212] domain-0: span=0,3 level=MC
[ 58.255140] groups: 0:{ span=0 cap=1023 }, 3:{ span=3 }
[ 58.260550] __dl_sub: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 58.260553] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 58.260559] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 58.260562] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 58.260565] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 58.260568] CPU3 attaching sched-domain(s):
[ 58.296559] domain-0: span=0,3 level=MC
[ 58.300484] groups: 3:{ span=3 }, 0:{ span=0 cap=1023 }
[ 58.305893] __dl_sub: cpus=3 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 58.305896] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 58.305898] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 58.305901] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 58.305903] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 58.305906] root domain span: 0,3
[ 58.341652] default domain span: 1-2,4-5
[ 58.345584] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
[ 58.352381] CPU3 is up
[ 58.354918] Detected PIPT I-cache on CPU4
[ 58.358944] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
[ 58.365683] CPU0 attaching NULL sched-domain.
[ 58.370050] span=1-2,4-5
[ 58.372588] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=0,3 type=DYN
[ 58.372591] __dl_server_detach_root: cpu=0 rd_span=0,3 total_bw=52428
[ 58.372600] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,4-5
[ 58.372603] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-2,4-5 type=DEF
[ 58.372606] __dl_server_attach_root: cpu=0 rd_span=0-2,4-5 total_bw=262140
[ 58.372609] CPU3 attaching NULL sched-domain.
[ 58.410451] span=0-2,4-5
[ 58.412983] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=3 type=DYN
[ 58.412986] __dl_server_detach_root: cpu=3 rd_span=3 total_bw=0
[ 58.412994] rq_attach_root: cpu=3 old_span= new_span=0-2,4-5
[ 58.412996] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 58.412999] __dl_server_attach_root: cpu=3 rd_span=0-5 total_bw=314568
[ 58.413050] CPU0 attaching sched-domain(s):
[ 58.448620] domain-0: span=0,3-4 level=MC
[ 58.452720] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 58.458569] __dl_sub: cpus=5 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 58.458573] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 58.458579] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 58.458582] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 58.458584] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 58.458588] CPU3 attaching sched-domain(s):
[ 58.494583] domain-0: span=0,3-4 level=MC
[ 58.498683] groups: 3:{ span=3 }, 4:{ span=4 }, 0:{ span=0 }
[ 58.504528] __dl_sub: cpus=4 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 58.504532] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 58.504537] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 58.504540] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 58.504542] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 58.504546] CPU4 attaching sched-domain(s):
[ 58.541150] domain-0: span=0,3-4 level=MC
[ 58.545250] groups: 4:{ span=4 }, 0:{ span=0 }, 3:{ span=3 }
[ 58.551098] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 58.551102] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 58.551104] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 58.551107] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 58.551110] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 58.551114] root domain span: 0,3-4
[ 58.588247] default domain span: 1-2,5
[ 58.592005] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
[ 58.599032] CPU4 is up
[ 58.601554] Detected PIPT I-cache on CPU5
[ 58.605580] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
[ 58.612307] CPU0 attaching NULL sched-domain.
[ 58.616680] span=1-2,5
[ 58.619044] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=0,3-4 type=DYN
[ 58.619048] __dl_server_detach_root: cpu=0 rd_span=0,3-4 total_bw=104856
[ 58.619055] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,5
[ 58.619059] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DEF
[ 58.619062] __dl_server_attach_root: cpu=0 rd_span=0-2,5 total_bw=209712
[ 58.619064] CPU3 attaching NULL sched-domain.
[ 58.656885] span=0-2,5
[ 58.659250] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=3-4 type=DYN
[ 58.659253] __dl_server_detach_root: cpu=3 rd_span=3-4 total_bw=52428
[ 58.659259] rq_attach_root: cpu=3 old_span=NULL new_span=0-2,5
[ 58.659262] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-3,5 type=DEF
[ 58.659264] __dl_server_attach_root: cpu=3 rd_span=0-3,5 total_bw=262140
[ 58.659267] CPU4 attaching NULL sched-domain.
[ 58.696560] span=0-3,5
[ 58.698923] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=4 type=DYN
[ 58.698926] __dl_server_detach_root: cpu=4 rd_span=4 total_bw=0
[ 58.698934] rq_attach_root: cpu=4 old_span= new_span=0-3,5
[ 58.698937] __dl_add: cpus=6 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 58.698940] __dl_server_attach_root: cpu=4 rd_span=0-5 total_bw=314568
[ 58.698995] CPU0 attaching sched-domain(s):
[ 58.734390] domain-0: span=0,3-5 level=MC
[ 58.738489] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
[ 58.745557] __dl_sub: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 58.745560] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 58.745566] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 58.745568] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 58.745571] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 58.745575] CPU3 attaching sched-domain(s):
[ 58.781573] domain-0: span=0,3-5 level=MC
[ 58.785670] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }
[ 58.792737] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 58.792741] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 58.792747] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 58.792750] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 58.792752] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 58.792755] CPU4 attaching sched-domain(s):
[ 58.829355] domain-0: span=0,3-5 level=MC
[ 58.833452] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }, 3:{ span=3 }
[ 58.840519] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 58.840523] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 58.840528] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 58.840531] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 58.840534] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 58.840537] CPU5 attaching sched-domain(s):
[ 58.878360] domain-0: span=0,3-5 level=MC
[ 58.882456] groups: 5:{ span=5 }, 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 58.889520] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-2,5 type=DEF
[ 58.889524] __dl_server_detach_root: cpu=5 rd_span=1-2,5 total_bw=104856
[ 58.889527] rq_attach_root: cpu=5 old_span=NULL new_span=0,3-4
[ 58.889530] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 58.889532] __dl_server_attach_root: cpu=5 rd_span=0,3-5 total_bw=209712
[ 58.889536] root domain span: 0,3-5
[ 58.926504] default domain span: 1-2
[ 58.930083] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 58.937155] dl_clear_root_domain: span=0,3-5 type=DYN
[ 58.937158] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 58.937161] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 58.937164] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 58.937167] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 58.937170] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 58.977514] dl_clear_root_domain: span=1-2 type=DEF
[ 58.977517] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 58.977520] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 58.977534] CPU5 is up
[ 59.005875] dwc-eth-dwmac 2490000.ethernet eth0: configuring for phy/rgmii link mode
[ 59.013772] dwmac4: Master AXI performs any burst length
[ 59.019112] dwc-eth-dwmac 2490000.ethernet eth0: No Safety Features support found
[ 59.026621] dwc-eth-dwmac 2490000.ethernet eth0: IEEE 1588-2008 Advanced Timestamp supported
[ 59.035356] dwc-eth-dwmac 2490000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 59.049449] usb-conn-gpio 3520000.padctl:ports:usb2-0:connector: repeated role: device
[ 59.052430] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 59.087095] OOM killer enabled.
[ 59.090240] Restarting tasks ... done.
[ 59.095664] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 59.101194] random: crng reseeded on system resumption
[ 59.106418] PM: suspend exit
[ 59.153379] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 59.214971] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 59.284141] VDDIO_SDMMC3_AP: voltage operation not allowed
On 14/02/25 10:05, Jon Hunter wrote:
...
> Sorry for the delay, the day got away from me. However, it is still not
> working :-(
Ouch.
> Console log is attached.
Thanks. Did you happen to also collect a corresponding trace?
>
> Jon
>
> --
> nvpublic
> U-Boot 2020.04-g6b630d64fd (Feb 19 2021 - 08:38:59 -0800)
>
> SoC: tegra186
> Model: NVIDIA P2771-0000-500
> Board: NVIDIA P2771-0000
> DRAM: 7.8 GiB
> MMC: sdhci@3400000: 1, sdhci@3460000: 0
> Loading Environment from MMC... *** Warning - bad CRC, using default environment
>
> In: serial
> Out: serial
> Err: serial
> Net:
> Warning: ethernet@2490000 using MAC address from ROM
> eth0: ethernet@2490000
> Hit any key to stop autoboot: 2 1 0
> MMC: no card present
> switch to partitions #0, OK
> mmc0(part 0) is current device
> Scanning mmc 0:1...
> Found /boot/extlinux/extlinux.conf
> Retrieving file: /boot/extlinux/extlinux.conf
> 489 bytes read in 17 ms (27.3 KiB/s)
> 1: primary kernel
> Retrieving file: /boot/initrd
> 7236840 bytes read in 187 ms (36.9 MiB/s)
> Retrieving file: /boot/Image
> 47976960 bytes read in 1147 ms (39.9 MiB/s)
> append: earlycon console=ttyS0,115200n8 fw_devlink=on root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81104/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2 video=tegrafb no_console_suspend=1 nvdumper_reserved=0x2772e0000 gpt rootfs.slot_suffix= usbcore.old_scheme_first=1 tegraid=18.1.2.0.0 maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.2031647.1 vpr_resize bl_prof_dataptr=0x10000@0x275840000 sdhci_tegra.en_boot_part_access=1 no_console_suspend root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81104/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2
> Retrieving file: /boot/dtb/tegra186-p2771-0000.dtb
> 108349 bytes read in 21 ms (4.9 MiB/s)
> ## Flattened Device Tree blob at 88400000
> Booting using the fdt blob at 0x88400000
> Using Device Tree in place at 0000000088400000, end 000000008841d73c
> copying carveout for /host1x@13e00000/display-hub@15200000/display@15200000...
> copying carveout for /host1x@13e00000/display-hub@15200000/display@15210000...
> copying carveout for /host1x@13e00000/display-hub@15200000/display@15220000...
> DT node /trusty missing in source; can't copy status
> DT node /reserved-memory/fb0_carveout missing in source; can't copy
> DT node /reserved-memory/fb1_carveout missing in source; can't copy
> DT node /reserved-memory/fb2_carveout missing in source; can't copy
> DT node /reserved-memory/ramoops_carveout missing in source; can't copy
> DT node /reserved-memory/vpr-carveout missing in source; can't copy
>
> Starting kernel ...
>
> [ 0.000000] Booting Linux on physical CPU 0x0000000100 [0x411fd073]
> [ 0.000000] Linux version 6.13.0-rc6-next-20250110-00006-g8af20d375c86 (jonathanh@goldfinger) (aarch64-linux-gcc.br_real (Buildroot 2022.08) 11.3.0, GNU ld (GNU Binutils) 2.38) #2 SMP PREEMPT Fri Feb 14 01:41:10 PST 2025
> [ 0.000000] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 0.000000] rq_attach_root: cpu=1 old_span=NULL new_span=0
> [ 0.000000] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
> [ 0.000000] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
> [ 0.000000] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
> [ 0.000000] rq_attach_root: cpu=5 old_span=NULL new_span=0-4
> [ 0.000000] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 0.000000] rq_attach_root: cpu=1 old_span=NULL new_span=0
> [ 0.000000] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
> [ 0.000000] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
> [ 0.000000] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
> [ 0.000000] rq_attach_root: cpu=5 old_span=NULL new_span=0-4
> [ 0.024665] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0-5 type=DEF
> [ 0.039973] smp: Bringing up secondary CPUs ...
> [ 0.049237] CPU1: Booted secondary processor 0x0000000000 [0x4e0f0030]
> [ 0.049311] __dl_add: cpus=1 tsk_bw=52428 total_bw=104856 span=0-5 type=DEF
> [ 0.060738] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
> [ 0.060792] __dl_add: cpus=2 tsk_bw=52428 total_bw=157284 span=0-5 type=DEF
> [ 0.068381] Detected PIPT I-cache on CPU3
> [ 0.068475] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
> [ 0.068501] __dl_add: cpus=3 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
> [ 0.076341] Detected PIPT I-cache on CPU4
> [ 0.076406] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
> [ 0.076430] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
> [ 0.076974] Detected PIPT I-cache on CPU5
> [ 0.077039] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
> [ 0.077064] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
> [ 0.077141] smp: Brought up 1 node, 6 CPUs
> [ 0.077177] SMP: Total of 6 processors activated.
> [ 0.077184] CPU: All CPU(s) started at EL2
> [ 0.077196] CPU features: detected: 32-bit EL0 Support
> [ 0.077203] CPU features: detected: 32-bit EL1 Support
> [ 0.077211] CPU features: detected: CRC32 instructions
> [ 0.077300] alternatives: applying system-wide alternatives
> [ 0.085706] CPU0 attaching sched-domain(s):
> [ 0.085726] domain-0: span=0,3-5 level=MC
> [ 0.085741] groups: 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
> [ 0.085782] __dl_sub: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
> [ 0.085789] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
> [ 0.085796] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 0.085801] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
> [ 0.085805] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
> [ 0.085809] CPU3 attaching sched-domain(s):
> [ 0.085836] domain-0: span=0,3-5 level=MC
> [ 0.085846] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }
> [ 0.085885] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
> [ 0.085889] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
> [ 0.085894] rq_attach_root: cpu=3 old_span=NULL new_span=0
> [ 0.085897] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
> [ 0.085900] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
> [ 0.085904] CPU4 attaching sched-domain(s):
> [ 0.085930] domain-0: span=0,3-5 level=MC
> [ 0.085940] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }
> [ 0.085977] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
> [ 0.085981] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
> [ 0.085985] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
> [ 0.085989] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
> [ 0.085993] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
> [ 0.085996] CPU5 attaching sched-domain(s):
> [ 0.086023] domain-0: span=0,3-5 level=MC
> [ 0.086033] groups: 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }
> [ 0.086070] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-2,5 type=DEF
> [ 0.086075] __dl_server_detach_root: cpu=5 rd_span=1-2,5 total_bw=104856
> [ 0.086079] rq_attach_root: cpu=5 old_span=NULL new_span=0,3-4
> [ 0.086082] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 0.086085] __dl_server_attach_root: cpu=5 rd_span=0,3-5 total_bw=209712
> [ 0.086089] root domain span: 0,3-5
> [ 0.086114] default domain span: 1-2
> [ 4.717856] cpufreq: cpufreq_online: CPU0: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
> [ 4.728789] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 4.728796] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 4.728802] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 4.728806] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 4.728810] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 4.728815] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 4.769218] dl_clear_root_domain: span=1-2 type=DEF
> [ 4.769222] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 4.769227] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [ 4.769301] __dl_sub: cpus=4 tsk_bw=104857 total_bw=104855 span=0,3-5 type=DYN
Not sure where this dl_sub is coming from. The stack trace in the trace
data should tell us. The tsk_bw looks similar to sugov's, though, so maybe
there is still a spot where we should be ignoring it?
> [ 4.769377] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 4.769382] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 4.769387] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 4.769392] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 4.769396] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 4.769400] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 4.835665] dl_clear_root_domain: span=1-2 type=DEF
> [ 4.835669] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 4.835673] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [ 4.835733] __dl_sub: cpus=4 tsk_bw=104857 total_bw=104855 span=0,3-5 type=DYN
> [ 4.835784] cpufreq: cpufreq_online: CPU3: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
> [ 4.872499] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 4.872504] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 4.872509] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 4.872513] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 4.872517] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 4.872521] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 4.912870] dl_clear_root_domain: span=1-2 type=DEF
> [ 4.912874] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 4.912879] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [ 4.912973] cpufreq: cpufreq_online: CPU4: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
> [ 4.942474] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 4.942478] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 4.942483] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 4.942487] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 4.942491] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 4.942495] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 4.982819] dl_clear_root_domain: span=1-2 type=DEF
> [ 4.982821] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 4.982824] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [ 4.982889] cpufreq: cpufreq_online: CPU5: Running at unlisted initial frequency: 1344000 kHz, changing to: 1382400 kHz
> [ 5.012384] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 5.012388] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 5.012393] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 5.012397] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 5.012401] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 5.012405] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 5.052728] dl_clear_root_domain: span=1-2 type=DEF
> [ 5.052730] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 5.052733] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [ 5.054374] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 5.054380] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 5.054383] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 5.054386] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 5.054389] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 5.054392] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 5.060615] sdhci-tegra 3440000.mmc: Adding to iommu group 4
> [ 5.066085] dl_clear_root_domain: span=1-2 type=DEF
> [ 5.066090] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 5.066092] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
I wonder what is triggering the rebuild below, now that cpufreq should be
up and running. Again, the trace data should hopefully tell us.
> [ 16.698922] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 16.698933] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 16.698941] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 16.698946] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 16.698951] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 16.698956] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 16.739375] dl_clear_root_domain: span=1-2 type=DEF
> [ 16.739382] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 16.739386] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [ 16.758528] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 16.758536] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 16.758541] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 16.758544] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 16.758548] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 16.758551] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 16.799668] dl_clear_root_domain: span=1-2 type=DEF
> [ 16.799676] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 16.799680] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [ 16.814674] dl_clear_root_domain: span=0,3-5 type=DYN
> [ 16.814681] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [ 16.814686] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [ 16.814689] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 16.814692] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [ 16.814696] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [ 16.860445] dl_clear_root_domain: span=1-2 type=DEF
> [ 16.860450] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [ 16.860454] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 6.13.0-rc6-next-20250110-00006-g8af20d375c86 aarch64)
> (standard Ubuntu login banner, interleaved with the console output below, elided)
> [   16.879557] dl_clear_root_domain: span=0,3-5 type=DYN
> [   16.879564] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [   16.879569] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [   16.879572] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [   16.879575] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [   16.879578] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [   16.934775] dl_clear_root_domain: span=1-2 type=DEF
> [   16.934781] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [   16.934784] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [   16.959842] dl_clear_root_domain: span=0,3-5 type=DYN
> [   16.959853] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [   16.959861] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [   16.959868] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [   16.959873] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [   16.959879] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [   17.013809] dl_clear_root_domain: span=1-2 type=DEF
> [   17.013817] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [   17.013822] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
> [   17.026473] dl_clear_root_domain: span=0,3-5 type=DYN
> [   17.026480] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
> [   17.026485] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
> [   17.026488] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [   17.026491] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
> [   17.026495] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
> [   17.088060] dl_clear_root_domain: span=1-2 type=DEF
> [   17.088066] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
> [   17.088071] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
At this point I believe you triggered suspend.
> [ 57.290150] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
> [ 57.335619] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
> [ 57.353364] dwc-eth-dwmac 2490000.ethernet eth0: Link is Down
> [ 57.397022] Disabling non-boot CPUs ...
Offlining CPU5.
> [ 57.400904] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0,3-5
> [ 57.400949] CPU0 attaching NULL sched-domain.
> [ 57.415298] span=1-2
> [ 57.417483] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> [ 57.417487] __dl_server_detach_root: cpu=0 rd_span=0,3-5 total_bw=157284
> [ 57.417496] rq_attach_root: cpu=0 old_span=NULL new_span=1-2
> [ 57.417501] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DEF
> [ 57.417504] __dl_server_attach_root: cpu=0 rd_span=0-2 total_bw=157284
> [ 57.417507] CPU3 attaching NULL sched-domain.
> [ 57.454804] span=0-2
> [ 57.456987] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=3-5 type=DYN
> [ 57.456990] __dl_server_detach_root: cpu=3 rd_span=3-5 total_bw=104856
> [ 57.456998] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
> [ 57.457000] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-3 type=DEF
> [ 57.457003] __dl_server_attach_root: cpu=3 rd_span=0-3 total_bw=209712
> [ 57.457006] CPU4 attaching NULL sched-domain.
> [ 57.493964] span=0-3
> [ 57.496152] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=4-5 type=DYN
> [ 57.496156] __dl_server_detach_root: cpu=4 rd_span=4-5 total_bw=52428
> [ 57.496162] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
> [ 57.496165] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-4 type=DEF
> [ 57.496168] __dl_server_attach_root: cpu=4 rd_span=0-4 total_bw=262140
> [ 57.496171] CPU5 attaching NULL sched-domain.
> [ 57.532952] span=0-4
> [ 57.535143] rq_attach_root: cpu=5 old_span= new_span=0-4
> [ 57.535147] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
Maybe we shouldn't add the dl_server contribution of a CPU that is going
to be offline.
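The guard being suggested can be sketched as a toy model in plain C. Everything below (struct root_domain, dl_server_attach(), the bitmask standing in for cpu_active_mask, the bandwidth constant) is a hypothetical simplification of the kernel's accounting, not the real API; it only illustrates skipping the dl_server contribution of a CPU that is on its way offline:

```c
#include <assert.h>
#include <stdint.h>

#define FAIR_SERVER_BW 52428ULL		/* 5% of 1 << 20, as in the logs above */

/* Toy stand-in for the kernel's root domain; only tracks total_bw. */
struct root_domain { uint64_t total_bw; };

/* cpu_active_mask modeled as a plain bitmask. */
static int cpu_active(uint64_t active_mask, int cpu)
{
	return (active_mask >> cpu) & 1;
}

/*
 * Hypothetical attach path: add @cpu's fair server bandwidth to @rd,
 * unless the CPU is no longer active (i.e. it is going offline).
 */
static void dl_server_attach(struct root_domain *rd, uint64_t active_mask, int cpu)
{
	if (!cpu_active(active_mask, cpu))
		return;		/* dying CPU: don't inflate total_bw */
	rd->total_bw += FAIR_SERVER_BW;
}
```

With CPUs 0-4 active and CPU5 already deactivated, attaching all six runqueues leaves total_bw at 5 x 52428 = 262140 instead of the 314568 seen in the log above.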
> [ 57.535149] __dl_server_attach_root: cpu=5 rd_span=0-5 total_bw=314568
> [ 57.535211] CPU0 attaching sched-domain(s):
> [ 57.558178] domain-0: span=0,3-4 level=MC
> [ 57.562276] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
> [ 57.568126] __dl_sub: cpus=5 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
> [ 57.568129] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
> [ 57.568136] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 57.568139] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
> [ 57.568142] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
> [ 57.568145] CPU3 attaching sched-domain(s):
> [ 57.604141] domain-0: span=0,3-4 level=MC
> [ 57.608242] groups: 3:{ span=3 }, 4:{ span=4 }, 0:{ span=0 }
> [ 57.614088] __dl_sub: cpus=4 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
> [ 57.614091] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
> [ 57.614098] rq_attach_root: cpu=3 old_span=NULL new_span=0
> [ 57.614100] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
> [ 57.614103] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
> [ 57.614106] CPU4 attaching sched-domain(s):
> [ 57.650710] domain-0: span=0,3-4 level=MC
> [ 57.654812] groups: 4:{ span=4 }, 0:{ span=0 }, 3:{ span=3 }
> [ 57.660656] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
> [ 57.660660] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
> [ 57.660666] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
> [ 57.660669] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
> [ 57.660671] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
> [ 57.660675] root domain span: 0,3-4
> [ 57.697801] default domain span: 1-2,5
> [ 57.701560] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
> [ 57.709917] psci: CPU5 killed (polled 0 ms)
Offlining CPU4.
> [ 57.714734] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3 type=DYN span=0,3-4
> [ 57.714773] CPU0 attaching NULL sched-domain.
> [ 57.729120] span=1-2,5
> [ 57.731483] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3-4 type=DYN
> [ 57.731488] __dl_server_detach_root: cpu=0 rd_span=0,3-4 total_bw=104856
> [ 57.731496] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,5
> [ 57.731499] __dl_add: cpus=3 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DEF
> [ 57.731503] __dl_server_attach_root: cpu=0 rd_span=0-2,5 total_bw=209712
> [ 57.731506] CPU3 attaching NULL sched-domain.
> [ 57.769309] span=0-2,5
> [ 57.771670] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=3-4 type=DYN
> [ 57.771673] __dl_server_detach_root: cpu=3 rd_span=3-4 total_bw=52428
> [ 57.771680] rq_attach_root: cpu=3 old_span=NULL new_span=0-2,5
> [ 57.771682] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-3,5 type=DEF
> [ 57.771685] __dl_server_attach_root: cpu=3 rd_span=0-3,5 total_bw=262140
> [ 57.771688] CPU4 attaching NULL sched-domain.
> [ 57.808967] span=0-3,5
> [ 57.811327] rq_attach_root: cpu=4 old_span= new_span=0-3,5
> [ 57.811331] __dl_add: cpus=4 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
> [ 57.811334] __dl_server_attach_root: cpu=4 rd_span=0-5 total_bw=314568
> [ 57.811378] CPU0 attaching sched-domain(s):
> [ 57.834511] domain-0: span=0,3 level=MC
> [ 57.838437] groups: 0:{ span=0 }, 3:{ span=3 }
> [ 57.843067] __dl_sub: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
> [ 57.843070] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
> [ 57.843075] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 57.843078] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
> [ 57.843080] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
> [ 57.843083] CPU3 attaching sched-domain(s):
> [ 57.879064] domain-0: span=0,3 level=MC
> [ 57.882987] groups: 3:{ span=3 }, 0:{ span=0 }
> [ 57.887613] __dl_sub: cpus=3 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
> [ 57.887617] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
> [ 57.887622] rq_attach_root: cpu=3 old_span=NULL new_span=0
> [ 57.887625] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
> [ 57.887628] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
> [ 57.887632] root domain span: 0,3
> [ 57.923352] default domain span: 1-2,4-5
> [ 57.927282] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
> [ 57.934554] psci: CPU4 killed (polled 0 ms)
Offlining CPU3.
> [ 57.939539] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DYN span=0,3
> [ 57.939579] CPU0 attaching NULL sched-domain.
> [ 57.953763] span=1-2,4-5
> [ 57.956301] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=0,3 type=DYN
> [ 57.956305] __dl_server_detach_root: cpu=0 rd_span=0,3 total_bw=52428
> [ 57.956313] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,4-5
> [ 57.956317] __dl_add: cpus=3 tsk_bw=52428 total_bw=262140 span=0-2,4-5 type=DEF
> [ 57.956320] __dl_server_attach_root: cpu=0 rd_span=0-2,4-5 total_bw=262140
> [ 57.956322] CPU3 attaching NULL sched-domain.
> [ 57.994121] span=0-2,4-5
> [ 57.996656] rq_attach_root: cpu=3 old_span= new_span=0-2,4-5
> [ 57.996660] __dl_add: cpus=3 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
> [ 57.996663] __dl_server_attach_root: cpu=3 rd_span=0-5 total_bw=314568
> [ 57.996700] CPU0 attaching NULL sched-domain.
> [ 58.020170] span=0-5
> [ 58.022357] __dl_sub: cpus=3 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
> [ 58.022361] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
> [ 58.022367] rq_attach_root: cpu=0 old_span=NULL new_span=
> [ 58.022370] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
> [ 58.022372] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
> [ 58.022375] root domain span: 0
> [ 58.057313] default domain span: 1-5
> [ 58.060900] rd 0: Checking EAS, CPUs do not have asymmetric capacities
> [ 58.068835] psci: CPU3 killed (polled 0 ms)
Offlining CPU2.
> [ 58.073751] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2 type=DEF span=1-5
> [ 58.073882] dl_clear_root_domain: span=0 type=DYN
> [ 58.073895] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
> [ 58.073909] rd 0: Checking EAS, CPUs do not have asymmetric capacities
> [ 58.103900] psci: CPU2 killed (polled 0 ms)
We also probably need to remove isolated CPUs' contributions to the DEF
root domain when they are offlined (missing __dl_sub).
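A minimal sketch of what a symmetric subtraction could look like (the names here, isolated_cpu_online()/isolated_cpu_offline() and the dl_bw struct, are hypothetical simplifications, not the kernel's functions): without the subtract on offline, the DEF total_bw keeps the stale contribution.

```c
#include <assert.h>
#include <stdint.h>

#define FAIR_SERVER_BW 52428ULL

/* Toy stand-in for the DEF root domain's dl_bw accounting. */
struct dl_bw { uint64_t total_bw; };

/* Isolated CPU comes up on DEF: its fair server bandwidth is added. */
static void isolated_cpu_online(struct dl_bw *def)
{
	def->total_bw += FAIR_SERVER_BW;
}

/*
 * The suspected missing half: when the isolated CPU is offlined,
 * mirror the add with a subtract so DEF drops its contribution.
 */
static void isolated_cpu_offline(struct dl_bw *def)
{
	def->total_bw -= FAIR_SERVER_BW;
}
```

After onlining two isolated CPUs and offlining one, total_bw is back to a single dl_server contribution rather than retaining both.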
Offlining CPU1 (fail).
> [ 58.108365] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1 type=DEF span=1-5
> [ 58.108466] Error taking CPU1 down: -16
> [ 58.121881] Non-boot CPUs are not disabled
> [ 58.126007] Enabling non-boot CPUs ...
Revert follows.
Still wondering why it doesn't fail for me, now that it doesn't seem
related to sugov anymore. :/
Anyway, apart from possibly sharing tracing data, could you please try
to repro with the performance governor (from boot)?
Thanks,
Juri
Hi!

On 17/02/25 17:08, Juri Lelli wrote:
> On 14/02/25 10:05, Jon Hunter wrote:

...

> At this point I believe you triggered suspend.

[...]

> Maybe we shouldn't add the dl_server contribution of a CPU that is going
> to be offline.

I tried to implement this idea and ended up with the following. As usual
also pushed it to the branch on github. Could you please update and
re-test?

Another thing that I noticed is that in my case a hotplug operation
generating a sched/root domain rebuild ends up calling
dl_rebuild_rd_accounting() (from partition_and_rebuild_sched_domains()),
which resets accounting for the def and dyn domains. In your case
(looking again at the last dmesg you shared) I don't see this call, so I
wonder if, for some reason related to your setup, we do the rebuild by
calling partition_sched_domains() (instead of partition_and_rebuild_)
and this doesn't call dl_rebuild_rd_accounting() after
partition_sched_domains_locked() - maybe it should? Dietmar, Christian,
Peter, what do you think?

Thanks,
Juri
On 18/02/2025 10:58, Juri Lelli wrote:
> Hi!
>
> On 17/02/25 17:08, Juri Lelli wrote:
>> On 14/02/25 10:05, Jon Hunter wrote:
>
> ...
>
>> At this point I believe you triggered suspend.
>>
>>> [ 57.290150] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
>>> [ 57.335619] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
>>> [ 57.353364] dwc-eth-dwmac 2490000.ethernet eth0: Link is Down
>>> [ 57.397022] Disabling non-boot CPUs ...
>>
>> Offlining CPU5.
>>
>>> [ 57.400904] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0,3-5
>>> [ 57.400949] CPU0 attaching NULL sched-domain.
>>> [ 57.415298] span=1-2
>>> [ 57.417483] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
>>> [ 57.417487] __dl_server_detach_root: cpu=0 rd_span=0,3-5 total_bw=157284
>>> [ 57.417496] rq_attach_root: cpu=0 old_span=NULL new_span=1-2
>>> [ 57.417501] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DEF
>>> [ 57.417504] __dl_server_attach_root: cpu=0 rd_span=0-2 total_bw=157284
>>> [ 57.417507] CPU3 attaching NULL sched-domain.
>>> [ 57.454804] span=0-2
>>> [ 57.456987] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=3-5 type=DYN
>>> [ 57.456990] __dl_server_detach_root: cpu=3 rd_span=3-5 total_bw=104856
>>> [ 57.456998] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
>>> [ 57.457000] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-3 type=DEF
>>> [ 57.457003] __dl_server_attach_root: cpu=3 rd_span=0-3 total_bw=209712
>>> [ 57.457006] CPU4 attaching NULL sched-domain.
>>> [ 57.493964] span=0-3
>>> [ 57.496152] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=4-5 type=DYN
>>> [ 57.496156] __dl_server_detach_root: cpu=4 rd_span=4-5 total_bw=52428
>>> [ 57.496162] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
>>> [ 57.496165] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-4 type=DEF
>>> [ 57.496168] __dl_server_attach_root: cpu=4 rd_span=0-4 total_bw=262140
>>> [ 57.496171] CPU5 attaching NULL sched-domain.
>>> [ 57.532952] span=0-4
>>> [ 57.535143] rq_attach_root: cpu=5 old_span= new_span=0-4
>>> [ 57.535147] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
>>
>> Maybe we shouldn't add the dl_server contribution of a CPU that is going
>> to be offline.
>
> I tried to implement this idea and ended up with the following. As usual
> also pushed it to the branch on github. Could you please update and
> re-test?
>
> Another thing that I noticed is that in my case an hotplug operation
> generating a sched/root domain rebuild ends up calling dl_rebuild_
> rd_accounting() (from partition_and_rebuild_sched_domains()) which
> resets accounting for def and dyn domains. In your case (looking again
> at the last dmesg you shared) I don't see this call, so I wonder if for
> some reason related to your setup we do the rebuild by calling partition_
> sched_domains() (instead of partition_and_rebuild_) and this doesn't
> call dl_rebuild_rd_accounting() after partition_sched_domains_locked() -
> maybe it should? Dietmar, Christian, Peter, what do you think?
Yeah, looks like suspend/resume behaves differently compared to CPU hotplug.
On my Juno [L b b L L L]
^^^
isolcpus=[2,3]
# ps2 | grep DLN
98 98 S 140 0 - DLN sugov:0
99 99 S 140 0 - DLN sugov:1
# taskset -p 98; taskset -p 99
pid 98's current affinity mask: 39
pid 99's current affinity mask: 6
[ 87.679282] partition_sched_domains() called
...
[ 87.684013] partition_sched_domains() called
...
[ 87.687961] partition_sched_domains() called
...
[ 87.689419] psci: CPU3 killed (polled 0 ms)
[ 87.689715] __dl_bw_capacity() mask=2-5 cap=1024
[ 87.689739] dl_bw_cpus() cpu=6 rd->span=2-5 cpu_active_mask=0-2 cpus=1
[ 87.689757] dl_bw_manage: cpu=2 cap=0 fair_server_bw=52428
total_bw=209712 dl_bw_cpus=1 type=DEF span=2-5
[ 87.689775] dl_bw_cpus() cpu=6 rd->span=2-5 cpu_active_mask=0-2 cpus=1
[ 87.689789] dl_bw_manage() cpu=2 cap=0 overflow=1 return=-16
[ 87.689864] Error taking CPU2 down: -16 <-- !!!
...
[ 87.690674] partition_sched_domains() called
...
[ 87.691496] partition_sched_domains() called
...
[ 87.693702] partition_sched_domains() called
...
[ 87.695819] partition_and_rebuild_sched_domains() called
On 18/02/25 15:12, Dietmar Eggemann wrote:
> On 18/02/2025 10:58, Juri Lelli wrote:
> > Hi!
> >
> > On 17/02/25 17:08, Juri Lelli wrote:
> >> On 14/02/25 10:05, Jon Hunter wrote:

[...]

> Yeah, looks like suspend/resume behaves differently compared to CPU hotplug.
>
> On my Juno [L b b L L L]
> ^^^
> isolcpus=[2,3]

[...]

> [ 87.689864] Error taking CPU2 down: -16 <-- !!!

[...]

Ah, OK. Did you try with my last proposed change?
On 18/02/2025 15:18, Juri Lelli wrote:
> On 18/02/25 15:12, Dietmar Eggemann wrote:
>> On 18/02/2025 10:58, Juri Lelli wrote:
>>> Hi!
>>>
>>> On 17/02/25 17:08, Juri Lelli wrote:
>>>> On 14/02/25 10:05, Jon Hunter wrote:
[...]
>> Yeah, looks like suspend/resume behaves differently compared to CPU hotplug.
>>
>> On my Juno [L b b L L L]
>> ^^^
>> isolcpus=[2,3]
>>
>> # ps2 | grep DLN
>> 98 98 S 140 0 - DLN sugov:0
>> 99 99 S 140 0 - DLN sugov:1
>>
>> # taskset -p 98; taskset -p 99
>> pid 98's current affinity mask: 39
>> pid 99's current affinity mask: 6
>>
>>
>> [ 87.679282] partition_sched_domains() called
>> ...
>> [ 87.684013] partition_sched_domains() called
>> ...
>> [ 87.687961] partition_sched_domains() called
>> ...
>> [ 87.689419] psci: CPU3 killed (polled 0 ms)
>> [ 87.689715] __dl_bw_capacity() mask=2-5 cap=1024
>> [ 87.689739] dl_bw_cpus() cpu=6 rd->span=2-5 cpu_active_mask=0-2 cpus=1
>> [ 87.689757] dl_bw_manage: cpu=2 cap=0 fair_server_bw=52428
>> total_bw=209712 dl_bw_cpus=1 type=DEF span=2-5
>> [ 87.689775] dl_bw_cpus() cpu=6 rd->span=2-5 cpu_active_mask=0-2 cpus=1
>> [ 87.689789] dl_bw_manage() cpu=2 cap=0 overflow=1 return=-16
>> [ 87.689864] Error taking CPU2 down: -16 <-- !!!
>> ...
>> [ 87.690674] partition_sched_domains() called
>> ...
>> [ 87.691496] partition_sched_domains() called
>> ...
>> [ 87.693702] partition_sched_domains() called
>> ...
>> [ 87.695819] partition_and_rebuild_sched_domains() called
>>
>
> Ah, OK. Did you try with my last proposed change?
I did now.
Patch-wise I have:
(1) Putting 'fair_server's __dl_server_[de|at]tach_root() under if
'(cpumask_test_cpu(rq->cpu, [old_rd->online|cpu_active_mask))' in
rq_attach_root()
https://lkml.kernel.org/r/Z7RhNmLpOb7SLImW@jlelli-thinkpadt14gen4.remote.csb
(2) Create __dl_server_detach_root() and call it in rq_attach_root()
https://lkml.kernel.org/r/Z4fd_6M2vhSMSR0i@jlelli-thinkpadt14gen4.remote.csb
plus debug patch:
https://lkml.kernel.org/r/Z6M5fQB9P1_bDF7A@jlelli-thinkpadt14gen4.remote.csb
plus additional debug.
The suspend issue still persists.
My hunch is that it's rather an issue with having 0 CPUs left in DEF
while deactivating the last isol CPU (CPU3), so we set overflow = 1 w/o
calling __dl_overflow(). We want to account fair_server_bw=52428
against 0 CPUs.
l B B l l l
^^^
isolcpus=[3,4]
cpumask_and(mask, rd->span, cpu_active_mask)
mask = [3-5] & [0-3] = [3] -> dl_bw_cpus(3) = 1
---
dl_bw_deactivate() called cpu=5
dl_bw_deactivate() called cpu=4
dl_bw_deactivate() called cpu=3
dl_bw_cpus() cpu=6 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
cpumask_subset(rd->span, cpu_active_mask) is false
for_each_cpu_and(i, rd->span, cpu_active_mask)
cpus++ <-- cpus is 1 !!!
dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
called w/ 'req = dl_bw_req_deactivate'
dl_b->total_bw - fair_server_bw = 104856 - 52428 > 0
dl_bw_cpus(cpu) - 1 = 0
overflow = 1
So there is simply no capacity left in DEF for DL but
'dl_b->total_bw - old_bw + new_bw' = 52428 > 0
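The decision path described above can be condensed into a few lines of plain C. This is only a sketch with simplified stand-ins: dl_bw_cpus() is reduced to a popcount over span & active, deactivate_overflows() is a hypothetical name, and the overflow test is reduced to the quantity computed above rather than the full __dl_overflow():

```c
#include <assert.h>
#include <stdint.h>

/* dl_bw_cpus() modeled as the number of active CPUs in the rd span. */
static int dl_bw_cpus(uint64_t span, uint64_t active)
{
	return __builtin_popcountll(span & active);
}

/*
 * Condensed form of the deactivate check: if this is the last active
 * CPU in the root domain, overflow is forced to 1 without ever looking
 * at the remaining bandwidth.
 */
static int deactivate_overflows(uint64_t span, uint64_t active,
				uint64_t total_bw, uint64_t fair_server_bw)
{
	if (dl_bw_cpus(span, active) - 1)
		return total_bw - fair_server_bw > 0;	/* crude __dl_overflow() stand-in */
	return 1;	/* 0 CPUs would remain: always refuse */
}
```

With span = {3-5} and cpu_active_mask = {0-3}, dl_bw_cpus() is 1, so deactivating CPU3 is refused no matter what total_bw holds.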
On 19/02/25 10:29, Dietmar Eggemann wrote:
...
> I did now.
Thanks!
> Patch-wise I have:
>
> (1) Putting 'fair_server's __dl_server_[de|at]tach_root() under if
> '(cpumask_test_cpu(rq->cpu, [old_rd->online|cpu_active_mask))' in
> rq_attach_root()
>
> https://lkml.kernel.org/r/Z7RhNmLpOb7SLImW@jlelli-thinkpadt14gen4.remote.csb
>
> (2) Create __dl_server_detach_root() and call it in rq_attach_root()
>
> https://lkml.kernel.org/r/Z4fd_6M2vhSMSR0i@jlelli-thinkpadt14gen4.remote.csb
>
> plus debug patch:
>
> https://lkml.kernel.org/r/Z6M5fQB9P1_bDF7A@jlelli-thinkpadt14gen4.remote.csb
>
> plus additional debug.
So you don't have the one with which we ignore special tasks while
rebuilding domains?
https://lore.kernel.org/all/Z6spnwykg6YSXBX_@jlelli-thinkpadt14gen4.remote.csb/
Could you please double check again against
git@github.com:jlelli/linux.git experimental/dl-debug
> The suspend issue still persists.
>
> My hunch is that it's rather an issue with having 0 CPUs left in DEF
> while deactivating the last isol CPU (CPU3) so we set overflow = 1 w/o
> calling __dl_overflow(). We want to account fair_server_bw=52428
> against 0 CPUs.
>
> l B B l l l
>
> ^^^
> isolcpus=[3,4]
>
>
> cpumask_and(mask, rd->span, cpu_active_mask)
>
> mask = [3-5] & [0-3] = [3] -> dl_bw_cpus(3) = 1
>
> ---
>
> dl_bw_deactivate() called cpu=5
>
> dl_bw_deactivate() called cpu=4
>
> dl_bw_deactivate() called cpu=3
>
> dl_bw_cpus() cpu=6 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
> ^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
> cpumask_subset(rd->span, cpu_active_mask) is false
>
> for_each_cpu_and(i, rd->span, cpu_active_mask)
> cpus++ <-- cpus is 1 !!!
>
> dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
^^^^^^
This still looks wrong: with a single CPU remaining we should only have
the corresponding dl server bandwidth present (unless there is some
other DL task running).
If you already had the patch ignoring sugov's bandwidth in your set,
could you please share the full dmesg?
Thanks!
On 19/02/2025 11:02, Juri Lelli wrote:
> On 19/02/25 10:29, Dietmar Eggemann wrote:
[...]
> So you don't have the one with which we ignore special tasks while
> rebuilding domains?
>
> https://lore.kernel.org/all/Z6spnwykg6YSXBX_@jlelli-thinkpadt14gen4.remote.csb/
>
> Could you please double check again against
>
> git@github.com:jlelli/linux.git experimental/dl-debug
Sorry, I forgot this one. Yes, I have it as well.
2993 void dl_add_task_root_domain(struct task_struct *p)
2994 {
2995 struct rq_flags rf;
2996 struct rq *rq;
2997 struct dl_bw *dl_b;
2998
2999 raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
3000 if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3001 raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
3002 return;
3003 }
>> The suspend issue still persists.
>>
>> My hunch is that it's rather an issue with having 0 CPUs left in DEF
>> while deactivating the last isol CPU (CPU3) so we set overflow = 1 w/o
>> calling __dl_overflow(). We want to account fair_server_bw=52428
>> against 0 CPUs.
>>
>> l B B l l l
>>
>> ^^^
>> isolcpus=[3,4]
>>
>>
>> cpumask_and(mask, rd->span, cpu_active_mask)
>>
>> mask = [3-5] & [0-3] = [3] -> dl_bw_cpus(3) = 1
>>
>> ---
>>
>> dl_bw_deactivate() called cpu=5
>>
>> dl_bw_deactivate() called cpu=4
>>
>> dl_bw_deactivate() called cpu=3
>>
>> dl_bw_cpus() cpu=6 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
>> ^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
>> cpumask_subset(rd->span, cpu_active_mask) is false
>>
>> for_each_cpu_and(i, rd->span, cpu_active_mask)
>> cpus++ <-- cpus is 1 !!!
>>
>> dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
> ^^^^^^
> This still looks wrong: with a single cpu remaining we should only have
> the corresponding dl server bandwidth present (unless there is some
> other DL task running.
That's true. '104856 - 52428 = 52428', so a util of 51? Which is 50% of
a sugov task? Or exactly the fair_server_bw.
But the bw numbers don't matter here, since we go straight into the
else path because dl_bw_cpus(3) = 1.
3587 if (dl_bw_cpus(cpu) - 1)
3588 overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
3589 else
3590 overflow = 1;
> If you already had the patch ignoring sugovs bandwidth in your set, could
> you please share the full dmesg?
Will do later today ... busy with other stuff right now ;-(
BTW, I just saw that this issue also happens for me w/o sugov threads
(running with the performance CPUfreq governor)! So the remaining
'total_bw=104856' must be the contribution from 2 CPUs of DEF. Maybe we
just have a CPU offset in this accounting somewhere during suspend?
On 19/02/2025 14:09, Dietmar Eggemann wrote:
> On 19/02/2025 11:02, Juri Lelli wrote:
>> On 19/02/25 10:29, Dietmar Eggemann wrote:
[...]
>> If you already had the patch ignoring sugovs bandwidth in your set, could
>> you please share the full dmesg?
>
> Will do later today ... busy with other stuff right now ;-(
l B B l l l
^^^
isolcpus=[3,4]
w/o sugov tasks:
The issue seems to be that we call partition_sched_domains() for CPU4
during suspend, which does not issue a:

build_sched_domains() -> cpu_attach_domain() -> rq_attach_root() ->
__dl_server_[de|at]tach_root()
[ 171.006436] dl_bw_deactivate() called cpu=4
...
[ 171.006639] __dl_overflow() dl_b->bw=996147 cap=446 cap_scale(dl_b->bw, cap)=433868 dl_b->total_bw=104856 old_bw=52428 new_bw=0 type=DEF rd->span=3-5
^^^^^^^^^^^^^^^^^^^^^(*)
[ 171.006652] dl_bw_manage() cpu=4 cap=446 overflow=0 req=0 return=0 type=DEF
...
[ 171.007971] dl_bw_deactivate() called cpu=3
...
[ 171.007999] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
^^^^^^^^^^^^^^^ (*)
[ 171.008010] dl_bw_cpus() cpu=3 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
[ 171.008019] dl_bw_manage() cpu=3 cap=0 overflow=1 req=0 return=-16 type=DEF
[ 171.008069] Error taking CPU3 down: -16
You can see how 'dl_b->total_bw' stays 104856 (2 x util = 51) even
though CPU4 is off (*).
If total_bw were 52428 when CPU3 goes down, we would still fail with
the current code (taking the else path):
3604 if (dl_bw_cpus(cpu) - 1)
3605 overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
3606 else
3607 overflow = 1;
but if we took the if path even when 'dl_bw_cpus(cpu) = 1',
__dl_overflow() would return false:
280 return dl_b->bw != -1 &&
281 cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
'0 < 52428 - 52428 + 0' is false
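The arithmetic can be checked with a direct transcription of the quoted condition (a sketch: cap_scale() is assumed to shift by SCHED_CAPACITY_SHIFT = 10, and signed 64-bit types are used throughout to keep the 'bw != -1' comparison simple):

```c
#include <assert.h>
#include <stdint.h>

#define SCHED_CAPACITY_SHIFT 10

/* Fixed-point capacity scaling, as used by the quoted check. */
static int64_t cap_scale(int64_t bw, int64_t cap)
{
	return (bw * cap) >> SCHED_CAPACITY_SHIFT;
}

/* Transcription of the quoted __dl_overflow() condition. */
static int dl_overflow(int64_t bw, int64_t cap, int64_t total_bw,
		       int64_t old_bw, int64_t new_bw)
{
	return bw != -1 &&
	       cap_scale(bw, cap) < total_bw - old_bw + new_bw;
}
```

With the correct total_bw of 52428 the last-CPU deactivation passes ('0 < 52428 - 52428 + 0' is false), while the stale total_bw of 104856 still refuses it ('0 < 104856 - 52428 + 0' is true).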
---
[ 170.847396] PM: suspend entry (deep)
[ 170.852093] Filesystems sync: 0.000 seconds
[ 170.859274] Freezing user space processes
[ 170.864616] Freezing user space processes completed (elapsed 0.001 seconds)
[ 170.871614] OOM killer disabled.
[ 170.874861] Freezing remaining freezable tasks
[ 170.880499] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
[ 170.887936] printk: Suspending console(s) (use no_console_suspend to debug)
[ 171.000031] arm-scmi arm-scmi.1.auto: timed out in resp(caller: do_xfer+0x114/0x494)
[ 171.001421] Disabling non-boot CPUs ...
[ 171.001501] dl_bw_deactivate() called cpu=5
[ 171.001518] __dl_bw_capacity() mask=0-2,5 cap=2940
[ 171.001530] dl_bw_cpus() cpu=5 rd->span=0-2,5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DYN
[ 171.001541] dl_bw_manage: cpu=5 cap=2494 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0-2,5
[ 171.001553] dl_bw_cpus() cpu=5 rd->span=0-2,5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DYN
[ 171.001567] CPU: 5 UID: 0 PID: 41 Comm: cpuhp/5 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
[ 171.001578] Hardware name: ARM Juno development board (r0) (DT)
[ 171.001583] Call trace:
[ 171.001587] show_stack+0x18/0x24 (C)
[ 171.001605] dump_stack_lvl+0x74/0x8c
[ 171.001621] dump_stack+0x18/0x24
[ 171.001634] dl_bw_manage+0x3a0/0x500
[ 171.001650] dl_bw_deactivate+0x40/0x50
[ 171.001661] sched_cpu_deactivate+0x34/0x24c
[ 171.001676] cpuhp_invoke_callback+0x138/0x694
[ 171.001689] cpuhp_thread_fun+0xb0/0x198
[ 171.001702] smpboot_thread_fn+0x200/0x224
[ 171.001715] kthread+0x12c/0x204
[ 171.001727] ret_from_fork+0x10/0x20
[ 171.001741] __dl_overflow() dl_b->bw=996147 cap=2494 cap_scale(dl_b->bw, cap)=2426162 dl_b->total_bw=209712 old_bw=52428 new_bw=0 type=DYN rd->span=0-2,5
[ 171.001754] dl_bw_manage() cpu=5 cap=2494 overflow=0 req=0 return=0 type=DYN
[ 171.001814] partition_sched_domains() called
[ 171.001821] CPU: 5 UID: 0 PID: 41 Comm: cpuhp/5 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
[ 171.001831] Hardware name: ARM Juno development board (r0) (DT)
[ 171.001835] Call trace:
[ 171.001838] show_stack+0x18/0x24 (C)
[ 171.001849] dump_stack_lvl+0x74/0x8c
[ 171.001862] dump_stack+0x18/0x24
[ 171.001875] partition_sched_domains+0x48/0x7c
[ 171.001886] sched_cpu_deactivate+0x1a8/0x24c
[ 171.001900] cpuhp_invoke_callback+0x138/0x694
[ 171.001913] cpuhp_thread_fun+0xb0/0x198
[ 171.001925] smpboot_thread_fn+0x200/0x224
[ 171.001937] kthread+0x12c/0x204
[ 171.001948] ret_from_fork+0x10/0x20
[ 171.001961] partition_sched_domains_locked() ndoms_new=1
[ 171.002012] cpu_attach_domain() called cpu=0 type=DEF
[ 171.002018] CPU0 attaching NULL sched-domain.
[ 171.002022] span=3-4
[ 171.002029] rq_attach_root() called cpu=0 type=DEF
[ 171.002043] dl_bw_cpus() cpu=0 rd->span=0-2,5 cpu_active_mask=0-4 cpus=3 type=DYN
[ 171.002053] __dl_server_detach_root() called cpu=0
[ 171.002059] dl_bw_cpus() cpu=0 rd->span=0-2,5 cpu_active_mask=0-4 cpus=3 type=DYN
[ 171.002068] __dl_sub() tsk_bw=52428 dl_b->total_bw=157284 type=DYN rd->span=0-2,5
[ 171.002077] __dl_update() (3) cpu=0 rq->dl.extra_bw=603812
[ 171.002083] __dl_update() (3) cpu=1 rq->dl.extra_bw=869446
[ 171.002089] __dl_update() (3) cpu=2 rq->dl.extra_bw=1013623
[ 171.002098] dl_bw_cpus() cpu=0 rd->span=0,3-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DEF
[ 171.002109] __dl_server_attach_root() called cpu=0
[ 171.002114] dl_bw_cpus() cpu=0 rd->span=0,3-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DEF
[ 171.002124] __dl_add() tsk_bw=52428 dl_b->total_bw=157284 type=DEF rd->span=0,3-4
[ 171.002133] __dl_update() (3) cpu=0 rq->dl.extra_bw=586336
[ 171.002139] __dl_update() (3) cpu=3 rq->dl.extra_bw=1004885
[ 171.002145] __dl_update() (3) cpu=4 rq->dl.extra_bw=1017992
[ 171.002153] cpu_attach_domain() called cpu=1 type=DEF
[ 171.002159] CPU1 attaching NULL sched-domain.
[ 171.002163] span=0,3-4
[ 171.002169] rq_attach_root() called cpu=1 type=DEF
[ 171.002181] dl_bw_cpus() cpu=1 rd->span=1-2,5 cpu_active_mask=0-4 cpus=2 type=DYN
[ 171.002191] __dl_server_detach_root() called cpu=1
[ 171.002196] dl_bw_cpus() cpu=1 rd->span=1-2,5 cpu_active_mask=0-4 cpus=2 type=DYN
[ 171.002206] __dl_sub() tsk_bw=52428 dl_b->total_bw=104856 type=DYN rd->span=1-2,5
[ 171.002215] __dl_update() (3) cpu=1 rq->dl.extra_bw=895660
[ 171.002221] __dl_update() (3) cpu=2 rq->dl.extra_bw=1039837
[ 171.002228] dl_bw_cpus() cpu=1 rd->span=0-1,3-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=4 type=DEF
[ 171.002238] __dl_server_attach_root() called cpu=1
[ 171.002243] dl_bw_cpus() cpu=1 rd->span=0-1,3-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=4 type=DEF
[ 171.002253] __dl_add() tsk_bw=52428 dl_b->total_bw=209712 type=DEF rd->span=0-1,3-4
[ 171.002262] __dl_update() (3) cpu=0 rq->dl.extra_bw=573229
[ 171.002267] __dl_update() (3) cpu=1 rq->dl.extra_bw=882553
[ 171.002273] __dl_update() (3) cpu=3 rq->dl.extra_bw=991778
[ 171.002279] __dl_update() (3) cpu=4 rq->dl.extra_bw=1004885
[ 171.002286] cpu_attach_domain() called cpu=2 type=DEF
[ 171.002291] CPU2 attaching NULL sched-domain.
[ 171.002296] span=0-1,3-4
[ 171.002301] rq_attach_root() called cpu=2 type=DEF
[ 171.002314] dl_bw_cpus() cpu=2 rd->span=2,5 cpu_active_mask=0-4 cpus=1 type=DYN
[ 171.002323] __dl_server_detach_root() called cpu=2
[ 171.002329] dl_bw_cpus() cpu=2 rd->span=2,5 cpu_active_mask=0-4 cpus=1 type=DYN
[ 171.002338] __dl_sub() tsk_bw=52428 dl_b->total_bw=52428 type=DYN rd->span=2,5
[ 171.002346] __dl_update() (3) cpu=2 rq->dl.extra_bw=1092265
[ 171.002353] dl_bw_cpus() cpu=2 rd->span=0-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=5 type=DEF
[ 171.002363] __dl_server_attach_root() called cpu=2
[ 171.002368] dl_bw_cpus() cpu=2 rd->span=0-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=5 type=DEF
[ 171.002377] __dl_add() tsk_bw=52428 dl_b->total_bw=262140 type=DEF rd->span=0-4
[ 171.002385] __dl_update() (3) cpu=0 rq->dl.extra_bw=562744
[ 171.002391] __dl_update() (3) cpu=1 rq->dl.extra_bw=872068
[ 171.002397] __dl_update() (3) cpu=2 rq->dl.extra_bw=1081780
[ 171.002403] __dl_update() (3) cpu=3 rq->dl.extra_bw=981293
[ 171.002409] __dl_update() (3) cpu=4 rq->dl.extra_bw=994400
[ 171.002416] cpu_attach_domain() called cpu=5 type=DEF
[ 171.002421] CPU5 attaching NULL sched-domain.
[ 171.002425] span=0-4
[ 171.002431] rq_attach_root() called cpu=5 type=DEF
[ 171.002438] build_sched_domains() called cpu_map=0-2
[ 171.002556] cpu_attach_domain() called cpu=0 type=DYN
[ 171.002565] CPU0 attaching sched-domain(s):
[ 171.002571] domain-0: span=0-2 level=PKG
[ 171.002583] groups: 0:{ span=0 cap=445 }, 1:{ span=1-2 cap=2045 }
[ 171.002619] rq_attach_root() called cpu=0 type=DYN
[ 171.002630] dl_bw_cpus() cpu=0 rd->span=0-5 cpu_active_mask=0-4 cpus=5 type=DEF
[ 171.002639] __dl_server_detach_root() called cpu=0
[ 171.002644] dl_bw_cpus() cpu=0 rd->span=0-5 cpu_active_mask=0-4 cpus=5 type=DEF
[ 171.002653] __dl_sub() tsk_bw=52428 dl_b->total_bw=209712 type=DEF rd->span=0-5
[ 171.002662] __dl_update() (3) cpu=0 rq->dl.extra_bw=573229
[ 171.002668] __dl_update() (3) cpu=1 rq->dl.extra_bw=882553
[ 171.002674] __dl_update() (3) cpu=2 rq->dl.extra_bw=1092265
[ 171.002680] __dl_update() (3) cpu=3 rq->dl.extra_bw=991778
[ 171.002686] __dl_update() (3) cpu=4 rq->dl.extra_bw=1004885
[ 171.002693] dl_bw_cpus() cpu=0 rd->span=0 cpu_active_mask=0-4 cpumask_weight(rd->span)=1 type=DYN
[ 171.002702] __dl_server_attach_root() called cpu=0
[ 171.002707] dl_bw_cpus() cpu=0 rd->span=0 cpu_active_mask=0-4 cpumask_weight(rd->span)=1 type=DYN
[ 171.002716] __dl_add() tsk_bw=52428 dl_b->total_bw=52428 type=DYN rd->span=0
[ 171.002724] __dl_update() (3) cpu=0 rq->dl.extra_bw=520801
[ 171.002731] cpu_attach_domain() called cpu=1 type=DYN
[ 171.002738] CPU1 attaching sched-domain(s):
[ 171.002743] domain-0: span=1-2 level=MC
[ 171.002753] groups: 1:{ span=1 cap=1022 }, 2:{ span=2 cap=1023 }
[ 171.002787] domain-1: span=0-2 level=PKG
[ 171.002798] groups: 1:{ span=1-2 cap=2045 }, 0:{ span=0 cap=445 }
[ 171.002831] rq_attach_root() called cpu=1 type=DYN
[ 171.002841] dl_bw_cpus() cpu=1 rd->span=1-5 cpu_active_mask=0-4 cpus=4 type=DEF
[ 171.002851] __dl_server_detach_root() called cpu=1
[ 171.002856] dl_bw_cpus() cpu=1 rd->span=1-5 cpu_active_mask=0-4 cpus=4 type=DEF
[ 171.002865] __dl_sub() tsk_bw=52428 dl_b->total_bw=157284 type=DEF rd->span=1-5
[ 171.002873] __dl_update() (3) cpu=1 rq->dl.extra_bw=895660
[ 171.002879] __dl_update() (3) cpu=2 rq->dl.extra_bw=1105372
[ 171.002885] __dl_update() (3) cpu=3 rq->dl.extra_bw=1004885
[ 171.002891] __dl_update() (3) cpu=4 rq->dl.extra_bw=1017992
[ 171.002898] dl_bw_cpus() cpu=1 rd->span=0-1 cpu_active_mask=0-4 cpumask_weight(rd->span)=2 type=DYN
[ 171.002907] __dl_server_attach_root() called cpu=1
[ 171.002912] dl_bw_cpus() cpu=1 rd->span=0-1 cpu_active_mask=0-4 cpumask_weight(rd->span)=2 type=DYN
[ 171.002922] __dl_add() tsk_bw=52428 dl_b->total_bw=104856 type=DYN rd->span=0-1
[ 171.002930] __dl_update() (3) cpu=0 rq->dl.extra_bw=494587
[ 171.002936] __dl_update() (3) cpu=1 rq->dl.extra_bw=869446
[ 171.002943] cpu_attach_domain() called cpu=2 type=DYN
[ 171.002950] CPU2 attaching sched-domain(s):
[ 171.002954] domain-0: span=1-2 level=MC
[ 171.002965] groups: 2:{ span=2 cap=1023 }, 1:{ span=1 cap=1022 }
[ 171.002998] domain-1: span=0-2 level=PKG
[ 171.003009] groups: 1:{ span=1-2 cap=2045 }, 0:{ span=0 cap=445 }
[ 171.003043] rq_attach_root() called cpu=2 type=DYN
[ 171.003053] dl_bw_cpus() cpu=2 rd->span=2-5 cpu_active_mask=0-4 cpus=3 type=DEF
[ 171.003062] __dl_server_detach_root() called cpu=2
[ 171.003067] dl_bw_cpus() cpu=2 rd->span=2-5 cpu_active_mask=0-4 cpus=3 type=DEF
[ 171.003076] __dl_sub() tsk_bw=52428 dl_b->total_bw=104856 type=DEF rd->span=2-5
[ 171.003085] __dl_update() (3) cpu=2 rq->dl.extra_bw=1122848
[ 171.003091] __dl_update() (3) cpu=3 rq->dl.extra_bw=1022361
[ 171.003096] __dl_update() (3) cpu=4 rq->dl.extra_bw=1035468
[ 171.003103] dl_bw_cpus() cpu=2 rd->span=0-2 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DYN
[ 171.003113] __dl_server_attach_root() called cpu=2
[ 171.003118] dl_bw_cpus() cpu=2 rd->span=0-2 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DYN
[ 171.003127] __dl_add() tsk_bw=52428 dl_b->total_bw=157284 type=DYN rd->span=0-2
[ 171.003136] __dl_update() (3) cpu=0 rq->dl.extra_bw=477111
[ 171.003141] __dl_update() (3) cpu=1 rq->dl.extra_bw=851970
[ 171.003147] __dl_update() (3) cpu=2 rq->dl.extra_bw=1105372
[ 171.003188] root domain span: 0-2
[ 171.003194] default domain span: 3-5
[ 171.003220] rd 0-2: Checking EAS, schedutil is mandatory
[ 171.005840] psci: CPU5 killed (polled 0 ms)
[ 171.006436] dl_bw_deactivate() called cpu=4
[ 171.006446] __dl_bw_capacity() mask=3-5 cap=892
[ 171.006454] dl_bw_cpus() cpu=4 rd->span=3-5 cpu_active_mask=0-4 cpus=2 type=DEF
[ 171.006464] dl_bw_manage: cpu=4 cap=446 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DEF span=3-5
[ 171.006475] dl_bw_cpus() cpu=4 rd->span=3-5 cpu_active_mask=0-4 cpus=2 type=DEF
[ 171.006485] CPU: 4 UID: 0 PID: 36 Comm: cpuhp/4 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
[ 171.006495] Hardware name: ARM Juno development board (r0) (DT)
[ 171.006499] Call trace:
[ 171.006502] show_stack+0x18/0x24 (C)
[ 171.006514] dump_stack_lvl+0x74/0x8c
[ 171.006528] dump_stack+0x18/0x24
[ 171.006541] dl_bw_manage+0x3a0/0x500
[ 171.006554] dl_bw_deactivate+0x40/0x50
[ 171.006564] sched_cpu_deactivate+0x34/0x24c
[ 171.006579] cpuhp_invoke_callback+0x138/0x694
[ 171.006591] cpuhp_thread_fun+0xb0/0x198
[ 171.006604] smpboot_thread_fn+0x200/0x224
[ 171.006616] kthread+0x12c/0x204
[ 171.006627] ret_from_fork+0x10/0x20
[ 171.006639] __dl_overflow() dl_b->bw=996147 cap=446 cap_scale(dl_b->bw, cap)=433868 dl_b->total_bw=104856 old_bw=52428 new_bw=0 type=DEF rd->span=3-5
[ 171.006652] dl_bw_manage() cpu=4 cap=446 overflow=0 req=0 return=0 type=DEF
[ 171.006706] partition_sched_domains() called
[ 171.006713] CPU: 4 UID: 0 PID: 36 Comm: cpuhp/4 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
[ 171.006722] Hardware name: ARM Juno development board (r0) (DT)
[ 171.006727] Call trace:
[ 171.006730] show_stack+0x18/0x24 (C)
[ 171.006740] dump_stack_lvl+0x74/0x8c
[ 171.006754] dump_stack+0x18/0x24
[ 171.006767] partition_sched_domains+0x48/0x7c
[ 171.006778] sched_cpu_deactivate+0x1a8/0x24c
[ 171.006792] cpuhp_invoke_callback+0x138/0x694
[ 171.006805] cpuhp_thread_fun+0xb0/0x198
[ 171.006817] smpboot_thread_fn+0x200/0x224
[ 171.006829] kthread+0x12c/0x204
[ 171.006840] ret_from_fork+0x10/0x20
[ 171.006852] partition_sched_domains_locked() ndoms_new=1
[ 171.006861] partition_sched_domains_locked() goto match2
[ 171.006867] rd 0-2: Checking EAS, schedutil is mandatory
[ 171.007774] psci: CPU4 killed (polled 4 ms)
[ 171.007971] dl_bw_deactivate() called cpu=3
[ 171.007981] __dl_bw_capacity() mask=3-5 cap=446
[ 171.007989] dl_bw_cpus() cpu=3 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
[ 171.007999] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
[ 171.008010] dl_bw_cpus() cpu=3 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
[ 171.008019] dl_bw_manage() cpu=3 cap=0 overflow=1 req=0 return=-16 type=DEF
[ 171.008069] Error taking CPU3 down: -16
[ 171.008076] Non-boot CPUs are not disabled
[ 171.008080] Enabling non-boot CPUs ...
[ 171.008397] Detected VIPT I-cache on CPU4
[ 171.008472] CPU4: Booted secondary processor 0x0000000102 [0x410fd030]
[ 171.008862] partition_sched_domains() called
[ 171.008869] CPU: 4 UID: 0 PID: 36 Comm: cpuhp/4 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
[ 171.008880] Hardware name: ARM Juno development board (r0) (DT)
[ 171.008884] Call trace:
[ 171.008887] show_stack+0x18/0x24 (C)
[ 171.008899] dump_stack_lvl+0x74/0x8c
[ 171.008913] dump_stack+0x18/0x24
[ 171.008926] partition_sched_domains+0x48/0x7c
[ 171.008937] sched_cpu_activate+0x194/0x1f8
[ 171.008951] cpuhp_invoke_callback+0x138/0x694
[ 171.008963] cpuhp_thread_fun+0xb0/0x198
[ 171.008976] smpboot_thread_fn+0x200/0x224
[ 171.008987] kthread+0x12c/0x204
[ 171.008999] ret_from_fork+0x10/0x20
[ 171.009011] partition_sched_domains_locked() ndoms_new=1
[ 171.009019] partition_sched_domains_locked() goto match2
[ 171.009025] rd 0-2: Checking EAS, schedutil is mandatory
[ 171.009048] CPU4 is up
[ 171.009323] Detected VIPT I-cache on CPU5
[ 171.009377] CPU5: Booted secondary processor 0x0000000103 [0x410fd030]
[ 171.009787] partition_sched_domains() called
[ 171.009795] CPU: 5 UID: 0 PID: 41 Comm: cpuhp/5 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
[ 171.009806] Hardware name: ARM Juno development board (r0) (DT)
[ 171.009810] Call trace:
[ 171.009813] show_stack+0x18/0x24 (C)
[ 171.009825] dump_stack_lvl+0x74/0x8c
[ 171.009839] dump_stack+0x18/0x24
[ 171.009851] partition_sched_domains+0x48/0x7c
[ 171.009862] sched_cpu_activate+0x194/0x1f8
[ 171.009876] cpuhp_invoke_callback+0x138/0x694
[ 171.009889] cpuhp_thread_fun+0xb0/0x198
[ 171.009901] smpboot_thread_fn+0x200/0x224
[ 171.009912] kthread+0x12c/0x204
[ 171.009924] ret_from_fork+0x10/0x20
[ 171.009936] partition_sched_domains_locked() ndoms_new=1
[ 171.009980] cpu_attach_domain() called cpu=0 type=DEF
[ 171.009986] CPU0 attaching NULL sched-domain.
[ 171.009991] span=3-5
[ 171.009997] rq_attach_root() called cpu=0 type=DEF
[ 171.010011] dl_bw_cpus() cpu=0 rd->span=0-2 cpu_active_mask=0-5 cpumask_weight(rd->span)=3 type=DYN
[ 171.010021] __dl_server_detach_root() called cpu=0
[ 171.010026] dl_bw_cpus() cpu=0 rd->span=0-2 cpu_active_mask=0-5 cpumask_weight(rd->span)=3 type=DYN
[ 171.010036] __dl_sub() tsk_bw=52428 dl_b->total_bw=104856 type=DYN rd->span=0-2
[ 171.010044] __dl_update() (3) cpu=0 rq->dl.extra_bw=494587
[ 171.010050] __dl_update() (3) cpu=1 rq->dl.extra_bw=869446
[ 171.010056] __dl_update() (3) cpu=2 rq->dl.extra_bw=1122848
[ 171.010064] dl_bw_cpus() cpu=0 rd->span=0,3-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DEF
[ 171.010074] __dl_server_attach_root() called cpu=0
[ 171.010079] dl_bw_cpus() cpu=0 rd->span=0,3-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DEF
[ 171.010089] __dl_add() tsk_bw=52428 dl_b->total_bw=157284 type=DEF rd->span=0,3-5
[ 171.010098] __dl_update() (3) cpu=0 rq->dl.extra_bw=481480
[ 171.010104] __dl_update() (3) cpu=3 rq->dl.extra_bw=1009254
[ 171.010109] __dl_update() (3) cpu=4 rq->dl.extra_bw=1022361
[ 171.010115] __dl_update() (3) cpu=5 rq->dl.extra_bw=1156925
[ 171.010123] cpu_attach_domain() called cpu=1 type=DEF
[ 171.010129] CPU1 attaching NULL sched-domain.
[ 171.010133] span=0,3-5
[ 171.010139] rq_attach_root() called cpu=1 type=DEF
[ 171.010149] dl_bw_cpus() cpu=1 rd->span=1-2 cpu_active_mask=0-5 cpumask_weight(rd->span)=2 type=DYN
[ 171.010159] __dl_server_detach_root() called cpu=1
[ 171.010164] dl_bw_cpus() cpu=1 rd->span=1-2 cpu_active_mask=0-5 cpumask_weight(rd->span)=2 type=DYN
[ 171.010174] __dl_sub() tsk_bw=52428 dl_b->total_bw=52428 type=DYN rd->span=1-2
[ 171.010182] __dl_update() (3) cpu=1 rq->dl.extra_bw=895660
[ 171.010188] __dl_update() (3) cpu=2 rq->dl.extra_bw=1149062
[ 171.010195] dl_bw_cpus() cpu=1 rd->span=0-1,3-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=5 type=DEF
[ 171.010205] __dl_server_attach_root() called cpu=1
[ 171.010210] dl_bw_cpus() cpu=1 rd->span=0-1,3-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=5 type=DEF
[ 171.010220] __dl_add() tsk_bw=52428 dl_b->total_bw=209712 type=DEF rd->span=0-1,3-5
[ 171.010229] __dl_update() (3) cpu=0 rq->dl.extra_bw=470995
[ 171.010235] __dl_update() (3) cpu=1 rq->dl.extra_bw=885175
[ 171.010241] __dl_update() (3) cpu=3 rq->dl.extra_bw=998769
[ 171.010247] __dl_update() (3) cpu=4 rq->dl.extra_bw=1011876
[ 171.010252] __dl_update() (3) cpu=5 rq->dl.extra_bw=1146440
[ 171.010259] cpu_attach_domain() called cpu=2 type=DEF
[ 171.010265] CPU2 attaching NULL sched-domain.
[ 171.010269] span=0-1,3-5
[ 171.010275] rq_attach_root() called cpu=2 type=DEF
[ 171.010286] dl_bw_cpus() cpu=2 rd->span=2 cpu_active_mask=0-5 cpumask_weight(rd->span)=1 type=DYN
[ 171.010296] __dl_server_detach_root() called cpu=2
[ 171.010301] dl_bw_cpus() cpu=2 rd->span=2 cpu_active_mask=0-5 cpumask_weight(rd->span)=1 type=DYN
[ 171.010310] __dl_sub() tsk_bw=52428 dl_b->total_bw=0 type=DYN rd->span=2
[ 171.010318] __dl_update() (3) cpu=2 rq->dl.extra_bw=1201490
[ 171.010324] dl_bw_cpus() cpu=2 rd->span=0-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=6 type=DEF
[ 171.010334] __dl_server_attach_root() called cpu=2
[ 171.010339] dl_bw_cpus() cpu=2 rd->span=0-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=6 type=DEF
[ 171.010348] __dl_add() tsk_bw=52428 dl_b->total_bw=262140 type=DEF rd->span=0-5
[ 171.010357] __dl_update() (3) cpu=0 rq->dl.extra_bw=462257
[ 171.010362] __dl_update() (3) cpu=1 rq->dl.extra_bw=876437
[ 171.010368] __dl_update() (3) cpu=2 rq->dl.extra_bw=1192752
[ 171.010374] __dl_update() (3) cpu=3 rq->dl.extra_bw=990031
[ 171.010380] __dl_update() (3) cpu=4 rq->dl.extra_bw=1003138
[ 171.010385] __dl_update() (3) cpu=5 rq->dl.extra_bw=1137702
[ 171.010393] build_sched_domains() called cpu_map=0-2,5
[ 171.010520] cpu_attach_domain() called cpu=0 type=DYN
[ 171.010529] CPU0 attaching sched-domain(s):
[ 171.010534] domain-0: span=0,5 level=MC
[ 171.010546] groups: 0:{ span=0 cap=445 }, 5:{ span=5 cap=445 }
[ 171.010580] domain-1: span=0-2,5 level=PKG
[ 171.010591] groups: 0:{ span=0,5 cap=890 }, 1:{ span=1-2 cap=2044 }
[ 171.010625] rq_attach_root() called cpu=0 type=DYN
[ 171.010636] dl_bw_cpus() cpu=0 rd->span=0-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=6 type=DEF
[ 171.010645] __dl_server_detach_root() called cpu=0
[ 171.010651] dl_bw_cpus() cpu=0 rd->span=0-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=6 type=DEF
[ 171.010660] __dl_sub() tsk_bw=52428 dl_b->total_bw=209712 type=DEF rd->span=0-5
[ 171.010669] __dl_update() (3) cpu=0 rq->dl.extra_bw=470995
[ 171.010675] __dl_update() (3) cpu=1 rq->dl.extra_bw=885175
[ 171.010680] __dl_update() (3) cpu=2 rq->dl.extra_bw=1201490
[ 171.010686] __dl_update() (3) cpu=3 rq->dl.extra_bw=998769
[ 171.010692] __dl_update() (3) cpu=4 rq->dl.extra_bw=1011876
[ 171.010697] __dl_update() (3) cpu=5 rq->dl.extra_bw=1146440
[ 171.010705] dl_bw_cpus() cpu=0 rd->span=0 cpu_active_mask=0-5 cpumask_weight(rd->span)=1 type=DYN
[ 171.010714] __dl_server_attach_root() called cpu=0
[ 171.010719] dl_bw_cpus() cpu=0 rd->span=0 cpu_active_mask=0-5 cpumask_weight(rd->span)=1 type=DYN
[ 171.010728] __dl_add() tsk_bw=52428 dl_b->total_bw=52428 type=DYN rd->span=0
[ 171.010736] __dl_update() (3) cpu=0 rq->dl.extra_bw=418567
[ 171.010743] cpu_attach_domain() called cpu=1 type=DYN
[ 171.010750] CPU1 attaching sched-domain(s):
[ 171.010755] domain-0: span=1-2 level=MC
[ 171.010766] groups: 1:{ span=1 cap=1021 }, 2:{ span=2 cap=1023 }
[ 171.010799] domain-1: span=0-2,5 level=PKG
[ 171.010811] groups: 1:{ span=1-2 cap=2044 }, 0:{ span=0,5 cap=890 }
[ 171.010844] rq_attach_root() called cpu=1 type=DYN
[ 171.010854] dl_bw_cpus() cpu=1 rd->span=1-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=5 type=DEF
[ 171.010864] __dl_server_detach_root() called cpu=1
[ 171.010869] dl_bw_cpus() cpu=1 rd->span=1-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=5 type=DEF
[ 171.010879] __dl_sub() tsk_bw=52428 dl_b->total_bw=157284 type=DEF rd->span=1-5
[ 171.010887] __dl_update() (3) cpu=1 rq->dl.extra_bw=895660
[ 171.010893] __dl_update() (3) cpu=2 rq->dl.extra_bw=1211975
[ 171.010899] __dl_update() (3) cpu=3 rq->dl.extra_bw=1009254
[ 171.010905] __dl_update() (3) cpu=4 rq->dl.extra_bw=1022361
[ 171.010911] __dl_update() (3) cpu=5 rq->dl.extra_bw=1156925
[ 171.010918] dl_bw_cpus() cpu=1 rd->span=0-1 cpu_active_mask=0-5 cpumask_weight(rd->span)=2 type=DYN
[ 171.010927] __dl_server_attach_root() called cpu=1
[ 171.010932] dl_bw_cpus() cpu=1 rd->span=0-1 cpu_active_mask=0-5 cpumask_weight(rd->span)=2 type=DYN
[ 171.010941] __dl_add() tsk_bw=52428 dl_b->total_bw=104856 type=DYN rd->span=0-1
[ 171.010950] __dl_update() (3) cpu=0 rq->dl.extra_bw=392353
[ 171.010956] __dl_update() (3) cpu=1 rq->dl.extra_bw=869446
[ 171.010962] cpu_attach_domain() called cpu=2 type=DYN
[ 171.010969] CPU2 attaching sched-domain(s):
[ 171.010974] domain-0: span=1-2 level=MC
[ 171.010985] groups: 2:{ span=2 cap=1023 }, 1:{ span=1 cap=1021 }
[ 171.011018] domain-1: span=0-2,5 level=PKG
[ 171.011029] groups: 1:{ span=1-2 cap=2044 }, 0:{ span=0,5 cap=890 }
[ 171.011063] rq_attach_root() called cpu=2 type=DYN
[ 171.011073] dl_bw_cpus() cpu=2 rd->span=2-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DEF
[ 171.011083] __dl_server_detach_root() called cpu=2
[ 171.011088] dl_bw_cpus() cpu=2 rd->span=2-5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DEF
[ 171.011097] __dl_sub() tsk_bw=52428 dl_b->total_bw=104856 type=DEF rd->span=2-5
[ 171.011105] __dl_update() (3) cpu=2 rq->dl.extra_bw=1225082
[ 171.011111] __dl_update() (3) cpu=3 rq->dl.extra_bw=1022361
[ 171.011117] __dl_update() (3) cpu=4 rq->dl.extra_bw=1035468
[ 171.011123] __dl_update() (3) cpu=5 rq->dl.extra_bw=1170032
[ 171.011130] dl_bw_cpus() cpu=2 rd->span=0-2 cpu_active_mask=0-5 cpumask_weight(rd->span)=3 type=DYN
[ 171.011139] __dl_server_attach_root() called cpu=2
[ 171.011144] dl_bw_cpus() cpu=2 rd->span=0-2 cpu_active_mask=0-5 cpumask_weight(rd->span)=3 type=DYN
[ 171.011154] __dl_add() tsk_bw=52428 dl_b->total_bw=157284 type=DYN rd->span=0-2
[ 171.011162] __dl_update() (3) cpu=0 rq->dl.extra_bw=374877
[ 171.011168] __dl_update() (3) cpu=1 rq->dl.extra_bw=851970
[ 171.011174] __dl_update() (3) cpu=2 rq->dl.extra_bw=1207606
[ 171.011181] cpu_attach_domain() called cpu=5 type=DYN
[ 171.011188] CPU5 attaching sched-domain(s):
[ 171.011192] domain-0: span=0,5 level=MC
[ 171.011203] groups: 5:{ span=5 cap=445 }, 0:{ span=0 cap=445 }
[ 171.011237] domain-1: span=0-2,5 level=PKG
[ 171.011248] groups: 0:{ span=0,5 cap=890 }, 1:{ span=1-2 cap=2044 }
[ 171.011281] rq_attach_root() called cpu=5 type=DYN
[ 171.011288] dl_bw_cpus() cpu=5 rd->span=0-2,5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DYN
[ 171.011299] __dl_server_attach_root() called cpu=5
[ 171.011304] dl_bw_cpus() cpu=5 rd->span=0-2,5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DYN
[ 171.011313] __dl_add() tsk_bw=52428 dl_b->total_bw=209712 type=DYN rd->span=0-2,5
[ 171.011322] __dl_update() (3) cpu=0 rq->dl.extra_bw=361770
[ 171.011328] __dl_update() (3) cpu=1 rq->dl.extra_bw=838863
[ 171.011334] __dl_update() (3) cpu=2 rq->dl.extra_bw=1194499
[ 171.011339] __dl_update() (3) cpu=5 rq->dl.extra_bw=1156925
[ 171.011381] root domain span: 0-2,5
[ 171.011387] default domain span: 3-4
[ 171.011410] rd 0-2,5: Checking EAS, schedutil is mandatory
[ 171.012325] partition_and_rebuild_sched_domains() called
[ 171.012331] partition_sched_domains_locked() ndoms_new=1
[ 171.012338] partition_sched_domains_locked() goto match2
[ 171.012344] rd 0-2,5: Checking EAS, schedutil is mandatory
[ 171.012369] CPU5 is up
[ 171.226240] atkbd serio0: keyboard reset failed on 1c060000.kmi
[ 173.340005] OOM killer enabled.
[ 173.343148] Restarting tasks ... done.
[ 173.347458] random: crng reseeded on system resumption
[ 173.352939] PM: suspend exit
On 19/02/25 19:14, Dietmar Eggemann wrote:
> On 19/02/2025 14:09, Dietmar Eggemann wrote:
> > On 19/02/2025 11:02, Juri Lelli wrote:
> >> On 19/02/25 10:29, Dietmar Eggemann wrote:
>
> [...]
>
> >> If you already had the patch ignoring sugovs bandwidth in your set, could
> >> you please share the full dmesg?
> >
> > Will do later today ... busy with other stuff right now ;-(
>
> l B B l l l
> ^^^
> isolcpus=[3,4]
>
> w/o sugov tasks:
>
> The issue seems to be that we call partition_sched_domains() for CPU4
> during suspend. Which does not issue a:
>
> build_sched_domains() -> cpu_attach_domain() -> rq_attach_root() ->
> __dl_server_[de|at]tach_root()
And unfortunately this is the path that I am not able to reproduce on my
end. It looks like the boxes I have access to don't use this method (no
hotplug) when suspending.
> [ 171.006436] dl_bw_deactivate() called cpu=4
> ...
> [ 171.006639] __dl_overflow() dl_b->bw=996147 cap=446 cap_scale(dl_b->bw, cap)=433868 dl_b->total_bw=104856 old_bw=52428 new_bw=0 type=DEF rd->span=3-5
> ^^^^^^^^^^^^^^^^^^^^^(*)
> [ 171.006652] dl_bw_manage() cpu=4 cap=446 overflow=0 req=0 return=0 type=DEF
> ...
> [ 171.007971] dl_bw_deactivate() called cpu=3
> ...
> [ 171.007999] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
> ^^^^^^^^^^^^^^^ (*)
> [ 171.008010] dl_bw_cpus() cpu=3 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
> [ 171.008019] dl_bw_manage() cpu=3 cap=0 overflow=1 req=0 return=-16 type=DEF
> [ 171.008069] Error taking CPU3 down: -16
>
> You can see how 'dl_b->total_bw' stays at 104856 (2 x util = 51) even
> though CPU4 is off (*).
Right (well, not right :).
> If total_bw were 52428 for CPU3 going down, we would still fail with
> the current code (taking the else path):
>
> 3604 if (dl_bw_cpus(cpu) - 1)
> 3605 overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
> 3606 else
> 3607 overflow = 1;
>
> but if we took the if path even when 'dl_bw_cpus(cpu) == 1',
> __dl_overflow() would return false:
>
> 280 return dl_b->bw != -1 &&
> 281 cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
>
> '0 < 52428 - 52428 + 0' is false
OK. The idea of the current logic is that we shouldn't even enter that
branch: if total_bw accounting were correct, total_bw for DEF would be
equal to fair_server_bw. So, with no additional DEADLINE bandwidth
present, we would proceed with offlining.
Let's have a look below.
> ---
>
> [ 170.847396] PM: suspend entry (deep)
> [ 170.852093] Filesystems sync: 0.000 seconds
> [ 170.859274] Freezing user space processes
> [ 170.864616] Freezing user space processes completed (elapsed 0.001 seconds)
> [ 170.871614] OOM killer disabled.
> [ 170.874861] Freezing remaining freezable tasks
> [ 170.880499] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
> [ 170.887936] printk: Suspending console(s) (use no_console_suspend to debug)
> [ 171.000031] arm-scmi arm-scmi.1.auto: timed out in resp(caller: do_xfer+0x114/0x494)
> [ 171.001421] Disabling non-boot CPUs ...
CPU5 going offline.
> [ 171.001501] dl_bw_deactivate() called cpu=5
> [ 171.001518] __dl_bw_capacity() mask=0-2,5 cap=2940
> [ 171.001530] dl_bw_cpus() cpu=5 rd->span=0-2,5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DYN
> [ 171.001541] dl_bw_manage: cpu=5 cap=2494 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0-2,5
> [ 171.001553] dl_bw_cpus() cpu=5 rd->span=0-2,5 cpu_active_mask=0-5 cpumask_weight(rd->span)=4 type=DYN
> [ 171.001567] CPU: 5 UID: 0 PID: 41 Comm: cpuhp/5 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
> [ 171.001578] Hardware name: ARM Juno development board (r0) (DT)
> [ 171.001583] Call trace:
> [ 171.001587] show_stack+0x18/0x24 (C)
> [ 171.001605] dump_stack_lvl+0x74/0x8c
> [ 171.001621] dump_stack+0x18/0x24
> [ 171.001634] dl_bw_manage+0x3a0/0x500
> [ 171.001650] dl_bw_deactivate+0x40/0x50
> [ 171.001661] sched_cpu_deactivate+0x34/0x24c
> [ 171.001676] cpuhp_invoke_callback+0x138/0x694
> [ 171.001689] cpuhp_thread_fun+0xb0/0x198
> [ 171.001702] smpboot_thread_fn+0x200/0x224
> [ 171.001715] kthread+0x12c/0x204
> [ 171.001727] ret_from_fork+0x10/0x20
> [ 171.001741] __dl_overflow() dl_b->bw=996147 cap=2494 cap_scale(dl_b->bw, cap)=2426162 dl_b->total_bw=209712 old_bw=52428 new_bw=0 type=DYN rd->span=0-2,5
> [ 171.001754] dl_bw_manage() cpu=5 cap=2494 overflow=0 req=0 return=0 type=DYN
> [ 171.001814] partition_sched_domains() called
> [ 171.001821] CPU: 5 UID: 0 PID: 41 Comm: cpuhp/5 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
> [ 171.001831] Hardware name: ARM Juno development board (r0) (DT)
> [ 171.001835] Call trace:
> [ 171.001838] show_stack+0x18/0x24 (C)
> [ 171.001849] dump_stack_lvl+0x74/0x8c
> [ 171.001862] dump_stack+0x18/0x24
> [ 171.001875] partition_sched_domains+0x48/0x7c
> [ 171.001886] sched_cpu_deactivate+0x1a8/0x24c
> [ 171.001900] cpuhp_invoke_callback+0x138/0x694
> [ 171.001913] cpuhp_thread_fun+0xb0/0x198
> [ 171.001925] smpboot_thread_fn+0x200/0x224
> [ 171.001937] kthread+0x12c/0x204
> [ 171.001948] ret_from_fork+0x10/0x20
> [ 171.001961] partition_sched_domains_locked() ndoms_new=1
> [ 171.002012] cpu_attach_domain() called cpu=0 type=DEF
> [ 171.002018] CPU0 attaching NULL sched-domain.
> [ 171.002022] span=3-4
> [ 171.002029] rq_attach_root() called cpu=0 type=DEF
> [ 171.002043] dl_bw_cpus() cpu=0 rd->span=0-2,5 cpu_active_mask=0-4 cpus=3 type=DYN
> [ 171.002053] __dl_server_detach_root() called cpu=0
> [ 171.002059] dl_bw_cpus() cpu=0 rd->span=0-2,5 cpu_active_mask=0-4 cpus=3 type=DYN
> [ 171.002068] __dl_sub() tsk_bw=52428 dl_b->total_bw=157284 type=DYN rd->span=0-2,5
> [ 171.002077] __dl_update() (3) cpu=0 rq->dl.extra_bw=603812
> [ 171.002083] __dl_update() (3) cpu=1 rq->dl.extra_bw=869446
> [ 171.002089] __dl_update() (3) cpu=2 rq->dl.extra_bw=1013623
> [ 171.002098] dl_bw_cpus() cpu=0 rd->span=0,3-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DEF
> [ 171.002109] __dl_server_attach_root() called cpu=0
> [ 171.002114] dl_bw_cpus() cpu=0 rd->span=0,3-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DEF
> [ 171.002124] __dl_add() tsk_bw=52428 dl_b->total_bw=157284 type=DEF rd->span=0,3-4
> [ 171.002133] __dl_update() (3) cpu=0 rq->dl.extra_bw=586336
> [ 171.002139] __dl_update() (3) cpu=3 rq->dl.extra_bw=1004885
> [ 171.002145] __dl_update() (3) cpu=4 rq->dl.extra_bw=1017992
> [ 171.002153] cpu_attach_domain() called cpu=1 type=DEF
> [ 171.002159] CPU1 attaching NULL sched-domain.
> [ 171.002163] span=0,3-4
> [ 171.002169] rq_attach_root() called cpu=1 type=DEF
> [ 171.002181] dl_bw_cpus() cpu=1 rd->span=1-2,5 cpu_active_mask=0-4 cpus=2 type=DYN
> [ 171.002191] __dl_server_detach_root() called cpu=1
> [ 171.002196] dl_bw_cpus() cpu=1 rd->span=1-2,5 cpu_active_mask=0-4 cpus=2 type=DYN
> [ 171.002206] __dl_sub() tsk_bw=52428 dl_b->total_bw=104856 type=DYN rd->span=1-2,5
> [ 171.002215] __dl_update() (3) cpu=1 rq->dl.extra_bw=895660
> [ 171.002221] __dl_update() (3) cpu=2 rq->dl.extra_bw=1039837
> [ 171.002228] dl_bw_cpus() cpu=1 rd->span=0-1,3-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=4 type=DEF
> [ 171.002238] __dl_server_attach_root() called cpu=1
> [ 171.002243] dl_bw_cpus() cpu=1 rd->span=0-1,3-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=4 type=DEF
> [ 171.002253] __dl_add() tsk_bw=52428 dl_b->total_bw=209712 type=DEF rd->span=0-1,3-4
> [ 171.002262] __dl_update() (3) cpu=0 rq->dl.extra_bw=573229
> [ 171.002267] __dl_update() (3) cpu=1 rq->dl.extra_bw=882553
> [ 171.002273] __dl_update() (3) cpu=3 rq->dl.extra_bw=991778
> [ 171.002279] __dl_update() (3) cpu=4 rq->dl.extra_bw=1004885
> [ 171.002286] cpu_attach_domain() called cpu=2 type=DEF
> [ 171.002291] CPU2 attaching NULL sched-domain.
> [ 171.002296] span=0-1,3-4
> [ 171.002301] rq_attach_root() called cpu=2 type=DEF
> [ 171.002314] dl_bw_cpus() cpu=2 rd->span=2,5 cpu_active_mask=0-4 cpus=1 type=DYN
> [ 171.002323] __dl_server_detach_root() called cpu=2
> [ 171.002329] dl_bw_cpus() cpu=2 rd->span=2,5 cpu_active_mask=0-4 cpus=1 type=DYN
> [ 171.002338] __dl_sub() tsk_bw=52428 dl_b->total_bw=52428 type=DYN rd->span=2,5
> [ 171.002346] __dl_update() (3) cpu=2 rq->dl.extra_bw=1092265
> [ 171.002353] dl_bw_cpus() cpu=2 rd->span=0-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=5 type=DEF
> [ 171.002363] __dl_server_attach_root() called cpu=2
> [ 171.002368] dl_bw_cpus() cpu=2 rd->span=0-4 cpu_active_mask=0-4 cpumask_weight(rd->span)=5 type=DEF
> [ 171.002377] __dl_add() tsk_bw=52428 dl_b->total_bw=262140 type=DEF rd->span=0-4
^^^^
OK. The dl servers of the 5 still-online CPUs are temporarily added to
the DEF root domain.
> [ 171.002385] __dl_update() (3) cpu=0 rq->dl.extra_bw=562744
> [ 171.002391] __dl_update() (3) cpu=1 rq->dl.extra_bw=872068
> [ 171.002397] __dl_update() (3) cpu=2 rq->dl.extra_bw=1081780
> [ 171.002403] __dl_update() (3) cpu=3 rq->dl.extra_bw=981293
> [ 171.002409] __dl_update() (3) cpu=4 rq->dl.extra_bw=994400
> [ 171.002416] cpu_attach_domain() called cpu=5 type=DEF
> [ 171.002421] CPU5 attaching NULL sched-domain.
> [ 171.002425] span=0-4
> [ 171.002431] rq_attach_root() called cpu=5 type=DEF
> [ 171.002438] build_sched_domains() called cpu_map=0-2
> [ 171.002556] cpu_attach_domain() called cpu=0 type=DYN
Adding CPU5 (going offline) to DEF doesn't add its dl server
contribution (OK).
> [ 171.002565] CPU0 attaching sched-domain(s):
> [ 171.002571] domain-0: span=0-2 level=PKG
> [ 171.002583] groups: 0:{ span=0 cap=445 }, 1:{ span=1-2 cap=2045 }
> [ 171.002619] rq_attach_root() called cpu=0 type=DYN
> [ 171.002630] dl_bw_cpus() cpu=0 rd->span=0-5 cpu_active_mask=0-4 cpus=5 type=DEF
> [ 171.002639] __dl_server_detach_root() called cpu=0
> [ 171.002644] dl_bw_cpus() cpu=0 rd->span=0-5 cpu_active_mask=0-4 cpus=5 type=DEF
> [ 171.002653] __dl_sub() tsk_bw=52428 dl_b->total_bw=209712 type=DEF rd->span=0-5
> [ 171.002662] __dl_update() (3) cpu=0 rq->dl.extra_bw=573229
> [ 171.002668] __dl_update() (3) cpu=1 rq->dl.extra_bw=882553
> [ 171.002674] __dl_update() (3) cpu=2 rq->dl.extra_bw=1092265
> [ 171.002680] __dl_update() (3) cpu=3 rq->dl.extra_bw=991778
> [ 171.002686] __dl_update() (3) cpu=4 rq->dl.extra_bw=1004885
> [ 171.002693] dl_bw_cpus() cpu=0 rd->span=0 cpu_active_mask=0-4 cpumask_weight(rd->span)=1 type=DYN
> [ 171.002702] __dl_server_attach_root() called cpu=0
> [ 171.002707] dl_bw_cpus() cpu=0 rd->span=0 cpu_active_mask=0-4 cpumask_weight(rd->span)=1 type=DYN
> [ 171.002716] __dl_add() tsk_bw=52428 dl_b->total_bw=52428 type=DYN rd->span=0
> [ 171.002724] __dl_update() (3) cpu=0 rq->dl.extra_bw=520801
> [ 171.002731] cpu_attach_domain() called cpu=1 type=DYN
> [ 171.002738] CPU1 attaching sched-domain(s):
> [ 171.002743] domain-0: span=1-2 level=MC
> [ 171.002753] groups: 1:{ span=1 cap=1022 }, 2:{ span=2 cap=1023 }
> [ 171.002787] domain-1: span=0-2 level=PKG
> [ 171.002798] groups: 1:{ span=1-2 cap=2045 }, 0:{ span=0 cap=445 }
> [ 171.002831] rq_attach_root() called cpu=1 type=DYN
> [ 171.002841] dl_bw_cpus() cpu=1 rd->span=1-5 cpu_active_mask=0-4 cpus=4 type=DEF
> [ 171.002851] __dl_server_detach_root() called cpu=1
> [ 171.002856] dl_bw_cpus() cpu=1 rd->span=1-5 cpu_active_mask=0-4 cpus=4 type=DEF
> [ 171.002865] __dl_sub() tsk_bw=52428 dl_b->total_bw=157284 type=DEF rd->span=1-5
> [ 171.002873] __dl_update() (3) cpu=1 rq->dl.extra_bw=895660
> [ 171.002879] __dl_update() (3) cpu=2 rq->dl.extra_bw=1105372
> [ 171.002885] __dl_update() (3) cpu=3 rq->dl.extra_bw=1004885
> [ 171.002891] __dl_update() (3) cpu=4 rq->dl.extra_bw=1017992
> [ 171.002898] dl_bw_cpus() cpu=1 rd->span=0-1 cpu_active_mask=0-4 cpumask_weight(rd->span)=2 type=DYN
> [ 171.002907] __dl_server_attach_root() called cpu=1
> [ 171.002912] dl_bw_cpus() cpu=1 rd->span=0-1 cpu_active_mask=0-4 cpumask_weight(rd->span)=2 type=DYN
> [ 171.002922] __dl_add() tsk_bw=52428 dl_b->total_bw=104856 type=DYN rd->span=0-1
> [ 171.002930] __dl_update() (3) cpu=0 rq->dl.extra_bw=494587
> [ 171.002936] __dl_update() (3) cpu=1 rq->dl.extra_bw=869446
> [ 171.002943] cpu_attach_domain() called cpu=2 type=DYN
> [ 171.002950] CPU2 attaching sched-domain(s):
> [ 171.002954] domain-0: span=1-2 level=MC
> [ 171.002965] groups: 2:{ span=2 cap=1023 }, 1:{ span=1 cap=1022 }
> [ 171.002998] domain-1: span=0-2 level=PKG
> [ 171.003009] groups: 1:{ span=1-2 cap=2045 }, 0:{ span=0 cap=445 }
> [ 171.003043] rq_attach_root() called cpu=2 type=DYN
> [ 171.003053] dl_bw_cpus() cpu=2 rd->span=2-5 cpu_active_mask=0-4 cpus=3 type=DEF
> [ 171.003062] __dl_server_detach_root() called cpu=2
> [ 171.003067] dl_bw_cpus() cpu=2 rd->span=2-5 cpu_active_mask=0-4 cpus=3 type=DEF
> [ 171.003076] __dl_sub() tsk_bw=52428 dl_b->total_bw=104856 type=DEF rd->span=2-5
^^^^
OK. CPU3 + CPU4 (CPU5 offline).
> [ 171.003085] __dl_update() (3) cpu=2 rq->dl.extra_bw=1122848
> [ 171.003091] __dl_update() (3) cpu=3 rq->dl.extra_bw=1022361
> [ 171.003096] __dl_update() (3) cpu=4 rq->dl.extra_bw=1035468
> [ 171.003103] dl_bw_cpus() cpu=2 rd->span=0-2 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DYN
> [ 171.003113] __dl_server_attach_root() called cpu=2
> [ 171.003118] dl_bw_cpus() cpu=2 rd->span=0-2 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DYN
> [ 171.003127] __dl_add() tsk_bw=52428 dl_b->total_bw=157284 type=DYN rd->span=0-2
> [ 171.003136] __dl_update() (3) cpu=0 rq->dl.extra_bw=477111
> [ 171.003141] __dl_update() (3) cpu=1 rq->dl.extra_bw=851970
> [ 171.003147] __dl_update() (3) cpu=2 rq->dl.extra_bw=1105372
> [ 171.003188] root domain span: 0-2
> [ 171.003194] default domain span: 3-5
> [ 171.003220] rd 0-2: Checking EAS, schedutil is mandatory
> [ 171.005840] psci: CPU5 killed (polled 0 ms)
OK. DYN (CPU0-2) now has total_bw=157284 and DEF (CPU3-4) has total_bw=104856.
CPU4 going offline (it's isolated on DEF).
> [ 171.006436] dl_bw_deactivate() called cpu=4
> [ 171.006446] __dl_bw_capacity() mask=3-5 cap=892
> [ 171.006454] dl_bw_cpus() cpu=4 rd->span=3-5 cpu_active_mask=0-4 cpus=2 type=DEF
> [ 171.006464] dl_bw_manage: cpu=4 cap=446 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DEF span=3-5
> [ 171.006475] dl_bw_cpus() cpu=4 rd->span=3-5 cpu_active_mask=0-4 cpus=2 type=DEF
> [ 171.006485] CPU: 4 UID: 0 PID: 36 Comm: cpuhp/4 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
> [ 171.006495] Hardware name: ARM Juno development board (r0) (DT)
> [ 171.006499] Call trace:
> [ 171.006502] show_stack+0x18/0x24 (C)
> [ 171.006514] dump_stack_lvl+0x74/0x8c
> [ 171.006528] dump_stack+0x18/0x24
> [ 171.006541] dl_bw_manage+0x3a0/0x500
> [ 171.006554] dl_bw_deactivate+0x40/0x50
> [ 171.006564] sched_cpu_deactivate+0x34/0x24c
> [ 171.006579] cpuhp_invoke_callback+0x138/0x694
> [ 171.006591] cpuhp_thread_fun+0xb0/0x198
> [ 171.006604] smpboot_thread_fn+0x200/0x224
> [ 171.006616] kthread+0x12c/0x204
> [ 171.006627] ret_from_fork+0x10/0x20
> [ 171.006639] __dl_overflow() dl_b->bw=996147 cap=446 cap_scale(dl_b->bw, cap)=433868 dl_b->total_bw=104856 old_bw=52428 new_bw=0 type=DEF rd->span=3-5
> [ 171.006652] dl_bw_manage() cpu=4 cap=446 overflow=0 req=0 return=0 type=DEF
> [ 171.006706] partition_sched_domains() called
> [ 171.006713] CPU: 4 UID: 0 PID: 36 Comm: cpuhp/4 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
> [ 171.006722] Hardware name: ARM Juno development board (r0) (DT)
> [ 171.006727] Call trace:
> [ 171.006730] show_stack+0x18/0x24 (C)
> [ 171.006740] dump_stack_lvl+0x74/0x8c
> [ 171.006754] dump_stack+0x18/0x24
> [ 171.006767] partition_sched_domains+0x48/0x7c
> [ 171.006778] sched_cpu_deactivate+0x1a8/0x24c
> [ 171.006792] cpuhp_invoke_callback+0x138/0x694
> [ 171.006805] cpuhp_thread_fun+0xb0/0x198
> [ 171.006817] smpboot_thread_fn+0x200/0x224
> [ 171.006829] kthread+0x12c/0x204
> [ 171.006840] ret_from_fork+0x10/0x20
> [ 171.006852] partition_sched_domains_locked() ndoms_new=1
> [ 171.006861] partition_sched_domains_locked() goto match2
> [ 171.006867] rd 0-2: Checking EAS, schedutil is mandatory
> [ 171.007774] psci: CPU4 killed (polled 4 ms)
As I guess you were saying above, CPU4's contribution is not removed
from DEF.
> [ 171.007971] dl_bw_deactivate() called cpu=3
> [ 171.007981] __dl_bw_capacity() mask=3-5 cap=446
> [ 171.007989] dl_bw_cpus() cpu=3 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
> [ 171.007999] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
^^^^
And this is now wrong. :/
> [ 171.008010] dl_bw_cpus() cpu=3 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
> [ 171.008019] dl_bw_manage() cpu=3 cap=0 overflow=1 req=0 return=-16 type=DEF
> [ 171.008069] Error taking CPU3 down: -16
> [ 171.008076] Non-boot CPUs are not disabled
> [ 171.008080] Enabling non-boot CPUs ...
Hummm.
On 20/02/25 11:40, Juri Lelli wrote:
> On 19/02/25 19:14, Dietmar Eggemann wrote:
...
> OK. CPU3 + CPU4 (CPU5 offline).
>
> [...]
>
> As I guess you were saying above, CPU4 contribution is not removed from
> DEF.
>
> > [ 171.007971] dl_bw_deactivate() called cpu=3
> > [ 171.007981] __dl_bw_capacity() mask=3-5 cap=446
> > [ 171.007989] dl_bw_cpus() cpu=3 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
> > [ 171.007999] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
> ^^^^
> And this is now wrong. :/
So, CPU4 was still on DEF and we don't go through any of the accounting
functions. I wonder if we could simplify this by always re-doing the
accounting after root domains are stable (also for
partition_sched_domains()). So, please take a look at what's below. It
can definitely be better encapsulated (more cleanups are needed as
well) and maybe it's just useless (hard to say here because I always
see 'pass' whatever I try to change), but anyway. Also pushed to the
usual branch.
---
include/linux/sched/deadline.h | 4 ++++
kernel/cgroup/cpuset.c | 13 ++++++++-----
kernel/sched/deadline.c | 11 ++++++++---
kernel/sched/topology.c | 1 +
4 files changed, 21 insertions(+), 8 deletions(-)
diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
index 3a912ab42bb5..8fc4918c6f3f 100644
--- a/include/linux/sched/deadline.h
+++ b/include/linux/sched/deadline.h
@@ -34,6 +34,10 @@ static inline bool dl_time_before(u64 a, u64 b)
struct root_domain;
extern void dl_add_task_root_domain(struct task_struct *p);
extern void dl_clear_root_domain(struct root_domain *rd);
+extern void dl_clear_root_domain_cpu(int cpu);
+
+extern u64 dl_generation;
+extern bool dl_bw_visited(int cpu, u64 gen);
#endif /* CONFIG_SMP */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 0f910c828973..52243dcc61ab 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -958,6 +958,8 @@ static void dl_rebuild_rd_accounting(void)
{
struct cpuset *cs = NULL;
struct cgroup_subsys_state *pos_css;
+ int cpu;
+ u64 gen = ++dl_generation;
lockdep_assert_held(&cpuset_mutex);
lockdep_assert_cpus_held();
@@ -965,11 +967,12 @@ static void dl_rebuild_rd_accounting(void)
rcu_read_lock();
- /*
- * Clear default root domain DL accounting, it will be computed again
- * if a task belongs to it.
- */
- dl_clear_root_domain(&def_root_domain);
+ for_each_possible_cpu(cpu) {
+ if (dl_bw_visited(cpu, gen))
+ continue;
+
+ dl_clear_root_domain_cpu(cpu);
+ }
cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 8f7420e0c9d6..a6723ed84e68 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -166,7 +166,7 @@ static inline unsigned long dl_bw_capacity(int i)
}
}
-static inline bool dl_bw_visited(int cpu, u64 gen)
+bool dl_bw_visited(int cpu, u64 gen)
{
struct root_domain *rd = cpu_rq(cpu)->rd;
@@ -207,7 +207,7 @@ static inline unsigned long dl_bw_capacity(int i)
return SCHED_CAPACITY_SCALE;
}
-static inline bool dl_bw_visited(int cpu, u64 gen)
+bool dl_bw_visited(int cpu, u64 gen)
{
return false;
}
@@ -3037,6 +3037,11 @@ void dl_clear_root_domain(struct root_domain *rd)
}
}
+void dl_clear_root_domain_cpu(int cpu) {
+ printk_deferred("%s: cpu=%d\n", __func__, cpu);
+ dl_clear_root_domain(cpu_rq(cpu)->rd);
+}
+
#endif /* CONFIG_SMP */
static void switched_from_dl(struct rq *rq, struct task_struct *p)
@@ -3216,7 +3221,7 @@ DEFINE_SCHED_CLASS(dl) = {
};
/* Used for dl_bw check and update, used under sched_rt_handler()::mutex */
-static u64 dl_generation;
+u64 dl_generation;
int sched_dl_global_validate(void)
{
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index c6a140d8d851..9892e6fa3e57 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2814,5 +2814,6 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
{
mutex_lock(&sched_domains_mutex);
partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ dl_rebuild_rd_accounting();
mutex_unlock(&sched_domains_mutex);
}
On 20/02/2025 15:25, Juri Lelli wrote:
> On 20/02/25 11:40, Juri Lelli wrote:
>> On 19/02/25 19:14, Dietmar Eggemann wrote:
>
> [...]
>
Latest branch is not building for me ...
CC kernel/time/hrtimer.o
In file included from kernel/sched/build_utility.c:88:
kernel/sched/topology.c: In function ‘partition_sched_domains’:
kernel/sched/topology.c:2817:9: error: implicit declaration of function ‘dl_rebuild_rd_accounting’ [-Werror=implicit-function-declaration]
2817 | dl_rebuild_rd_accounting();
| ^~~~~~~~~~~~~~~~~~~~~~~~
Looks like we are missing a prototype.
Jon
--
nvpublic
On 21/02/2025 12:56, Jon Hunter wrote:
>
> On 20/02/2025 15:25, Juri Lelli wrote:
>> On 20/02/25 11:40, Juri Lelli wrote:
>>> On 19/02/25 19:14, Dietmar Eggemann wrote:
[...]
> Latest branch is not building for me ...
>
> CC kernel/time/hrtimer.o
> In file included from kernel/sched/build_utility.c:88:
> kernel/sched/topology.c: In function ‘partition_sched_domains’:
> kernel/sched/topology.c:2817:9: error: implicit declaration of function
> ‘dl_rebuild_rd_accounting’ [-Werror=implicit-function-declaration]
> 2817 | dl_rebuild_rd_accounting();
> | ^~~~~~~~~~~~~~~~~~~~~~~~
This should fix it for now:
-->8--
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 52243dcc61ab..3484dda93a94 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -954,7 +954,9 @@ static void dl_update_tasks_root_domain(struct cpuset *cs)
css_task_iter_end(&it);
}
-static void dl_rebuild_rd_accounting(void)
+extern void dl_rebuild_rd_accounting(void);
+
+void dl_rebuild_rd_accounting(void)
{
struct cpuset *cs = NULL;
struct cgroup_subsys_state *pos_css;
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9892e6fa3e57..60c9996ccf47 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2806,6 +2806,8 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
update_sched_domain_debugfs();
}
+extern void dl_rebuild_rd_accounting(void);
+
/*
* Call with hotplug lock held
*/
On 21/02/2025 15:45, Dietmar Eggemann wrote:
> On 21/02/2025 12:56, Jon Hunter wrote:
>>
>> On 20/02/2025 15:25, Juri Lelli wrote:
>>> On 20/02/25 11:40, Juri Lelli wrote:
>>>> On 19/02/25 19:14, Dietmar Eggemann wrote:
>
> [...]
>
>> Latest branch is not building for me ...
>>
>> CC kernel/time/hrtimer.o
>> In file included from kernel/sched/build_utility.c:88:
>> kernel/sched/topology.c: In function ‘partition_sched_domains’:
>> kernel/sched/topology.c:2817:9: error: implicit declaration of function
>> ‘dl_rebuild_rd_accounting’ [-Werror=implicit-function-declaration]
>> 2817 | dl_rebuild_rd_accounting();
>> | ^~~~~~~~~~~~~~~~~~~~~~~~
>
> This should fix it for now:
>
> [...]
Looks OK now for me.
So DL accounting is now redone in both
partition_and_rebuild_sched_domains() and partition_sched_domains()!
This is a build from your branch:
https://github.com/jlelli/linux.git experimental/dl-debug
cpumask = [l B B l l l] (l = LITTLE CPU, B = big CPU), isolcpus=3-4, no sugov tasks
---
[ 464.034212] psci: CPU5 killed (polled 0 ms)
[ 464.035211] dl_bw_manage: cpu=4 cap=446 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DEF span=3-5
[ 464.035294] dl_clear_root_domain: span=0-2 type=DYN
[ 464.035306] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.035324] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.035341] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.035358] rd 0-2: Checking EAS, schedutil is mandatory
[ 464.035369] dl_clear_root_domain_cpu: cpu=0
[ 464.035375] dl_clear_root_domain: span=0-2 type=DYN
[ 464.035384] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.035401] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.035418] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.035433] dl_clear_root_domain_cpu: cpu=3
[ 464.035439] dl_clear_root_domain: span=3-5 type=DEF
[ 464.035448] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=3-5 type=DEF
[ 464.037088] psci: CPU4 killed (polled 0 ms)
[ 464.037497] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=52428 dl_bw_cpus=1 type=DEF span=3-5
[ 464.037576] dl_clear_root_domain: span=0-2 type=DYN
[ 464.037588] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.037607] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.037624] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.037640] rd 0-2: Checking EAS, schedutil is mandatory
[ 464.037651] dl_clear_root_domain_cpu: cpu=0
[ 464.037658] dl_clear_root_domain: span=0-2 type=DYN
[ 464.037667] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.037683] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.037700] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.037714] dl_clear_root_domain_cpu: cpu=3
[ 464.037720] dl_clear_root_domain: span=3-5 type=DEF
[ 464.038687] psci: CPU3 killed (polled 4 ms)
---
full suspend/resume log:
[ 464.867592] PM: suspend entry (deep)
[ 463.871388] Filesystems sync: 0.000 seconds
[ 463.881593] Freezing user space processes
[ 463.887017] Freezing user space processes completed (elapsed 0.005 seconds)
[ 463.894039] OOM killer disabled.
[ 463.897294] Freezing remaining freezable tasks
[ 463.903019] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
[ 463.910482] printk: Suspending console(s) (use no_console_suspend to debug)
[ 464.029605] Disabling non-boot CPUs ...
[ 464.029783] dl_bw_manage: cpu=5 cap=2494 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0-2,5
[ 464.029943] CPU0 attaching NULL sched-domain.
[ 464.029953] span=3-4
[ 464.029979] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2,5 type=DYN
[ 464.030000] __dl_server_detach_root: cpu=0 rd_span=0-2,5 total_bw=157284
[ 464.030014] rq_attach_root: cpu=0 old_span=NULL new_span=3-4
[ 464.030029] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DEF
[ 464.030046] __dl_server_attach_root: cpu=0 rd_span=0,3-4 total_bw=157284
[ 464.030059] CPU1 attaching NULL sched-domain.
[ 464.030065] span=0,3-4
[ 464.030087] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2,5 type=DYN
[ 464.030104] __dl_server_detach_root: cpu=1 rd_span=1-2,5 total_bw=104856
[ 464.030115] rq_attach_root: cpu=1 old_span=NULL new_span=0,3-4
[ 464.030128] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-1,3-4 type=DEF
[ 464.030144] __dl_server_attach_root: cpu=1 rd_span=0-1,3-4 total_bw=209712
[ 464.030156] CPU2 attaching NULL sched-domain.
[ 464.030161] span=0-1,3-4
[ 464.030182] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=2,5 type=DYN
[ 464.030198] __dl_server_detach_root: cpu=2 rd_span=2,5 total_bw=52428
[ 464.030210] rq_attach_root: cpu=2 old_span=NULL new_span=0-1,3-4
[ 464.030222] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-4 type=DEF
[ 464.030238] __dl_server_attach_root: cpu=2 rd_span=0-4 total_bw=262140
[ 464.030249] CPU5 attaching NULL sched-domain.
[ 464.030255] span=0-4
[ 464.030264] rq_attach_root: cpu=5 old_span= new_span=0-4
[ 464.030428] CPU0 attaching sched-domain(s):
[ 464.030435] domain-0: span=0-2 level=PKG
[ 464.030452] groups: 0:{ span=0 cap=445 }, 1:{ span=1-2 cap=2040 }
[ 464.030505] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
[ 464.030523] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=209712
[ 464.030534] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 464.030546] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 464.030562] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 464.030575] CPU1 attaching sched-domain(s):
[ 464.030581] domain-0: span=1-2 level=MC
[ 464.030594] groups: 1:{ span=1 cap=1019 }, 2:{ span=2 cap=1021 }
[ 464.030684] domain-1: span=0-2 level=PKG
[ 464.030698] groups: 1:{ span=1-2 cap=2040 }, 0:{ span=0 cap=445 }
[ 464.030748] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-5 type=DEF
[ 464.030766] __dl_server_detach_root: cpu=1 rd_span=1-5 total_bw=157284
[ 464.030778] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 464.030790] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0-1 type=DYN
[ 464.030806] __dl_server_attach_root: cpu=1 rd_span=0-1 total_bw=104856
[ 464.030819] CPU2 attaching sched-domain(s):
[ 464.030826] domain-0: span=1-2 level=MC
[ 464.030839] groups: 2:{ span=2 cap=1021 }, 1:{ span=1 cap=1019 }
[ 464.030879] domain-1: span=0-2 level=PKG
[ 464.030892] groups: 1:{ span=1-2 cap=2040 }, 0:{ span=0 cap=445 }
[ 464.030943] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=2-5 type=DEF
[ 464.030960] __dl_server_detach_root: cpu=2 rd_span=2-5 total_bw=104856
[ 464.030971] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
[ 464.030984] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.030999] __dl_server_attach_root: cpu=2 rd_span=0-2 total_bw=157284
[ 464.031055] root domain span: 0-2
[ 464.031062] default domain span: 3-5
[ 464.031092] rd 0-2: Checking EAS, schedutil is mandatory
[ 464.032315] dl_clear_root_domain_cpu: cpu=0
[ 464.032324] dl_clear_root_domain: span=0-2 type=DYN
[ 464.032336] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.032354] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.032370] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.032385] dl_clear_root_domain_cpu: cpu=3
[ 464.032392] dl_clear_root_domain: span=3-5 type=DEF
[ 464.032401] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=3-5 type=DEF
[ 464.032421] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=3-5 type=DEF
[ 464.034212] psci: CPU5 killed (polled 0 ms)
[ 464.035211] dl_bw_manage: cpu=4 cap=446 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DEF span=3-5
[ 464.035294] dl_clear_root_domain: span=0-2 type=DYN
[ 464.035306] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.035324] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.035341] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.035358] rd 0-2: Checking EAS, schedutil is mandatory
[ 464.035369] dl_clear_root_domain_cpu: cpu=0
[ 464.035375] dl_clear_root_domain: span=0-2 type=DYN
[ 464.035384] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.035401] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.035418] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.035433] dl_clear_root_domain_cpu: cpu=3
[ 464.035439] dl_clear_root_domain: span=3-5 type=DEF
[ 464.035448] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=3-5 type=DEF
[ 464.037088] psci: CPU4 killed (polled 0 ms)
[ 464.037497] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=52428 dl_bw_cpus=1 type=DEF span=3-5
[ 464.037576] dl_clear_root_domain: span=0-2 type=DYN
[ 464.037588] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.037607] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.037624] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.037640] rd 0-2: Checking EAS, schedutil is mandatory
[ 464.037651] dl_clear_root_domain_cpu: cpu=0
[ 464.037658] dl_clear_root_domain: span=0-2 type=DYN
[ 464.037667] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 464.037683] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.037700] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 464.037714] dl_clear_root_domain_cpu: cpu=3
[ 464.037720] dl_clear_root_domain: span=3-5 type=DEF
[ 464.038687] psci: CPU3 killed (polled 4 ms)
[ 464.039106] dl_bw_manage: cpu=2 cap=1470 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3 type=DYN span=0-2
[ 464.039317] CPU0 attaching NULL sched-domain.
[ 464.039328] span=3-5
[ 464.039358] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 464.039384] __dl_server_detach_root: cpu=0 rd_span=0-2 total_bw=104856
[ 464.039401] rq_attach_root: cpu=0 old_span=NULL new_span=3-5
[ 464.039419] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DEF
[ 464.039439] __dl_server_attach_root: cpu=0 rd_span=0,3-5 total_bw=52428
[ 464.039456] CPU1 attaching NULL sched-domain.
[ 464.039465] span=0,3-5
[ 464.039491] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=1-2 type=DYN
[ 464.039512] __dl_server_detach_root: cpu=1 rd_span=1-2 total_bw=52428
[ 464.039527] rq_attach_root: cpu=1 old_span=NULL new_span=0,3-5
[ 464.039544] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0-1,3-5 type=DEF
[ 464.039563] __dl_server_attach_root: cpu=1 rd_span=0-1,3-5 total_bw=104856
[ 464.039578] CPU2 attaching NULL sched-domain.
[ 464.039588] span=0-1,3-5
[ 464.039602] rq_attach_root: cpu=2 old_span= new_span=0-1,3-5
[ 464.039754] CPU0 attaching sched-domain(s):
[ 464.039763] domain-0: span=0-1 level=PKG
[ 464.039786] groups: 0:{ span=0 cap=444 }, 1:{ span=1 cap=1015 }
[ 464.039869] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=0-5 type=DEF
[ 464.039892] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=52428
[ 464.039906] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 464.039923] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 464.039942] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 464.039962] CPU1 attaching sched-domain(s):
[ 464.039972] domain-0: span=0-1 level=PKG
[ 464.039992] groups: 1:{ span=1 cap=1015 }, 0:{ span=0 cap=444 }
[ 464.040071] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=1-5 type=DEF
[ 464.040091] __dl_server_detach_root: cpu=1 rd_span=1-5 total_bw=0
[ 464.040105] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 464.040120] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0-1 type=DYN
[ 464.040139] __dl_server_attach_root: cpu=1 rd_span=0-1 total_bw=104856
[ 464.040206] root domain span: 0-1
[ 464.040217] default domain span: 2-5
[ 464.040250] rd 0-1: Checking EAS, schedutil is mandatory
[ 464.041269] dl_clear_root_domain_cpu: cpu=0
[ 464.041279] dl_clear_root_domain: span=0-1 type=DYN
[ 464.041294] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=0-1 type=DYN
[ 464.041319] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0-1 type=DYN
[ 464.041339] dl_clear_root_domain_cpu: cpu=2
[ 464.041347] dl_clear_root_domain: span=2-5 type=DEF
[ 464.042683] psci: CPU2 killed (polled 4 ms)
[ 464.043689] dl_bw_manage: cpu=1 cap=446 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DYN span=0-1
[ 464.043889] CPU0 attaching NULL sched-domain.
[ 464.043901] span=2-5
[ 464.043930] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=0-1 type=DYN
[ 464.043957] __dl_server_detach_root: cpu=0 rd_span=0-1 total_bw=52428
[ 464.043974] rq_attach_root: cpu=0 old_span=NULL new_span=2-5
[ 464.043996] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0,2-5 type=DEF
[ 464.044018] __dl_server_attach_root: cpu=0 rd_span=0,2-5 total_bw=52428
[ 464.044034] CPU1 attaching NULL sched-domain.
[ 464.044044] span=0,2-5
[ 464.044058] rq_attach_root: cpu=1 old_span= new_span=0,2-5
[ 464.044178] CPU0 attaching NULL sched-domain.
[ 464.044188] span=0-5
[ 464.044213] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=0-5 type=DEF
[ 464.044236] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=0
[ 464.044251] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 464.044267] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 464.044288] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 464.044303] root domain span: 0
[ 464.044313] default domain span: 1-5
[ 464.044343] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 464.044677] dl_clear_root_domain_cpu: cpu=0
[ 464.044688] dl_clear_root_domain: span=0 type=DYN
[ 464.044701] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 464.044722] dl_clear_root_domain_cpu: cpu=1
[ 464.044731] dl_clear_root_domain: span=1-5 type=DEF
[ 464.081574] psci: CPU1 killed (polled 0 ms)
[ 464.082786] PM: suspend debug: Waiting for 5 second(s).
[ 469.083692] Enabling non-boot CPUs ...
[ 469.085209] Detected PIPT I-cache on CPU1
[ 469.085316] CPU1: Booted secondary processor 0x0000000000 [0x410fd070]
[ 469.118495] SCMI Notifications - Failed to ENABLE events for key:13000000 !
[ 469.118545] CPU0 attaching NULL sched-domain.
[ 469.118555] span=1-5
[ 469.118601] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=0 type=DYN
[ 469.118630] __dl_server_detach_root: cpu=0 rd_span=0 total_bw=0
[ 469.118647] rq_attach_root: cpu=0 old_span= new_span=1-5
[ 469.118668] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=0-5 type=DEF
[ 469.118689] __dl_server_attach_root: cpu=0 rd_span=0-5 total_bw=52428
[ 469.118843] CPU0 attaching sched-domain(s):
[ 469.118853] domain-0: span=0-1 level=PKG
[ 469.118876] groups: 0:{ span=0 cap=445 }, 1:{ span=1 cap=1023 }
[ 469.118961] __dl_sub: cpus=2 tsk_bw=52428 total_bw=0 span=0-5 type=DEF
[ 469.118984] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=0
[ 469.118998] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 469.119014] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 469.119033] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 469.119052] CPU1 attaching sched-domain(s):
[ 469.119062] domain-0: span=0-1 level=PKG
[ 469.119082] groups: 1:{ span=1 cap=1023 }, 0:{ span=0 cap=445 }
[ 469.119148] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 469.119165] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0-1 type=DYN
[ 469.119185] __dl_server_attach_root: cpu=1 rd_span=0-1 total_bw=104856
[ 469.119254] root domain span: 0-1
[ 469.119264] default domain span: 2-5
[ 469.119297] rd 0-1: Checking EAS, schedutil is mandatory
[ 469.119559] dl_clear_root_domain_cpu: cpu=0
[ 469.119569] dl_clear_root_domain: span=0-1 type=DYN
[ 469.119583] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=0-1 type=DYN
[ 469.119604] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0-1 type=DYN
[ 469.119624] dl_clear_root_domain_cpu: cpu=2
[ 469.119632] dl_clear_root_domain: span=2-5 type=DEF
[ 469.119683] CPU1 is up
[ 469.121171] Detected PIPT I-cache on CPU2
[ 469.121260] CPU2: Booted secondary processor 0x0000000001 [0x410fd070]
[ 469.121900] CPU0 attaching NULL sched-domain.
[ 469.121913] span=2-5
[ 469.121950] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=0-1 type=DYN
[ 469.121979] __dl_server_detach_root: cpu=0 rd_span=0-1 total_bw=52428
[ 469.121996] rq_attach_root: cpu=0 old_span=NULL new_span=2-5
[ 469.122014] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=0,2-5 type=DEF
[ 469.122035] __dl_server_attach_root: cpu=0 rd_span=0,2-5 total_bw=52428
[ 469.122051] CPU1 attaching NULL sched-domain.
[ 469.122060] span=0,2-5
[ 469.122088] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=1 type=DYN
[ 469.122109] __dl_server_detach_root: cpu=1 rd_span=1 total_bw=0
[ 469.122124] rq_attach_root: cpu=1 old_span= new_span=0,2-5
[ 469.122143] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-5 type=DEF
[ 469.122162] __dl_server_attach_root: cpu=1 rd_span=0-5 total_bw=104856
[ 469.122344] CPU0 attaching sched-domain(s):
[ 469.122354] domain-0: span=0-2 level=PKG
[ 469.122376] groups: 0:{ span=0 cap=445 }, 1:{ span=1-2 cap=2046 }
[ 469.122499] __dl_sub: cpus=3 tsk_bw=52428 total_bw=52428 span=0-5 type=DEF
[ 469.122523] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=52428
[ 469.122538] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 469.122554] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 469.122573] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 469.122592] CPU1 attaching sched-domain(s):
[ 469.122601] domain-0: span=1-2 level=MC
[ 469.122621] groups: 1:{ span=1 cap=1022 }, 2:{ span=2 }
[ 469.122680] domain-1: span=0-2 level=PKG
[ 469.122699] groups: 1:{ span=1-2 cap=2046 }, 0:{ span=0 cap=445 }
[ 469.122778] __dl_sub: cpus=2 tsk_bw=52428 total_bw=0 span=1-5 type=DEF
[ 469.122799] __dl_server_detach_root: cpu=1 rd_span=1-5 total_bw=0
[ 469.122813] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 469.122829] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0-1 type=DYN
[ 469.122848] __dl_server_attach_root: cpu=1 rd_span=0-1 total_bw=104856
[ 469.122865] CPU2 attaching sched-domain(s):
[ 469.122875] domain-0: span=1-2 level=MC
[ 469.122894] groups: 2:{ span=2 }, 1:{ span=1 cap=1022 }
[ 469.122951] domain-1: span=0-2 level=PKG
[ 469.122970] groups: 1:{ span=1-2 cap=2046 }, 0:{ span=0 cap=445 }
[ 469.123035] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
[ 469.123052] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 469.123071] __dl_server_attach_root: cpu=2 rd_span=0-2 total_bw=157284
[ 469.123139] root domain span: 0-2
[ 469.123149] default domain span: 3-5
[ 469.123184] rd 0-2: Checking EAS, schedutil is mandatory
[ 469.124043] dl_clear_root_domain_cpu: cpu=0
[ 469.124053] dl_clear_root_domain: span=0-2 type=DYN
[ 469.124067] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 469.124089] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 469.124108] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 469.124126] dl_clear_root_domain_cpu: cpu=3
[ 469.124134] dl_clear_root_domain: span=3-5 type=DEF
[ 469.124188] CPU2 is up
[ 469.125145] Detected VIPT I-cache on CPU3
[ 469.125236] CPU3: Booted secondary processor 0x0000000101 [0x410fd030]
[ 469.125804] dl_clear_root_domain: span=0-2 type=DYN
[ 469.125821] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 469.125841] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 469.125859] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 469.125877] rd 0-2: Checking EAS, schedutil is mandatory
[ 469.125887] dl_clear_root_domain_cpu: cpu=0
[ 469.125894] dl_clear_root_domain: span=0-2 type=DYN
[ 469.125904] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 469.125920] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 469.125936] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 469.125951] dl_clear_root_domain_cpu: cpu=3
[ 469.125958] dl_clear_root_domain: span=3-5 type=DEF
[ 469.125967] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=3-5 type=DEF
[ 469.126017] CPU3 is up
[ 469.126368] Detected VIPT I-cache on CPU4
[ 469.126439] CPU4: Booted secondary processor 0x0000000102 [0x410fd030]
[ 469.126950] dl_clear_root_domain: span=0-2 type=DYN
[ 469.126966] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 469.126986] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 469.127003] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 469.127020] rd 0-2: Checking EAS, schedutil is mandatory
[ 469.127030] dl_clear_root_domain_cpu: cpu=0
[ 469.127037] dl_clear_root_domain: span=0-2 type=DYN
[ 469.127047] __dl_add: cpus=3 tsk_bw=52428 total_bw=52428 span=0-2 type=DYN
[ 469.127063] __dl_add: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 469.127079] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 469.127094] dl_clear_root_domain_cpu: cpu=3
[ 469.127100] dl_clear_root_domain: span=3-5 type=DEF
[ 469.127110] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=3-5 type=DEF
[ 469.127126] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=3-5 type=DEF
[ 469.127175] CPU4 is up
[ 469.127529] Detected VIPT I-cache on CPU5
[ 469.127597] CPU5: Booted secondary processor 0x0000000103 [0x410fd030]
[ 469.128253] CPU0 attaching NULL sched-domain.
[ 469.128261] span=3-5
[ 469.128294] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=0-2 type=DYN
[ 469.128314] __dl_server_detach_root: cpu=0 rd_span=0-2 total_bw=104856
[ 469.128326] rq_attach_root: cpu=0 old_span=NULL new_span=3-5
[ 469.128341] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DEF
[ 469.128358] __dl_server_attach_root: cpu=0 rd_span=0,3-5 total_bw=157284
[ 469.128370] CPU1 attaching NULL sched-domain.
[ 469.128376] span=0,3-5
[ 469.128395] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DYN
[ 469.128412] __dl_server_detach_root: cpu=1 rd_span=1-2 total_bw=52428
[ 469.128423] rq_attach_root: cpu=1 old_span=NULL new_span=0,3-5
[ 469.128436] __dl_add: cpus=5 tsk_bw=52428 total_bw=209712 span=0-1,3-5 type=DEF
[ 469.128453] __dl_server_attach_root: cpu=1 rd_span=0-1,3-5 total_bw=209712
[ 469.128465] CPU2 attaching NULL sched-domain.
[ 469.128470] span=0-1,3-5
[ 469.128490] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=2 type=DYN
[ 469.128506] __dl_server_detach_root: cpu=2 rd_span=2 total_bw=0
[ 469.128518] rq_attach_root: cpu=2 old_span= new_span=0-1,3-5
[ 469.128531] __dl_add: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 469.128547] __dl_server_attach_root: cpu=2 rd_span=0-5 total_bw=262140
[ 469.128739] CPU0 attaching sched-domain(s):
[ 469.128747] domain-0: span=0,5 level=MC
[ 469.128763] groups: 0:{ span=0 cap=445 }, 5:{ span=5 cap=446 }
[ 469.128804] domain-1: span=0-2,5 level=PKG
[ 469.128818] groups: 0:{ span=0,5 cap=891 }, 1:{ span=1-2 cap=2042 }
[ 469.128871] __dl_sub: cpus=6 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
[ 469.128888] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=209712
[ 469.128899] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 469.128911] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 469.128928] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 469.128941] CPU1 attaching sched-domain(s):
[ 469.128947] domain-0: span=1-2 level=MC
[ 469.128961] groups: 1:{ span=1 cap=1019 }, 2:{ span=2 cap=1023 }
[ 469.129001] domain-1: span=0-2,5 level=PKG
[ 469.129014] groups: 1:{ span=1-2 cap=2042 }, 0:{ span=0,5 cap=891 }
[ 469.129065] __dl_sub: cpus=5 tsk_bw=52428 total_bw=157284 span=1-5 type=DEF
[ 469.129083] __dl_server_detach_root: cpu=1 rd_span=1-5 total_bw=157284
[ 469.129094] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 469.129106] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0-1 type=DYN
[ 469.129122] __dl_server_attach_root: cpu=1 rd_span=0-1 total_bw=104856
[ 469.129135] CPU2 attaching sched-domain(s):
[ 469.129141] domain-0: span=1-2 level=MC
[ 469.129154] groups: 2:{ span=2 cap=1023 }, 1:{ span=1 cap=1019 }
[ 469.129194] domain-1: span=0-2,5 level=PKG
[ 469.129208] groups: 1:{ span=1-2 cap=2042 }, 0:{ span=0,5 cap=891 }
[ 469.129258] __dl_sub: cpus=4 tsk_bw=52428 total_bw=104856 span=2-5 type=DEF
[ 469.129275] __dl_server_detach_root: cpu=2 rd_span=2-5 total_bw=104856
[ 469.129286] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
[ 469.129299] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DYN
[ 469.129314] __dl_server_attach_root: cpu=2 rd_span=0-2 total_bw=157284
[ 469.129327] CPU5 attaching sched-domain(s):
[ 469.129333] domain-0: span=0,5 level=MC
[ 469.129347] groups: 5:{ span=5 cap=446 }, 0:{ span=0 cap=445 }
[ 469.129387] domain-1: span=0-2,5 level=PKG
[ 469.129400] groups: 0:{ span=0,5 cap=891 }, 1:{ span=1-2 cap=2042 }
[ 469.129442] rq_attach_root: cpu=5 old_span=NULL new_span=0-2
[ 469.129455] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DYN
[ 469.129472] __dl_server_attach_root: cpu=5 rd_span=0-2,5 total_bw=209712
[ 469.129528] root domain span: 0-2,5
[ 469.129536] default domain span: 3-4
[ 469.129565] rd 0-2,5: Checking EAS, schedutil is mandatory
[ 469.130838] dl_clear_root_domain_cpu: cpu=0
[ 469.130848] dl_clear_root_domain: span=0-2,5 type=DYN
[ 469.130860] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0-2,5 type=DYN
[ 469.130881] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0-2,5 type=DYN
[ 469.130898] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0-2,5 type=DYN
[ 469.130915] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DYN
[ 469.130930] dl_clear_root_domain_cpu: cpu=3
[ 469.130936] dl_clear_root_domain: span=3-4 type=DEF
[ 469.130945] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=3-4 type=DEF
[ 469.130961] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=3-4 type=DEF
[ 469.130986] dl_clear_root_domain: span=0-2,5 type=DYN
[ 469.130996] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0-2,5 type=DYN
[ 469.131012] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0-2,5 type=DYN
[ 469.131029] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0-2,5 type=DYN
[ 469.131045] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DYN
[ 469.131062] rd 0-2,5: Checking EAS, schedutil is mandatory
[ 469.131072] dl_clear_root_domain_cpu: cpu=0
[ 469.131078] dl_clear_root_domain: span=0-2,5 type=DYN
[ 469.131088] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0-2,5 type=DYN
[ 469.131105] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0-2,5 type=DYN
[ 469.131122] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0-2,5 type=DYN
[ 469.131138] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-2,5 type=DYN
[ 469.131153] dl_clear_root_domain_cpu: cpu=3
[ 469.131160] dl_clear_root_domain: span=3-4 type=DEF
[ 469.131169] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=3-4 type=DEF
[ 469.131185] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=3-4 type=DEF
[ 469.131237] CPU5 is up
[ 471.429916] OOM killer enabled.
[ 471.433116] Restarting tasks ... done.
[ 471.437876] random: crng reseeded on system resumption
[ 471.443635] PM: suspend exit
On 24/02/25 14:53, Dietmar Eggemann wrote:
> On 21/02/2025 15:45, Dietmar Eggemann wrote:
> > On 21/02/2025 12:56, Jon Hunter wrote:
> >>
> >> On 20/02/2025 15:25, Juri Lelli wrote:
> >>> On 20/02/25 11:40, Juri Lelli wrote:
> >>>> On 19/02/25 19:14, Dietmar Eggemann wrote:
> >
> > [...]
> >
> >> Latest branch is not building for me ...
> >>
> >> CC kernel/time/hrtimer.o
> >> In file included from kernel/sched/build_utility.c:88:
> >> kernel/sched/topology.c: In function ‘partition_sched_domains’:
> >> kernel/sched/topology.c:2817:9: error: implicit declaration of function
> >> ‘dl_rebuild_rd_accounting’ [-Werror=implicit-function-declaration]
> >> 2817 | dl_rebuild_rd_accounting();
> >> | ^~~~~~~~~~~~~~~~~~~~~~~~
> >
> > This should fix it for now:
> >
> > -->8--
> >
> > diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> > index 52243dcc61ab..3484dda93a94 100644
> > --- a/kernel/cgroup/cpuset.c
> > +++ b/kernel/cgroup/cpuset.c
> > @@ -954,7 +954,9 @@ static void dl_update_tasks_root_domain(struct cpuset *cs)
> > css_task_iter_end(&it);
> > }
> >
> > -static void dl_rebuild_rd_accounting(void)
> > +extern void dl_rebuild_rd_accounting(void);
> > +
> > +void dl_rebuild_rd_accounting(void)
> > {
> > struct cpuset *cs = NULL;
> > struct cgroup_subsys_state *pos_css;
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 9892e6fa3e57..60c9996ccf47 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -2806,6 +2806,8 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
> > update_sched_domain_debugfs();
> > }
> >
> > +extern void dl_rebuild_rd_accounting(void);
> > +
> > /*
> > * Call with hotplug lock held
> > */
> >
> >
>
> Looks OK now for me.
>
> So DL accounting in partition_and_rebuild_sched_domains() and
> partition_sched_domains()!
Yeah that's the gist of it. Wait for domains to be stable and recompute
everything.
Thanks for testing. Let's see if Jon can also report good news.
Best,
Juri
Hi Juri,
On 24/02/2025 14:03, Juri Lelli wrote:
> On 24/02/25 14:53, Dietmar Eggemann wrote:
>> On 21/02/2025 15:45, Dietmar Eggemann wrote:
>>> On 21/02/2025 12:56, Jon Hunter wrote:
>>>>
>>>> On 20/02/2025 15:25, Juri Lelli wrote:
>>>>> On 20/02/25 11:40, Juri Lelli wrote:
>>>>>> On 19/02/25 19:14, Dietmar Eggemann wrote:
>>>
>>> [...]
>>>
>>>> Latest branch is not building for me ...
>>>>
>>>> CC kernel/time/hrtimer.o
>>>> In file included from kernel/sched/build_utility.c:88:
>>>> kernel/sched/topology.c: In function ‘partition_sched_domains’:
>>>> kernel/sched/topology.c:2817:9: error: implicit declaration of function
>>>> ‘dl_rebuild_rd_accounting’ [-Werror=implicit-function-declaration]
>>>> 2817 | dl_rebuild_rd_accounting();
>>>> | ^~~~~~~~~~~~~~~~~~~~~~~~
>>>
>>> This should fix it for now:
>>>
>>> -->8--
>>>
>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>> index 52243dcc61ab..3484dda93a94 100644
>>> --- a/kernel/cgroup/cpuset.c
>>> +++ b/kernel/cgroup/cpuset.c
>>> @@ -954,7 +954,9 @@ static void dl_update_tasks_root_domain(struct cpuset *cs)
>>> css_task_iter_end(&it);
>>> }
>>>
>>> -static void dl_rebuild_rd_accounting(void)
>>> +extern void dl_rebuild_rd_accounting(void);
>>> +
>>> +void dl_rebuild_rd_accounting(void)
>>> {
>>> struct cpuset *cs = NULL;
>>> struct cgroup_subsys_state *pos_css;
>>> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>>> index 9892e6fa3e57..60c9996ccf47 100644
>>> --- a/kernel/sched/topology.c
>>> +++ b/kernel/sched/topology.c
>>> @@ -2806,6 +2806,8 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
>>> update_sched_domain_debugfs();
>>> }
>>>
>>> +extern void dl_rebuild_rd_accounting(void);
>>> +
>>> /*
>>> * Call with hotplug lock held
>>> */
>>>
>>>
>>
>> Looks OK now for me.
>>
>> So DL accounting in partition_and_rebuild_sched_domains() and
>> partition_sched_domains()!
>
> Yeah that's the gist of it. Wait for domains to be stable and recompute
> everything.
>
> Thanks for testing. Let's see if Jon can also report good news.
Sorry for the delay. Yes this is working for me too! If you have an
official patch to fix this, then I can give it a test on my side.
Thanks!
Jon
--
nvpublic
Hi Jon,

On 24/02/25 23:39, Jon Hunter wrote:
> Hi Juri,
>
> On 24/02/2025 14:03, Juri Lelli wrote:
> > On 24/02/25 14:53, Dietmar Eggemann wrote:

...

> > > So DL accounting in partition_and_rebuild_sched_domains() and
> > > partition_sched_domains()!
> >
> > Yeah that's the gist of it. Wait for domains to be stable and recompute
> > everything.
> >
> > Thanks for testing. Let's see if Jon can also report good news.
>
> Sorry for the delay. Yes this is working for me too! If you have an official
> patch to fix this, then I can give it a test on my side.

Good! Thanks for testing and confirming it works for you now.

I will be cleaning up the changes and send them out separately.

Best,
Juri
Hi Juri,

On 25/02/2025 09:48, Juri Lelli wrote:
> Hi Jon,
>
> On 24/02/25 23:39, Jon Hunter wrote:
>> Hi Juri,
>>
>> On 24/02/2025 14:03, Juri Lelli wrote:
>>> On 24/02/25 14:53, Dietmar Eggemann wrote:
>
> ...
>
>>>> So DL accounting in partition_and_rebuild_sched_domains() and
>>>> partition_sched_domains()!
>>>
>>> Yeah that's the gist of it. Wait for domains to be stable and recompute
>>> everything.
>>>
>>> Thanks for testing. Let's see if Jon can also report good news.
>>
>> Sorry for the delay. Yes this is working for me too! If you have an official
>> patch to fix this, then I can give it a test on my side.
>
> Good! Thanks for testing and confirming it works for you now.
>
> I will be cleaning up the changes and send them out separately.

I just wanted to see if you have posted anything yet? I was not sure if I
missed it.

Thanks!
Jon

--
nvpublic
Hi Jon,

On 03/03/25 14:17, Jon Hunter wrote:
> Hi Juri,
>
> On 25/02/2025 09:48, Juri Lelli wrote:
> > Hi Jon,
> >
> > On 24/02/25 23:39, Jon Hunter wrote:
> > > Hi Juri,
> > >
> > > On 24/02/2025 14:03, Juri Lelli wrote:
> > > > On 24/02/25 14:53, Dietmar Eggemann wrote:
> >
> > ...
> >
> > > > > So DL accounting in partition_and_rebuild_sched_domains() and
> > > > > partition_sched_domains()!
> > > >
> > > > Yeah that's the gist of it. Wait for domains to be stable and recompute
> > > > everything.
> > > >
> > > > Thanks for testing. Let's see if Jon can also report good news.
> > >
> > > Sorry for the delay. Yes this is working for me too! If you have an official
> > > patch to fix this, then I can give it a test on my side.
> >
> > Good! Thanks for testing and confirming it works for you now.
> >
> > I will be cleaning up the changes and send them out separately.
>
> I just wanted to see if you have posted anything yet? I was not sure if I
> missed it.

You didn't miss anything. I cleaned up and refreshed the set and I am
currently waiting for bots to tell me if it's good to be posted. Should
be able to send it out in the next few days (of course you will be
cc-ed :).

Thanks,
Juri
On 19/02/2025 10:02, Juri Lelli wrote:
> On 19/02/25 10:29, Dietmar Eggemann wrote:
>
> ...
>
>> I did now.
>
> Thanks!
>
>> Patch-wise I have:
>>
>> (1) Putting 'fair_server's __dl_server_[de|at]tach_root() under if
>> '(cpumask_test_cpu(rq->cpu, [old_rd->online|cpu_active_mask))' in
>> rq_attach_root()
>>
>> https://lkml.kernel.org/r/Z7RhNmLpOb7SLImW@jlelli-thinkpadt14gen4.remote.csb
>>
>> (2) Create __dl_server_detach_root() and call it in rq_attach_root()
>>
>> https://lkml.kernel.org/r/Z4fd_6M2vhSMSR0i@jlelli-thinkpadt14gen4.remote.csb
>>
>> plus debug patch:
>>
>> https://lkml.kernel.org/r/Z6M5fQB9P1_bDF7A@jlelli-thinkpadt14gen4.remote.csb
>>
>> plus additional debug.
>
> So you don't have the one with which we ignore special tasks while
> rebuilding domains?
>
> https://lore.kernel.org/all/Z6spnwykg6YSXBX_@jlelli-thinkpadt14gen4.remote.csb/
>
> Could you please double check again against
>
> git@github.com:jlelli/linux.git experimental/dl-debug
>
>> The suspend issue still persists.
>>
>> My hunch is that it's rather an issue with having 0 CPUs left in DEF
>> while deactivating the last isol CPU (CPU3) so we set overflow = 1 w/o
>> calling __dl_overflow(). We want to account fair_server_bw=52428
>> against 0 CPUs.
>>
>> l B B l l l
>>
>> ^^^
>> isolcpus=[3,4]
>>
>>
>> cpumask_and(mask, rd->span, cpu_active_mask)
>>
>> mask = [3-5] & [0-3] = [3] -> dl_bw_cpus(3) = 1
>>
>> ---
>>
>> dl_bw_deactivate() called cpu=5
>>
>> dl_bw_deactivate() called cpu=4
>>
>> dl_bw_deactivate() called cpu=3
>>
>> dl_bw_cpus() cpu=6 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
>> ^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
>> cpumask_subset(rd->span, cpu_active_mask) is false
>>
>> for_each_cpu_and(i, rd->span, cpu_active_mask)
>> cpus++ <-- cpus is 1 !!!
>>
>> dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
> ^^^^^^
> This still looks wrong: with a single cpu remaining we should only have
> the corresponding dl server bandwidth present (unless there is some
> other DL task running).
>
> If you already had the patch ignoring sugovs bandwidth in your set, could
> you please share the full dmesg?
Attached is the full dmesg from my board with your latest branch. I have
not been able to get to the traces yet, because I am using the same
board to debug another issue.
Cheers
Jon
--
nvpublic
U-Boot 2020.04-g6b630d64fd (Feb 19 2021 - 08:38:59 -0800)
SoC: tegra186
Model: NVIDIA P2771-0000-500
Board: NVIDIA P2771-0000
DRAM: 7.8 GiB
MMC: sdhci@3400000: 1, sdhci@3460000: 0
Loading Environment from MMC... *** Warning - bad CRC, using default environment
In: serial
Out: serial
Err: serial
Net:
Warning: ethernet@2490000 using MAC address from ROM
eth0: ethernet@2490000
Hit any key to stop autoboot: 2 1 0
MMC: no card present
switch to partitions #0, OK
mmc0(part 0) is current device
Scanning mmc 0:1...
Found /boot/extlinux/extlinux.conf
Retrieving file: /boot/extlinux/extlinux.conf
489 bytes read in 17 ms (27.3 KiB/s)
1: primary kernel
Retrieving file: /boot/initrd
7236840 bytes read in 187 ms (36.9 MiB/s)
Retrieving file: /boot/Image
47976960 bytes read in 1147 ms (39.9 MiB/s)
append: earlycon console=ttyS0,115200n8 fw_devlink=on root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81087/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2 video=tegrafb no_console_suspend=1 nvdumper_reserved=0x2772e0000 gpt rootfs.slot_suffix= usbcore.old_scheme_first=1 tegraid=18.1.2.0.0 maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.2031647.1 vpr_resize bl_prof_dataptr=0x10000@0x275840000 sdhci_tegra.en_boot_part_access=1 no_console_suspend root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81087/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2
Retrieving file: /boot/dtb/tegra186-p2771-0000.dtb
108349 bytes read in 21 ms (4.9 MiB/s)
## Flattened Device Tree blob at 88400000
Booting using the fdt blob at 0x88400000
Using Device Tree in place at 0000000088400000, end 000000008841d73c
copying carveout for /host1x@13e00000/display-hub@15200000/display@15200000...
copying carveout for /host1x@13e00000/display-hub@15200000/display@15210000...
copying carveout for /host1x@13e00000/display-hub@15200000/display@15220000...
DT node /trusty missing in source; can't copy status
DT node /reserved-memory/fb0_carveout missing in source; can't copy
DT node /reserved-memory/fb1_carveout missing in source; can't copy
DT node /reserved-memory/fb2_carveout missing in source; can't copy
DT node /reserved-memory/ramoops_carveout missing in source; can't copy
DT node /reserved-memory/vpr-carveout missing in source; can't copy
Starting kernel ...
[ 0.000000] Booting Linux on physical CPU 0x0000000100 [0x411fd073]
[ 0.000000] Linux version 6.13.0-rc6-next-20250110-00008-g1a5a0b763ef7 (jonathanh@goldfinger) (aarch64-linux-gcc.br_real (Buildroot 2022.08) 11.3.0, GNU ld (GNU Binutils) 2.38) #1 SMP PREEMPT Tue Feb 18 04:07:30 PST 2025
[ 0.000000] Machine model: NVIDIA Jetson TX2 Developer Kit
[ 0.000000] printk: debug: ignoring loglevel setting.
[ 0.000000] efi: UEFI not found.
[ 0.000000] OF: reserved mem: Reserved memory: unsupported node format, ignoring
[ 0.000000] earlycon: uart0 at MMIO 0x0000000003100000 (options '115200n8')
[ 0.000000] printk: legacy bootconsole [uart0] enabled
[ 0.000000] OF: reserved mem: Reserved memory: unsupported node format, ignoring
[ 0.000000] NUMA: Faking a node at [mem 0x0000000080000000-0x00000002771fffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x274db08c0-0x274db2eff]
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x0000000080000000-0x00000000ffffffff]
[ 0.000000] DMA32 empty
[ 0.000000] Normal [mem 0x0000000100000000-0x00000002771fffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x0000000080000000-0x00000000efffffff]
[ 0.000000] node 0: [mem 0x00000000f0200000-0x00000002757fffff]
[ 0.000000] node 0: [mem 0x0000000275e00000-0x0000000275ffffff]
[ 0.000000] node 0: [mem 0x0000000276600000-0x00000002767fffff]
[ 0.000000] node 0: [mem 0x0000000277000000-0x00000002771fffff]
[ 0.000000] Initmem setup node 0 [mem 0x0000000080000000-0x00000002771fffff]
[ 0.000000] On node 0, zone DMA: 512 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 1536 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 1536 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 2048 pages in unavailable ranges
[ 0.000000] On node 0, zone Normal: 3584 pages in unavailable ranges
[ 0.000000] cma: Reserved 32 MiB at 0x00000000fe000000 on node -1
[ 0.000000] psci: probing for conduit method from DT.
[ 0.000000] psci: PSCIv1.0 detected in firmware.
[ 0.000000] psci: Using standard PSCI v0.2 function IDs
[ 0.000000] psci: MIGRATE_INFO_TYPE not supported.
[ 0.000000] psci: SMC Calling Convention v1.1
[ 0.000000] percpu: Embedded 25 pages/cpu s61592 r8192 d32616 u102400
[ 0.000000] pcpu-alloc: s61592 r8192 d32616 u102400 alloc=25*4096
[ 0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5
[ 0.000000] Detected PIPT I-cache on CPU0
[ 0.000000] CPU features: detected: Spectre-v2
[ 0.000000] CPU features: detected: Spectre-BHB
[ 0.000000] CPU features: detected: ARM erratum 1742098
[ 0.000000] CPU features: detected: ARM errata 1165522, 1319367, or 1530923
[ 0.000000] alternatives: applying boot alternatives
[ 0.000000] Kernel command line: earlycon console=ttyS0,115200n8 fw_devlink=on root=/dev/nfs rw netdevwait ip=192.168.99.2:192.168.99.1:192.168.99.1:255.255.255.0::eth0:off nfsroot=192.168.99.1:/home/ausvrl81087/nfsroot sched_verbose ignore_loglevel console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2 video=tegrafb no_console_suspend=1 nvdumper_reserved=0x2772e0000 gpt rootfs.slot_suffix= usbcore.old_scheme_first=1 tegraid=18.1.2.0.0 maxcpus=6 boot.slot_suffix= boot.ratchetvalues=0.2031647.1 vpr_resize bl_prof_dataptr=0x10000@0x275840000 sdhci_tegra.en_boot_part_access=1
[ 0.000000] Unknown kernel command line parameters "netdevwait vpr_resize nvdumper_reserved=0x2772e0000 tegraid=18.1.2.0.0 bl_prof_dataptr=0x10000@0x275840000", will be passed to user space.
[ 0.000000] printk: log buffer data + meta data: 131072 + 458752 = 589824 bytes
[ 0.000000] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[ 0.000000] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[ 0.000000] Fallback order for Node 0: 0
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 2055168
[ 0.000000] Policy zone: Normal
[ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[ 0.000000] software IO TLB: area num 8.
[ 0.000000] software IO TLB: mapped [mem 0x00000000fa000000-0x00000000fe000000] (64MB)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
[ 0.000000] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 0.000000] rq_attach_root: cpu=1 old_span=NULL new_span=0
[ 0.000000] rq_attach_root: cpu=2 old_span=NULL new_span=0-1
[ 0.000000] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
[ 0.000000] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
[ 0.000000] rq_attach_root: cpu=5 old_span=NULL new_span=0-4
[ 0.000000] rcu: Preemptible hierarchical RCU implementation.
[ 0.000000] rcu: RCU event tracing is enabled.
[ 0.000000] rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=6.
[ 0.000000] Trampoline variant of Tasks RCU enabled.
[ 0.000000] Tracing variant of Tasks RCU enabled.
[ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
[ 0.000000] RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=6.
[ 0.000000] RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=6.
[ 0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[ 0.000000] Root IRQ handler: gic_handle_irq
[ 0.000000] GIC: Using split EOI/Deactivate mode
[ 0.000000] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[ 0.000000] arch_timer: cp15 timer(s) running at 31.25MHz (phys).
[ 0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0xe6a171046, max_idle_ns: 881590405314 ns
[ 0.000000] sched_clock: 56 bits at 31MHz, resolution 32ns, wraps every 4398046511088ns
[ 0.008831] Console: colour dummy device 80x25
[ 0.013488] printk: legacy console [tty0] enabled
[ 0.018416] printk: legacy bootconsole [uart0] disabled
[ 0.023945] Calibrating delay loop (skipped), value calculated using timer frequency.. 62.50 BogoMIPS (lpj=125000)
[ 0.023960] pid_max: default: 32768 minimum: 301
[ 0.024008] LSM: initializing lsm=capability
[ 0.024113] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[ 0.024142] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[ 0.024660] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0-5 type=DEF
[ 0.035964] rcu: Hierarchical SRCU implementation.
[ 0.035974] rcu: Max phase no-delay instances is 1000.
[ 0.036156] Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
[ 0.042217] Tegra Revision: A02 SKU: 220 CPU Process: 0 SoC Process: 0
[ 0.043811] EFI services will not be available.
[ 0.044075] smp: Bringing up secondary CPUs ...
[ 0.048959] CPU features: detected: Kernel page table isolation (KPTI)
[ 0.048996] Detected PIPT I-cache on CPU1
[ 0.049011] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU1: 0x0000009444c004
[ 0.049033] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU1: 0x00000010305116
[ 0.049063] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU1: 0x00000003001066
[ 0.049108] CPU features: Unsupported CPU feature variation detected.
[ 0.049288] CPU1: Booted secondary processor 0x0000000000 [0x4e0f0030]
[ 0.049354] __dl_add: cpus=1 tsk_bw=52428 total_bw=104856 span=0-5 type=DEF
[ 0.056437] Detected PIPT I-cache on CPU2
[ 0.056456] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU2: 0x0000009444c004
[ 0.056477] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU2: 0x00000010305116
[ 0.056503] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU2: 0x00000003001066
[ 0.056660] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
[ 0.056717] __dl_add: cpus=2 tsk_bw=52428 total_bw=157284 span=0-5 type=DEF
[ 0.064308] Detected PIPT I-cache on CPU3
[ 0.064404] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
[ 0.064433] __dl_add: cpus=3 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
[ 0.072339] Detected PIPT I-cache on CPU4
[ 0.072409] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
[ 0.072436] __dl_add: cpus=4 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 0.080366] Detected PIPT I-cache on CPU5
[ 0.080431] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
[ 0.080456] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
[ 0.080533] smp: Brought up 1 node, 6 CPUs
[ 0.080559] SMP: Total of 6 processors activated.
[ 0.080566] CPU: All CPU(s) started at EL2
[ 0.080578] CPU features: detected: 32-bit EL0 Support
[ 0.080585] CPU features: detected: 32-bit EL1 Support
[ 0.080594] CPU features: detected: CRC32 instructions
[ 0.080683] alternatives: applying system-wide alternatives
[ 0.089321] CPU0 attaching sched-domain(s):
[ 0.089340] domain-0: span=0,3-5 level=MC
[ 0.089355] groups: 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
[ 0.089402] __dl_sub: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 0.089409] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=262140
[ 0.089413] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 0.089418] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 0.089422] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 0.089427] CPU3 attaching sched-domain(s):
[ 0.089453] domain-0: span=0,3-5 level=MC
[ 0.089464] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }
[ 0.089504] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=1-5 type=DEF
[ 0.089508] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=209712
[ 0.089512] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 0.089515] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 0.089519] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 0.089523] CPU4 attaching sched-domain(s):
[ 0.089549] domain-0: span=0,3-5 level=MC
[ 0.089559] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }
[ 0.089598] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-2,4-5 type=DEF
[ 0.089603] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=157284
[ 0.089606] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 0.089610] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 0.089614] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 0.089618] CPU5 attaching sched-domain(s):
[ 0.089645] domain-0: span=0,3-5 level=MC
[ 0.089655] groups: 5:{ span=5 }, 0:{ span=0 cap=1020 }, 3:{ span=3 }, 4:{ span=4 }
[ 0.089694] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-2,5 type=DEF
[ 0.089698] __dl_server_detach_root: cpu=5 rd_span=1-2,5 total_bw=104856
[ 0.089701] rq_attach_root: cpu=5 old_span=NULL new_span=0,3-4
[ 0.089704] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 0.089709] __dl_server_attach_root: cpu=5 rd_span=0,3-5 total_bw=209712
[ 0.089712] root domain span: 0,3-5
[ 0.089738] default domain span: 1-2
[ 0.089805] Memory: 7902468K/8220672K available (17856K kernel code, 5188K rwdata, 12720K rodata, 10944K init, 1132K bss, 280192K reserved, 32768K cma-reserved)
[ 0.090917] devtmpfs: initialized
[ 0.105340] DMA-API: preallocated 65536 debug entries
[ 0.105362] DMA-API: debugging enabled by kernel config
[ 0.105376] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[ 0.105396] futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
[ 0.105790] 20752 pages in range for non-PLT usage
[ 0.105798] 512272 pages in range for PLT usage
[ 0.105949] pinctrl core: initialized pinctrl subsystem
[ 0.108299] DMI not present or invalid.
[ 0.110396] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[ 0.111190] DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
[ 0.111397] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[ 0.111702] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[ 0.111750] audit: initializing netlink subsys (disabled)
[ 0.111872] audit: type=2000 audit(0.096:1): state=initialized audit_enabled=0 res=1
[ 0.113511] thermal_sys: Registered thermal governor 'step_wise'
[ 0.113518] thermal_sys: Registered thermal governor 'power_allocator'
[ 0.113591] cpuidle: using governor menu
[ 0.113820] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
[ 0.114019] ASID allocator initialised with 32768 entries
[ 0.116037] Serial: AMBA PL011 UART driver
[ 0.124037] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 0.124072] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 0.124095] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 0.124116] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 0.124136] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 0.124156] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 0.124176] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 0.124196] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 0.124216] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 0.124237] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 0.124258] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 0.124277] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 0.124297] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 0.124317] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 0.124336] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 0.124356] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 0.124376] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 0.124397] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 0.124417] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 0.124437] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 0.124457] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 0.124477] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 0.124496] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 0.124516] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 0.124535] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 0.124555] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 0.124575] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 0.124595] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 0.124614] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 0.124671] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.124731] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.124792] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.124851] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.124910] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.124970] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125030] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125088] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125147] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125204] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125263] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125335] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125397] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125459] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125521] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125585] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125646] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125708] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125771] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125829] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125888] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.125949] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.126010] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.126072] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.126134] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.126193] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.126253] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.126332] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.126419] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 0.139159] /memory-controller@2c00000/external-memory-controller@2c60000: Fixed dependency cycle(s) with /bpmp
[ 0.139374] /bpmp: Fixed dependency cycle(s) with /memory-controller@2c00000/external-memory-controller@2c60000
[ 0.143608] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[ 0.143625] HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
[ 0.143635] HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
[ 0.143642] HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
[ 0.143650] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[ 0.143658] HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
[ 0.143666] HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
[ 0.143672] HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
[ 0.144953] ACPI: Interpreter disabled.
[ 0.146981] iommu: Default domain type: Translated
[ 0.146997] iommu: DMA domain TLB invalidation policy: strict mode
[ 0.147499] SCSI subsystem initialized
[ 0.147607] libata version 3.00 loaded.
[ 0.147733] usbcore: registered new interface driver usbfs
[ 0.147758] usbcore: registered new interface driver hub
[ 0.147785] usbcore: registered new device driver usb
[ 0.148342] pps_core: LinuxPPS API ver. 1 registered
[ 0.148351] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[ 0.148367] PTP clock support registered
[ 0.148438] EDAC MC: Ver: 3.0.0
[ 0.148948] scmi_core: SCMI protocol bus registered
[ 0.149622] FPGA manager framework
[ 0.149685] Advanced Linux Sound Architecture Driver Initialized.
[ 0.150315] vgaarb: loaded
[ 0.150684] clocksource: Switched to clocksource arch_sys_counter
[ 0.150845] VFS: Disk quotas dquot_6.6.0
[ 0.150866] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.151024] pnp: PnP ACPI: disabled
[ 0.156212] NET: Registered PF_INET protocol family
[ 0.156419] IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[ 0.160281] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
[ 0.160371] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[ 0.160391] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
[ 0.160708] TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
[ 0.161898] TCP: Hash tables configured (established 65536 bind 65536)
[ 0.161980] UDP hash table entries: 4096 (order: 6, 262144 bytes, linear)
[ 0.162198] UDP-Lite hash table entries: 4096 (order: 6, 262144 bytes, linear)
[ 0.162501] NET: Registered PF_UNIX/PF_LOCAL protocol family
[ 0.162855] RPC: Registered named UNIX socket transport module.
[ 0.162871] RPC: Registered udp transport module.
[ 0.162878] RPC: Registered tcp transport module.
[ 0.162885] RPC: Registered tcp-with-tls transport module.
[ 0.162892] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 0.162906] PCI: CLS 0 bytes, default 64
[ 0.163038] Unpacking initramfs...
[ 0.169120] kvm [1]: nv: 566 coarse grained trap handlers
[ 0.169436] kvm [1]: IPA Size Limit: 40 bits
[ 0.170926] kvm [1]: vgic interrupt IRQ9
[ 0.170989] kvm [1]: Hyp nVHE mode initialized successfully
[ 0.172229] Initialise system trusted keyrings
[ 0.172391] workingset: timestamp_bits=42 max_order=21 bucket_order=0
[ 0.172606] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 0.172801] NFS: Registering the id_resolver key type
[ 0.172825] Key type id_resolver registered
[ 0.172832] Key type id_legacy registered
[ 0.172851] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 0.172860] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[ 0.172972] 9p: Installing v9fs 9p2000 file system support
[ 0.205672] Key type asymmetric registered
[ 0.205699] Asymmetric key parser 'x509' registered
[ 0.205764] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 245)
[ 0.205777] io scheduler mq-deadline registered
[ 0.205784] io scheduler kyber registered
[ 0.205813] io scheduler bfq registered
[ 0.215015] ledtrig-cpu: registered to indicate activity on CPUs
[ 0.237679] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 0.240328] msm_serial: driver initialized
[ 0.240583] SuperH (H)SCI(F) driver initialized
[ 0.240699] STM32 USART driver initialized
[ 0.243566] arm-smmu 12000000.iommu: probing hardware configuration...
[ 0.243586] arm-smmu 12000000.iommu: SMMUv2 with:
[ 0.243596] arm-smmu 12000000.iommu: stage 1 translation
[ 0.243604] arm-smmu 12000000.iommu: stage 2 translation
[ 0.243612] arm-smmu 12000000.iommu: nested translation
[ 0.243620] arm-smmu 12000000.iommu: stream matching with 128 register groups
[ 0.243632] arm-smmu 12000000.iommu: 64 context banks (0 stage-2 only)
[ 0.243644] arm-smmu 12000000.iommu: Supported page sizes: 0x61311000
[ 0.243653] arm-smmu 12000000.iommu: Stage-1: 48-bit VA -> 48-bit IPA
[ 0.243661] arm-smmu 12000000.iommu: Stage-2: 48-bit IPA -> 48-bit PA
[ 0.243698] arm-smmu 12000000.iommu: preserved 0 boot mappings
[ 0.248931] loop: module loaded
[ 0.249680] megasas: 07.727.03.00-rc1
[ 0.254944] tun: Universal TUN/TAP device driver, 1.6
[ 0.255632] thunder_xcv, ver 1.0
[ 0.255660] thunder_bgx, ver 1.0
[ 0.255680] nicpf, ver 1.0
[ 0.256494] hns3: Hisilicon Ethernet Network Driver for Hip08 Family - version
[ 0.256506] hns3: Copyright (c) 2017 Huawei Corporation.
[ 0.256534] hclge is initializing
[ 0.256560] e1000: Intel(R) PRO/1000 Network Driver
[ 0.256568] e1000: Copyright (c) 1999-2006 Intel Corporation.
[ 0.256589] e1000e: Intel(R) PRO/1000 Network Driver
[ 0.256596] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[ 0.256616] igb: Intel(R) Gigabit Ethernet Network Driver
[ 0.256623] igb: Copyright (c) 2007-2014 Intel Corporation.
[ 0.256645] igbvf: Intel(R) Gigabit Virtual Function Network Driver
[ 0.256652] igbvf: Copyright (c) 2009 - 2012 Intel Corporation.
[ 0.256879] sky2: driver version 1.30
[ 0.258763] usbcore: registered new device driver r8152-cfgselector
[ 0.258792] usbcore: registered new interface driver r8152
[ 0.259049] VFIO - User Level meta-driver version: 0.3
[ 0.261155] usbcore: registered new interface driver usb-storage
[ 0.263446] i2c_dev: i2c /dev entries driver
[ 0.268993] sdhci: Secure Digital Host Controller Interface driver
[ 0.269011] sdhci: Copyright(c) Pierre Ossman
[ 0.269540] Synopsys Designware Multimedia Card Interface Driver
[ 0.270226] sdhci-pltfm: SDHCI platform and OF driver helper
[ 0.272367] tegra-bpmp bpmp: Adding to iommu group 0
[ 0.272891] tegra-bpmp bpmp: firmware: 91572a54614f84d0fd0c270beec2c56f
[ 0.274656] /bpmp/i2c/pmic@3c: Fixed dependency cycle(s) with /bpmp/i2c/pmic@3c/pinmux
[ 0.276001] max77620 0-003c: PMIC Version OTP:0x45 and ES:0x8
[ 0.283292] VDD_DDR_1V1_PMIC: Bringing 1125000uV into 1100000-1100000uV
[ 0.293445] VDD_RTC: Bringing 800000uV into 1000000-1000000uV
[ 0.294494] VDDIO_SDMMC3_AP: Bringing 1800000uV into 2800000-2800000uV
[ 0.296083] VDD_HDMI_1V05: Bringing 1000000uV into 1050000-1050000uV
[ 0.296844] VDD_PEX_1V05: Bringing 1000000uV into 1050000-1050000uV
[ 0.376480] Freeing initrd memory: 7064K
[ 0.418323] max77686-rtc max77620-rtc: registered as rtc0
[ 0.450892] max77686-rtc max77620-rtc: setting system clock to 2021-08-19T15:30:47 UTC (1629387047)
[ 0.580934] clocksource: tsc: mask: 0xffffffffffffff max_cycles: 0xe6a171046, max_idle_ns: 881590405314 ns
[ 0.580958] clocksource: osc: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 49772407460 ns
[ 0.580970] clocksource: usec: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275 ns
[ 0.581422] usbcore: registered new interface driver usbhid
[ 0.581434] usbhid: USB HID core driver
[ 0.584910] hw perfevents: enabled with armv8_cortex_a57 PMU driver, 7 (0,8000003f) counters available
[ 0.585464] hw perfevents: enabled with armv8_nvidia_denver PMU driver, 7 (0,8000003f) counters available
[ 0.590097] NET: Registered PF_PACKET protocol family
[ 0.590166] 9pnet: Installing 9P2000 support
[ 0.590214] Key type dns_resolver registered
[ 0.597259] registered taskstats version 1
[ 0.597389] Loading compiled-in X.509 certificates
[ 0.602384] Demotion targets for Node 0: null
[ 0.623014] tegra-pcie 10003000.pcie: Adding to iommu group 1
[ 0.623319] tegra-pcie 10003000.pcie: host bridge /pcie@10003000 ranges:
[ 0.623352] tegra-pcie 10003000.pcie: MEM 0x0010000000..0x0010001fff -> 0x0010000000
[ 0.623373] tegra-pcie 10003000.pcie: MEM 0x0010004000..0x0010004fff -> 0x0010004000
[ 0.623393] tegra-pcie 10003000.pcie: IO 0x0050000000..0x005000ffff -> 0x0000000000
[ 0.623414] tegra-pcie 10003000.pcie: MEM 0x0050100000..0x0057ffffff -> 0x0050100000
[ 0.623429] tegra-pcie 10003000.pcie: MEM 0x0058000000..0x007fffffff -> 0x0058000000
[ 0.623494] tegra-pcie 10003000.pcie: 4x1, 1x1 configuration
[ 0.624941] tegra-pcie 10003000.pcie: probing port 0, using 4 lanes
[ 1.837435] tegra-pcie 10003000.pcie: link 0 down, ignoring
[ 1.837910] tegra-pcie 10003000.pcie: PCI host bridge to bus 0000:00
[ 1.837929] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 1.837941] pci_bus 0000:00: root bus resource [mem 0x10000000-0x10001fff]
[ 1.837952] pci_bus 0000:00: root bus resource [mem 0x10004000-0x10004fff]
[ 1.837962] pci_bus 0000:00: root bus resource [io 0x0000-0xffff]
[ 1.837973] pci_bus 0000:00: root bus resource [mem 0x50100000-0x57ffffff]
[ 1.837983] pci_bus 0000:00: root bus resource [mem 0x58000000-0x7fffffff pref]
[ 1.841635] pci_bus 0000:00: resource 4 [mem 0x10000000-0x10001fff]
[ 1.841650] pci_bus 0000:00: resource 5 [mem 0x10004000-0x10004fff]
[ 1.841660] pci_bus 0000:00: resource 6 [io 0x0000-0xffff]
[ 1.841669] pci_bus 0000:00: resource 7 [mem 0x50100000-0x57ffffff]
[ 1.841678] pci_bus 0000:00: resource 8 [mem 0x58000000-0x7fffffff pref]
[ 1.842721] tegra-gpcdma 2600000.dma-controller: Adding to iommu group 2
[ 1.844569] tegra-gpcdma 2600000.dma-controller: GPC DMA driver register 31 channels
[ 1.846863] printk: legacy console [ttyS0] disabled
[ 1.847048] 3100000.serial: ttyS0 at MMIO 0x3100000 (irq = 23, base_baud = 25500000) is a Tegra
[ 1.847085] printk: legacy console [ttyS0] enabled
[ 4.560861] dwc-eth-dwmac 2490000.ethernet: Adding to iommu group 3
[ 4.578856] dwc-eth-dwmac 2490000.ethernet: User ID: 0x10, Synopsys ID: 0x41
[ 4.585926] dwc-eth-dwmac 2490000.ethernet: DWMAC4/5
[ 4.590992] dwc-eth-dwmac 2490000.ethernet: DMA HW capability register supported
[ 4.598389] dwc-eth-dwmac 2490000.ethernet: RX Checksum Offload Engine supported
[ 4.605786] dwc-eth-dwmac 2490000.ethernet: TX Checksum insertion supported
[ 4.612751] dwc-eth-dwmac 2490000.ethernet: Wake-Up On Lan supported
[ 4.619144] dwc-eth-dwmac 2490000.ethernet: TSO supported
[ 4.624547] dwc-eth-dwmac 2490000.ethernet: Enable RX Mitigation via HW Watchdog Timer
[ 4.632468] dwc-eth-dwmac 2490000.ethernet: Enabled L3L4 Flow TC (entries=8)
[ 4.639520] dwc-eth-dwmac 2490000.ethernet: Enabled RFS Flow TC (entries=10)
[ 4.646568] dwc-eth-dwmac 2490000.ethernet: TSO feature enabled
[ 4.652492] dwc-eth-dwmac 2490000.ethernet: Using 40/40 bits DMA host/device width
[ 4.660807] irq: IRQ73: trimming hierarchy from :pmc@c360000
[ 4.671213] tegra_rtc c2a0000.rtc: registered as rtc1
[ 4.676285] tegra_rtc c2a0000.rtc: Tegra internal Real Time Clock
[ 4.685025] irq: IRQ76: trimming hierarchy from :pmc@c360000
[ 4.690914] pca953x 1-0074: using no AI
[ 4.697887] irq: IRQ77: trimming hierarchy from :pmc@c360000
[ 4.703706] pca953x 1-0077: using no AI
[ 4.722844] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.722854] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.722860] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.722864] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.722869] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.722874] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.763258] dl_clear_root_domain: span=1-2 type=DEF
[ 4.763264] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.763269] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.763337] __dl_sub: cpus=4 tsk_bw=104857 total_bw=104855 span=0,3-5 type=DYN
[ 4.763423] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.763428] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.763433] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.763438] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.763442] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.763446] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.829696] dl_clear_root_domain: span=1-2 type=DEF
[ 4.829701] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.829706] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.829756] __dl_sub: cpus=4 tsk_bw=104857 total_bw=104855 span=0,3-5 type=DYN
[ 4.829782] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.829786] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.829790] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.829794] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.829798] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.829802] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.896080] dl_clear_root_domain: span=1-2 type=DEF
[ 4.896084] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.896089] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.896195] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.896200] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.896204] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.896208] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.896212] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.896217] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 4.955225] dl_clear_root_domain: span=1-2 type=DEF
[ 4.955228] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 4.955231] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 4.955304] dl_clear_root_domain: span=0,3-5 type=DYN
[ 4.955307] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 4.955310] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 4.955312] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 4.955315] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 4.955318] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 5.014313] dl_clear_root_domain: span=1-2 type=DEF
[ 5.014316] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 5.014319] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 5.014703] dl_clear_root_domain: span=0,3-5 type=DYN
[ 5.014707] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 5.014710] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 5.014713] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 5.014716] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 5.014718] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 5.022246] sdhci-tegra 3440000.mmc: Adding to iommu group 4
[ 5.026437] dl_clear_root_domain: span=1-2 type=DEF
[ 5.026442] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 5.026445] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 5.098250] sdhci-tegra 3460000.mmc: Adding to iommu group 5
[ 5.108000] irq: IRQ86: trimming hierarchy from :pmc@c360000
[ 5.109074] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.115234] tegra-xusb 3530000.usb: Adding to iommu group 6
[ 5.125072] mmc0: CQHCI version 5.10
[ 5.128378] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 5.135972] tegra-xusb 3530000.usb: xHCI Host Controller
[ 5.141304] tegra-xusb 3530000.usb: new USB bus registered, assigned bus number 1
[ 5.149464] tegra-xusb 3530000.usb: hcc params 0x0184fd25 hci version 0x100 quirks 0x0000000000000810
[ 5.158705] tegra-xusb 3530000.usb: irq 87, io mem 0x03530000
[ 5.162941] mmc2: SDHCI controller on 3440000.mmc [3440000.mmc] using ADMA 64-bit
[ 5.164578] tegra-xusb 3530000.usb: xHCI Host Controller
[ 5.177385] tegra-xusb 3530000.usb: new USB bus registered, assigned bus number 2
[ 5.184895] mmc0: SDHCI controller on 3460000.mmc [3460000.mmc] using ADMA 64-bit
[ 5.192552] tegra-xusb 3530000.usb: Host supports USB 3.0 SuperSpeed
[ 5.199342] hub 1-0:1.0: USB hub found
[ 5.203188] hub 1-0:1.0: 4 ports detected
[ 5.207719] hub 2-0:1.0: USB hub found
[ 5.211640] hub 2-0:1.0: 3 ports detected
[ 5.220816] sdhci-tegra 3400000.mmc: Adding to iommu group 7
[ 5.226762] irq: IRQ90: trimming hierarchy from :interrupt-controller@3881000
[ 5.233977] irq: IRQ92: trimming hierarchy from :pmc@c360000
[ 5.234024] sdhci-tegra 3400000.mmc: Got CD GPIO
[ 5.239698] irq: IRQ93: trimming hierarchy from :pmc@c360000
[ 5.244392] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.246716] sdhci-tegra 3400000.mmc: Got WP GPIO
[ 5.249992] input: gpio-keys as /devices/platform/gpio-keys/input/input0
[ 5.300641] irq: IRQ94: trimming hierarchy from :pmc@c360000
[ 5.306435] mmc1: SDHCI controller on 3400000.mmc [3400000.mmc] using ADMA 64-bit
[ 5.309600] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.314279] dwc-eth-dwmac 2490000.ethernet eth0: Register MEM_TYPE_PAGE_POOL RxQ-0
[ 5.331456] dwc-eth-dwmac 2490000.ethernet eth0: PHY [stmmac-0:00] driver [Broadcom BCM89610] (irq=73)
[ 5.341557] dwmac4: Master AXI performs any burst length
[ 5.346920] dwc-eth-dwmac 2490000.ethernet eth0: No Safety Features support found
[ 5.354532] dwc-eth-dwmac 2490000.ethernet eth0: IEEE 1588-2008 Advanced Timestamp supported
[ 5.363519] dwc-eth-dwmac 2490000.ethernet eth0: registered PTP clock
[ 5.370729] dwc-eth-dwmac 2490000.ethernet eth0: configuring for phy/rgmii link mode
[ 5.395796] mmc0: Command Queue Engine enabled
[ 5.400253] mmc0: new HS400 MMC card at address 0001
[ 5.404492] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 5.405694] mmcblk0: mmc0:0001 032G34 29.1 GiB
[ 5.420039] mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16 p17 p18 p19 p20 p21 p22 p23 p24 p25 p26 p27 p28 p29 p30 p31 p32 p33
[ 5.434884] mmcblk0boot0: mmc0:0001 032G34 4.00 MiB
[ 5.440287] mmcblk0boot1: mmc0:0001 032G34 4.00 MiB
[ 5.445642] mmcblk0rpmb: mmc0:0001 032G34 4.00 MiB, chardev (234:0)
[ 8.229975] dwc-eth-dwmac 2490000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 8.250692] IP-Config: Complete:
[ 8.253916] device=eth0, hwaddr=00:04:4b:8c:aa:bc, ipaddr=192.168.99.2, mask=255.255.255.0, gw=192.168.99.1
[ 8.264086] host=192.168.99.2, domain=, nis-domain=(none)
[ 8.269916] bootserver=192.168.99.1, rootserver=192.168.99.1, rootpath=
[ 8.270006] clk: Disabling unused clocks
[ 8.300142] PM: genpd: Disabling unused power domains
[ 8.305218] ALSA device list:
[ 8.308198] No soundcards found.
[ 8.314146] Freeing unused kernel memory: 10944K
[ 8.318829] Run /init as init process
[ 8.322486] with arguments:
[ 8.325454] /init
[ 8.327725] netdevwait
[ 8.330425] vpr_resize
[ 8.333156] with environment:
[ 8.336298] HOME=/
[ 8.338651] TERM=linux
[ 8.341362] nvdumper_reserved=0x2772e0000
[ 8.345724] tegraid=18.1.2.0.0
[ 8.349124] bl_prof_dataptr=0x10000@0x275840000
[ 8.383234] Root device found: nfs
[ 8.393578] Ethernet interface: eth0
[ 8.402720] IP Address: 192.168.99.2
[ 8.478392] Rootfs mounted over nfs
[ 8.506478] Switching from initrd to actual rootfs
[ 8.774777] systemd[1]: System time before build time, advancing clock.
[ 8.885546] NET: Registered PF_INET6 protocol family
[ 8.891815] Segment Routing with IPv6
[ 8.895527] In-situ OAM (IOAM) with IPv6
[ 8.933832] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
[ 8.955463] systemd[1]: Detected architecture arm64.
[ 8.990982] systemd[1]: Set hostname to <tegra-ubuntu>.
[ 10.610688] random: crng init done
[ 10.614553] systemd[1]: Created slice System Slice.
[ 10.620450] systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
[ 10.628283] systemd[1]: Listening on Journal Socket.
[ 10.633445] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[ 10.683126] systemd[1]: Starting Create list of required static device nodes for the current kernel...
[ 10.694544] systemd[1]: Starting Set the console keyboard layout...
[ 10.701045] systemd[1]: Reached target Remote File Systems.
[ 10.864688] systemd-journald[190]: Received request to flush runtime journal from PID 1
[ 11.545564] tegra-host1x 13e00000.host1x: Adding to iommu group 8
[ 11.594861] tegra-xudc 3550000.usb: Adding to iommu group 9
[ 11.607701] host1x-context host1x-ctx.0: Adding to iommu group 10
[ 11.615385] host1x-context host1x-ctx.1: Adding to iommu group 11
[ 11.631862] host1x-context host1x-ctx.2: Adding to iommu group 12
[ 11.640531] host1x-context host1x-ctx.3: Adding to iommu group 13
[ 11.647198] host1x-context host1x-ctx.4: Adding to iommu group 14
[ 11.660504] host1x-context host1x-ctx.5: Adding to iommu group 15
[ 11.675161] tegra-hda 3510000.hda: Adding to iommu group 16
[ 11.679764] at24 6-0050: 256 byte 24c02 EEPROM, read-only
[ 11.681851] host1x-context host1x-ctx.6: Adding to iommu group 17
[ 11.692888] at24 6-0057: 256 byte 24c02 EEPROM, read-only
[ 11.694142] host1x-context host1x-ctx.7: Adding to iommu group 18
[ 11.706877] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 11.719609] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 11.732873] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 11.746182] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 11.757090] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 11.768454] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 11.783321] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 11.794406] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 11.806902] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 11.817932] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 11.828822] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 11.839605] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 11.851929] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 11.862935] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 11.874287] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 11.886113] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 11.896925] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 11.907673] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 11.918544] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 11.929391] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 11.940230] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 11.952045] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 11.962924] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 11.973735] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 11.984473] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 11.995225] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 12.005984] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 12.016737] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 12.029753] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 12.043105] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.054040] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.065357] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.076391] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.087381] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.098466] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.110278] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.121446] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.132388] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.143407] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.154236] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.164999] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.175746] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.186474] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.197257] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.208124] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.218858] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.229715] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.240517] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.251324] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.262119] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.273134] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.283998] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.295041] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.307011] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.317733] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.328490] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.339496] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.351058] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.363027] tegra-audio-graph-card sound: Adding to iommu group 19
[ 12.370959] input: NVIDIA Jetson TX2 HDA HDMI/DP,pcm=3 as /devices/platform/3510000.hda/sound/card0/input1
[ 12.381410] input: NVIDIA Jetson TX2 HDA HDMI/DP,pcm=7 as /devices/platform/3510000.hda/sound/card0/input2
[ 12.395829] gic 2a41000.interrupt-controller: GIC IRQ controller registered
[ 12.403363] tegra-aconnect aconnect@2900000: Tegra ACONNECT bus registered
[ 12.446559] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901000
[ 12.450191] tegra-adma 2930000.dma-controller: Tegra210 ADMA driver registered 32 channels
[ 12.457536] /aconnect@2900000/ahub@2900800/i2s@2901000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.481551] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901100
[ 12.492296] /aconnect@2900000/ahub@2900800/i2s@2901100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.506860] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901200
[ 12.517617] /aconnect@2900000/ahub@2900800/i2s@2901200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.531396] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901300
[ 12.542136] /aconnect@2900000/ahub@2900800/i2s@2901300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.555321] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901400
[ 12.566199] /aconnect@2900000/ahub@2900800/i2s@2901400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.579119] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/i2s@2901500
[ 12.589860] /aconnect@2900000/ahub@2900800/i2s@2901500: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.602765] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902000
[ 12.613655] /aconnect@2900000/ahub@2900800/sfc@2902000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.626512] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902200
[ 12.637255] /aconnect@2900000/ahub@2900800/sfc@2902200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.650174] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902400
[ 12.660936] /aconnect@2900000/ahub@2900800/sfc@2902400: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.674806] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/sfc@2902600
[ 12.685805] /aconnect@2900000/ahub@2900800/sfc@2902600: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.699657] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903000
[ 12.710401] /aconnect@2900000/ahub@2900800/amx@2903000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.722660] tegra-dc 15200000.display: Adding to iommu group 20
[ 12.729183] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903100
[ 12.739921] /aconnect@2900000/ahub@2900800/amx@2903100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.751899] tegra-dc 15210000.display: Adding to iommu group 20
[ 12.758537] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903200
[ 12.769386] /aconnect@2900000/ahub@2900800/amx@2903200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.782392] tegra-dc 15220000.display: Adding to iommu group 20
[ 12.788541] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amx@2903300
[ 12.799331] /aconnect@2900000/ahub@2900800/amx@2903300: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.813533] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903800
[ 12.824282] /aconnect@2900000/ahub@2900800/adx@2903800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.837606] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903900
[ 12.848383] /aconnect@2900000/ahub@2900800/adx@2903900: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.861405] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903a00
[ 12.872315] /aconnect@2900000/ahub@2900800/adx@2903a00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.886179] irq: IRQ138: trimming hierarchy from :pmc@c360000
[ 12.886238] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/adx@2903b00
[ 12.902669] /aconnect@2900000/ahub@2900800/adx@2903b00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.917197] tegra-vic 15340000.vic: Adding to iommu group 21
[ 12.923367] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904000
[ 12.934202] /aconnect@2900000/ahub@2900800/dmic@2904000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.948456] tegra-nvdec 15480000.nvdec: Adding to iommu group 22
[ 12.955035] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904100
[ 12.965878] /aconnect@2900000/ahub@2900800/dmic@2904100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 12.979149] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dmic@2904200
[ 12.990031] /aconnect@2900000/ahub@2900800/dmic@2904200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.003085] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905000
[ 13.013920] /aconnect@2900000/ahub@2900800/dspk@2905000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.027103] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/dspk@2905100
[ 13.027188] [drm] Initialized tegra 1.0.0 for drm on minor 0
[ 13.037886] /aconnect@2900000/ahub@2900800/dspk@2905100: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.043657] drm drm: [drm] Cannot find any crtc or sizes
[ 13.059888] drm drm: [drm] Cannot find any crtc or sizes
[ 13.062654] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/processing-engine@2908000
[ 13.065697] drm drm: [drm] Cannot find any crtc or sizes
[ 13.077482] /aconnect@2900000/ahub@2900800/processing-engine@2908000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.097452] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a000
[ 13.108265] /aconnect@2900000/ahub@2900800/mvc@290a000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.121309] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/mvc@290a200
[ 13.132133] /aconnect@2900000/ahub@2900800/mvc@290a200: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.145169] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/amixer@290bb00
[ 13.156205] /aconnect@2900000/ahub@2900800/amixer@290bb00: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.169452] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/admaif@290f000
[ 13.180484] /aconnect@2900000/ahub@2900800/admaif@290f000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
[ 13.193715] /aconnect@2900000/ahub@2900800: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800/asrc@2910000
[ 13.204554] /aconnect@2900000/ahub@2900800/asrc@2910000: Fixed dependency cycle(s) with /aconnect@2900000/ahub@2900800
Ubuntu 18.04.6 LTS tegra-ubuntu ttyS0
tegra-ubuntu login: ubuntu (automatic login)
[ 16.578835] dl_clear_root_domain: span=0,3-5 type=DYN
[ 16.578842] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 16.578847] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 16.578850] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 16.578853] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 16.578856] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 16.619356] dl_clear_root_domain: span=1-2 type=DEF
[ 16.619362] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 16.619365] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 16.643037] dl_clear_root_domain: span=0,3-5 type=DYN
[ 16.643045] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 16.643050] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 16.643053] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 16.643056] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 16.643060] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 16.683460] dl_clear_root_domain: span=1-2 type=DEF
[ 16.683466] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 16.683470] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 6.13.0-rc6-next-20250110-00008-g1a5a0b763ef7 aarch64)
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
[   16.710896] dl_clear_root_domain: span=0,3-5 type=DYN
[   16.710903] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[   16.710908] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[   16.710911] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[   16.710914] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[   16.710918] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[   16.768453] dl_clear_root_domain: span=1-2 type=DEF
[   16.768459] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[   16.768463] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[   16.770886] dl_clear_root_domain: span=0,3-5 type=DYN
[   16.770892] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[   16.770896] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[   16.770899] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[   16.770902] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[   16.770906] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[   16.836855] dl_clear_root_domain: span=1-2 type=DEF
[   16.836860] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[   16.836863] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[   16.866914] dl_clear_root_domain: span=0,3-5 type=DYN
[   16.866923] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[   16.866927] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[   16.866930] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[   16.866933] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[   16.866937] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[   16.918667] dl_clear_root_domain: span=1-2 type=DEF
[   16.918675] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[   16.918679] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 16.947088] dl_clear_root_domain: span=0,3-5 type=DYN
[ 16.947097] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 16.947102] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 16.947105] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 16.947108] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 16.947112] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 16.987502] dl_clear_root_domain: span=1-2 type=DEF
[ 16.987509] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 16.987513] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@tegra-ubuntu:~$ [ 23.530882] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 3507000.sata
[ 23.540133] tegra-mc 2c00000.memory-controller: sync_state() pending due to 3507000.sata
[ 23.548299] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 15380000.nvjpg
[ 23.557712] tegra-mc 2c00000.memory-controller: sync_state() pending due to 15380000.nvjpg
[ 23.565992] tegra186-emc 2c60000.external-memory-controller: sync_state() pending due to 154c0000.nvenc
[ 23.575392] tegra-mc 2c00000.memory-controller: sync_state() pending due to 154c0000.nvenc
[ 39.914747] VDD_RTC: disabling
[ 53.708238] PM: suspend entry (deep)
[ 53.711947] Filesystems sync: 0.000 seconds
[ 53.717088] Freezing user space processes
[ 53.722374] Freezing user space processes completed (elapsed 0.001 seconds)
[ 53.729356] OOM killer disabled.
[ 53.732588] Freezing remaining freezable tasks
[ 53.738121] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
[ 53.784526] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 53.802280] dwc-eth-dwmac 2490000.ethernet eth0: Link is Down
[ 53.846083] Disabling non-boot CPUs ...
[ 53.849966] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0,3-5
[ 53.850008] CPU0 attaching NULL sched-domain.
[ 53.864355] span=1-2
[ 53.866545] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 53.866549] __dl_server_detach_root: cpu=0 rd_span=0,3-5 total_bw=157284
[ 53.866552] rq_attach_root: cpu=0 old_span=NULL new_span=1-2
[ 53.866556] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DEF
[ 53.866560] __dl_server_attach_root: cpu=0 rd_span=0-2 total_bw=157284
[ 53.866563] CPU3 attaching NULL sched-domain.
[ 53.903868] span=0-2
[ 53.906055] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=3-5 type=DYN
[ 53.906059] __dl_server_detach_root: cpu=3 rd_span=3-5 total_bw=104856
[ 53.906061] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
[ 53.906064] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-3 type=DEF
[ 53.906067] __dl_server_attach_root: cpu=3 rd_span=0-3 total_bw=209712
[ 53.906069] CPU4 attaching NULL sched-domain.
[ 53.943030] span=0-3
[ 53.945219] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=4-5 type=DYN
[ 53.945222] __dl_server_detach_root: cpu=4 rd_span=4-5 total_bw=52428
[ 53.945225] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
[ 53.945227] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-4 type=DEF
[ 53.945230] __dl_server_attach_root: cpu=4 rd_span=0-4 total_bw=262140
[ 53.945233] CPU5 attaching NULL sched-domain.
[ 53.982010] span=0-4
[ 53.984200] rq_attach_root: cpu=5 old_span= new_span=0-4
[ 53.984251] CPU0 attaching sched-domain(s):
[ 53.993750] domain-0: span=0,3-4 level=MC
[ 53.997852] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 54.003707] __dl_sub: cpus=5 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
[ 54.003711] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=209712
[ 54.003713] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 54.003716] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 54.003719] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 54.003723] CPU3 attaching sched-domain(s):
[ 54.039724] domain-0: span=0,3-4 level=MC
[ 54.043824] groups: 3:{ span=3 }, 4:{ span=4 }, 0:{ span=0 }
[ 54.049672] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=1-5 type=DEF
[ 54.049675] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=157284
[ 54.049678] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 54.049680] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 54.049683] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 54.049686] CPU4 attaching sched-domain(s):
[ 54.086286] domain-0: span=0,3-4 level=MC
[ 54.090383] groups: 4:{ span=4 }, 0:{ span=0 }, 3:{ span=3 }
[ 54.096232] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-2,4-5 type=DEF
[ 54.096235] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=104856
[ 54.096238] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 54.096241] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 54.096244] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 54.096246] root domain span: 0,3-4
[ 54.133373] default domain span: 1-2,5
[ 54.137136] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
[ 54.145654] psci: CPU5 killed (polled 0 ms)
[ 54.150727] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3 type=DYN span=0,3-4
[ 54.150767] CPU0 attaching NULL sched-domain.
[ 54.165117] span=1-2,5
[ 54.167485] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3-4 type=DYN
[ 54.167489] __dl_server_detach_root: cpu=0 rd_span=0,3-4 total_bw=104856
[ 54.167492] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,5
[ 54.167496] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2,5 type=DEF
[ 54.167500] __dl_server_attach_root: cpu=0 rd_span=0-2,5 total_bw=157284
[ 54.167503] CPU3 attaching NULL sched-domain.
[ 54.205306] span=0-2,5
[ 54.207670] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=3-4 type=DYN
[ 54.207673] __dl_server_detach_root: cpu=3 rd_span=3-4 total_bw=52428
[ 54.207676] rq_attach_root: cpu=3 old_span=NULL new_span=0-2,5
[ 54.207679] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-3,5 type=DEF
[ 54.207682] __dl_server_attach_root: cpu=3 rd_span=0-3,5 total_bw=209712
[ 54.207685] CPU4 attaching NULL sched-domain.
[ 54.244964] span=0-3,5
[ 54.247324] rq_attach_root: cpu=4 old_span= new_span=0-3,5
[ 54.247373] CPU0 attaching sched-domain(s):
[ 54.257032] domain-0: span=0,3 level=MC
[ 54.260958] groups: 0:{ span=0 }, 3:{ span=3 }
[ 54.265587] __dl_sub: cpus=4 tsk_bw=52428 total_bw=157284 span=0-5 type=DEF
[ 54.265591] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=157284
[ 54.265593] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 54.265596] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 54.265598] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 54.265602] CPU3 attaching sched-domain(s):
[ 54.301582] domain-0: span=0,3 level=MC
[ 54.305504] groups: 3:{ span=3 }, 0:{ span=0 }
[ 54.310132] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=1-5 type=DEF
[ 54.310136] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=104856
[ 54.310138] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 54.310141] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 54.310144] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 54.310148] root domain span: 0,3
[ 54.345869] default domain span: 1-2,4-5
[ 54.349800] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
[ 54.357951] psci: CPU4 killed (polled 0 ms)
[ 54.362779] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DYN span=0,3
[ 54.362816] CPU0 attaching NULL sched-domain.
[ 54.376992] span=1-2,4-5
[ 54.379535] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=0,3 type=DYN
[ 54.379539] __dl_server_detach_root: cpu=0 rd_span=0,3 total_bw=52428
[ 54.379542] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,4-5
[ 54.379546] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2,4-5 type=DEF
[ 54.379549] __dl_server_attach_root: cpu=0 rd_span=0-2,4-5 total_bw=157284
[ 54.379552] CPU3 attaching NULL sched-domain.
[ 54.417350] span=0-2,4-5
[ 54.419884] rq_attach_root: cpu=3 old_span= new_span=0-2,4-5
[ 54.419928] CPU0 attaching NULL sched-domain.
[ 54.429934] span=0-5
[ 54.432125] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=0-5 type=DEF
[ 54.432129] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=104856
[ 54.432132] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 54.432135] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 54.432138] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 54.432140] root domain span: 0
[ 54.467082] default domain span: 1-5
[ 54.470660] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 54.478522] psci: CPU3 killed (polled 0 ms)
[ 54.483396] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DEF span=1-5
[ 54.483530] dl_clear_root_domain: span=0 type=DYN
[ 54.483543] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 54.483558] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 54.513548] psci: CPU2 killed (polled 0 ms)
[ 54.518005] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=1-5
[ 54.518103] Error taking CPU1 down: -16
[ 54.531515] Non-boot CPUs are not disabled
[ 54.535647] Enabling non-boot CPUs ...
[ 54.539898] Detected PIPT I-cache on CPU2
[ 54.543955] CPU features: SANITY CHECK: Unexpected variation in SYS_CTR_EL0. Boot CPU: 0x0000008444c004, CPU2: 0x0000009444c004
[ 54.555460] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64DFR0_EL1. Boot CPU: 0x00000010305106, CPU2: 0x00000010305116
[ 54.567700] CPU features: SANITY CHECK: Unexpected variation in SYS_ID_DFR0_EL1. Boot CPU: 0x00000003010066, CPU2: 0x00000003001066
[ 54.579639] CPU2: Booted secondary processor 0x0000000001 [0x4e0f0030]
[ 54.587300] dl_clear_root_domain: span=0 type=DYN
[ 54.587315] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 54.587332] rd 0: Checking EAS, CPUs do not have asymmetric capacities
[ 54.605724] CPU2 is up
[ 54.608274] Detected PIPT I-cache on CPU3
[ 54.612317] CPU3: Booted secondary processor 0x0000000101 [0x411fd073]
[ 54.619089] CPU0 attaching NULL sched-domain.
[ 54.623468] span=1-5
[ 54.625667] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=0 type=DYN
[ 54.625671] __dl_server_detach_root: cpu=0 rd_span=0 total_bw=0
[ 54.625674] rq_attach_root: cpu=0 old_span= new_span=1-5
[ 54.625678] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0-5 type=DEF
[ 54.625681] __dl_server_attach_root: cpu=0 rd_span=0-5 total_bw=157284
[ 54.625729] CPU0 attaching sched-domain(s):
[ 54.660952] domain-0: span=0,3 level=MC
[ 54.664879] groups: 0:{ span=0 }, 3:{ span=3 }
[ 54.669509] __dl_sub: cpus=4 tsk_bw=52428 total_bw=104856 span=0-5 type=DEF
[ 54.669513] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=104856
[ 54.669516] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 54.669519] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 54.669522] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 54.669526] CPU3 attaching sched-domain(s):
[ 54.705520] domain-0: span=0,3 level=MC
[ 54.709446] groups: 3:{ span=3 }, 0:{ span=0 }
[ 54.714073] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 54.714077] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 54.714080] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 54.714083] root domain span: 0,3
[ 54.736355] default domain span: 1-2,4-5
[ 54.740288] rd 0,3: Checking EAS, CPUs do not have asymmetric capacities
[ 54.747073] CPU3 is up
[ 54.749601] Detected PIPT I-cache on CPU4
[ 54.753629] CPU4: Booted secondary processor 0x0000000102 [0x411fd073]
[ 54.760367] CPU0 attaching NULL sched-domain.
[ 54.764735] span=1-2,4-5
[ 54.767277] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=0,3 type=DYN
[ 54.767282] __dl_server_detach_root: cpu=0 rd_span=0,3 total_bw=52428
[ 54.767285] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,4-5
[ 54.767288] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0-2,4-5 type=DEF
[ 54.767291] __dl_server_attach_root: cpu=0 rd_span=0-2,4-5 total_bw=157284
[ 54.767294] CPU3 attaching NULL sched-domain.
[ 54.805112] span=0-2,4-5
[ 54.807655] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=3 type=DYN
[ 54.807658] __dl_server_detach_root: cpu=3 rd_span=3 total_bw=0
[ 54.807661] rq_attach_root: cpu=3 old_span= new_span=0-2,4-5
[ 54.807664] __dl_add: cpus=5 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
[ 54.807667] __dl_server_attach_root: cpu=3 rd_span=0-5 total_bw=209712
[ 54.807717] CPU0 attaching sched-domain(s):
[ 54.843285] domain-0: span=0,3-4 level=MC
[ 54.847385] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 54.853236] __dl_sub: cpus=5 tsk_bw=52428 total_bw=157284 span=0-5 type=DEF
[ 54.853240] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=157284
[ 54.853242] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 54.853245] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 54.853247] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 54.853250] CPU3 attaching sched-domain(s):
[ 54.889249] domain-0: span=0,3-4 level=MC
[ 54.893350] groups: 3:{ span=3 }, 4:{ span=4 }, 0:{ span=0 }
[ 54.899201] __dl_sub: cpus=4 tsk_bw=52428 total_bw=104856 span=1-5 type=DEF
[ 54.899205] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=104856
[ 54.899207] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 54.899209] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 54.899212] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 54.899215] CPU4 attaching sched-domain(s):
[ 54.935815] domain-0: span=0,3-4 level=MC
[ 54.939915] groups: 4:{ span=4 }, 0:{ span=0 }, 3:{ span=3 }
[ 54.945760] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 54.945764] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 54.945767] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 54.945771] root domain span: 0,3-4
[ 54.968739] default domain span: 1-2,5
[ 54.972498] rd 0,3-4: Checking EAS, CPUs do not have asymmetric capacities
[ 54.979534] CPU4 is up
[ 54.982058] Detected PIPT I-cache on CPU5
[ 54.986086] CPU5: Booted secondary processor 0x0000000103 [0x411fd073]
[ 54.992823] CPU0 attaching NULL sched-domain.
[ 54.997198] span=1-2,5
[ 54.999569] __dl_sub: cpus=3 tsk_bw=52428 total_bw=104856 span=0,3-4 type=DYN
[ 54.999573] __dl_server_detach_root: cpu=0 rd_span=0,3-4 total_bw=104856
[ 54.999576] rq_attach_root: cpu=0 old_span=NULL new_span=1-2,5
[ 54.999580] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0-2,5 type=DEF
[ 54.999583] __dl_server_attach_root: cpu=0 rd_span=0-2,5 total_bw=157284
[ 54.999586] CPU3 attaching NULL sched-domain.
[ 55.037405] span=0-2,5
[ 55.039772] __dl_sub: cpus=2 tsk_bw=52428 total_bw=52428 span=3-4 type=DYN
[ 55.039775] __dl_server_detach_root: cpu=3 rd_span=3-4 total_bw=52428
[ 55.039778] rq_attach_root: cpu=3 old_span=NULL new_span=0-2,5
[ 55.039780] __dl_add: cpus=5 tsk_bw=52428 total_bw=209712 span=0-3,5 type=DEF
[ 55.039783] __dl_server_attach_root: cpu=3 rd_span=0-3,5 total_bw=209712
[ 55.039785] CPU4 attaching NULL sched-domain.
[ 55.077078] span=0-3,5
[ 55.079449] __dl_sub: cpus=1 tsk_bw=52428 total_bw=0 span=4 type=DYN
[ 55.079452] __dl_server_detach_root: cpu=4 rd_span=4 total_bw=0
[ 55.079454] rq_attach_root: cpu=4 old_span= new_span=0-3,5
[ 55.079457] __dl_add: cpus=6 tsk_bw=52428 total_bw=262140 span=0-5 type=DEF
[ 55.079459] __dl_server_attach_root: cpu=4 rd_span=0-5 total_bw=262140
[ 55.079516] CPU0 attaching sched-domain(s):
[ 55.114910] domain-0: span=0,3-5 level=MC
[ 55.119011] groups: 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
[ 55.126081] __dl_sub: cpus=6 tsk_bw=52428 total_bw=209712 span=0-5 type=DEF
[ 55.126084] __dl_server_detach_root: cpu=0 rd_span=0-5 total_bw=209712
[ 55.126087] rq_attach_root: cpu=0 old_span=NULL new_span=
[ 55.126089] __dl_add: cpus=1 tsk_bw=52428 total_bw=52428 span=0 type=DYN
[ 55.126092] __dl_server_attach_root: cpu=0 rd_span=0 total_bw=52428
[ 55.126095] CPU3 attaching sched-domain(s):
[ 55.162091] domain-0: span=0,3-5 level=MC
[ 55.166187] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }
[ 55.173258] __dl_sub: cpus=5 tsk_bw=52428 total_bw=157284 span=1-5 type=DEF
[ 55.173262] __dl_server_detach_root: cpu=3 rd_span=1-5 total_bw=157284
[ 55.173264] rq_attach_root: cpu=3 old_span=NULL new_span=0
[ 55.173266] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=0,3 type=DYN
[ 55.173269] __dl_server_attach_root: cpu=3 rd_span=0,3 total_bw=104856
[ 55.173272] CPU4 attaching sched-domain(s):
[ 55.209874] domain-0: span=0,3-5 level=MC
[ 55.213973] groups: 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }, 3:{ span=3 }
[ 55.221041] __dl_sub: cpus=4 tsk_bw=52428 total_bw=104856 span=1-2,4-5 type=DEF
[ 55.221045] __dl_server_detach_root: cpu=4 rd_span=1-2,4-5 total_bw=104856
[ 55.221047] rq_attach_root: cpu=4 old_span=NULL new_span=0,3
[ 55.221049] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-4 type=DYN
[ 55.221052] __dl_server_attach_root: cpu=4 rd_span=0,3-4 total_bw=157284
[ 55.221055] CPU5 attaching sched-domain(s):
[ 55.258872] domain-0: span=0,3-5 level=MC
[ 55.262971] groups: 5:{ span=5 }, 0:{ span=0 }, 3:{ span=3 }, 4:{ span=4 }
[ 55.270034] rq_attach_root: cpu=5 old_span=NULL new_span=0,3-4
[ 55.270037] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 55.270040] __dl_server_attach_root: cpu=5 rd_span=0,3-5 total_bw=209712
[ 55.270044] root domain span: 0,3-5
[ 55.293187] default domain span: 1-2
[ 55.296773] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 55.303853] dl_clear_root_domain: span=0,3-5 type=DYN
[ 55.303857] __dl_add: cpus=4 tsk_bw=52428 total_bw=52428 span=0,3-5 type=DYN
[ 55.303861] __dl_add: cpus=4 tsk_bw=52428 total_bw=104856 span=0,3-5 type=DYN
[ 55.303864] __dl_add: cpus=4 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
[ 55.303866] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0,3-5 type=DYN
[ 55.303869] rd 0,3-5: Checking EAS, CPUs do not have asymmetric capacities
[ 55.344203] dl_clear_root_domain: span=1-2 type=DEF
[ 55.344206] __dl_add: cpus=2 tsk_bw=52428 total_bw=52428 span=1-2 type=DEF
[ 55.344209] __dl_add: cpus=2 tsk_bw=52428 total_bw=104856 span=1-2 type=DEF
[ 55.344222] CPU5 is up
[ 55.372693] dwc-eth-dwmac 2490000.ethernet eth0: configuring for phy/rgmii link mode
[ 56.381128] dwc-eth-dwmac 2490000.ethernet: Failed to reset the dma
[ 56.387404] dwc-eth-dwmac 2490000.ethernet eth0: stmmac_hw_setup: DMA engine initialization failed
[ 56.396644] dwc-eth-dwmac 2490000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 56.410738] usb-conn-gpio 3520000.padctl:ports:usb2-0:connector: repeated role: device
[ 56.421561] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
[ 56.451463] OOM killer enabled.
[ 56.454610] Restarting tasks ... done.
[ 56.459700] random: crng reseeded on system resumption
[ 56.464976] PM: suspend exit
[ 56.465206] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 56.525945] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 56.586806] VDDIO_SDMMC3_AP: voltage operation not allowed
[ 56.655796] VDDIO_SDMMC3_AP: voltage operation not allowed
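As an aside, the bandwidth numbers in the log above are easy to sanity-check. The sketch below (plain Python, mirroring the kernel's fixed-point encoding under stated assumptions; it is an illustration, not kernel code) reproduces the tsk_bw=52428 and total_bw=209712 values, and the reason offlining CPU1 fails with -16 (-EBUSY):

```python
# Sanity-checking the bandwidth numbers from the log above.
# Assumptions (not taken from the log itself): the deadline class encodes
# bandwidth as runtime/period scaled by 2^20 (BW_SHIFT), the fair server
# defaults to 50ms runtime over a 1s period, and CPU capacity is scaled
# by 2^10 (SCHED_CAPACITY_SHIFT).

BW_SHIFT = 20
CAP_SHIFT = 10

def to_ratio(period_ns: int, runtime_ns: int) -> int:
    """Fixed-point bandwidth ratio, as in kernel/sched/core.c:to_ratio()."""
    return (runtime_ns << BW_SHIFT) // period_ns

def cap_scale(bw: int, cap: int) -> int:
    """Scale a bandwidth by a CPU capacity value (0..1024 per CPU)."""
    return (bw * cap) >> CAP_SHIFT

# One fair server reserves 5% -> the tsk_bw=52428 seen throughout the log.
fair_server_bw = to_ratio(1_000_000_000, 50_000_000)
assert fair_server_bw == 52428

# Four servers on the 4-CPU DYN root domain -> total_bw=209712.
assert 4 * fair_server_bw == 209712

# The failed offline of CPU1: cap=0 would remain, but total_bw=104856 is
# still accounted on the DEF domain, so the overflow check trips and the
# hotplug operation is refused with -EBUSY (-16). 996147 is the default
# 95% rt limit (0.95 * 2^20).
rd_bw = 996147
assert cap_scale(rd_bw, 0) < 104856
```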
On 18/02/25 10:58, Juri Lelli wrote:
> Hi!
>
> On 17/02/25 17:08, Juri Lelli wrote:
> > On 14/02/25 10:05, Jon Hunter wrote:
>
> ...
>
> > At this point I believe you triggered suspend.
> >
> > > [ 57.290150] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
> > > [ 57.335619] tegra-xusb 3530000.usb: Firmware timestamp: 2020-07-06 13:39:28 UTC
> > > [ 57.353364] dwc-eth-dwmac 2490000.ethernet eth0: Link is Down
> > > [ 57.397022] Disabling non-boot CPUs ...
> >
> > Offlining CPU5.
> >
> > > [ 57.400904] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4 type=DYN span=0,3-5
> > > [ 57.400949] CPU0 attaching NULL sched-domain.
> > > [ 57.415298] span=1-2
> > > [ 57.417483] __dl_sub: cpus=3 tsk_bw=52428 total_bw=157284 span=0,3-5 type=DYN
> > > [ 57.417487] __dl_server_detach_root: cpu=0 rd_span=0,3-5 total_bw=157284
> > > [ 57.417496] rq_attach_root: cpu=0 old_span=NULL new_span=1-2
> > > [ 57.417501] __dl_add: cpus=3 tsk_bw=52428 total_bw=157284 span=0-2 type=DEF
> > > [ 57.417504] __dl_server_attach_root: cpu=0 rd_span=0-2 total_bw=157284
> > > [ 57.417507] CPU3 attaching NULL sched-domain.
> > > [ 57.454804] span=0-2
> > > [ 57.456987] __dl_sub: cpus=2 tsk_bw=52428 total_bw=104856 span=3-5 type=DYN
> > > [ 57.456990] __dl_server_detach_root: cpu=3 rd_span=3-5 total_bw=104856
> > > [ 57.456998] rq_attach_root: cpu=3 old_span=NULL new_span=0-2
> > > [ 57.457000] __dl_add: cpus=4 tsk_bw=52428 total_bw=209712 span=0-3 type=DEF
> > > [ 57.457003] __dl_server_attach_root: cpu=3 rd_span=0-3 total_bw=209712
> > > [ 57.457006] CPU4 attaching NULL sched-domain.
> > > [ 57.493964] span=0-3
> > > [ 57.496152] __dl_sub: cpus=1 tsk_bw=52428 total_bw=52428 span=4-5 type=DYN
> > > [ 57.496156] __dl_server_detach_root: cpu=4 rd_span=4-5 total_bw=52428
> > > [ 57.496162] rq_attach_root: cpu=4 old_span=NULL new_span=0-3
> > > [ 57.496165] __dl_add: cpus=5 tsk_bw=52428 total_bw=262140 span=0-4 type=DEF
> > > [ 57.496168] __dl_server_attach_root: cpu=4 rd_span=0-4 total_bw=262140
> > > [ 57.496171] CPU5 attaching NULL sched-domain.
> > > [ 57.532952] span=0-4
> > > [ 57.535143] rq_attach_root: cpu=5 old_span= new_span=0-4
> > > [ 57.535147] __dl_add: cpus=5 tsk_bw=52428 total_bw=314568 span=0-5 type=DEF
> >
> > Maybe we shouldn't add the dl_server contribution of a CPU that is going
> > to be offline.
>
> I tried to implement this idea and ended up with the following. As usual
> also pushed it to the branch on github. Could you please update and
> re-test?
And now for the actual change
---
kernel/sched/topology.c | 27 +++++++++++++++------------
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8830acb4f1b2..c6a140d8d851 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -497,12 +497,14 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
if (rq->rd) {
old_rd = rq->rd;
- if (rq->fair_server.dl_server)
- __dl_server_detach_root(&rq->fair_server, rq);
-
- if (cpumask_test_cpu(rq->cpu, old_rd->online))
+ if (cpumask_test_cpu(rq->cpu, old_rd->online)) {
set_rq_offline(rq);
+ if (rq->fair_server.dl_server)
+ __dl_server_detach_root(&rq->fair_server, rq);
+ }
+
+
cpumask_clear_cpu(rq->cpu, old_rd->span);
/*
@@ -529,16 +531,17 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
}
cpumask_set_cpu(rq->cpu, rd->span);
- if (cpumask_test_cpu(rq->cpu, cpu_active_mask))
+ if (cpumask_test_cpu(rq->cpu, cpu_active_mask)) {
set_rq_online(rq);
- /*
- * Because the rq is not a task, dl_add_task_root_domain() did not
- * move the fair server bw to the rd if it already started.
- * Add it now.
- */
- if (rq->fair_server.dl_server)
- __dl_server_attach_root(&rq->fair_server, rq);
+ /*
+ * Because the rq is not a task, dl_add_task_root_domain() did not
+ * move the fair server bw to the rd if it already started.
+ * Add it now.
+ */
+ if (rq->fair_server.dl_server)
+ __dl_server_attach_root(&rq->fair_server, rq);
+ }
rq_unlock_irqrestore(rq, &rf);
On 17/02/2025 16:08, Juri Lelli wrote:
> On 14/02/25 10:05, Jon Hunter wrote:
>
> ...
>
>> Sorry for the delay, the day got away from me. However, it is still not
>> working :-(
>
> Ouch.
>
>> Console log is attached.
>
> Thanks. Did you happen to also collect a corresponding trace?

Sorry, but I am not sure exactly what trace do you want?

Thanks
Jon

--
nvpublic
On 17/02/25 16:10, Jon Hunter wrote:
>
> On 17/02/2025 16:08, Juri Lelli wrote:
> > On 14/02/25 10:05, Jon Hunter wrote:
> >
> > ...
> >
> > > Sorry for the delay, the day got away from me. However, it is still not
> > > working :-(
> >
> > Ouch.
> >
> > > Console log is attached.
> >
> > Thanks. Did you happen to also collect a corresponding trace?
>
> Sorry, but I am not sure exactly what trace do you want?

Ah, sorry, I think I mentioned it somewhere else in this long thread.

The idea would be to boot with something like "ftrace=nop
trace_buf_size=50K" added to the kernel cmdline. I would then try to
offline CPUs 'manually' in the order suspend seems to be doing (starting
from CPU5). Offlining CPU1 should still fail. At that point collect the
trace with

# cat /sys/kernel/debug/tracing/trace > trace.out

and share it together with dmesg output as you have been doing so far.

Thanks!
Juri
On 11/02/2025 11:42, Juri Lelli wrote:
> On 11/02/25 10:15, Christian Loehle wrote:
>> On 2/10/25 17:09, Juri Lelli wrote:
>>> Hi Christian,
>>>
>>> Thanks for taking a look as well.
>>>
>>> On 07/02/25 15:55, Christian Loehle wrote:
>>>> On 2/7/25 14:04, Jon Hunter wrote:
>>>>>
>>>>>
>>>>> On 07/02/2025 13:38, Dietmar Eggemann wrote:
>>>>>> On 07/02/2025 11:38, Jon Hunter wrote:
>>>>>>>
>>>>>>> On 06/02/2025 09:29, Juri Lelli wrote:
>>>>>>>> On 05/02/25 16:56, Jon Hunter wrote:
>>>>>>>>
>>>>>>>> ...
>>>>>>>>
>>>>>>>>> Thanks! That did make it easier :-)
>>>>>>>>>
>>>>>>>>> Here is what I see ...
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>>
>>>>>>>> Still different from what I can repro over here, so, unfortunately, I
>>>>>>>> had to add additional debug printks. Pushed to the same branch/repo.
>>>>>>>>
>>>>>>>> Could I ask for another run with it? Please also share the complete
>>>>>>>> dmesg from boot, as I would need to check debug output when CPUs are
>>>>>>>> first onlined.
>>>>>>
>>>>>> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
>>>>>> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
>>>>>> isol CPUs?
>>>>>
>>>>> I believe that 1-2 are the denvers (even thought they are listed as 0-1 in device-tree).
>>>>
>>>> Interesting, I have yet to reproduce this with equal capacities in isolcpus.
>>>> Maybe I didn't try hard enough yet.
>>>>
>>>>>
>>>>>> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]
>>>>>
>>>>> Yes I think it is similar to this.
>>>>>
>>>>> Thanks!
>>>>> Jon
>>>>>
>>>>
>>>> I could reproduce that on a different LLLLbb with isolcpus=3,4 (Lb) and
>>>> the offlining order:
>>>> echo 0 > /sys/devices/system/cpu/cpu5/online
>>>> echo 0 > /sys/devices/system/cpu/cpu1/online
>>>> echo 0 > /sys/devices/system/cpu/cpu3/online
>>>> echo 0 > /sys/devices/system/cpu/cpu2/online
>>>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>>>
>>>> while the following offlining order succeeds:
>>>> echo 0 > /sys/devices/system/cpu/cpu5/online
>>>> echo 0 > /sys/devices/system/cpu/cpu4/online
>>>> echo 0 > /sys/devices/system/cpu/cpu1/online
>>>> echo 0 > /sys/devices/system/cpu/cpu2/online
>>>> echo 0 > /sys/devices/system/cpu/cpu3/online
>>>> (Both offline an isolcpus last, both have CPU0 online)
>>>>
>>>> The issue only triggers with sugov DL threads (I guess that's obvious, but
>>>> just to mention it).
>>>
>>> It wasn't obvious to me at first :). So thanks for confirming.
>>>
>>>> I'll investigate some more later but wanted to share for now.
>>>
>>> So, problem actually is that I am not yet sure what we should do with
>>> sugovs' bandwidth wrt root domain accounting. W/o isolation it's all
>>> good, as it gets accounted for correctly on the dynamic domains sugov
>>> tasks can run on. But with isolation and sugov affected_cpus that cross
>>> isolation domains (e.g., one BIG one little), we can get into troubles
>>> not knowing if sugov contribution should fall on the DEF or DYN domain.
>>>
>>> Hummm, need to think more about it.
>>
>> That is indeed tricky.
>> I would've found it super appealing to always just have sugov DL tasks activate
>> on this_cpu and not have to worry about all this, but then you have contention
>> amongst CPUs of a cluster and there are energy improvements from always
>> having little cores handle all sugov DL tasks, even for the big CPUs,
>> that's why I introduced
>> commit 93940fbdc468 ("cpufreq/schedutil: Only bind threads if needed")
>> but that really doesn't make this any easier.
>
> What about we actually ignore them consistently? We already do that for
> admission control, so maybe we can do that when rebuilding domains as
> well (until we find maybe a better way to deal with them).
>
> Does the following make any difference?
It at least seems to solve the issue. And like you mentioned on irc, we
don't know the bw req of sugov anyway.
So with this change we start with 'dl_bw->total_bw = 0' even w/ sugov tasks.
dl_rq[0]:
.dl_nr_running : 0
.dl_bw->bw : 996147
.dl_bw->total_bw : 0 <-- !
IMHO, people who want to run serious DL can always check whether there
are already these infrastructural DL tasks or even avoid schedutil.
>
> ---
> kernel/sched/deadline.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index b254d878789d..8f7420e0c9d6 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2995,7 +2995,7 @@ void dl_add_task_root_domain(struct task_struct *p)
> struct dl_bw *dl_b;
>
> raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
> - if (!dl_task(p)) {
> + if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
> raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
> return;
> }
>
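The patch above keys off the kernel's `dl_entity_is_special()` helper, which reports whether a deadline entity belongs to a sugov kthread (via the `SCHED_FLAG_SUGOV` flag) so bandwidth code can skip it. As a rough userspace sketch of the skip logic being added — struct layout and flag value are simplified stand-ins, not the kernel's exact definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the kernel-internal sugov flag. */
#define SCHED_FLAG_SUGOV 0x10000000u

struct sched_dl_entity {
	unsigned int flags;
};

/* Mirrors the intent of dl_entity_is_special(): sugov entities are "special". */
static bool dl_entity_is_special(const struct sched_dl_entity *dl_se)
{
	return dl_se->flags & SCHED_FLAG_SUGOV;
}

/*
 * The patched dl_add_task_root_domain() bails out early for non-DL tasks
 * and, with this change, also for special (sugov) entities, making root
 * domain rebuilds consistent with admission control, which already
 * ignores them.
 */
static bool should_account_on_root_domain(bool is_dl_task,
					  const struct sched_dl_entity *dl_se)
{
	if (!is_dl_task || dl_entity_is_special(dl_se))
		return false;	/* contributes no bandwidth to the root domain */
	return true;
}
```

With this, a sugov kthread never adds its (made-up) bandwidth back when domains are rebuilt, which is what makes `total_bw` start from 0 in the debug output below.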
On 12/02/25 19:22, Dietmar Eggemann wrote:
> On 11/02/2025 11:42, Juri Lelli wrote:

...

> > What about we actually ignore them consistently? We already do that for
> > admission control, so maybe we can do that when rebuilding domains as
> > well (until we find maybe a better way to deal with them).
> >
> > Does the following make any difference?
>
> It at least seems to solve the issue. And like you mentioned on irc, we
> don't know the bw req of sugov anyway.
>
> So with this change we start with 'dl_bw->total_bw = 0' even w/ sugov tasks.
>
> dl_rq[0]:
>   .dl_nr_running   : 0
>   .dl_bw->bw       : 996147
>   .dl_bw->total_bw : 0    <-- !
>
> IMHO, people who want to run serious DL can always check whether there
> are already these infrastructural DL tasks or even avoid schedutil.

It is definitely not ideal and admittedly gross, but not worse than what
we are doing already, considering we ignore sugovs at AC and the current
bandwidth allocation is there only to help with PI. So, duck tape. :/

A more proper way to deal with this would entail coming up with a
sensible bandwidth allocation for sugovs, but that's most probably
hardware specific, so I am not sure how we can make that general enough.

Anyway, looks like Jon was still seeing the issue. I asked him to verify
he is using all the proposed changes. Let's see what he reports.

Best,
Juri
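As an aside, the `.dl_bw->bw : 996147` value in the quoted debug output is the kernel's fixed-point encoding of the default 95% DL capacity limit: bandwidths are stored as `runtime << 20 / period`. A small reimplementation of that arithmetic (the kernel's `to_ratio()` has the same shape; this is just a sketch for checking the numbers):

```c
#include <stdint.h>

#define BW_SHIFT	20	/* fixed-point shift used for DL bandwidths */

/* Bandwidth of a runtime/period pair as a 20-bit fixed-point fraction. */
static uint64_t to_ratio(uint64_t period, uint64_t runtime)
{
	if (period == 0)
		return 0;
	return (runtime << BW_SHIFT) / period;
}
```

With the default rt limits (950000us of runtime every 1000000us), `to_ratio(1000000, 950000)` gives 996147, matching `.dl_bw->bw` above; a full CPU would be `1 << 20` = 1048576.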
On 02/13/25 07:20, Juri Lelli wrote:
> On 12/02/25 19:22, Dietmar Eggemann wrote:
> > On 11/02/2025 11:42, Juri Lelli wrote:

...

> A more proper way to deal with this would entail coming up with a
> sensible bandwidth allocation for sugovs, but that's most probably
> hardware specific, so I am not sure how we can make that general enough.

I haven't been following the problem closely, but one thing I was
considering, and I don't know if it makes sense to you, could help with
this problem too. Shall we lump sugov in with the stopper class, or
create a new sched_class (the latter seems unnecessary, I think stopper
should do)? With the consolidated cpufreq update patch I've been working
on, Vincent raised issues with a potential new context switch, and to
improve that I needed to look at improving the sugov wakeup path. If we
decouple sugov from DL, I think that might fix your problem here and
could allow us to special case it for other problems, like the ones I
faced, more easily without messing up DL.

Has the time come to consider retiring the simple solution of making
sugov a fake DL task?

> Anyway, looks like Jon was still seeing the issue. I asked him to verify
> he is using all the proposed changes. Let's see what he reports.
>
> Best,
> Juri
On 16/02/25 16:33, Qais Yousef wrote:
> On 02/13/25 07:20, Juri Lelli wrote:

...

> If we decouple sugov from DL, I think that might fix your problem here
> and could allow us to special case it for other problems, like the ones
> I faced, more easily without messing up DL.
>
> Has the time come to consider retiring the simple solution of making
> sugov a fake DL task?

Problem is that 'ideally' we would want to explicitly take sugovs into
account when designing the system. We don't do that currently as a
'temporary solution' that seemed simpler than a proper approach (I have
started wondering if it's indeed simpler). So, not sure if moving sugovs
outside DL is something we want to do.

Thanks,
Juri
On 02/17/25 15:52, Juri Lelli wrote:
> On 16/02/25 16:33, Qais Yousef wrote:

...

> Problem is that 'ideally' we would want to explicitly take sugovs into
> account when designing the system. We don't do that currently as a
> 'temporary solution' that seemed simpler than a proper approach (I have
> started wondering if it's indeed simpler). So, not sure if moving sugovs
> outside DL is something we want to do.

Okay, I see. The issue, though, is that a DL system with power
management features on, which warrant waking up a sugov thread to update
the frequency, is sort of half broken by design. I don't see the benefit
over using RT in this case. But I appreciate I could be misguided, so
take it easy on me if this is an obviously wrong understanding :) I know
usage of DL in Android has been difficult, but many systems ship with
slow-switch hardware.

How does DL handle the long softirqs from the block and network layers,
by the way? This has in practice been a problem for RT tasks, so it
should be for DL too. sugov run from the stopper should be handled
similarly, IMHO. I *think* it would be simpler to masquerade the sugov
thread as irq pressure.

You can use rate_limit_us as a potential guide for how much bandwidth
sugov needs, if moving it to another class really doesn't make sense,
instead?
On 22/02/25 23:59, Qais Yousef wrote:
> On 02/17/25 15:52, Juri Lelli wrote:

...

> How does DL handle the long softirqs from the block and network layers,
> by the way? This has in practice been a problem for RT tasks, so it
> should be for DL too. sugov run from the stopper should be handled
> similarly, IMHO. I *think* it would be simpler to masquerade the sugov
> thread as irq pressure.

Kind of a trick question :), as DL doesn't handle this kind of
load/pressure explicitly. It is essentially agnostic about it. From a
system design point of view, though, I would say that one should take
that into account and maybe convert sensible kthreads to DL, so that the
overall bandwidth can be explicitly evaluated. If one doesn't do that, a
probably less sound approach is to treat anything not explicitly
scheduled by DL, but still required from a system perspective, as
overload, and be more conservative when assigning bandwidth to DL tasks
(i.e., reduce the maximum amount of available bandwidth, so that the
system doesn't get saturated).

> You can use rate_limit_us as a potential guide for how much bandwidth
> sugov needs, if moving it to another class really doesn't make sense,
> instead?

Or maybe try to estimate/measure how much utilization sugov threads are
effectively using while running some kind of workload of interest, and
use that as an indication for the DL runtime/period.
On 02/24/25 10:27, Juri Lelli wrote:

...

> Kind of a trick question :), as DL doesn't handle this kind of

:-)

> load/pressure explicitly. It is essentially agnostic about it. From a
> system design point of view, though, I would say that one should take
> that into account and maybe convert sensible kthreads to DL, so that the
> overall bandwidth can be explicitly evaluated.

Maybe I didn't understand your initial answer properly. But what I got
is that we set sugov as DL to do what you just suggested, converting the
kthread to DL to take its bandwidth into account. But we have been lying
about its bandwidth so far, and it was ignored? (I saw early bailouts
when SCHED_FLAG_SUGOV was set in bandwidth-related operations.)

> Or maybe try to estimate/measure how much utilization sugov threads are
> effectively using while running some kind of workload of interest, and
> use that as an indication for the DL runtime/period.

I don't want to sidetrack this thread, so maybe I should start a new
thread to discuss this. You might have seen my other series on
consolidating cpufreq updates. I'm not sure sugov can have a predictable
period. Maybe a runtime, but it could run repeatedly, or it could be
quiet for a long time. TBH, I always thought we use DL because it was
the highest sched_class that is not the stopper.

Anyway, happy to take this discussion into another thread if that is
better. I didn't mean to distract from debugging the reported issue.

Thanks!

--
Qais Yousef
On 25/02/25 00:02, Qais Yousef wrote:
> On 02/24/25 10:27, Juri Lelli wrote:

...

> Maybe I didn't understand your initial answer properly. But what I got
> is that we set sugov as DL to do what you just suggested, converting the
> kthread to DL to take its bandwidth into account. But we have been lying
> about its bandwidth so far, and it was ignored? (I saw early bailouts
> when SCHED_FLAG_SUGOV was set in bandwidth-related operations.)

Ignored so as to have something 'that works'. :)

But it's definitely far from being good.

> I don't want to sidetrack this thread, so maybe I should start a new
> thread to discuss this. You might have seen my other series on
> consolidating cpufreq updates. I'm not sure sugov can have a predictable
> period. Maybe a runtime, but it could run repeatedly, or it could be
> quiet for a long time.

It doesn't need to have a predictable period. Sporadic tasks (whose
activations are not periodic) work well with DEADLINE if one is able to
come up with a sensible bandwidth allocation for them. So for sugov (and
other kthreads) the system designer should be thinking about the amount
of CPU to give to each kthread (runtime/period) and the granularity of
such an allocation (period).

> TBH, I always thought we use DL because it was the highest sched_class
> that is not the stopper.
>
> Anyway, happy to take this discussion into another thread if that is
> better. I didn't mean to distract from debugging the reported issue.

No worries! But a separate thread might help to get more eyes on this,
I agree.

Best,
Juri
On 2/25/25 09:46, Juri Lelli wrote:
> On 25/02/25 00:02, Qais Yousef wrote:

...

> It doesn't need to have a predictable period. Sporadic tasks (whose
> activations are not periodic) work well with DEADLINE if one is able to
> come up with a sensible bandwidth allocation for them. So for sugov (and
> other kthreads) the system designer should be thinking about the amount
> of CPU to give to each kthread (runtime/period) and the granularity of
> such an allocation (period).

The only really sensible choice I see is rate_limit multiplied by some
constant approximated runtime, and on many systems that may yield >100%
of the capacity. Qais' proposed changes would even remove the
theoretical rate_limit cap here. A lot of complexity for something that
is essentially a non-issue in practice, AFAICS...
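Christian's ">100% of the capacity" point can be made concrete: treating rate_limit_us as the sugov "period" and an approximated per-update runtime as its "runtime" (both hypothetical numbers below), the resulting bandwidth can exceed a full CPU on a slow-switch system with a tight rate limit:

```c
#include <assert.h>
#include <stdint.h>

#define BW_SHIFT	20

/*
 * Hypothetical estimate discussed in the thread: rate_limit_us as the
 * period, a measured worst-case frequency-update runtime as the runtime.
 */
static uint64_t sugov_bw_estimate(uint64_t runtime_us, uint64_t rate_limit_us)
{
	return (runtime_us << BW_SHIFT) / rate_limit_us;
}
```

With, say, a 500us rate limit and a 600us worst-case update on slow-switch hardware, the estimate exceeds `1 << 20` (a whole CPU), which is exactly why a "real" sugov reservation derived this way stops making sense; 50us every 2000us, by contrast, is a modest ~2.5%.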
On 2/13/25 06:20, Juri Lelli wrote:
> On 12/02/25 19:22, Dietmar Eggemann wrote:

...

> It is definitely not ideal and admittedly gross, but not worse than what
> we are doing already, considering we ignore sugovs at AC and the current
> bandwidth allocation is there only to help with PI. So, duck tape. :/
>
> A more proper way to deal with this would entail coming up with a
> sensible bandwidth allocation for sugovs, but that's most probably
> hardware specific, so I am not sure how we can make that general enough.
>
> Anyway, looks like Jon was still seeing the issue. I asked him to verify
> he is using all the proposed changes. Let's see what he reports.

FWIW, it also fixes my reproducer.

I agree that dummy numbers for sugov bw are futile, but real bw numbers
also don't make a lot of sense (what if we exceed them? The system won't
be able to change frequency, i.e., it might not be able to provide bw
for other DL tasks then either?).

I'm slightly worried about now allowing the last legal CPU for a sugov
cluster to go offline, which would lead to a cluster still being active
but its sugov DL task unable to run anywhere. I can't reproduce this
currently, though. Is this an issue in theory? Or am I missing
something?
On 13/02/25 12:27, Christian Loehle wrote:
> On 2/13/25 06:20, Juri Lelli wrote:

...

> I'm slightly worried about now allowing the last legal CPU for a sugov
> cluster to go offline, which would lead to a cluster still being active
> but its sugov DL task unable to run anywhere. I can't reproduce this
> currently, though. Is this an issue in theory? Or am I missing
> something?

Not sure I get what your worry is, sorry. In my understanding, when the
last cpu of a policy/cluster gets offlined, the corresponding sugov
kthread gets stopped as well (sugov_exit)?
On 2/13/25 13:33, Juri Lelli wrote:
> On 13/02/25 12:27, Christian Loehle wrote:

...

> Not sure I get what your worry is, sorry. In my understanding, when the
> last cpu of a policy/cluster gets offlined, the corresponding sugov
> kthread gets stopped as well (sugov_exit)?

The other way round. We may have the sugov kthread of cluster [6,7]
affined to CPU1. Is it guaranteed that we cannot offline CPU1 (while
CPU6 or CPU7 are still online)?

Or without the affinity: cluster [6,7] with isolcpus=6 (i.e., the sugov
kthread of that cluster can only run on CPU7). Is offlining of CPU6 then
prevented (as long as CPU7 is online)? I don't see how.

Anyway, we probably want to change isolcpus and affinity to merely be a
suggestion for the sugov DL case. Fundamentally sugov belongs to
whatever is run on that cluster's CPUs anyway.
On 13/02/25 13:38, Christian Loehle wrote:
> On 2/13/25 13:33, Juri Lelli wrote:

...

> The other way round. We may have the sugov kthread of cluster [6,7]
> affined to CPU1. Is it guaranteed that we cannot offline CPU1 (while
> CPU6 or CPU7 are still online)?

Uhu, is this a sane/desired setup? Anyway, I would say that if CPU1 is
offlined, sugov[6,7] will need to be migrated someplace else.

> Or without the affinity: cluster [6,7] with isolcpus=6 (i.e., the sugov
> kthread of that cluster can only run on CPU7). Is offlining of CPU6 then
> prevented (as long as CPU7 is online)? I don't see how.
>
> Anyway, we probably want to change isolcpus and affinity to merely be a
> suggestion for the sugov DL case. Fundamentally sugov belongs to
> whatever is run on that cluster's CPUs anyway.

I would tend to agree.
On 2/13/25 14:51, Juri Lelli wrote:
> On 13/02/25 13:38, Christian Loehle wrote:

...

> Uhu, is this a sane/desired setup? Anyway, I would say that if CPU1 is
> offlined, sugov[6,7] will need to be migrated someplace else.

Sane? I guess that's to be discussed. It is definitely desirable,
unfortunately. As mentioned, I experimented with sugov DL tasks (as they
cause a lot of idle wakeups, which are expensive on the bigger CPUs)
both always running locally and never IPIing (but that means we have
contention and still run a double switch on an 'expensive' CPU) versus
running them on a little CPU, and the latter had much better results.

> I would tend to agree.

I'll write something up.
On 07/02/2025 13:38, Dietmar Eggemann wrote:
> On 07/02/2025 11:38, Jon Hunter wrote:
>>
>> On 06/02/2025 09:29, Juri Lelli wrote:
>>> On 05/02/25 16:56, Jon Hunter wrote:
>>>
>>> ...
>>>
>>>> Thanks! That did make it easier :-)
>>>>
>>>> Here is what I see ...
>>>
>>> Thanks!
>>>
>>> Still different from what I can repro over here, so, unfortunately, I
>>> had to add additional debug printks. Pushed to the same branch/repo.
>>>
>>> Could I ask for another run with it? Please also share the complete
>>> dmesg from boot, as I would need to check debug output when CPUs are
>>> first onlined.
>
> So you have a system with 2 big and 4 LITTLE CPUs (Denver0 Denver1 A57_0
> A57_1 A57_2 A57_3) in one MC sched domain and (Denver1 and A57_0) are
> isol CPUs?

I believe that 1-2 are the denvers (even though they are listed as 0-1
in device-tree).

> This should be easy to set up for me on my Juno-r0 [A53 A57 A57 A53 A53 A53]

Yes I think it is similar to this.

Thanks!
Jon

--
nvpublic
Hi Jon,

On 03/02/25 11:01, Jon Hunter wrote:
> Hi Juri,
>
> On 16/01/2025 15:55, Juri Lelli wrote:
> > On 16/01/25 13:14, Jon Hunter wrote:
> > >
> > > On 15/01/2025 16:10, Juri Lelli wrote:
> > > > On 14/01/25 15:02, Juri Lelli wrote:
> > > > > On 14/01/25 13:52, Jon Hunter wrote:
> > > > > >
> > > > > > On 13/01/2025 09:32, Juri Lelli wrote:
> > > > > > > On 10/01/25 18:40, Jon Hunter wrote:
> > > > > > >
> > > > > > > ...
> > > > > > >
> > > > > > > > With the above I see the following ...
> > > > > > > >
> > > > > > > > [   53.919672] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> > > > > > > > [   53.930608] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> > > > > > > > [   53.941601] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
> > > > > > >
> > > > > > > So far so good.
> > > > > > >
> > > > > > > > [   53.952186] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=2
> > > > > > >
> > > > > > > But, this above doesn't sound right.
> > > > > > >
> > > > > > > > [   53.962938] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=1
> > > > > > > > [   53.971068] Error taking CPU1 down: -16
> > > > > > > > [   53.974912] Non-boot CPUs are not disabled
> > > > > > >
> > > > > > > What is the topology of your board?
> > > > > > >
> > > > > > > Are you using any cpuset configuration for partitioning CPUs?
> > > > > >
> > > > > > I just noticed that by default we do boot this board with 'isolcpus=1-2'. I
> > > > > > see that this is a deprecated cmdline argument now and I must admit I don't
> > > > > > know the history of this for this specific board. It is quite old now.
> > > > > >
> > > > > > Thierry, I am curious if you have this set for Tegra186 or not? Looks like
> > > > > > our BSP (r35 based) sets this by default.
> > > > > >
> > > > > > I did try removing this and that does appear to fix it.
> > > > >
> > > > > OK, good.
> > > > >
> > > > > > Juri, let me know your thoughts.
> > > > >
> > > > > Thanks for the additional info. I guess I could now try to repro using
> > > > > isolcpus at boot on systems I have access to (to possibly understand
> > > > > what the underlying problem is).
> > > >
> > > > I think the problem lies in the def_root_domain accounting of dl_servers
> > > > (which isolated cpus remain attached to).
> > > >
> > > > Came up with the following, of which I'm not yet fully convinced, but
> > > > could you please try it out on top of the debug patch and see how it
> > > > does with the original failing setup using isolcpus?
> > >
> > > Thanks I added the change, but suspend is still failing with this ...
> >
> > Thanks!
> >
> > > [  210.595431] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
> > > [  210.606269] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
> > > [  210.617281] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
> > > [  210.627205] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=2
> > > [  210.637752] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=262140 dl_bw_cpus=1
> >                                                                          ^
> > Different than before, but still not what I expected. Looks like there
> > are conditions/paths I currently cannot replicate on my setup, so more
> > thinking. Unfortunately I will be out traveling next week, so this
> > might require a bit of time.
>
> I see that this is now in the mainline and our board is still failing to
> suspend. Let me know if there is anything else you need me to test.

I've been trying to repro on my side. Since I don't have access to
boards like yours, I tried to come up with something based on qemu/kvm,
essentially a 6 CPUs virtualized environment with isolcpus=1,2.

But, offlining of CPUs 1 and 2 works as expected with my proposed fix,
so I am back at wondering what might be different in your case.
On 13/01/2025 09:32, Juri Lelli wrote:
> On 10/01/25 18:40, Jon Hunter wrote:
>
> ...
>
>> With the above I see the following ...
>>
>> [   53.919672] dl_bw_manage: cpu=5 cap=3072 fair_server_bw=52428 total_bw=209712 dl_bw_cpus=4
>> [   53.930608] dl_bw_manage: cpu=4 cap=2048 fair_server_bw=52428 total_bw=157284 dl_bw_cpus=3
>> [   53.941601] dl_bw_manage: cpu=3 cap=1024 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2
>
> So far so good.
>
>> [   53.952186] dl_bw_manage: cpu=2 cap=1024 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=2
>
> But, this above doesn't sound right.
>
>> [   53.962938] dl_bw_manage: cpu=1 cap=0 fair_server_bw=52428 total_bw=576708 dl_bw_cpus=1
>> [   53.971068] Error taking CPU1 down: -16
>> [   53.974912] Non-boot CPUs are not disabled
>
> What is the topology of your board?

This is a Tegra186 and the topology is described in
arch/arm64/boot/dts/nvidia/tegra186.dtsi. This is from the datasheet ...

"Two CPU clusters connected by a high-performance coherent interconnect
fabric designed by NVIDIA; enables simultaneous operation of both CPU
clusters for a true heterogeneous multi-processing (HMP) environment.
The Denver 2 (Dual-Core) CPU cluster is optimized for higher
single-thread performance; the ARM Cortex-A57 MPCore (Quad-Core) CPU
cluster is better suited for multi-threaded applications and lighter
loads."

So one of these ARM big.LITTLE style topologies.

> Are you using any cpuset configuration for partitioning CPUs?

Not that I am aware of.

> Also, could you please add sched_debug to the kernel cmdline and enable
> CONFIG_SCHED_DEBUG (if not enabled already)? That should print
> additional information about scheduling domains in case they get
> reconfigured for some reason.

OK I can enable that.

Thanks
Jon

--
nvpublic
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 53916d5fd3c0b658de3463439dd2b7ce765072cb
Gitweb: https://git.kernel.org/tip/53916d5fd3c0b658de3463439dd2b7ce765072cb
Author: Juri Lelli <juri.lelli@redhat.com>
AuthorDate: Fri, 15 Nov 2024 11:48:29
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Mon, 02 Dec 2024 12:01:31 +01:00
sched/deadline: Check bandwidth overflow earlier for hotplug
Currently we check for bandwidth overflow potentially due to hotplug
operations at the end of sched_cpu_deactivate(), after the cpu going
offline has already been removed from scheduling, active_mask, etc.
This can create issues for DEADLINE tasks, as there is a substantial
race window between the start of sched_cpu_deactivate() and the moment
we possibly decide to roll-back the operation if dl_bw_deactivate()
returns failure in cpuset_cpu_inactive(). An example is a throttled
task that sees its replenishment timer fire while the cpu it was
previously running on is already considered offline, but before
dl_bw_deactivate() has had a chance to say no and the roll-back has
happened.
Fix this by calling dl_bw_deactivate() first thing in
sched_cpu_deactivate() and by doing the required calculation there,
treating the cpu passed as an argument as already offline.
By doing so we also simplify sched_cpu_deactivate(), as there is no need
anymore for any kind of roll-back if we fail early.
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Tested-by: Waiman Long <longman@redhat.com>
Link: https://lore.kernel.org/r/Zzc1DfPhbvqDDIJR@jlelli-thinkpadt14gen4.remote.csb
---
kernel/sched/core.c | 22 +++++++---------------
kernel/sched/deadline.c | 12 ++++++++++--
2 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 29f6b24..1dee3f5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8182,19 +8182,14 @@ static void cpuset_cpu_active(void)
 	cpuset_update_active_cpus();
 }
 
-static int cpuset_cpu_inactive(unsigned int cpu)
+static void cpuset_cpu_inactive(unsigned int cpu)
 {
 	if (!cpuhp_tasks_frozen) {
-		int ret = dl_bw_deactivate(cpu);
-
-		if (ret)
-			return ret;
 		cpuset_update_active_cpus();
 	} else {
 		num_cpus_frozen++;
 		partition_sched_domains(1, NULL, NULL);
 	}
-	return 0;
 }
 
 static inline void sched_smt_present_inc(int cpu)
@@ -8256,6 +8251,11 @@ int sched_cpu_deactivate(unsigned int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	int ret;
 
+	ret = dl_bw_deactivate(cpu);
+
+	if (ret)
+		return ret;
+
 	/*
 	 * Remove CPU from nohz.idle_cpus_mask to prevent participating in
 	 * load balancing when not active
@@ -8301,15 +8301,7 @@ int sched_cpu_deactivate(unsigned int cpu)
 		return 0;
 
 	sched_update_numa(cpu, false);
-	ret = cpuset_cpu_inactive(cpu);
-	if (ret) {
-		sched_smt_present_inc(cpu);
-		sched_set_rq_online(rq, cpu);
-		balance_push_set(cpu, false);
-		set_cpu_active(cpu, true);
-		sched_update_numa(cpu, true);
-		return ret;
-	}
+	cpuset_cpu_inactive(cpu);
 	sched_domains_numa_masks_clear(cpu);
 	return 0;
 }
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index fa787c7..1c8b838 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3496,6 +3496,13 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
 		break;
 	case dl_bw_req_deactivate:
 		/*
+		 * cpu is not off yet, but we need to do the math by
+		 * considering it off already (i.e., what would happen if we
+		 * turn cpu off?).
+		 */
+		cap -= arch_scale_cpu_capacity(cpu);
+
+		/*
 		 * cpu is going offline and NORMAL tasks will be moved away
 		 * from it. We can thus discount dl_server bandwidth
 		 * contribution as it won't need to be servicing tasks after
@@ -3512,9 +3519,10 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
 		if (dl_b->total_bw - fair_server_bw > 0) {
 			/*
 			 * Leaving at least one CPU for DEADLINE tasks seems a
-			 * wise thing to do.
+			 * wise thing to do. As said above, cpu is not offline
+			 * yet, so account for that.
 			 */
-			if (dl_bw_cpus(cpu))
+			if (dl_bw_cpus(cpu) - 1)
				overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
 			else
 				overflow = 1;
On 11/14/24 9:28 AM, Juri Lelli wrote:
> Hello!
>
> v2 of a patch series [3] that addresses two issues affecting DEADLINE
> bandwidth accounting during non-destructive changes to root domains and
> hotplug operations. The series is based on top of Waiman's
> "cgroup/cpuset: Remove redundant rebuild_sched_domains_locked() calls"
> series [1] which is now merged into cgroups/for-6.13 (this series is
> based on top of that, commit c4c9cebe2fb9). The discussion that
> eventually led to these two series can be found at [2].
>
> Waiman reported that v1 still failed to make his test_cpuset_prs.sh
> happy, so I had to change both patches a little. It now seems to pass on
> my runs.
>
> Patch 01/02 deals with non-destructive root domain changes. With respect
> to v1 we now always restore dl_server contributions, considering root
> domain span and active cpus mask (otherwise accounting on the default
> root domain would end up being incorrect).
>
> Patch 02/02 deals with hotplug. With respect to v1 I added special
> casing for when total_bw = 0 (so no DEADLINE tasks to consider) and when
> a root domain is left with no cpus due to hotplug.
>
> In all honesty, I still see intermittent issues that seem, however, to
> be related to the dance we do in sched_cpu_deactivate(), where we first
> turn everything related to a cpu/rq off and revert that if
> cpuset_cpu_inactive() reveals failing DEADLINE checks. But, since these
> seem to be orthogonal to the original discussion we started from, I
> wanted to send this out as a hopefully meaningful update/improvement
> over yesterday's version. Will continue looking into this.
>
> Please go forth and test/review.
>
> Series also available at
>
> git@github.com:jlelli/linux.git upstream/dl-server-apply
>
> Best,
> Juri
>
> [1] https://lore.kernel.org/lkml/20241110025023.664487-1-longman@redhat.com/
> [2] https://lore.kernel.org/lkml/20241029225116.3998487-1-joel@joelfernandes.org/
> [3] v1 - https://lore.kernel.org/lkml/20241113125724.450249-1-juri.lelli@redhat.com/
>
> Juri Lelli (2):
>   sched/deadline: Restore dl_server bandwidth on non-destructive root
>     domain changes
>   sched/deadline: Correctly account for allocated bandwidth during
>     hotplug
>
>  kernel/sched/core.c     |  2 +-
>  kernel/sched/deadline.c | 65 +++++++++++++++++++++++++++++++++--------
>  kernel/sched/sched.h    |  2 +-
>  kernel/sched/topology.c |  8 +++--
>  4 files changed, 60 insertions(+), 17 deletions(-)

Thanks for this new patch series. I have confirmed that with some minor
twisting of the cpuset code, all the test cases in the test_cpuset_prs.sh
script passed.

Tested-by: Waiman Long <longman@redhat.com>