This series borrows the concept used on Tegra234 to scale EMC based on
CPU frequency and applies it to Tegra186 and Tegra194. The difference is
that the BPMP on those SoCs does not support the bandwidth manager, so
the scaling itself is handled similarly to how Tegra124 currently works.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
Aaron Kling (8):
      dt-bindings: tegra: Add ICC IDs for dummy memory clients for Tegra186
      dt-bindings: tegra: Add ICC IDs for dummy memory clients for Tegra194
      cpufreq: tegra186: add OPP support and set bandwidth
      memory: tegra186-emc: Support non-bpmp icc scaling
      memory: tegra186: Support icc scaling
      memory: tegra194: Support icc scaling
      arm64: tegra: Add CPU OPP tables for Tegra186
      arm64: tegra: Add CPU OPP tables for Tegra194

 arch/arm64/boot/dts/nvidia/tegra186.dtsi | 317 +++++++++++++++
 arch/arm64/boot/dts/nvidia/tegra194.dtsi | 636 +++++++++++++++++++++++++++++++
 drivers/cpufreq/tegra186-cpufreq.c       | 152 +++++++-
 drivers/memory/tegra/tegra186-emc.c      | 132 ++++++-
 drivers/memory/tegra/tegra186.c          |  48 +++
 drivers/memory/tegra/tegra194.c          |  59 ++-
 include/dt-bindings/memory/tegra186-mc.h |   4 +
 include/dt-bindings/memory/tegra194-mc.h |   6 +
 8 files changed, 1344 insertions(+), 10 deletions(-)
---
base-commit: 1b237f190eb3d36f52dffe07a40b5eb210280e00
change-id: 20250823-tegra186-icc-7299110cd774

prerequisite-change-id: 20250826-tegra186-cpufreq-fixes-7fbff81c68a2:v3
prerequisite-patch-id: 74a2633b412b641f9808306cff9b0a697851d6c8
prerequisite-patch-id: 9c52827317f7abfb93885febb1894b40967bd64c

Best regards,
--
Aaron Kling <webgeek1234@gmail.com>
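For readers who have not seen the Tegra234 code, here is a minimal
sketch of the kernel-side mechanism the cover letter borrows. It assumes
the CPU node gains an interconnects path to the EMC and that the OPP
entries carry opp-peak-kBps; the helper name is illustrative and not
taken from the series:

        #include <linux/err.h>
        #include <linux/pm_opp.h>

        /*
         * Sketch, not the series' actual code: when cpufreq picks a new
         * CPU frequency, look up the matching OPP and apply it. If the
         * OPP table was built from DT entries carrying opp-peak-kBps
         * and the CPU node has an interconnects path to the EMC,
         * dev_pm_opp_set_opp() also forwards the bandwidth request
         * through the interconnect framework.
         */
        static int cpu_apply_bandwidth(struct device *cpu_dev,
                                       unsigned long freq_hz)
        {
                struct dev_pm_opp *opp;
                int err;

                /* find the lowest OPP at or above the requested rate */
                opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
                if (IS_ERR(opp))
                        return PTR_ERR(opp);

                /* applies the OPP's bandwidth along with its other resources */
                err = dev_pm_opp_set_opp(cpu_dev, opp);
                dev_pm_opp_put(opp);

                return err;
        }

This is the same pattern the Tegra234 support in tegra194-cpufreq uses:
the cpufreq driver never talks to the memory controller directly, it
only selects an OPP and lets the OPP core propagate the bandwidth.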
On 01/09/25 09:03, Aaron Kling via B4 Relay wrote:
> This series borrows the concept used on Tegra234 to scale EMC based on
> CPU frequency and applies it to Tegra186 and Tegra194. The difference is
> that the BPMP on those SoCs does not support the bandwidth manager, so
> the scaling itself is handled similarly to how Tegra124 currently works.
>
> Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
> ---

Tegra186/194 had multiple drivers for BWMGR, ISOMGR and LA+PTSA configs
on the CPU side. I am not sure how effective this patch series will be
in the absence of those components. In Tegra234, those were moved to
BPMP-FW, so the kernel forwards the bandwidth request to the BPMP (R5),
which takes care of setting the final frequency.

Thank you,
Sumit Gupta

> Aaron Kling (8):
>   dt-bindings: tegra: Add ICC IDs for dummy memory clients for Tegra186
>   dt-bindings: tegra: Add ICC IDs for dummy memory clients for Tegra194
>   cpufreq: tegra186: add OPP support and set bandwidth
>   memory: tegra186-emc: Support non-bpmp icc scaling
>   memory: tegra186: Support icc scaling
>   memory: tegra194: Support icc scaling
>   arm64: tegra: Add CPU OPP tables for Tegra186
>   arm64: tegra: Add CPU OPP tables for Tegra194
>
>  arch/arm64/boot/dts/nvidia/tegra186.dtsi | 317 +++++++++++++++
>  arch/arm64/boot/dts/nvidia/tegra194.dtsi | 636 +++++++++++++++++++++++++++++++
>  drivers/cpufreq/tegra186-cpufreq.c       | 152 +++++++-
>  drivers/memory/tegra/tegra186-emc.c      | 132 ++++++-
>  drivers/memory/tegra/tegra186.c          |  48 +++
>  drivers/memory/tegra/tegra194.c          |  59 ++-
>  include/dt-bindings/memory/tegra186-mc.h |   4 +
>  include/dt-bindings/memory/tegra194-mc.h |   6 +
>  8 files changed, 1344 insertions(+), 10 deletions(-)
> ---
> base-commit: 1b237f190eb3d36f52dffe07a40b5eb210280e00
> change-id: 20250823-tegra186-icc-7299110cd774
>
> prerequisite-change-id: 20250826-tegra186-cpufreq-fixes-7fbff81c68a2:v3
> prerequisite-patch-id: 74a2633b412b641f9808306cff9b0a697851d6c8
> prerequisite-patch-id: 9c52827317f7abfb93885febb1894b40967bd64c
>
> Best regards,
> --
> Aaron Kling <webgeek1234@gmail.com>
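For comparison, here is a condensed sketch of the Tegra234-style path
Sumit describes, where the kernel does no frequency math itself and only
forwards the aggregated bandwidth to the BPMP over MRQ_BWMGR_INT, with
the firmware picking the final EMC frequency. The wrapper function and
its parameters are illustrative; the field usage mirrors the Tegra234
memory controller driver and should be checked against
include/soc/tegra/bpmp-abi.h:

        #include <soc/tegra/bpmp.h>
        #include <soc/tegra/bpmp-abi.h>

        /*
         * Sketch: forward a client's aggregated bandwidth to BPMP-FW.
         * iso_bw stays zero here on the assumption of a non-ISO client
         * such as the CPU cluster.
         */
        static int forward_bw_to_bpmp(struct tegra_bpmp *bpmp, u32 client_id,
                                      u32 avg_kbps, u32 peak_kbps)
        {
                struct mrq_bwmgr_int_request req = { 0 };
                struct mrq_bwmgr_int_response resp = { 0 };
                struct tegra_bpmp_message msg = {
                        .mrq = MRQ_BWMGR_INT,
                        .tx = { .data = &req, .size = sizeof(req) },
                        .rx = { .data = &resp, .size = sizeof(resp) },
                };

                req.cmd = CMD_BWMGR_INT_CALC_AND_SET;
                req.bwmgr_calc_set_req.client_id = client_id;
                req.bwmgr_calc_set_req.niso_bw = avg_kbps;
                req.bwmgr_calc_set_req.mc_floor = peak_kbps;
                req.bwmgr_calc_set_req.floor_unit = BWMGR_INT_UNIT_KBPS;

                /* BPMP (R5) computes and applies the final EMC rate */
                return tegra_bpmp_transfer(bpmp, &msg);
        }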
On Thu, Sep 4, 2025 at 6:47 AM Sumit Gupta <sumitg@nvidia.com> wrote:
>
> On 01/09/25 09:03, Aaron Kling via B4 Relay wrote:
> > This series borrows the concept used on Tegra234 to scale EMC based on
> > CPU frequency and applies it to Tegra186 and Tegra194. The difference is
> > that the BPMP on those SoCs does not support the bandwidth manager, so
> > the scaling itself is handled similarly to how Tegra124 currently works.
> >
> > Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
> > ---
>
> Tegra186/194 had multiple drivers for BWMGR, ISOMGR and LA+PTSA configs
> on the CPU side. I am not sure how effective this patch series will be
> in the absence of those components. In Tegra234, those were moved to
> BPMP-FW, so the kernel forwards the bandwidth request to the BPMP (R5),
> which takes care of setting the final frequency.

I know it's not ideal, but it seems to be working okay as a rough
approximation. When the CPU governor kicks up the CPU frequency, the EMC
frequency scales to match. In my testing, this has been enough to keep
AOSP from obviously lagging. Existing drivers for earlier SoCs, such as
tegra124-emc, stub out LA+PTSA as well. Does the lack of that handling
make things worse for Tegra186/194 than it would for Tegra124/Tegra210?
I'm trying to improve things across all these SoCs a small piece at a
time. In several of my recent series, I'm just trying to get any form of
load-based DFS working, so I don't have to keep everything pegged to the
maximum frequency, with the associated thermal and power cost.

Aaron
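A minimal sketch of the Tegra124-style fallback Aaron refers to: without
a BPMP bandwidth manager, the EMC driver's interconnect set() callback
turns the aggregated bandwidth into a floor for the EMC clock itself.
The struct and names here are illustrative; the DDR factor of 2 (data
sampled on both clock edges) follows tegra124-emc:

        #include <linux/clk.h>
        #include <linux/math64.h>
        #include <linux/minmax.h>
        #include <linux/interconnect-provider.h>

        struct emc_sketch {
                struct icc_provider provider;
                struct clk *clk;
                unsigned int dram_bus_width;    /* in bits */
        };

        static int emc_icc_set(struct icc_node *src, struct icc_node *dst)
        {
                struct emc_sketch *emc = container_of(dst->provider,
                                                      struct emc_sketch,
                                                      provider);
                /* ICC bandwidth units are kbytes/s; convert to bytes/s */
                u64 rate = (u64)max(dst->avg_bw, dst->peak_bw) * 1000;

                /* bytes/s -> Hz: two data transfers per cycle on DDR */
                rate = div_u64(rate, 2 * emc->dram_bus_width / 8);

                return clk_set_min_rate(emc->clk,
                                        min_t(u64, rate, ULONG_MAX));
        }

tegra124-emc applies this through an internal minimum-rate aggregator
rather than calling clk_set_min_rate() directly, but the arithmetic is
the same.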
On 04/09/25 22:17, Aaron Kling wrote:
> On Thu, Sep 4, 2025 at 6:47 AM Sumit Gupta <sumitg@nvidia.com> wrote:
>>
>> On 01/09/25 09:03, Aaron Kling via B4 Relay wrote:
>>> This series borrows the concept used on Tegra234 to scale EMC based on
>>> CPU frequency and applies it to Tegra186 and Tegra194. The difference is
>>> that the BPMP on those SoCs does not support the bandwidth manager, so
>>> the scaling itself is handled similarly to how Tegra124 currently works.
>>>
>>> Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
>>> ---
>> Tegra186/194 had multiple drivers for BWMGR, ISOMGR and LA+PTSA configs
>> on the CPU side. I am not sure how effective this patch series will be
>> in the absence of those components. In Tegra234, those were moved to
>> BPMP-FW, so the kernel forwards the bandwidth request to the BPMP (R5),
>> which takes care of setting the final frequency.
> I know it's not ideal, but it seems to be working okay as a rough
> approximation. When the CPU governor kicks up the CPU frequency, the EMC
> frequency scales to match. In my testing, this has been enough to keep
> AOSP from obviously lagging. Existing drivers for earlier SoCs, such as
> tegra124-emc, stub out LA+PTSA as well. Does the lack of that handling
> make things worse for Tegra186/194 than it would for Tegra124/Tegra210?
> I'm trying to improve things across all these SoCs a small piece at a
> time. In several of my recent series, I'm just trying to get any form of
> load-based DFS working, so I don't have to keep everything pegged to the
> maximum frequency, with the associated thermal and power cost.
>
> Aaron

I am not very familiar with the previous SoCs. But yes, having some kind
of scaling is better than having none at all and always running at max.
This can be a starting point for more improvements to these SoCs in the
future.

Thank you,
Sumit Gupta
On Sun, Aug 31, 2025 at 10:33:48PM -0500, Aaron Kling wrote:
> This series borrows the concept used on Tegra234 to scale EMC based on
> CPU frequency and applies it to Tegra186 and Tegra194. The difference is
> that the BPMP on those SoCs does not support the bandwidth manager, so
> the scaling itself is handled similarly to how Tegra124 currently works.
>

Three different subsystems and not a single explanation of the
dependencies or of how this can be merged.

Best regards,
Krzysztof
On Tue, Sep 2, 2025 at 3:23 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>
> On Sun, Aug 31, 2025 at 10:33:48PM -0500, Aaron Kling wrote:
> > This series borrows the concept used on Tegra234 to scale EMC based on
> > CPU frequency and applies it to Tegra186 and Tegra194. The difference is
> > that the BPMP on those SoCs does not support the bandwidth manager, so
> > the scaling itself is handled similarly to how Tegra124 currently works.
> >
>
> Three different subsystems and not a single explanation of the
> dependencies or of how this can be merged.

The only cross-subsystem hard dependency is that patches 5 and 6 need
patches 1 and 2 respectively. Patch 5 logically needs patch 3 to
operate as expected, but there should not be compile or probe failures
if those are applied out of order. How would you expect this to be
presented in a cover letter?

Aaron
On 02/09/2025 18:51, Aaron Kling wrote:
> On Tue, Sep 2, 2025 at 3:23 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>>
>> On Sun, Aug 31, 2025 at 10:33:48PM -0500, Aaron Kling wrote:
>>> This series borrows the concept used on Tegra234 to scale EMC based on
>>> CPU frequency and applies it to Tegra186 and Tegra194. The difference is
>>> that the BPMP on those SoCs does not support the bandwidth manager, so
>>> the scaling itself is handled similarly to how Tegra124 currently works.
>>>
>>
>> Three different subsystems and not a single explanation of the
>> dependencies or of how this can be merged.
>
> The only cross-subsystem hard dependency is that patches 5 and 6 need
> patches 1 and 2 respectively. Patch 5 logically needs patch 3 to
> operate as expected, but there should not be compile or probe failures
> if those are applied out of order. How would you expect this to be
> presented in a cover letter?

Also, placing the cpufreq patch between two memory controller patches
really makes it more difficult for the maintainers to apply. Think
thoroughly about how this patchset is supposed to be read. I will move
it to the bottom of my review queue.

Best regards,
Krzysztof
On Wed, Sep 3, 2025 at 1:20 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>
> On 02/09/2025 18:51, Aaron Kling wrote:
> > On Tue, Sep 2, 2025 at 3:23 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
> >>
> >> On Sun, Aug 31, 2025 at 10:33:48PM -0500, Aaron Kling wrote:
> >>> This series borrows the concept used on Tegra234 to scale EMC based on
> >>> CPU frequency and applies it to Tegra186 and Tegra194. The difference is
> >>> that the BPMP on those SoCs does not support the bandwidth manager, so
> >>> the scaling itself is handled similarly to how Tegra124 currently works.
> >>>
> >>
> >> Three different subsystems and not a single explanation of the
> >> dependencies or of how this can be merged.
> >
> > The only cross-subsystem hard dependency is that patches 5 and 6 need
> > patches 1 and 2 respectively. Patch 5 logically needs patch 3 to
> > operate as expected, but there should not be compile or probe failures
> > if those are applied out of order. How would you expect this to be
> > presented in a cover letter?
>
> Also, placing the cpufreq patch between two memory controller patches
> really makes it more difficult for the maintainers to apply. Think
> thoroughly about how this patchset is supposed to be read. I will move
> it to the bottom of my review queue.

This is making me more confused. My understanding was that a series
like this, with binding, driver, and DT changes, would flow like this:
all bindings first, all driver changes in the middle, and all DT
changes last. Are you suggesting that this should be: cpufreq driver
-> bindings -> memory drivers -> DT? Are the bindings supposed to be
pulled in with the driver changes? I had understood those to be managed
separately.

Aaron
On 03/09/2025 08:37, Aaron Kling wrote:
> On Wed, Sep 3, 2025 at 1:20 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>>
>> On 02/09/2025 18:51, Aaron Kling wrote:
>>> On Tue, Sep 2, 2025 at 3:23 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>>>>
>>>> On Sun, Aug 31, 2025 at 10:33:48PM -0500, Aaron Kling wrote:
>>>>> This series borrows the concept used on Tegra234 to scale EMC based on
>>>>> CPU frequency and applies it to Tegra186 and Tegra194. The difference is
>>>>> that the BPMP on those SoCs does not support the bandwidth manager, so
>>>>> the scaling itself is handled similarly to how Tegra124 currently works.
>>>>>
>>>>
>>>> Three different subsystems and not a single explanation of the
>>>> dependencies or of how this can be merged.
>>>
>>> The only cross-subsystem hard dependency is that patches 5 and 6 need
>>> patches 1 and 2 respectively. Patch 5 logically needs patch 3 to
>>> operate as expected, but there should not be compile or probe failures
>>> if those are applied out of order. How would you expect this to be
>>> presented in a cover letter?
>>
>> Also, placing the cpufreq patch between two memory controller patches
>> really makes it more difficult for the maintainers to apply. Think
>> thoroughly about how this patchset is supposed to be read. I will move
>> it to the bottom of my review queue.
>
> This is making me more confused. My understanding was that a series
> like this, with binding, driver, and DT changes, would flow like this:
> all bindings first, all driver changes in the middle, and all DT

You are mixing completely independent subsystems; that's the main
problem. Don't send a v3 before you understand this or before we finish
the discussion here.

> changes last. Are you suggesting that this should be: cpufreq driver
> -> bindings -> memory drivers -> DT? Are the bindings supposed to be
> pulled in with the driver changes? I had understood those to be managed
> separately.

What does the DT submitting-patches document say?

Best regards,
Krzysztof
On Thu, Sep 4, 2025 at 3:19 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>
> On 03/09/2025 08:37, Aaron Kling wrote:
> > On Wed, Sep 3, 2025 at 1:20 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
> >>
> >> On 02/09/2025 18:51, Aaron Kling wrote:
> >>> On Tue, Sep 2, 2025 at 3:23 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
> >>>>
> >>>> On Sun, Aug 31, 2025 at 10:33:48PM -0500, Aaron Kling wrote:
> >>>>> This series borrows the concept used on Tegra234 to scale EMC based on
> >>>>> CPU frequency and applies it to Tegra186 and Tegra194. The difference is
> >>>>> that the BPMP on those SoCs does not support the bandwidth manager, so
> >>>>> the scaling itself is handled similarly to how Tegra124 currently works.
> >>>>>
> >>>>
> >>>> Three different subsystems and not a single explanation of the
> >>>> dependencies or of how this can be merged.
> >>>
> >>> The only cross-subsystem hard dependency is that patches 5 and 6 need
> >>> patches 1 and 2 respectively. Patch 5 logically needs patch 3 to
> >>> operate as expected, but there should not be compile or probe failures
> >>> if those are applied out of order. How would you expect this to be
> >>> presented in a cover letter?
> >>
> >> Also, placing the cpufreq patch between two memory controller patches
> >> really makes it more difficult for the maintainers to apply. Think
> >> thoroughly about how this patchset is supposed to be read. I will move
> >> it to the bottom of my review queue.
> >
> > This is making me more confused. My understanding was that a series
> > like this, with binding, driver, and DT changes, would flow like this:
> > all bindings first, all driver changes in the middle, and all DT
>
> You are mixing completely independent subsystems; that's the main
> problem. Don't send a v3 before you understand this or before we finish
> the discussion here.
>
> > changes last. Are you suggesting that this should be: cpufreq driver
> > -> bindings -> memory drivers -> DT? Are the bindings supposed to be
> > pulled in with the driver changes? I had understood those to be managed
> > separately.
>
> What does the DT submitting-patches document say?

The only relevant snippet I see is:
"The Documentation/ portion of the patch should come in the series
before the code implementing the binding."

I had gotten it into my head that all bindings should go first as a
separate subsystem, not just the docs. I will double-check all series
before sending new revisions.

Aaron
On 04/09/2025 19:49, Aaron Kling wrote:
>>
>>> changes last. Are you suggesting that this should be: cpufreq driver
>>> -> bindings -> memory drivers -> DT? Are the bindings supposed to be
>>> pulled in with the driver changes? I had understood those to be managed
>>> separately.
>> What does the DT submitting-patches document say?
>
> The only relevant snippet I see is:
> "The Documentation/ portion of the patch should come in the series
> before the code implementing the binding."
>
> I had gotten it into my head that all bindings should go first as a
> separate subsystem, not just the docs. I will double-check all series
> before sending new revisions.

A bit further down, because you asked about the process:

3) For a series going through multiple trees, the binding patch should
be kept with the driver using the binding.

We have been doing it like this for years, and git log would tell you
who committed the bindings (the driver maintainer), so I really do not
understand how the 1% of cases that are handled the other way gave you
the impression in your last sentence.

Best regards,
Krzysztof
On 02/09/2025 18:51, Aaron Kling wrote:
> On Tue, Sep 2, 2025 at 3:23 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>>
>> On Sun, Aug 31, 2025 at 10:33:48PM -0500, Aaron Kling wrote:
>>> This series borrows the concept used on Tegra234 to scale EMC based on
>>> CPU frequency and applies it to Tegra186 and Tegra194. The difference is
>>> that the BPMP on those SoCs does not support the bandwidth manager, so
>>> the scaling itself is handled similarly to how Tegra124 currently works.
>>>
>>
>> Three different subsystems and not a single explanation of the
>> dependencies or of how this can be merged.
>
> The only cross-subsystem hard dependency is that patches 5 and 6 need
> patches 1 and 2 respectively. Patch 5 logically needs patch 3 to
> operate as expected, but there should not be compile or probe failures
> if those are applied out of order. How would you expect this to be
> presented in a cover letter?

In whatever way you wish, but you must clearly express the dependencies
and any merge restrictions.

Best regards,
Krzysztof