This series borrows the concept used on Tegra234 to scale the EMC based
on CPU frequency and applies it to Tegra186 and Tegra194. However, the
BPMP on those SoCs does not support the bandwidth manager, so the
scaling itself is handled similarly to how Tegra124 currently works.
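For illustration only: the Tegra124-style approach amounts to the CPU
frequency driver choosing an EMC rate from a static CPU-rate-to-EMC-rate
mapping, rather than delegating to a BPMP-side bandwidth manager. A
minimal user-space sketch of such a mapping follows; all values, names,
and table entries are hypothetical and not taken from this series.

```c
#include <stddef.h>

/* Hypothetical CPU-rate -> EMC-rate table. In the real drivers such a
 * mapping would come from the device tree OPP tables; the entries here
 * are made up purely to show the shape of the lookup. */
struct cpu_emc_entry {
	unsigned long cpu_khz;	/* CPU rate threshold */
	unsigned long emc_khz;	/* EMC rate to request at or above it */
};

static const struct cpu_emc_entry cpu_emc_table[] = {
	{  500000,  204000 },
	{ 1000000,  408000 },
	{ 1500000,  800000 },
	{ 2000000, 1600000 },
};

/* Pick the highest EMC rate whose CPU threshold is at or below the
 * target CPU rate; fall back to the lowest entry for very low rates. */
unsigned long cpu_to_emc_khz(unsigned long cpu_khz)
{
	unsigned long emc = cpu_emc_table[0].emc_khz;
	size_t n = sizeof(cpu_emc_table) / sizeof(cpu_emc_table[0]);

	for (size_t i = 0; i < n; i++) {
		if (cpu_khz >= cpu_emc_table[i].cpu_khz)
			emc = cpu_emc_table[i].emc_khz;
	}
	return emc;
}
```

In the actual drivers the selected rate would be handed to the
interconnect framework rather than returned to a caller; the table above
only shows the shape of the mapping.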
Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
Changes in v2:
- Use opp scoped free in patch 3
- Cleanup as requested in patch 3
- Move patch 3 to the start of the series to keep subsystems grouped
- Link to v1: https://lore.kernel.org/r/20250831-tegra186-icc-v1-0-607ddc53b507@gmail.com
---
Aaron Kling (8):
cpufreq: tegra186: add OPP support and set bandwidth
dt-bindings: memory: tegra186-mc: Add dummy client IDs for Tegra186
dt-bindings: memory: tegra194-mc: Add dummy client IDs for Tegra194
memory: tegra186-emc: Support non-bpmp icc scaling
memory: tegra186: Support icc scaling
memory: tegra194: Support icc scaling
arm64: tegra: Add CPU OPP tables for Tegra186
arm64: tegra: Add CPU OPP tables for Tegra194
arch/arm64/boot/dts/nvidia/tegra186.dtsi | 317 +++++++++++++++
arch/arm64/boot/dts/nvidia/tegra194.dtsi | 636 +++++++++++++++++++++++++++++++
drivers/cpufreq/tegra186-cpufreq.c | 152 +++++++-
drivers/memory/tegra/tegra186-emc.c | 132 ++++++-
drivers/memory/tegra/tegra186.c | 48 +++
drivers/memory/tegra/tegra194.c | 59 ++-
include/dt-bindings/memory/tegra186-mc.h | 4 +
include/dt-bindings/memory/tegra194-mc.h | 6 +
8 files changed, 1344 insertions(+), 10 deletions(-)
---
base-commit: 1b237f190eb3d36f52dffe07a40b5eb210280e00
change-id: 20250823-tegra186-icc-7299110cd774
prerequisite-change-id: 20250826-tegra186-cpufreq-fixes-7fbff81c68a2:v3
prerequisite-patch-id: 74a2633b412b641f9808306cff9b0a697851d6c8
prerequisite-patch-id: 9c52827317f7abfb93885febb1894b40967bd64c
Best regards,
--
Aaron Kling <webgeek1234@gmail.com>
On 09/09/2025 15:21, Aaron Kling via B4 Relay wrote:
> This series borrows the concept used on Tegra234 to scale EMC based on
> CPU frequency and applies it to Tegra186 and Tegra194. Except that the
> bpmp on those archs does not support bandwidth manager, so the scaling
> iteself is handled similar to how Tegra124 currently works.

Nothing improved:
https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/

Best regards,
Krzysztof
On Wed, Oct 8, 2025 at 7:05 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>
> On 09/09/2025 15:21, Aaron Kling via B4 Relay wrote:
> > [...]
>
> Nothing improved:
> https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/

The dt changes should go last. The cpufreq and memory pieces can go in
either order because the new code won't be used unless the dt pieces
activate them.

Aaron
On 13/10/2025 04:18, Aaron Kling wrote:
> On Wed, Oct 8, 2025 at 7:05 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>> [...]
>>
>> Nothing improved:
>> https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/
>
> The dt changes should go last. The cpufreq and memory pieces can go in
> either order because the new code won't be used unless the dt pieces
> activate them.

Then cpufreq and memory should never have been part of the same patchset.
Instead of a single command to apply it, maintainers now need multiple
steps. Really, when you send patches, think about how they will be
handled and how much effort that takes on the maintainer side.

Best regards,
Krzysztof
On Sun, Oct 12, 2025 at 9:25 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>
> On 13/10/2025 04:18, Aaron Kling wrote:
> > [...]
> >
> > The dt changes should go last. The cpufreq and memory pieces can go in
> > either order because the new code won't be used unless the dt pieces
> > activate them.
>
> Then cpufreq and memory should never have been part of same patchset.
> Instead of simple command to apply it, maintainers need multiple steps.
> Really, when you send patches, think how this should be handled and how
> much effort this needs on maintainer side.

To be honest, I was expecting all of these to go through the tegra
tree, since all the drivers I touch are owned by the tegra
maintainers. But getting stuff moved through that tree has been like
pulling teeth recently. So Krzysztof, what's the alternative you're
suggesting here?

Aaron
On Sun, Oct 12, 2025 at 9:31 PM Aaron Kling <webgeek1234@gmail.com> wrote:
>
> [...]
>
> To be honest, I was expecting all of these to go through the tegra
> tree, since all the drivers I touch are owned by the tegra
> maintainers. But getting stuff moved through that tree has been like
> pulling teeth recently. So Krzysztof, what's the alternative you're
> suggesting here?

What is the expectation for this series, and, relatedly, for the
tegra210 actmon series? Everything put together here accomplishes the
single logical task of enabling dynamic frequency scaling for the EMC on
Tegra186 and Tegra194. The driver subsystems do not have hard
dependencies, in that the new driver code falls back gracefully rather
than failing to probe if the complementary driver changes are missing.
But if I were to split them up, how would it work? Do I send the cpufreq
patch by itself, the memory changes in a group, and then the dt changes
in a group with b4 deps lines for the two driver sets? That seems crazy
complicated for something that is a single logical concept, especially
when, as far as I know, this can all go together through the tegra tree.

Aaron
On 20/10/2025 22:14, Aaron Kling wrote:
> [...]
>
> What is the expectation for the series here, and related, the tegra210
> actmon series? Everything put together here accomplishes the single
> logical task of enabling dynamic frequency scaling for emc on tegra186
> and tegra194. The driver subsystems do not have hard dependencies in

There are comments from Viresh, so I dropped the patchset from my queue.

> that the new driver code has fallbacks to not fail to probe if the
> complementary driver changes are missing. But if I was to split them
> up, how would it work? I send the cpufreq patch by itself, the memory

Please open the MAINTAINERS file or read the output of get_maintainer.pl.
It will tell you what the subsystems here are. Currently you mixed a lot:
three subsystems, which has only drawbacks. There is no single benefit to
that approach unless you have dependencies (REAL dependencies), but you
said you don't have any. If you do have dependencies, that must be the
FIRST, most important thing you mention in the cover letter. Many
maintainers appreciate it if you mention them in the patch changelogs as
well, because they (me included) do not read cover letters.

So if you open the MAINTAINERS file, you will find these subsystems:
cpufreq, Tegra SoC, and memory controllers (where the DT bindings
belong). You split your patchset per subsystem, with the one difference
(explained in the DT submitting-patches documentation) that DT bindings
for drivers belong to the driver subsystem. The DTS patches using newly
introduced bindings should carry lore links to the patchsets with the
bindings, so the SoC maintainer can apply them once the bindings hit
next.

I also described the entire process before:
https://lore.kernel.org/linux-samsung-soc/CADrjBPq_0nUYRABKpskRF_dhHu+4K=duPVZX==0pr+cjSL_caQ@mail.gmail.com/T/#m2d9130a1342ab201ab49670fa6c858ee3724c83c
so I have now repeated it a second time. It is the last time I repeat the
basics of organizing patchsets.

> changes in a group, then the dt changes in a group with b4 deps lines
> for the two driver sets? That seems crazy complicated for something

That's pretty standard, nothing complicated. You should have seen a
complicated posting here:
https://lore.kernel.org/all/20231121-topic-sm8650-upstream-dt-v3-0-db9d0507ffd3@linaro.org/
We all send multiple patchsets, with or without dependencies.

Best regards,
Krzysztof
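For reference, the b4 trailer convention already used by this very
series (the base-commit / prerequisite-change-id / prerequisite-patch-id
lines in the cover letter above) can express such cross-patchset
dependencies. A hypothetical sketch of what the DTS series' trailer
section could look like after the split Krzysztof describes, with all
identifiers as placeholders:

```
---
base-commit: <commit the DTS series applies to>
change-id: <change-id of the DTS series>
prerequisite-change-id: <change-id of the cpufreq series>:v1
prerequisite-change-id: <change-id of the memory series>:v1
```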