For a long time, rk3568's MSI-X had bugs and could only work on one node.
e.g. [ 7.250882] r8125 0002:01:00.0: no MSI/MSI-X. Back to INTx.
Now the ITS of GICv3 on rk3568 has been fixed by commit b08e2f42e86b
("irqchip/gic-v3-its: Share ITS tables with a non-trusted hypervisor")
and commit 2d81e1bb6252 ("irqchip/gic-v3: Add Rockchip 3568002 erratum
workaround").
Following commit b956c9de9175 ("arm64: dts: rockchip: rk356x: Move
PCIe MSI to use GIC ITS instead of MBI"), change the PCIe3 controller's
MSI on rk3568 to use ITS, so that all MSI-X can work properly.
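For reference, the usual way to do this in the devicetree is to point
the PCIe controller's msi-map property (which maps PCI requester IDs to
an MSI controller) at the ITS node instead of the GIC/MBI node. A rough
sketch with placeholder cells, not the exact lines from this diff:

	/* before: MSIs delivered through the GICv3 MBI frame */
	msi-map = <0x0 &gic 0x0 0x1000>;

	/* after: MSIs delivered through the GIC ITS */
	msi-map = <0x0 &its 0x0 0x1000>;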
~# dmesg | grep -E 'GIC|ITS'
[ 0.000000] CPU features: detected: GIC system register CPU interface
[ 0.000000] GIC: enabling workaround for GICv3: non-coherent attribute
[ 0.000000] GICv3: GIC: Using split EOI/Deactivate mode
[ 0.000000] GICv3: 320 SPIs implemented
[ 0.000000] GICv3: 0 Extended SPIs implemented
[ 0.000000] GICv3: MBI range [296:319]
[ 0.000000] GICv3: Using MBI frame 0x00000000fd410000
[ 0.000000] GICv3: GICv3 features: 16 PPIs
[ 0.000000] GICv3: CPU0: found redistributor 0 region 0:0x00000000fd460000
[ 0.000000] ITS [mem 0xfd440000-0xfd45ffff]
[ 0.000000] GIC: enabling workaround for ITS: Rockchip erratum RK3568002
[ 0.000000] GIC: enabling workaround for ITS: non-coherent attribute
[ 0.000000] ITS@0x00000000fd440000: allocated 8192 Devices @210000 (indirect, esz 8, psz 64K, shr 0)
[ 0.000000] ITS@0x00000000fd440000: allocated 32768 Interrupt Collections @220000 (flat, esz 2, psz 64K, shr 0)
[ 0.000000] ITS: using cache flushing for cmd queue
[ 0.000000] GICv3: using LPI property table @0x0000000000230000
[ 0.000000] GIC: using cache flushing for LPI property table
[ 0.000000] GICv3: CPU0: using allocated LPI pending table @0x0000000000240000
[ 0.013946] GICv3: CPU1: found redistributor 100 region 0:0x00000000fd480000
[ 0.013968] GICv3: CPU1: using allocated LPI pending table @0x0000000000250000
[ 0.014948] GICv3: CPU2: found redistributor 200 region 0:0x00000000fd4a0000
[ 0.014968] GICv3: CPU2: using allocated LPI pending table @0x0000000000260000
[ 0.015904] GICv3: CPU3: found redistributor 300 region 0:0x00000000fd4c0000
[ 0.015923] GICv3: CPU3: using allocated LPI pending table @0x0000000000270000
~# lspci -v | grep MSI-X
Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
Capabilities: [b0] MSI-X: Enable- Count=128 Masked-
Capabilities: [b0] MSI-X: Enable+ Count=32 Masked-
Capabilities: [b0] MSI-X: Enable- Count=128 Masked-
Capabilities: [b0] MSI-X: Enable+ Count=32 Masked-
Chukun Pan (1):
arm64: dts: rockchip: rk3568: Move PCIe3 MSI to use GIC ITS
arch/arm64/boot/dts/rockchip/rk3568.dtsi | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--
2.25.1
Hi,
On Sat Mar 8, 2025 at 10:30 AM CET, Chukun Pan wrote:
> For a long time, rk3568's MSI-X had bugs and could only work on one node.
> e.g. [ 7.250882] r8125 0002:01:00.0: no MSI/MSI-X. Back to INTx.
>
> Now the ITS of GICv3 on rk3568 has been fixed by commit b08e2f42e86b
> ("irqchip/gic-v3-its: Share ITS tables with a non-trusted hypervisor")
> and commit 2d81e1bb6252 ("irqchip/gic-v3: Add Rockchip 3568002 erratum
> workaround").
>
> Following commit b956c9de9175 ("arm64: dts: rockchip: rk356x: Move
> PCIe MSI to use GIC ITS instead of MBI"), change the PCIe3 controller's
> MSI on rk3568 to use ITS, so that all MSI-X can work properly.
>
> ~# dmesg | grep -E 'GIC|ITS'
> <snip>
>
> ~# lspci -v | grep MSI-X
> Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
> Capabilities: [b0] MSI-X: Enable- Count=128 Masked-
> Capabilities: [b0] MSI-X: Enable+ Count=32 Masked-
> Capabilities: [b0] MSI-X: Enable- Count=128 Masked-
> Capabilities: [b0] MSI-X: Enable+ Count=32 Masked-
I tested this patch on my NanoPi R5S with a 6.15-rc3 kernel + a number
of [vcc|phy]-supply patches that have been accepted for 6.16 (and a
small WIP LED patch).
With this patch I get the following kernel warnings:
pci 0001:10:00.0: Primary bus is hard wired to 0
pci 0002:20:00.0: Primary bus is hard wired to 0
If I 'unapply' this patch, I don't see those warnings.
The output of the above mentioned ``dmesg`` and ``lspci`` commands is
exactly the same in both cases.
The PCI IDs refer to:
PCI bridge: Rockchip Electronics Co., Ltd RK3568 Remote Signal Processor (rev 01)
It's possible that this patch only brought a(nother) problem to light,
but was I supposed to see an improvement and if so where/how?
Cheers,
Diederik
> Chukun Pan (1):
> arm64: dts: rockchip: rk3568: Move PCIe3 MSI to use GIC ITS
>
> arch/arm64/boot/dts/rockchip/rk3568.dtsi | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
Hi again,
On Fri May 9, 2025 at 5:34 PM CEST, Diederik de Haas wrote:
> On Sat Mar 8, 2025 at 10:30 AM CET, Chukun Pan wrote:
>> For a long time, rk3568's MSI-X had bugs and could only work on one node.
>> e.g. [ 7.250882] r8125 0002:01:00.0: no MSI/MSI-X. Back to INTx.
>>
>> Following commit b956c9de9175 ("arm64: dts: rockchip: rk356x: Move
>> PCIe MSI to use GIC ITS instead of MBI"), change the PCIe3 controller's
>> MSI on rk3568 to use ITS, so that all MSI-X can work properly.
>>
> I tested this patch on my NanoPi R5S with a 6.15-rc3 kernel + a number
> of [vcc|phy]-supply patches that have been accepted for 6.16 (and a
> small WIP LED patch).
>
> With this patch I get the following kernel warnings:
>
> pci 0001:10:00.0: Primary bus is hard wired to 0
> pci 0002:20:00.0: Primary bus is hard wired to 0
>
> If I 'unapply' this patch, I don't see those warnings.
I was pretty sure I had seen those messages before, but couldn't find
them before. But now I have: on my rk3588-rock-5b.
> It's possible that this patch only brought a(nother) problem to light,
So that does indeed look to be the case.
Cheers,
Diederik
>> Chukun Pan (1):
>> arm64: dts: rockchip: rk3568: Move PCIe3 MSI to use GIC ITS
>>
>> arch/arm64/boot/dts/rockchip/rk3568.dtsi | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
Hi,

> > With this patch I get the following kernel warnings:
> >
> > pci 0001:10:00.0: Primary bus is hard wired to 0
> > pci 0002:20:00.0: Primary bus is hard wired to 0
> >
> > If I 'unapply' this patch, I don't see those warnings.

> I was pretty sure I had seen those messages before, but couldn't find
> them before. But now I have: on my rk3588-rock-5b.

Thanks for the reminder, I didn't notice this before.
The BSP kernel also has this warning.

Before this patch:
[ 2.997725] pci_bus 0001:01: busn_res: can not insert [bus 01-ff] under [bus 00-0f] (conflicts with (null) [bus 00-0f])
[ 3.009990] pci 0001:00:00.0: BAR 6: assigned [mem 0xf2200000-0xf220ffff pref]
[ 3.018100] pci 0001:00:00.0: PCI bridge to [bus 01-ff]
...
[ 3.401416] pci_bus 0002:01: busn_res: can not insert [bus 01-ff] under [bus 00-0f] (conflicts with (null) [bus 00-0f])
...
[ 3.545459] pci 0002:00:00.0: PCI bridge to [bus 01-ff]

After this patch:
[ 3.037779] pci 0001:10:00.0: Primary bus is hard wired to 0
[ 3.044120] pci 0001:10:00.0: bridge configuration invalid ([bus 01-ff]), reconfiguring
[ 3.053362] pci_bus 0001:11: busn_res: [bus 11-1f] end is updated to 11
[ 3.068920] pci 0001:10:00.0: PCI bridge to [bus 11]
...
[ 3.451429] pci 0002:20:00.0: Primary bus is hard wired to 0
[ 3.457793] pci 0002:20:00.0: bridge configuration invalid ([bus 01-ff]), reconfiguring
...
[ 3.535794] pci_bus 0002:21: busn_res: [bus 21-2f] end is updated to 21
...
[ 3.612893] pci 0002:20:00.0: PCI bridge to [bus 21]

Looks like a harmless warning.

Thanks,
Chukun
--
2.25.1
Hi,
Added linux-pci ML to "To".
On Mon May 12, 2025 at 9:00 AM CEST, Chukun Pan wrote:
>> > With this patch I get the following kernel warnings:
>> >
>> > pci 0001:10:00.0: Primary bus is hard wired to 0
>> > pci 0002:20:00.0: Primary bus is hard wired to 0
>> >
>> > If I 'unapply' this patch, I don't see those warnings.
>
>> I was pretty sure I had seen those messages before, but couldn't find
>> them before. But now I have: on my rk3588-rock-5b.
>
> Thanks for the reminder, I didn't notice this before.
> The BSP kernel also has this warning.
>
> Before this patch:
> [ 2.997725] pci_bus 0001:01: busn_res: can not insert [bus 01-ff] under [bus 00-0f] (conflicts with (null) [bus 00-0f])
> [ 3.009990] pci 0001:00:00.0: BAR 6: assigned [mem 0xf2200000-0xf220ffff pref]
> [ 3.018100] pci 0001:00:00.0: PCI bridge to [bus 01-ff]
> ...
> [ 3.401416] pci_bus 0002:01: busn_res: can not insert [bus 01-ff] under [bus 00-0f] (conflicts with (null) [bus 00-0f])
> ...
> [ 3.545459] pci 0002:00:00.0: PCI bridge to [bus 01-ff]
>
> After this patch:
> [ 3.037779] pci 0001:10:00.0: Primary bus is hard wired to 0
> [ 3.044120] pci 0001:10:00.0: bridge configuration invalid ([bus 01-ff]), reconfiguring
> [ 3.053362] pci_bus 0001:11: busn_res: [bus 11-1f] end is updated to 11
> [ 3.068920] pci 0001:10:00.0: PCI bridge to [bus 11]
> ...
> [ 3.451429] pci 0002:20:00.0: Primary bus is hard wired to 0
> [ 3.457793] pci 0002:20:00.0: bridge configuration invalid ([bus 01-ff]), reconfiguring
> ...
> [ 3.535794] pci_bus 0002:21: busn_res: [bus 21-2f] end is updated to 21
> ...
> [ 3.612893] pci 0002:20:00.0: PCI bridge to [bus 21]
>
> Looks like a harmless warning.
I see various messages which look odd or suboptimal to me:
- (conflicts with (null) [bus 00-0f])
- bridge configuration invalid ([bus 01-ff]), reconfiguring
But those are informational messages, so I guess that is considered
normal. Looking a bit further, it does look like the severities in
``drivers/pci/probe.c`` are chosen deliberately. So even though my NVMe
drives seem to work, I'm not ready yet to ignore a WARNING.
In my view, a warning is something that should be fixed, or, if it's
indeed harmless, its severity should be downgraded.
So I looked at where that warning came from and found commit
71f6bd4a2313 ("PCI: workaround hard-wired bus number V2").
And its commit message does not make it clear to *me* if it's valid:
Fixes PCI device detection on IBM xSeries IBM 3850 M2 / x3950 M2
when using ACPI resources (_CRS).
This is default, a manual workaround (without this patch)
would be pci=nocrs boot param.
V2: Add dev_warn if the workaround is hit. This should reveal
how common such setups are (via google) and point to possible
problems if things are still not working as expected.
This could be interpreted as "let's make it a warning so people will put
it in a search engine (and not just ignore it) and then we can find out
via that, if it's a common issue".
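For reference, the check that emits this warning sits in the bridge
scanning code in ``drivers/pci/probe.c`` and looks roughly like this
(paraphrased, not a verbatim copy; the exact logging helper differs
between kernel versions):

	/*
	 * The primary bus register reads as 0 even though the bridge is
	 * not on bus 0; some bridges/firmware hard-wire it, so fall back
	 * to the bus number actually being scanned.
	 */
	if (!primary && (primary != bus->number) && secondary && subordinate) {
		pci_warn(dev, "Primary bus is hard wired to 0\n");
		primary = bus->number;
	}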
It would be helpful if the people (way) more familiar with the PCI
subsystem than me could tell me/us whether the severity is appropriate
(and thus should be fixed?) or whether this should be an info or dbg
level message instead.
Cheers,
Diederik