Two unrelated fixes here:
1) Linux really doesn't like it when you claim non-existent memory
is directly connected to an initiator (here a CPU).
It is a nonsense entry, though I also plan to try to get
a relaxation of the condition into the kernel.
Maybe we need to care about migration, but I suspect no one
cares about this corner case (hence no one noticed the
problem!)
2) An access outside of the allocated array when building the
latency and bandwidth tables. Given this crashes QEMU
for me, I think we are fine with the potential table change.
Some notes on 1:
- This structure is almost entirely pointless in general - most
of the fields were removed in HMAT v2.
What remains is meant to convey memory controller location
when the memory is in a different proximity domain from the
memory controller (e.g. a SoC with both HBM and DDR will present
2 NUMA domains, but the memory controllers will be wherever we
describe the CPUs as being - typically with the DDR).
Currently QEMU creates these entries to indicate a direct connection
between a CPU domain and memory in the same domain. The proximity
domain in SRAT already conveys that, so this adds no information.
Notes on 2:
- I debated a follow-up patch removing the entries in the table
for initiators on nodes that don't have any initiators.
QEMU won't let you use them as initiators in the LB entries
anyway, so there is no way to set those entries and they
end up reported as 0. That's OK for bandwidth, as no one is going
to use the zero-bandwidth channel, but 0 is a very attractive
latency. Still, that's fine as no one will read the number, given
there are no initiators? (right?)
There is a corner case in ACPI that bites us here.
ACPI proximity domains are only defined in SRAT, but nothing says
they need to be fully defined. Generic Initiators are optional
after all (a newish feature), so it was common to use _PXM in the
DSDT to define where various platform devices were (and PCI, but
that's still not read by Linux - a story of pain and broken systems
for another day). That's fine if they are in a node with CPUs
(initiators), but not so much if they happen to be in a memory-only
node. Today I think the only thing we can make hit this
only node. Today I think the only thing we can make hit this
condition in QEMU is a PCI Expander Bridge, which doesn't itself
initiate transactions. But things behind it do, and there are drivers
there that do buffer placement based on SLIT distances. I'd
expect HMAT users to follow soon.
It would be nice to think all such systems will use Generic Port
Affinity Structures (and I have patches for those to follow shortly),
but that's overly optimistic beyond CXL, where the kernel will use
them and which drove their introduction.
Jonathan Cameron (2):
hmat acpi: Do not add Memory Proximity Domain Attributes Structure
targeting non-existent memory.
hmat acpi: Fix out of bounds access due to missing use of indirection
hw/acpi/hmat.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
--
2.39.2