[PATCH v2 0/3] acpi: report numa nodes for device memory using GI

From: Ankit Agrawal <ankita@nvidia.com>

There are upcoming devices which allow the CPU to access their memory
in a cache-coherent manner. It is sensible to expose such memory to the
OS as NUMA nodes separate from the sysmem node. The ACPI spec provides
a scheme in SRAT called the Generic Initiator Affinity Structure [1] to
allow an association between a Proximity Domain (PXM) and a Generic
Initiator (GI) (e.g. heterogeneous processors and accelerators, GPUs,
and I/O devices with integrated compute or DMA engines).

Qemu currently does not build such GI affinity structures; implement
the mechanism to do so. Introduce a new acpi-generic-initiator object
that links a node to a device BDF. During SRAT creation, all such
objects are identified and used to add the GI Affinity Structures.
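
As a rough sketch of the intended usage, a single node could be linked
to a VFIO device as follows (the 'device' and 'node' property names are
illustrative; the exact QAPI properties are defined in patch 1 of this
series):

-numa node,nodeid=2 \
-device vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,id=dev0 \
-object acpi-generic-initiator,id=gi0,device=dev0,node=2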

A single node per BDF is insufficient for full utilization of the
MIG (Multi-Instance GPU) [2] feature of NVIDIA GPUs. The feature allows
partitioning of the GPU device resources (including device memory) into
several (up to 8) isolated instances. Each partition of the memory
requires a dedicated NUMA node to operate. The partitions are not fixed
and can be created/deleted at runtime.

Linux does not provide a means to dynamically create/destroy NUMA nodes,
and implementing such a feature is expected to be non-trivial. The nodes
that the OS discovers at boot time while parsing SRAT remain fixed. So we
utilize the GI Affinity structures, which allow an association between
nodes and devices. Multiple GI structures per BDF are possible, allowing
creation of multiple nodes in the VM by exposing a unique PXM in each of
these structures. Implement a new nvidia-acpi-generic-initiator object to
associate a range of nodes with a device.

The admin will create a range of 8 nodes and associate them with the
device using the nvidia-acpi-generic-initiator object. While a
configuration of fewer than 8 nodes per device is allowed, such a
configuration prevents full utilization of the feature. This setting is
applicable to all Grace+Hopper systems. The following is an example of
the Qemu command line arguments to create 8 nodes and link them to the
device 'dev0':

-numa node,nodeid=2 \
-numa node,nodeid=3 \
-numa node,nodeid=4 \
-numa node,nodeid=5 \
-numa node,nodeid=6 \
-numa node,nodeid=7 \
-numa node,nodeid=8 \
-numa node,nodeid=9 \
-device vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,rombar=0,id=dev0 \
-object nvidia-acpi-generic-initiator,id=gi0,device=dev0,numa-node-start=2,numa-node-count=8 \

The performance benefits can be realized by providing the NUMA node
distances appropriately (through libvirt tags or Qemu parameters). The
admin can obtain the distances among nodes on the host hardware using
`numactl -H`.
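
For example, the distances could then be passed to Qemu with the
-numa dist option (the values below are placeholders rather than
measured Grace+Hopper distances, and the remaining node pairs would be
specified the same way):

-numa dist,src=0,dst=2,val=80 \
-numa dist,src=2,dst=3,val=120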

This series goes along with the vfio-pci variant driver [3] under review.
The vfio-pci variant driver is expected to expose this feature through
sysfs, and its presence is checked to enable these code changes.

Applied over v8.1.0-rc4.

Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
---

[1] ACPI Spec 6.5, Section 5.2.16.6
[2] https://www.nvidia.com/en-in/technologies/multi-instance-gpu
[3] https://lore.kernel.org/all/20230912153032.19935-1-ankita@nvidia.com/

Link for v1:
https://lore.kernel.org/all/20230915024559.6565-1-ankita@nvidia.com/

v1 -> v2
- Removed dependency on sysfs to communicate the feature with variant module.
- Use GI Affinity SRAT structure instead of Memory Affinity.
- No DSDT entries needed to communicate the PXM for the device. SRAT GI
structure is used instead.
- New objects introduced to establish link between device and nodes.

Ankit Agrawal (3):
  qom: new object to associate device to numa node
  hw/acpi: Implement the SRAT GI affinity structure
  qom: Link multiple numa nodes to device using a new object

 hw/acpi/acpi-generic-initiator.c         | 213 +++++++++++++++++++++++
 hw/acpi/meson.build                      |   1 +
 hw/arm/virt-acpi-build.c                 |   3 +
 hw/vfio/pci.c                            |   2 -
 hw/vfio/pci.h                            |   2 +
 include/hw/acpi/acpi-generic-initiator.h |  64 +++++++
 qapi/qom.json                            |  40 ++++-
 7 files changed, 321 insertions(+), 4 deletions(-)
 create mode 100644 hw/acpi/acpi-generic-initiator.c
 create mode 100644 include/hw/acpi/acpi-generic-initiator.h

-- 
2.17.1