From: Ankit Agrawal <ankita@nvidia.com>
NVIDIA GPUs support the MIG (Multi-Instance GPU) feature [1], which allows
partitioning of the GPU device resources (including device memory) into
several (up to 8) isolated instances. Each partitioned memory region needs
a dedicated NUMA node to operate. The partitions are not fixed; they can be
created and deleted at runtime.
Unfortunately, Linux does not provide a means to dynamically create or destroy
NUMA nodes, and implementing such a feature is not expected to be trivial. The
nodes that the OS discovers at boot time while parsing SRAT remain fixed. So
we utilize the Generic Initiator (GI) Affinity structures, which allow an
association between nodes and devices. Multiple GI structures per BDF are
possible, allowing the creation of multiple nodes by exposing a unique PXM in
each of these structures.
QEMU currently does not build GI affinity structures, so implement the
mechanism to do so. Introduce a new acpi-generic-initiator object to allow
the host admin to link a device with an associated NUMA node. QEMU maintains
this association and uses the object to build the requisite GI Affinity
Structure.
When multiple NUMA nodes are associated with a device, create that many
acpi-generic-initiator objects, each representing a unique device:node
association.
The following is one decoded GI affinity structure from the VM ACPI SRAT:
[0C8h 0200 1] Subtable Type : 05 [Generic Initiator Affinity]
[0C9h 0201 1] Length : 20
[0CAh 0202 1] Reserved1 : 00
[0CBh 0203 1] Device Handle Type : 01
[0CCh 0204 4] Proximity Domain : 00000007
[0D0h 0208 16] Device Handle : 00 00 20 00 00 00 00 00 00 00 00
00 00 00 00 00
[0E0h 0224 4] Flags (decoded below) : 00000001
Enabled : 1
[0E4h 0228 4] Reserved2 : 00000000
[0E8h 0232 1] Subtable Type : 05 [Generic Initiator Affinity]
[0E9h 0233 1] Length : 20
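For reference, the subtable above can be sketched as a packed C struct. This
is purely illustrative (the struct name is made up and is not part of this
patch or of QEMU); field widths follow the 32-byte ACPI Generic Initiator
Affinity Structure decoded above.

#include <stdint.h>

/* Illustrative layout of an SRAT Generic Initiator Affinity subtable */
struct GenericInitiatorAffinity {
    uint8_t  type;                /* 5: Generic Initiator Affinity */
    uint8_t  length;              /* 0x20 (32 bytes) */
    uint8_t  reserved1;
    uint8_t  device_handle_type;  /* 1: PCI device handle */
    uint32_t proximity_domain;    /* PXM exposed to the guest, e.g. 7 above */
    uint8_t  device_handle[16];   /* PCI: segment (2B), BDF (2B), rest reserved */
    uint32_t flags;               /* bit 0: Enabled */
    uint32_t reserved2;
} __attribute__((packed));

In the dump above, the handle bytes 00 00 20 00 encode PCI segment 0 and
BDF 0x0020, i.e. the device at 04.0 on bus 0 in the sample command line below.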
An admin can provide a range of acpi-generic-initiator objects, each
associating a device (identified by its id through the pci-dev argument)
with the desired NUMA node (using the node argument). Currently, only PCI
devices are supported.
For the Grace Hopper system, create a range of 8 nodes and associate them
with the device using acpi-generic-initiator objects. While a configuration
of fewer than 8 nodes per device is allowed, such a configuration prevents
the feature from being utilized to the fullest. The following sample creates
8 nodes per PCI device for a VM with 2 PCI devices and links them to the
respective PCI device using acpi-generic-initiator objects:
-numa node,nodeid=2 -numa node,nodeid=3 -numa node,nodeid=4 \
-numa node,nodeid=5 -numa node,nodeid=6 -numa node,nodeid=7 \
-numa node,nodeid=8 -numa node,nodeid=9 \
-device vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,rombar=0,id=dev0 \
-object acpi-generic-initiator,id=gi0,pci-dev=dev0,node=2 \
-object acpi-generic-initiator,id=gi1,pci-dev=dev0,node=3 \
-object acpi-generic-initiator,id=gi2,pci-dev=dev0,node=4 \
-object acpi-generic-initiator,id=gi3,pci-dev=dev0,node=5 \
-object acpi-generic-initiator,id=gi4,pci-dev=dev0,node=6 \
-object acpi-generic-initiator,id=gi5,pci-dev=dev0,node=7 \
-object acpi-generic-initiator,id=gi6,pci-dev=dev0,node=8 \
-object acpi-generic-initiator,id=gi7,pci-dev=dev0,node=9 \
-numa node,nodeid=10 -numa node,nodeid=11 -numa node,nodeid=12 \
-numa node,nodeid=13 -numa node,nodeid=14 -numa node,nodeid=15 \
-numa node,nodeid=16 -numa node,nodeid=17 \
-device vfio-pci-nohotplug,host=0009:01:01.0,bus=pcie.0,addr=05.0,rombar=0,id=dev1 \
-object acpi-generic-initiator,id=gi8,pci-dev=dev1,node=10 \
-object acpi-generic-initiator,id=gi9,pci-dev=dev1,node=11 \
-object acpi-generic-initiator,id=gi10,pci-dev=dev1,node=12 \
-object acpi-generic-initiator,id=gi11,pci-dev=dev1,node=13 \
-object acpi-generic-initiator,id=gi12,pci-dev=dev1,node=14 \
-object acpi-generic-initiator,id=gi13,pci-dev=dev1,node=15 \
-object acpi-generic-initiator,id=gi14,pci-dev=dev1,node=16 \
-object acpi-generic-initiator,id=gi15,pci-dev=dev1,node=17 \
The performance benefits can be realized by providing the NUMA node distances
appropriately (through libvirt tags or Qemu params). The admin can get the
distance among nodes in hardware using `numactl -H`.
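For illustration, the distances could be mirrored into the guest with QEMU's
-numa dist option. The values below are placeholders rather than measured
distances, and this is only a fragment: when dist is used, QEMU expects
distances to be supplied for every node pair.

-numa dist,src=0,dst=2,val=80 -numa dist,src=2,dst=0,val=80 \
-numa dist,src=2,dst=3,val=120 -numa dist,src=3,dst=2,val=120 \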
Link: https://www.nvidia.com/en-in/technologies/multi-instance-gpu [1]
Cc: Jonathan Cameron <qemu-devel@nongnu.org>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
---
hw/acpi/acpi_generic_initiator.c | 73 ++++++++++++++++++++++++
hw/acpi/hmat.c | 2 +-
hw/acpi/meson.build | 1 +
hw/core/numa.c | 3 +-
include/hw/acpi/acpi_generic_initiator.h | 32 +++++++++++
include/sysemu/numa.h | 1 +
qapi/qom.json | 17 ++++++
7 files changed, 127 insertions(+), 2 deletions(-)
create mode 100644 hw/acpi/acpi_generic_initiator.c
create mode 100644 include/hw/acpi/acpi_generic_initiator.h
diff --git a/hw/acpi/acpi_generic_initiator.c b/hw/acpi/acpi_generic_initiator.c
new file mode 100644
index 0000000000..f57b3c8984
--- /dev/null
+++ b/hw/acpi/acpi_generic_initiator.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
+ */
+
+#include "qemu/osdep.h"
+#include "hw/acpi/acpi_generic_initiator.h"
+#include "hw/boards.h"
+#include "hw/pci/pci_device.h"
+#include "qapi/error.h"
+#include "qapi/qapi-builtin-visit.h"
+#include "qapi/visitor.h"
+#include "qemu/error-report.h"
+
+OBJECT_DEFINE_TYPE_WITH_INTERFACES(AcpiGenericInitiator, acpi_generic_initiator,
+ ACPI_GENERIC_INITIATOR, OBJECT,
+ { TYPE_USER_CREATABLE },
+ { NULL })
+
+OBJECT_DECLARE_SIMPLE_TYPE(AcpiGenericInitiator, ACPI_GENERIC_INITIATOR)
+
+static void acpi_generic_initiator_init(Object *obj)
+{
+ AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
+
+ gi->node = MAX_NODES;
+ gi->pci_dev = NULL;
+}
+
+static void acpi_generic_initiator_finalize(Object *obj)
+{
+ AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
+
+ g_free(gi->pci_dev);
+}
+
+static void acpi_generic_initiator_set_pci_device(Object *obj, const char *val,
+ Error **errp)
+{
+ AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
+
+ gi->pci_dev = g_strdup(val);
+}
+
+static void acpi_generic_initiator_set_node(Object *obj, Visitor *v,
+ const char *name, void *opaque,
+ Error **errp)
+{
+ MachineState *ms = MACHINE(qdev_get_machine());
+ AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
+ uint32_t value;
+
+ if (!visit_type_uint32(v, name, &value, errp)) {
+ return;
+ }
+
+ if (value >= MAX_NODES) {
+ error_printf("%s: Invalid NUMA node specified\n",
+ TYPE_ACPI_GENERIC_INITIATOR);
+ exit(1);
+ }
+
+ gi->node = value;
+ ms->numa_state->nodes[gi->node].has_gi = true;
+}
+
+static void acpi_generic_initiator_class_init(ObjectClass *oc, void *data)
+{
+ object_class_property_add_str(oc, "pci-dev", NULL,
+ acpi_generic_initiator_set_pci_device);
+ object_class_property_add(oc, "node", "int", NULL,
+ acpi_generic_initiator_set_node, NULL, NULL);
+}
diff --git a/hw/acpi/hmat.c b/hw/acpi/hmat.c
index 3042d223c8..2242981e18 100644
--- a/hw/acpi/hmat.c
+++ b/hw/acpi/hmat.c
@@ -214,7 +214,7 @@ static void hmat_build_table_structs(GArray *table_data, NumaState *numa_state)
}
for (i = 0; i < numa_state->num_nodes; i++) {
- if (numa_state->nodes[i].has_cpu) {
+ if (numa_state->nodes[i].has_cpu || numa_state->nodes[i].has_gi) {
initiator_list[num_initiator++] = i;
}
}
diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
index fc1b952379..1424046164 100644
--- a/hw/acpi/meson.build
+++ b/hw/acpi/meson.build
@@ -1,5 +1,6 @@
acpi_ss = ss.source_set()
acpi_ss.add(files(
+ 'acpi_generic_initiator.c',
'acpi_interface.c',
'aml-build.c',
'bios-linker-loader.c',
diff --git a/hw/core/numa.c b/hw/core/numa.c
index f08956ddb0..58a32f1564 100644
--- a/hw/core/numa.c
+++ b/hw/core/numa.c
@@ -229,7 +229,8 @@ void parse_numa_hmat_lb(NumaState *numa_state, NumaHmatLBOptions *node,
node->target, numa_state->num_nodes);
return;
}
- if (!numa_info[node->initiator].has_cpu) {
+ if (!numa_info[node->initiator].has_cpu &&
+ !numa_info[node->initiator].has_gi) {
error_setg(errp, "Invalid initiator=%d, it isn't an "
"initiator proximity domain", node->initiator);
return;
diff --git a/include/hw/acpi/acpi_generic_initiator.h b/include/hw/acpi/acpi_generic_initiator.h
new file mode 100644
index 0000000000..23d0b591c6
--- /dev/null
+++ b/include/hw/acpi/acpi_generic_initiator.h
@@ -0,0 +1,32 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
+ */
+
+#ifndef ACPI_GENERIC_INITIATOR_H
+#define ACPI_GENERIC_INITIATOR_H
+
+#include "hw/mem/pc-dimm.h"
+#include "hw/acpi/bios-linker-loader.h"
+#include "hw/acpi/aml-build.h"
+#include "sysemu/numa.h"
+#include "qemu/uuid.h"
+#include "qom/object.h"
+#include "qom/object_interfaces.h"
+
+#define TYPE_ACPI_GENERIC_INITIATOR "acpi-generic-initiator"
+
+typedef struct AcpiGenericInitiator {
+ /* private */
+ Object parent;
+
+ /* public */
+ char *pci_dev;
+ uint16_t node;
+} AcpiGenericInitiator;
+
+typedef struct AcpiGenericInitiatorClass {
+ ObjectClass parent_class;
+} AcpiGenericInitiatorClass;
+
+#endif
diff --git a/include/sysemu/numa.h b/include/sysemu/numa.h
index 4173ef2afa..825cfe86bc 100644
--- a/include/sysemu/numa.h
+++ b/include/sysemu/numa.h
@@ -41,6 +41,7 @@ struct NodeInfo {
struct HostMemoryBackend *node_memdev;
bool present;
bool has_cpu;
+ bool has_gi;
uint8_t lb_info_provided;
uint16_t initiator;
uint8_t distance[MAX_NODES];
diff --git a/qapi/qom.json b/qapi/qom.json
index c53ef978ff..81727310b1 100644
--- a/qapi/qom.json
+++ b/qapi/qom.json
@@ -794,6 +794,21 @@
{ 'struct': 'VfioUserServerProperties',
'data': { 'socket': 'SocketAddress', 'device': 'str' } }
+##
+# @AcpiGenericInitiatorProperties:
+#
+# Properties for acpi-generic-initiator objects.
+#
+# @pci-dev: PCI device ID to be associated with the node
+#
+# @node: NUMA node associated with the PCI device
+#
+# Since: 9.0
+##
+{ 'struct': 'AcpiGenericInitiatorProperties',
+ 'data': { 'pci-dev': 'str',
+ 'node': 'uint32' } }
+
##
# @RngProperties:
#
@@ -911,6 +926,7 @@
##
{ 'enum': 'ObjectType',
'data': [
+ 'acpi-generic-initiator',
'authz-list',
'authz-listfile',
'authz-pam',
@@ -981,6 +997,7 @@
'id': 'str' },
'discriminator': 'qom-type',
'data': {
+ 'acpi-generic-initiator': 'AcpiGenericInitiatorProperties',
'authz-list': 'AuthZListProperties',
'authz-listfile': 'AuthZListFileProperties',
'authz-pam': 'AuthZPAMProperties',
--
2.34.1

On Wed, 6 Mar 2024 12:33:16 +0000 <ankita@nvidia.com> wrote:

> From: Ankit Agrawal <ankita@nvidia.com>
>
> [...]
>
> The performance benefits can be realized by providing the NUMA node distances
> appropriately (through libvirt tags or Qemu params). The admin can get the
> distance among nodes in hardware using `numactl -H`.

That's a lot of description when you could just have claimed you want a normal
GI node for HMAT and we'd have all believed you ;)

> Link: https://www.nvidia.com/en-in/technologies/multi-instance-gpu [1]
> Cc: Jonathan Cameron <qemu-devel@nongnu.org>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Markus Armbruster <armbru@redhat.com>
> Acked-by: Markus Armbruster <armbru@redhat.com>
> Signed-off-by: Ankit Agrawal <ankita@nvidia.com>

Hi Ankit,

Some minor things inline. With the includes tidied up.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> diff --git a/include/hw/acpi/acpi_generic_initiator.h b/include/hw/acpi/acpi_generic_initiator.h
> new file mode 100644
> index 0000000000..23d0b591c6
> --- /dev/null
> +++ b/include/hw/acpi/acpi_generic_initiator.h
> @@ -0,0 +1,32 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
> + */
> +
> +#ifndef ACPI_GENERIC_INITIATOR_H
> +#define ACPI_GENERIC_INITIATOR_H
> +
> +#include "hw/mem/pc-dimm.h"

Why?

> +#include "hw/acpi/bios-linker-loader.h"
> +#include "hw/acpi/aml-build.h"
> +#include "sysemu/numa.h"

This should only include headers that it uses directly.
If they are needed down in the c files, then include them there.
Most of these are not related to what you have here.

> +#include "qemu/uuid.h"
> +#include "qom/object.h"
> +#include "qom/object_interfaces.h"
> +
> +#define TYPE_ACPI_GENERIC_INITIATOR "acpi-generic-initiator"
> +
> +typedef struct AcpiGenericInitiator {
> +    /* private */
> +    Object parent;
> +
> +    /* public */
> +    char *pci_dev;
> +    uint16_t node;
> +} AcpiGenericInitiator;
> +
> +typedef struct AcpiGenericInitiatorClass {
> +    ObjectClass parent_class;
> +} AcpiGenericInitiatorClass;

Trivial, but you could push the class definition down into the c file
given it's not accessed from anywhere else.

> +
> +#endif

>> -object acpi-generic-initiator,id=gi14,pci-dev=dev1,node=16 \
>> -object acpi-generic-initiator,id=gi15,pci-dev=dev1,node=17 \
>>
>> The performance benefits can be realized by providing the NUMA node distances
>> appropriately (through libvirt tags or Qemu params). The admin can get the
>> distance among nodes in hardware using `numactl -H`.
>
> That's a lot of description when you could just have claimed you want a normal
> GI node for HMAT and we'd have all believed you ;)

Ack, I'll remove this part and change it to say as such.

>>
>> Link: https://www.nvidia.com/en-in/technologies/multi-instance-gpu [1]
>> Cc: Jonathan Cameron <qemu-devel@nongnu.org>
>> Cc: Alex Williamson <alex.williamson@redhat.com>
>> Cc: Markus Armbruster <armbru@redhat.com>
>> Acked-by: Markus Armbruster <armbru@redhat.com>
>> Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
>
> Hi Ankit,
>
> Some minor things inline. With the includes tidied up.
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Thanks!

>> diff --git a/include/hw/acpi/acpi_generic_initiator.h b/include/hw/acpi/acpi_generic_initiator.h
>> new file mode 100644
>> index 0000000000..23d0b591c6
>> --- /dev/null
>> +++ b/include/hw/acpi/acpi_generic_initiator.h
>> @@ -0,0 +1,32 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
>> + */
>> +
>> +#ifndef ACPI_GENERIC_INITIATOR_H
>> +#define ACPI_GENERIC_INITIATOR_H
>> +
>> +#include "hw/mem/pc-dimm.h"
>
> Why?
>
>> +#include "hw/acpi/bios-linker-loader.h"
>> +#include "hw/acpi/aml-build.h"
>> +#include "sysemu/numa.h"
>
> This should only include headers that it uses directly.
> If they are needed down in the c files, then include them there.

Ack, will fix this in the next version.

>> +typedef struct AcpiGenericInitiator {
>> +    /* private */
>> +    Object parent;
>> +
>> +    /* public */
>> +    char *pci_dev;
>> +    uint16_t node;
>> +} AcpiGenericInitiator;
>> +
>> +typedef struct AcpiGenericInitiatorClass {
>> +    ObjectClass parent_class;
>> +} AcpiGenericInitiatorClass;
>
> Trivial, but you could push the class definition down into the c file
> given it's not accessed from anywhere else.

Sure will move the AcpiGenericInitiatorClass typedef to the .c file.

> +
> +#endif