[RFC PATCH v3 0/6] Specifying cache topology on ARM
Posted by Alireza Sanaee
Specifying the cache layout in virtual machines lets applications and
operating systems retrieve accurate information about the cache structure
and make appropriate adjustments, and exposing correct sharing information
enables better optimizations. This patch set makes the cache layout
configurable through a command-line parameter, building on a patch set by
Intel [1,2] as its foundation. The ACPI PPTT table is populated based on
the user-provided cache information and the CPU topology.

Example:


+----------------+                          +----------------+
|    Socket 0    |                          |    Socket 1    |
|    (L3 Cache)  |                          |    (L3 Cache)  |
+--------+-------+                          +--------+-------+
         |                                           |
+--------+--------+                         +--------+--------+
|   Cluster 0     |                         |   Cluster 0     |
|   (L2 Cache)    |                         |   (L2 Cache)    |
+--------+--------+                         +--------+--------+
         |                                           |
+--------+--------+  +--------+--------+    +--------+--------+  +--------+--------+
|   Core 0        |  |   Core 1        |    |   Core 0        |  |   Core 1        |
|   (L1i, L1d)    |  |   (L1i, L1d)    |    |   (L1i, L1d)    |  |   (L1i, L1d)    |
+--------+--------+  +--------+--------+    +--------+--------+  +--------+--------+
         |                    |                      |                    |
    +--------+           +--------+             +--------+           +--------+
    |Thread 0|           |Thread 1|             |Thread 1|           |Thread 0|
    +--------+           +--------+             +--------+           +--------+
    |Thread 1|           |Thread 0|             |Thread 0|           |Thread 1|
    +--------+           +--------+             +--------+           +--------+


The following command line describes the system shown above:

./qemu-system-aarch64 \
 -machine virt,smp-cache.0.cache=l1i,smp-cache.0.topology=core,smp-cache.1.cache=l1d,smp-cache.1.topology=core,smp-cache.2.cache=l2,smp-cache.2.topology=cluster,smp-cache.3.cache=l3,smp-cache.3.topology=socket \
 -cpu max \
 -m 2048 \
 -smp sockets=2,clusters=1,cores=2,threads=2 \
 -kernel ./Image.gz \
 -append "console=ttyAMA0 root=/dev/ram rdinit=/init acpi=force" \
 -initrd rootfs.cpio.gz \
 -bios ./edk2-aarch64-code.fd \
 -nographic

Failure cases:
    1) There are cases where no clusters are selected in the -smp option,
    while the user specifies caches to be shared at the cluster level. In
    this situation, QEMU returns an error (see the sketch after this list).

    2) There are other scenarios where caches exist in the system's registers
    but are left unspecified by the user. In this case, QEMU also returns an
    error.
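
For illustration, a command along these lines should be rejected (this is
only a sketch; the exact error message is not quoted here), because l2 is
requested at the cluster level while -smp defines no clusters:

./qemu-system-aarch64 \
 -machine virt,smp-cache.0.cache=l2,smp-cache.0.topology=cluster \
 -cpu max \
 -smp sockets=2,cores=2,threads=2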

Currently, only three levels of caches can be specified on the command
line; however, supporting more levels would not require significant
changes. Further, this patch set assumes unified l2 and l3 caches and does
not allow split l(2/3)(i/d) caches. The topology-level terminology is
thread/core/cluster/socket for now.

Here is the hierarchy assumed in this patch:
Socket level = Cluster level + 1 = Core level + 2 = Thread level + 3;
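
For example, taking the thread level as the base (level 0), the core,
cluster and socket levels become 1, 2, and 3 respectively.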

[1] https://lore.kernel.org/kvm/20240908125920.1160236-1-zhao1.liu@intel.com/
[2] https://lore.kernel.org/qemu-devel/20240704031603.1744546-1-zhao1.liu@intel.com/

TODO:
1) Make the code work with an arbitrary number of cache levels.
2) Support separate data and instruction caches at L2 and L3.
3) Allow data-only or instruction-only caches at a particular level.
4) Additional cache controls, e.g. the size of L3 may not want to simply
match the underlying system, because only some of the associated host
CPUs may be bound to this VM.
5) Add device tree code to generate cache-related information.

Depends-on: target/arm/tcg: refine cache descriptions with a wrapper
Depends-on: Msg-id: 20240903144550.280-1-alireza.sanaee@huawei.com

Depends-on: Building PPTT with root node and identical implementation flag
Depends-on: Msg-id: 20240926113323.55991-1-yangyicong@huawei.com

Alireza Sanaee (6):
  bios-tables-test: prepare to change ARM ACPI virt PPTT
  i386/cpu: add IsDefined flag to smp-cache property
  target/arm/tcg: increase cache level for cpu=max
  hw/acpi: add cache hierarchy node to pptt table
  tests/qtest/bios-table-test: testing new ARM ACPI PPTT topology
  Update the ACPI tables according to the acpi aml_build change, also
    empty bios-tables-test-allowed-diff.h.

 hw/acpi/aml-build.c                        | 310 ++++++++++++++++++++-
 hw/arm/virt-acpi-build.c                   | 137 ++++++++-
 hw/arm/virt.c                              |   5 +
 hw/core/machine-smp.c                      |   2 +
 hw/loongarch/acpi-build.c                  |   3 +-
 include/hw/acpi/aml-build.h                |  20 +-
 include/hw/boards.h                        |   1 +
 target/arm/tcg/cpu64.c                     |  13 +
 tests/data/acpi/aarch64/virt/PPTT.topology | Bin 356 -> 540 bytes
 tests/qtest/bios-tables-test.c             |   4 +
 10 files changed, 487 insertions(+), 8 deletions(-)

-- 
2.34.1