[PATCH 00/32] Add support for versioned CPU models

Jiri Denemark posted 32 patches 2 weeks, 3 days ago
There is a newer version of this series
[PATCH 00/32] Add support for versioned CPU models
Posted by Jiri Denemark 2 weeks, 3 days ago
When parsing a domain XML which uses a non-versioned CPU model, we want
to replace it with the appropriate versioned variant, similarly to what
we do with machine types. Theoretically QEMU supports a per-machine-type
specification of the version with which a non-versioned CPU model is
replaced, but this is always 1 for all machine types and the
query-machines QMP command does not even report the value.
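
For example (illustrative only, using the standard domain XML syntax;
the model name is arbitrary), a guest defined with

    <cpu mode='custom' match='exact'>
      <model fallback='forbid'>Skylake-Client</model>
    </cpu>

would be parsed as if it named the corresponding versioned variant of
the model (v1, as explained below).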

Luckily, after talking to Igor, it turns out that having a single
number per machine type does not really allow for setting it to
anything but 1, as CPU models have different numbers of versions. Each
machine type would need to define a specific version for each CPU
model, which would be a maintenance nightmare. For this reason there's
no desire in QEMU to ever resolve non-versioned CPU models to anything
but v1, and the per-machine-type setting will most likely even be
removed completely. Thus it is safe for us to always use v1 as the
canonical CPU model.
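
In other words (an illustrative sketch; the model name is arbitrary),
for every machine type

    -cpu Skylake-Client

resolves to the same guest CPU as

    -cpu Skylake-Client-v1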

Some non-versioned CPU models, however, are actually aliases of specific
versions of a base model rather than being base models themselves. These
are the old CPU model variants from before model versions were
introduced, e.g., the -noTSX and -IBRS variants. The mapping of these
names to versions is hardcoded and will never change. We do not
translate such CPU models to the corresponding versioned names. This
allows us to introduce the corresponding -v* variants that match the
QEMU models rather than the existing definitions in our CPU map. The
guest CPU will be the same either way, but the way libvirt checks the
CPU model compatibility with the host will be different. The old
"partial" check done by libvirt using the definition from the CPU map
will still be used for the old names (we can't change this for
compatibility reasons), but the corresponding versioned variants (as
well as all other versions that do not have a non-versioned alias) will
benefit from the recently introduced new "partial" check, which uses
only the information we get from QEMU to check whether a specific CPU
definition is usable on the host.
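
For example, with Skylake-Client (the -IBRS alias mapping shown here is
the one hardcoded in QEMU):

    Skylake-Client        old name; old "partial" check via the CPU map
    Skylake-Client-IBRS   old name, alias of Skylake-Client-v2; old check
    Skylake-Client-v2     same guest CPU as -IBRS; new QEMU-based check
    Skylake-Client-v4     no non-versioned alias; new QEMU-based check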

Other options I considered were:
- replace -noTSX, -IBRS, ... models with their versioned variants
    - we'd need to translate them back for migration (just as we do
      for -v1) for backward compatibility
    - I found the benefit of the new partial checking when explicitly
      using the versioned variants quite appealing and dropped the
      relevant in-progress changes

- do not translate anything, i.e., not even base models to -v1
    - the idea behind translating was to make sure QEMU doesn't suddenly
      start translating the base CPU model to a different version (this
      does not happen with -noTSX etc. as they are hardcoded aliases);
      Igor said they will never do that, so is this still valid?
    - not translating would bring the same benefit of explicitly using
      -v1 vs the non-versioned name

I guess the current mix does not look very consistent (i.e., it's not
either all or nothing), but it makes sense to me. The question is
whether it also makes sense to others :-)

Jiri Denemark (32):
  cpu_x86: Copy added and removed features from ancestor
  sync_qemu_features_i386: Add some removed features back
  sync_qemu_models_i386: Use f-strings
  sync_qemu_models_i386: Do not overwrite existing models
  sync_qemu_models_i386: Do not require full path to QEMU's cpu.c
  sync_qemu_models_i386: Add support for versioned CPU models
  sync_qemu_models_i386: Store extra info in a separate file
  sync_qemu_models_i386: Switch to lxml
  cpu_map: Group models in index.xml
  sync_qemu_models_i386: Update index.xml
  sync_qemu_models_i386: Copy signatures from base model
  cpu: Introduce virCPUCheckModel
  qemu: Canonicalize CPU models
  cpu_map: Add versions of SierraForest CPU model
  cpu_map: Add versions of GraniteRapids CPU model
  cpu_map: Add versions of SapphireRapids CPU model
  cpu_map: Add versions of Snowridge CPU model
  cpu_map: Add versions of Cooperlake CPU model
  cpu_map: Add versions of Icelake-Server CPU model
  cpu_map: Add versions of Cascadelake-Server CPU model
  cpu_map: Add versions of Skylake-Server CPU model
  cpu_map: Add versions of Skylake-Client CPU model
  cpu_map: Add versions of Broadwell CPU model
  cpu_map: Add versions of Haswell CPU model
  cpu_map: Add versions of IvyBridge CPU model
  cpu_map: Add versions of SandyBridge CPU model
  cpu_map: Add versions of Westmere CPU model
  cpu_map: Add versions of Nehalem CPU model
  cpu_map: Add versions of EPYC-Milan CPU model
  cpu_map: Add versions of EPYC-Rome CPU model
  cpu_map: Add versions of EPYC CPU model
  cpu_map: Add versions of Dhyana CPU model

 src/cpu/cpu.c                                 |   25 +
 src/cpu/cpu.h                                 |    8 +
 src/cpu/cpu_map.c                             |    2 +-
 src/cpu/cpu_x86.c                             |   40 +-
 src/cpu_map/index.xml                         |  286 ++--
 src/cpu_map/meson.build                       |   60 +
 src/cpu_map/sync_qemu_features_i386.py        |    3 +
 src/cpu_map/sync_qemu_models_i386.py          |  178 ++-
 src/cpu_map/x86_Broadwell-v1.xml              |    6 +
 src/cpu_map/x86_Broadwell-v2.xml              |  140 ++
 src/cpu_map/x86_Broadwell-v3.xml              |  143 ++
 src/cpu_map/x86_Broadwell-v4.xml              |  141 ++
 src/cpu_map/x86_Cascadelake-Server-v1.xml     |    6 +
 src/cpu_map/x86_Cascadelake-Server-v2.xml     |  157 +++
 src/cpu_map/x86_Cascadelake-Server-v3.xml     |  155 +++
 src/cpu_map/x86_Cascadelake-Server-v4.xml     |  156 +++
 src/cpu_map/x86_Cascadelake-Server-v5.xml     |  158 +++
 src/cpu_map/x86_Cooperlake-v1.xml             |    6 +
 src/cpu_map/x86_Cooperlake-v2.xml             |  164 +++
 src/cpu_map/x86_Dhyana-v1.xml                 |    6 +
 src/cpu_map/x86_Dhyana-v2.xml                 |   73 ++
 src/cpu_map/x86_EPYC-Milan-v1.xml             |    6 +
 src/cpu_map/x86_EPYC-Milan-v2.xml             |   99 ++
 src/cpu_map/x86_EPYC-Rome-v1.xml              |    6 +
 src/cpu_map/x86_EPYC-Rome-v2.xml              |   86 ++
 src/cpu_map/x86_EPYC-Rome-v3.xml              |   86 ++
 src/cpu_map/x86_EPYC-Rome-v4.xml              |   85 ++
 src/cpu_map/x86_EPYC-v1.xml                   |    6 +
 src/cpu_map/x86_EPYC-v2.xml                   |   75 ++
 src/cpu_map/x86_EPYC-v3.xml                   |   79 ++
 src/cpu_map/x86_EPYC-v4.xml                   |   79 ++
 src/cpu_map/x86_GraniteRapids-v1.xml          |    6 +
 src/cpu_map/x86_Haswell-v1.xml                |    6 +
 src/cpu_map/x86_Haswell-v2.xml                |  134 ++
 src/cpu_map/x86_Haswell-v3.xml                |  137 ++
 src/cpu_map/x86_Haswell-v4.xml                |  135 ++
 src/cpu_map/x86_Icelake-Server-v1.xml         |    6 +
 src/cpu_map/x86_Icelake-Server-v2.xml         |  158 +++
 src/cpu_map/x86_Icelake-Server-v3.xml         |  165 +++
 src/cpu_map/x86_Icelake-Server-v4.xml         |  172 +++
 src/cpu_map/x86_Icelake-Server-v5.xml         |  174 +++
 src/cpu_map/x86_Icelake-Server-v6.xml         |  175 +++
 src/cpu_map/x86_Icelake-Server-v7.xml         |  177 +++
 src/cpu_map/x86_IvyBridge-v1.xml              |    6 +
 src/cpu_map/x86_IvyBridge-v2.xml              |  119 ++
 src/cpu_map/x86_Nehalem-v1.xml                |    6 +
 src/cpu_map/x86_Nehalem-v2.xml                |  101 ++
 src/cpu_map/x86_SandyBridge-v1.xml            |    6 +
 src/cpu_map/x86_SandyBridge-v2.xml            |  110 ++
 src/cpu_map/x86_SapphireRapids-v1.xml         |    6 +
 src/cpu_map/x86_SapphireRapids-v2.xml         |  193 +++
 src/cpu_map/x86_SapphireRapids-v3.xml         |  198 +++
 src/cpu_map/x86_SierraForest-v1.xml           |    6 +
 src/cpu_map/x86_Skylake-Client-v1.xml         |    6 +
 src/cpu_map/x86_Skylake-Client-v2.xml         |  141 ++
 src/cpu_map/x86_Skylake-Client-v3.xml         |  139 ++
 src/cpu_map/x86_Skylake-Client-v4.xml         |  141 ++
 src/cpu_map/x86_Skylake-Server-v1.xml         |    6 +
 src/cpu_map/x86_Skylake-Server-v2.xml         |  149 +++
 src/cpu_map/x86_Skylake-Server-v3.xml         |  147 +++
 src/cpu_map/x86_Skylake-Server-v4.xml         |  148 +++
 src/cpu_map/x86_Skylake-Server-v5.xml         |  150 +++
 src/cpu_map/x86_Snowridge-v1.xml              |    6 +
 src/cpu_map/x86_Snowridge-v2.xml              |  143 ++
 src/cpu_map/x86_Snowridge-v3.xml              |  145 +++
 src/cpu_map/x86_Snowridge-v4.xml              |  143 ++
 src/cpu_map/x86_Westmere-v1.xml               |    6 +
 src/cpu_map/x86_Westmere-v2.xml               |  105 ++
 src/libvirt_private.syms                      |    1 +
 src/qemu/qemu_capabilities.c                  |   53 +
 src/qemu/qemu_capabilities.h                  |    3 +
 src/qemu/qemu_domain.c                        |    6 +
 src/qemu/qemu_postparse.c                     |   19 +
 .../x86_64-cpuid-Atom-P5362-json.xml          |   75 +-
 .../x86_64-cpuid-Core-i7-8550U-json.xml       |   72 +-
 .../x86_64-cpuid-EPYC-7502-32-Core-host.xml   |    5 +-
 .../x86_64-cpuid-EPYC-7601-32-Core-guest.xml  |    9 +-
 ...6_64-cpuid-EPYC-7601-32-Core-ibpb-host.xml |    8 +-
 .../x86_64-cpuid-EPYC-7601-32-Core-json.xml   |    6 +-
 ..._64-cpuid-Hygon-C86-7185-32-core-guest.xml |    5 +-
 ...6_64-cpuid-Hygon-C86-7185-32-core-host.xml |    5 +-
 ...6_64-cpuid-Hygon-C86-7185-32-core-json.xml |    6 +-
 ...4-cpuid-Ryzen-7-1800X-Eight-Core-guest.xml |    9 +-
 ...64-cpuid-Ryzen-7-1800X-Eight-Core-json.xml |    6 +-
 .../x86_64-cpuid-Xeon-Platinum-9242-json.xml  |   79 +-
 ...-cpuid-baseline-Cooperlake+Cascadelake.xml |   84 +-
 .../x86_64-cpuid-baseline-EPYC+Rome.xml       |    6 +-
 .../x86_64-cpuid-baseline-Ryzen+Rome.xml      |    6 +-
 .../domaincapsdata/qemu_5.2.0-q35.x86_64.xml  |  369 ++++++
 .../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml  |  740 ++++++++++-
 tests/domaincapsdata/qemu_5.2.0.x86_64.xml    |  369 ++++++
 .../domaincapsdata/qemu_6.0.0-q35.x86_64.xml  |  382 ++++++
 .../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml  |  798 +++++++++++-
 tests/domaincapsdata/qemu_6.0.0.x86_64.xml    |  382 ++++++
 .../domaincapsdata/qemu_6.1.0-q35.x86_64.xml  |  476 +++++++
 .../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml  | 1003 +++++++++++++-
 tests/domaincapsdata/qemu_6.1.0.x86_64.xml    |  476 +++++++
 .../domaincapsdata/qemu_6.2.0-q35.x86_64.xml  |  483 +++++++
 .../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml  | 1008 +++++++++++++-
 tests/domaincapsdata/qemu_6.2.0.x86_64.xml    |  483 +++++++
 .../domaincapsdata/qemu_7.0.0-q35.x86_64.xml  |  509 ++++++++
 .../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml  | 1018 ++++++++++++++-
 tests/domaincapsdata/qemu_7.0.0.x86_64.xml    |  509 ++++++++
 .../domaincapsdata/qemu_7.1.0-q35.x86_64.xml  |  509 ++++++++
 .../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml  | 1154 +++++++++++++++--
 tests/domaincapsdata/qemu_7.1.0.x86_64.xml    |  509 ++++++++
 .../domaincapsdata/qemu_7.2.0-q35.x86_64.xml  |  509 ++++++++
 .../qemu_7.2.0-tcg.x86_64+hvf.xml             |  830 +++++++++++-
 .../domaincapsdata/qemu_7.2.0-tcg.x86_64.xml  |  830 +++++++++++-
 tests/domaincapsdata/qemu_7.2.0.x86_64.xml    |  509 ++++++++
 .../domaincapsdata/qemu_8.0.0-q35.x86_64.xml  |  550 ++++++++
 .../domaincapsdata/qemu_8.0.0-tcg.x86_64.xml  |  862 +++++++++++-
 tests/domaincapsdata/qemu_8.0.0.x86_64.xml    |  550 ++++++++
 .../domaincapsdata/qemu_8.1.0-q35.x86_64.xml  |  711 +++++++++-
 .../domaincapsdata/qemu_8.1.0-tcg.x86_64.xml  |  864 +++++++++++-
 tests/domaincapsdata/qemu_8.1.0.x86_64.xml    |  711 +++++++++-
 .../domaincapsdata/qemu_8.2.0-q35.x86_64.xml  |  711 +++++++++-
 .../domaincapsdata/qemu_8.2.0-tcg.x86_64.xml  |  848 +++++++++++-
 tests/domaincapsdata/qemu_8.2.0.x86_64.xml    |  711 +++++++++-
 .../domaincapsdata/qemu_9.0.0-q35.x86_64.xml  |  711 +++++++++-
 .../domaincapsdata/qemu_9.0.0-tcg.x86_64.xml  |  811 +++++++++++-
 tests/domaincapsdata/qemu_9.0.0.x86_64.xml    |  711 +++++++++-
 .../domaincapsdata/qemu_9.1.0-q35.x86_64.xml  |  816 +++++++++++-
 .../domaincapsdata/qemu_9.1.0-tcg.x86_64.xml  | 1099 ++++++++++++++--
 tests/domaincapsdata/qemu_9.1.0.x86_64.xml    |  816 +++++++++++-
 .../domaincapsdata/qemu_9.2.0-q35.x86_64.xml  |  816 +++++++++++-
 .../domaincapsdata/qemu_9.2.0-tcg.x86_64.xml  | 1099 ++++++++++++++--
 tests/domaincapsdata/qemu_9.2.0.x86_64.xml    |  816 +++++++++++-
 .../cpu-Haswell.x86_64-latest.args            |    2 +-
 .../cpu-Haswell.x86_64-latest.xml             |    2 +-
 .../cpu-Haswell2.x86_64-latest.args           |    2 +-
 .../cpu-Haswell2.x86_64-latest.xml            |    2 +-
 .../cpu-Haswell3.x86_64-latest.args           |    2 +-
 .../cpu-Haswell3.x86_64-latest.xml            |    2 +-
 ...-Icelake-Server-pconfig.x86_64-latest.args |    2 +-
 ...u-Icelake-Server-pconfig.x86_64-latest.xml |    2 +-
 .../cpu-fallback.x86_64-8.0.0.args            |    2 +-
 .../cpu-fallback.x86_64-8.0.0.xml             |    2 +-
 ...-host-model-fallback-kvm.x86_64-8.1.0.args |    2 +-
 ...host-model-fallback-kvm.x86_64-latest.args |    2 +-
 ...host-model-fallback-tcg.x86_64-latest.args |    2 +-
 ...cpu-host-model-features.x86_64-latest.args |    2 +-
 .../cpu-host-model-kvm.x86_64-8.1.0.args      |    2 +-
 .../cpu-host-model-kvm.x86_64-latest.args     |    2 +-
 ...ost-model-nofallback-kvm.x86_64-8.1.0.args |    2 +-
 ...st-model-nofallback-kvm.x86_64-latest.args |    2 +-
 ...st-model-nofallback-tcg.x86_64-latest.args |    2 +-
 .../cpu-host-model-tcg.x86_64-latest.args     |    2 +-
 .../cpu-nofallback.x86_64-8.0.0.args          |    2 +-
 .../cpu-nofallback.x86_64-8.0.0.xml           |    2 +-
 .../cpu-strict1.x86_64-latest.args            |    2 +-
 .../cpu-strict1.x86_64-latest.xml             |    2 +-
 .../cpu-translation.x86_64-latest.args        |    2 +-
 .../cpu-translation.x86_64-latest.xml         |    2 +-
 154 files changed, 33779 insertions(+), 1095 deletions(-)
 create mode 100644 src/cpu_map/x86_Broadwell-v1.xml
 create mode 100644 src/cpu_map/x86_Broadwell-v2.xml
 create mode 100644 src/cpu_map/x86_Broadwell-v3.xml
 create mode 100644 src/cpu_map/x86_Broadwell-v4.xml
 create mode 100644 src/cpu_map/x86_Cascadelake-Server-v1.xml
 create mode 100644 src/cpu_map/x86_Cascadelake-Server-v2.xml
 create mode 100644 src/cpu_map/x86_Cascadelake-Server-v3.xml
 create mode 100644 src/cpu_map/x86_Cascadelake-Server-v4.xml
 create mode 100644 src/cpu_map/x86_Cascadelake-Server-v5.xml
 create mode 100644 src/cpu_map/x86_Cooperlake-v1.xml
 create mode 100644 src/cpu_map/x86_Cooperlake-v2.xml
 create mode 100644 src/cpu_map/x86_Dhyana-v1.xml
 create mode 100644 src/cpu_map/x86_Dhyana-v2.xml
 create mode 100644 src/cpu_map/x86_EPYC-Milan-v1.xml
 create mode 100644 src/cpu_map/x86_EPYC-Milan-v2.xml
 create mode 100644 src/cpu_map/x86_EPYC-Rome-v1.xml
 create mode 100644 src/cpu_map/x86_EPYC-Rome-v2.xml
 create mode 100644 src/cpu_map/x86_EPYC-Rome-v3.xml
 create mode 100644 src/cpu_map/x86_EPYC-Rome-v4.xml
 create mode 100644 src/cpu_map/x86_EPYC-v1.xml
 create mode 100644 src/cpu_map/x86_EPYC-v2.xml
 create mode 100644 src/cpu_map/x86_EPYC-v3.xml
 create mode 100644 src/cpu_map/x86_EPYC-v4.xml
 create mode 100644 src/cpu_map/x86_GraniteRapids-v1.xml
 create mode 100644 src/cpu_map/x86_Haswell-v1.xml
 create mode 100644 src/cpu_map/x86_Haswell-v2.xml
 create mode 100644 src/cpu_map/x86_Haswell-v3.xml
 create mode 100644 src/cpu_map/x86_Haswell-v4.xml
 create mode 100644 src/cpu_map/x86_Icelake-Server-v1.xml
 create mode 100644 src/cpu_map/x86_Icelake-Server-v2.xml
 create mode 100644 src/cpu_map/x86_Icelake-Server-v3.xml
 create mode 100644 src/cpu_map/x86_Icelake-Server-v4.xml
 create mode 100644 src/cpu_map/x86_Icelake-Server-v5.xml
 create mode 100644 src/cpu_map/x86_Icelake-Server-v6.xml
 create mode 100644 src/cpu_map/x86_Icelake-Server-v7.xml
 create mode 100644 src/cpu_map/x86_IvyBridge-v1.xml
 create mode 100644 src/cpu_map/x86_IvyBridge-v2.xml
 create mode 100644 src/cpu_map/x86_Nehalem-v1.xml
 create mode 100644 src/cpu_map/x86_Nehalem-v2.xml
 create mode 100644 src/cpu_map/x86_SandyBridge-v1.xml
 create mode 100644 src/cpu_map/x86_SandyBridge-v2.xml
 create mode 100644 src/cpu_map/x86_SapphireRapids-v1.xml
 create mode 100644 src/cpu_map/x86_SapphireRapids-v2.xml
 create mode 100644 src/cpu_map/x86_SapphireRapids-v3.xml
 create mode 100644 src/cpu_map/x86_SierraForest-v1.xml
 create mode 100644 src/cpu_map/x86_Skylake-Client-v1.xml
 create mode 100644 src/cpu_map/x86_Skylake-Client-v2.xml
 create mode 100644 src/cpu_map/x86_Skylake-Client-v3.xml
 create mode 100644 src/cpu_map/x86_Skylake-Client-v4.xml
 create mode 100644 src/cpu_map/x86_Skylake-Server-v1.xml
 create mode 100644 src/cpu_map/x86_Skylake-Server-v2.xml
 create mode 100644 src/cpu_map/x86_Skylake-Server-v3.xml
 create mode 100644 src/cpu_map/x86_Skylake-Server-v4.xml
 create mode 100644 src/cpu_map/x86_Skylake-Server-v5.xml
 create mode 100644 src/cpu_map/x86_Snowridge-v1.xml
 create mode 100644 src/cpu_map/x86_Snowridge-v2.xml
 create mode 100644 src/cpu_map/x86_Snowridge-v3.xml
 create mode 100644 src/cpu_map/x86_Snowridge-v4.xml
 create mode 100644 src/cpu_map/x86_Westmere-v1.xml
 create mode 100644 src/cpu_map/x86_Westmere-v2.xml

-- 
2.47.0
Re: [PATCH 00/32] Add support for versioned CPU models
Posted by Daniel P. Berrangé 2 weeks, 2 days ago
On Tue, Nov 19, 2024 at 07:49:36PM +0100, Jiri Denemark wrote:
> [...]
>
> I guess the current mix does not look very consistent (i.e., it's not
> either all or nothing), but it makes sense to me. The question is
> whether it also makes sense to others :-)

Yeah, the inconsistency pokes at my brain.

As a slight diversion first, let me point to the domcapabilities output:

$ virsh domcapabilities --xpath '//model' | grep Skylake-Client
<model usable="no" vendor="Intel">Skylake-Client</model>
<model usable="no" vendor="Intel">Skylake-Client-IBRS</model>
<model usable="no" vendor="Intel">Skylake-Client-noTSX-IBRS</model>
<model usable="no" vendor="Intel">Skylake-Client-v1</model>
<model usable="no" vendor="Intel">Skylake-Client-v2</model>
<model usable="no" vendor="Intel">Skylake-Client-v3</model>
<model usable="no" vendor="Intel">Skylake-Client-v4</model>

I'm not a fan of duplicating the CPU models here.

By comparison, for machine types we avoid the duplication by explicitly
telling the mgmt app what the aliases are:

  <machine canonical="pc-q35-8.2" maxCpus="1024">q35</machine>
  <machine maxCpus="1024">pc-q35-8.1</machine>
  <machine maxCpus="1024">pc-q35-8.2</machine>
  <machine maxCpus="255">pc-q35-2.4</machine>
  <machine maxCpus="255">pc-q35-2.5</machine>

I think we should be exposing to mgmt apps that some CPU model names are
merely an alias of another model.

This brings up the question of what we call the "canonical" name. Is
"Skylake-Client" canonical, or is "Skylake-Client-v1" canonical ?

i.e., do we report

$ virsh domcapabilities --xpath '//model' | grep Skylake-Client
<model usable="no" vendor="Intel" canonical="Skylake-Client-v1">Skylake-Client</model>
<model usable="no" vendor="Intel" canonical="Skylake-Client-v2">Skylake-Client-IBRS</model>
<model usable="no" vendor="Intel" canonical="Skylake-Client-v3">Skylake-Client-noTSX-IBRS</model>
<model usable="no" vendor="Intel">Skylake-Client-v4</model>

or

$ virsh domcapabilities --xpath '//model' | grep Skylake-Client
<model usable="no" vendor="Intel" canonical="Skylake-Client">Skylake-Client-v1</model>
<model usable="no" vendor="Intel" canonical="Skylake-Client-IBRS">Skylake-Client-v2</model>
<model usable="no" vendor="Intel" canonical="Skylake-Client-noTSX-IBRS">Skylake-Client-v3</model>
<model usable="no" vendor="Intel">Skylake-Client-v4</model>


In the case of machine types, libvirt doesn't decide - we honour
whatever QEMU tells us is the "canonical" name. Does QEMU tell us
this for CPU models?

Anyway, back to your question of translation consistency.

I think domain XML should never contain an aliased name; it should
always get expanded to the canonical name, as described by the
domcapabilities XML.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 00/32] Add support for versioned CPU models
Posted by Daniel P. Berrangé 2 weeks, 2 days ago
On Wed, Nov 20, 2024 at 12:39:17PM +0000, Daniel P. Berrangé wrote:
> On Tue, Nov 19, 2024 at 07:49:36PM +0100, Jiri Denemark wrote:
> > [...]
> > 
> > I guess the current mix does not look very consistent (i.e., it's not
> > either all or nothing), but it makes sense to me. The question is
> > whether it also makes sense to others :-)
> 
> [...]
> 
> I think we should be exposing to mgmt apps that some CPU model names are
> merely an alias of another model.
> 
> This brings up the question of what we call the "canonical" name. Is
> "Skylake-Client" canonical, or is "Skylake-Client-v1" canonical ?
> 
> i.e., do we report
> 
> $ virsh domcapabilities --xpath '//model' | grep Skylake-Client
> <model usable="no" vendor="Intel" canonical="Skylake-Client-v1">Skylake-Client</model>
> <model usable="no" vendor="Intel" canonical="Skylake-Client-v2">Skylake-Client-IBRS</model>
> <model usable="no" vendor="Intel" canonical="Skylake-Client-v3">Skylake-Client-noTSX-IBRS</model>
> <model usable="no" vendor="Intel">Skylake-Client-v4</model>
> 
> or
> 
> $ virsh domcapabilities --xpath '//model' | grep Skylake-Client
> <model usable="no" vendor="Intel" canonical="Skylake-Client">Skylake-Client-v1</model>
> <model usable="no" vendor="Intel" canonical="Skylake-Client-IBRS">Skylake-Client-v2</model>
> <model usable="no" vendor="Intel" canonical="Skylake-Client-noTSX-IBRS">Skylake-Client-v3</model>
> <model usable="no" vendor="Intel">Skylake-Client-v4</model>

Looking at query-cpu-definitions it reports:

        {
            "alias-of": "Broadwell-v1",
            "deprecated": false,
            "migration-safe": true,
            "name": "Broadwell",
            "static": false,
            "typename": "Broadwell-x86_64-cpu",
            "unavailable-features": [
                "pcid",
                "tsc-deadline",
                "hle",
                "invpcid",
                "rtm"
            ]
        },

That suggests the "-v1" name is the canonical name, which in turn
points to using

 $ virsh domcapabilities --xpath '//model' | grep Skylake-Client
 <model usable="no" vendor="Intel" canonical="Skylake-Client-v1">Skylake-Client</model>
 <model usable="no" vendor="Intel" canonical="Skylake-Client-v2">Skylake-Client-IBRS</model>
 <model usable="no" vendor="Intel" canonical="Skylake-Client-v3">Skylake-Client-noTSX-IBRS</model>
 <model usable="no" vendor="Intel">Skylake-Client-v4</model>

which is possibly a good thing from our POV. Existing libvirt mgmt
applications parsing domcapabilities likely expect our traditional
CPU names to appear as //model/text(). So by adding the versioned
name as //model/@canonical apps only have to know about the new
canonical versioned names if they choose to. Any name matching
logic they have is unlikely to be broken.


On the other hand, if we match our machine type behaviour wrt domain XML,
we would translate all names into their canonical (versioned) format. In
doing so there is a backward compatibility risk though, as apps may well not
be expecting libvirt to canonicalize the name behind their back. We got
away with it with versioned machine types as that was very early in life.
Can we get away with canonicalizing CPU names today?

If we don't canonicalize domain XML, the canonical names have practically
zero value to mgmt apps, other than perhaps as a way to show a linear
progression of changes over time. The latter would imply, however, that
apps interpret the 'vNNN' part of the name instead of treating the name
as an opaque string.



With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 00/32] Add support for versioned CPU models
Posted by Jiri Denemark 2 weeks, 1 day ago
On Wed, Nov 20, 2024 at 14:41:29 +0000, Daniel P. Berrangé wrote:
> On Wed, Nov 20, 2024 at 12:39:17PM +0000, Daniel P. Berrangé wrote:
> > On Tue, Nov 19, 2024 at 07:49:36PM +0100, Jiri Denemark wrote:
> > > [...]
> > 
> > [...]
> 
> Looking at query-cpu-definitions it reports:
> 
>         {
>             "alias-of": "Broadwell-v1",
>             "deprecated": false,
>             "migration-safe": true,
>             "name": "Broadwell",
>             "static": false,
>             "typename": "Broadwell-x86_64-cpu",
>             "unavailable-features": [
>                 "pcid",
>                 "tsc-deadline",
>                 "hle",
>                 "invpcid",
>                 "rtm"
>             ]
>         },
> 
> That suggests the "-v1" name is the canonical name, which in turn
> points to using
> 
>  $ virsh domcapabilities --xpath '//model' | grep Skylake-Client
>  <model usable="no" vendor="Intel" canonical="Skylake-Client-v1">Skylake-Client</model>
>  <model usable="no" vendor="Intel" canonical="Skylake-Client-v2">Skylake-Client-IBRS</model>
>  <model usable="no" vendor="Intel" canonical="Skylake-Client-v3">Skylake-Client-noTSX-IBRS</model>
>  <model usable="no" vendor="Intel">Skylake-Client-v4</model>
> 
> which is possibly a good thing from our POV. Existing libvirt mgmt
> applications parsing domcapabilities likely expect our traditional
> CPU names to appear as //model/text(). So by adding the versioned
> name as //model/@canonical apps only have to know about the new
> canonical versioned names if they choose to. Any name matching
> logic they have is unlikely to be broken.
> 
> 
> On the other hand, if we match our machine type behaviour wrt domain XML,
> we would translate all names into their canonical (versioned) format. In
> doing so there is a backward compatibility risk though, as apps may well not
> be expecting libvirt to canonicalize the name behind their back. We got
> away with it with versioned machine types as that was very early in life.
> Can we get away with canonicalizing CPU names today?
> 
> If we don't canonicalize domain XML, the canonical names have practically
> zero value to mgmt apps, other than perhaps as a way to show a linear
> progression of changes over time. The latter would imply, however, that
> apps interpret the 'vNNN' part of the name instead of treating the name
> as an opaque string.

The original idea (ages ago) was to canonicalize CPU models. But at that
point we were thinking QEMU would, over time, change what a
non-versioned CPU model is translated to. But they don't really do that
and have no intention of doing it in the future either. The other -v*
models that have an alias were never supposed to change.

So I would say we don't really need to do any canonicalization at all,
as its only benefit is not really needed. We'd need to undo it for
migration compatibility anyway, which could actually look strange when
done even on domains that were defined using the canonical name.

Jirka
Re: [PATCH 00/32] Add support for versioned CPU models
Posted by Jiri Denemark 2 weeks, 1 day ago
On Thu, Nov 21, 2024 at 16:14:58 +0100, Jiri Denemark wrote:
> On Wed, Nov 20, 2024 at 14:41:29 +0000, Daniel P. Berrangé wrote:
> >  $ virsh domcapabilities --xpath '//model' | grep Skylake-Client
> >  <model usable="no" vendor="Intel" canonical="Skylake-Client-v1">Skylake-Client</model>
> >  <model usable="no" vendor="Intel" canonical="Skylake-Client-v2">Skylake-Client-IBRS</model>
> >  <model usable="no" vendor="Intel" canonical="Skylake-Client-v3">Skylake-Client-noTSX-IBRS</model>
> >  <model usable="no" vendor="Intel">Skylake-Client-v4</model>

I'm thinking about the benefit for apps of knowing which CPU model is an
alias and which one is canonical. I guess they only need to know which
models are supported on all hosts to select one that can be migrated
everywhere. Wouldn't it be better to have the following instead?

    <model usable='yes' vendor='Intel' base='Skylake-Client' version='3'>Skylake-Client-noTSX-IBRS</model>
    <model usable='yes' vendor='Intel' base='Skylake-Client' version='3'>Skylake-Client-v3</model>

Apps could then easily select the latest version of a specific model (or
do similar things) without having to parse model names. If they really
wanted, they could even deduce that Skylake-Client-noTSX-IBRS and
Skylake-Client-v3 are in fact the same CPU model.
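
E.g. (with the hypothetical base/version attributes above), an app could
pick all variants of one base model with a single query:

    $ virsh domcapabilities --xpath "//model[@base='Skylake-Client']"

and take the entry with the highest version attribute as the latest one.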

Jirka
Re: [PATCH 00/32] Add support for versioned CPU models
Posted by Daniel P. Berrangé 2 weeks, 1 day ago
On Thu, Nov 21, 2024 at 07:03:13PM +0100, Jiri Denemark wrote:
> On Thu, Nov 21, 2024 at 16:14:58 +0100, Jiri Denemark wrote:
> > [...]
> 
> I'm thinking about the benefit for apps of knowing which CPU model is an
> alias and which one is canonical. I guess they only need to know which
> models are supported on all hosts to select one that can be migrated
> everywhere. Wouldn't it be better to have the following instead?
> 
>     <model usable='yes' vendor='Intel' base='Skylake-Client' version='3'>Skylake-Client-noTSX-IBRS</model>
>     <model usable='yes' vendor='Intel' base='Skylake-Client' version='3'>Skylake-Client-v3</model>
> 
> Apps could then easily select the latest version of a specific model (or
> do similar things) without having to parse model names. If they really
> wanted, they could even deduce that Skylake-Client-noTSX-IBRS and
> Skylake-Client-v3 are in fact the same CPU model.

The problem with this is that at the QAPI level, QEMU does not express
any concept of "sequence of versioned models", nor a "base" model.

The "-vNNN" suffixes are just a naming convention it happens to use
currently. Humans can think it is a version sequence, but QEMU's not
defined that as an API promise.

The other thing to be wary of is that a "bigger" version does not
imply "better", "newer", or still runnable. A version number simply
reflects the order in which the variants were added to QEMU, for
whatever reason each was created.

I'd probably consider them "variants" rather than "versions" and not
suggest a particular ordering.

From an app POV, the important thing is only that you can identify
a desired variant that is compatible with all the hosts you need
to use.
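
For example, a minimal sketch of that selection (assuming the app has
already fetched each host's domcapabilities XML; the helper names are
made up):

    import xml.etree.ElementTree as ET

    def usable_models(domcaps_xml):
        # Names of all CPU models one host reports as usable='yes'
        # in its domcapabilities document.
        root = ET.fromstring(domcaps_xml)
        return {m.text for m in root.iter('model')
                if m.get('usable') == 'yes'}

    def common_models(domcaps_docs):
        # A variant usable on every host is a safe choice for a guest
        # that may migrate anywhere.
        sets = [usable_models(doc) for doc in domcaps_docs]
        return set.intersection(*sets) if sets else set()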

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 00/32] Add support for versioned CPU models
Posted by Daniel P. Berrangé 2 weeks, 2 days ago
FYI, I re-ran the sync script after applying this series:

./src/cpu_map/sync_qemu_models_i386.py ../qemu src/cpu_map

and it adds a bunch more CPUs from QEMU git master.

       <include filename='x86_GraniteRapids-v1.xml'/>
       <include filename='x86_SierraForest.xml'/>
       <include filename='x86_SierraForest-v1.xml'/>
+      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_GraniteRapids-v2.xml'/>
+      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton.xml'/>
+      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v1.xml'/>
+      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v2.xml'/>
+      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v3.xml'/>
+      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_KnightsMill.xml'/>
     </group>

Also it is weird that it added absolute paths rather than paths
relative to index.xml. This fails when actually run, as libvirt
expects only relative paths and so prepends the CPU map directory
even to an absolute path.
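The usual fix would be to store the include target relative to the
directory holding index.xml; a hypothetical helper (not the actual
script code) would be:

    import os

    def include_name(model_path, cpu_map_dir):
        # Emit paths relative to index.xml's directory so the generated
        # XML does not depend on where the source tree is checked out.
        return os.path.relpath(model_path, start=cpu_map_dir)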


On Tue, Nov 19, 2024 at 07:49:36PM +0100, Jiri Denemark wrote:
> When parsing a domain XML which uses a non-versioned CPU model we want
> to replace it with the appropriate version variant similarly to what we
> do with machine types. Theoretically QEMU supports per machine type
> specification of a version with which a non-versioned CPU model is
> replaced, but this is always 1 for all machine types and the
> query-machines QMP command does not even report the value.
> 
> Luckily, after talking to Igor, it turns out that a single number per
> machine type does not really allow setting it to anything but 1, as
> CPU models have different numbers of versions. Each machine type would
> need to define a specific version for each CPU model, which would be a
> maintenance nightmare. For this reason there is no desire in QEMU to
> ever resolve non-versioned CPU models to anything but v1, and the per
> machine type setting will most likely be removed completely. Thus it
> is safe for us to always use v1 as the canonical CPU model.
> 
> Some non-versioned CPU models, however, are actually aliases to specific
> versions of a base model rather than being base models themselves. These
> are the old CPU model variants before model versions were introduced,
> e.g., -noTSX, -IBRS, etc. The mapping of these names to versions is
> hardcoded and will never change. We do not translate such CPU models to
> the corresponding versioned names. This allows us to introduce the
> corresponding -v* variants that match the QEMU models rather than the
> existing definitions in our CPU map. The guest CPU will be the same
> either way, but the way libvirt checks the CPU model compatibility with
> the host will be different. The old "partial" check done by libvirt
> using the definition from CPU map will still be used for the old names
> (we can't change this for compatibility reasons), but the corresponding
> versioned variants (as well as all other versions that do not have a
> non-versioned alias) will benefit from the recently introduced new
> "partial" check which uses only the information we get from QEMU to
> check whether a specific CPU definition is usable on the host.
> 
> Other options I considered were:
> - replace -noTSX, -IBRS, ... models with their versioned variants
>     - we'd need to translate them back for migration (just what we do
>       for -v1) for backward compatibility
>     - I found the benefit of new partial checking when explicitly using
>       the versioned variants quite appealing and dropped the relevant
>       changes in progress
> 
> - do not translate anything, i.e., not even base models to -v1
>     - the idea behind translating was to make sure QEMU doesn't
>       suddenly start resolving the base CPU model to a different
>       version (this does not happen with -noTSX etc. as they are
>       hardcoded aliases); Igor said they will never do that, so is
>       this still valid?
>     - not translating would bring the same benefit of explicitly using
>       -v1 vs non-versioned name
> 
> I guess the current mix does not look very consistent (i.e., it's not
> either all or nothing), but it makes sense to me. The question is
> whether it also makes sense to others :-)
> 
> Jiri Denemark (32):
>   cpu_x86: Copy added and removed features from ancestor
>   sync_qemu_features_i386: Add some removed features back
>   sync_qemu_models_i386: Use f-strings
>   sync_qemu_models_i386: Do not overwrite existing models
>   sync_qemu_models_i386: Do not require full path to QEMU's cpu.c
>   sync_qemu_models_i386: Add support for versioned CPU models
>   sync_qemu_models_i386: Store extra info in a separate file
>   sync_qemu_models_i386: Switch to lxml
>   cpu_map: Group models in index.xml
>   sync_qemu_models_i386: Update index.xml
>   sync_qemu_models_i386: Copy signatures from base model
>   cpu: Introduce virCPUCheckModel
>   qemu: Canonicalize CPU models
>   cpu_map: Add versions of SierraForest CPU model
>   cpu_map: Add versions of GraniteRapids CPU model
>   cpu_map: Add versions of SapphireRapids CPU model
>   cpu_map: Add versions of Snowridge CPU model
>   cpu_map: Add versions of Cooperlake CPU model
>   cpu_map: Add versions of Icelake-Server CPU model
>   cpu_map: Add versions of Cascadelake-Server CPU model
>   cpu_map: Add versions of Skylake-Server CPU model
>   cpu_map: Add versions of Skylake-Client CPU model
>   cpu_map: Add versions of Broadwell CPU model
>   cpu_map: Add versions of Haswell CPU model
>   cpu_map: Add versions of IvyBridge CPU model
>   cpu_map: Add versions of SandyBridge CPU model
>   cpu_map: Add versions of Westmere CPU model
>   cpu_map: Add versions of Nehalem CPU model
>   cpu_map: Add versions of EPYC-Milan CPU model
>   cpu_map: Add versions of EPYC-Rome CPU model
>   cpu_map: Add versions of EPYC CPU model
>   cpu_map: Add versions of Dhyana CPU model
> 

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 00/32] Add support for versioned CPU models
Posted by Jiri Denemark 2 weeks, 2 days ago
On Wed, Nov 20, 2024 at 10:23:55 +0000, Daniel P. Berrangé wrote:
> FYI, I re-ran the sync script after applying this series:
> 
> ./src/cpu_map/sync_qemu_models_i386.py ../qemu src/cpu_map
> 
> and it adds a bunch more CPUs from QEMU git master.
> 
>        <include filename='x86_GraniteRapids-v1.xml'/>
>        <include filename='x86_SierraForest.xml'/>
>        <include filename='x86_SierraForest-v1.xml'/>
> +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_GraniteRapids-v2.xml'/>

This is a new version added in 9.2.0, while I explicitly wanted to add
only released CPU models, i.e., I used 9.1.*. Since 9.2.0 is already in
the rc phase, I guess I could add this v2 as well.
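That is, I ran the sync script against a released tag rather than git
master; roughly (a sketch, with v9.1.0 standing in for whichever 9.1.x
tag was actually used):

    git -C ../qemu checkout v9.1.0
    ./src/cpu_map/sync_qemu_models_i386.py ../qemu src/cpu_map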

> +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton.xml'/>
> +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v1.xml'/>
> +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v2.xml'/>
> +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v3.xml'/>
> +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_KnightsMill.xml'/>

This series is adding versioned variants to existing CPU models, while
Denverton and KnightsMill were never supported by libvirt. I don't think
they need to be added (Denverton is an Atom CPU and KnightsMill is some
kind of a dead evolution branch), but we can just add them for
consistency.

>      </group>
> 
> Also it is weird that it added absolute paths rather than paths
> relative to index.xml. This fails when actually run, as libvirt
> expects only relative paths and so prepends the CPU map directory
> even to an absolute path.

The absolute paths are strange, I'll look at it.

Jirka
Re: [PATCH 00/32] Add support for versioned CPU models
Posted by Daniel P. Berrangé 2 weeks, 2 days ago
On Wed, Nov 20, 2024 at 02:32:42PM +0100, Jiri Denemark wrote:
> On Wed, Nov 20, 2024 at 10:23:55 +0000, Daniel P. Berrangé wrote:
> > FYI, I re-ran the sync script after applying this series:
> > 
> > ./src/cpu_map/sync_qemu_models_i386.py ../qemu src/cpu_map
> > 
> > and it adds a bunch more CPUs from QEMU git master.
> > 
> >        <include filename='x86_GraniteRapids-v1.xml'/>
> >        <include filename='x86_SierraForest.xml'/>
> >        <include filename='x86_SierraForest-v1.xml'/>
> > +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_GraniteRapids-v2.xml'/>
> 
> This is a new version added in 9.2.0, while I explicitly wanted to add
> only released CPU models, i.e., I used 9.1.*. Since 9.2.0 is already in
> the rc phase, I guess I could add this v2 as well.

There's a small chance someone could turn up with a bugfix for -v2
before the 9.2.0 GA release. Skip it for the Dec 1st release, and we
can have it in the Jan 15th release.

> 
> > +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton.xml'/>
> > +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v1.xml'/>
> > +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v2.xml'/>
> > +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_Denverton-v3.xml'/>
> > +      <include filename='/var/home/berrange/src/virt/libvirt/src/cpu_map/x86_KnightsMill.xml'/>
> 
> This series is adding versioned variants to existing CPU models, while
> Denverton and KnightsMill were never supported by libvirt. I don't think
> they need to be added (Denverton is an Atom CPU and KnightsMill is some
> kind of a dead evolution branch), but we can just add them for
> consistency.

For the sake of just 5 extra CPU XMLs out of 130, I don't think it is
worth making an exception to exclude them.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|