[PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine

Daniel P. Berrangé posted 2 patches 6 months, 2 weeks ago
[PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine
Posted by Daniel P. Berrangé 6 months, 2 weeks ago
This effectively reverts

  commit 54c4ea8f3ae614054079395842128a856a73dbf9
  Author: Zhao Liu <zhao1.liu@intel.com>
  Date:   Sat Mar 9 00:01:37 2024 +0800

    hw/core/machine-smp: Deprecate unsupported "parameter=1" SMP configurations

but is not done as a 'git revert' since the part of the changes to the
file hw/core/machine-smp.c which adds 'has_XXX' checks remains desirable.
Furthermore, we have to tweak the subsequently added unit test to
account for the differing error messages.

The rationale for the original deprecation was:

  "Currently, it was allowed for users to specify the unsupported
   topology parameter as "1". For example, x86 PC machine doesn't
   support drawer/book/cluster topology levels, but user could specify
   "-smp drawers=1,books=1,clusters=1".

   This is meaningless and confusing, so that the support for this kind
   of configurations is marked deprecated since 9.0."

There are varying POVs on the topic of 'unsupported' topology levels.

It is common to say that on a system without hyperthreading there is
always 1 thread. Likewise, when new CPUs introduced the concept of
multiple 'dies', it was reasonable to say that all historical CPUs
before that implicitly had 1 'die'. The same applies to the more
recently introduced 'modules' and 'clusters' parameters. From this POV,
it is valid to set 'parameter=1' on the -smp command line for any
machine; only a value > 1 is strictly an error condition.

Accepting 'parameter=1' doesn't cause any functional difficulty for
QEMU, because internally the QEMU code itself assumes that all
"unsupported" parameters implicitly have a value of '1'.
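
For example (an illustrative command line, not taken from the patch
itself), with this change applied an x86 PC machine started with

  -smp 4,drawers=1,books=1,clusters=1

is expected to end up with exactly the same guest topology as one
started with a plain

  -smp 4

since the levels the machine does not model are treated as having a
count of 1 either way.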

At the libvirt level, we've allowed applications to set 'parameter=1'
when configuring a guest, and pass that through to QEMU.

Deprecating this creates extra difficulty for libvirt, because there's no info
exposed from QEMU about which machine types "support" which parameters.
Thus, libvirt can't know whether it is valid to pass 'parameter=1' for
a given machine type, or whether it will trigger deprecation messages.

Since there's no apparent functional benefit to deleting this deprecated
behaviour from QEMU, and it creates problems for consumers of QEMU,
remove this deprecation.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 docs/about/deprecated.rst   | 14 -------
 hw/core/machine-smp.c       | 82 ++++++++++++-------------------------
 tests/unit/test-smp-parse.c |  8 ++--
 3 files changed, 30 insertions(+), 74 deletions(-)

diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index e22acb17f2..5b551b12a6 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -47,20 +47,6 @@ as short-form boolean values, and passed to plugins as ``arg_name=on``.
 However, short-form booleans are deprecated and full explicit ``arg_name=on``
 form is preferred.
 
-``-smp`` (Unsupported "parameter=1" SMP configurations) (since 9.0)
-'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
-
-Specified CPU topology parameters must be supported by the machine.
-
-In the SMP configuration, users should provide the CPU topology parameters that
-are supported by the target machine.
-
-However, historically it was allowed for users to specify the unsupported
-topology parameter as "1", which is meaningless. So support for this kind of
-configurations (e.g. -smp drawers=1,books=1,clusters=1 for x86 PC machine) is
-marked deprecated since 9.0, users have to ensure that all the topology members
-described with -smp are supported by the target machine.
-
 User-mode emulator command line arguments
 -----------------------------------------
 
diff --git a/hw/core/machine-smp.c b/hw/core/machine-smp.c
index 2b93fa99c9..eb43caca9b 100644
--- a/hw/core/machine-smp.c
+++ b/hw/core/machine-smp.c
@@ -119,75 +119,45 @@ void machine_parse_smp_config(MachineState *ms,
 
     /*
      * If not supported by the machine, a topology parameter must be
-     * omitted.
+     * not be set to a value greater than 1.
      */
-    if (!mc->smp_props.modules_supported && config->has_modules) {
-        if (config->modules > 1) {
-            error_setg(errp, "modules not supported by this "
-                       "machine's CPU topology");
-            return;
-        } else {
-            /* Here modules only equals 1 since we've checked zero case. */
-            warn_report("Deprecated CPU topology (considered invalid): "
-                        "Unsupported modules parameter mustn't be "
-                        "specified as 1");
-        }
+    if (!mc->smp_props.modules_supported &&
+        config->has_modules && config->modules > 1) {
+        error_setg(errp,
+                   "modules > 1 not supported by this machine's CPU topology");
+        return;
     }
     modules = modules > 0 ? modules : 1;
 
-    if (!mc->smp_props.clusters_supported && config->has_clusters) {
-        if (config->clusters > 1) {
-            error_setg(errp, "clusters not supported by this "
-                       "machine's CPU topology");
-            return;
-        } else {
-            /* Here clusters only equals 1 since we've checked zero case. */
-            warn_report("Deprecated CPU topology (considered invalid): "
-                        "Unsupported clusters parameter mustn't be "
-                        "specified as 1");
-        }
+    if (!mc->smp_props.clusters_supported &&
+        config->has_clusters && config->clusters > 1) {
+        error_setg(errp,
+                   "clusters > 1 not supported by this machine's CPU topology");
+        return;
     }
     clusters = clusters > 0 ? clusters : 1;
 
-    if (!mc->smp_props.dies_supported && config->has_dies) {
-        if (config->dies > 1) {
-            error_setg(errp, "dies not supported by this "
-                       "machine's CPU topology");
-            return;
-        } else {
-            /* Here dies only equals 1 since we've checked zero case. */
-            warn_report("Deprecated CPU topology (considered invalid): "
-                        "Unsupported dies parameter mustn't be "
-                        "specified as 1");
-        }
+    if (!mc->smp_props.dies_supported &&
+        config->has_dies && config->dies > 1) {
+        error_setg(errp,
+                   "dies > 1 not supported by this machine's CPU topology");
+        return;
     }
     dies = dies > 0 ? dies : 1;
 
-    if (!mc->smp_props.books_supported && config->has_books) {
-        if (config->books > 1) {
-            error_setg(errp, "books not supported by this "
-                       "machine's CPU topology");
-            return;
-        } else {
-            /* Here books only equals 1 since we've checked zero case. */
-            warn_report("Deprecated CPU topology (considered invalid): "
-                        "Unsupported books parameter mustn't be "
-                        "specified as 1");
-        }
+    if (!mc->smp_props.books_supported &&
+        config->has_books && config->books > 1) {
+        error_setg(errp,
+                   "books > 1 not supported by this machine's CPU topology");
+        return;
     }
     books = books > 0 ? books : 1;
 
-    if (!mc->smp_props.drawers_supported && config->has_drawers) {
-        if (config->drawers > 1) {
-            error_setg(errp, "drawers not supported by this "
-                       "machine's CPU topology");
-            return;
-        } else {
-            /* Here drawers only equals 1 since we've checked zero case. */
-            warn_report("Deprecated CPU topology (considered invalid): "
-                        "Unsupported drawers parameter mustn't be "
-                        "specified as 1");
-        }
+    if (!mc->smp_props.drawers_supported &&
+        config->has_drawers && config->drawers > 1) {
+        error_setg(errp,
+                   "drawers > 1 not supported by this machine's CPU topology");
+        return;
     }
     drawers = drawers > 0 ? drawers : 1;
 
diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c
index 8994337e12..56165e6644 100644
--- a/tests/unit/test-smp-parse.c
+++ b/tests/unit/test-smp-parse.c
@@ -337,21 +337,21 @@ static const struct SMPTestData data_generic_invalid[] = {
     {
         /* config: -smp 2,dies=2 */
         .config = SMP_CONFIG_WITH_DIES(T, 2, F, 0, T, 2, F, 0, F, 0, F, 0),
-        .expect_error = "dies not supported by this machine's CPU topology",
+        .expect_error = "dies > 1 not supported by this machine's CPU topology",
     }, {
         /* config: -smp 2,clusters=2 */
         .config = SMP_CONFIG_WITH_CLUSTERS(T, 2, F, 0, T, 2, F, 0, F, 0, F, 0),
-        .expect_error = "clusters not supported by this machine's CPU topology",
+        .expect_error = "clusters > 1 not supported by this machine's CPU topology",
     }, {
         /* config: -smp 2,books=2 */
         .config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 2, F, 0, T, 2, F,
                                                 0, F, 0, F, 0, F, 0),
-        .expect_error = "books not supported by this machine's CPU topology",
+        .expect_error = "books > 1 not supported by this machine's CPU topology",
     }, {
         /* config: -smp 2,drawers=2 */
         .config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 2, T, 2, F, 0, F,
                                                 0, F, 0, F, 0, F, 0),
-        .expect_error = "drawers not supported by this machine's CPU topology",
+        .expect_error = "drawers > 1 not supported by this machine's CPU topology",
     }, {
         /* config: -smp 8,sockets=2,cores=4,threads=2,maxcpus=8 */
         .config = SMP_CONFIG_GENERIC(T, 8, T, 2, T, 4, T, 2, T, 8),
-- 
2.43.0
Re: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine
Posted by Zhao Liu 6 months, 1 week ago
On Mon, May 13, 2024 at 01:33:57PM +0100, Daniel P. Berrangé wrote:
> Date: Mon, 13 May 2024 13:33:57 +0100
> From: "Daniel P. Berrangé" <berrange@redhat.com>
> Subject: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any
>  machine
> 
> This effectively reverts
> 
>   commit 54c4ea8f3ae614054079395842128a856a73dbf9
>   Author: Zhao Liu <zhao1.liu@intel.com>
>   Date:   Sat Mar 9 00:01:37 2024 +0800
> 
>     hw/core/machine-smp: Deprecate unsupported "parameter=1" SMP configurations
> 
> but is not done as a 'git revert' since the part of the changes to the
> file hw/core/machine-smp.c which add 'has_XXX' checks remain desirable.
> Furthermore, we have to tweak the subsequently added unit test to
> account for differing warning message.
> 
> The rationale for the original deprecation was:
> 
>   "Currently, it was allowed for users to specify the unsupported
>    topology parameter as "1". For example, x86 PC machine doesn't
>    support drawer/book/cluster topology levels, but user could specify
>    "-smp drawers=1,books=1,clusters=1".
> 
>    This is meaningless and confusing, so that the support for this kind
>    of configurations is marked deprecated since 9.0."
> 
> There are varying POVs on the topic of 'unsupported' topology levels.
> 
> It is common to say that on a system without hyperthreading there is
> always 1 thread. Likewise, when new CPUs introduced the concept of
> multiple 'dies', it was reasonable to say that all historical CPUs
> before that implicitly had 1 'die'. The same applies to the more
> recently introduced 'modules' and 'clusters' parameters. From this POV,
> it is valid to set 'parameter=1' on the -smp command line for any
> machine; only a value > 1 is strictly an error condition.
> 
> It doesn't cause any functional difficulty for QEMU, because internally
> the QEMU code is itself assuming that all "unsupported" parameters
> implicitly have a value of '1'.
> 
> At the libvirt level, we've allowed applications to set 'parameter=1'
> when configuring a guest, and pass that through to QEMU.
> 
> Deprecating this creates extra difficulty for libvirt, because there's no info
> exposed from QEMU about which machine types "support" which parameters.
> Thus, libvirt can't know whether it is valid to pass 'parameter=1' for
> a given machine type, or whether it will trigger deprecation messages.
> 
> Since there's no apparent functional benefit to deleting this deprecated
> behaviour from QEMU, and it creates problems for consumers of QEMU,
> remove this deprecation.
> 
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>  docs/about/deprecated.rst   | 14 -------
>  hw/core/machine-smp.c       | 82 ++++++++++++-------------------------
>  tests/unit/test-smp-parse.c |  8 ++--
>  3 files changed, 30 insertions(+), 74 deletions(-)
> 
> diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
> index e22acb17f2..5b551b12a6 100644
> --- a/docs/about/deprecated.rst
> +++ b/docs/about/deprecated.rst
> @@ -47,20 +47,6 @@ as short-form boolean values, and passed to plugins as ``arg_name=on``.
>  However, short-form booleans are deprecated and full explicit ``arg_name=on``
>  form is preferred.
>  
> -``-smp`` (Unsupported "parameter=1" SMP configurations) (since 9.0)
> -'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
> -
> -Specified CPU topology parameters must be supported by the machine.
> -
> -In the SMP configuration, users should provide the CPU topology parameters that
> -are supported by the target machine.
> -
> -However, historically it was allowed for users to specify the unsupported
> -topology parameter as "1", which is meaningless. So support for this kind of
> -configurations (e.g. -smp drawers=1,books=1,clusters=1 for x86 PC machine) is
> -marked deprecated since 9.0, users have to ensure that all the topology members
> -described with -smp are supported by the target machine.
> -
>  User-mode emulator command line arguments
>  -----------------------------------------
>  
> diff --git a/hw/core/machine-smp.c b/hw/core/machine-smp.c
> index 2b93fa99c9..eb43caca9b 100644
> --- a/hw/core/machine-smp.c
> +++ b/hw/core/machine-smp.c
> @@ -119,75 +119,45 @@ void machine_parse_smp_config(MachineState *ms,
>  
>      /*
>       * If not supported by the machine, a topology parameter must be

Also need to change this line as:

s/must be/must/

> -     * omitted.
> +     * not be set to a value greater than 1.
>       */

Only the above nit,

Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Re: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine
Posted by Zhao Liu 6 months, 2 weeks ago
Cc Paolo for x86 topology part

Hi Daniel,

On Mon, May 13, 2024 at 01:33:57PM +0100, Daniel P. Berrangé wrote:
> Date: Mon, 13 May 2024 13:33:57 +0100
> From: "Daniel P. Berrangé" <berrange@redhat.com>
> Subject: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any
>  machine
> 
> This effectively reverts
> 
>   commit 54c4ea8f3ae614054079395842128a856a73dbf9
>   Author: Zhao Liu <zhao1.liu@intel.com>
>   Date:   Sat Mar 9 00:01:37 2024 +0800
> 
>     hw/core/machine-smp: Deprecate unsupported "parameter=1" SMP configurations
> 
> but is not done as a 'git revert' since the part of the changes to the
> file hw/core/machine-smp.c which add 'has_XXX' checks remain desirable.
> Furthermore, we have to tweak the subsequently added unit test to
> account for differing warning message.
> 
> The rationale for the original deprecation was:
> 
>   "Currently, it was allowed for users to specify the unsupported
>    topology parameter as "1". For example, x86 PC machine doesn't
>    support drawer/book/cluster topology levels, but user could specify
>    "-smp drawers=1,books=1,clusters=1".
> 
>    This is meaningless and confusing, so that the support for this kind
>    of configurations is marked deprecated since 9.0."
> 
> There are varying POVs on the topic of 'unsupported' topology levels.
> 
> It is common to say that on a system without hyperthreading there is
> always 1 thread. Likewise, when new CPUs introduced the concept of
> multiple 'dies', it was reasonable to say that all historical CPUs
> before that implicitly had 1 'die'. The same applies to the more
> recently introduced 'modules' and 'clusters' parameters. From this POV,
> it is valid to set 'parameter=1' on the -smp command line for any
> machine; only a value > 1 is strictly an error condition.

Currently it has become more and more difficult for QEMU to maintain a
general topology hierarchy; there are two recent examples:

1. As you mentioned, "module" vs. "cluster": one reason for introducing
"module" is that it is difficult to define what "cluster" means for x86.
The cluster in the device tree can be nested, so it can correspond to
an x86 die, or it can correspond to an x86 module. Therefore, specifying
"clusters=1" for x86 is ambiguous.

2. s390 introduces book and drawer, which are above the socket/package
level, but for x86 the level above the package is named "cluster" (yeah,
"cluster" again :-(). So if a user sets "books=1" or "drawers=1" for x86,
it's meaningless. Similarly, "clusters=1" is also very confusing for an
x86 machine.

I think that only thread/core/socket are architecturally general; the
other topology levels are hard to define across architectures, so
allowing unsupported topology=1 is always confusing...

Moreover, QEMU currently requires a clear topology containment
relationship when defining a topology, and it will become increasingly
difficult to define a generic containment relationship when new
topology levels are introduced in the future...

> It doesn't cause any functional difficulty for QEMU, because internally
> the QEMU code is itself assuming that all "unsupported" parameters
> implicitly have a value of '1'.
> 
> At the libvirt level, we've allowed applications to set 'parameter=1'
> when configuring a guest, and pass that through to QEMU.
> 
> Deprecating this creates extra difficulty for libvirt, because there's no info
> exposed from QEMU about which machine types "support" which parameters.
> Thus, libvirt can't know whether it is valid to pass 'parameter=1' for
> a given machine type, or whether it will trigger deprecation messages.

I understand that libvirt is having trouble because there is no interface
to expose which topology levels the current machine supports. As a
workaround to eliminate the difficulties at the libvirt level, it's
ok for me.

But I believe deprecating the unsupported topology is necessary, so do
you think it's acceptable to include an interface to expose the supported
topology if it's going to be deprecated again later?

Regards,
Zhao
Re: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine
Posted by Daniel P. Berrangé 6 months, 2 weeks ago
On Mon, May 13, 2024 at 10:22:22PM +0800, Zhao Liu wrote:
> Cc Paolo for x86 topology part
> 
> Hi Daniel,
> 
> On Mon, May 13, 2024 at 01:33:57PM +0100, Daniel P. Berrangé wrote:
> > Date: Mon, 13 May 2024 13:33:57 +0100
> > From: "Daniel P. Berrangé" <berrange@redhat.com>
> > Subject: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any
> >  machine
> > 
> > This effectively reverts
> > 
> >   commit 54c4ea8f3ae614054079395842128a856a73dbf9
> >   Author: Zhao Liu <zhao1.liu@intel.com>
> >   Date:   Sat Mar 9 00:01:37 2024 +0800
> > 
> >     hw/core/machine-smp: Deprecate unsupported "parameter=1" SMP configurations
> > 
> > but is not done as a 'git revert' since the part of the changes to the
> > file hw/core/machine-smp.c which add 'has_XXX' checks remain desirable.
> > Furthermore, we have to tweak the subsequently added unit test to
> > account for differing warning message.
> > 
> > The rationale for the original deprecation was:
> > 
> >   "Currently, it was allowed for users to specify the unsupported
> >    topology parameter as "1". For example, x86 PC machine doesn't
> >    support drawer/book/cluster topology levels, but user could specify
> >    "-smp drawers=1,books=1,clusters=1".
> > 
> >    This is meaningless and confusing, so that the support for this kind
> >    of configurations is marked deprecated since 9.0."
> > 
> > There are varying POVs on the topic of 'unsupported' topology levels.
> > 
> > It is common to say that on a system without hyperthreading there is
> > always 1 thread. Likewise, when new CPUs introduced the concept of
> > multiple 'dies', it was reasonable to say that all historical CPUs
> > before that implicitly had 1 'die'. The same applies to the more
> > recently introduced 'modules' and 'clusters' parameters. From this POV,
> > it is valid to set 'parameter=1' on the -smp command line for any
> > machine; only a value > 1 is strictly an error condition.
> 
> Currently QEMU has become more and more difficult to maintain a general
> topology hierarchy, there are two recent examples:
> 
> 1. as you mentioned "module" v.s. "cluster", one reason for introducing
> "module" is because it is difficult to define what "cluster" is for x86,
> the cluster in the device tree can be nested, then it can correspond to
> an x86 die, or it can correspond to an x86 module. Therefore, specifying
> "clusters=1" for x86 is ambiguous.
> 
> 2. s390 introduces book and drawer, which are above socket/package
> level, but for x86, the level above the package names "cluster" (yeah,
> "cluster" again :-(). So if user sets "books=1" or "drawers=1" for x86,
> then it's meaningless. Similarly, "clusters=1" is also very confusing for
> x86 machine.
> 
> I think that only thread/core/socket are architecturally general, the
> other topology levels are hard to define across architectures, then
> allowing unsupported topology=1 is always confusing...
> 
> Moreover, QEMU currently requires a clear topology containment
> relationship when defining a topology, after which it will become
> increasingly difficult to define a generic topology containment
> relationship when new topology levels are introduced in the future...

I'm failing to see what real world technical problems QEMU faces
with a parameter being set to '1' by a mgmt app, when QEMU itself
treats all omitted values as being '1' anyway.

If we're trying to faithfully model the real world, then restricting
the topology against machine types still looks inherently wrong.
The valid topology ought to be constrained based on the named CPU model.
eg it doesn't make sense to allow 'dies=4' with a Skylake CPU model,
only an EPYC CPU model, especially if we want to model cache info in
a way that matches the real world silicon better.
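
As a purely hypothetical illustration (the CPU model names and counts
here are just examples, not something QEMU enforces today), the kind of
constraint I mean would distinguish

  -cpu EPYC -smp 16,sockets=1,dies=2,cores=8,threads=1

which at least resembles AMD's real multi-die packages, from

  -cpu Skylake-Server -smp 16,sockets=1,dies=2,cores=8,threads=1

which QEMU happily accepts today, despite there being (as far as I
know) no real Skylake silicon built that way.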

> > It doesn't cause any functional difficulty for QEMU, because internally
> > the QEMU code is itself assuming that all "unsupported" parameters
> > implicitly have a value of '1'.
> > 
> > At the libvirt level, we've allowed applications to set 'parameter=1'
> > when configuring a guest, and pass that through to QEMU.
> > 
> > Deprecating this creates extra difficulty for libvirt, because there's no info
> > exposed from QEMU about which machine types "support" which parameters.
> > Thus, libvirt can't know whether it is valid to pass 'parameter=1' for
> > a given machine type, or whether it will trigger deprecation messages.
> 
> I understand that libvirt is having trouble because there is no interface
> to expose which topology levels the current machine supports. As a
> workaround to eliminate the difficulties at the libvirt level, it's
> ok for me.
> 
> But I believe deprecating the unsupported topology is necessary, so do
> you think it's acceptable to include an interface to expose the supported
> topology if it's going to be deprecated again later?

As above, I think that restrictions based on machine type, while nice and
simple, are incorrect long term. If we did impose restrictions based on
CPU model, then we could trivially expose this info to mgmt apps via the
existing mechanism for querying supported CPU models. Limiting based on
CPU model, however, has potentially greater back compat issues, though
it would be strictly more faithful to hardware.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine
Posted by Zhao Liu 6 months, 2 weeks ago
> I'm failing to see what real world technical problems QEMU faces
> with a parameter being set to '1' by a mgmt app, when QEMU itself
> treats all omitted values as being '1' anyway.
> 
> If we're trying to faithfully model the real world, then restricting
> the topology against machine types still looks inherently wrong.
> The valid topology ought to be constrained based on the named CPU model.
> eg it doesn't make sense to allow 'dies=4' with a Skylake CPU model,
> only an EPYC CPU model, especially if we want to model cache info in
> a way that matches the real world silicon better.

Thanks for pointing this out. This issue is related to the Intel CPU
cache model: currently the Intel code defaults to L3 being shared at the
die level. This could be resolved by defining an accurate default cache
topology level per CPU model and making the Intel CPU models share L3
at the package level, with Cascadelake as the only exception.

Then the user could define other topology levels (die/module) for
Icelake and this wouldn't change the cache topology, unless the user adds
more sockets or further customizes the cache topology in another way [1].
Do you agree with this solution?

[1]: https://lore.kernel.org/qemu-devel/20240220092504.726064-1-zhao1.liu@linux.intel.com/

[snip]

> As above, I think that restrictions based on machine type, while nice and
> simple, are incorrect long term. If we did impose restrictions based on
> CPU model, then we could trivially expose this info to mgmt apps via the
> existing mechanism for querying supported CPU models. Limiting based on
> CPU model, however, has potentially greater back compat issues, though
> it would be strictly more faithful to hardware.

I think as long as the default cache topology model is clearly defined,
users can further customize the CPU topology and adjust the cache
topology based on it. After all, topology is architectural, not CPU
model-specific (Linux support for topology does not take into account
specific CPU models).

For example, on x86, for simplicity, can we assume that all x86 CPU models
support all x86 topology levels (thread/core/module/die/package), without
making distinctions based on specific CPU models?

That way, as long as the user doesn't change the default topology, the
guest's cache and other topology information won't be "corrupted".

And there's one more question: does this rollback mean that -smp's
parameters must have compatible default values for all architectures?

This is related to my SMP cache proposal above [1]: should I provide
default entries (e.g. "default") to be compatible with all architectures,
even if they don't support custom cache topology? Like the following:

-smp 32,sockets=2,dies=2,modules=2,cores=2,threads=2,maxcpus=32,\
     l1d-cache=default,l1i-cache=default,l2-cache=default,l3-cache=default

Thanks,
Zhao
Re: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine
Posted by Daniel P. Berrangé 6 months, 1 week ago
On Tue, May 14, 2024 at 11:49:40AM +0800, Zhao Liu wrote:
> > I'm failing to see what real world technical problems QEMU faces
> > with a parameter being set to '1' by a mgmt app, when QEMU itself
> > treats all omitted values as being '1' anyway.
> > 
> > If we're trying to faithfully model the real world, then restricting
> > the topology against machine types still looks inherently wrong.
> > The valid topology ought to be constrained based on the named CPU model.
> > eg it doesn't make sense to allow 'dies=4' with a Skylake CPU model,
> > only an EPYC CPU model, especially if we want to model cache info in
> > a way that matches the real world silicon better.
> 
> Thanks for figuring out this. This issue is related with Intel CPU
> cache model: currently Intel code defaults L3 shared at die level.
> This could be resolved by defining the accurate default cache topology
> level for CPU model and make Intel CPU models share L3 at package level
> except only Cascadelake.
> 
> Then user could define any other topology levels (die/module) for
> Icelake and this won't change the cache topology, unless the user adds
> more sockets or further customizes the cache topology in another way [1].
> Do you agree with this solution?

Broadly speaking yes. Historically we have created trouble for
ourselves (and/or our users) by allowing creation of "weird"
guest CPU models, which don't resemble those which can be found
in real world silicon. Problems specifically have been around
unsual combinations of CPUID features eg user enabled X, but not Y,
where real silicon always has X + Y enabled, and guest OS assumed
this is always the case.
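
A made-up example of the kind of thing I mean (the exact flags are only
illustrative):

  -cpu Skylake-Client,-ssse3

gives the guest a CPU advertising AVX but not SSSE3, a combination that
no real silicon ships and that some guest code may not cope with.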

So if our named CPU models can more faithfully match what you might
see in terms of cache topology in the real world, that's likely to
be a good thing.

> > As above, I think that restrictions based on machine type, while nice and
> > simple, are incorrect long term. If we did impose restrictions based on
> > CPU model, then we could trivially expose this info to mgmt apps via the
> > existing mechanism for querying supported CPU models. Limiting based on
> > CPU model, however, has potentially greater back compat issues, though
> > it would be strictly more faithful to hardware.
> 
> I think as long as the default cache topology model is clearly defined,
> users can further customize the CPU topology and adjust the cache
> topology based on it. After all, topology is architectural, not CPU
> model-specific (linux support for topology does not take into account
> specific CPU models).
> 
> For example, x86, for simplicity, can we assume that all x86 CPU models
> support all x86 topology levels (thread/core/module/die/package) without
> making distinctions based on specific CPU models?

Hmm, true, if we have direct control over cache topology, the
CPU topology is less critical. I'd still be wary of suggesting
it is a good idea to use CPU topology configs that don't reflect
something the CPU vendor has conceivably used in real silicon.

> That way as long as the user doesn't change the default topology, then
> Guest's cache and other topology information won't be "corrupted".

> And there's one more question, does this rollback mean that smp's
> parameters must have compatible default values for all architectures?

Historically we preferred "sockets" when filling in missing topology;
then more recently we switched to prefer "cores", since high core
counts are generally more common in the real world than high socket
counts.
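
Roughly speaking (the exact behaviour depends on the machine type
version, so treat this as a sketch rather than a guarantee), that means
something like:

  -smp 8    ->  sockets=1,cores=8,threads=1   (recent machine types)
  -smp 8    ->  sockets=8,cores=1,threads=1   (older machine types)

with all other levels defaulting to 1.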

In theory at some point, one might want to fill in 'dies > 0' for
EPYC, or 'modules > 0' for appropriate Intel CPU models, but doing
the reverse, while theoretically valid, would be weird as no such
topology would exist in real silicon.

Ultimately if you're allowing QEMU guest vCPU threads to float
freely across host CPUs, there is little point in setting dies/
modules/threads to a value other than 1, because the guest OS
won't benefit from understanding cache differences for dies/
modules/threads/etc, if the vCPU can be moved between host CPUs
at any time by the host OS scheduler.

Fine grained control over dies/modules/threads only makes sense if
you are strictly pinning vCPU threads 1:1 to host CPUs.

IOW, simply preferring "cores" is a reasonable long term default for
everything, unless the specific target architecture has no concept of
"cores".

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine
Posted by Zhao Liu 6 months, 1 week ago
On Wed, May 15, 2024 at 06:06:56PM +0100, Daniel P. Berrangé wrote:
> Date: Wed, 15 May 2024 18:06:56 +0100
> From: "Daniel P. Berrangé" <berrange@redhat.com>
> Subject: Re: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any
>  machine
> 
> On Tue, May 14, 2024 at 11:49:40AM +0800, Zhao Liu wrote:
> > > I'm failing to see what real world technical problems QEMU faces
> > > with a parameter being set to '1' by a mgmt app, when QEMU itself
> > > treats all omitted values as being '1' anyway.
> > > 
> > > If we're trying to faithfully model the real world, then restricting
> > > the topology against machine types still looks inherently wrong.
> > > The valid topology ought to be constrained based on the named CPU model.
> > > eg it doesn't make sense to allow 'dies=4' with a Skylake CPU model,
> > > only an EPYC CPU model, especially if we want to model cache info in
> > > a way that matches the real world silicon better.
> > 
> > Thanks for figuring out this. This issue is related with Intel CPU
> > cache model: currently Intel code defaults L3 shared at die level.
> > This could be resolved by defining the accurate default cache topology
> > level for CPU model and make Intel CPU models share L3 at package level
> > except only Cascadelake.
> > 
> > Then user could define any other topology levels (die/module) for
> > Icelake and this won't change the cache topology, unless the user adds
> > more sockets or further customizes the cache topology in another way [1].
> > Do you agree with this solution?
> 
> Broadly speaking yes. Historically we have created trouble for
> ourselves (and/or our users) by allowing creation of "weird"
> guest CPU models, which don't resemble those which can be found
> in real world silicon. Problems specifically have been around
> unusual combinations of CPUID features eg user enabled X, but not Y,
> where real silicon always has X + Y enabled, and guest OS assumed
> this is always the case.
> 
> So if our named CPU models can more faithfully match what you might
> see in terms of cache topology in the real world, that's likely to
> be a good thing.

Good to know what you think; it's an important guide for our work.

> > > As above, I think that restrictions based on machine type, while nice and
> > > simple, are incorrect long term. If we did impose restrictions based on
> > > CPU model, then we could trivially expose this info to mgmt apps via the
> > > existing mechanism for querying supported CPU models. Limiting based on
> > > CPU model, however, has potentially greater back compat issues, though
> > > it would be strictly more faithful to hardware.
> > 
> > I think as long as the default cache topology model is clearly defined,
> > users can further customize the CPU topology and adjust the cache
> > topology based on it. After all, topology is architectural, not CPU
> > model-specific (linux support for topology does not take into account
> > specific CPU models).
> > 
> > For example, x86, for simplicity, can we assume that all x86 CPU models
> > support all x86 topology levels (thread/core/module/die/package) without
> > making distinctions based on specific CPU models?
> 
> Hmm, true, if we have direct control over cache topology, the
> CPU topology is less critical. I'd still be wary of suggesting
> it is a good idea to use CPU topology configs that don't reflect
> something the CPU vendor has conceivably used in real silicon.

Thank you! Yes, besides cache, it may affect other features like
Anthony's RAPL support, where the MSRs are (die/package) topology-aware.

There are still a lot of open questions here; I'll take your warning
into serious consideration.

> > That way as long as the user doesn't change the default topology, then
> > Guest's cache and other topology information won't be "corrupted".
> 
> > And there's one more question, does this rollback mean that smp's
> > parameters must have compatible default values for all architectures?
> 
> Historically we preferred "sockets", when filling missing topology,
> then more recently we switched to prefer "cores", since high core
> counts are generally more common in real world than high socket
> counts.
> 
> In theory at some point, one might want to fill in 'dies > 0' for
> EPYC, or 'modules > 0' for appropriate Intel CPU models, but doing
> the reverse, while theoretically valid, would be weird as no such
> topology would exist in real silicon.
> 
> Ultimately if you're allowing QEMU guest vCPU threads to float
> freely across host CPUs, there is little point in setting dies/
> modules/threads to a value other than 1, because the guest OS
> won't benefit from understanding cache differences for dies/
> modules/threads/etc, if the vCPU can be moved between host CPUs
> at any time by the host OS scheduler.
> 
> Fine grained control over dies/modules/threads only makes sense if
> you are strictly pinning vCPU threads 1:1 to host CPUs.
> 
> IOW, simply preferring "cores" for everything is a reasonable
> default long term plan for everything, unless the specific
> architecture target has no concept of "cores".

Thank you very much for the explanation; it has helped me understand
the meaning of SMP much better!

It makes sense to default the cache topology configuration to the core
level; I'll check further to see whether it would conflict with the
current x86 default cache topology encoding model and how to resolve
any potential conflict.

CPU topology building is centralized in the generic SMP code, while
cache topology is scattered throughout the architecture-specific code,
which is a significant difference between the two. I'll refresh the SMP
cache series later.

Regards,
Zhao