[PATCH v2 00/17] xen: support per-cpupool scheduling granularity
Posted by Juergen Gross 3 years, 4 months ago
Support scheduling granularity per cpupool. Setting the granularity is
done via hypfs, which needed to gain dynamic entries for that purpose.
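
For illustration, the nodes added by this series look roughly like this
(the docs patch in the series has the authoritative syntax):

    /cpupool/                          one directory per existing cpupool
    /cpupool/<cpupool-id>/sched-gran   = ("cpu" | "core" | "socket")

with the last patch of the series making the sched-gran entry writable.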

Apart from the additional hypfs functionality, the main change for
cpupools is the support for moving a domain to a cpupool with a
different granularity, as this requires modifying the scheduling
unit/vcpu relationship.
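
Just to illustrate the unit/vcpu relationship: the granularity determines
how many vcpus are bundled into one scheduling unit, so the number of
units backing a domain changes with the granularity of its cpupool.
A hypothetical helper (not part of the series) would be:

    /* vcpus are grouped into units of "gran" vcpus each. */
    static unsigned int units_for_gran(unsigned int nr_vcpus,
                                       unsigned int gran)
    {
        return DIV_ROUND_UP(nr_vcpus, gran); /* e.g. 4 vcpus, gran 2 -> 2 */
    }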

I have tried to do the hypfs modifications in a rather generic way in
order to be able to use the same infrastructure in other cases, too
(e.g. for per-domain entries).

The complete series has been tested by creating cpupools with different
granularities and moving busy and idle domains between those.

Changes in V2:
- Added several new patches, especially for some further cleanups in
  cpupool.c.
- Completely reworked the locking scheme with dynamic directories:
  locking of resources (cpupools in this series) is now done via new
  callbacks which are called when traversing the hypfs tree (see the
  sketch below). This removes the need to add locking to each
  hypfs-related cpupool function and ensures data integrity across
  multiple callbacks.
- Reordered the first few patches in order to have the already acked
  pure cleanup patches first.
- Addressed several comments.
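
Roughly, the reworked locking looks like this (simplified sketch, not
the exact code of the series):

    /* Per-node callbacks; enter()/exit() bracket all accesses below a node. */
    struct hypfs_funcs {
        int (*enter)(const struct hypfs_entry *entry);
        void (*exit)(const struct hypfs_entry *entry);
        /* read/write/getsize/findentry callbacks omitted here */
    };

    /* The cpupool directory takes the cpupool lock when the walk enters it... */
    static int cpupool_dir_enter(const struct hypfs_entry *entry)
    {
        spin_lock(&cpupool_lock);
        return 0;
    }

    /* ...and drops it again when the walk leaves, so all callbacks in
       between operate on a consistent set of cpupools. */
    static void cpupool_dir_exit(const struct hypfs_entry *entry)
    {
        spin_unlock(&cpupool_lock);
    }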

Juergen Gross (17):
  xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
  xen/cpupool: add missing bits for per-cpupool scheduling granularity
  xen/cpupool: sort included headers in cpupool.c
  xen/cpupool: switch cpupool id to unsigned
  xen/cpupool: switch cpupool list to normal list interface
  xen/cpupool: use ERR_PTR() for returning error cause from
    cpupool_create()
  xen/cpupool: support moving domain between cpupools with different
    granularity
  docs: fix hypfs path documentation
  xen/hypfs: move per-node function pointers into a dedicated struct
  xen/hypfs: pass real failure reason up from hypfs_get_entry()
  xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
  xen/hypfs: add new enter() and exit() per node callbacks
  xen/hypfs: support dynamic hypfs nodes
  xen/hypfs: add support for id-based dynamic directories
  xen/cpupool: add cpupool directories
  xen/cpupool: add scheduling granularity entry to cpupool entries
  xen/cpupool: make per-cpupool sched-gran hypfs node writable

 docs/misc/hypfs-paths.pandoc |  18 +-
 xen/common/hypfs.c           | 336 ++++++++++++++++++++++----
 xen/common/sched/core.c      | 137 ++++++++---
 xen/common/sched/cpupool.c   | 446 +++++++++++++++++++++++++++--------
 xen/common/sched/private.h   |  15 +-
 xen/include/xen/hypfs.h      | 140 ++++++++---
 xen/include/xen/param.h      |  15 +-
 xen/include/xen/sched.h      |   4 +-
 8 files changed, 875 insertions(+), 236 deletions(-)

-- 
2.26.2


Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
Posted by Andrew Cooper 3 years, 4 months ago
On 01/12/2020 08:21, Juergen Gross wrote:
> Support scheduling granularity per cpupool. Setting the granularity is
> done via hypfs, which needed to gain dynamic entries for that purpose.
>
> Apart from the additional hypfs functionality, the main change for
> cpupools is the support for moving a domain to a cpupool with a
> different granularity, as this requires modifying the scheduling
> unit/vcpu relationship.
>
> I have tried to do the hypfs modifications in a rather generic way in
> order to be able to use the same infrastructure in other cases, too
> (e.g. for per-domain entries).
>
> The complete series has been tested by creating cpupools with different
> granularities and moving busy and idle domains between those.
>
> Changes in V2:
> - Added several new patches, especially for some further cleanups in
>   cpupool.c.
> - Completely reworked the locking scheme with dynamic directories:
>   locking of resources (cpupools in this series) is now done via new
>   callbacks which are called when traversing the hypfs tree (see the
>   sketch below). This removes the need to add locking to each
>   hypfs-related cpupool function and ensures data integrity across
>   multiple callbacks.
> - Reordered the first few patches in order to have the already acked
>   pure cleanup patches first.
> - Addressed several comments.
>
> Juergen Gross (17):
>   xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
>   xen/cpupool: add missing bits for per-cpupool scheduling granularity
>   xen/cpupool: sort included headers in cpupool.c
>   xen/cpupool: switch cpupool id to unsigned
>   xen/cpupool: switch cpupool list to normal list interface
>   xen/cpupool: use ERR_PTR() for returning error cause from
>     cpupool_create()
>   xen/cpupool: support moving domain between cpupools with different
>     granularity
>   docs: fix hypfs path documentation
>   xen/hypfs: move per-node function pointers into a dedicated struct
>   xen/hypfs: pass real failure reason up from hypfs_get_entry()
>   xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
>   xen/hypfs: add new enter() and exit() per node callbacks
>   xen/hypfs: support dynamic hypfs nodes
>   xen/hypfs: add support for id-based dynamic directories
>   xen/cpupool: add cpupool directories
>   xen/cpupool: add scheduling granularity entry to cpupool entries
>   xen/cpupool: make per-cpupool sched-gran hypfs node writable

Gitlab CI is fairly (but not completely) reliably hitting a failure in
ARM randconfig against this series only.

https://gitlab.com/xen-project/patchew/xen/-/pipelines/225445864 is one
example.

Error is:

cpupool.c:102:12: error: 'sched_gran_get' defined but not used
[-Werror=unused-function]
  102 | static int sched_gran_get(const char *str, enum sched_gran *mode)
      |            ^~~~~~~~~~~~~~



Weirdly, there is a second diagnostic showing up which appears to be
unrelated and non-fatal, but is concerning nonetheless:

mem_access.c: In function 'p2m_mem_access_check':
mem_access.c:227:6: note: parameter passing for argument of type 'const
struct npfec' changed in GCC 9.1
  227 | bool p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct
npfec npfec)
      |      ^~~~~~~~~~~~~~~~~~~~

It appears to be related to
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88469 and is letting us
know that the ABI changed.  However, Xen is an embedded project with no
external linkage, so we can probably compile with -Wno-psabi and be done
with it.

~Andrew, in lieu of a real CI robot.

Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
Posted by Jürgen Groß 3 years, 4 months ago
On 05.12.20 00:53, Andrew Cooper wrote:
> On 01/12/2020 08:21, Juergen Gross wrote:
>> Support scheduling granularity per cpupool. Setting the granularity is
>> done via hypfs, which needed to gain dynamic entries for that purpose.
>>
>> Apart from the additional hypfs functionality, the main change for
>> cpupools is the support for moving a domain to a cpupool with a
>> different granularity, as this requires modifying the scheduling
>> unit/vcpu relationship.
>>
>> I have tried to do the hypfs modifications in a rather generic way in
>> order to be able to use the same infrastructure in other cases, too
>> (e.g. for per-domain entries).
>>
>> The complete series has been tested by creating cpupools with different
>> granularities and moving busy and idle domains between those.
>>
>> Changes in V2:
>> - Added several new patches, especially for some further cleanups in
>>    cpupool.c.
>> - Completely reworked the locking scheme with dynamic directories:
>>    locking of resources (cpupools in this series) is now done via new
>>    callbacks which are called when traversing the hypfs tree (see the
>>    sketch below). This removes the need to add locking to each
>>    hypfs-related cpupool function and ensures data integrity across
>>    multiple callbacks.
>> - Reordered the first few patches in order to have the already acked
>>    pure cleanup patches first.
>> - Addressed several comments.
>>
>> Juergen Gross (17):
>>    xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
>>    xen/cpupool: add missing bits for per-cpupool scheduling granularity
>>    xen/cpupool: sort included headers in cpupool.c
>>    xen/cpupool: switch cpupool id to unsigned
>>    xen/cpupool: switch cpupool list to normal list interface
>>    xen/cpupool: use ERR_PTR() for returning error cause from
>>      cpupool_create()
>>    xen/cpupool: support moving domain between cpupools with different
>>      granularity
>>    docs: fix hypfs path documentation
>>    xen/hypfs: move per-node function pointers into a dedicated struct
>>    xen/hypfs: pass real failure reason up from hypfs_get_entry()
>>    xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
>>    xen/hypfs: add new enter() and exit() per node callbacks
>>    xen/hypfs: support dynamic hypfs nodes
>>    xen/hypfs: add support for id-based dynamic directories
>>    xen/cpupool: add cpupool directories
>>    xen/cpupool: add scheduling granularity entry to cpupool entries
>>    xen/cpupool: make per-cpupool sched-gran hypfs node writable
> 
> Gitlab CI is fairly (but not completely) reliably hitting a failure in
> ARM randconfig against this series only.
> 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/225445864 is one
> example.
> 
> Error is:
> 
> cpupool.c:102:12: error: 'sched_gran_get' defined but not used
> [-Werror=unused-function]
>    102 | static int sched_gran_get(const char *str, enum sched_gran *mode)
>        |            ^~~~~~~~~~~~~~
> 

Ah, this is without CONFIG_HYPFS.

Will fix.
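
Probably something along one of these lines (just a sketch, not
necessarily what v3 will end up doing):

    /* Either build the helper only together with its hypfs users... */
    #ifdef CONFIG_HYPFS
    static int sched_gran_get(const char *str, enum sched_gran *mode);
    #endif

    /* ...or keep it unconditional and mark the potential non-use as
       intentional: */
    static int __maybe_unused sched_gran_get(const char *str,
                                             enum sched_gran *mode);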


Juergen
Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
Posted by Jan Beulich 3 years, 4 months ago
On 05.12.2020 00:53, Andrew Cooper wrote:
> Weirdly, there is a second diagnostic showing up which appears to be
> unrelated and non-fatal, but is concerning nonetheless:
> 
> mem_access.c: In function 'p2m_mem_access_check':
> mem_access.c:227:6: note: parameter passing for argument of type 'const
> struct npfec' changed in GCC 9.1
>   227 | bool p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct
> npfec npfec)
>       |      ^~~~~~~~~~~~~~~~~~~~
> 
> It appears to be related to
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88469 and is letting us
> know that the ABI changed.  However, Xen is an embedded project with no
> external linkage, so we can probably compile with -Wno-psabi and be done
> with it.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91710

I have no idea why the fix suggested there hasn't made it into the
code base yet - iirc I had taken it verbatim and it got rid of the
problem in my builds of the compiler.

I don't, btw, think us being "embedded" is an excuse to suppress
the warning. If there really was a code generation difference here
(and not just a false positive warning), an incremental build
across a switch between older and newer gcc would then be broken.

Jan