[PATCH 0/3] spapr, spapr_numa: fix max-associativity-domains

Daniel Henrique Barboza posted 3 patches 3 years, 3 months ago
Maintainers: David Gibson <david@gibson.dropbear.id.au>, Greg Kurz <groug@kaod.org>
[PATCH 0/3] spapr, spapr_numa: fix max-associativity-domains
Posted by Daniel Henrique Barboza 3 years, 3 months ago
Hi,

Patches 02 and 03 contain fixes for a problem Cedric found out when
booting TCG guests with multiple NUMA nodes. See patch 03 commit
message for more info.

First patch is an unrelated cleanup I did while investigating.

Daniel Henrique Barboza (3):
  spapr: move spapr_machine_using_legacy_numa() to spapr_numa.c
  spapr_numa.c: create spapr_numa_initial_nvgpu_NUMA_id() helper
  spapr_numa.c: fix ibm,max-associativity-domains calculation

 hw/ppc/spapr.c              | 21 ++------------------
 hw/ppc/spapr_numa.c         | 39 ++++++++++++++++++++++++++++++++++++-
 include/hw/ppc/spapr.h      |  1 -
 include/hw/ppc/spapr_numa.h |  1 +
 4 files changed, 41 insertions(+), 21 deletions(-)

-- 
2.26.2


Re: [PATCH 0/3] spapr, spapr_numa: fix max-associativity-domains
Posted by Greg Kurz 3 years, 3 months ago
On Thu, 28 Jan 2021 12:17:28 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> Hi,
> 
> Patches 02 and 03 contain fixes for a problem Cedric found out when
> booting TCG guests with multiple NUMA nodes. See patch 03 commit
> message for more info.
> 

This paragraph mentions "TCG guests", but I see nothing that is
specific to TCG in these patches... so I expect the problem to
also exist with KVM, right?

> First patch is an unrelated cleanup I did while investigating.
> 
> Daniel Henrique Barboza (3):
>   spapr: move spapr_machine_using_legacy_numa() to spapr_numa.c
>   spapr_numa.c: create spapr_numa_initial_nvgpu_NUMA_id() helper
>   spapr_numa.c: fix ibm,max-associativity-domains calculation
> 
>  hw/ppc/spapr.c              | 21 ++------------------
>  hw/ppc/spapr_numa.c         | 39 ++++++++++++++++++++++++++++++++++++-
>  include/hw/ppc/spapr.h      |  1 -
>  include/hw/ppc/spapr_numa.h |  1 +
>  4 files changed, 41 insertions(+), 21 deletions(-)
> 


Re: [PATCH 0/3] spapr, spapr_numa: fix max-associativity-domains
Posted by Daniel Henrique Barboza 3 years, 3 months ago

On 1/28/21 1:03 PM, Greg Kurz wrote:
> On Thu, 28 Jan 2021 12:17:28 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
>> Hi,
>>
>> Patches 02 and 03 contain fixes for a problem Cedric found out when
>> booting TCG guests with multiple NUMA nodes. See patch 03 commit
>> message for more info.
>>
> 
> This paragraph mentions "TCG guests", but I see nothing that is
> specific to TCG in these patches... so I expect the problem to
> also exist with KVM, right?

Yeah. I mentioned TCG because this is the use case Cedric reproduced
the bug with, but I myself had no problems reproducing it with
accel=kvm as well.


DHB

> 
>> First patch is an unrelated cleanup I did while investigating.
>>
>> Daniel Henrique Barboza (3):
>>    spapr: move spapr_machine_using_legacy_numa() to spapr_numa.c
>>    spapr_numa.c: create spapr_numa_initial_nvgpu_NUMA_id() helper
>>    spapr_numa.c: fix ibm,max-associativity-domains calculation
>>
>>   hw/ppc/spapr.c              | 21 ++------------------
>>   hw/ppc/spapr_numa.c         | 39 ++++++++++++++++++++++++++++++++++++-
>>   include/hw/ppc/spapr.h      |  1 -
>>   include/hw/ppc/spapr_numa.h |  1 +
>>   4 files changed, 41 insertions(+), 21 deletions(-)
>>
> 

Re: [PATCH 0/3] spapr, spapr_numa: fix max-associativity-domains
Posted by Cédric Le Goater 3 years, 3 months ago
On 1/28/21 6:05 PM, Daniel Henrique Barboza wrote:
> 
> 
> On 1/28/21 1:03 PM, Greg Kurz wrote:
>> On Thu, 28 Jan 2021 12:17:28 -0300
>> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Patches 02 and 03 contain fixes for a problem Cedric found out when
>>> booting TCG guests with multiple NUMA nodes. See patch 03 commit
>>> message for more info.
>>>
>>
>> This paragraph mentions "TCG guests", but I see nothing that is
>> specific to TCG in these patches... so I expect the problem to
>> also exist with KVM, right?
> 
> Yeah. I mentioned TCG because this is the use case Cedric reproduced
> the bug with, but I myself had no problems reproducing it with
> accel=kvm as well.

I was also seeing the issue on KVM and I am still seeing it with 
this patchset. It's gone on TCG however. 

C.

Re: [PATCH 0/3] spapr, spapr_numa: fix max-associativity-domains
Posted by Cédric Le Goater 3 years, 3 months ago
On 1/28/21 6:13 PM, Cédric Le Goater wrote:
> On 1/28/21 6:05 PM, Daniel Henrique Barboza wrote:
>>
>>
>> On 1/28/21 1:03 PM, Greg Kurz wrote:
>>> On Thu, 28 Jan 2021 12:17:28 -0300
>>> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Patches 02 and 03 contain fixes for a problem Cedric found out when
>>>> booting TCG guests with multiple NUMA nodes. See patch 03 commit
>>>> message for more info.
>>>>
>>>
>>> This paragraph mentions "TCG guests", but I see nothing that is
>>> specific to TCG in these patches... so I expect the problem to
>>> also exist with KVM, right?
>>
>> Yeah. I mentioned TCG because this is the use case Cedric reproduced
>> the bug with, but I myself had no problems reproducing it with
>> accel=kvm as well.
> 
> I was also seeing the issue on KVM and I am still seeing it with 
> this patchset. It's gone on TCG however. 

Oops, sorry. All good on KVM also!

Tested-by: Cédric Le Goater <clg@kaod.org>

We can now safely use for_each_node() in the kernel.

Thanks Daniel,

C.