home_node_associativity reply now uses the associativity
values for tcpu->node_id provided by numa_assoc_array.
This will avoid further changes in this code when numa_assoc_array
changes values, but it won't be enough to prevent further changes
if (note to self, in Portuguese: mention here that if the array size
changes, this function has to be touched as well, and that the
ASSOCIATIVITY() macro makes automating all of this very unreadable)
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
hw/ppc/spapr_hcall.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index c1d01228c6..2ec30efdcb 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1878,9 +1878,13 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
target_ulong opcode,
target_ulong *args)
{
+ SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
target_ulong flags = args[0];
target_ulong procno = args[1];
PowerPCCPU *tcpu;
+ uint32_t assoc_domain1;
+ uint32_t assoc_domain2;
+ uint32_t assoc_domain3;
int idx;
/* only support procno from H_REGISTER_VPA */
@@ -1893,13 +1897,21 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
return H_P2;
}
+ /*
+ * Index 0 is the ibm,associativity size of the node,
+ * which isn't relevant here.
+ */
+ assoc_domain1 = smc->numa_assoc_array[tcpu->node_id][1];
+ assoc_domain2 = smc->numa_assoc_array[tcpu->node_id][2];
+ assoc_domain3 = smc->numa_assoc_array[tcpu->node_id][3];
+
/* sequence is the same as in the "ibm,associativity" property */
idx = 0;
#define ASSOCIATIVITY(a, b) (((uint64_t)(a) << 32) | \
((uint64_t)(b) & 0xffffffff))
- args[idx++] = ASSOCIATIVITY(0, 0);
- args[idx++] = ASSOCIATIVITY(0, tcpu->node_id);
+ args[idx++] = ASSOCIATIVITY(assoc_domain1, assoc_domain2);
+ args[idx++] = ASSOCIATIVITY(assoc_domain3, tcpu->node_id);
args[idx++] = ASSOCIATIVITY(procno, -1);
for ( ; idx < 6; idx++) {
args[idx] = -1;
--
2.26.2
On Tue, Sep 01, 2020 at 09:56:45AM -0300, Daniel Henrique Barboza wrote:
> home_node_associativity reply now uses the associativity
> values for tcpu->node_id provided by numa_assoc_array.
>
> This will avoid further changes in this code when numa_assoc_array
> changes values, but it won't be enough to prevent further changes
> if (note to self, in Portuguese: mention here that if the array size
> changes, this function has to be touched as well, and that the
> ASSOCIATIVITY() macro makes automating all of this very unreadable)
Uh.. I'm guessing that was a note to yourself you intended to
translate before publishing?
>
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
> hw/ppc/spapr_hcall.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index c1d01228c6..2ec30efdcb 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1878,9 +1878,13 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
> target_ulong opcode,
> target_ulong *args)
You could move this function to spapr_numa.c as well (the name's a bit
misleading, but spapr_hcall.c isn't really intended to hold *all*
hcall implementations, just the ones that don't have somewhere better
to live).
> {
> + SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
> target_ulong flags = args[0];
> target_ulong procno = args[1];
> PowerPCCPU *tcpu;
> + uint32_t assoc_domain1;
> + uint32_t assoc_domain2;
> + uint32_t assoc_domain3;
> int idx;
>
> /* only support procno from H_REGISTER_VPA */
> @@ -1893,13 +1897,21 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
> return H_P2;
> }
>
> + /*
> + * Index 0 is the ibm,associativity size of the node,
> + * which isn't relevant here.
> + */
> + assoc_domain1 = smc->numa_assoc_array[tcpu->node_id][1];
> + assoc_domain2 = smc->numa_assoc_array[tcpu->node_id][2];
> + assoc_domain3 = smc->numa_assoc_array[tcpu->node_id][3];
Using all these temporaries is a little ugly. Maybe do something like:
uint32_t *assoc = smc->numa_assoc_array[tcpu->node_id];
Then just use assoc[1], assoc[2] etc. below.
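Concretely, that would amount to something like the following (just a
sketch against the patch above; it assumes, as the patch does, that
indices 1..3 of the per-node array hold the relevant domain values):

    uint32_t *assoc = smc->numa_assoc_array[tcpu->node_id];

    /* index 0 is the ibm,associativity size of the node, skip it */
    args[idx++] = ASSOCIATIVITY(assoc[1], assoc[2]);
    args[idx++] = ASSOCIATIVITY(assoc[3], tcpu->node_id);
    args[idx++] = ASSOCIATIVITY(procno, -1);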
> +
> /* sequence is the same as in the "ibm,associativity" property */
>
> idx = 0;
> #define ASSOCIATIVITY(a, b) (((uint64_t)(a) << 32) | \
> ((uint64_t)(b) & 0xffffffff))
> - args[idx++] = ASSOCIATIVITY(0, 0);
> - args[idx++] = ASSOCIATIVITY(0, tcpu->node_id);
> + args[idx++] = ASSOCIATIVITY(assoc_domain1, assoc_domain2);
> + args[idx++] = ASSOCIATIVITY(assoc_domain3, tcpu->node_id);
> args[idx++] = ASSOCIATIVITY(procno, -1);
> for ( ; idx < 6; idx++) {
> args[idx] = -1;
Better yet would be to make this handle an arbitrary length of assoc
array, further isolating this from the specifics of how we construct
the arrays. Ideally, you'd call a common path with
spapr_numa_fixup_cpu_dt() to get the assoc array for a cpu, then here
just translate it into the in-register format the hcall expects.
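As a rough illustration of that direction (a sketch only: get_vcpu_assoc()
is a hypothetical stand-in for whatever common helper ends up producing
the per-vCPU ibm,associativity array, the first cell is assumed to be the
domain count as in the device tree encoding, and the array is assumed to
already end with the vcpu id cell, as the device tree path does):

    uint32_t *assoc = get_vcpu_assoc(spapr, tcpu); /* hypothetical helper */
    int nr_cells = assoc[0];   /* first cell: number of associativity domains */
    int i;

    idx = 0;
    /* pack two 32-bit associativity cells per 64-bit return register */
    for (i = 1; i <= nr_cells && idx < 6; i += 2, idx++) {
        uint64_t hi = assoc[i];
        uint64_t lo = (i + 1 <= nr_cells) ? assoc[i + 1] : (uint32_t)-1;

        args[idx] = (hi << 32) | lo;
    }

    /* pad the remaining return registers with -1, as the current code does */
    for ( ; idx < 6; idx++) {
        args[idx] = -1;
    }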
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
On 9/2/20 10:46 PM, David Gibson wrote:
> On Tue, Sep 01, 2020 at 09:56:45AM -0300, Daniel Henrique Barboza wrote:
>> home_node_associativity reply now uses the associativity
>> values for tcpu->node_id provided by numa_assoc_array.
>>
>> This will avoid further changes in this code when numa_assoc_array
>> changes values, but it won't be enough to prevent further changes
>> if (note to self, in Portuguese: mention here that if the array size
>> changes, this function has to be touched as well, and that the
>> ASSOCIATIVITY() macro makes automating all of this very unreadable)
>
> Uh.. I'm guessing that was a note to yourself you intended to
> translate before publishing?
Oops! Forgot to edit it :|
>
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>> hw/ppc/spapr_hcall.c | 16 ++++++++++++++--
>> 1 file changed, 14 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> index c1d01228c6..2ec30efdcb 100644
>> --- a/hw/ppc/spapr_hcall.c
>> +++ b/hw/ppc/spapr_hcall.c
>> @@ -1878,9 +1878,13 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
>> target_ulong opcode,
>> target_ulong *args)
>
> You could move this function to spapr_numa.c as well (the name's a bit
> misleading, but spapr_hcall.c isn't really intended to hold *all*
> hcall implementations, just the ones that don't have somewhere better
> to live).
Ok!
>
>> {
>> + SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>> target_ulong flags = args[0];
>> target_ulong procno = args[1];
>> PowerPCCPU *tcpu;
>> + uint32_t assoc_domain1;
>> + uint32_t assoc_domain2;
>> + uint32_t assoc_domain3;
>> int idx;
>>
>> /* only support procno from H_REGISTER_VPA */
>> @@ -1893,13 +1897,21 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
>> return H_P2;
>> }
>>
>> + /*
>> + * Index 0 is the ibm,associativity size of the node,
>> + * which isn't relevant here.
>> + */
>> + assoc_domain1 = smc->numa_assoc_array[tcpu->node_id][1];
>> + assoc_domain2 = smc->numa_assoc_array[tcpu->node_id][2];
>> + assoc_domain3 = smc->numa_assoc_array[tcpu->node_id][3];
>
> Using all these temporaries is a little ugly. Maybe do something like:
> uint32_t *assoc = smc->numa_assoc_array[tcpu->node_id];
>
> Then just use assoc[1], assoc[2] etc. below.
>
>> +
>> /* sequence is the same as in the "ibm,associativity" property */
>>
>> idx = 0;
>> #define ASSOCIATIVITY(a, b) (((uint64_t)(a) << 32) | \
>> ((uint64_t)(b) & 0xffffffff))
>> - args[idx++] = ASSOCIATIVITY(0, 0);
>> - args[idx++] = ASSOCIATIVITY(0, tcpu->node_id);
>> + args[idx++] = ASSOCIATIVITY(assoc_domain1, assoc_domain2);
>> + args[idx++] = ASSOCIATIVITY(assoc_domain3, tcpu->node_id);
>> args[idx++] = ASSOCIATIVITY(procno, -1);
>> for ( ; idx < 6; idx++) {
>> args[idx] = -1;
>
> Better yet would be to make this handle an arbitrary length of assoc
> array, further isolating this from the specifics of how we construct
> the arrays. Ideally, you'd call a common path with
> spapr_numa_fixup_cpu_dt() to get the assoc array for a cpu, then here
> just translate it into the in-register format the hcall expects.
Since we're moving this to spapr_numa.c, I guess it's ok to add
more juggling to handle an arbitrary size. I got a bit nervous about
adding too much stuff here in spapr_hcall.c.
Thanks,
DHB
>