From: David Gibson
To: peter.maydell@linaro.org
Subject: [PULL 27/44] spapr_numa.c: parametrize FORM1 macros
Date: Thu, 30 Sep 2021 15:44:09 +1000
Message-Id: <20210930054426.357344-28-david@gibson.dropbear.id.au>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210930054426.357344-1-david@gibson.dropbear.id.au>
References: <20210930054426.357344-1-david@gibson.dropbear.id.au>
MIME-Version: 1.0
Cc: Daniel Henrique Barboza, mark.cave-ayland@ilande.co.uk,
    qemu-devel@nongnu.org, groug@kaod.org, hpoussin@reactos.org,
    clg@kaod.org, qemu-ppc@nongnu.org, philmd@redhat.com, David Gibson
Content-Type: text/plain; charset="utf-8"

From: Daniel Henrique Barboza

The next preliminary step to introduce NUMA FORM2 affinity is to make
the existing code independent of the FORM1 macros and values, i.e.
MAX_DISTANCE_REF_POINTS, NUMA_ASSOC_SIZE and VCPU_ASSOC_SIZE. This
patch accomplishes that by doing the following:

- move the NUMA related macros from spapr.h to spapr_numa.c, where
  they are used. spapr.h instead gets a 'NUMA_NODES_MAX_NUM' macro
  that refers to the maximum number of NUMA nodes, including GPU
  nodes, that the machine can support;

- MAX_DISTANCE_REF_POINTS and NUMA_ASSOC_SIZE are renamed to
  FORM1_DIST_REF_POINTS and FORM1_NUMA_ASSOC_SIZE.
  These FORM1 specific macros are used in the FORM1 init functions;

- code that uses MAX_DISTANCE_REF_POINTS now retrieves the
  max_dist_ref_points value using get_max_dist_ref_points().
  NUMA_ASSOC_SIZE is replaced by get_numa_assoc_size() and
  VCPU_ASSOC_SIZE is replaced by get_vcpu_assoc_size(). These
  functions are used by the generic device tree functions and
  h_home_node_associativity() and will allow them to switch between
  FORM1 and FORM2 without changing their core logic.

Reviewed-by: Greg Kurz
Signed-off-by: Daniel Henrique Barboza
Message-Id: <20210920174947.556324-4-danielhb413@gmail.com>
Signed-off-by: David Gibson
---
 hw/ppc/spapr_numa.c    | 74 ++++++++++++++++++++++++++++++------------
 include/hw/ppc/spapr.h | 28 ++++++++--------
 2 files changed, 67 insertions(+), 35 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index bf520d42b2..08e2d6aed8 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -19,6 +19,33 @@
 /* Moved from hw/ppc/spapr_pci_nvlink2.c */
 #define SPAPR_GPU_NUMA_ID (cpu_to_be32(1))
 
+/*
+ * Retrieves max_dist_ref_points of the current NUMA affinity.
+ */
+static int get_max_dist_ref_points(SpaprMachineState *spapr)
+{
+    return FORM1_DIST_REF_POINTS;
+}
+
+/*
+ * Retrieves numa_assoc_size of the current NUMA affinity.
+ */
+static int get_numa_assoc_size(SpaprMachineState *spapr)
+{
+    return FORM1_NUMA_ASSOC_SIZE;
+}
+
+/*
+ * Retrieves vcpu_assoc_size of the current NUMA affinity.
+ *
+ * vcpu_assoc_size is the size of ibm,associativity array
+ * for CPUs, which has an extra element (vcpu_id) in the end.
+ */
+static int get_vcpu_assoc_size(SpaprMachineState *spapr)
+{
+    return get_numa_assoc_size(spapr) + 1;
+}
+
 static bool spapr_numa_is_symmetrical(MachineState *ms)
 {
     int src, dst;
@@ -96,7 +123,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
      * considered a match with associativity domains of node 0.
      */
     for (i = 1; i < nb_numa_nodes; i++) {
-        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+        for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
             spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
         }
     }
@@ -134,7 +161,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
      *
      * The Linux kernel will assume that the distance between src and
      * dst, in this case of no match, is 10 (local distance) doubled
-     * for each NUMA it didn't match. We have MAX_DISTANCE_REF_POINTS
+     * for each NUMA it didn't match. We have FORM1_DIST_REF_POINTS
      * levels (4), so this gives us 10*2*2*2*2 = 160.
      *
      * This logic can be seen in the Linux kernel source code, as of
@@ -169,7 +196,7 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
 
     /*
      * For all associativity arrays: first position is the size,
-     * position MAX_DISTANCE_REF_POINTS is always the numa_id,
+     * position FORM1_DIST_REF_POINTS is always the numa_id,
      * represented by the index 'i'.
      *
      * This will break on sparse NUMA setups, when/if QEMU starts
@@ -177,8 +204,8 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
      * 'i' will be a valid node_id set by the user.
      */
     for (i = 0; i < nb_numa_nodes; i++) {
-        spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
-        spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
+        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
+        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
     }
 
     /*
@@ -192,15 +219,15 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
     max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;
 
     for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
-        spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
+        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
 
-        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+        for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
             uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
                                  SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
             spapr->numa_assoc_array[i][j] = gpu_assoc;
         }
 
-        spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
+        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
     }
 
     /*
@@ -234,13 +261,15 @@ void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
 {
     _FDT((fdt_setprop(fdt, offset, "ibm,associativity",
                       spapr->numa_assoc_array[nodeid],
-                      sizeof(spapr->numa_assoc_array[nodeid]))));
+                      get_numa_assoc_size(spapr) * sizeof(uint32_t))));
 }
 
 static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
                                            PowerPCCPU *cpu)
 {
-    uint32_t *vcpu_assoc = g_new(uint32_t, VCPU_ASSOC_SIZE);
+    int max_distance_ref_points = get_max_dist_ref_points(spapr);
+    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
+    uint32_t *vcpu_assoc = g_new(uint32_t, vcpu_assoc_size);
     int index = spapr_get_vcpu_id(cpu);
 
     /*
@@ -249,10 +278,10 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
      * 0, put cpu_id last, then copy the remaining associativity
      * domains.
      */
-    vcpu_assoc[0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS + 1);
-    vcpu_assoc[VCPU_ASSOC_SIZE - 1] = cpu_to_be32(index);
+    vcpu_assoc[0] = cpu_to_be32(max_distance_ref_points + 1);
+    vcpu_assoc[vcpu_assoc_size - 1] = cpu_to_be32(index);
     memcpy(vcpu_assoc + 1, spapr->numa_assoc_array[cpu->node_id] + 1,
-           (VCPU_ASSOC_SIZE - 2) * sizeof(uint32_t));
+           (vcpu_assoc_size - 2) * sizeof(uint32_t));
 
     return vcpu_assoc;
 }
 
@@ -261,12 +290,13 @@ int spapr_numa_fixup_cpu_dt(SpaprMachineState *spapr, void *fdt,
                             int offset, PowerPCCPU *cpu)
 {
     g_autofree uint32_t *vcpu_assoc = NULL;
+    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
 
     vcpu_assoc = spapr_numa_get_vcpu_assoc(spapr, cpu);
 
     /* Advertise NUMA via ibm,associativity */
     return fdt_setprop(fdt, offset, "ibm,associativity", vcpu_assoc,
-                       VCPU_ASSOC_SIZE * sizeof(uint32_t));
+                       vcpu_assoc_size * sizeof(uint32_t));
 }
 
 
@@ -274,17 +304,18 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
                                          int offset)
 {
     MachineState *machine = MACHINE(spapr);
+    int max_distance_ref_points = get_max_dist_ref_points(spapr);
     int nb_numa_nodes = machine->numa_state->num_nodes;
     int nr_nodes = nb_numa_nodes ? nb_numa_nodes : 1;
     uint32_t *int_buf, *cur_index, buf_len;
     int ret, i;
 
     /* ibm,associativity-lookup-arrays */
-    buf_len = (nr_nodes * MAX_DISTANCE_REF_POINTS + 2) * sizeof(uint32_t);
+    buf_len = (nr_nodes * max_distance_ref_points + 2) * sizeof(uint32_t);
     cur_index = int_buf = g_malloc0(buf_len);
     int_buf[0] = cpu_to_be32(nr_nodes);
     /* Number of entries per associativity list */
-    int_buf[1] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
+    int_buf[1] = cpu_to_be32(max_distance_ref_points);
     cur_index += 2;
     for (i = 0; i < nr_nodes; i++) {
         /*
@@ -293,8 +324,8 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
          */
         uint32_t *associativity = spapr->numa_assoc_array[i];
         memcpy(cur_index, ++associativity,
-               sizeof(uint32_t) * MAX_DISTANCE_REF_POINTS);
-        cur_index += MAX_DISTANCE_REF_POINTS;
+               sizeof(uint32_t) * max_distance_ref_points);
+        cur_index += max_distance_ref_points;
     }
     ret = fdt_setprop(fdt, offset, "ibm,associativity-lookup-arrays", int_buf,
                       (cur_index - int_buf) * sizeof(uint32_t));
@@ -383,6 +414,7 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
     target_ulong procno = args[1];
     PowerPCCPU *tcpu;
     int idx, assoc_idx;
+    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
 
     /* only support procno from H_REGISTER_VPA */
     if (flags != 0x1) {
@@ -401,7 +433,7 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
      * 12 associativity domains for vcpus. Assert and bail if that's
      * not the case.
      */
-    G_STATIC_ASSERT((VCPU_ASSOC_SIZE - 1) <= 12);
+    g_assert((vcpu_assoc_size - 1) <= 12);
 
     vcpu_assoc = spapr_numa_get_vcpu_assoc(spapr, tcpu);
     /* assoc_idx starts at 1 to skip associativity size */
@@ -422,9 +454,9 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
          * macro. The ternary will fill the remaining registers with -1
          * after we went through vcpu_assoc[].
         */
-        a = assoc_idx < VCPU_ASSOC_SIZE ?
+        a = assoc_idx < vcpu_assoc_size ?
                 be32_to_cpu(vcpu_assoc[assoc_idx++]) : -1;
-        b = assoc_idx < VCPU_ASSOC_SIZE ?
+        b = assoc_idx < vcpu_assoc_size ?
                 be32_to_cpu(vcpu_assoc[assoc_idx++]) : -1;
 
         args[idx] = ASSOCIATIVITY(a, b);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 637652ad16..814e087e98 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -100,23 +100,23 @@ typedef enum {
 
 #define FDT_MAX_SIZE             0x200000
 
+/* Max number of GPUs per system */
+#define NVGPU_MAX_NUM            6
+
+/* Max number of NUMA nodes */
+#define NUMA_NODES_MAX_NUM       (MAX_NODES + NVGPU_MAX_NUM)
+
 /*
- * NUMA related macros. MAX_DISTANCE_REF_POINTS was taken
- * from Linux kernel arch/powerpc/mm/numa.h. It represents the
- * amount of associativity domains for non-CPU resources.
+ * NUMA FORM1 macros. FORM1_DIST_REF_POINTS was taken from
+ * MAX_DISTANCE_REF_POINTS in arch/powerpc/mm/numa.h from Linux
+ * kernel source. It represents the amount of associativity domains
+ * for non-CPU resources.
  *
- * NUMA_ASSOC_SIZE is the base array size of an ibm,associativity
+ * FORM1_NUMA_ASSOC_SIZE is the base array size of an ibm,associativity
  * array for any non-CPU resource.
- *
- * VCPU_ASSOC_SIZE represents the size of ibm,associativity array
- * for CPUs, which has an extra element (vcpu_id) in the end.
  */
-#define MAX_DISTANCE_REF_POINTS    4
-#define NUMA_ASSOC_SIZE            (MAX_DISTANCE_REF_POINTS + 1)
-#define VCPU_ASSOC_SIZE            (NUMA_ASSOC_SIZE + 1)
-
-/* Max number of these GPUs per a physical box */
-#define NVGPU_MAX_NUM              6
+#define FORM1_DIST_REF_POINTS    4
+#define FORM1_NUMA_ASSOC_SIZE    (FORM1_DIST_REF_POINTS + 1)
 
 typedef struct SpaprCapabilities SpaprCapabilities;
 struct SpaprCapabilities {
@@ -249,7 +249,7 @@ struct SpaprMachineState {
     unsigned gpu_numa_id;
     SpaprTpmProxy *tpm_proxy;
 
-    uint32_t numa_assoc_array[MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE];
+    uint32_t numa_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
 
     Error *fwnmi_migration_blocker;
 };
-- 
2.31.1