From: Igor Mammedov
To: qemu-devel@nongnu.org
Date: Wed, 22 Feb 2017 18:24:08 +0100
Message-Id: <1487784248-112130-1-git-send-email-imammedo@redhat.com>
Subject: [Qemu-devel] [RFC] spapr: ensure that all threads within core are on the same NUMA node
Cc: lvivier@redhat.com, thuth@redhat.com, agraf@suse.de, mdroth@linux.vnet.ibm.com, qemu-ppc@nongnu.org, David Gibson

Threads within a core probably shouldn't be on different NUMA nodes, so if
the user has misconfigured the command line, make QEMU fail to start and let
the user fix it. For now, use the first thread of the core as the source of
the core's node-id.

I'm suggesting this to make sure that it will later be possible to map the
legacy cpu-index based CLI onto the core-id based internal map in the
possible_cpus array and completely eliminate the numa_info[XXX].node_cpu
bitmaps, leaving only possible_cpus as the storage of mapping information.
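As an illustration (not part of the patch; the exact -smp/-numa values below
are hypothetical), a configuration that this change would reject at core
realization time is one where the threads of a single core are spread over
two nodes via the legacy cpus= mapping:

  qemu-system-ppc64 -machine pseries \
      -smp 8,sockets=1,cores=2,threads=4 \
      -numa node,nodeid=0,cpus=0-2 -numa node,nodeid=1,cpus=3-7

Here thread cpu-index 3 belongs to the first core (cpu-indexes 0-3) but is
assigned to node 1, while the core's first thread is on node 0, so
spapr_cpu_core_realize() would now fail with the "node-id must be the same"
error instead of silently splitting the core across nodes.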
CCing SPAPR maintainers for an opinion on whether this enforcement makes
sense from the platform's point of view, and whether we can go ahead with
this or I should look for another way to deal with the legacy -numa CLI.

Signed-off-by: Igor Mammedov
CC: qemu-ppc@nongnu.org
CC: lvivier@redhat.com
CC: David Gibson
CC: thuth@redhat.com
CC: mdroth@linux.vnet.ibm.com
CC: agraf@suse.de
---
 hw/ppc/spapr_cpu_core.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index 55cd045..1499a8b 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -50,8 +50,6 @@ static void spapr_cpu_init(sPAPRMachineState *spapr, PowerPCCPU *cpu,
                            Error **errp)
 {
     CPUPPCState *env = &cpu->env;
-    CPUState *cs = CPU(cpu);
-    int i;
 
     /* Set time-base frequency to 512 MHz */
     cpu_ppc_tb_init(env, SPAPR_TIMEBASE_FREQ);
@@ -70,12 +68,6 @@ static void spapr_cpu_init(sPAPRMachineState *spapr, PowerPCCPU *cpu,
         }
     }
 
-    /* Set NUMA node for the added CPUs */
-    i = numa_get_node_for_cpu(cs->cpu_index);
-    if (i < nb_numa_nodes) {
-        cs->numa_node = i;
-    }
-
     xics_cpu_setup(spapr->xics, cpu);
 
     qemu_register_reset(spapr_cpu_reset, cpu);
@@ -159,11 +151,13 @@ static void spapr_cpu_core_realize(DeviceState *dev, Error **errp)
     const char *typename = object_class_get_name(scc->cpu_class);
     size_t size = object_type_get_instance_size(typename);
     Error *local_err = NULL;
+    int core_node_id = numa_get_node_for_cpu(cc->core_id);
     void *obj;
     int i, j;
 
     sc->threads = g_malloc0(size * cc->nr_threads);
     for (i = 0; i < cc->nr_threads; i++) {
+        int node_id;
         char id[32];
         CPUState *cs;
 
@@ -172,6 +166,19 @@ static void spapr_cpu_core_realize(DeviceState *dev, Error **errp)
         object_initialize(obj, size, typename);
         cs = CPU(obj);
         cs->cpu_index = cc->core_id + i;
+
+        /* Set NUMA node for the added CPUs */
+        node_id = numa_get_node_for_cpu(cs->cpu_index);
+        if (node_id != core_node_id) {
+            error_setg(&local_err, "Invalid node-id=%d of thread[cpu-index: %d]"
+                " on CPU[core-id: %d, node-id: %d], node-id must be the same",
+                node_id, cs->cpu_index, cc->core_id, core_node_id);
+            goto err;
+        }
+        if (node_id < nb_numa_nodes) {
+            cs->numa_node = node_id;
+        }
+
         snprintf(id, sizeof(id), "thread[%d]", i);
         object_property_add_child(OBJECT(sc), id, obj, &local_err);
         if (local_err) {
-- 
2.7.4