From: David Gibson
To: peter.maydell@linaro.org
Date: Fri, 22 Jun 2018 20:35:07 +1000
Message-Id: <20180622103528.28598-5-david@gibson.dropbear.id.au>
In-Reply-To: <20180622103528.28598-1-david@gibson.dropbear.id.au>
References: <20180622103528.28598-1-david@gibson.dropbear.id.au>
Subject: [Qemu-devel] [PULL 04/25] spapr_cpu_core: migrate VPA related state
Cc: lvivier@redhat.com, aik@ozlabs.ru, qemu-devel@nongnu.org, agraf@suse.de, groug@kaod.org, qemu-ppc@nongnu.org, clg@kaod.org, David Gibson

From: Greg Kurz

QEMU implements the "Shared Processor LPAR" (SPLPAR) option, which allows
the hypervisor to time-slice a physical processor into multiple virtual
processors. The intent is to allow more guests to run, and to optimize
processor utilization.

The guest OS can cede idle VCPUs, so that their processing capacity may be
used by other VCPUs, with the H_CEDE hcall. The guest OS can also optimize
spinlocks, by conferring the time-slice of a spinning VCPU to the spinlock
holder if it is currently not running, with the H_CONFER hcall.

Both hcalls depend on a "Virtual Processor Area" (VPA) to be registered by
the guest OS, generally during early boot. Other per-VCPU areas can be
registered: the "SLB Shadow Buffer", which allows a more efficient
dispatching of VCPUs, and the "Dispatch Trace Log Buffer" (DTL), which is
used to compute the time stolen by the hypervisor. Both the DTL and SLB
Shadow areas depend on the VPA being registered.
The VPA/SLB Shadow/DTL are state that QEMU should migrate, but this
doesn't happen, for no apparent reason other than it was just never
coded. This causes the features listed above to stop working after
migration, and it breaks the logic of the H_REGISTER_VPA hcall on the
destination.

The VPA is set at the guest's request, i.e., we don't have to migrate it
before the guest has actually set it. This patch hence adds an
"spapr_cpu/vpa" subsection to the recently introduced per-CPU machine
data migration stream. Since DTL and SLB Shadow are optional and both
depend on the VPA, they get their own subsections, "spapr_cpu/vpa/slb_shadow"
and "spapr_cpu/vpa/dtl", hanging from the "spapr_cpu/vpa" subsection.

Note that this won't break migration to older QEMUs. This is already
handled by only registering the vmstate handler for per-CPU data with
newer machine types.

Signed-off-by: Greg Kurz
Signed-off-by: David Gibson
---
 hw/ppc/spapr_cpu_core.c | 65 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index f129ac884e..67f1596c57 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -129,6 +129,67 @@ static void spapr_cpu_core_unrealize(DeviceState *dev, Error **errp)
     g_free(sc->threads);
 }
 
+static bool slb_shadow_needed(void *opaque)
+{
+    sPAPRCPUState *spapr_cpu = opaque;
+
+    return spapr_cpu->slb_shadow_addr != 0;
+}
+
+static const VMStateDescription vmstate_spapr_cpu_slb_shadow = {
+    .name = "spapr_cpu/vpa/slb_shadow",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = slb_shadow_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(slb_shadow_addr, sPAPRCPUState),
+        VMSTATE_UINT64(slb_shadow_size, sPAPRCPUState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool dtl_needed(void *opaque)
+{
+    sPAPRCPUState *spapr_cpu = opaque;
+
+    return spapr_cpu->dtl_addr != 0;
+}
+
+static const VMStateDescription vmstate_spapr_cpu_dtl = {
+    .name = "spapr_cpu/vpa/dtl",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = dtl_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(dtl_addr, sPAPRCPUState),
+        VMSTATE_UINT64(dtl_size, sPAPRCPUState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool vpa_needed(void *opaque)
+{
+    sPAPRCPUState *spapr_cpu = opaque;
+
+    return spapr_cpu->vpa_addr != 0;
+}
+
+static const VMStateDescription vmstate_spapr_cpu_vpa = {
+    .name = "spapr_cpu/vpa",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = vpa_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(vpa_addr, sPAPRCPUState),
+        VMSTATE_END_OF_LIST()
+    },
+    .subsections = (const VMStateDescription * []) {
+        &vmstate_spapr_cpu_slb_shadow,
+        &vmstate_spapr_cpu_dtl,
+        NULL
+    }
+};
+
 static const VMStateDescription vmstate_spapr_cpu_state = {
     .name = "spapr_cpu",
     .version_id = 1,
@@ -136,6 +197,10 @@ static const VMStateDescription vmstate_spapr_cpu_state = {
     .fields = (VMStateField[]) {
         VMSTATE_END_OF_LIST()
     },
+    .subsections = (const VMStateDescription * []) {
+        &vmstate_spapr_cpu_vpa,
+        NULL
+    }
 };
 
 static void spapr_realize_vcpu(PowerPCCPU *cpu, sPAPRMachineState *spapr,
-- 
2.17.1