From: Michal Privoznik <mprivozn@redhat.com>
To: libvir-list@redhat.com
Subject: [PATCH v4 8/8] qemu: Enable SCHED_CORE for vCPUs on hotplug
Date: Thu, 6 Oct 2022 15:49:50 +0200
Message-Id: <7b4891c1b1ad86ea5aad035bcf923fb0215e28f6.1665064015.git.mprivozn@redhat.com>
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

As advertised in the previous commit, the QEMU_SCHED_CORE_VCPUS case is now
implemented for the hotplug path as well. The implementation is very similar
to the cold boot case, except that here we fork off for every vCPU (because
the work is done in qemuProcessSetupVcpu(), which is also the function called
from the hotplug code). That is acceptable, because our hotplug APIs only
allow plugging one device at a time.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2074559

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
---
 src/qemu/qemu_hotplug.c |  2 +-
 src/qemu/qemu_process.c | 61 +++++++++++++++++++++++++++++++++++++++--
 src/qemu/qemu_process.h |  3 +-
 3 files changed, 62 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index 00727f6ddc..b77154cc80 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -6241,7 +6241,7 @@ qemuDomainHotplugAddVcpu(virQEMUDriver *driver,
         vcpuinfo->online = true;
 
         if (vcpupriv->tid > 0 &&
-            qemuProcessSetupVcpu(vm, i) < 0)
+            qemuProcessSetupVcpu(vm, i, true) < 0)
             return -1;
     }
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 32136978a9..4d702d9015 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -5803,10 +5803,40 @@ qemuProcessNetworkPrepareDevices(virQEMUDriver *driver,
 }
 
 
+struct qemuProcessSetupVcpuSchedCoreHelperData {
+    pid_t vcpupid;
+    pid_t dummypid;
+};
+
+static int
+qemuProcessSetupVcpuSchedCoreHelper(pid_t ppid G_GNUC_UNUSED,
+                                    void *opaque)
+{
+    struct qemuProcessSetupVcpuSchedCoreHelperData *data = opaque;
+
+    if (virProcessSchedCoreShareFrom(data->dummypid) < 0) {
+        virReportSystemError(errno,
+                             _("unable to share scheduling cookie from %lld"),
+                             (long long) data->dummypid);
+        return -1;
+    }
+
+    if (virProcessSchedCoreShareTo(data->vcpupid) < 0) {
+        virReportSystemError(errno,
+                             _("unable to share scheduling cookie to %lld"),
+                             (long long) data->vcpupid);
+        return -1;
+    }
+
+    return 0;
+}
+
+
 /**
  * qemuProcessSetupVcpu:
  * @vm: domain object
  * @vcpuid: id of VCPU to set defaults
+ * @schedCore: whether to set scheduling group
  *
  * This function sets resource properties (cgroups, affinity, scheduler) for a
  * vCPU. This function expects that the vCPU is online and the vCPU pids were
@@ -5816,8 +5846,11 @@ qemuProcessNetworkPrepareDevices(virQEMUDriver *driver,
  */
 int
 qemuProcessSetupVcpu(virDomainObj *vm,
-                     unsigned int vcpuid)
+                     unsigned int vcpuid,
+                     bool schedCore)
 {
+    qemuDomainObjPrivate *priv = vm->privateData;
+    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver);
     pid_t vcpupid = qemuDomainGetVcpuPid(vm, vcpuid);
     virDomainVcpuDef *vcpu = virDomainDefGetVcpu(vm->def, vcpuid);
     virDomainResctrlMonDef *mon = NULL;
@@ -5830,6 +5863,30 @@ qemuProcessSetupVcpu(virDomainObj *vm,
                                 &vcpu->sched) < 0)
         return -1;
 
+    if (schedCore &&
+        cfg->schedCore == QEMU_SCHED_CORE_VCPUS) {
+        struct qemuProcessSetupVcpuSchedCoreHelperData data = { .vcpupid = vcpupid,
+                                                                .dummypid = -1 };
+
+        for (i = 0; i < virDomainDefGetVcpusMax(vm->def); i++) {
+            pid_t temptid = qemuDomainGetVcpuPid(vm, i);
+
+            if (temptid > 0) {
+                data.dummypid = temptid;
+                break;
+            }
+        }
+
+        if (data.dummypid == -1) {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("Unable to find a vCPU that is online"));
+            return -1;
+        }
+
+        if (virProcessRunInFork(qemuProcessSetupVcpuSchedCoreHelper, &data) < 0)
+            return -1;
+    }
+
     for (i = 0; i < vm->def->nresctrls; i++) {
         size_t j = 0;
         virDomainResctrlDef *ct = vm->def->resctrls[i];
@@ -5936,7 +5993,7 @@ qemuProcessSetupVcpus(virDomainObj *vm)
         if (!vcpu->online)
             continue;
 
-        if (qemuProcessSetupVcpu(vm, i) < 0)
+        if (qemuProcessSetupVcpu(vm, i, false) < 0)
             return -1;
     }
 
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 421efc6016..4dfb2485c0 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -187,7 +187,8 @@ int qemuConnectAgent(virQEMUDriver *driver, virDomainObj *vm);
 
 
 int qemuProcessSetupVcpu(virDomainObj *vm,
-                         unsigned int vcpuid);
+                         unsigned int vcpuid,
+                         bool schedCore);
 int qemuProcessSetupIOThread(virDomainObj *vm,
                              virDomainIOThreadIDDef *iothread);
 
-- 
2.35.1
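
(Editorial note, not part of the patch.) For readers who have not met the
kernel interface the forked helper above relies on: the
virProcessSchedCoreShareFrom()/virProcessSchedCoreShareTo() calls are assumed
to boil down to the Linux core-scheduling prctl() operations under
PR_SCHED_CORE (kernel >= 5.14 with CONFIG_SCHED_CORE). The standalone sketch
below mimics what qemuProcessSetupVcpuSchedCoreHelper() does when run under
virProcessRunInFork(): a short-lived child pulls the scheduling cookie from an
already-configured vCPU thread and pushes it onto the freshly hotplugged one.
The helper name share_sched_core_cookie() and the command-line wrapper are
hypothetical, and libvirt's real helpers may differ in the scope argument and
in error reporting.

/* sched-core-share.c -- illustrative sketch only. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fallback definitions for older userspace headers. */
#ifndef PR_SCHED_CORE
# define PR_SCHED_CORE              62
# define PR_SCHED_CORE_SHARE_TO      2  /* push cookie onto a task */
# define PR_SCHED_CORE_SHARE_FROM    3  /* pull cookie from a task */
# define PR_SCHED_CORE_SCOPE_THREAD  0
#endif

/* Copy the core scheduling cookie of @from onto @to. The work happens in a
 * forked child so that the caller's own cookie stays untouched, presumably
 * the same reason the patch runs its helper under virProcessRunInFork(). */
static int
share_sched_core_cookie(pid_t from, pid_t to)
{
    pid_t child = fork();
    int status = 0;

    if (child < 0) {
        perror("fork");
        return -1;
    }

    if (child == 0) {
        /* Adopt @from's cookie ... */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_FROM,
                  from, PR_SCHED_CORE_SCOPE_THREAD, 0) < 0)
            _exit(1);
        /* ... then push it onto @to. */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO,
                  to, PR_SCHED_CORE_SCOPE_THREAD, 0) < 0)
            _exit(2);
        _exit(0);
    }

    if (waitpid(child, &status, 0) < 0) {
        perror("waitpid");
        return -1;
    }

    if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
        fprintf(stderr, "sharing cookie from %lld to %lld failed\n",
                (long long) from, (long long) to);
        return -1;
    }

    return 0;
}

int
main(int argc, char **argv)
{
    pid_t from;
    pid_t to;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <source-tid> <target-tid>\n", argv[0]);
        return EXIT_FAILURE;
    }

    from = (pid_t) strtol(argv[1], NULL, 10);
    to = (pid_t) strtol(argv[2], NULL, 10);

    if (share_sched_core_cookie(from, to) < 0)
        return EXIT_FAILURE;

    printf("cookie copied from %lld to %lld\n",
           (long long) from, (long long) to);
    return EXIT_SUCCESS;
}

Build with "cc -o sched-core-share sched-core-share.c" and run it against two
thread IDs you have ptrace-level access to, for example an existing vCPU
thread and a newly created one; on kernels built without CONFIG_SCHED_CORE the
prctl() calls fail with EINVAL.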