From: ~hyman
Date: Wed, 03 Aug 2022 00:27:47 +0800
Subject: [PATCH Libvirt v3 04/10] qemu_driver: Implement qemuDomainSetVcpuDirtyLimit
Message-ID: <169397083100.4628.15196043252714532301-4@git.sr.ht>
In-Reply-To: <169397083100.4628.15196043252714532301-0@git.sr.ht>
To: libvir-list@redhat.com
Cc: Martin Kletzander, Peter Krempa, yong.huang@smartx.com
Reply-To: ~hyman
MIME-Version: 1.0
From: Hyman Huang(黄勇)

Implement qemuDomainSetVcpuDirtyLimit, which can be used to set or
cancel the upper limit of the dirty page rate for virtual CPUs.

Signed-off-by: Hyman Huang(黄勇)
---
 src/qemu/qemu_driver.c       | 131 +++++++++++++++++++++++++++++++++++
 src/qemu/qemu_monitor.c      |  13 ++++
 src/qemu/qemu_monitor.h      |   5 ++
 src/qemu/qemu_monitor_json.c |  43 ++++++++++++
 src/qemu/qemu_monitor_json.h |   5 ++
 5 files changed, 197 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0d4da937b0..64d97c0fba 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -19876,6 +19876,136 @@ qemuDomainFDAssociate(virDomainPtr domain,
     return ret;
 }
 
+static void
+qemuDomainSetDirtyLimit(virDomainVcpuDef *vcpu,
+                        unsigned long long rate)
+{
+    if (rate > 0) {
+        vcpu->dirtyLimitSet = true;
+        vcpu->dirty_limit = rate;
+    } else {
+        vcpu->dirtyLimitSet = false;
+        vcpu->dirty_limit = 0;
+    }
+}
+
+static void
+qemuDomainSetVcpuDirtyLimitConfig(virDomainDef *def,
+                                  int vcpu,
+                                  unsigned long long rate)
+{
+    def->individualvcpus = true;
+
+    if (vcpu == -1) {
+        size_t maxvcpus = virDomainDefGetVcpusMax(def);
+        size_t i;
+        for (i = 0; i < maxvcpus; i++) {
+            qemuDomainSetDirtyLimit(virDomainDefGetVcpu(def, i), rate);
+        }
+    } else {
+        qemuDomainSetDirtyLimit(virDomainDefGetVcpu(def, vcpu), rate);
+    }
+}
+
+static int
+qemuDomainSetVcpuDirtyLimitInternal(virQEMUDriver *driver,
+                                    virDomainObj *vm,
+                                    virDomainDef *def,
+                                    virDomainDef *persistentDef,
+                                    int vcpu,
+                                    unsigned long long rate)
+{
+    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+    qemuDomainObjPrivate *priv = vm->privateData;
+
+    VIR_DEBUG("vcpu %d, rate %llu", vcpu, rate);
+    if (def) {
+        qemuDomainObjEnterMonitor(vm);
+        if (qemuMonitorSetVcpuDirtyLimit(priv->mon, vcpu, rate) < 0) {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("Failed to set dirty page rate limit"));
+            qemuDomainObjExitMonitor(vm);
+            return -1;
+        }
+        qemuDomainObjExitMonitor(vm);
+        qemuDomainSetVcpuDirtyLimitConfig(def, vcpu, rate);
+    }
+
+    if (persistentDef) {
+        qemuDomainSetVcpuDirtyLimitConfig(persistentDef, vcpu, rate);
+        if (virDomainDefSave(persistentDef, driver->xmlopt, cfg->configDir) < 0)
+            return -1;
+    }
+
+    return 0;
+}
+
+static int
+qemuDomainSetVcpuDirtyLimit(virDomainPtr domain,
+                            int vcpu,
+                            unsigned long long rate,
+                            unsigned int flags)
+{
+    virQEMUDriver *driver = domain->conn->privateData;
+    virDomainObj *vm = NULL;
+    qemuDomainObjPrivate *priv;
+    virDomainDef *def = NULL;
+    virDomainDef *persistentDef = NULL;
+    int ret = -1;
+
+    virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
+                  VIR_DOMAIN_AFFECT_CONFIG, -1);
+
+    if (!(vm = qemuDomainObjFromDomain(domain)))
+        return -1;
+
+    if (virDomainSetVcpuDirtyLimitEnsureACL(domain->conn, vm->def, flags) < 0)
+        goto cleanup;
+
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+        goto cleanup;
+
+    if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
+        goto endjob;
+
+    if (persistentDef) {
+        if (vcpu >= 0 && vcpu >= (int)virDomainDefGetVcpusMax(persistentDef)) {
+            virReportError(VIR_ERR_INVALID_ARG,
+                           _("vcpu %1$d is not present in persistent config"),
+                           vcpu);
+            goto endjob;
+        }
+    }
+
+    if (def) {
+        if (virDomainObjCheckActive(vm) < 0)
+            goto endjob;
+
+        priv = vm->privateData;
+        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_VCPU_DIRTY_LIMIT)) {
+            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                           _("QEMU does not support setting dirty page rate limit"));
+            goto endjob;
+        }
+
+        if (vcpu >= 0 && vcpu >= (int)virDomainDefGetVcpusMax(def)) {
+            virReportError(VIR_ERR_INVALID_ARG,
+                           _("vcpu %1$d is not present in live config"),
+                           vcpu);
+            goto endjob;
+        }
+    }
+
+    ret = qemuDomainSetVcpuDirtyLimitInternal(driver, vm, def, persistentDef,
+                                              vcpu, rate);
+
+ endjob:
+    virDomainObjEndJob(vm);
+
+ cleanup:
+    virDomainObjEndAPI(&vm);
+    return ret;
+}
 
 static virHypervisorDriver qemuHypervisorDriver = {
     .name = QEMU_DRIVER_NAME,
@@ -20126,6 +20256,7 @@ static virHypervisorDriver qemuHypervisorDriver = {
     .domainStartDirtyRateCalc = qemuDomainStartDirtyRateCalc, /* 7.2.0 */
     .domainSetLaunchSecurityState = qemuDomainSetLaunchSecurityState, /* 8.0.0 */
     .domainFDAssociate = qemuDomainFDAssociate, /* 9.0.0 */
+    .domainSetVcpuDirtyLimit = qemuDomainSetVcpuDirtyLimit, /* 9.7.0 */
 };
 
 
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 7053539c7d..90bc0e62c9 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4500,3 +4500,16 @@ qemuMonitorGetStatsByQOMPath(virJSONValue *arr,
 
     return NULL;
 }
+
+
+int
+qemuMonitorSetVcpuDirtyLimit(qemuMonitor *mon,
+                             int vcpu,
+                             unsigned long long rate)
+{
+    VIR_DEBUG("set vcpu %d dirty page rate limit %llu", vcpu, rate);
+
+    QEMU_CHECK_MONITOR(mon);
+
+    return qemuMonitorJSONSetVcpuDirtyLimit(mon, vcpu, rate);
+}
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 6c590933aa..07a05365cf 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1579,3 +1579,8 @@ qemuMonitorExtractQueryStats(virJSONValue *info);
 virJSONValue *
 qemuMonitorGetStatsByQOMPath(virJSONValue *arr,
                              char *qom_path);
+
+int
+qemuMonitorSetVcpuDirtyLimit(qemuMonitor *mon,
+                             int vcpu,
+                             unsigned long long rate);
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 5b9edadcf7..c8f6069566 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -8852,3 +8852,46 @@ qemuMonitorJSONQueryStats(qemuMonitor *mon,
 
     return virJSONValueObjectStealArray(reply, "return");
 }
+
+/**
+ * qemuMonitorJSONSetVcpuDirtyLimit:
+ * @mon: monitor object
+ * @vcpu: virtual cpu index to be set, -1 affects all virtual CPUs
+ * @rate: dirty page rate upper limit to be set, use 0 to disable
+ *        and a positive value to enable
+ *
+ * Returns -1 on failure.
+ */
+int
+qemuMonitorJSONSetVcpuDirtyLimit(qemuMonitor *mon,
+                                 int vcpu,
+                                 unsigned long long rate)
+{
+    g_autoptr(virJSONValue) cmd = NULL;
+    g_autoptr(virJSONValue) reply = NULL;
+
+    if (rate != 0) {
+        /* set the vcpu dirty page rate limit */
+        if (!(cmd = qemuMonitorJSONMakeCommand("set-vcpu-dirty-limit",
+                                               "k:cpu-index", vcpu,
+                                               "U:dirty-rate", rate,
+                                               NULL))) {
+            return -1;
+        }
+    } else {
+        /* cancel the vcpu dirty page rate limit */
+        if (!(cmd = qemuMonitorJSONMakeCommand("cancel-vcpu-dirty-limit",
+                                               "k:cpu-index", vcpu,
+                                               NULL))) {
+            return -1;
+        }
+    }
+
+    if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0)
+        return -1;
+
+    if (qemuMonitorJSONCheckError(cmd, reply) < 0)
+        return -1;
+
+    return 0;
+}
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index 06023b98ea..89f61b3052 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -825,3 +825,8 @@ qemuMonitorJSONQueryStats(qemuMonitor *mon,
                           qemuMonitorQueryStatsTargetType target,
                           char **vcpus,
                           GPtrArray *providers);
+
+int
+qemuMonitorJSONSetVcpuDirtyLimit(qemuMonitor *mon,
+                                 int vcpu,
+                                 unsigned long long rate);
-- 
2.38.5
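For reference, the monitor helper in this patch builds one of two QMP commands depending on whether a rate is being set or cleared. On the wire the exchange looks roughly like the sketch below (illustrative values, not taken from the patch; in QEMU's QAPI schema `dirty-rate` is expressed in MB/s, and `cpu-index` is omitted to affect all vCPUs, which is how the `vcpu == -1` case maps onto the command):

```json
{"execute": "set-vcpu-dirty-limit",
 "arguments": {"cpu-index": 0, "dirty-rate": 100}}

{"execute": "cancel-vcpu-dirty-limit",
 "arguments": {"cpu-index": 0}}
```

A non-error reply (`{"return": {}}`) is what `qemuMonitorJSONCheckError()` accepts; any `"error"` object causes the helper to return -1.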