x-sender="anthony.perard@citrix.com"; x-conformance=sidf_compatible; x-record-type="v=spf1"; x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83 ~all" Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender authenticity information available from domain of postmaster@mail.citrix.com) identity=helo; client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com; envelope-from="anthony.perard@citrix.com"; x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible IronPort-SDR: 7uuG80KyREpfwHP4j2d4ib/SgG0XiywBkLMT47/E0uw7InlPrb56O94hRSWoY0AcNWm4PvEHLk EX2PWTd6rAlFHGgIzB7mdgna7UtnoLcU/V/RImKvNoCQ93C5RVT4ZeKmKCeQ2u3ukzxi3Vhc4u tmuzAxfwfWwipOEsPmtUDK8LfbJfg3R30TeH6bX885XkqvT0YvGw2aWAMiI32ZUuwc7qqqvh06 MENdRUY65g2DkWcdH3tugkIrO9mNy+KEiXSLsw053Xet3nlE/U9OEp+oeM7CohG87SfmgG6wMm Cqc= X-SBRS: 2.7 X-MesageID: 5801739 X-Ironport-Server: esa2.hc3370-68.iphmx.com X-Remote-IP: 162.221.158.21 X-Policy: $RELAYED X-IronPort-AV: E=Sophos;i="5.64,524,1559534400"; d="scan'208";a="5801739" From: Anthony PERARD To: Date: Thu, 19 Sep 2019 18:16:53 +0100 Message-ID: <20190919171656.899649-34-anthony.perard@citrix.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20190919171656.899649-1-anthony.perard@citrix.com> References: <20190919171656.899649-1-anthony.perard@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v2 33/35] libxl: libxl_retrieve_domain_configuration now uses ev_qmp X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Anthony PERARD , Ian Jackson , Wei Liu Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) This was the last user of libxl__qmp_query_cpus which can now be removed. Signed-off-by: Anthony PERARD Acked-by: Ian Jackson --- Notes: v3: - following rename of ev_lock to ev_devlock, renamed field rdcs.ev_lock to rdcs.devlock tools/libxl/libxl_domain.c | 163 ++++++++++++++++++++++++++++------- tools/libxl/libxl_internal.h | 3 - tools/libxl/libxl_qmp.c | 38 -------- 3 files changed, 131 insertions(+), 73 deletions(-) diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c index b97e874a9c05..0dd5b7ffa963 100644 --- a/tools/libxl/libxl_domain.c +++ b/tools/libxl/libxl_domain.c @@ -1800,27 +1800,6 @@ uint32_t libxl_vm_get_start_time(libxl_ctx *ctx, uin= t32_t domid) return ret; } =20 -/* For QEMU upstream we always need to provide the number of cpus present = to - * QEMU whether they are online or not; otherwise QEMU won't accept the sa= ved - * state. See implementation of libxl__qmp_query_cpus. 
 tools/libxl/libxl_domain.c   | 163 ++++++++++++++++++++++++++++-------
 tools/libxl/libxl_internal.h |   3 -
 tools/libxl/libxl_qmp.c      |  38 --------
 3 files changed, 131 insertions(+), 73 deletions(-)

diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index b97e874a9c05..0dd5b7ffa963 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -1800,27 +1800,6 @@ uint32_t libxl_vm_get_start_time(libxl_ctx *ctx, uint32_t domid)
     return ret;
 }
 
-/* For QEMU upstream we always need to provide the number of cpus present to
- * QEMU whether they are online or not; otherwise QEMU won't accept the saved
- * state. See implementation of libxl__qmp_query_cpus.
- */
-static int libxl__update_avail_vcpus_qmp(libxl__gc *gc, uint32_t domid,
-                                         unsigned int max_vcpus,
-                                         libxl_bitmap *map)
-{
-    int rc;
-
-    rc = libxl__qmp_query_cpus(gc, domid, map);
-    if (rc) {
-        LOGD(ERROR, domid, "Fail to get number of cpus");
-        goto out;
-    }
-
-    rc = 0;
-out:
-    return rc;
-}
-
 static int libxl__update_avail_vcpus_xenstore(libxl__gc *gc, uint32_t domid,
                                               unsigned int max_vcpus,
                                               libxl_bitmap *map)
@@ -1849,13 +1828,61 @@ static int libxl__update_avail_vcpus_xenstore(libxl__gc *gc, uint32_t domid,
     return rc;
 }
 
+typedef struct {
+    libxl__ev_qmp qmp;
+    libxl__ev_time timeout;
+    libxl_domain_config *d_config; /* user pointer */
+    libxl__ev_devlock devlock;
+    libxl_bitmap qemuu_cpus;
+} retrieve_domain_configuration_state;
+
+static void retrieve_domain_configuration_lock_acquired(
+    libxl__egc *egc, libxl__ev_devlock *, int rc);
+static void retrieve_domain_configuration_cpu_queried(
+    libxl__egc *egc, libxl__ev_qmp *qmp,
+    const libxl__json_object *response, int rc);
+static void retrieve_domain_configuration_timeout(libxl__egc *egc,
+    libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
+static void retrieve_domain_configuration_end(libxl__egc *egc,
+    retrieve_domain_configuration_state *rdcs, int rc);
+
 int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
                                         libxl_domain_config *d_config,
                                         const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
-    int rc;
+    retrieve_domain_configuration_state *rdcs;
+
+    GCNEW(rdcs);
+    libxl__ev_qmp_init(&rdcs->qmp);
+    rdcs->qmp.ao = ao;
+    rdcs->qmp.domid = domid;
+    rdcs->qmp.payload_fd = -1;
+    libxl__ev_time_init(&rdcs->timeout);
+    rdcs->d_config = d_config;
+    libxl_bitmap_init(&rdcs->qemuu_cpus);
+    libxl__ev_devlock_init(&rdcs->devlock);
+    rdcs->devlock.ao = ao;
+    rdcs->devlock.domid = domid;
+    rdcs->devlock.callback = retrieve_domain_configuration_lock_acquired;
+    libxl__ev_devlock_lock(egc, &rdcs->devlock);
+    return AO_INPROGRESS;
+}
+
+static void retrieve_domain_configuration_lock_acquired(
+    libxl__egc *egc, libxl__ev_devlock *devlock, int rc)
+{
+    retrieve_domain_configuration_state *rdcs =
+        CONTAINER_OF(devlock, *rdcs, devlock);
+    STATE_AO_GC(rdcs->qmp.ao);
     libxl__domain_userdata_lock *lock = NULL;
+    bool has_callback = false;
+
+    /* Convenience aliases */
+    libxl_domid domid = rdcs->qmp.domid;
+    libxl_domain_config *const d_config = rdcs->d_config;
+
+    if (rc) goto out;
 
     lock = libxl__lock_domain_userdata(gc, domid);
     if (!lock) {
@@ -1870,10 +1897,81 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
         goto out;
     }
 
+    libxl__unlock_domain_userdata(lock);
+    lock = NULL;
+
+    /* We start by querying QEMU, if it is running, for its cpumap as this
+     * is a long operation. */
+    if (d_config->b_info.type == LIBXL_DOMAIN_TYPE_HVM &&
+        libxl__device_model_version_running(gc, domid) ==
+            LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN) {
+        /* For QEMU upstream we always need to provide the number
+         * of cpus present to QEMU whether they are online or not;
+         * otherwise QEMU won't accept the saved state.
+         */
+        rc = libxl__ev_time_register_rel(ao, &rdcs->timeout,
+                retrieve_domain_configuration_timeout,
+                LIBXL_QMP_CMD_TIMEOUT * 1000);
+        if (rc) goto out;
+        libxl_bitmap_alloc(CTX, &rdcs->qemuu_cpus,
+                           d_config->b_info.max_vcpus);
+        rdcs->qmp.callback = retrieve_domain_configuration_cpu_queried;
+        rc = libxl__ev_qmp_send(gc, &rdcs->qmp, "query-cpus", NULL);
+        if (rc) goto out;
+        has_callback = true;
+    }
+
+out:
+    if (lock) libxl__unlock_domain_userdata(lock);
+    if (!has_callback)
+        retrieve_domain_configuration_end(egc, rdcs, rc);
+}
+
+static void retrieve_domain_configuration_cpu_queried(
+    libxl__egc *egc, libxl__ev_qmp *qmp,
+    const libxl__json_object *response, int rc)
+{
+    EGC_GC;
+    retrieve_domain_configuration_state *rdcs =
+        CONTAINER_OF(qmp, *rdcs, qmp);
+
+    if (rc) goto out;
+
+    rc = qmp_parse_query_cpus(gc, qmp->domid, response, &rdcs->qemuu_cpus);
+
+out:
+    retrieve_domain_configuration_end(egc, rdcs, rc);
+}
+
+static void retrieve_domain_configuration_timeout(libxl__egc *egc,
+    libxl__ev_time *ev, const struct timeval *requested_abs, int rc)
+{
+    retrieve_domain_configuration_state *rdcs =
+        CONTAINER_OF(ev, *rdcs, timeout);
+
+    retrieve_domain_configuration_end(egc, rdcs, rc);
+}
+
+static void retrieve_domain_configuration_end(libxl__egc *egc,
+    retrieve_domain_configuration_state *rdcs, int rc)
+{
+    STATE_AO_GC(rdcs->qmp.ao);
+    libxl__domain_userdata_lock *lock;
+
+    /* Convenience aliases */
+    libxl_domain_config *const d_config = rdcs->d_config;
+    libxl_domid domid = rdcs->qmp.domid;
+
+    lock = libxl__lock_domain_userdata(gc, domid);
+    if (!lock) {
+        rc = ERROR_LOCK_FAIL;
+        goto out;
+    }
+
     /* Domain name */
     {
         char *domname;
-        domname = libxl_domid_to_name(ctx, domid);
+        domname = libxl_domid_to_name(CTX, domid);
         if (!domname) {
             LOGD(ERROR, domid, "Fail to get domain name");
             goto out;
@@ -1886,13 +1984,13 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
     {
         libxl_dominfo info;
         libxl_dominfo_init(&info);
-        rc = libxl_domain_info(ctx, &info, domid);
+        rc = libxl_domain_info(CTX, &info, domid);
         if (rc) {
             LOGD(ERROR, domid, "Fail to get domain info");
             libxl_dominfo_dispose(&info);
             goto out;
         }
-        libxl_uuid_copy(ctx, &d_config->c_info.uuid, &info.uuid);
+        libxl_uuid_copy(CTX, &d_config->c_info.uuid, &info.uuid);
         libxl_dominfo_dispose(&info);
     }
 
@@ -1913,8 +2011,7 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
         assert(version != LIBXL_DEVICE_MODEL_VERSION_UNKNOWN);
         switch (version) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-            rc = libxl__update_avail_vcpus_qmp(gc, domid,
-                                               max_vcpus, map);
+            libxl_bitmap_copy(CTX, map, &rdcs->qemuu_cpus);
             break;
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
             rc = libxl__update_avail_vcpus_xenstore(gc, domid,
@@ -1939,6 +2036,7 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
         }
     }
 
+
     /* Memory limits:
      *
      * Currently there are three memory limits:
@@ -1972,7 +2070,7 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
     /* Scheduler params */
     {
         libxl_domain_sched_params_dispose(&d_config->b_info.sched_params);
-        rc = libxl_domain_sched_params_get(ctx, domid,
+        rc = libxl_domain_sched_params_get(CTX, domid,
                                            &d_config->b_info.sched_params);
         if (rc) {
             LOGD(ERROR, domid, "Fail to get scheduler parameters");
@@ -2034,7 +2132,7 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
 
             if (j < num) { /* found in xenstore */
                 if (dt->merge)
-                    dt->merge(ctx, p + dt->dev_elem_size * j, q);
+                    dt->merge(CTX, p + dt->dev_elem_size * j, q);
             } else { /* not found in xenstore */
                 LOGD(WARN, domid,
                      "Device present in JSON but not in xenstore, ignored");
@@ -2062,11 +2160,12 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
     }
 
 out:
+    libxl__ev_devlock_unlock(gc, &rdcs->devlock);
     if (lock) libxl__unlock_domain_userdata(lock);
-    if (rc)
-        return AO_CREATE_FAIL(rc);
+    libxl_bitmap_dispose(&rdcs->qemuu_cpus);
+    libxl__ev_qmp_dispose(gc, &rdcs->qmp);
+    libxl__ev_time_deregister(gc, &rdcs->timeout);
     libxl__ao_complete(egc, ao, rc);
-    return AO_INPROGRESS;
 }
 
 /*
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1ecebf136984..bfeb38e0eda3 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1988,9 +1988,6 @@ _hidden libxl__qmp_handler *libxl__qmp_initialize(libxl__gc *gc,
 _hidden int libxl__qmp_resume(libxl__gc *gc, int domid);
 /* Load current QEMU state from file. */
 _hidden int libxl__qmp_restore(libxl__gc *gc, int domid, const char *filename);
-/* Query the bitmap of CPUs */
-_hidden int libxl__qmp_query_cpus(libxl__gc *gc, int domid,
-                                  libxl_bitmap *map);
 /* Start NBD server */
 _hidden int libxl__qmp_nbd_server_start(libxl__gc *gc, int domid,
                                         const char *host, const char *port);
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index 27183bc6c4a3..9639d491d991 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -767,44 +767,6 @@ int libxl__qmp_resume(libxl__gc *gc, int domid)
     return qmp_run_command(gc, domid, "cont", NULL, NULL, NULL);
 }
 
-static int query_cpus_callback(libxl__qmp_handler *qmp,
-                               const libxl__json_object *response,
-                               void *opaque)
-{
-    libxl_bitmap *map = opaque;
-    unsigned int i;
-    const libxl__json_object *cpu = NULL;
-    int rc;
-    GC_INIT(qmp->ctx);
-
-    libxl_bitmap_set_none(map);
-    for (i = 0; (cpu = libxl__json_array_get(response, i)); i++) {
-        unsigned int idx;
-        const libxl__json_object *o;
-
-        o = libxl__json_map_get("CPU", cpu, JSON_INTEGER);
-        if (!o) {
-            LOGD(ERROR, qmp->domid, "Failed to retrieve CPU index.");
-            rc = ERROR_FAIL;
-            goto out;
-        }
-
-        idx = libxl__json_object_get_integer(o);
-        libxl_bitmap_set(map, idx);
-    }
-
-    rc = 0;
-out:
-    GC_FREE;
-    return rc;
-}
-
-int libxl__qmp_query_cpus(libxl__gc *gc, int domid, libxl_bitmap *map)
-{
-    return qmp_run_command(gc, domid, "query-cpus", NULL,
-                           query_cpus_callback, map);
-}
-
 int libxl__qmp_nbd_server_start(libxl__gc *gc, int domid,
                                 const char *host, const char *port)
 {
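
The gain from the conversion shows up on the asynchronous path: the
query-cpus round trip no longer blocks inside qmp_run_command, so an
application that services libxl events gets a completion callback
instead. A minimal sketch of that usage (again illustration only, not
part of the patch; it assumes the libxl_asyncop_how convention of a
callback plus a u.for_callback cookie handed back to it):

    #include <stdio.h>
    #include <libxl.h>

    /* Completion callback: invoked from libxl's event machinery once
     * retrieve_domain_configuration_end() has completed the AO. */
    static void retrieve_done(libxl_ctx *ctx, int rc, void *for_callback)
    {
        libxl_domain_config *d_config = for_callback;

        if (rc)
            fprintf(stderr, "retrieve failed: %d\n", rc);
        /* else: d_config has been filled in; consume, then dispose. */
    }

    /* Sketch: 'd_config' must stay allocated until retrieve_done()
     * runs, and the application must service libxl events (e.g. via
     * its osevent hooks) for the callback to fire. */
    static int start_retrieve(libxl_ctx *ctx, uint32_t domid,
                              libxl_domain_config *d_config)
    {
        libxl_asyncop_how how = {
            .callback = retrieve_done,
            .u.for_callback = d_config,
        };

        libxl_domain_config_init(d_config);
        return libxl_retrieve_domain_configuration(ctx, domid,
                                                   d_config, &how);
    }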