From nobody Tue Nov 26 15:31:07 2024
From: Wim ten Have <wim.ten.have@oracle.com>
To: Libvirt Development List <libvir-list@redhat.com>
Cc: Menno Lageman, Wim ten Have
Date: Mon, 21 Oct 2019 21:21:06 +0200
Message-Id: <20191021192108.25974-3-wim.ten.have@oracle.com>
In-Reply-To: <20191021192108.25974-1-wim.ten.have@oracle.com>
References: <20191021192108.25974-1-wim.ten.have@oracle.com>
Subject: [libvirt] [RFC PATCH v1 2/4] qemu: driver changes adding vNUMA vCPU hotplug support
Content-Type: text/plain; charset="utf-8"

From: Wim ten Have <wim.ten.have@oracle.com>

Add support for hot-plugging/unplugging vCPUs in vNUMA-partitioned KVM guests.
Signed-off-by: Wim ten Have Signed-off-by: Menno Lageman --- src/qemu/qemu_driver.c | 6 ++- src/qemu/qemu_hotplug.c | 95 ++++++++++++++++++++++++++++++++++++++--- 2 files changed, 94 insertions(+), 7 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 71947efa4e50..e64afcb8efc9 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4965,14 +4965,16 @@ qemuDomainSetVcpusMax(virQEMUDriverPtr driver, return -1; } =20 - if (virDomainNumaGetCPUCountTotal(persistentDef->numa) > nvcpus) { + if (!virDomainVnumaIsEnabled(persistentDef->numa) && + virDomainNumaGetCPUCountTotal(persistentDef->numa) > nvcpus) { virReportError(VIR_ERR_INVALID_ARG, "%s", _("Number of CPUs in exceeds the desired " "maximum vcpu count")); return -1; } =20 - if (virDomainDefGetVcpusTopology(persistentDef, &topologycpus) =3D=3D = 0 && + if (!virDomainVnumaIsEnabled(persistentDef->numa) && + virDomainDefGetVcpusTopology(persistentDef, &topologycpus) =3D=3D = 0 && nvcpus !=3D topologycpus) { /* allow setting a valid vcpu count for the topology so an invalid * setting may be corrected via this API */ diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c index 2d47f7461f93..2d48c5bba762 100644 --- a/src/qemu/qemu_hotplug.c +++ b/src/qemu/qemu_hotplug.c @@ -6081,6 +6081,60 @@ qemuDomainHotplugAddVcpu(virQEMUDriverPtr driver, } =20 =20 +/** + * qemuDomainGetNumaMappedVcpuEntry: + * + * In case of vNUMA guest description we need the node + * mapped vcpu to ensure that guest vcpus are hot-plugged + * or hot-unplugged in a round-robin fashion with whole + * cores on the same NUMA node so they get sibling host + * CPUs. + * + * 2 NUMA node system, 2 threads/core: + * +---+---+---+---+---+---+---+---+---+---+--// + * vcpu | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |... + * +---+---+---+---+---+---+---+---+---+---+--// + * NUMA \------/ \-----/ \-----/ \-----/ \-----/ \-// + * node 0 1 0 1 0 ... + * + * bit 0 1 0 1 2 3 2 3 4 5 ... 
+ * + * 4 NUMA node system, 2 threads/core: + * +---+---+---+---+---+---+---+---+---+---+--// + * vcpu | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |... + * +---+---+---+---+---+---+---+---+---+---+--// + * NUMA \------/ \-----/ \-----/ \-----/ \-----/ \-// + * node 0 1 2 3 0 ... + * + * bit 0 1 0 1 0 1 0 1 2 3 ... + * + */ +static ssize_t +qemuDomainGetNumaMappedVcpuEntry(virDomainDefPtr def, + ssize_t vcpu) +{ + virBitmapPtr nodecpumask =3D NULL; + size_t ncells =3D virDomainNumaGetNodeCount(def->numa); + size_t threads =3D def->cpu->threads ? def->cpu->threads : 1; + ssize_t node, bit, pcpu =3D -1; + + if (!ncells) + return vcpu; + + node =3D (vcpu / threads) % ncells; + nodecpumask =3D virDomainNumaGetNodeCpumask(def->numa, node); + + bit =3D ((vcpu / (threads * ncells)) * threads) + (vcpu % threads); + + while (((pcpu =3D virBitmapNextSetBit(nodecpumask, pcpu)) >=3D 0) && b= it--); + + /* GIGO: Garbage In? Garbage Out! */ + pcpu =3D (pcpu < 0) ? vcpu : pcpu; + + return pcpu; +} + + /** * qemuDomainSelectHotplugVcpuEntities: * @@ -6104,7 +6158,27 @@ qemuDomainSelectHotplugVcpuEntities(virDomainDefPtr = def, qemuDomainVcpuPrivatePtr vcpupriv; unsigned int maxvcpus =3D virDomainDefGetVcpusMax(def); unsigned int curvcpus =3D virDomainDefGetVcpus(def); - ssize_t i; + ssize_t i, target; + size_t threads =3D def->cpu->threads; + size_t nnumaCell =3D virDomainNumaGetNodeCount(def->numa); + size_t minvcpus =3D nnumaCell * threads; + bool HasAutonuma =3D virDomainVnumaIsEnabled(def->numa); + + /* If SMT topology is in place, check that the number of vcpus meets + * the following constraints: + * - at least one fully used core is assigned on each NUMA node + * - cores must be used fully, i.e. 
all threads of a core are assigned= to + * the same guest + */ + if (HasAutonuma && threads && + (nvcpus < minvcpus || (nvcpus - minvcpus) % threads)) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("vNUMA: guest %s configured %d vcpus setting " + "does not fit the vNUMA topology for at " + "least one whole core per vNUMA node."), + def->name, nvcpus); + goto error; + } =20 if (!(ret =3D virBitmapNew(maxvcpus))) return NULL; @@ -6113,7 +6187,9 @@ qemuDomainSelectHotplugVcpuEntities(virDomainDefPtr d= ef, *enable =3D true; =20 for (i =3D 0; i < maxvcpus && curvcpus < nvcpus; i++) { - vcpu =3D virDomainDefGetVcpu(def, i); + + target =3D qemuDomainGetNumaMappedVcpuEntry(def, i); + vcpu =3D virDomainDefGetVcpu(def, target); vcpupriv =3D QEMU_DOMAIN_VCPU_PRIVATE(vcpu); =20 if (vcpu->online) @@ -6130,14 +6206,17 @@ qemuDomainSelectHotplugVcpuEntities(virDomainDefPtr= def, "desired vcpu count")); goto error; } + VIR_DEBUG("guest %s hotplug target vcpu =3D %zd\n", def->name,= target); =20 - ignore_value(virBitmapSetBit(ret, i)); + ignore_value(virBitmapSetBit(ret, target)); } } else { *enable =3D false; =20 for (i =3D maxvcpus - 1; i >=3D 0 && curvcpus > nvcpus; i--) { - vcpu =3D virDomainDefGetVcpu(def, i); + + target =3D qemuDomainGetNumaMappedVcpuEntry(def, i); + vcpu =3D virDomainDefGetVcpu(def, target); vcpupriv =3D QEMU_DOMAIN_VCPU_PRIVATE(vcpu); =20 if (!vcpu->online) @@ -6157,8 +6236,9 @@ qemuDomainSelectHotplugVcpuEntities(virDomainDefPtr d= ef, "desired vcpu count")); goto error; } + VIR_DEBUG("guest %s hotunplug target vcpu =3D %zd\n", def->nam= e, target); =20 - ignore_value(virBitmapSetBit(ret, i)); + ignore_value(virBitmapSetBit(ret, target)); } } =20 @@ -6241,6 +6321,11 @@ qemuDomainSetVcpusConfig(virDomainDefPtr def, if (curvcpus =3D=3D nvcpus) return; =20 + if (virDomainVnumaIsEnabled(def->numa)) { + virDomainDefSetVcpusVnuma(def, nvcpus); + return; + } + if (curvcpus < nvcpus) { for (i =3D 0; i < maxvcpus; i++) { vcpu =3D virDomainDefGetVcpu(def, i); 
-- 
2.21.0

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list