From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Tue, 23 Jul 2019 19:56:56 +0000
Message-ID: <20190723195634.9171-1-jfehlig@suse.com>
Subject: [libvirt] [PATCH RESEND] qemu: Add support for overriding max threads per process limit
List-Id: Development discussions about the libvirt library & tools

Some VM configurations may result in a large number of threads created by
the associated qemu process, which can exceed the system default limit.
The maximum number of threads allowed per process is controlled by the
pids cgroup controller and is set to 16k when creating VMs with systemd's
machined service. The maximum number of threads per process is recorded
in the pids.max file under the machine's pids controller cgroup
hierarchy, e.g.

$cgrp-mnt/pids/machine.slice/machine-qemu\x2d1\x2dtest.scope/pids.max

Maximum threads per process is controlled with the TasksMax property of
the systemd scope for the machine. This patch adds an option to qemu.conf
which can be used to override the maximum number of threads allowed per
qemu process. If the value of the option is greater than zero, it will be
set in the TasksMax property of the machine's scope after creating the
machine.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrangé
---
Rebase and resend of
https://www.redhat.com/archives/libvir-list/2019-June/msg00185.html

 src/lxc/lxc_cgroup.c               |  1 +
 src/qemu/libvirtd_qemu.aug         |  1 +
 src/qemu/qemu.conf                 | 10 ++++++++++
 src/qemu/qemu_cgroup.c             |  1 +
 src/qemu/qemu_conf.c               |  2 ++
 src/qemu/qemu_conf.h               |  1 +
 src/qemu/test_libvirtd_qemu.aug.in |  1 +
 src/util/vircgroup.c               |  6 +++++-
 src/util/vircgroup.h               |  1 +
 src/util/virsystemd.c              | 24 +++++++++++++++++++++++-
 src/util/virsystemd.h              |  3 ++-
 tests/virsystemdtest.c             | 12 ++++++------
 12 files changed, 54 insertions(+), 9 deletions(-)

diff --git a/src/lxc/lxc_cgroup.c b/src/lxc/lxc_cgroup.c
index d93a19d684..76014f3bfd 100644
--- a/src/lxc/lxc_cgroup.c
+++ b/src/lxc/lxc_cgroup.c
@@ -455,6 +455,7 @@ virCgroupPtr virLXCCgroupCreate(virDomainDefPtr def,
                                nnicindexes, nicindexes,
                                def->resource->partition,
                                -1,
+                               0,
                                &cgroup) < 0)
         goto cleanup;
 
diff --git a/src/qemu/libvirtd_qemu.aug b/src/qemu/libvirtd_qemu.aug
index eea9094d39..2a99a0c55f 100644
--- a/src/qemu/libvirtd_qemu.aug
+++ b/src/qemu/libvirtd_qemu.aug
@@ -95,6 +95,7 @@ module Libvirtd_qemu =
                  | limits_entry "max_core"
                  | bool_entry "dump_guest_core"
                  | str_entry "stdio_handler"
+                 | int_entry "max_threads_per_process"
 
    let device_entry = bool_entry "mac_filter"
                  | bool_entry "relaxed_acs_check"
diff --git a/src/qemu/qemu.conf b/src/qemu/qemu.conf
index fd2ed9dc21..8cabeccacb 100644
--- a/src/qemu/qemu.conf
+++ b/src/qemu/qemu.conf
@@ -613,6 +613,16 @@
 #max_processes = 0
 #max_files = 0
 
+# If max_threads_per_process is set to a positive integer, libvirt
+# will use it to set the maximum number of threads that can be
+# created by a qemu process. Some VM configurations can result in
+# qemu processes with tens of thousands of threads. systemd-based
+# systems typically limit the number of threads per process to
+# 16k. max_threads_per_process can be used to override default
+# limits in the host OS.
+#
+#max_threads_per_process = 0
+
 # If max_core is set to a non-zero integer, then QEMU will be
 # permitted to create core dumps when it crashes, provided its
 # RAM size is smaller than the limit set.
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 19ca60905a..ecd96efb0a 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -929,6 +929,7 @@ qemuInitCgroup(virDomainObjPtr vm,
                            nnicindexes, nicindexes,
                            vm->def->resource->partition,
                            cfg->cgroupControllers,
+                           cfg->maxThreadsPerProc,
                            &priv->cgroup) < 0) {
         if (virCgroupNewIgnoreError())
             goto done;
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index e0195dac29..71d0464c0d 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -670,6 +670,8 @@ virQEMUDriverConfigLoadProcessEntry(virQEMUDriverConfigPtr cfg,
         return -1;
     if (virConfGetValueUInt(conf, "max_files", &cfg->maxFiles) < 0)
         return -1;
+    if (virConfGetValueUInt(conf, "max_threads_per_process", &cfg->maxThreadsPerProc) < 0)
+        return -1;
 
     if (virConfGetValueType(conf, "max_core") == VIR_CONF_STRING) {
         if (virConfGetValueString(conf, "max_core", &corestr) < 0)
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index 2229b76e89..d8e3bfe87c 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -162,6 +162,7 @@ struct _virQEMUDriverConfig {
 
     unsigned int maxProcesses;
     unsigned int maxFiles;
+    unsigned int maxThreadsPerProc;
     unsigned long long maxCore;
     bool dumpGuestCore;
 
diff --git a/src/qemu/test_libvirtd_qemu.aug.in b/src/qemu/test_libvirtd_qemu.aug.in
index 388ba24b8b..b3b44d42d9 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -76,6 +76,7 @@ module Test_libvirtd_qemu =
 { "set_process_name" = "1" }
 { "max_processes" = "0" }
 { "max_files" = "0" }
+{ "max_threads_per_process" = "0" }
 { "max_core" = "unlimited" }
 { "dump_guest_core" = "1" }
 { "mac_filter" = "1" }
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index f7afc2964d..9daf62795e 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -1118,6 +1118,7 @@ virCgroupNewMachineSystemd(const char *name,
                            int *nicindexes,
                            const char *partition,
                            int controllers,
+                           unsigned int maxthreads,
                            virCgroupPtr *group)
 {
     int rv;
@@ -1134,7 +1135,8 @@ virCgroupNewMachineSystemd(const char *name,
                                        isContainer,
                                        nnicindexes,
                                        nicindexes,
-                                       partition)) < 0)
+                                       partition,
+                                       maxthreads)) < 0)
         return rv;
 
     if (controllers != -1)
@@ -1246,6 +1248,7 @@ virCgroupNewMachine(const char *name,
                     int *nicindexes,
                     const char *partition,
                     int controllers,
+                    unsigned int maxthreads,
                     virCgroupPtr *group)
 {
     int rv;
@@ -1262,6 +1265,7 @@ virCgroupNewMachine(const char *name,
                                        nicindexes,
                                        partition,
                                        controllers,
+                                       maxthreads,
                                        group)) == 0)
         return 0;
 
diff --git a/src/util/vircgroup.h b/src/util/vircgroup.h
index 2f68fdb685..3eefe78787 100644
--- a/src/util/vircgroup.h
+++ b/src/util/vircgroup.h
@@ -98,6 +98,7 @@ int virCgroupNewMachine(const char *name,
                         int *nicindexes,
                         const char *partition,
                         int controllers,
+                        unsigned int maxthreads,
                         virCgroupPtr *group)
     ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2)
     ATTRIBUTE_NONNULL(3);
diff --git a/src/util/virsystemd.c b/src/util/virsystemd.c
index f6c5adc5ef..1cb8874403 100644
--- a/src/util/virsystemd.c
+++ b/src/util/virsystemd.c
@@ -252,12 +252,14 @@ int virSystemdCreateMachine(const char *name,
                             bool iscontainer,
                             size_t nnicindexes,
                             int *nicindexes,
-                            const char *partition)
+                            const char *partition,
+                            unsigned int maxthreads)
 {
     int ret;
     DBusConnection *conn;
     char *creatorname = NULL;
     char *slicename = NULL;
+    char *scopename = NULL;
     static int hasCreateWithNetwork = 1;
 
     if ((ret = virSystemdHasMachined()) < 0)
@@ -403,11 +405,31 @@ int virSystemdCreateMachine(const char *name,
             goto cleanup;
     }
 
+    if (maxthreads > 0) {
+        if (!(scopename = virSystemdMakeScopeName(name, drivername, false)))
+            goto cleanup;
+
+        if (virDBusCallMethod(conn,
+                              NULL,
+                              NULL,
+                              "org.freedesktop.systemd1",
+                              "/org/freedesktop/systemd1",
+                              "org.freedesktop.systemd1.Manager",
+                              "SetUnitProperties",
+                              "sba(sv)",
+                              scopename,
+                              true,
+                              1,
+                              "TasksMax", "t", (uint64_t)maxthreads) < 0)
+            goto cleanup;
+    }
+
     ret = 0;
 
  cleanup:
     VIR_FREE(creatorname);
     VIR_FREE(slicename);
+    VIR_FREE(scopename);
     return ret;
 }
 
diff --git a/src/util/virsystemd.h b/src/util/virsystemd.h
index 5d56c78835..96626f8fff 100644
--- a/src/util/virsystemd.h
+++ b/src/util/virsystemd.h
@@ -50,7 +50,8 @@ int virSystemdCreateMachine(const char *name,
                             bool iscontainer,
                             size_t nnicindexes,
                             int *nicindexes,
-                            const char *partition);
+                            const char *partition,
+                            unsigned int maxthreads);
 
 int virSystemdTerminateMachine(const char *name);
 
diff --git a/tests/virsystemdtest.c b/tests/virsystemdtest.c
index 7aaa8f97fa..340b038095 100644
--- a/tests/virsystemdtest.c
+++ b/tests/virsystemdtest.c
@@ -175,7 +175,7 @@ static int testCreateContainer(const void *opaque ATTRIBUTE_UNUSED)
                                   123, true,
                                   0, NULL,
-                                  "highpriority.slice") < 0) {
+                                  "highpriority.slice", 0) < 0) {
         fprintf(stderr, "%s", "Failed to create LXC machine\n");
         return -1;
     }
@@ -208,7 +208,7 @@ static int testCreateMachine(const void *opaque ATTRIBUTE_UNUSED)
                                   123, false,
                                   0, NULL,
-                                  NULL) < 0) {
+                                  NULL, 0) < 0) {
         fprintf(stderr, "%s", "Failed to create KVM machine\n");
         return -1;
     }
@@ -245,7 +245,7 @@ static int testCreateNoSystemd(const void *opaque ATTRIBUTE_UNUSED)
                                   123, false,
                                   0, NULL,
-                                  NULL)) == 0) {
+                                  NULL, 0)) == 0) {
         unsetenv("FAIL_NO_SERVICE");
         fprintf(stderr, "%s", "Unexpected create machine success\n");
         return -1;
@@ -279,7 +279,7 @@ static int testCreateSystemdNotRunning(const void *opaque ATTRIBUTE_UNUSED)
                                   123, false,
                                   0, NULL,
-                                  NULL)) == 0) {
+                                  NULL, 0)) == 0) {
         unsetenv("FAIL_NOT_REGISTERED");
         fprintf(stderr, "%s", "Unexpected create machine success\n");
         return -1;
@@ -313,7 +313,7 @@ static int testCreateBadSystemd(const void *opaque ATTRIBUTE_UNUSED)
                                   123, false,
                                   0, NULL,
-                                  NULL)) == 0) {
+                                  NULL, 0)) == 0) {
         unsetenv("FAIL_BAD_SERVICE");
         fprintf(stderr, "%s", "Unexpected create machine success\n");
         return -1;
@@ -348,7 +348,7 @@ static int testCreateNetwork(const void *opaque ATTRIBUTE_UNUSED)
                                   123, true,
                                   nnicindexes, nicindexes,
-                                  "highpriority.slice") < 0) {
+                                  "highpriority.slice", 0) < 0) {
         fprintf(stderr, "%s", "Failed to create LXC machine\n");
         return -1;
     }
-- 
2.22.0

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list