From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Thu, 6 Jun 2019 11:40:41 -0600
Message-Id: <20190606174041.31171-1-jfehlig@suse.com>
Subject: [libvirt] [PATCH] qemu: Add support for overriding max threads per process limit

Some VM configurations may result in a large number of threads created by
the associated qemu process, which can exceed the system default limit.
The maximum number of threads allowed per process is controlled by the
pids cgroup controller and is set to 16k when creating VMs with systemd's
machined service. The per-machine limit is recorded in the pids.max file
under the machine's pids controller cgroup hierarchy, e.g.

  $cgrp-mnt/pids/machine.slice/machine-qemu\\x2d1\\x2dtest.scope/pids.max

and is controlled with the TasksMax property of the systemd scope for the
machine.

This patch adds an option to qemu.conf that can be used to override the
maximum number of threads allowed per qemu process. If the value of the
option is greater than zero, it will be set in the TasksMax property of
the machine's scope after creating the machine.
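To illustrate how the option might be used (the values and names here are
illustrative assumptions, not part of the patch: a limit of 32768, a domain
named "test", and a cgroup v1 pids controller mounted under /sys/fs/cgroup):

  # /etc/libvirt/qemu.conf: allow each qemu process up to 32768 threads
  max_threads_per_process = 32768

After restarting libvirtd and starting the domain, the override should be
visible both in the scope's TasksMax property and in the machine's pids.max:

  systemctl show -p TasksMax machine-qemu\\x2d1\\x2dtest.scope
  cat /sys/fs/cgroup/pids/machine.slice/machine-qemu\\x2d1\\x2dtest.scope/pids.max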
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
 src/lxc/lxc_cgroup.c               |  1 +
 src/qemu/libvirtd_qemu.aug         |  1 +
 src/qemu/qemu.conf                 | 10 ++++++++++
 src/qemu/qemu_cgroup.c             |  1 +
 src/qemu/qemu_conf.c               |  2 ++
 src/qemu/qemu_conf.h               |  1 +
 src/qemu/test_libvirtd_qemu.aug.in |  1 +
 src/util/vircgroup.c               |  6 +++++-
 src/util/vircgroup.h               |  1 +
 src/util/virsystemd.c              | 24 +++++++++++++++++++++++-
 src/util/virsystemd.h              |  3 ++-
 tests/virsystemdtest.c             | 12 ++++++------
 12 files changed, 54 insertions(+), 9 deletions(-)

diff --git a/src/lxc/lxc_cgroup.c b/src/lxc/lxc_cgroup.c
index d93a19d684..76014f3bfd 100644
--- a/src/lxc/lxc_cgroup.c
+++ b/src/lxc/lxc_cgroup.c
@@ -455,6 +455,7 @@ virCgroupPtr virLXCCgroupCreate(virDomainDefPtr def,
                                 nnicindexes, nicindexes,
                                 def->resource->partition,
                                 -1,
+                                0,
                                 &cgroup) < 0)
         goto cleanup;

diff --git a/src/qemu/libvirtd_qemu.aug b/src/qemu/libvirtd_qemu.aug
index b311f02da6..c70b903fed 100644
--- a/src/qemu/libvirtd_qemu.aug
+++ b/src/qemu/libvirtd_qemu.aug
@@ -94,6 +94,7 @@ module Libvirtd_qemu =
                  | limits_entry "max_core"
                  | bool_entry "dump_guest_core"
                  | str_entry "stdio_handler"
+                 | int_entry "max_threads_per_process"

    let device_entry = bool_entry "mac_filter"
                     | bool_entry "relaxed_acs_check"
diff --git a/src/qemu/qemu.conf b/src/qemu/qemu.conf
index 5a85789d81..ab044c9cf3 100644
--- a/src/qemu/qemu.conf
+++ b/src/qemu/qemu.conf
@@ -608,6 +608,16 @@
 #max_processes = 0
 #max_files = 0

+# If max_threads_per_process is set to a positive integer, libvirt
+# will use it to set the maximum number of threads that can be
+# created by a qemu process. Some VM configurations can result in
+# qemu processes with tens of thousands of threads. systemd-based
+# systems typically limit the number of threads per process to
+# 16k. max_threads_per_process can be used to override default
+# limits in the host OS.
+#
+#max_threads_per_process = 0
+
 # If max_core is set to a non-zero integer, then QEMU will be
 # permitted to create core dumps when it crashes, provided its
 # RAM size is smaller than the limit set.
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index ca76c4fdfa..9603f33e8a 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -930,6 +930,7 @@ qemuInitCgroup(virDomainObjPtr vm,
                                nnicindexes, nicindexes,
                                vm->def->resource->partition,
                                cfg->cgroupControllers,
+                               cfg->maxThreadsPerProc,
                                &priv->cgroup) < 0) {
         if (virCgroupNewIgnoreError())
             goto done;
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index daea11dacb..8ac2dc92b5 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -687,6 +687,8 @@ virQEMUDriverConfigLoadProcessEntry(virQEMUDriverConfigPtr cfg,
         return -1;
     if (virConfGetValueUInt(conf, "max_files", &cfg->maxFiles) < 0)
         return -1;
+    if (virConfGetValueUInt(conf, "max_threads_per_process", &cfg->maxThreadsPerProc) < 0)
+        return -1;

     if (virConfGetValueType(conf, "max_core") == VIR_CONF_STRING) {
         if (virConfGetValueString(conf, "max_core", &corestr) < 0)
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index 983e74a3cf..48b8711cbd 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -171,6 +171,7 @@ struct _virQEMUDriverConfig {

     unsigned int maxProcesses;
     unsigned int maxFiles;
+    unsigned int maxThreadsPerProc;
     unsigned long long maxCore;
     bool dumpGuestCore;

diff --git a/src/qemu/test_libvirtd_qemu.aug.in b/src/qemu/test_libvirtd_qemu.aug.in
index fea1d308b7..ac7ad59ba8 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -75,6 +75,7 @@ module Test_libvirtd_qemu =
 { "set_process_name" = "1" }
 { "max_processes" = "0" }
 { "max_files" = "0" }
+{ "max_threads_per_process" = "0" }
 { "max_core" = "unlimited" }
 { "dump_guest_core" = "1" }
 { "mac_filter" = "1" }
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index f58e336404..c31c34e5f8 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -1106,6 +1106,7 @@ virCgroupNewMachineSystemd(const char *name,
                            int *nicindexes,
                            const char *partition,
                            int controllers,
+                           unsigned int maxthreads,
                            virCgroupPtr *group)
 {
     int rv;
@@ -1122,7 +1123,8 @@ virCgroupNewMachineSystemd(const char *name,
                                       isContainer,
                                       nnicindexes,
                                       nicindexes,
-                                      partition)) < 0)
+                                      partition,
+                                      maxthreads)) < 0)
         return rv;

     if (controllers != -1)
@@ -1234,6 +1236,7 @@ virCgroupNewMachine(const char *name,
                     int *nicindexes,
                     const char *partition,
                     int controllers,
+                    unsigned int maxthreads,
                     virCgroupPtr *group)
 {
     int rv;
@@ -1250,6 +1253,7 @@ virCgroupNewMachine(const char *name,
                                        nicindexes,
                                        partition,
                                        controllers,
+                                       maxthreads,
                                        group)) == 0)
         return 0;

diff --git a/src/util/vircgroup.h b/src/util/vircgroup.h
index 377e0fd870..3fb99f70ef 100644
--- a/src/util/vircgroup.h
+++ b/src/util/vircgroup.h
@@ -99,6 +99,7 @@ int virCgroupNewMachine(const char *name,
                         int *nicindexes,
                         const char *partition,
                         int controllers,
+                        unsigned int maxthreads,
                         virCgroupPtr *group)
     ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2)
     ATTRIBUTE_NONNULL(3);
diff --git a/src/util/virsystemd.c b/src/util/virsystemd.c
index 3f03e3bd63..497d100a5c 100644
--- a/src/util/virsystemd.c
+++ b/src/util/virsystemd.c
@@ -238,12 +238,14 @@ int virSystemdCreateMachine(const char *name,
                             bool iscontainer,
                             size_t nnicindexes,
                             int *nicindexes,
-                            const char *partition)
+                            const char *partition,
+                            unsigned int maxthreads)
 {
     int ret;
     DBusConnection *conn;
     char *creatorname = NULL;
     char *slicename = NULL;
+    char *scopename = NULL;
     static int hasCreateWithNetwork = 1;

     if ((ret = virSystemdHasMachined()) < 0)
@@ -389,11 +391,31 @@ int virSystemdCreateMachine(const char *name,
             goto cleanup;
     }

+    if (maxthreads > 0) {
+        if (!(scopename = virSystemdMakeScopeName(name, drivername, false)))
+            goto cleanup;
+
+        if (virDBusCallMethod(conn,
+                              NULL,
+                              NULL,
+                              "org.freedesktop.systemd1",
+                              "/org/freedesktop/systemd1",
+                              "org.freedesktop.systemd1.Manager",
+                              "SetUnitProperties",
+                              "sba(sv)",
+                              scopename,
+                              true,
+                              1,
+                              "TasksMax", "t", maxthreads) < 0)
+            goto cleanup;
+    }
+
     ret = 0;

 cleanup:
     VIR_FREE(creatorname);
     VIR_FREE(slicename);
+    VIR_FREE(scopename);
     return ret;
 }

diff --git a/src/util/virsystemd.h b/src/util/virsystemd.h
index 7d9c0ebd62..bdce3b2e9d 100644
--- a/src/util/virsystemd.h
+++ b/src/util/virsystemd.h
@@ -37,7 +37,8 @@ int virSystemdCreateMachine(const char *name,
                             bool iscontainer,
                             size_t nnicindexes,
                             int *nicindexes,
-                            const char *partition);
+                            const char *partition,
+                            unsigned int maxthreads);

 int virSystemdTerminateMachine(const char *name);

diff --git a/tests/virsystemdtest.c b/tests/virsystemdtest.c
index 82c02decd1..478fa844fa 100644
--- a/tests/virsystemdtest.c
+++ b/tests/virsystemdtest.c
@@ -172,7 +172,7 @@ static int testCreateContainer(const void *opaque ATTRIBUTE_UNUSED)
                                 123,
                                 true,
                                 0, NULL,
-                                "highpriority.slice") < 0) {
+                                "highpriority.slice", 0) < 0) {
         fprintf(stderr, "%s", "Failed to create LXC machine\n");
         return -1;
     }
@@ -205,7 +205,7 @@ static int testCreateMachine(const void *opaque ATTRIBUTE_UNUSED)
                                 123,
                                 false,
                                 0, NULL,
-                                NULL) < 0) {
+                                NULL, 0) < 0) {
         fprintf(stderr, "%s", "Failed to create KVM machine\n");
         return -1;
     }
@@ -242,7 +242,7 @@ static int testCreateNoSystemd(const void *opaque ATTRIBUTE_UNUSED)
                                  123,
                                  false,
                                  0, NULL,
-                                 NULL)) == 0) {
+                                 NULL, 0)) == 0) {
         unsetenv("FAIL_NO_SERVICE");
         fprintf(stderr, "%s", "Unexpected create machine success\n");
         return -1;
@@ -276,7 +276,7 @@ static int testCreateSystemdNotRunning(const void *opaque ATTRIBUTE_UNUSED)
                                  123,
                                  false,
                                  0, NULL,
-                                 NULL)) == 0) {
+                                 NULL, 0)) == 0) {
         unsetenv("FAIL_NOT_REGISTERED");
         fprintf(stderr, "%s", "Unexpected create machine success\n");
         return -1;
@@ -310,7 +310,7 @@ static int testCreateBadSystemd(const void *opaque ATTRIBUTE_UNUSED)
                                  123,
                                  false,
                                  0, NULL,
-                                 NULL)) == 0) {
+                                 NULL, 0)) == 0) {
         unsetenv("FAIL_BAD_SERVICE");
         fprintf(stderr, "%s", "Unexpected create machine success\n");
         return -1;
@@ -345,7 +345,7 @@ static int testCreateNetwork(const void *opaque ATTRIBUTE_UNUSED)
                                 123,
                                 true,
                                 nnicindexes, nicindexes,
-                                "highpriority.slice") < 0) {
+                                "highpriority.slice", 0) < 0) {
         fprintf(stderr, "%s", "Failed to create LXC machine\n");
         return -1;
     }
--
2.21.0

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list