From nobody Thu Apr 18 19:42:05 2024
From: Michal Privoznik
To: libvir-list@redhat.com
Subject: [PATCH 1/2] qemu_domain: Account for NVMe disks when calculating memlock limit on hotplug
Date: Tue, 9 May 2023 16:38:52 +0200
Message-Id: <40eba65948e59271e3b4ffe53181615fb032967a.1683643108.git.mprivozn@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

During hotplug of an NVMe disk we need to adjust the memlock limit.
The computation of the limit is handled by
qemuDomainGetMemLockLimitBytes(), which looks at the given domain
definition and accounts for various device types (as different types
require different amounts). But during disk hotplug the disk is not
added to the domain definition until the very last moment. Therefore,
qemuDomainGetMemLockLimitBytes() has the @forceVFIO argument which
tells it to assume VFIO even if there are no signs of VFIO in the
domain definition. And this kind of worked, until the amount needed
for NVMe disks changed (in v9.3.0-rc1~52). What's missing in that
commit is making @forceVFIO behave the same as if there was an NVMe
disk present in the domain definition.

But we can do even better - just mimic whatever we're doing for
hostdevs. IOW - introduce qemuDomainAdjustMaxMemLockNVMe(), which
behaves the same as qemuDomainAdjustMaxMemLockHostdev(). There are
subtle differences, though:

1) qemuDomainAdjustMaxMemLockHostdev() can afford placing the hostdev
   right at the end of vm->def->hostdevs, because the array was
   already reallocated (at the beginning of
   qemuDomainAttachHostPCIDevice()). But
   qemuDomainAdjustMaxMemLockNVMe() doesn't have that luxury.

2) qemuDomainAdjustMaxMemLockHostdev() places a virDomainHostdevDef
   pointer into the domain definition, while
   qemuDomainStorageSourceAccessModifyNVMe() (which calls
   qemuDomainAdjustMaxMemLock()) sees a virStorageSource pointer,
   whereas the domain definition contains virDomainDiskDef. But
   that's okay - we can create a dummy disk definition and append it
   to the domain definition.

After this, qemuDomainAdjustMaxMemLock() can be called with
@forceVFIO = false, as the disk is now part of the domain definition
(when computing the new limit).
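For readability, here is the new helper from the hunk below with the
diff markers stripped - a sketch of the append-adjust-restore idiom
described above (the diff itself remains authoritative):

  /* Wrap @src in a dummy disk definition, temporarily append it to
   * vm->def->disks so the limit calculation can see it, recompute
   * the limit, then drop the dummy disk again. The dummy disk is
   * plain-freed via g_autofree, so @src itself is left untouched. */
  int
  qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
                                 virStorageSource *src)
  {
      g_autofree virDomainDiskDef *disk = NULL;
      int ret = 0;

      disk = g_new0(virDomainDiskDef, 1);
      disk->src = src;

      VIR_APPEND_ELEMENT_COPY(vm->def->disks, vm->def->ndisks, disk);

      if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
          ret = -1;

      VIR_DELETE_ELEMENT_INPLACE(vm->def->disks, vm->def->ndisks - 1,
                                 vm->def->ndisks);

      return ret;
  }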
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2014030#c28
Signed-off-by: Michal Privoznik
Reviewed-by: Martin Kletzander
---
 src/qemu/qemu_domain.c | 35 ++++++++++++++++++++++++++++++++++-
 src/qemu/qemu_domain.h |  3 +++
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index d556e2186c..b5b4184782 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -8026,7 +8026,7 @@ qemuDomainStorageSourceAccessModifyNVMe(virQEMUDriver *driver,
         goto revoke;
     }
 
-    if (qemuDomainAdjustMaxMemLock(vm, true) < 0)
+    if (qemuDomainAdjustMaxMemLockNVMe(vm, src) < 0)
         goto revoke;
 
     revoke_maxmemlock = true;
@@ -9779,6 +9779,39 @@ qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
 }
 
 
+/**
+ * qemuDomainAdjustMaxMemLockNVMe:
+ * @vm: domain object
+ * @src: disk source
+ *
+ * Temporarily add the disk source to the domain definition,
+ * adjust the max memlock based on this new definition and
+ * restore the original definition.
+ *
+ * Returns: 0 on success,
+ *         -1 on failure.
+ */
+int
+qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
+                               virStorageSource *src)
+{
+    g_autofree virDomainDiskDef *disk = NULL;
+    int ret = 0;
+
+    disk = g_new0(virDomainDiskDef, 1);
+    disk->src = src;
+
+    VIR_APPEND_ELEMENT_COPY(vm->def->disks, vm->def->ndisks, disk);
+
+    if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        ret = -1;
+
+    VIR_DELETE_ELEMENT_INPLACE(vm->def->disks, vm->def->ndisks - 1, vm->def->ndisks);
+
+    return ret;
+}
+
+
 /**
  * qemuDomainHasVcpuPids:
  * @vm: Domain object
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index eaa75de3e5..ee2ddda079 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -41,6 +41,7 @@
 #include "virdomainmomentobjlist.h"
 #include "virenum.h"
 #include "vireventthread.h"
+#include "storage_source_conf.h"
 
 #define QEMU_DOMAIN_FORMAT_LIVE_FLAGS \
     (VIR_DOMAIN_XML_SECURE)
@@ -859,6 +860,8 @@ int qemuDomainAdjustMaxMemLock(virDomainObj *vm,
                                bool forceVFIO);
 int qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
                                       virDomainHostdevDef *hostdev);
+int qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
+                                   virStorageSource *src);
 int qemuDomainSetMaxMemLock(virDomainObj *vm,
                             unsigned long long limit,
                             unsigned long long *origPtr);
-- 
2.39.3


From nobody Thu Apr 18 19:42:05 2024
From: Michal Privoznik
To: libvir-list@redhat.com
Subject: [PATCH 2/2] qemu: Drop @forceVFIO argument of qemuDomainGetMemLockLimitBytes() and qemuDomainAdjustMaxMemLock()
Date: Tue, 9 May 2023 16:38:53 +0200
Message-Id: <21bc8622c85cf3adad1aa8c1f350395048d29c02.1683643108.git.mprivozn@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

After the previous cleanup, there's not a single caller left that
would call either qemuDomainGetMemLockLimitBytes() or
qemuDomainAdjustMaxMemLock() with @forceVFIO set. All callers pass
false. Drop the unneeded argument from both functions.

Signed-off-by: Michal Privoznik
Reviewed-by: Martin Kletzander
---
 src/qemu/qemu_domain.c  | 42 ++++++++++++++++------------------------
 src/qemu/qemu_domain.h  |  6 ++----
 src/qemu/qemu_hotplug.c | 16 ++++++++--------
 src/qemu/qemu_process.c |  2 +-
 tests/qemumemlocktest.c |  2 +-
 5 files changed, 28 insertions(+), 40 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index b5b4184782..fac611d920 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -8040,7 +8040,7 @@ qemuDomainStorageSourceAccessModifyNVMe(virQEMUDriver *driver,
 
 revoke:
     if (revoke_maxmemlock) {
-        if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        if (qemuDomainAdjustMaxMemLock(vm) < 0)
             VIR_WARN("Unable to change max memlock limit");
     }
 
@@ -9403,14 +9403,12 @@ ppc64VFIODeviceIsNV2Bridge(const char *device)
 /**
  * getPPC64MemLockLimitBytes:
  * @def: domain definition
- * @forceVFIO: force VFIO usage
  *
  * A PPC64 helper that calculates the memory locking limit in order for
  * the guest to operate properly.
  */
 static unsigned long long
-getPPC64MemLockLimitBytes(virDomainDef *def,
-                          bool forceVFIO)
+getPPC64MemLockLimitBytes(virDomainDef *def)
 {
     unsigned long long memKB = 0;
     unsigned long long baseLimit = 0;
@@ -9472,10 +9470,10 @@ getPPC64MemLockLimitBytes(virDomainDef *def,
                 8192;
 
     /* NVLink2 support in QEMU is a special case of the passthrough
-     * mechanics explained in the forceVFIO case below. The GPU RAM
-     * is placed with a gap after maxMemory. The current QEMU
-     * implementation puts the NVIDIA RAM above the PCI MMIO, which
-     * starts at 32TiB and is the MMIO reserved for the guest main RAM.
+     * mechanics explained below. The GPU RAM is placed with a gap after
+     * maxMemory. The current QEMU implementation puts the NVIDIA RAM
+     * above the PCI MMIO, which starts at 32TiB and is the MMIO
+     * reserved for the guest main RAM.
      *
      * This window ends at 64TiB, and this is where the GPUs are being
      * placed. The next available window size is at 128TiB, and
@@ -9496,7 +9494,7 @@ getPPC64MemLockLimitBytes(virDomainDef *def,
         passthroughLimit = maxMemory +
                            128 * (1ULL<<30) / 512 * nPCIHostBridges +
                            8192;
-    } else if (forceVFIO || qemuDomainNeedsVFIO(def) || virDomainDefHasVDPANet(def)) {
+    } else if (qemuDomainNeedsVFIO(def) || virDomainDefHasVDPANet(def)) {
         /* For regular (non-NVLink2 present) VFIO passthrough, the value
          * of passthroughLimit is:
          *
@@ -9580,20 +9578,16 @@ qemuDomainGetNumVDPANetDevices(const virDomainDef *def)
 /**
  * qemuDomainGetMemLockLimitBytes:
  * @def: domain definition
- * @forceVFIO: force VFIO calculation
  *
  * Calculate the memory locking limit that needs to be set in order for
  * the guest to operate properly. The limit depends on a number of factors,
  * including certain configuration options and less immediately apparent ones
  * such as the guest architecture or the use of certain devices.
- * The @forceVFIO argument can be used to tell this function will use VFIO even
- * though @def doesn't indicates so right now.
  *
  * Returns: the memory locking limit, or 0 if setting the limit is not needed
  */
 unsigned long long
-qemuDomainGetMemLockLimitBytes(virDomainDef *def,
-                               bool forceVFIO)
+qemuDomainGetMemLockLimitBytes(virDomainDef *def)
 {
     unsigned long long memKB = 0;
     int nvfio;
@@ -9615,7 +9609,7 @@ qemuDomainGetMemLockLimitBytes(virDomainDef *def,
         return VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
 
     if (ARCH_IS_PPC64(def->os.arch) && def->virtType == VIR_DOMAIN_VIRT_KVM)
-        return getPPC64MemLockLimitBytes(def, forceVFIO);
+        return getPPC64MemLockLimitBytes(def);
 
     nvfio = qemuDomainGetNumVFIOHostdevs(def);
     nnvme = qemuDomainGetNumNVMeDisks(def);
@@ -9638,7 +9632,7 @@ qemuDomainGetMemLockLimitBytes(virDomainDef *def,
      *
      * Note that this may not be valid for all platforms.
      */
-    if (forceVFIO || nvfio || nnvme || nvdpa) {
+    if (nvfio || nnvme || nvdpa) {
         /* At present, the full memory needs to be locked for each VFIO / VDPA
          * NVMe device. For VFIO devices, this only applies when there is a
          * vIOMMU present. Yes, this may result in a memory limit that is
@@ -9650,7 +9644,7 @@ qemuDomainGetMemLockLimitBytes(virDomainDef *def,
          */
         int factor = nvdpa + nnvme;
 
-        if (nvfio || forceVFIO) {
+        if (nvfio) {
             if (nvfio && def->iommu)
                 factor += nvfio;
             else
@@ -9726,12 +9720,9 @@ qemuDomainSetMaxMemLock(virDomainObj *vm,
 /**
  * qemuDomainAdjustMaxMemLock:
  * @vm: domain
- * @forceVFIO: apply VFIO requirements even if vm's def doesn't require it
  *
  * Adjust the memory locking limit for the QEMU process associated to @vm, in
- * order to comply with VFIO or architecture requirements. If @forceVFIO is
- * true then the limit is changed even if nothing in @vm's definition indicates
- * so.
+ * order to comply with VFIO or architecture requirements.
  *
  * The limit will not be changed unless doing so is needed; the first time
  * the limit is changed, the original (default) limit is stored in @vm and
@@ -9741,11 +9732,10 @@ qemuDomainSetMaxMemLock(virDomainObj *vm,
  * Returns: 0 on success, <0 on failure
  */
 int
-qemuDomainAdjustMaxMemLock(virDomainObj *vm,
-                           bool forceVFIO)
+qemuDomainAdjustMaxMemLock(virDomainObj *vm)
 {
     return qemuDomainSetMaxMemLock(vm,
-                                   qemuDomainGetMemLockLimitBytes(vm->def, forceVFIO),
+                                   qemuDomainGetMemLockLimitBytes(vm->def),
                                    &QEMU_DOMAIN_PRIVATE(vm)->originalMemlock);
 }
 
@@ -9770,7 +9760,7 @@ qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
     int ret = 0;
 
     vm->def->hostdevs[vm->def->nhostdevs++] = hostdev;
-    if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+    if (qemuDomainAdjustMaxMemLock(vm) < 0)
         ret = -1;
 
     vm->def->hostdevs[--(vm->def->nhostdevs)] = NULL;
@@ -9803,7 +9793,7 @@ qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
 
     VIR_APPEND_ELEMENT_COPY(vm->def->disks, vm->def->ndisks, disk);
 
-    if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+    if (qemuDomainAdjustMaxMemLock(vm) < 0)
         ret = -1;
 
     VIR_DELETE_ELEMENT_INPLACE(vm->def->disks, vm->def->ndisks - 1, vm->def->ndisks);
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index ee2ddda079..ec9ae75bce 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -854,10 +854,8 @@ bool qemuDomainSupportsPCI(virDomainDef *def,
 
 void qemuDomainUpdateCurrentMemorySize(virDomainObj *vm);
 
-unsigned long long qemuDomainGetMemLockLimitBytes(virDomainDef *def,
-                                                  bool forceVFIO);
-int qemuDomainAdjustMaxMemLock(virDomainObj *vm,
-                               bool forceVFIO);
+unsigned long long qemuDomainGetMemLockLimitBytes(virDomainDef *def);
+int qemuDomainAdjustMaxMemLock(virDomainObj *vm);
 int qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
                                       virDomainHostdevDef *hostdev);
 int qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
                                    virStorageSource *src);
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index 54b5a2c2c9..d5148f5815 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -1244,7 +1244,7 @@ qemuDomainAttachNetDevice(virQEMUDriver *driver,
         break;
 
     case VIR_DOMAIN_NET_TYPE_VDPA:
-        if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        if (qemuDomainAdjustMaxMemLock(vm) < 0)
             goto cleanup;
         adjustmemlock = true;
         break;
@@ -1417,7 +1417,7 @@ qemuDomainAttachNetDevice(virQEMUDriver *driver,
      * after all
      */
     if (adjustmemlock)
-        qemuDomainAdjustMaxMemLock(vm, false);
+        qemuDomainAdjustMaxMemLock(vm);
 
     if (net->type == VIR_DOMAIN_NET_TYPE_NETWORK) {
         if (conn)
@@ -1564,7 +1564,7 @@ qemuDomainAttachHostPCIDevice(virQEMUDriver *driver,
     if (teardowndevice &&
         qemuDomainNamespaceTeardownHostdev(vm, hostdev) < 0)
         VIR_WARN("Unable to remove host device from /dev");
-    if (teardownmemlock && qemuDomainAdjustMaxMemLock(vm, false) < 0)
+    if (teardownmemlock && qemuDomainAdjustMaxMemLock(vm) < 0)
         VIR_WARN("Unable to reset maximum locked memory on hotplug fail");
 
     if (releaseaddr)
@@ -2291,7 +2291,7 @@ qemuDomainAttachMemory(virQEMUDriver *driver,
     if (virDomainMemoryInsert(vm->def, mem) < 0)
         goto cleanup;
 
-    if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+    if (qemuDomainAdjustMaxMemLock(vm) < 0)
         goto removedef;
 
     qemuDomainObjEnterMonitor(vm);
@@ -2357,7 +2357,7 @@ qemuDomainAttachMemory(virQEMUDriver *driver,
 
     /* reset the mlock limit */
     virErrorPreserveLast(&orig_err);
-    ignore_value(qemuDomainAdjustMaxMemLock(vm, false));
+    ignore_value(qemuDomainAdjustMaxMemLock(vm));
     virErrorRestore(&orig_err);
 
     goto audit;
@@ -2720,7 +2720,7 @@ qemuDomainAttachMediatedDevice(virQEMUDriver *driver,
     ret = 0;
 cleanup:
     if (ret < 0) {
-        if (teardownmemlock && qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        if (teardownmemlock && qemuDomainAdjustMaxMemLock(vm) < 0)
             VIR_WARN("Unable to reset maximum locked memory on hotplug fail");
         if (teardowncgroup && qemuTeardownHostdevCgroup(vm, hostdev) < 0)
             VIR_WARN("Unable to remove host device cgroup ACL on hotplug fail");
@@ -4583,7 +4583,7 @@ qemuDomainRemoveMemoryDevice(virQEMUDriver *driver,
     ignore_value(qemuProcessRefreshBalloonState(vm, VIR_ASYNC_JOB_NONE));
 
     /* decrease the mlock limit after memory unplug if necessary */
-    ignore_value(qemuDomainAdjustMaxMemLock(vm, false));
+    ignore_value(qemuDomainAdjustMaxMemLock(vm));
 
     return 0;
 }
@@ -4690,7 +4690,7 @@ qemuDomainRemoveHostDevice(virQEMUDriver *driver,
         qemuDomainRemovePCIHostDevice(driver, vm, hostdev);
         /* QEMU might no longer need to lock as much memory, eg. we just
          * detached the last VFIO device, so adjust the limit here */
-        if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        if (qemuDomainAdjustMaxMemLock(vm) < 0)
             VIR_WARN("Failed to adjust locked memory limit");
         break;
     case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_USB:
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 952814d663..721b379381 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -7656,7 +7656,7 @@ qemuProcessLaunch(virConnectPtr conn,
 
     /* In some situations, eg. VFIO passthrough, QEMU might need to lock a
      * significant amount of memory, so we need to set the limit accordingly */
-    maxMemLock = qemuDomainGetMemLockLimitBytes(vm->def, false);
+    maxMemLock = qemuDomainGetMemLockLimitBytes(vm->def);
 
     /* For all these settings, zero indicates that the limit should
      * not be set explicitly and the default/inherited limit should
diff --git a/tests/qemumemlocktest.c b/tests/qemumemlocktest.c
index c53905a7dd..7d219fcc40 100644
--- a/tests/qemumemlocktest.c
+++ b/tests/qemumemlocktest.c
@@ -39,7 +39,7 @@ testCompareMemLock(const void *data)
         return -1;
     }
 
-    return virTestCompareToULL(info->memlock, qemuDomainGetMemLockLimitBytes(def, false));
+    return virTestCompareToULL(info->memlock, qemuDomainGetMemLockLimitBytes(def));
 }
 
 static int
-- 
2.39.3