From: Michal Privoznik <mprivozn@redhat.com>
To: libvir-list@redhat.com
Subject: [PATCH 2/2] qemu: Drop @forceVFIO argument of qemuDomainGetMemLockLimitBytes() and qemuDomainAdjustMaxMemLock()
Date: Tue, 9 May 2023 16:38:53 +0200
Message-Id: <21bc8622c85cf3adad1aa8c1f350395048d29c02.1683643108.git.mprivozn@redhat.com>

After the previous cleanup, there is not a single caller left that
calls either qemuDomainGetMemLockLimitBytes() or
qemuDomainAdjustMaxMemLock() with @forceVFIO set. All callers pass
false. Drop the unneeded argument from both functions.

Signed-off-by: Michal Privoznik
Reviewed-by: Martin Kletzander
---
 src/qemu/qemu_domain.c  | 42 ++++++++++++++++-------------------------
 src/qemu/qemu_domain.h  |  6 ++----
 src/qemu/qemu_hotplug.c | 16 ++++++++--------
 src/qemu/qemu_process.c |  2 +-
 tests/qemumemlocktest.c |  2 +-
 5 files changed, 28 insertions(+), 40 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index b5b4184782..fac611d920 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -8040,7 +8040,7 @@ qemuDomainStorageSourceAccessModifyNVMe(virQEMUDriver *driver,
 
 revoke:
     if (revoke_maxmemlock) {
-        if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        if (qemuDomainAdjustMaxMemLock(vm) < 0)
             VIR_WARN("Unable to change max memlock limit");
     }
 
@@ -9403,14 +9403,12 @@ ppc64VFIODeviceIsNV2Bridge(const char *device)
 /**
  * getPPC64MemLockLimitBytes:
  * @def: domain definition
- * @forceVFIO: force VFIO usage
  *
  * A PPC64 helper that calculates the memory locking limit in order for
  * the guest to operate properly.
  */
 static unsigned long long
-getPPC64MemLockLimitBytes(virDomainDef *def,
-                          bool forceVFIO)
+getPPC64MemLockLimitBytes(virDomainDef *def)
 {
     unsigned long long memKB = 0;
     unsigned long long baseLimit = 0;
@@ -9472,10 +9470,10 @@ getPPC64MemLockLimitBytes(virDomainDef *def,
                 8192;
 
     /* NVLink2 support in QEMU is a special case of the passthrough
-     * mechanics explained in the forceVFIO case below. The GPU RAM
-     * is placed with a gap after maxMemory. The current QEMU
-     * implementation puts the NVIDIA RAM above the PCI MMIO, which
-     * starts at 32TiB and is the MMIO reserved for the guest main RAM.
+     * mechanics explained below. The GPU RAM is placed with a gap after
+     * maxMemory. The current QEMU implementation puts the NVIDIA RAM
+     * above the PCI MMIO, which starts at 32TiB and is the MMIO
+     * reserved for the guest main RAM.
      *
      * This window ends at 64TiB, and this is where the GPUs are being
      * placed. The next available window size is at 128TiB, and
@@ -9496,7 +9494,7 @@ getPPC64MemLockLimitBytes(virDomainDef *def,
         passthroughLimit = maxMemory +
                            128 * (1ULL<<30) / 512 * nPCIHostBridges +
                            8192;
-    } else if (forceVFIO || qemuDomainNeedsVFIO(def) || virDomainDefHasVDPANet(def)) {
+    } else if (qemuDomainNeedsVFIO(def) || virDomainDefHasVDPANet(def)) {
         /* For regular (non-NVLink2 present) VFIO passthrough, the value
          * of passthroughLimit is:
          *
@@ -9580,20 +9578,16 @@ qemuDomainGetNumVDPANetDevices(const virDomainDef *def)
 /**
  * qemuDomainGetMemLockLimitBytes:
  * @def: domain definition
- * @forceVFIO: force VFIO calculation
  *
  * Calculate the memory locking limit that needs to be set in order for
  * the guest to operate properly. The limit depends on a number of factors,
  * including certain configuration options and less immediately apparent ones
  * such as the guest architecture or the use of certain devices.
- * The @forceVFIO argument can be used to tell this function will use VFIO even
- * though @def doesn't indicates so right now.
  *
  * Returns: the memory locking limit, or 0 if setting the limit is not needed
  */
 unsigned long long
-qemuDomainGetMemLockLimitBytes(virDomainDef *def,
-                               bool forceVFIO)
+qemuDomainGetMemLockLimitBytes(virDomainDef *def)
 {
     unsigned long long memKB = 0;
     int nvfio;
@@ -9615,7 +9609,7 @@ qemuDomainGetMemLockLimitBytes(virDomainDef *def,
         return VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
 
     if (ARCH_IS_PPC64(def->os.arch) && def->virtType == VIR_DOMAIN_VIRT_KVM)
-        return getPPC64MemLockLimitBytes(def, forceVFIO);
+        return getPPC64MemLockLimitBytes(def);
 
     nvfio = qemuDomainGetNumVFIOHostdevs(def);
     nnvme = qemuDomainGetNumNVMeDisks(def);
@@ -9638,7 +9632,7 @@ qemuDomainGetMemLockLimitBytes(virDomainDef *def,
      *
      * Note that this may not be valid for all platforms.
      */
-    if (forceVFIO || nvfio || nnvme || nvdpa) {
+    if (nvfio || nnvme || nvdpa) {
         /* At present, the full memory needs to be locked for each VFIO / VDPA
          * NVMe device. For VFIO devices, this only applies when there is a
          * vIOMMU present. Yes, this may result in a memory limit that is
@@ -9650,7 +9644,7 @@ qemuDomainGetMemLockLimitBytes(virDomainDef *def,
          */
         int factor = nvdpa + nnvme;
 
-        if (nvfio || forceVFIO) {
+        if (nvfio) {
             if (nvfio && def->iommu)
                 factor += nvfio;
             else
@@ -9726,12 +9720,9 @@ qemuDomainSetMaxMemLock(virDomainObj *vm,
 /**
  * qemuDomainAdjustMaxMemLock:
  * @vm: domain
- * @forceVFIO: apply VFIO requirements even if vm's def doesn't require it
  *
  * Adjust the memory locking limit for the QEMU process associated to @vm, in
- * order to comply with VFIO or architecture requirements. If @forceVFIO is
- * true then the limit is changed even if nothing in @vm's definition indicates
- * so.
+ * order to comply with VFIO or architecture requirements.
  *
  * The limit will not be changed unless doing so is needed; the first time
  * the limit is changed, the original (default) limit is stored in @vm and
@@ -9741,11 +9732,10 @@ qemuDomainSetMaxMemLock(virDomainObj *vm,
  * Returns: 0 on success, <0 on failure
  */
 int
-qemuDomainAdjustMaxMemLock(virDomainObj *vm,
-                           bool forceVFIO)
+qemuDomainAdjustMaxMemLock(virDomainObj *vm)
 {
     return qemuDomainSetMaxMemLock(vm,
-                                   qemuDomainGetMemLockLimitBytes(vm->def, forceVFIO),
+                                   qemuDomainGetMemLockLimitBytes(vm->def),
                                    &QEMU_DOMAIN_PRIVATE(vm)->originalMemlock);
 }
 
@@ -9770,7 +9760,7 @@ qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
     int ret = 0;
 
     vm->def->hostdevs[vm->def->nhostdevs++] = hostdev;
-    if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+    if (qemuDomainAdjustMaxMemLock(vm) < 0)
         ret = -1;
 
     vm->def->hostdevs[--(vm->def->nhostdevs)] = NULL;
@@ -9803,7 +9793,7 @@ qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
 
     VIR_APPEND_ELEMENT_COPY(vm->def->disks, vm->def->ndisks, disk);
 
-    if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+    if (qemuDomainAdjustMaxMemLock(vm) < 0)
         ret = -1;
 
     VIR_DELETE_ELEMENT_INPLACE(vm->def->disks, vm->def->ndisks - 1, vm->def->ndisks);
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index ee2ddda079..ec9ae75bce 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -854,10 +854,8 @@ bool qemuDomainSupportsPCI(virDomainDef *def,
 
 void qemuDomainUpdateCurrentMemorySize(virDomainObj *vm);
 
-unsigned long long qemuDomainGetMemLockLimitBytes(virDomainDef *def,
-                                                  bool forceVFIO);
-int qemuDomainAdjustMaxMemLock(virDomainObj *vm,
-                               bool forceVFIO);
+unsigned long long qemuDomainGetMemLockLimitBytes(virDomainDef *def);
+int qemuDomainAdjustMaxMemLock(virDomainObj *vm);
 int qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
                                       virDomainHostdevDef *hostdev);
 int qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index 54b5a2c2c9..d5148f5815 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -1244,7 +1244,7 @@ qemuDomainAttachNetDevice(virQEMUDriver *driver,
         break;
 
     case VIR_DOMAIN_NET_TYPE_VDPA:
-        if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        if (qemuDomainAdjustMaxMemLock(vm) < 0)
             goto cleanup;
         adjustmemlock = true;
         break;
@@ -1417,7 +1417,7 @@ qemuDomainAttachNetDevice(virQEMUDriver *driver,
          * after all
          */
         if (adjustmemlock)
-            qemuDomainAdjustMaxMemLock(vm, false);
+            qemuDomainAdjustMaxMemLock(vm);
 
         if (net->type == VIR_DOMAIN_NET_TYPE_NETWORK) {
             if (conn)
@@ -1564,7 +1564,7 @@ qemuDomainAttachHostPCIDevice(virQEMUDriver *driver,
     if (teardowndevice &&
         qemuDomainNamespaceTeardownHostdev(vm, hostdev) < 0)
         VIR_WARN("Unable to remove host device from /dev");
-    if (teardownmemlock && qemuDomainAdjustMaxMemLock(vm, false) < 0)
+    if (teardownmemlock && qemuDomainAdjustMaxMemLock(vm) < 0)
         VIR_WARN("Unable to reset maximum locked memory on hotplug fail");
 
     if (releaseaddr)
@@ -2291,7 +2291,7 @@ qemuDomainAttachMemory(virQEMUDriver *driver,
     if (virDomainMemoryInsert(vm->def, mem) < 0)
         goto cleanup;
 
-    if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+    if (qemuDomainAdjustMaxMemLock(vm) < 0)
         goto removedef;
 
     qemuDomainObjEnterMonitor(vm);
@@ -2357,7 +2357,7 @@ qemuDomainAttachMemory(virQEMUDriver *driver,
 
     /* reset the mlock limit */
     virErrorPreserveLast(&orig_err);
-    ignore_value(qemuDomainAdjustMaxMemLock(vm, false));
+    ignore_value(qemuDomainAdjustMaxMemLock(vm));
     virErrorRestore(&orig_err);
 
     goto audit;
@@ -2720,7 +2720,7 @@ qemuDomainAttachMediatedDevice(virQEMUDriver *driver,
     ret = 0;
 cleanup:
     if (ret < 0) {
-        if (teardownmemlock && qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        if (teardownmemlock && qemuDomainAdjustMaxMemLock(vm) < 0)
             VIR_WARN("Unable to reset maximum locked memory on hotplug fail");
         if (teardowncgroup && qemuTeardownHostdevCgroup(vm, hostdev) < 0)
             VIR_WARN("Unable to remove host device cgroup ACL on hotplug fail");
@@ -4583,7 +4583,7 @@ qemuDomainRemoveMemoryDevice(virQEMUDriver *driver,
     ignore_value(qemuProcessRefreshBalloonState(vm, VIR_ASYNC_JOB_NONE));
 
     /* decrease the mlock limit after memory unplug if necessary */
-    ignore_value(qemuDomainAdjustMaxMemLock(vm, false));
+    ignore_value(qemuDomainAdjustMaxMemLock(vm));
 
     return 0;
 }
@@ -4690,7 +4690,7 @@ qemuDomainRemoveHostDevice(virQEMUDriver *driver,
         qemuDomainRemovePCIHostDevice(driver, vm, hostdev);
         /* QEMU might no longer need to lock as much memory, eg. we just
         * detached the last VFIO device, so adjust the limit here */
-        if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        if (qemuDomainAdjustMaxMemLock(vm) < 0)
             VIR_WARN("Failed to adjust locked memory limit");
         break;
     case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_USB:
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 952814d663..721b379381 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -7656,7 +7656,7 @@ qemuProcessLaunch(virConnectPtr conn,
 
     /* In some situations, eg. VFIO passthrough, QEMU might need to lock a
      * significant amount of memory, so we need to set the limit accordingly */
-    maxMemLock = qemuDomainGetMemLockLimitBytes(vm->def, false);
+    maxMemLock = qemuDomainGetMemLockLimitBytes(vm->def);
 
     /* For all these settings, zero indicates that the limit should
      * not be set explicitly and the default/inherited limit should
diff --git a/tests/qemumemlocktest.c b/tests/qemumemlocktest.c
index c53905a7dd..7d219fcc40 100644
--- a/tests/qemumemlocktest.c
+++ b/tests/qemumemlocktest.c
@@ -39,7 +39,7 @@ testCompareMemLock(const void *data)
         return -1;
     }
 
-    return virTestCompareToULL(info->memlock, qemuDomainGetMemLockLimitBytes(def, false));
+    return virTestCompareToULL(info->memlock, qemuDomainGetMemLockLimitBytes(def));
 }
 
 static int
-- 
2.39.3
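
For any out-of-tree code that still calls these helpers, the conversion
is mechanical. A minimal sketch of the call-site change (hypothetical
caller, not taken from this patch):

    /* Hypothetical call site, for illustration only. Before this
     * series, every remaining caller passed an explicit false for
     * @forceVFIO:
     *
     *     if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
     *         VIR_WARN("Unable to adjust locked memory limit");
     *
     * With the argument dropped, the same call collapses to: */
    if (qemuDomainAdjustMaxMemLock(vm) < 0)
        VIR_WARN("Unable to adjust locked memory limit");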