From: Michal Privoznik
To: libvir-list@redhat.com
Subject: [PATCH 1/2] qemu_domain: Account for NVMe disks when calculating memlock limit on hotplug
Date: Tue, 9 May 2023 16:38:52 +0200
Message-Id: <40eba65948e59271e3b4ffe53181615fb032967a.1683643108.git.mprivozn@redhat.com>

During hotplug of an NVMe disk we need to adjust the memlock limit.
The computation of the limit is handled by
qemuDomainGetMemLockLimitBytes(), which looks at the given domain
definition and accounts for various device types (as different types
require different amounts). But during disk hotplug the disk is not
added to the domain definition until the very last moment. Therefore,
qemuDomainGetMemLockLimitBytes() has the @forceVFIO argument, which
tells it to assume VFIO even if there are no signs of VFIO in the
domain definition. And this kind of worked, until the amount needed
for NVMe disks changed (in v9.3.0-rc1~52). What's missing in that
commit is making @forceVFIO behave the same as if there was an NVMe
disk present in the domain definition.

But we can do even better - just mimic whatever we're doing for
hostdevs. IOW - introduce qemuDomainAdjustMaxMemLockNVMe(), which
behaves the same as qemuDomainAdjustMaxMemLockHostdev(). There are
subtle differences, though:

1) qemuDomainAdjustMaxMemLockHostdev() can afford placing the hostdev
right at the end of vm->def->hostdevs, because the array was already
reallocated (at the beginning of qemuDomainAttachHostPCIDevice()).
But qemuDomainAdjustMaxMemLockNVMe() doesn't have that luxury.

2) qemuDomainAdjustMaxMemLockHostdev() places a virDomainHostdevDef
pointer into the domain definition, while
qemuDomainStorageSourceAccessModifyNVMe() (which calls
qemuDomainAdjustMaxMemLock()) sees a virStorageSource pointer, but
the domain definition contains virDomainDiskDef. That's okay, though:
we can create a dummy disk definition and append it to the domain
definition.

After this, qemuDomainAdjustMaxMemLock() can be called with
@forceVFIO = false, as the disk is now part of the domain definition
(when computing the new limit).
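For comparison, the hostdev counterpart being mimicked looks roughly
like this (paraphrased from qemu_domain.c around this release and not
part of this patch; details such as the @forceVFIO value it passes
should be checked against the tree):

int
qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
                                  virDomainHostdevDef *hostdev)
{
    int ret = 0;

    /* The caller (qemuDomainAttachHostPCIDevice()) has already
     * reallocated vm->def->hostdevs with room for one more entry,
     * so the pointer can go straight into the last slot. */
    vm->def->hostdevs[vm->def->nhostdevs++] = hostdev;

    if (qemuDomainAdjustMaxMemLock(vm, true) < 0)
        ret = -1;

    /* Drop the temporary entry again, restoring the original
     * definition. */
    vm->def->hostdevs[--vm->def->nhostdevs] = NULL;

    return ret;
}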
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2014030#c28
Signed-off-by: Michal Privoznik
Reviewed-by: Martin Kletzander
---
 src/qemu/qemu_domain.c | 35 ++++++++++++++++++++++++++++++++++-
 src/qemu/qemu_domain.h |  3 +++
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index d556e2186c..b5b4184782 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -8026,7 +8026,7 @@ qemuDomainStorageSourceAccessModifyNVMe(virQEMUDriver *driver,
         goto revoke;
     }
 
-    if (qemuDomainAdjustMaxMemLock(vm, true) < 0)
+    if (qemuDomainAdjustMaxMemLockNVMe(vm, src) < 0)
         goto revoke;
 
     revoke_maxmemlock = true;
@@ -9779,6 +9779,39 @@ qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
 }
 
 
+/**
+ * qemuDomainAdjustMaxMemLockNVMe:
+ * @vm: domain object
+ * @src: disk source
+ *
+ * Temporarily add the disk source to the domain definition,
+ * adjust the max memlock based on this new definition and
+ * restore the original definition.
+ *
+ * Returns: 0 on success,
+ *         -1 on failure.
+ */
+int
+qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
+                               virStorageSource *src)
+{
+    g_autofree virDomainDiskDef *disk = NULL;
+    int ret = 0;
+
+    disk = g_new0(virDomainDiskDef, 1);
+    disk->src = src;
+
+    VIR_APPEND_ELEMENT_COPY(vm->def->disks, vm->def->ndisks, disk);
+
+    if (qemuDomainAdjustMaxMemLock(vm, false) < 0)
+        ret = -1;
+
+    VIR_DELETE_ELEMENT_INPLACE(vm->def->disks, vm->def->ndisks - 1, vm->def->ndisks);
+
+    return ret;
+}
+
+
 /**
  * qemuDomainHasVcpuPids:
  * @vm: Domain object
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index eaa75de3e5..ee2ddda079 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -41,6 +41,7 @@
 #include "virdomainmomentobjlist.h"
 #include "virenum.h"
 #include "vireventthread.h"
+#include "storage_source_conf.h"
 
 #define QEMU_DOMAIN_FORMAT_LIVE_FLAGS \
     (VIR_DOMAIN_XML_SECURE)
@@ -859,6 +860,8 @@ int qemuDomainAdjustMaxMemLock(virDomainObj *vm,
                                bool forceVFIO);
 int qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
                                       virDomainHostdevDef *hostdev);
+int qemuDomainAdjustMaxMemLockNVMe(virDomainObj *vm,
+                                   virStorageSource *src);
 int qemuDomainSetMaxMemLock(virDomainObj *vm,
                             unsigned long long limit,
                             unsigned long long *origPtr);
-- 
2.39.3
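
A note on the two viralloc helpers the new function leans on; their
semantics are paraphrased here from my reading of libvirt's
viralloc.h, so verify against the headers rather than taking this
sketch as authoritative:

/* Sketch only, not part of the patch.
 *
 * VIR_APPEND_ELEMENT_COPY(vm->def->disks, vm->def->ndisks, disk)
 *   grows the array by one entry, copies @disk into the new last
 *   slot and increments ndisks. Unlike VIR_APPEND_ELEMENT it does
 *   not clear the local @disk variable, so the g_autofree attribute
 *   can still g_free() the dummy virDomainDiskDef struct on return
 *   (a shallow free: @src stays owned by the caller).
 *
 * VIR_DELETE_ELEMENT_INPLACE(vm->def->disks, vm->def->ndisks - 1,
 *                            vm->def->ndisks)
 *   removes the entry at index ndisks - 1, i.e. exactly the dummy
 *   disk appended above, and decrements ndisks without reallocating,
 *   restoring the original definition before the function returns.
 */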