From: Jiri Denemark
To: libvir-list@redhat.com
Subject: [libvirt PATCH 1/7] qemu: Add qemuDomainSetMaxMemLock helper
Date: Thu, 23 Jun 2022 15:58:06 +0200

qemuDomainAdjustMaxMemLock combined computing the desired limit with
applying it. This patch separates the code that applies a memory locking
limit into a new qemuDomainSetMaxMemLock helper for better reusability.

Signed-off-by: Jiri Denemark
---
 src/qemu/qemu_domain.c | 95 ++++++++++++++++++++++++++----------------
 src/qemu/qemu_domain.h |  3 ++
 2 files changed, 61 insertions(+), 37 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 9769e3bb92..e363993739 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -9459,6 +9459,61 @@ qemuDomainGetMemLockLimitBytes(virDomainDef *def,
 }
 
 
+/**
+ * qemuDomainSetMaxMemLock:
+ * @vm: domain
+ * @limit: the desired memory locking limit
+ * @origPtr: where to store (or load from) the original value of the limit
+ *
+ * Set the memory locking limit for @vm unless it's already big enough. If
+ * @origPtr is non-NULL, the original value of the limit will be stored there
+ * and can be restored by calling this function with @limit == 0.
+ *
+ * Returns: 0 on success, -1 otherwise.
+ */
+int
+qemuDomainSetMaxMemLock(virDomainObj *vm,
+                        unsigned long long limit,
+                        unsigned long long *origPtr)
+{
+    unsigned long long current = 0;
+
+    if (virProcessGetMaxMemLock(vm->pid, &current) < 0)
+        return -1;
+
+    if (limit > 0) {
+        VIR_DEBUG("Requested memory lock limit: %llu", limit);
+        /* If the limit is already high enough, we can assume
+         * that some external process is taking care of managing
+         * process limits and we shouldn't do anything ourselves:
+         * we're probably running in a containerized environment
+         * where we don't have enough privilege anyway */
+        if (current >= limit) {
+            VIR_DEBUG("Current limit %llu is big enough", current);
+            return 0;
+        }
+
+        /* If this is the first time adjusting the limit, save the current
+         * value so that we can restore it once memory locking is no longer
+         * required */
+        if (origPtr && *origPtr == 0)
+            *origPtr = current;
+    } else {
+        /* Once memory locking is no longer required, we can restore the
+         * original, usually very low, limit. But only if we actually stored
+         * the original limit before. */
+        if (!origPtr || *origPtr == 0)
+            return 0;
+
+        limit = *origPtr;
+        *origPtr = 0;
+        VIR_DEBUG("Resetting memory lock limit back to %llu", limit);
+    }
+
+    return virProcessSetMaxMemLock(vm->pid, limit);
+}
+
+
 /**
  * qemuDomainAdjustMaxMemLock:
  * @vm: domain
@@ -9480,43 +9535,9 @@ int
 qemuDomainAdjustMaxMemLock(virDomainObj *vm,
                            bool forceVFIO)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
-    unsigned long long currentMemLock = 0;
-    unsigned long long desiredMemLock = 0;
-
-    desiredMemLock = qemuDomainGetMemLockLimitBytes(vm->def, forceVFIO);
-    if (virProcessGetMaxMemLock(vm->pid, &currentMemLock) < 0)
-        return -1;
-
-    if (desiredMemLock > 0) {
-        if (currentMemLock < desiredMemLock) {
-            /* If this is the first time adjusting the limit, save the current
-             * value so that we can restore it once memory locking is no longer
-             * required */
-            if (priv->originalMemlock == 0) {
-                priv->originalMemlock = currentMemLock;
-            }
-        } else {
-            /* If the limit is already high enough, we can assume
-             * that some external process is taking care of managing
-             * process limits and we shouldn't do anything ourselves:
-             * we're probably running in a containerized environment
-             * where we don't have enough privilege anyway */
-            desiredMemLock = 0;
-        }
-    } else {
-        /* Once memory locking is no longer required, we can restore the
-         * original, usually very low, limit */
-        desiredMemLock = priv->originalMemlock;
-        priv->originalMemlock = 0;
-    }
-
-    if (desiredMemLock > 0 &&
-        virProcessSetMaxMemLock(vm->pid, desiredMemLock) < 0) {
-        return -1;
-    }
-
-    return 0;
+    return qemuDomainSetMaxMemLock(vm,
+                                   qemuDomainGetMemLockLimitBytes(vm->def, forceVFIO),
+                                   &QEMU_DOMAIN_PRIVATE(vm)->originalMemlock);
 }
 
 
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index a87dfff1bb..6d35f61dfd 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -840,6 +840,9 @@ int qemuDomainAdjustMaxMemLock(virDomainObj *vm,
                                bool forceVFIO);
 int qemuDomainAdjustMaxMemLockHostdev(virDomainObj *vm,
                                       virDomainHostdevDef *hostdev);
+int qemuDomainSetMaxMemLock(virDomainObj *vm,
+                            unsigned long long limit,
+                            unsigned long long *origPtr);
 
 int qemuDomainDefValidateMemoryHotplug(const virDomainDef *def,
                                        const virDomainMemoryDef *mem);
-- 
2.35.1
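
[Editor's note] For context on the save/restore contract described in the
doc comment, the sketch below shows how a caller might use the new helper
to temporarily raise the memory locking limit around an operation that
needs locked memory. It is an illustrative sketch only, not part of this
patch: exampleAttachWithLockedMemory(), attachDeviceNeedingLockedMemory()
and requiredLimit are hypothetical stand-ins; only qemuDomainSetMaxMemLock()
comes from the change above.

/* Illustrative sketch (hypothetical caller, not part of this patch):
 * raise the memlock limit before an operation that needs locked memory
 * and restore the saved original limit if the operation fails. */
static int
exampleAttachWithLockedMemory(virDomainObj *vm,
                              unsigned long long requiredLimit)
{
    unsigned long long origMemlock = 0; /* 0 means "nothing saved yet" */

    /* Raises the limit to requiredLimit; the previous value is saved in
     * origMemlock unless the current limit is already big enough. */
    if (qemuDomainSetMaxMemLock(vm, requiredLimit, &origMemlock) < 0)
        return -1;

    if (attachDeviceNeedingLockedMemory(vm) < 0) {
        /* limit == 0 restores the value saved in origMemlock and resets
         * it back to 0; this is a no-op if nothing was saved. */
        qemuDomainSetMaxMemLock(vm, 0, &origMemlock);
        return -1;
    }

    return 0;
}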