From: Michal Privoznik
To: libvir-list@redhat.com
Cc: david@redhat.com
Subject: [PATCH v1 2/5] qemu: Separate out hugepages handling from qemuBuildMemoryBackendProps()
Date: Wed, 19 Jan 2022 14:32:15 +0100

The qemuBuildMemoryBackendProps() function is already long enough. Move
the code that decides which hugepages to use into a separate function.
Signed-off-by: Michal Privoznik
---
 src/qemu/qemu_command.c | 148 +++++++++++++++++++++++-----------------
 1 file changed, 86 insertions(+), 62 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index fb87649e65..858aa0211a 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -3603,6 +3603,86 @@ qemuBuildMemoryGetDefaultPagesize(virQEMUDriverConfig *cfg,
 }
 
 
+static int
+qemuBuildMemoryGetPagesize(virQEMUDriverConfig *cfg,
+                           const virDomainDef *def,
+                           const virDomainMemoryDef *mem,
+                           unsigned long long *pagesizeRet,
+                           bool *needHugepageRet,
+                           bool *useHugepageRet)
+{
+    const long system_page_size = virGetSystemPageSizeKB();
+    unsigned long long pagesize = mem->pagesize;
+    bool needHugepage = !!pagesize;
+    bool useHugepage = !!pagesize;
+
+    if (pagesize == 0) {
+        virDomainHugePage *master_hugepage = NULL;
+        virDomainHugePage *hugepage = NULL;
+        bool thisHugepage = false;
+        size_t i;
+
+        /* Find the huge page size we want to use */
+        for (i = 0; i < def->mem.nhugepages; i++) {
+            hugepage = &def->mem.hugepages[i];
+
+            if (!hugepage->nodemask) {
+                master_hugepage = hugepage;
+                continue;
+            }
+
+            /* just find the master hugepage in case we don't use NUMA */
+            if (mem->targetNode < 0)
+                continue;
+
+            if (virBitmapGetBit(hugepage->nodemask, mem->targetNode,
+                                &thisHugepage) < 0) {
+                /* Ignore this error. It's not an error after all. Well,
+                 * the nodemask for this can contain lower NUMA
+                 * nodes than we are querying in here. */
+                continue;
+            }
+
+            if (thisHugepage) {
+                /* Hooray, we've found the page size */
+                needHugepage = true;
+                break;
+            }
+        }
+
+        if (i == def->mem.nhugepages) {
+            /* We have not found specific huge page to be used with this
+             * NUMA node. Use the generic setting then ( without any
+             * @nodemask) if possible. */
+            hugepage = master_hugepage;
+        }
+
+        if (hugepage) {
+            pagesize = hugepage->size;
+            useHugepage = true;
+        }
+    }
+
+    if (pagesize == system_page_size) {
+        /* However, if user specified to use "huge" page
+         * of regular system page size, it's as if they
+         * hasn't specified any huge pages at all. */
+        pagesize = 0;
+        needHugepage = false;
+        useHugepage = false;
+    } else if (useHugepage && pagesize == 0) {
+        if (qemuBuildMemoryGetDefaultPagesize(cfg, &pagesize) < 0)
+            return -1;
+    }
+
+    *pagesizeRet = pagesize;
+    *needHugepageRet = needHugepage;
+    *useHugepageRet = useHugepage;
+
+    return 0;
+}
+
+
 /**
  * qemuBuildMemoryBackendProps:
  * @backendProps: [out] constructed object
@@ -3640,18 +3720,16 @@ qemuBuildMemoryBackendProps(virJSONValue **backendProps,
 {
     const char *backendType = "memory-backend-file";
     virDomainNumatuneMemMode mode;
-    const long system_page_size = virGetSystemPageSizeKB();
     virDomainMemoryAccess memAccess = mem->access;
-    size_t i;
     g_autofree char *memPath = NULL;
     bool prealloc = false;
     virBitmap *nodemask = NULL;
     int rc;
     g_autoptr(virJSONValue) props = NULL;
     bool nodeSpecified = virDomainNumatuneNodeSpecified(def->numa, mem->targetNode);
-    unsigned long long pagesize = mem->pagesize;
-    bool needHugepage = !!pagesize;
-    bool useHugepage = !!pagesize;
+    unsigned long long pagesize = 0;
+    bool needHugepage = false;
+    bool useHugepage = false;
     int discard = mem->discard;
     bool disableCanonicalPath = false;
 
@@ -3696,63 +3774,9 @@ qemuBuildMemoryBackendProps(virJSONValue **backendProps,
         virDomainNumatuneGetMode(def->numa, -1, &mode) < 0)
         mode = VIR_DOMAIN_NUMATUNE_MEM_STRICT;
 
-    if (pagesize == 0) {
-        virDomainHugePage *master_hugepage = NULL;
-        virDomainHugePage *hugepage = NULL;
-        bool thisHugepage = false;
-
-        /* Find the huge page size we want to use */
-        for (i = 0; i < def->mem.nhugepages; i++) {
-            hugepage = &def->mem.hugepages[i];
-
-            if (!hugepage->nodemask) {
-                master_hugepage = hugepage;
-                continue;
-            }
-
-            /* just find the master hugepage in case we don't use NUMA */
-            if (mem->targetNode < 0)
-                continue;
-
-            if (virBitmapGetBit(hugepage->nodemask, mem->targetNode,
-                                &thisHugepage) < 0) {
-                /* Ignore this error. It's not an error after all. Well,
-                 * the nodemask for this can contain lower NUMA
-                 * nodes than we are querying in here. */
-                continue;
-            }
-
-            if (thisHugepage) {
-                /* Hooray, we've found the page size */
-                needHugepage = true;
-                break;
-            }
-        }
-
-        if (i == def->mem.nhugepages) {
-            /* We have not found specific huge page to be used with this
-             * NUMA node. Use the generic setting then ( without any
-             * @nodemask) if possible. */
-            hugepage = master_hugepage;
-        }
-
-        if (hugepage) {
-            pagesize = hugepage->size;
-            useHugepage = true;
-        }
-    }
-
-    if (pagesize == system_page_size) {
-        /* However, if user specified to use "huge" page
-         * of regular system page size, it's as if they
-         * hasn't specified any huge pages at all. */
-        pagesize = 0;
-        needHugepage = false;
-        useHugepage = false;
-    } else if (useHugepage && pagesize == 0) {
-        if (qemuBuildMemoryGetDefaultPagesize(cfg, &pagesize) < 0)
-            return -1;
-    }
+    if (qemuBuildMemoryGetPagesize(cfg, def, mem, &pagesize,
+                                   &needHugepage, &useHugepage) < 0)
+        return -1;
 
     props = virJSONValueNewObject();
 
-- 
2.34.1