From: Prathamesh Chavan
To: libvir-list@redhat.com
Cc: Prathamesh Chavan
Subject: [GSoC][PATCH 1/2] qemu_domain: Avoid using qemuDomainObjPrivatePtr as parameter
Date: Wed, 24 Jun 2020 16:15:22 +0530
Message-Id: <20200624104523.4770-2-pc44800@gmail.com>
In-Reply-To: <20200624104523.4770-1-pc44800@gmail.com>
References: <20200624104523.4770-1-pc44800@gmail.com>
In the functions `qemuDomainObjInitJob`, `qemuDomainObjResetJob`,
`qemuDomainObjResetAgentJob`, `qemuDomainObjResetAsyncJob`,
`qemuDomainObjFreeJob`, `qemuDomainJobAllowed` and
`qemuDomainNestedJobAllowed`, we avoid passing the complete
qemuDomainObjPrivatePtr as a parameter and instead pass just the
qemuDomainJobObjPtr. This is done in an effort to separate the
qemu job APIs into a separate file.
Signed-off-by: Prathamesh Chavan
---
 src/qemu/qemu_domain.c | 94 ++++++++++++++++++++----------------------
 src/qemu/qemu_domain.h |  4 +-
 2 files changed, 46 insertions(+), 52 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 2375b0de35..1ddaa922c5 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -350,15 +350,15 @@ qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver,
 
 
 static int
-qemuDomainObjInitJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjInitJob(qemuDomainJobObjPtr job)
 {
-    memset(&priv->job, 0, sizeof(priv->job));
+    memset(job, 0, sizeof(*job));
 
-    if (virCondInit(&priv->job.cond) < 0)
+    if (virCondInit(&job->cond) < 0)
         return -1;
 
-    if (virCondInit(&priv->job.asyncCond) < 0) {
-        virCondDestroy(&priv->job.cond);
+    if (virCondInit(&job->asyncCond) < 0) {
+        virCondDestroy(&job->cond);
         return -1;
     }
 
@@ -366,10 +366,8 @@ qemuDomainObjInitJob(qemuDomainObjPrivatePtr priv)
 }
 
 static void
-qemuDomainObjResetJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjResetJob(qemuDomainJobObjPtr job)
 {
-    qemuDomainJobObjPtr job = &priv->job;
-
     job->active = QEMU_JOB_NONE;
     job->owner = 0;
     job->ownerAPI = NULL;
@@ -378,10 +376,8 @@ qemuDomainObjResetJob(qemuDomainObjPrivatePtr priv)
 
 
 static void
-qemuDomainObjResetAgentJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjResetAgentJob(qemuDomainJobObjPtr job)
 {
-    qemuDomainJobObjPtr job = &priv->job;
-
     job->agentActive = QEMU_AGENT_JOB_NONE;
     job->agentOwner = 0;
     job->agentOwnerAPI = NULL;
@@ -390,10 +386,8 @@ qemuDomainObjResetAgentJob(qemuDomainObjPrivatePtr priv)
 
 
 static void
-qemuDomainObjResetAsyncJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job)
 {
-    qemuDomainJobObjPtr job = &priv->job;
-
     job->asyncJob = QEMU_ASYNC_JOB_NONE;
     job->asyncOwner = 0;
     job->asyncOwnerAPI = NULL;
@@ -426,19 +420,19 @@ qemuDomainObjRestoreJob(virDomainObjPtr obj,
     job->migParams = g_steal_pointer(&priv->job.migParams);
     job->apiFlags = priv->job.apiFlags;
 
-    qemuDomainObjResetJob(priv);
-    qemuDomainObjResetAsyncJob(priv);
+    qemuDomainObjResetJob(&priv->job);
+    qemuDomainObjResetAsyncJob(&priv->job);
 }
 
 static void
-qemuDomainObjFreeJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjFreeJob(qemuDomainJobObjPtr job)
 {
-    qemuDomainObjResetJob(priv);
-    qemuDomainObjResetAsyncJob(priv);
-    g_clear_pointer(&priv->job.current, qemuDomainJobInfoFree);
-    g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree);
-    virCondDestroy(&priv->job.cond);
-    virCondDestroy(&priv->job.asyncCond);
+    qemuDomainObjResetJob(job);
+    qemuDomainObjResetAsyncJob(job);
+    g_clear_pointer(&job->current, qemuDomainJobInfoFree);
+    g_clear_pointer(&job->completed, qemuDomainJobInfoFree);
+    virCondDestroy(&job->cond);
+    virCondDestroy(&job->asyncCond);
 }
 
 static bool
@@ -2231,7 +2225,7 @@ qemuDomainObjPrivateAlloc(void *opaque)
     if (VIR_ALLOC(priv) < 0)
         return NULL;
 
-    if (qemuDomainObjInitJob(priv) < 0) {
+    if (qemuDomainObjInitJob(&priv->job) < 0) {
         virReportSystemError(errno, "%s",
                              _("Unable to init qemu driver mutexes"));
         goto error;
@@ -2342,7 +2336,7 @@ qemuDomainObjPrivateFree(void *data)
     qemuDomainObjPrivateDataClear(priv);
 
     virObjectUnref(priv->monConfig);
-    qemuDomainObjFreeJob(priv);
+    qemuDomainObjFreeJob(&priv->job);
     VIR_FREE(priv->lockState);
     VIR_FREE(priv->origname);
 
@@ -6215,8 +6209,8 @@ qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
     qemuDomainObjPrivatePtr priv = obj->privateData;
 
     if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
-        qemuDomainObjResetJob(priv);
-    qemuDomainObjResetAsyncJob(priv);
+        qemuDomainObjResetJob(&priv->job);
+    qemuDomainObjResetAsyncJob(&priv->job);
     qemuDomainObjSaveStatus(driver, obj);
 }
 
@@ -6237,28 +6231,28 @@ qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj)
 }
 
 static bool
-qemuDomainNestedJobAllowed(qemuDomainObjPrivatePtr priv, qemuDomainJob job)
+qemuDomainNestedJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob)
 {
-    return !priv->job.asyncJob ||
-           job == QEMU_JOB_NONE ||
-           (priv->job.mask & JOB_MASK(job)) != 0;
+    return !jobs->asyncJob ||
+           newJob == QEMU_JOB_NONE ||
+           (jobs->mask & JOB_MASK(newJob)) != 0;
 }
 
 bool
-qemuDomainJobAllowed(qemuDomainObjPrivatePtr priv, qemuDomainJob job)
+qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob)
 {
-    return !priv->job.active && qemuDomainNestedJobAllowed(priv, job);
+    return !jobs->active && qemuDomainNestedJobAllowed(jobs, newJob);
 }
 
 static bool
-qemuDomainObjCanSetJob(qemuDomainObjPrivatePtr priv,
-                       qemuDomainJob job,
-                       qemuDomainAgentJob agentJob)
+qemuDomainObjCanSetJob(qemuDomainJobObjPtr job,
+                       qemuDomainJob newJob,
+                       qemuDomainAgentJob newAgentJob)
 {
-    return ((job == QEMU_JOB_NONE ||
-             priv->job.active == QEMU_JOB_NONE) &&
-            (agentJob == QEMU_AGENT_JOB_NONE ||
-             priv->job.agentActive == QEMU_AGENT_JOB_NONE));
+    return ((newJob == QEMU_JOB_NONE ||
+             job->active == QEMU_JOB_NONE) &&
+            (newAgentJob == QEMU_AGENT_JOB_NONE ||
+             job->agentActive == QEMU_AGENT_JOB_NONE));
 }
 
 /* Give up waiting for mutex after 30 seconds */
@@ -6330,7 +6324,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
             goto error;
     }
 
-    while (!nested && !qemuDomainNestedJobAllowed(priv, job)) {
+    while (!nested && !qemuDomainNestedJobAllowed(&priv->job, job)) {
         if (nowait)
             goto cleanup;
 
@@ -6339,7 +6333,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
             goto error;
     }
 
-    while (!qemuDomainObjCanSetJob(priv, job, agentJob)) {
+    while (!qemuDomainObjCanSetJob(&priv->job, job, agentJob)) {
         if (nowait)
             goto cleanup;
 
@@ -6350,13 +6344,13 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
 
     /* No job is active but a new async job could have been started while obj
      * was unlocked, so we need to recheck it.
      */
-    if (!nested && !qemuDomainNestedJobAllowed(priv, job))
+    if (!nested && !qemuDomainNestedJobAllowed(&priv->job, job))
         goto retry;
 
     ignore_value(virTimeMillisNow(&now));
 
     if (job) {
-        qemuDomainObjResetJob(priv);
+        qemuDomainObjResetJob(&priv->job);
 
         if (job != QEMU_JOB_ASYNC) {
             VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
@@ -6371,7 +6365,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
             VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
                       qemuDomainAsyncJobTypeToString(asyncJob),
                       obj, obj->def->name);
-            qemuDomainObjResetAsyncJob(priv);
+            qemuDomainObjResetAsyncJob(&priv->job);
             priv->job.current = g_new0(qemuDomainJobInfo, 1);
             priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE;
             priv->job.asyncJob = asyncJob;
@@ -6383,7 +6377,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
     }
 
     if (agentJob) {
-        qemuDomainObjResetAgentJob(priv);
+        qemuDomainObjResetAgentJob(&priv->job);
 
         VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
                   qemuDomainAgentJobTypeToString(agentJob),
@@ -6428,7 +6422,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
              duration / 1000, agentDuration / 1000, asyncDuration / 1000);
 
     if (job) {
-        if (nested || qemuDomainNestedJobAllowed(priv, job))
+        if (nested || qemuDomainNestedJobAllowed(&priv->job, job))
             blocker = priv->job.ownerAPI;
         else
             blocker = priv->job.asyncOwnerAPI;
@@ -6617,7 +6611,7 @@ qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
               qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
               obj, obj->def->name);
 
-    qemuDomainObjResetJob(priv);
+    qemuDomainObjResetJob(&priv->job);
     if (qemuDomainTrackJob(job))
         qemuDomainObjSaveStatus(driver, obj);
     /* We indeed need to wake up ALL threads waiting because
@@ -6638,7 +6632,7 @@ qemuDomainObjEndAgentJob(virDomainObjPtr obj)
               qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
               obj, obj->def->name);
 
-    qemuDomainObjResetAgentJob(priv);
+    qemuDomainObjResetAgentJob(&priv->job);
     /* We indeed need to wake up ALL threads waiting because
     * grabbing a job requires checking more variables. */
    virCondBroadcast(&priv->job.cond);
@@ -6655,7 +6649,7 @@ qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
               qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
               obj, obj->def->name);
 
-    qemuDomainObjResetAsyncJob(priv);
+    qemuDomainObjResetAsyncJob(&priv->job);
     qemuDomainObjSaveStatus(driver, obj);
     virCondBroadcast(&priv->job.asyncCond);
 }
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index e78a2b935d..19e80fef2b 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -860,8 +860,8 @@ void qemuDomainSetFakeReboot(virQEMUDriverPtr driver,
                              virDomainObjPtr vm,
                              bool value);
 
-bool qemuDomainJobAllowed(qemuDomainObjPrivatePtr priv,
-                          qemuDomainJob job);
+bool qemuDomainJobAllowed(qemuDomainJobObjPtr jobs,
+                          qemuDomainJob newJob);
 
 int qemuDomainCheckDiskStartupPolicy(virQEMUDriverPtr driver,
                                      virDomainObjPtr vm,
--
2.17.1