From: Michal Privoznik
To: libvir-list@redhat.com
Subject: [PATCH] lib: Be consistent about vm->pid
Date: Thu, 26 May 2022 12:42:48 +0200

The virDomainObj struct has a @pid member where the domain's hypervisor
PID is stored (e.g. the QEMU/bhyve/libvirt_lxc/... PID). However, we are
not consistent when it comes to the shutoff state. Initially, because
virDomainObjNew() uses g_new0(), @pid is initialized to 0. But when a
domain is shut off, some functions set it to -1 (virBhyveProcessStop,
virCHProcessStop, qemuProcessStop, ...). In some places @pid is tested
for being 0, in others for being negative, and in the rest for being
positive.

To resolve this inconsistency we can stick with either value, -1 or 0.
I've chosen the latter as it's safer IMO. For instance, if by mistake we
did kill(vm->pid, SIGTERM) on a shut-off domain, a @pid of 0 would
signal only our own process group, whereas -1 would signal every process
we have permission to signal.
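For illustration only (not part of the patch), here is a minimal
standalone sketch of the kill(2) semantics behind that safety argument.
Sending signal 0 performs the permission check without delivering
anything, so the demo is harmless to run:

  #include <signal.h>
  #include <stdio.h>
  #include <sys/types.h>

  /* Hypothetical demo, not libvirt code: what a stray
   * kill(vm->pid, SIGTERM) would target for each sentinel value. */
  int
  main(void)
  {
      pid_t stale_pid = 0;  /* sentinel chosen by this patch */

      /* kill(0, sig) signals only the caller's own process group. */
      if (kill(stale_pid, 0) == 0)
          printf("pid 0: would hit our own process group\n");

      /* kill(-1, sig) signals every process we have permission to
       * signal -- nearly the whole system when running as root, as
       * libvirtd typically does. */
      stale_pid = -1;  /* sentinel used before this patch */
      if (kill(stale_pid, 0) == 0)
          printf("pid -1: would hit every process we may signal\n");

      return 0;
  }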
Signed-off-by: Michal Privoznik
Reviewed-by: Jonathon Jongsma
---
 src/bhyve/bhyve_process.c      | 8 ++++----
 src/ch/ch_domain.c             | 2 +-
 src/ch/ch_process.c            | 2 +-
 src/conf/domain_conf.h         | 2 +-
 src/hypervisor/domain_cgroup.c | 2 +-
 src/lxc/lxc_process.c          | 6 +++---
 src/openvz/openvz_conf.c       | 2 +-
 src/qemu/qemu_domain.c         | 2 +-
 src/qemu/qemu_process.c        | 4 ++--
 tests/qemusecuritytest.c       | 1 -
 10 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/src/bhyve/bhyve_process.c b/src/bhyve/bhyve_process.c
index 18002b559b..d46786d393 100644
--- a/src/bhyve/bhyve_process.c
+++ b/src/bhyve/bhyve_process.c
@@ -293,7 +293,7 @@ virBhyveProcessStop(struct _bhyveConn *driver,
         return 0;
     }
 
-    if (vm->pid <= 0) {
+    if (vm->pid == 0) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
                        _("Invalid PID %d for VM"),
                        (int)vm->pid);
@@ -329,7 +329,7 @@ virBhyveProcessStop(struct _bhyveConn *driver,
                                  bhyveProcessAutoDestroy);
 
     virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF, reason);
-    vm->pid = -1;
+    vm->pid = 0;
     vm->def->id = -1;
 
     bhyveProcessStopHook(vm, VIR_HOOK_BHYVE_OP_RELEASE);
@@ -344,7 +344,7 @@ virBhyveProcessStop(struct _bhyveConn *driver,
 int
 virBhyveProcessShutdown(virDomainObj *vm)
 {
-    if (vm->pid <= 0) {
+    if (vm->pid == 0) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
                        _("Invalid PID %d for VM"),
                        (int)vm->pid);
@@ -433,7 +433,7 @@ virBhyveProcessReconnect(virDomainObj *vm,
     if (!virDomainObjIsActive(vm))
         return 0;
 
-    if (!vm->pid)
+    if (vm->pid == 0)
         return 0;
 
     virObjectLock(vm);
diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index bb489a74e3..5ab526ba1b 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -405,7 +405,7 @@ virCHDomainGetMachineName(virDomainObj *vm)
     virCHDriver *driver = priv->driver;
     char *ret = NULL;
 
-    if (vm->pid > 0) {
+    if (vm->pid != 0) {
         ret = virSystemdGetMachineNameByPID(vm->pid);
         if (!ret)
             virResetLastError();
diff --git a/src/ch/ch_process.c b/src/ch/ch_process.c
index 977082d585..4247902bcf 100644
--- a/src/ch/ch_process.c
+++ b/src/ch/ch_process.c
@@ -574,7 +574,7 @@ virCHProcessStop(virCHDriver *driver G_GNUC_UNUSED,
                   vm->def->name);
     }
 
-    vm->pid = -1;
+    vm->pid = 0;
     vm->def->id = -1;
     g_clear_pointer(&priv->machineName, g_free);
 
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 6906db4af5..e7e0f24443 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -3059,7 +3059,7 @@ struct _virDomainObj {
     virObjectLockable parent;
     virCond cond;
 
-    pid_t pid;
+    pid_t pid; /* 0 for no PID, avoid negative values like -1 */
     virDomainStateReason state;
 
     unsigned int autostart : 1;
diff --git a/src/hypervisor/domain_cgroup.c b/src/hypervisor/domain_cgroup.c
index 8072465615..7265325406 100644
--- a/src/hypervisor/domain_cgroup.c
+++ b/src/hypervisor/domain_cgroup.c
@@ -517,7 +517,7 @@ virDomainCgroupSetupCgroup(const char *prefix,
                            bool privileged,
                            char *machineName)
 {
-    if (!vm->pid) {
+    if (vm->pid == 0) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                        _("Cannot setup cgroups until process is started"));
         return -1;
diff --git a/src/lxc/lxc_process.c b/src/lxc/lxc_process.c
index 9722d8e1de..d162812a74 100644
--- a/src/lxc/lxc_process.c
+++ b/src/lxc/lxc_process.c
@@ -217,7 +217,7 @@ static void virLXCProcessCleanup(virLXCDriver *driver,
     lxcProcessRemoveDomainStatus(cfg, vm);
 
     virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF, reason);
-    vm->pid = -1;
+    vm->pid = 0;
     vm->def->id = -1;
 
     if (!!g_atomic_int_dec_and_test(&driver->nactive) && driver->inhibitCallback)
@@ -892,7 +892,7 @@ int virLXCProcessStop(virLXCDriver *driver,
                            _("Some processes refused to die"));
             return -1;
         }
-    } else if (vm->pid > 0) {
+    } else if (vm->pid != 0) {
         /* If cgroup doesn't exist, just try cleaning up the
          * libvirt_lxc process */
         if (virProcessKillPainfully(vm->pid, true) < 0) {
@@ -1033,7 +1033,7 @@ virLXCProcessReadLogOutputData(virDomainObj *vm,
     bool isdead = false;
     char *eol;
 
-    if (vm->pid <= 0 ||
+    if (vm->pid == 0 ||
         (kill(vm->pid, 0) == -1 && errno == ESRCH))
         isdead = true;
 
diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c
index 191c79e1e2..79ab786355 100644
--- a/src/openvz/openvz_conf.c
+++ b/src/openvz/openvz_conf.c
@@ -528,7 +528,7 @@ int openvzLoadDomains(struct openvz_driver *driver)
         if (STREQ(status, "stopped")) {
             virDomainObjSetState(dom, VIR_DOMAIN_SHUTOFF,
                                  VIR_DOMAIN_SHUTOFF_UNKNOWN);
-            dom->pid = -1;
+            dom->pid = 0;
         } else {
             virDomainObjSetState(dom, VIR_DOMAIN_RUNNING,
                                  VIR_DOMAIN_RUNNING_UNKNOWN);
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 31e60c7359..8ebf152d95 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -10741,7 +10741,7 @@ qemuDomainGetMachineName(virDomainObj *vm)
     virQEMUDriver *driver = priv->driver;
     char *ret = NULL;
 
-    if (vm->pid > 0) {
+    if (vm->pid != 0) {
         ret = virSystemdGetMachineNameByPID(vm->pid);
         if (!ret)
             virResetLastError();
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 07acb1c427..cbcc480089 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8239,7 +8239,7 @@ void qemuProcessStop(virQEMUDriver *driver,
     g_clear_pointer(&vm->deprecations, g_free);
     vm->ndeprecations = 0;
     vm->taint = 0;
-    vm->pid = -1;
+    vm->pid = 0;
     virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF, reason);
     for (i = 0; i < vm->def->niothreadids; i++)
         vm->def->iothreadids[i]->thread_id = 0;
@@ -8969,7 +8969,7 @@ qemuProcessReconnectHelper(virDomainObj *obj,
     g_autofree char *name = NULL;
 
     /* If the VM was inactive, we don't need to reconnect */
-    if (!obj->pid)
+    if (obj->pid == 0)
         return 0;
 
     data = g_new0(struct qemuProcessReconnectData, 1);
diff --git a/tests/qemusecuritytest.c b/tests/qemusecuritytest.c
index 924c625a4c..4e2343b7d7 100644
--- a/tests/qemusecuritytest.c
+++ b/tests/qemusecuritytest.c
@@ -54,7 +54,6 @@ prepareObjects(virQEMUDriver *driver,
     if (!(vm = virDomainObjNew(driver->xmlopt)))
         return -1;
 
-    vm->pid = -1;
     priv = vm->privateData;
     priv->chardevStdioLogd = false;
     priv->rememberOwner = true;
-- 
2.35.1