Date: Tue, 25 Jan 2022 14:18:29 -0500
From: Masayoshi Mizuma
To: libvir-list@redhat.com, berrange@redhat.com
Subject: The unix domain socket remains even after the VM is destroyed

Hello,

I found an issue where libvirt does not close the unix domain socket used to
connect to the qemu monitor, even after the VM is destroyed.

This issue has been present since commit 695bdb3841 ("src: ensure GSource
background unref happens in correct event loop") on systems whose glib
version is 2.56.

I would appreciate any ideas on how to solve the issue.
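To make the mechanism concrete before walking through the libvirt code
paths, here is a minimal standalone GLib sketch (my own simplification,
not libvirt code; the names unref_idle/ctx/src are mine) of the
deferred-unref pattern that the commit introduced. An idle source attached
to a GMainContext is dispatched only while that context is still iterated;
once iteration stops, the deferred g_source_unref() never runs:

/*
 * Minimal standalone sketch (not libvirt code).
 * Build with: gcc sketch.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>
#include <stdio.h>

static gboolean
unref_idle(gpointer data)
{
    GSource *src = data;

    g_source_unref(src);              /* the deferred unref */
    printf("deferred unref ran\n");
    return FALSE;
}

int
main(void)
{
    GMainContext *ctx = g_main_context_new();
    GSource *src = g_timeout_source_new(1000);
    GSource *idle = g_idle_source_new();

    g_source_attach(src, ctx);

    /* Same shape as vir_g_source_unref(src, ctx): defer the unref of
     * 'src' to an idle callback that runs in 'ctx'. */
    g_source_set_callback(idle, unref_idle, src, NULL);
    g_source_attach(idle, ctx);
    g_source_unref(idle);

    /* Comment out the next line to mimic a monitor context whose loop
     * has already stopped: "deferred unref ran" is never printed and
     * 'src' keeps its reference. */
    g_main_context_iteration(ctx, FALSE);

    return 0;
}

If I read the GLib code correctly, the source returned by
g_socket_create_source() keeps a reference on the GSocket, which would
explain why the fd stays open even after g_object_unref(mon->socket).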
The socket is allocated in qemuMonitorOpenUnix(), and is used by mon->socket
and mon->watch:

  qemuMonitorOpen
    qemuMonitorOpenUnix
      if ((monfd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) {
    qemuMonitorOpenInternal
      mon->socket = g_socket_new_from_fd(fd, &gerr);
      qemuMonitorRegister
        mon->watch = g_socket_create_source(mon->socket,

Usually, the socket is closed when the reference counters of the glib
objects (mon->socket and mon->watch) reach 0:

  qemuMonitorClose
    qemuMonitorUnregister
      vir_g_source_unref(mon->watch, mon->context);
        g_source_set_callback(idle, virEventGLibSourceUnrefIdle, src, NULL);
          virEventGLibSourceUnrefIdle
            g_source_unref(src);    <== unref mon->watch
    g_object_unref(mon->socket);    <== unref mon->socket

It seems that virEventGLibSourceUnrefIdle(), the callback which unrefs
mon->watch, does not run when qemuMonitorUnregister() is called via
qemuProcessStop(), so the socket is not closed.

I am not sure why the callback does not run at that point. I suspect that,
because the VM is being shut down, the main loop of the monitor no longer
runs.

We could close the socket by adding g_socket_close() before unreffing
mon->socket; however, mon->watch (a GSource object) may then still be
leaked, so closing the socket that way is probably not a good idea.

Alternatively, we could get mon->watch unreffed by passing NULL as the
second argument of vir_g_source_unref(), because the default main loop is
still running at that time; however, I am not sure this is an appropriate
way to avoid the GObject issue which the commit solves... (rough sketches
of both ideas follow below.)

I found this issue with the qemu monitor, and the qemu agent probably has
the same issue, because its socket handling is almost the same as the
monitor's.

I would appreciate any ideas on how to solve this issue.
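For concreteness, here are rough, untested sketches of the two ideas above.
The surrounding code in qemuMonitorClose()/qemuMonitorUnregister() is
abbreviated, so treat these as illustrations only:

  /* Idea 1 (in qemuMonitorClose): close the fd explicitly before
   * dropping the reference, so the socket gets closed even if the
   * deferred unref of mon->watch never runs. mon->watch itself may
   * still be leaked. */
  g_socket_close(mon->socket, NULL);
  g_object_unref(mon->socket);

  /* Idea 2 (in qemuMonitorUnregister): attach the deferred-unref idle
   * source to the default main context (ctx == NULL), which is still
   * being iterated while the VM is torn down. */
  vir_g_source_unref(mon->watch, NULL);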
The following debug patch makes it possible to observe whether the callback
runs:

---
diff --git a/src/util/glibcompat.c b/src/util/glibcompat.c
index eb6dcc0111..b8b1770424 100644
--- a/src/util/glibcompat.c
+++ b/src/util/glibcompat.c
@@ -24,6 +24,9 @@
 
 #include "glibcompat.h"
 
+#include "virlog.h"
+
+VIR_LOG_INIT("util.glibcompat");
 /*
  * Note that because of the GLIB_VERSION_MAX_ALLOWED constant in
  * config-post.h, allowing use of functions from newer GLib via
@@ -244,6 +247,7 @@ virEventGLibSourceUnrefIdle(gpointer data)
     GSource *src = data;
 
     g_source_unref(src);
+    VIR_DEBUG("unrefed: %p", src);
 
     return FALSE;
 }
@@ -257,6 +261,7 @@ void vir_g_source_unref(GSource *src, GMainContext *ctx)
     g_source_attach(idle, ctx);
 
     g_source_unref(idle);
+    VIR_DEBUG("unref registered: %p ctx: %p", src, ctx);
 }
 
 #endif
---

Case where mon->watch (0xffff28008af0) is unreffed correctly (via
qemuMonitorUpdateWatch()):

18:54:15.403+0000: 16845: debug : qemuMonitorEmitResume:1159 : mon=0xffff683ac020
18:54:15.403+0000: 16845: debug : qemuProcessHandleResume:713 : Transitioned guest test-vm into running state, reason 'booted', event detail 0
18:54:15.404+0000: 16845: debug : vir_g_source_unref:264 : unref registered: 0xffff28008af0 ctx: 0xffff780169a0
18:54:15.404+0000: 16845: debug : qemuMonitorJSONIOProcessLine:222 : Line [{"return": {}, "id": "libvirt-10"}]
18:54:15.404+0000: 16845: info : qemuMonitorJSONIOProcessLine:242 : QEMU_MONITOR_RECV_REPLY: mon=0xffff683ac020 reply={"return": {}, "id": "libvirt-10"}
18:54:15.404+0000: 16845: debug : vir_g_source_unref:264 : unref registered: 0xffff2819a260 ctx: 0xffff780169a0
18:54:15.404+0000: 16845: debug : virEventGLibSourceUnrefIdle:250 : unrefed: 0xffff28008af0

Case where mon->watch (0xffff7802bb30) is not unreffed (via
qemuProcessStop()):

18:54:15.642+0000: 16589: debug : qemuProcessStop:8008 : Shutting down vm=0xaaaad40edec0 name=test-vm id=3 pid=16842, reason=destroyed, asyncJob=none, flags=0x0
18:54:15.642+0000: 16589: debug : qemuDomainLogAppendMessage:6733 : Append log message (vm='test-vm' message='2022-01-25 18:54:15.642+0000: shutting down, reason=destroyed) stdioLogD=1
18:54:15.643+0000: 16589: info : qemuMonitorClose:834 : QEMU_MONITOR_CLOSE: mon=0xffff683ac020
18:54:15.643+0000: 16589: debug : vir_g_source_unref:264 : unref registered: 0xffff7802bb30 ctx: 0xffff780169a0
18:54:15.643+0000: 16845: debug : qemuMonitorJSONIOProcessEvent:209 : handle SHUTDOWN handler=0xffff4ef057c0 data=0xffff28007da0
18:54:15.643+0000: 16845: debug : qemuMonitorEmitShutdown:1132 : mon=0xffff683ac020 guest=2
18:54:15.643+0000: 16845: debug : qemuProcessHandleShutdown:572 : vm=0xaaaad40edec0
18:54:15.643+0000: 16845: debug : qemuProcessHandleShutdown:592 : Transitioned guest test-vm to shutdown state
18:54:15.643+0000: 16845: debug : qemuProcessKill:7921 : vm=0xaaaad40edec0 name=test-vm pid=16842 flags=0x2
18:54:15.643+0000: 16845: debug : qemuMonitorDispose:216 : mon=0xffff683ac020
18:54:15.643+0000: 16589: debug : qemuProcessKill:7921 : vm=0xaaaad40edec0 name=test-vm pid=16842 flags=0x5
18:54:15.643+0000: 16589: debug : qemuDomainCleanupRun:7313 : driver=0xffff281441b0, vm=test-vm
18:54:15.644+0000: 16589: debug : qemuProcessAutoDestroyRemove:8395 : vm=test-vm
18:54:15.645+0000: 16589: debug : qemuRemoveCgroup:1221 : Failed to terminate cgroup for test-vm
18:54:15.647+0000: 16589: debug : qemuDomainObjEndJob:1141 : Stopping job: destroy (async=none vm=0xaaaad40edec0 name=test-vm)

Thanks!
Masa