From: Daniel P. Berrangé
To: libvir-list@redhat.com
Subject: [libvirt PATCH] docs: add kbase article on how to configure core dumps for QEMU
Date: Tue, 20 Jul 2021 13:12:38 +0100
Message-Id: <20210720121238.2660286-1-berrange@redhat.com>

Enabling core dumps is a reasonably straightforward task, but it is not
documented clearly. This page provides an easy link to point users to when
they need to debug QEMU.

Signed-off-by: Daniel P. Berrangé
Reviewed-by: Michal Privoznik
---
 docs/kbase/index.rst          |   4 ++
 docs/kbase/meson.build        |   1 +
 docs/kbase/qemu-core-dump.rst | 132 ++++++++++++++++++++++++++++++++++
 3 files changed, 137 insertions(+)
 create mode 100644 docs/kbase/qemu-core-dump.rst

diff --git a/docs/kbase/index.rst b/docs/kbase/index.rst
index 91083ee49d..372042886d 100644
--- a/docs/kbase/index.rst
+++ b/docs/kbase/index.rst
@@ -67,3 +67,7 @@ Internals / Debugging

`VM migration internals `__
   VM migration implementation details, complementing the info in
   `migration <../migration.html>`__

`Capturing core dumps for QEMU `__
   How to configure libvirt to enable capture of core dumps from
   QEMU virtual machines

diff --git a/docs/kbase/meson.build b/docs/kbase/meson.build
index 7631b47018..6d17a83d1d 100644
--- a/docs/kbase/meson.build
+++ b/docs/kbase/meson.build
@@ -12,6 +12,7 @@ docs_kbase_files = [
    'locking-sanlock',
    'merging_disk_image_chains',
    'migrationinternals',
    'qemu-core-dump',
    'qemu-passthrough-security',
    'rpm-deployment',
    's390_protected_virt',

diff --git a/docs/kbase/qemu-core-dump.rst b/docs/kbase/qemu-core-dump.rst
new file mode 100644
index 0000000000..d27f81c4d6
--- /dev/null
+++ b/docs/kbase/qemu-core-dump.rst
@@ -0,0 +1,132 @@
=============================
Capturing core dumps for QEMU
=============================

The default behaviour for a QEMU virtual machine launched by libvirt is to
have core dumps disabled. There can be times, however, when it is beneficial
to collect a core dump to enable debugging.

QEMU driver configuration
=========================

There is a global setting in the QEMU driver configuration file that controls
whether core dumps are permitted, and their maximum size.
Enabling core dumps is simply a matter of setting the maximum size to a
non-zero value by editing the ``/etc/libvirt/qemu.conf`` file:

::

   max_core = "unlimited"

For an ad hoc debugging session, setting the core dump size to "unlimited"
is viable, on the assumption that core dumps will be disabled again once the
requisite information has been collected. If the intention is to leave core
dumps permanently enabled, more careful consideration of the limit is
required.

Note that by default a core dump will **NOT** include the guest RAM region,
so it will only include memory regions used by QEMU for emulation and
backend purposes. This is expected to be sufficient for the vast majority
of debugging needs.

When there is a need to examine guest RAM, a further setting is available:

::

   dump_guest_core = 1

This will of course result in core dumps that are as large as the biggest
virtual machine on the host, potentially tens or even hundreds of GB in
size. To allow more fine grained control, it is possible to toggle this on
a per-VM basis in the XML configuration.

After changing either of these settings in ``/etc/libvirt/qemu.conf``, the
daemon hosting the QEMU driver must be restarted. For deployments using the
monolithic daemon this means ``libvirtd``, while for those using modular
daemons it means ``virtqemud``:

::

   systemctl restart libvirtd     (for a monolithic deployment)
   systemctl restart virtqemud    (for a modular deployment)

While libvirt attempts to make it possible to restart the daemons without
negatively impacting running guests, some management operations may get
interrupted. In particular, long running jobs such as live migration or
block device copy jobs may abort. It is thus wise to check that the host is
mostly idle before restarting the daemons.
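The two configuration edits above can also be scripted. The sketch below assumes GNU ``sed`` and the commented-out default entries shipped in ``qemu.conf``; the scratch path is illustrative. It rewrites both settings against a copy of the file, so the result can be reviewed before it replaces the real configuration:

```shell
# Work on a scratch copy rather than editing /etc/libvirt/qemu.conf
# in place (illustrative path).
conf=/tmp/qemu.conf.scratch
cat > "$conf" <<'EOF'
# Fragment mirroring the commented-out defaults shipped in qemu.conf
#max_core = "core"
#dump_guest_core = 0
EOF

# Uncomment each setting (the leading '#' is optional in the pattern)
# and substitute the desired value.
sed -i -e 's/^#\{0,1\}max_core = .*/max_core = "unlimited"/' \
       -e 's/^#\{0,1\}dump_guest_core = .*/dump_guest_core = 1/' "$conf"

grep -E '^(max_core|dump_guest_core)' "$conf"
# → max_core = "unlimited"
#   dump_guest_core = 1
```

Once the scratch copy looks right it can be moved over the real file, followed by the daemon restart described above.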

Guest core dump configuration
=============================

The ``dump_guest_core`` setting mentioned above will allow guest RAM to be
included in core dumps for all virtual machines on the host. This may not
be desirable, so it is also possible to control this on a per-virtual
machine basis in the XML configuration:

::

   <memory dumpCore="on">...</memory>

Note that it is still necessary to at least set ``max_core`` to a non-zero
value in the global configuration file.

Some management applications may not offer the ability to customize the
XML configuration for a guest. In such situations, using the global
``dump_guest_core`` setting is the only option.

Host OS core dump storage
=========================

The Linux kernel's default behaviour is to write core dumps to a file in
the current working directory of the process. This will not work with QEMU
processes launched by libvirt, because their working directory is ``/``,
which will not be writable.

Most modern OS distros, however, now include systemd, which configures a
custom core dump handler out of the box. When this is in effect, core dumps
from QEMU can be seen using the ``coredumpctl`` commands:

::

   $ coredumpctl list -r
   TIME                         PID     UID GID SIG     COREFILE EXE                         SIZE
   Tue 2021-07-20 12:12:52 BST  2649303 107 107 SIGABRT present  /usr/bin/qemu-system-x86_64 1.8M
   ...snip...

   $ coredumpctl info 2649303
              PID: 2649303 (qemu-system-x86)
              UID: 107 (qemu)
              GID: 107 (qemu)
           Signal: 6 (ABRT)
        Timestamp: Tue 2021-07-20 12:12:52 BST (48min ago)
     Command Line: /usr/bin/qemu-system-x86_64 -name guest=f30,debug-threads=on ..snip...
                   -msg timestamp=on
       Executable: /usr/bin/qemu-system-x86_64
    Control Group: /machine.slice/machine-qemu\x2d1\x2df30.scope/libvirt/emulator
             Unit: machine-qemu\x2d1\x2df30.scope
            Slice: machine.slice
          Boot ID: 6b9015d0c05f4e7fbfe4197a2c7824a2
       Machine ID: c78c8286d6d74b22ac0dd275975f9ced
         Hostname: localhost.localdomain
          Storage: /var/lib/systemd/coredump/core.qemu-system-x86.107.6b9015d0c05f4e7fbfe4197a2c7824a2.2649303.1626779572000000.zst (present)
        Disk Size: 1.8M
          Message: Process 2649303 (qemu-system-x86) of user 107 dumped core.

                   Stack trace of thread 2649303:
                   #0  0x00007ff3c32436be n/a (libc.so.6 + 0xf56be)
                   #1  0x000055a949c0ed05 qemu_poll_ns (qemu-system-x86_64 + 0x7b0d05)
                   #2  0x000055a949c0e476 main_loop_wait (qemu-system-x86_64 + 0x7b0476)
                   #3  0x000055a949a36d27 qemu_main_loop (qemu-system-x86_64 + 0x5d8d27)
                   #4  0x000055a94979e4d2 main (qemu-system-x86_64 + 0x3404d2)
                   #5  0x00007ff3c3175b75 n/a (libc.so.6 + 0x27b75)
                   #6  0x000055a9497a1f5e _start (qemu-system-x86_64 + 0x343f5e)

                   Stack trace of thread 2649368:
                   #0  0x00007ff3c32435bf n/a (libc.so.6 + 0xf55bf)
                   #1  0x00007ff3c3af547c g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0xa947c)
                   #2  0x00007ff3c3aa0a93 g_main_loop_run (libglib-2.0.so.0 + 0x54a93)
                   #3  0x00007ff3c17a727a red_worker_main.lto_priv.0 (libspice-server.so.1 + 0x5227a)
                   #4  0x00007ff3c3326299 start_thread (libpthread.so.0 + 0x9299)
                   #5  0x00007ff3c324e353 n/a (libc.so.6 + 0x100353)

   ...snip...
-- 
2.31.1
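Whether the ``coredumpctl`` workflow shown above is available depends on the host's ``kernel.core_pattern`` sysctl, which systemd normally points at its ``systemd-coredump`` helper. The read-only check below is a sketch; the classification messages are illustrative, not output from any systemd tool:

```shell
# A core_pattern beginning with '|' pipes dumps to a userspace helper;
# anything else is treated by the kernel as a file name pattern.
pattern=$(cat /proc/sys/kernel/core_pattern)
case "$pattern" in
    \|*systemd-coredump*) echo "handler: systemd-coredump" ;;
    \|*)                  echo "handler: some other pipe helper" ;;
    *)                    echo "handler: plain file pattern ($pattern)" ;;
esac
```

When ``systemd-coredump`` is the handler, a captured dump can be written out with ``coredumpctl dump 2649303 -o qemu.core`` and loaded with ``gdb /usr/bin/qemu-system-x86_64 qemu.core``, or opened directly in a debugger via ``coredumpctl debug``.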