From nobody Wed May 1 04:22:44 2024
From: Peter Krempa
To: libvir-list@redhat.com
Subject: [PATCH 1/3] qemu: THREADS.txt: rSTize and move to knowledge-base
Date: Mon, 16 May 2022 17:51:24 +0200

Move the internal documentation about qemu threading to the knowledge
base. The conversion included rSTizing the text document, mainly just
fixing the headline and enclosing function names and code examples in
code-block sections.

Signed-off-by: Peter Krempa
Reviewed-by: Ján Tomko
---
 docs/kbase/index.rst                  |   3 +
 docs/kbase/internals/meson.build      |   1 +
 .../kbase/internals/qemu-threads.rst  | 136 +++++++++---------
 3 files changed, 69 insertions(+), 71 deletions(-)
 rename src/qemu/THREADS.txt => docs/kbase/internals/qemu-threads.rst (70%)

diff --git a/docs/kbase/index.rst b/docs/kbase/index.rst
index 0848467d51..f1cd143fab 100644
--- a/docs/kbase/index.rst
+++ b/docs/kbase/index.rst
@@ -101,3 +101,6 @@ Internals
 
 `RPC protocol & APIs `__
    RPC protocol information and API / dispatch guide
+
+`QEMU driver threading `__
+   Basics of locking and threaded access to qemu driver primitives.
diff --git a/docs/kbase/internals/meson.build b/docs/kbase/internals/meson.build
index 8b5daad1f9..3e84b398b2 100644
--- a/docs/kbase/internals/meson.build
+++ b/docs/kbase/internals/meson.build
@@ -5,6 +5,7 @@ docs_kbase_internals_files = [
   'locking',
   'migration',
   'overview',
+  'qemu-threads',
   'rpc',
 ]

diff --git a/src/qemu/THREADS.txt b/docs/kbase/internals/qemu-threads.rst
similarity index 70%
rename from src/qemu/THREADS.txt
rename to docs/kbase/internals/qemu-threads.rst
index b5f54f203c..c68512d1b3 100644
--- a/src/qemu/THREADS.txt
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -1,5 +1,7 @@
-      QEMU Driver Threading: The Rules
-      ================================
+QEMU Driver Threading: The Rules
+================================
+
+.. contents::
 
 This document describes how thread safety is ensured throughout
 the QEMU driver. The criteria for this model are:
@@ -8,37 +10,36 @@ the QEMU driver. The criteria for this model are:
 
  - Code which sleeps must be able to time out after suitable period
  - Must be safe against dispatch of asynchronous events from monitor
 
-
 Basic locking primitives
 ------------------------
 
 There are a number of locks on various objects
 
-  * virQEMUDriver *
+``virQEMUDriver``
 
-    The qemu_conf.h file has inline comments describing the locking
+    The ``qemu_conf.h`` file has inline comments describing the locking
     needs for each field. Any field marked immutable, self-locking
     can be accessed without the driver lock. For other fields there
-    are typically helper APIs in qemu_conf.c that provide serialized
-    access to the data. No code outside qemu_conf.c should ever
+    are typically helper APIs in ``qemu_conf.c`` that provide serialized
+    access to the data. No code outside ``qemu_conf.c`` should ever
     acquire this lock
 
-  * virDomainObj *
+``virDomainObj``
 
     Will be locked and the reference counter will be increased after calling
-    any of the virDomainObjListFindBy{ID,Name,UUID} methods. The preferred way
+    any of the ``virDomainObjListFindBy{ID,Name,UUID}`` methods. The preferred way
     of decrementing the reference counter and unlocking the domain is using the
-    virDomainObjEndAPI() function.
+    ``virDomainObjEndAPI()`` function.
 
-    Lock must be held when changing/reading any variable in the virDomainObj *
+    Lock must be held when changing/reading any variable in the ``virDomainObj``
 
     This lock must not be held for anything which sleeps/waits (i.e. monitor
     commands).
 
-  * qemuMonitorPrivatePtr: Job conditions
+``qemuMonitorPrivatePtr`` job conditions
 
-    Since virDomainObj *lock must not be held during sleeps, the job
+    Since ``virDomainObj`` lock must not be held during sleeps, the job
     conditions provide additional protection for code making updates.
 
     QEMU driver uses three kinds of job conditions: asynchronous, agent
@@ -61,30 +62,30 @@ There are a number of locks on various objects
 
     Agent job condition is then used when thread wishes to talk to qemu
     agent monitor. It is possible to acquire just agent job
-    (qemuDomainObjBeginAgentJob), or only normal job (qemuDomainObjBeginJob)
+    (``qemuDomainObjBeginAgentJob``), or only normal job (``qemuDomainObjBeginJob``)
     but not both at the same time. Holding an agent job and a normal job would
     allow an unresponsive or malicious agent to block normal libvirt API and
     potentially result in a denial of service. Which type of job to grab
     depends whether caller wishes to communicate only with agent socket,
     or only with qemu monitor socket.
 
-    Immediately after acquiring the virDomainObj *lock, any method
+    Immediately after acquiring the ``virDomainObj`` lock, any method
     which intends to update state must acquire asynchronous, normal or
-    agent job . The virDomainObj *lock is released while blocking on
+    agent job. The ``virDomainObj`` lock is released while blocking on
     these condition variables. Once the job condition is acquired, a
-    method can safely release the virDomainObj *lock whenever it hits
+    method can safely release the ``virDomainObj`` lock whenever it hits
     a piece of code which may sleep/wait, and re-acquire it after the
     sleep/wait. Whenever an asynchronous job wants to talk to the monitor,
     it needs to acquire nested job (a special kind of normal job) to obtain
     exclusive access to the monitor.
 
-    Since the virDomainObj *lock was dropped while waiting for the
+    Since the ``virDomainObj`` lock was dropped while waiting for the
     job condition, it is possible that the domain is no longer active
     when the condition is finally obtained. The monitor lock is only
     safe to grab after verifying that the domain is still active.
 
-  * qemuMonitor *: Mutex
+``qemuMonitor`` mutex
 
     Lock to be used when invoking any monitor command to ensure safety
     wrt any asynchronous events that may be dispatched from the monitor.
@@ -92,118 +93,111 @@ There are a number of locks on various objects
 
     The job condition *MUST* be held before acquiring the monitor lock
 
-    The virDomainObj *lock *MUST* be held before acquiring the monitor
+    The ``virDomainObj`` lock *MUST* be held before acquiring the monitor
     lock.
 
-    The virDomainObj *lock *MUST* then be released when invoking the
+    The ``virDomainObj`` lock *MUST* then be released when invoking the
     monitor command.
 Helper methods
 --------------
 
-To lock the virDomainObj *
-
-  virObjectLock()
-    - Acquires the virDomainObj *lock
+To lock the ``virDomainObj``
 
-  virObjectUnlock()
-    - Releases the virDomainObj *lock
+  ``virObjectLock()``
+    - Acquires the ``virDomainObj`` lock
 
+  ``virObjectUnlock()``
+    - Releases the ``virDomainObj`` lock
 
 To acquire the normal job condition
 
-  qemuDomainObjBeginJob()
+  ``qemuDomainObjBeginJob()``
     - Waits until the job is compatible with current async job or no
       async job is running
-    - Waits for job.cond condition 'job.active != 0' using virDomainObj *
+    - Waits for ``job.cond`` condition ``job.active != 0`` using ``virDomainObj``
       mutex
     - Rechecks if the job is still compatible and repeats waiting if it
       isn't
-    - Sets job.active to the job type
+    - Sets ``job.active`` to the job type
 
-  qemuDomainObjEndJob()
+  ``qemuDomainObjEndJob()``
     - Sets job.active to 0
     - Signals on job.cond condition
 
 To acquire the agent job condition
 
-  qemuDomainObjBeginAgentJob()
+  ``qemuDomainObjBeginAgentJob()``
     - Waits until there is no other agent job set
-    - Sets job.agentActive tp the job type
-
-  qemuDomainObjEndAgentJob()
-    - Sets job.agentActive to 0
-    - Signals on job.cond condition
+    - Sets ``job.agentActive`` to the job type
 
+  ``qemuDomainObjEndAgentJob()``
+    - Sets ``job.agentActive`` to 0
+    - Signals on ``job.cond`` condition
 
 To acquire the asynchronous job condition
 
-  qemuDomainObjBeginAsyncJob()
+  ``qemuDomainObjBeginAsyncJob()``
     - Waits until no async job is running
-    - Waits for job.cond condition 'job.active != 0' using virDomainObj *
+    - Waits for ``job.cond`` condition ``job.active != 0`` using ``virDomainObj``
       mutex
-    - Rechecks if any async job was started while waiting on job.cond
+    - Rechecks if any async job was started while waiting on ``job.cond``
       and repeats waiting in that case
-    - Sets job.asyncJob to the asynchronous job type
-
-  qemuDomainObjEndAsyncJob()
-    - Sets job.asyncJob to 0
-    - Broadcasts on job.asyncCond condition
+    - Sets ``job.asyncJob`` to the asynchronous job type
 
+  ``qemuDomainObjEndAsyncJob()``
+    - Sets ``job.asyncJob`` to 0
+    - Broadcasts on ``job.asyncCond`` condition
 
 To acquire the QEMU monitor lock
 
-  qemuDomainObjEnterMonitor()
-    - Acquires the qemuMonitorObjPtr lock
-    - Releases the virDomainObj *lock
+  ``qemuDomainObjEnterMonitor()``
+    - Acquires the ``qemuMonitorObj`` lock
+    - Releases the ``virDomainObj`` lock
 
-  qemuDomainObjExitMonitor()
-    - Releases the qemuMonitorObjPtr lock
-    - Acquires the virDomainObj *lock
+  ``qemuDomainObjExitMonitor()``
+    - Releases the ``qemuMonitorObj`` lock
+    - Acquires the ``virDomainObj`` lock
 
   These functions must not be used by an asynchronous job.
 
 To acquire the QEMU monitor lock as part of an asynchronous job
 
-  qemuDomainObjEnterMonitorAsync()
+  ``qemuDomainObjEnterMonitorAsync()``
     - Validates that the right async job is still running
-    - Acquires the qemuMonitorObjPtr lock
-    - Releases the virDomainObj *lock
+    - Acquires the ``qemuMonitorObj`` lock
+    - Releases the ``virDomainObj`` lock
     - Validates that the VM is still active
 
   qemuDomainObjExitMonitor()
-    - Releases the qemuMonitorObjPtr lock
-    - Acquires the virDomainObj *lock
+    - Releases the ``qemuMonitorObj`` lock
+    - Acquires the ``virDomainObj`` lock
 
   These functions are for use inside an asynchronous job; the caller
   must check for a return of -1 (VM not running, so nothing to exit).
-  Helper functions may also call this with VIR_ASYNC_JOB_NONE when
+  Helper functions may also call this with ``VIR_ASYNC_JOB_NONE`` when
   used from a sync job (such as when first starting a domain).
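To make the relationship between the three job kinds concrete, here is a toy single-threaded model of the bookkeeping the helpers above perform. The struct, enum, and return-value convention are invented for illustration (real libvirt waits on condition variables instead of returning -1):

```c
#include <assert.h>

/* Toy model of the job state; the field names mirror job.active,
 * job.agentActive and job.asyncJob, but the struct is invented here. */
typedef struct {
    int active;       /* normal (or nested) job type, 0 = none */
    int agentActive;  /* agent job type, 0 = none */
    int asyncJob;     /* asynchronous job type, 0 = none */
} JobState;

enum { JOB_NONE, JOB_QUERY, JOB_ASYNC_NESTED,
       AGENT_JOB_QUERY, ASYNC_JOB_MIGRATION };

/* A normal job may start only when no other normal job is running. */
static int begin_job(JobState *job, int type)
{
    if (job->active)
        return -1;            /* real code would wait on job.cond */
    job->active = type;
    return 0;
}

static void end_job(JobState *job) { job->active = JOB_NONE; }

/* Agent jobs are tracked separately, so an unresponsive agent cannot
 * block normal monitor jobs (and vice versa). */
static int begin_agent_job(JobState *job, int type)
{
    if (job->agentActive)
        return -1;
    job->agentActive = type;
    return 0;
}

static void end_agent_job(JobState *job) { job->agentActive = JOB_NONE; }

/* An async job owns job.asyncJob; talking to the monitor from it still
 * requires grabbing a nested normal job on top. */
static int begin_async_job(JobState *job, int type)
{
    if (job->asyncJob)
        return -1;
    job->asyncJob = type;
    return 0;
}

static void end_async_job(JobState *job) { job->asyncJob = JOB_NONE; }
```

Note how an async job plus its nested job coexist, while a second normal job is refused for as long as the nested job is held.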
 To keep a domain alive while waiting on a remote command
 
-  qemuDomainObjEnterRemote()
-    - Releases the virDomainObj *lock
+  ``qemuDomainObjEnterRemote()``
+    - Releases the ``virDomainObj`` lock
 
-  qemuDomainObjExitRemote()
-    - Acquires the virDomainObj *lock
+  ``qemuDomainObjExitRemote()``
+    - Acquires the ``virDomainObj`` lock
 
 Design patterns
 ---------------
 
-
- * Accessing something directly to do with a virDomainObj *
+ * Accessing something directly to do with a ``virDomainObj``::
 
      virDomainObj *obj;
@@ -214,7 +208,7 @@ Design patterns
 
      virDomainObjEndAPI(&obj);
 
- * Updating something directly to do with a virDomainObj *
+ * Updating something directly to do with a ``virDomainObj``::
 
      virDomainObj *obj;
@@ -229,7 +223,7 @@ Design patterns
 
      virDomainObjEndAPI(&obj);
 
- * Invoking a monitor command on a virDomainObj *
+ * Invoking a monitor command on a ``virDomainObj``::
 
      virDomainObj *obj;
      qemuDomainObjPrivate *priv;
@@ -252,7 +246,7 @@ Design patterns
 
      virDomainObjEndAPI(&obj);
 
- * Invoking an agent command on a virDomainObj *
+ * Invoking an agent command on a ``virDomainObj``::
 
      virDomainObj *obj;
      qemuAgent *agent;
@@ -276,7 +270,7 @@ Design patterns
 
      virDomainObjEndAPI(&obj);
 
- * Running asynchronous job
+ * Running asynchronous job::
 
      virDomainObj *obj;
      qemuDomainObjPrivate *priv;
@@ -316,7 +310,7 @@ Design patterns
 
      virDomainObjEndAPI(&obj);
 
- * Coordinating with a remote server for migration
+ * Coordinating with a remote server for migration::
 
      virDomainObj *obj;
      qemuDomainObjPrivate *priv;
-- 
2.35.3

From nobody Wed May 1 04:22:44 2024
From: Peter Krempa
To: libvir-list@redhat.com
Subject: [PATCH 2/3] qemu: MIGRATION.txt: Move to kbase and rSTize
Date: Mon, 16 May 2022 17:51:25 +0200
Message-Id: <4b10f1254c16ee9485ee3fa8c8d662f39b65d4e2.1652715463.git.pkrempa@redhat.com>

Signed-off-by: Peter Krempa
Reviewed-by: Ján Tomko
---
 docs/kbase/index.rst                    |  3 +
 docs/kbase/internals/meson.build        |  1 +
 .../kbase/internals/qemu-migration.rst  | 67 ++++++++++---------
 3 files changed, 40 insertions(+), 31 deletions(-)
 rename src/qemu/MIGRATION.txt => docs/kbase/internals/qemu-migration.rst (59%)

diff --git a/docs/kbase/index.rst b/docs/kbase/index.rst
index f1cd143fab..d0f2167be8 100644
--- a/docs/kbase/index.rst
+++ b/docs/kbase/index.rst
@@ -104,3 +104,6 @@ Internals
 
 `QEMU driver threading `__
    Basics of locking and threaded access to qemu driver primitives.
+
+`QEMU migration internals `__
+   Description of migration phases in the ``v2`` and ``v3`` migration protocol.
diff --git a/docs/kbase/internals/meson.build b/docs/kbase/internals/meson.build
index 3e84b398b2..4f7b223786 100644
--- a/docs/kbase/internals/meson.build
+++ b/docs/kbase/internals/meson.build
@@ -5,6 +5,7 @@ docs_kbase_internals_files = [
   'locking',
   'migration',
   'overview',
+  'qemu-migration',
   'qemu-threads',
   'rpc',
 ]

diff --git a/src/qemu/MIGRATION.txt b/docs/kbase/internals/qemu-migration.rst
similarity index 59%
rename from src/qemu/MIGRATION.txt
rename to docs/kbase/internals/qemu-migration.rst
index b75fe62788..d9061ca49e 100644
--- a/src/qemu/MIGRATION.txt
+++ b/docs/kbase/internals/qemu-migration.rst
@@ -1,84 +1,89 @@
-      QEMU Migration Phases
-      =====================
+QEMU Migration Phases
+=====================
+
+.. contents::
 
 QEMU supports only migration protocols 2 and 3 (1 was lacking too many
 steps). Repeating the protocol sequences from libvirt.c:
 
-Sequence v2:
+Migration protocol v2 API Sequence
+----------------------------------
 
-  Src: DumpXML
+  **Src**: ``DumpXML``
    - Generate XML to pass to dst
 
-  Dst: Prepare
+  **Dst**: ``Prepare``
    - Get ready to accept incoming VM
    - Generate optional cookie to pass to src
 
-  Src: Perform
+  **Src**: ``Perform``
    - Start migration and wait for send completion
    - Kill off VM if successful, resume if failed
 
-  Dst: Finish
+  **Dst**: ``Finish``
    - Wait for recv completion and check status
    - Kill off VM if unsuccessful
 
-Sequence v3:
+Migration protocol v3 API Sequence
+----------------------------------
 
-  Src: Begin
+  **Src**: ``Begin``
    - Generate XML to pass to dst
    - Generate optional cookie to pass to dst
 
-  Dst: Prepare
+  **Dst**: ``Prepare``
    - Get ready to accept incoming VM
    - Generate optional cookie to pass to src
 
-  Src: Perform
+  **Src**: ``Perform``
    - Start migration and wait for send completion
    - Generate optional cookie to pass to dst
 
-  Dst: Finish
+  **Dst**: ``Finish``
    - Wait for recv completion and check status
    - Kill off VM if failed, resume if success
    - Generate optional cookie to pass to src
 
-  Src: Confirm
+  **Src**: ``Confirm``
    - Kill off VM if success, resume if failed
 
-      QEMU Migration Locking Rules
-      ============================
+QEMU Migration Locking Rules
+============================
 
 Migration is a complicated beast which may span across several APIs on both
 source and destination side and we need to keep the domain we are migrating
 in a consistent state during the whole process.
 
 To avoid anyone from changing the domain in the middle of migration we need to
-keep MIGRATION_OUT job active during migration from Begin to Confirm on the
-source side and MIGRATION_IN job has to be active from Prepare to Finish on
-the destination side.
+keep ``MIGRATION_OUT`` job active during migration from ``Begin`` to
+``Confirm`` on the source side and ``MIGRATION_IN`` job has to be active from
+``Prepare`` to ``Finish`` on the destination side.
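This rule — one migration job that stays active across several otherwise independent API calls — can be sketched with a toy model. The function names deliberately echo the ``qemuMigrationJob*`` helpers, but the types and bodies here are invented for illustration:

```c
#include <assert.h>

/* Toy model of a migration job spanning several API calls; invented
 * stand-in for libvirt's real domain object. */
typedef enum { JOB_NONE, JOB_MIGRATION_OUT } JobType;

typedef struct {
    JobType async_job;  /* stays set between API calls while migrating */
    int phase;          /* current migration phase */
} Domain;

static void migration_job_start(Domain *vm, JobType job) { vm->async_job = job; }

static int migration_job_is_active(Domain *vm, JobType job)
{
    return vm->async_job == job;
}

static void migration_job_set_phase(Domain *vm, int phase) { vm->phase = phase; }

/* Return from the API while deliberately leaving the job active for
 * the next phase. */
static void migration_job_continue(Domain *vm) { (void)vm; }

static void migration_job_finish(Domain *vm) { vm->async_job = JOB_NONE; }
```

The point of the model is that `migration_job_continue()` is a no-op on the job state: each API call ends, but the job survives until the final phase calls `migration_job_finish()`.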
 For this purpose we introduce several helper methods to deal with locking
-primitives (described in THREADS.txt) in the right way:
+primitives (described in `qemu-threads `__) in the right way:
 
-* qemuMigrationJobStart
+* ``qemuMigrationJobStart``
 
-* qemuMigrationJobContinue
+* ``qemuMigrationJobContinue``
 
-* qemuMigrationJobStartPhase
+* ``qemuMigrationJobStartPhase``
 
-* qemuMigrationJobSetPhase
+* ``qemuMigrationJobSetPhase``
 
-* qemuMigrationJobFinish
+* ``qemuMigrationJobFinish``
 
-The sequence of calling qemuMigrationJob* helper methods is as follows:
+The sequence of calling ``qemuMigrationJob*`` helper methods is as follows:
 
-- The first API of a migration protocol (Prepare or Perform/Begin depending on
-  migration type and version) has to start migration job and keep it active:
+- The first API of a migration protocol (``Prepare`` or ``Perform/Begin``
+  depending on migration type and version) has to start migration job and keep
+  it active::
 
       qemuMigrationJobStart(driver, vm, VIR_JOB_MIGRATION_{IN,OUT});
       qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_*);
       ...do work...
       qemuMigrationJobContinue(vm);
 
-- All consequent phases except for the last one have to keep the job active:
+- All consequent phases except for the last one have to keep the job active::
 
       if (!qemuMigrationJobIsActive(vm, VIR_JOB_MIGRATION_{IN,OUT}))
           return;
@@ -86,7 +91,7 @@ The sequence of calling qemuMigrationJob* helper methods is as follows:
       ...do work...
       qemuMigrationJobContinue(vm);
 
-- The last migration phase finally finishes the migration job:
+- The last migration phase finally finishes the migration job::
 
       if (!qemuMigrationJobIsActive(vm, VIR_JOB_MIGRATION_{IN,OUT}))
           return;
@@ -94,7 +99,7 @@ The sequence of calling qemuMigrationJob* helper methods is as follows:
       ...do work...
      qemuMigrationJobFinish(driver, vm);
 
-While migration job is running (i.e., after qemuMigrationJobStart* but before
-qemuMigrationJob{Continue,Finish}), migration phase can be advanced using
+While migration job is running (i.e., after ``qemuMigrationJobStart*`` but before
+``qemuMigrationJob{Continue,Finish}``), migration phase can be advanced using::
 
     qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_*);
-- 
2.35.3

From nobody Wed May 1 04:22:44 2024
From: Peter Krempa
To: libvir-list@redhat.com
Subject: [PATCH 3/3] qemu: EVENTHANDLERS.txt: Move to kbase and rSTize
Date: Mon, 16 May 2022 17:51:26 +0200

Signed-off-by: Peter Krempa
Reviewed-by: Ján Tomko
---
 docs/kbase/index.rst                         |  3 ++
 docs/kbase/internals/meson.build             |  1 +
 .../kbase/internals/qemu-event-handlers.rst  | 44 ++++++++++---------
 3 files changed, 28 insertions(+), 20 deletions(-)
 rename src/qemu/EVENTHANDLERS.txt => docs/kbase/internals/qemu-event-handlers.rst (64%)

diff --git a/docs/kbase/index.rst b/docs/kbase/index.rst
index d0f2167be8..8b710db85a 100644
--- a/docs/kbase/index.rst
+++ b/docs/kbase/index.rst
@@ -107,3 +107,6 @@ Internals
 
 `QEMU migration internals `__
    Description of migration phases in the ``v2`` and ``v3`` migration protocol.
+
+`QEMU monitor event handling <internals/qemu-event-handlers.html>`__
+   Brief outline of how events emitted by qemu on the monitor are handled.
diff --git a/docs/kbase/internals/meson.build b/docs/kbase/internals/meson.build
index 4f7b223786..a16d5a290b 100644
--- a/docs/kbase/internals/meson.build
+++ b/docs/kbase/internals/meson.build
@@ -5,6 +5,7 @@ docs_kbase_internals_files = [
   'locking',
   'migration',
   'overview',
+  'qemu-event-handlers',
   'qemu-migration',
   'qemu-threads',
   'rpc',
diff --git a/src/qemu/EVENTHANDLERS.txt b/docs/kbase/internals/qemu-event-handlers.rst
similarity index 64%
rename from src/qemu/EVENTHANDLERS.txt
rename to docs/kbase/internals/qemu-event-handlers.rst
index 39094d793e..3589c4c48c 100644
--- a/src/qemu/EVENTHANDLERS.txt
+++ b/docs/kbase/internals/qemu-event-handlers.rst
@@ -1,10 +1,14 @@
+===================
+QEMU event handlers
+===================
+
 This is a short description of how an example qemu event can be used
 to trigger handler code that is called from the context of a worker
 thread, rather than directly from the event thread (which should
 itself never block, and can't do things like send qemu monitor
 commands, etc).
 
-In this case (the NIC_RX_FILTER_CHANGED event) the event is handled by
+In this case (the ``NIC_RX_FILTER_CHANGED`` event) the event is handled by
 calling a qemu monitor command to get the current RX filter state, then
 executing ioctls/sending netlink messages on the host in response to
 changes in that filter state. This event is *not* propagated to the
@@ -14,39 +18,39 @@
 to the end of this document, please do!). Hopefully this narration
 will be helpful when adding handlers for other qemu events in the
 future.
 
-----------------------------------------------------
+QEMU monitor events
+-------------------
 
 Any event emitted by qemu is received by
-qemu_monitor_json.c:qemuMonitorJSONIOProcessEvent(). It looks up the
-event by name in the table eventHandlers (in the same file), which
+``qemu_monitor_json.c:qemuMonitorJSONIOProcessEvent()``. It looks up the
+event by name in the table ``eventHandlers`` (in the same file), which
 should have an entry like this for each event that libvirt
-understands:
+understands::
 
     { "NIC_RX_FILTER_CHANGED", qemuMonitorJSONHandleNicRxFilterChanged, },
 
-    NB: This table is searched with bsearch, so it *must* be
-    alphabetically sorted.
+NB: This table is searched with bsearch, so it *must* be alphabetically sorted.
 
-qemuMonitorJSONIOProcessEvent calls the function listed in
-eventHandlers, e.g.:
+``qemuMonitorJSONIOProcessEvent`` calls the function listed in
+``eventHandlers``, e.g.::
 
     qemu_monitor_json.c:qemuMonitorJSONHandleNicRxFilterChanged()
 
 which extracts any required data from the JSON ("name" in this case),
-and calls:
+and calls::
 
     qemu_monitor.c:qemuMonitorEmitNicRxFilterChanged()
 
-which uses QEMU_MONITOR_CALLBACK() to call
-mon->cb->domainNicRxFilterChanged(). domainNicRxFilterChanged is one
-in a list of function pointers in qemu_process.c:monitorCallbacks. For
-our example, it has been set to:
+which uses ``QEMU_MONITOR_CALLBACK()`` to call
+``mon->cb->domainNicRxFilterChanged()``. ``domainNicRxFilterChanged`` is one
+in a list of function pointers in ``qemu_process.c:monitorCallbacks``. For
+our example, it has been set to::
 
     qemuProcessHandleNicRxFilterChanged()
 
-This function allocates a qemuProcessEvent object, and queues an event
-named QEMU_PROCESS_EVENT_NIC_RX_FILTER_CHANGED (you'll want to add an
-enum to qemu_domain.h:qemuProcessEventType for your event) for a
+This function allocates a ``qemuProcessEvent`` object, and queues an event
+named ``QEMU_PROCESS_EVENT_NIC_RX_FILTER_CHANGED`` (you'll want to add an
+enum to ``qemu_domain.h:qemuProcessEventType`` for your event) for a
 worker thread to handle.
 
 (Everything up to this point has happened in the context of the thread
@@ -56,17 +60,17 @@ monitor. Everything after this is handled in the context of a
 worker thread, so it has more freedom to make qemu monitor calls and
 blocking system calls on the host.)
 
-When the worker thread gets the event, it calls
+When the worker thread gets the event, it calls::
 
     qemuProcessEventHandler()
 
 which switches on the eventType (in our example,
-QEMU_PROCESS_EVENT_NIC_RX_FILTER_CHANGED) and decides to call:
+``QEMU_PROCESS_EVENT_NIC_RX_FILTER_CHANGED``) and decides to call::
 
     processNicRxFilterChangedEvent()
 
 and *that* is where the actual work will be done (and any
-event-specific memory allocated during qemuProcessHandleXXX() will be
+event-specific memory allocated during ``qemuProcessHandleXXX()`` will be
 freed). Note that this function must do proper refcounting of the
 domain object, and assure that the domain is still active prior to
 performing any operations - it is possible that the domain could have
-- 
2.35.3