From: Daniel P. Berrangé <berrange@redhat.com>
To: libvir-list@redhat.com
Date: Fri, 22 Nov 2019 14:47:00 +0000
Message-Id: <20191122144702.3780548-14-berrange@redhat.com>
In-Reply-To: <20191122144702.3780548-1-berrange@redhat.com>
References: <20191122144702.3780548-1-berrange@redhat.com>
Subject: [libvirt] [PATCH v2 13/15] docs: convert kbase/locking-lockd.html.in to RST

This is a semi-automated conversion. The initial conversion is done
using "pandoc -f html -t rst". The result is then edited manually to
apply the desired heading markup and to fix a few things that pandoc
gets wrong.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 docs/kbase/locking-lockd.html.in | 160 -------------------------------
 docs/kbase/locking-lockd.rst     | 121 +++++++++++++++++++++++
 2 files changed, 121 insertions(+), 160 deletions(-)
 delete mode 100644 docs/kbase/locking-lockd.html.in
 create mode 100644 docs/kbase/locking-lockd.rst

diff --git a/docs/kbase/locking-lockd.html.in b/docs/kbase/locking-lockd.html.in
deleted file mode 100644
index 855404ac97..0000000000
--- a/docs/kbase/locking-lockd.html.in
+++ /dev/null
@@ -1,160 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE html>
-<html xmlns="http://www.w3.org/1999/xhtml">
-  <body>
-    <h1>Virtual machine lock manager, virtlockd plugin</h1>
-
-    <ul id="toc"></ul>
-
-    <p>
-      This page describes use of the <code>virtlockd</code>
-      service as a <a href="locking.html">lock driver</a>
-      plugin for virtual machine disk mutual exclusion.
-    </p>
-
-    <h2>virtlockd background</h2>
-
-    <p>
-      The virtlockd daemon is a single purpose binary which
-      focuses exclusively on the task of acquiring and holding
-      locks on behalf of running virtual machines. It is
-      designed to offer a low overhead, portable locking
-      scheme can be used out of the box on virtualization
-      hosts with minimal configuration overheads. It makes
-      use of the POSIX fcntl advisory locking capability
-      to hold locks, which is supported by the majority of
-      commonly used filesystems.
-    </p>
-
-    <h2>virtlockd daemon setup</h2>
-
-    <p>
-      In most OS, the virtlockd daemon itself will not require
-      any upfront configuration work. It is installed by default
-      when libvirtd is present, and a systemd socket unit is
-      registered such that the daemon will be automatically
-      started when first required. With OS that predate systemd
-      though, it will be necessary to start it at boot time,
-      prior to libvirtd being started. On RHEL/Fedora distros,
-      this can be achieved as follows
-    </p>
-
-    <pre>
-# chkconfig virtlockd on
-# service virtlockd start
-    </pre>
-
-    <p>
-      The above instructions apply to the instance of virtlockd
-      that runs privileged, and is used by the libvirtd daemon
-      that runs privileged. If running libvirtd as an unprivileged
-      user, it will always automatically spawn an instance of
-      the virtlockd daemon unprivileged too. This requires no
-      setup at all.
-    </p>
-
-    <h2>libvirt lockd plugin configuration</h2>
-
-    <p>
-      Once the virtlockd daemon is running, or setup to autostart,
-      the next step is to configure the libvirt lockd plugin.
-      There is a separate configuration file for each libvirt
-      driver that is using virtlockd. For QEMU, we will edit
-      <code>/etc/libvirt/qemu-lockd.conf</code>
-    </p>
-
-    <p>
-      The default behaviour of the lockd plugin is to acquire locks
-      directly on the virtual disk images associated with the guest
-      &lt;disk&gt; elements. This ensures it can run out of the box
-      with no configuration, providing locking for disk images on
-      shared filesystems such as NFS. It does not provide any cross
-      host protection for storage that is backed by block devices,
-      since locks acquired on device nodes in /dev only apply within
-      the host. It may also be the case that the filesystem holding
-      the disk images is not capable of supporting fcntl locks.
-    </p>
-
-    <p>
-      To address these problems it is possible to tell lockd to
-      acquire locks on an indirect file. Essentially lockd will
-      calculate the SHA256 checksum of the fully qualified path,
-      and create a zero length file in a given directory whose
-      filename is the checksum. It will then acquire a lock on
-      that file. Assuming the block devices assigned to the guest
-      are using stable paths (eg /dev/disk/by-path/XXXXXXX) then
-      this will allow for locks to apply across hosts. This
-      feature can be enabled by setting a configuration setting
-      that specifies the directory in which to create the lock
-      files. The directory referred to should of course be
-      placed on a shared filesystem (eg NFS) that is accessible
-      to all hosts which can see the shared block devices.
-    </p>
-
-    <pre>
-$ su - root
-# augtool -s set \
-  /files/etc/libvirt/qemu-lockd.conf/file_lockspace_dir \
-  "/var/lib/libvirt/lockd/files"
-    </pre>
-
-    <p>
-      If the guests are using either LVM and SCSI block devices
-      for their virtual disks, there is a unique identifier
-      associated with each device. It is possible to tell lockd
-      to use this UUID as the basis for acquiring locks, rather
-      than the SHA256 sum of the filename. The benefit of this
-      is that the locking protection will work even if the file
-      paths to the given block device are different on each
-      host.
-    </p>
-
-    <pre>
-$ su - root
-# augtool -s set \
-  /files/etc/libvirt/qemu-lockd.conf/scsi_lockspace_dir \
-  "/var/lib/libvirt/lockd/scsi"
-# augtool -s set \
-  /files/etc/libvirt/qemu-lockd.conf/lvm_lockspace_dir \
-  "/var/lib/libvirt/lockd/lvm"
-    </pre>
-
-    <p>
-      It is important to remember that the changes made to the
-      <code>/etc/libvirt/qemu-lockd.conf</code> file must be
-      propagated to all hosts before any virtual machines are
-      launched on them. This ensures that all hosts are using
-      the same locking mechanism
-    </p>
-
-    <h2>QEMU/KVM driver configuration</h2>
-
-    <p>
-      The QEMU driver is capable of using the virtlockd plugin
-      since the release 1.0.2.
-      The out of the box configuration, however, currently
-      uses the <strong>nop</strong> lock manager plugin.
-      To get protection for disks, it is thus necessary
-      to reconfigure QEMU to activate the <strong>lockd</strong>
-      driver. This is achieved by editing the QEMU driver
-      configuration file (<code>/etc/libvirt/qemu.conf</code>)
-      and changing the <code>lock_manager</code> configuration
-      tunable.
-    </p>
-
-    <pre>
-$ su - root
-# augtool -s  set /files/etc/libvirt/qemu.conf/lock_manager lockd
-# service libvirtd restart
-    </pre>
-
-    <p>
-      Every time you start a guest, the virtlockd daemon will acquire
-      locks on the disk files directly, or in one of the configured
-      lookaside directories based on SHA256 sum. To check that locks
-      are being acquired as expected, the <code>lslocks</code> tool
-      can be run.
-    </p>
-  </body>
-</html>
diff --git a/docs/kbase/locking-lockd.rst b/docs/kbase/locking-lockd.rst
new file mode 100644
index 0000000000..70e742b77c
--- /dev/null
+++ b/docs/kbase/locking-lockd.rst
@@ -0,0 +1,121 @@
+==============================================
+Virtual machine lock manager, virtlockd plugin
+==============================================
+
+.. contents::
+
+This page describes use of the ``virtlockd`` service as a `lock
+driver <locking.html>`__ plugin for virtual machine disk mutual
+exclusion.
+
+virtlockd background
+====================
+
+The virtlockd daemon is a single purpose binary which focuses
+exclusively on the task of acquiring and holding locks on behalf of
+running virtual machines. It is designed to offer a low overhead,
+portable locking scheme that can be used out of the box on
+virtualization hosts with minimal configuration. It makes use of the
+POSIX fcntl advisory locking capability to hold locks, which is
+supported by the majority of commonly used filesystems.
+
+virtlockd daemon setup
+======================
+
+On most operating systems the virtlockd daemon itself will not require
+any upfront configuration work. It is installed by default when
+libvirtd is present, and a systemd socket unit is registered such that
+the daemon will be automatically started when first required. On
+operating systems that predate systemd, though, it will be necessary to
+start it at boot time, prior to libvirtd being started. On RHEL/Fedora
+distros, this can be achieved as follows:
+
+::
+
+   # chkconfig virtlockd on
+   # service virtlockd start
+
+The above instructions apply to the instance of virtlockd that runs
+privileged, and is used by the libvirtd daemon that runs privileged. If
+running libvirtd as an unprivileged user, it will always automatically
+spawn an unprivileged instance of the virtlockd daemon too. This
+requires no setup at all.
+
+libvirt lockd plugin configuration
+==================================
+
+Once the virtlockd daemon is running, or set up to autostart, the next
+step is to configure the libvirt lockd plugin. There is a separate
+configuration file for each libvirt driver that is using virtlockd. For
+QEMU, we will edit ``/etc/libvirt/qemu-lockd.conf``.
+
+The default behaviour of the lockd plugin is to acquire locks directly
+on the virtual disk images associated with the guest ``<disk>``
+elements. This ensures it can run out of the box with no configuration,
+providing locking for disk images on shared filesystems such as NFS. It
+does not provide any cross-host protection for storage that is backed
+by block devices, since locks acquired on device nodes in /dev only
+apply within the host. It may also be the case that the filesystem
+holding the disk images is not capable of supporting fcntl locks.
+
+To address these problems it is possible to tell lockd to acquire locks
+on an indirect file. Essentially lockd will calculate the SHA256
+checksum of the fully qualified path, and create a zero-length file in
+a given directory whose filename is the checksum. It will then acquire
+a lock on that file. Assuming the block devices assigned to the guest
+are using stable paths (e.g. /dev/disk/by-path/XXXXXXX) then this will
+allow for locks to apply across hosts.
+This feature can be enabled by setting a configuration parameter that
+specifies the directory in which to create the lock files. The
+directory referred to should of course be placed on a shared filesystem
+(e.g. NFS) that is accessible to all hosts which can see the shared
+block devices.
+
+::
+
+   $ su - root
+   # augtool -s set \
+     /files/etc/libvirt/qemu-lockd.conf/file_lockspace_dir \
+     "/var/lib/libvirt/lockd/files"
+
+If the guests are using either LVM or SCSI block devices for their
+virtual disks, there is a unique identifier associated with each
+device. It is possible to tell lockd to use this UUID as the basis for
+acquiring locks, rather than the SHA256 sum of the filename. The
+benefit of this is that the locking protection will work even if the
+file paths to the given block device are different on each host.
+
+::
+
+   $ su - root
+   # augtool -s set \
+     /files/etc/libvirt/qemu-lockd.conf/scsi_lockspace_dir \
+     "/var/lib/libvirt/lockd/scsi"
+   # augtool -s set \
+     /files/etc/libvirt/qemu-lockd.conf/lvm_lockspace_dir \
+     "/var/lib/libvirt/lockd/lvm"
+
+It is important to remember that the changes made to the
+``/etc/libvirt/qemu-lockd.conf`` file must be propagated to all hosts
+before any virtual machines are launched on them. This ensures that all
+hosts are using the same locking mechanism.
+
+QEMU/KVM driver configuration
+=============================
+
+The QEMU driver has been capable of using the virtlockd plugin since
+release 1.0.2. The out of the box configuration, however, currently
+uses the **nop** lock manager plugin. To get protection for disks, it
+is thus necessary to reconfigure QEMU to activate the **lockd** driver.
+This is achieved by editing the QEMU driver configuration file
+(``/etc/libvirt/qemu.conf``) and changing the ``lock_manager``
+configuration tunable.
+
+::
+
+   $ su - root
+   # augtool -s set /files/etc/libvirt/qemu.conf/lock_manager lockd
+   # service libvirtd restart
+
+Every time you start a guest, the virtlockd daemon will acquire locks
+on the disk files directly, or in one of the configured lookaside
+directories based on the SHA256 sum. To check that locks are being
+acquired as expected, the ``lslocks`` tool can be run.
-- 
2.23.0
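
As background for the fcntl-based scheme the page above describes, here
is a minimal Python sketch of POSIX advisory locking, the primitive the
document says virtlockd relies on. It is illustrative only, not
virtlockd's actual implementation, and the disk image path is a made-up
example:

  #!/usr/bin/env python3
  # Minimal sketch of POSIX fcntl advisory locking as described in the
  # "virtlockd background" section. Not virtlockd code; the path below
  # is a hypothetical example.
  import fcntl
  import sys

  path = "/var/lib/libvirt/images/guest1.qcow2"  # hypothetical disk image

  f = open(path, "rb+")
  try:
      # Exclusive, non-blocking advisory lock; a second process running
      # the same code against the same file fails here until the first
      # one releases the lock or exits.
      fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
  except OSError:
      sys.exit("disk image is already locked by another process")
  print("lock acquired; it is released when the file descriptor is closed")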
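
The indirect lockspace behaviour (locking a zero-length file named
after the SHA256 checksum of the fully qualified path, kept under
file_lockspace_dir on a shared filesystem) can likewise be sketched in
Python. The hashing details below are assumptions made purely for
illustration; the document does not spell out the exact encoding
virtlockd uses:

  #!/usr/bin/env python3
  # Sketch of the indirect lockspace idea: hash a stable device path
  # with SHA256 and take an fcntl lock on a zero-length file named
  # after the hash inside a directory shared by all hosts.
  # Illustrative only, not virtlockd's implementation.
  import fcntl
  import hashlib
  import os

  lockspace_dir = "/var/lib/libvirt/lockd/files"  # directory from the example config
  device_path = "/dev/disk/by-path/XXXXXXX"       # stable path, as in the text

  lock_name = hashlib.sha256(device_path.encode("utf-8")).hexdigest()
  lock_path = os.path.join(lockspace_dir, lock_name)

  # Create the zero-length lock file if it does not exist, then take an
  # exclusive advisory lock; because the directory lives on a shared
  # filesystem, every host contending for the same device hashes to the
  # same file and therefore the same lock.
  fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600)
  fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
  print("holding", lock_path)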