From: Andrea Bolognani <abologna@redhat.com>
To: devel@lists.libvirt.org
Subject: [PATCH v3 36/38] virsh: Update for varstore handling
Date: Wed, 18 Feb 2026 13:05:59 +0100
Message-ID: <20260218120601.230343-37-abologna@redhat.com>
In-Reply-To: <20260218120601.230343-1-abologna@redhat.com>
References: <20260218120601.230343-1-abologna@redhat.com>
List-Id: Development discussions about the libvirt library & tools

Provide a new set of command line flags
named after varstore, mirroring the ones that already exist for NVRAM.
Users will not need to worry about whether the guest uses one or the
other, since either version will seamlessly apply to both scenarios,
meaning among other things that existing scripts will continue working
as expected.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 docs/manpages/virsh.rst | 44 ++++++++++++++++++++-------------
 tools/virsh-domain.c    | 55 ++++++++++++++++++++++++++++++++---------
 tools/virsh-snapshot.c  |  9 +++++--
 3 files changed, 78 insertions(+), 30 deletions(-)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index ff0cf1a715..7ce51d0e32 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -1699,7 +1699,7 @@ create
 ::

    create FILE [--console] [--paused] [--autodestroy]
-   [--pass-fds N,M,...] [--validate] [--reset-nvram]
+   [--pass-fds N,M,...] [--validate] [--reset-nvram] [--reset-varstore]

 Create a domain from an XML <file>. Optionally, *--validate* option can be
 passed to validate the format of the input XML file against an internal RNG
@@ -1722,8 +1722,9 @@ of open file descriptors which should be pass on into the guest. The file
 descriptors will be re-numbered in the guest, starting from 3. This is only
 supported with container based virtualization.

-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.

 **Example:**

@@ -4262,7 +4263,8 @@ restore
 ::

    restore state-file [--bypass-cache] [--xml file]
-   [{--running | --paused}] [--reset-nvram] [--parallel-channels]
+   [{--running | --paused}] [--reset-nvram] [--reset-varstore]
+   [--parallel-channels]

 Restores a domain from a ``virsh save`` state file. See *save* for more info.

@@ -4281,8 +4283,9 @@ save image to decide between running or paused; passing
 either the *--running* or *--paused* flag will allow overriding which state
 the domain should be started in.

-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.

 *--parallel-channels* option can specify number of parallel IO channels to
 be used when loading memory from file. Parallel save may significantly
@@ -4906,7 +4909,7 @@ start

    start domain-name-or-uuid [--console] [--paused]
    [--autodestroy] [--bypass-cache] [--force-boot]
-   [--pass-fds N,M,...] [--reset-nvram]
+   [--pass-fds N,M,...] [--reset-nvram] [--reset-varstore]

 Start a (previously defined) inactive domain, either from the last
 ``managedsave`` state, or via a fresh boot if no managedsave state is
@@ -4925,8 +4928,9 @@ of open file descriptors which should be pass on into the guest. The file
 descriptors will be re-numbered in the guest, starting from 3. This is only
 supported with container based virtualization.

-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.


 suspend
@@ -4962,8 +4966,9 @@ undefine

 ::

-   undefine domain [--managed-save] [--snapshots-metadata]
-   [--checkpoints-metadata] [--nvram] [--keep-nvram]
+   undefine domain [--managed-save]
+   [--snapshots-metadata] [--checkpoints-metadata]
+   [--nvram] [--keep-nvram] [--varstore] [--keep-varstore]
    [ {--storage volumes | --remove-all-storage
    [--delete-storage-volume-snapshots]} --wipe-storage]
    [--tpm] [--keep-tpm]
@@ -4988,9 +4993,13 @@ domain. Without the flag, attempts to undefine an inactive
 domain with checkpoint metadata will fail. If the domain is active, this
 flag is ignored.

-*--nvram* and *--keep-nvram* specify accordingly to delete or keep nvram
-(/domain/os/nvram/) file. If the domain has an nvram file and the flags are
-omitted, the undefine will fail.
+The *--nvram* / *--varstore* and *--keep-nvram* / *--keep-varstore* flags
+specify whether to delete or keep the NVRAM (/domain/os/nvram/) or
+varstore (/domain/os/varstore) file respectively. The two sets of names are
+provided for convenience and consistency, but they're effectively aliases:
+that is, *--nvram* will work on a domain configured to use varstore and
+vice versa. If the domain has an NVRAM/varstore file and the flags are
+omitted, the undefine operation will fail.

 The *--storage* flag takes a parameter ``volumes``, which is a comma separated
 list of volume target names or source paths of storage volumes to be removed
@@ -8128,7 +8137,7 @@ snapshot-revert
 ::

    snapshot-revert domain {snapshot | --current} [{--running | --paused}]
-   [--force] [--reset-nvram]
+   [--force] [--reset-nvram] [--reset-varstore]

 Revert the given domain to the snapshot specified by *snapshot*, or to
 the current snapshot with *--current*. Be aware
@@ -8174,8 +8183,9 @@ requires the use of *--force* to proceed:
   likely cause extensive filesystem corruption or crashes due to swap content
   mismatches when run.

-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.


 snapshot-delete
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index cb9dd069b6..8f06238875 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -3981,11 +3981,19 @@ static const vshCmdOptDef opts_undefine[] = {
     },
     {.name = "nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("remove nvram file")
+     .help = N_("remove NVRAM/varstore file")
     },
     {.name = "keep-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("keep nvram file")
+     .help = N_("keep NVRAM/varstore file")
+    },
+    {.name = "varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("remove NVRAM/varstore file")
+    },
+    {.name = "keep-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("keep NVRAM/varstore file")
     },
     {.name = "tpm",
      .type = VSH_OT_BOOL,
@@ -4020,10 +4028,10 @@ cmdUndefine(vshControl *ctl, const vshCmd *cmd)
     bool wipe_storage = vshCommandOptBool(cmd, "wipe-storage");
     bool remove_all_storage = vshCommandOptBool(cmd, "remove-all-storage");
     bool delete_snapshots = vshCommandOptBool(cmd, "delete-storage-volume-snapshots");
-    bool nvram = vshCommandOptBool(cmd, "nvram");
-    bool keep_nvram = vshCommandOptBool(cmd, "keep-nvram");
     bool tpm = vshCommandOptBool(cmd, "tpm");
     bool keep_tpm = vshCommandOptBool(cmd, "keep-tpm");
+    bool nvram = false;
+    bool keep_nvram = false;
     /* Positive if these items exist. */
     int has_managed_save = 0;
     int has_snapshots_metadata = 0;
@@ -4048,8 +4056,18 @@ cmdUndefine(vshControl *ctl, const vshCmd *cmd)
     virshControl *priv = ctl->privData;

     VSH_REQUIRE_OPTION("delete-storage-volume-snapshots", "remove-all-storage");
-    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-nvram");
     VSH_EXCLUSIVE_OPTIONS("tpm", "keep-tpm");
+    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-nvram");
+    VSH_EXCLUSIVE_OPTIONS("varstore", "keep-varstore");
+    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-varstore");
+    VSH_EXCLUSIVE_OPTIONS("varstore", "keep-nvram");
+
+    if (vshCommandOptBool(cmd, "nvram") ||
+        vshCommandOptBool(cmd, "varstore"))
+        nvram = true;
+    if (vshCommandOptBool(cmd, "keep-nvram") ||
+        vshCommandOptBool(cmd, "keep-varstore"))
+        keep_nvram = true;

     ignore_value(vshCommandOptStringQuiet(ctl, cmd, "storage", &vol_string));

@@ -4401,7 +4419,11 @@ static const vshCmdOptDef opts_start[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -4461,7 +4483,8 @@ cmdStart(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_START_BYPASS_CACHE;
     if (vshCommandOptBool(cmd, "force-boot"))
         flags |= VIR_DOMAIN_START_FORCE_BOOT;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_START_RESET_NVRAM;

     /* We can emulate force boot, even for older servers that reject it. */
@@ -5728,7 +5751,11 @@ static const vshCmdOptDef opts_restore[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -5753,7 +5780,8 @@ cmdRestore(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_SAVE_RUNNING;
     if (vshCommandOptBool(cmd, "paused"))
         flags |= VIR_DOMAIN_SAVE_PAUSED;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_SAVE_RESET_NVRAM;

     if (vshCommandOptString(ctl, cmd, "file", &from) < 0)
@@ -8520,7 +8548,11 @@ static const vshCmdOptDef opts_create[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -8575,7 +8607,8 @@ cmdCreate(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_START_AUTODESTROY;
     if (vshCommandOptBool(cmd, "validate"))
         flags |= VIR_DOMAIN_START_VALIDATE;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_START_RESET_NVRAM;

     dom = virshDomainCreateXMLHelper(priv->conn, buffer, nfds, fds, flags);
diff --git a/tools/virsh-snapshot.c b/tools/virsh-snapshot.c
index 8e5b9d635c..3d5880b1d5 100644
--- a/tools/virsh-snapshot.c
+++ b/tools/virsh-snapshot.c
@@ -1714,7 +1714,11 @@ static const vshCmdOptDef opts_snapshot_revert[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -1733,7 +1737,8 @@ cmdDomainSnapshotRevert(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_RUNNING;
     if (vshCommandOptBool(cmd, "paused"))
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_PAUSED;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_RESET_NVRAM;
     /* We want virsh snapshot-revert --force to work even when talking
      * to older servers that did the unsafe revert by default but
-- 
2.53.0