[PATCH v3 36/38] virsh: Update for varstore handling

Andrea Bolognani via Devel posted 38 patches 1 week, 5 days ago
Provide a new set of command line flags named after varstore,
mirroring the ones that already exist for NVRAM. Users will
not need to worry about whether the guest uses one or the
other, since either version will seamlessly apply to both
scenarios, meaning among other things that existing scripts
will continue working as expected.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 docs/manpages/virsh.rst | 44 ++++++++++++++++++++-------------
 tools/virsh-domain.c    | 55 ++++++++++++++++++++++++++++++++---------
 tools/virsh-snapshot.c  |  9 +++++--
 3 files changed, 78 insertions(+), 30 deletions(-)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index ff0cf1a715..7ce51d0e32 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -1699,7 +1699,7 @@ create
 ::
 
    create FILE [--console] [--paused] [--autodestroy]
-      [--pass-fds N,M,...] [--validate] [--reset-nvram]
+      [--pass-fds N,M,...] [--validate] [--reset-nvram] [--reset-varstore]
 
 Create a domain from an XML <file>. Optionally, *--validate* option can be
 passed to validate the format of the input XML file against an internal RNG
@@ -1722,8 +1722,9 @@ of open file descriptors which should be pass on into the guest. The
 file descriptors will be re-numbered in the guest, starting from 3. This
 is only supported with container based virtualization.
 
-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.
 
 **Example:**
 
@@ -4262,7 +4263,8 @@ restore
 ::
 
    restore state-file [--bypass-cache] [--xml file]
-      [{--running | --paused}] [--reset-nvram] [--parallel-channels]
+      [{--running | --paused}] [--reset-nvram] [--reset-varstore]
+      [--parallel-channels]
 
 Restores a domain from a ``virsh save`` state file. See *save* for more info.
 
@@ -4281,8 +4283,9 @@ save image to decide between running or paused; passing either the
 *--running* or *--paused* flag will allow overriding which state the
 domain should be started in.
 
-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.
 
 *--parallel-channels* option can specify number of parallel IO channels
 to be used when loading memory from file. Parallel save may significantly
@@ -4906,7 +4909,7 @@ start
 
    start domain-name-or-uuid [--console] [--paused]
       [--autodestroy] [--bypass-cache] [--force-boot]
-      [--pass-fds N,M,...] [--reset-nvram]
+      [--pass-fds N,M,...] [--reset-nvram] [--reset-varstore]
 
 Start a (previously defined) inactive domain, either from the last
 ``managedsave`` state, or via a fresh boot if no managedsave state is
@@ -4925,8 +4928,9 @@ of open file descriptors which should be pass on into the guest. The
 file descriptors will be re-numbered in the guest, starting from 3. This
 is only supported with container based virtualization.
 
-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.
 
 
 suspend
@@ -4962,8 +4966,9 @@ undefine
 
 ::
 
-   undefine domain [--managed-save] [--snapshots-metadata]
-      [--checkpoints-metadata] [--nvram] [--keep-nvram]
+   undefine domain [--managed-save]
+      [--snapshots-metadata] [--checkpoints-metadata]
+      [--nvram] [--keep-nvram] [--varstore] [--keep-varstore]
       [ {--storage volumes | --remove-all-storage
          [--delete-storage-volume-snapshots]} --wipe-storage]
       [--tpm] [--keep-tpm]
@@ -4988,9 +4993,13 @@ domain.  Without the flag, attempts to undefine an inactive domain with
 checkpoint metadata will fail.  If the domain is active, this flag is
 ignored.
 
-*--nvram* and *--keep-nvram* specify accordingly to delete or keep nvram
-(/domain/os/nvram/) file. If the domain has an nvram file and the flags are
-omitted, the undefine will fail.
+The *--nvram* / *--varstore* and *--keep-nvram* / *--keep-varstore* flags
+specify whether to delete or keep the NVRAM (/domain/os/nvram/) or
+varstore (/domain/os/varstore/) file, respectively. The two sets of names are
+provided for convenience and consistency, but they're effectively aliases:
+that is, *--nvram* will work on a domain configured to use varstore and
+vice versa. If the domain has an NVRAM/varstore file and the flags are
+omitted, the undefine operation will fail.
 
 The *--storage* flag takes a parameter ``volumes``, which is a comma separated
 list of volume target names or source paths of storage volumes to be removed
@@ -8128,7 +8137,7 @@ snapshot-revert
 ::
 
    snapshot-revert domain {snapshot | --current} [{--running | --paused}]
-      [--force] [--reset-nvram]
+      [--force] [--reset-nvram] [--reset-varstore]
 
 Revert the given domain to the snapshot specified by *snapshot*, or to
 the current snapshot with *--current*.  Be aware
@@ -8174,8 +8183,9 @@ requires the use of *--force* to proceed:
     likely cause extensive filesystem corruption or crashes due to swap content
     mismatches when run.
 
-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.
 
 
 snapshot-delete
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index cb9dd069b6..8f06238875 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -3981,11 +3981,19 @@ static const vshCmdOptDef opts_undefine[] = {
     },
     {.name = "nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("remove nvram file")
+     .help = N_("remove NVRAM/varstore file")
     },
     {.name = "keep-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("keep nvram file")
+     .help = N_("keep NVRAM/varstore file")
+    },
+    {.name = "varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("remove NVRAM/varstore file")
+    },
+    {.name = "keep-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("keep NVRAM/varstore file")
     },
     {.name = "tpm",
      .type = VSH_OT_BOOL,
@@ -4020,10 +4028,10 @@ cmdUndefine(vshControl *ctl, const vshCmd *cmd)
     bool wipe_storage = vshCommandOptBool(cmd, "wipe-storage");
     bool remove_all_storage = vshCommandOptBool(cmd, "remove-all-storage");
     bool delete_snapshots = vshCommandOptBool(cmd, "delete-storage-volume-snapshots");
-    bool nvram = vshCommandOptBool(cmd, "nvram");
-    bool keep_nvram = vshCommandOptBool(cmd, "keep-nvram");
     bool tpm = vshCommandOptBool(cmd, "tpm");
     bool keep_tpm = vshCommandOptBool(cmd, "keep-tpm");
+    bool nvram = false;
+    bool keep_nvram = false;
     /* Positive if these items exist.  */
     int has_managed_save = 0;
     int has_snapshots_metadata = 0;
@@ -4048,8 +4056,18 @@ cmdUndefine(vshControl *ctl, const vshCmd *cmd)
     virshControl *priv = ctl->privData;
 
     VSH_REQUIRE_OPTION("delete-storage-volume-snapshots", "remove-all-storage");
-    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-nvram");
     VSH_EXCLUSIVE_OPTIONS("tpm", "keep-tpm");
+    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-nvram");
+    VSH_EXCLUSIVE_OPTIONS("varstore", "keep-varstore");
+    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-varstore");
+    VSH_EXCLUSIVE_OPTIONS("varstore", "keep-nvram");
+
+    if (vshCommandOptBool(cmd, "nvram") ||
+        vshCommandOptBool(cmd, "varstore"))
+        nvram = true;
+    if (vshCommandOptBool(cmd, "keep-nvram") ||
+        vshCommandOptBool(cmd, "keep-varstore"))
+        keep_nvram = true;
 
     ignore_value(vshCommandOptStringQuiet(ctl, cmd, "storage", &vol_string));
 
@@ -4401,7 +4419,11 @@ static const vshCmdOptDef opts_start[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -4461,7 +4483,8 @@ cmdStart(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_START_BYPASS_CACHE;
     if (vshCommandOptBool(cmd, "force-boot"))
         flags |= VIR_DOMAIN_START_FORCE_BOOT;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_START_RESET_NVRAM;
 
     /* We can emulate force boot, even for older servers that reject it.  */
@@ -5728,7 +5751,11 @@ static const vshCmdOptDef opts_restore[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -5753,7 +5780,8 @@ cmdRestore(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_SAVE_RUNNING;
     if (vshCommandOptBool(cmd, "paused"))
         flags |= VIR_DOMAIN_SAVE_PAUSED;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_SAVE_RESET_NVRAM;
 
     if (vshCommandOptString(ctl, cmd, "file", &from) < 0)
@@ -8520,7 +8548,11 @@ static const vshCmdOptDef opts_create[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -8575,7 +8607,8 @@ cmdCreate(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_START_AUTODESTROY;
     if (vshCommandOptBool(cmd, "validate"))
         flags |= VIR_DOMAIN_START_VALIDATE;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_START_RESET_NVRAM;
 
     dom = virshDomainCreateXMLHelper(priv->conn, buffer, nfds, fds, flags);
diff --git a/tools/virsh-snapshot.c b/tools/virsh-snapshot.c
index 8e5b9d635c..3d5880b1d5 100644
--- a/tools/virsh-snapshot.c
+++ b/tools/virsh-snapshot.c
@@ -1714,7 +1714,11 @@ static const vshCmdOptDef opts_snapshot_revert[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -1733,7 +1737,8 @@ cmdDomainSnapshotRevert(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_RUNNING;
     if (vshCommandOptBool(cmd, "paused"))
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_PAUSED;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_RESET_NVRAM;
     /* We want virsh snapshot-revert --force to work even when talking
      * to older servers that did the unsafe revert by default but
-- 
2.53.0
Re: [PATCH v3 36/38] virsh: Update for varstore handling
Posted by Daniel P. Berrangé via Devel 1 week ago
On Wed, Feb 18, 2026 at 01:05:59PM +0100, Andrea Bolognani via Devel wrote:
> Provide a new set of command line flags named after varstore,
> mirroring the ones that already exist for NVRAM. Users will
> not need to worry about whether the guest uses one or the
> other, since either version will seamlessly apply to both
> scenarios, meaning among other things that existing scripts
> will continue working as expected.

I'm more inclined to just document that '--reset-nvram' applies to
all cases and not introduce new args that are just a synonym. Users
don't really need to even know that the JSON vars aren't using
NVRAM.

Adding 2 args for the same thing suggests to users that they first
need to look at the guest XML to determine which arg to use for a
given guest, which is rather unproductive / unhelpful.

> 
> Signed-off-by: Andrea Bolognani <abologna@redhat.com>
> ---
>  docs/manpages/virsh.rst | 44 ++++++++++++++++++++-------------
>  tools/virsh-domain.c    | 55 ++++++++++++++++++++++++++++++++---------
>  tools/virsh-snapshot.c  |  9 +++++--
>  3 files changed, 78 insertions(+), 30 deletions(-)


With regards,
Daniel
-- 
|: https://berrange.com       ~~        https://hachyderm.io/@berrange :|
|: https://libvirt.org          ~~          https://entangle-photo.org :|
|: https://pixelfed.art/berrange   ~~    https://fstop138.berrange.com :|
Re: [PATCH v3 36/38] virsh: Update for varstore handling
Posted by Andrea Bolognani via Devel 1 week ago
On Mon, Feb 23, 2026 at 02:46:31PM +0000, Daniel P. Berrangé wrote:
> On Wed, Feb 18, 2026 at 01:05:59PM +0100, Andrea Bolognani via Devel wrote:
> > Provide a new set of command line flags named after varstore,
> > mirroring the ones that already exist for NVRAM. Users will
> > not need to worry about whether the guest uses one or the
> > other, since either version will seamlessly apply to both
> > scenarios, meaning among other things that existing scripts
> > will continue working as expected.
>
> I'm more inclined to just document that '--reset-nvram' applies to
> all cases and not introduce new args that are just a synonym. Users
> don't really need to even know that  the JSON vars aren't using
> NVRAM.

I think the updated documentation does a reasonable job at making it
clear that the two options are equivalent:

  $ virsh start --help

  NAME
    start - start a (previously defined) inactive domain

  OPTIONS
    --reset-nvram    re-initialize NVRAM/varstore from its pristine template
    --reset-varstore  re-initialize NVRAM/varstore from its pristine template

  $ man virsh

  start

    If --reset-nvram or --reset-varstore is specified, any existing
    NVRAM/varstore file will be deleted and re-initialized from its
    pristine template.

> Adding 2 args for the same thing, suggests to users that they first
> need to look at the guest XML to determine which arg to use for a
> given guest, which is rather unproductive / unhelpful.

That's true to some extent, but so is the opposite scenario: the user
has already looked at the XML, knows what element is used there, and
now has to figure out whether the virsh option matches the name of
the element or is... The other one.

In the long run, I expect that usage of NVRAM is going to fade out,
with varstore being used for most/all configurations. The closer we
get to that scenario, the more having to use --reset-nvram is going
to look out of place and cause friction. So adding its varstore
equivalent to the vocabulary right now feels preferable.

-- 
Andrea Bolognani / Red Hat / Virtualization
Re: [PATCH v3 36/38] virsh: Update for varstore handling
Posted by Daniel P. Berrangé via Devel 1 week ago
On Mon, Feb 23, 2026 at 11:41:14AM -0500, Andrea Bolognani wrote:
> On Mon, Feb 23, 2026 at 02:46:31PM +0000, Daniel P. Berrangé wrote:
> > On Wed, Feb 18, 2026 at 01:05:59PM +0100, Andrea Bolognani via Devel wrote:
> > > Provide a new set of command line flags named after varstore,
> > > mirroring the ones that already exist for NVRAM. Users will
> > > not need to worry about whether the guest uses one or the
> > > other, since either version will seamlessly apply to both
> > > scenarios, meaning among other things that existing scripts
> > > will continue working as expected.
> >
> > I'm more inclined to just document that '--reset-nvram' applies to
> > all cases and not introduce new args that are just a synonym. Users
> > don't really need to even know that  the JSON vars aren't using
> > NVRAM.
> 
> I think the updated documentation does a reasonable job at making it
> clear that the two options are equivalent:
> 
>   $ virsh start --help
> 
>   NAME
>     start - start a (previously defined) inactive domain
> 
>   OPTIONS
>     --reset-nvram    re-initialize NVRAM/varstore from its pristine template
>     --reset-varstore  re-initialize NVRAM/varstore from its pristine template
> 
>   $ man virsh
> 
>   start
> 
>     If --reset-nvram or --reset-varstore is specified, any existing
>     NVRAM/varstore file will be deleted and re-initialized from its
>     pristine template.

> > Adding 2 args for the same thing, suggests to users that they first
> > need to look at the guest XML to determine which arg to use for a
> > given guest, which is rather unproductive / unhelpful.
> 
> That's true to some extent, but so is the opposite scenario: the user
> has already looked at the XML, knows what element is used there, and
> now has to figure out whether the virsh option matches the name of
> the element or is... The other one.

I don't see that as a real world problem as "--reset-nvram" is
trivially discoverable with the updated docs string that mentions
its effect on 'varstore'.  I just don't see a need to change anything
here except the docs.

With regards,
Daniel
-- 
|: https://berrange.com       ~~        https://hachyderm.io/@berrange :|
|: https://libvirt.org          ~~          https://entangle-photo.org :|
|: https://pixelfed.art/berrange   ~~    https://fstop138.berrange.com :|

Re: [PATCH v3 36/38] virsh: Update for varstore handling
Posted by Andrea Bolognani via Devel 1 week ago
On Mon, Feb 23, 2026 at 04:54:42PM +0000, Daniel P. Berrangé wrote:
> On Mon, Feb 23, 2026 at 11:41:14AM -0500, Andrea Bolognani wrote:
> > On Mon, Feb 23, 2026 at 02:46:31PM +0000, Daniel P. Berrangé wrote:
> > > Adding 2 args for the same thing, suggests to users that they first
> > > need to look at the guest XML to determine which arg to use for a
> > > given guest, which is rather unproductive / unhelpful.
> >
> > That's true to some extent, but so is the opposite scenario: the user
> > has already looked at the XML, knows what element is used there, and
> > now has to figure out whether the virsh option matches the name of
> > the element or is... The other one.
>
> I don't see that as a real world problem as "--reset-nvram" is
> trivially discoverable with the updated docs string that mentions
> its effect on 'varstore'.  I just don't see a need to change anything
> here except the docs.

I'm still convinced that it will prove to be less usable in the long
run, but then again many of our existing APIs suffer from similar
quirks already, so I'll do what you ask and update the documentation
only :)

-- 
Andrea Bolognani / Red Hat / Virtualization