[libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support

Posted by Cole Robinson 4 years, 6 months ago
This series takes the first steps toward teaching libvirt about qcow2
data_file support, aka external data files or qcow2 external metadata.

A bit about the feature: it was added in qemu 4.0. It essentially
creates a two-part image file: a qcow2 layer that just tracks the
image metadata, and a separate data file which stores the VM
disk contents. AFAICT the driving use case is to keep a fully coherent
raw disk image on disk, and only use qcow2 as an intermediate metadata
layer when necessary, for things like incremental backup support.

The original qemu patch posting is here:
https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html

For testing, you can create a new qcow2+raw data_file image from an
existing image, like:

    qemu-img convert -O qcow2 \
        -o data_file=NEW.raw,data_file_raw=yes \
        EXISTING.raw NEW.qcow2
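
You can verify that the result references the external data file with
'qemu-img info'; roughly (the exact output shape varies across qemu
versions) it looks like:

    $ qemu-img info NEW.qcow2
    image: NEW.qcow2
    file format: qcow2
    ...
    Format specific information:
        ...
        data file: NEW.raw
        data file raw: true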

The goal of this series is to teach libvirt enough about this case
so that we can correctly relabel the data_file on VM startup/shutdown.
The main functional changes are:

  * Teach storagefile how to parse out data_file from the qcow2 header
  * Store the raw string as virStorageSource->externalDataStoreRaw
  * Track that as its own virStorageSource in externalDataStore
  * dac/selinux relabel externalDataStore as needed
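
As a rough sketch, the new virStorageSource members look something like
this (simplified here; the patches define the real layout in
src/util/virstoragefile.h):

    typedef struct _virStorageSource virStorageSource;
    typedef virStorageSource *virStorageSourcePtr;

    struct _virStorageSource {
        /* ... existing members elided ... */
        char *backingStoreRaw;                  /* backing file string from the image header */
        char *externalDataStoreRaw;             /* new: raw data_file string from the qcow2 header */
        virStorageSourcePtr externalDataStore;  /* new: the data file tracked as its own virStorageSource */
    };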

From libvirt's perspective, externalDataStore is conceptually pretty
close to a backingStore, but the main difference is that its read/write
permissions should match its parent image, rather than being readonly
like a backingStore.

This series has only been tested on top of the -blockdev enablement
series, but I don't think it actually interacts with that work at
the moment.


Future work:
  * Exposing this in the runtime XML. We need to figure out an XML
    schema. It will reuse virStorageSource obviously, but the main
    thing to figure out is probably 1) what the top element name
    should be ('dataFile' maybe?), 2) where it sits in the XML
    hierarchy (under <disk> or under <source> I guess); a rough
    sketch follows this list

  * Exposing this on the qemu -blockdev command line. Similar to how
    in the blockdev world we are explicitly putting the disk backing
    chain on the command line, we can do that for data_file too. Then
    like persistent <backingStore> XML the user will have the power
    to override the data_file location for an individual VM run.

  * Figure out how we expect ovirt/rhev to be using this at runtime.
    Possibly taking a running VM using a raw image, doing blockdev-*
    magic to pivot it to qcow2+raw data_file, so it can initiate
    incremental backup on top of a previously raw-only VM?
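
A purely illustrative sketch of the first item above (both the element
name and its placement are exactly what is still undecided):

    <disk type='file' device='disk'>
      <source file='/path/NEW.qcow2'>
        <dataFile type='file'>
          <source file='/path/NEW.raw'/>
          <format type='raw'/>
        </dataFile>
      </source>
    </disk>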


Known issues:
  * In the qemu driver, the qcow2 image metadata is only parsed
    in -blockdev world if no <backingStore> is specified in the
    persistent XML. So basically if there's a <backingStore> listed,
    we never parse the qcow2 header, and so never detect the presence
    of data_file. Fixable I'm sure, but I haven't looked into it much yet.

Most of this is cleanups and refactorings to simplify the actual
functional changes.

Cole Robinson (30):
  storagefile: Make GetMetadataInternal static
  storagefile: qcow1: Check for BACKING_STORE_OK
  storagefile: qcow1: Fix check for empty backing file
  storagefile: qcow1: Let qcowXGetBackingStore fill in format
  storagefile: Check version to determine if qcow2 or not
  storagefile: Drop now unused isQCow2 argument
  storagefile: Use qcowXGetBackingStore directly
  storagefile: Push 'start' into qcow2GetBackingStoreFormat
  storagefile: Push extension_end calc to qcow2GetBackingStoreFormat
  storagefile: Rename qcow2GetBackingStoreFormat
  storagefile: Rename qcow2GetExtensions 'format' argument
  storagefile: Fix backing format \0 check
  storagefile: Add externalDataStoreRaw member
  storagefile: Parse qcow2 external data file
  storagefile: Fill in meta->externalDataStoreRaw
  storagefile: Don't access backingStoreRaw directly in
    FromBackingRelative
  storagefile: Split out virStorageSourceNewFromChild
  storagefile: Add externalDataStore member
  storagefile: Fill in meta->externalDataStore
  security: dac: Drop !parent handling in SetImageLabelInternal
  security: dac: Add is_toplevel to SetImageLabelInternal
  security: dac: Restore image label for externalDataStore
  security: dac: break out SetImageLabelRelative
  security: dac: Label externalDataStore
  security: selinux: Simplify SetImageLabelInternal
  security: selinux: Drop !parent handling in SetImageLabelInternal
  security: selinux: Add is_toplevel to SetImageLabelInternal
  security: selinux: Restore image label for externalDataStore
  security: selinux: break out SetImageLabelRelative
  security: selinux: Label externalDataStore

 src/libvirt_private.syms        |   1 -
 src/security/security_dac.c     |  63 +++++--
 src/security/security_selinux.c |  97 +++++++----
 src/util/virstoragefile.c       | 281 ++++++++++++++++++++------------
 src/util/virstoragefile.h       |  11 +-
 5 files changed, 290 insertions(+), 163 deletions(-)

-- 
2.23.0

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Daniel Henrique Barboza 4 years, 6 months ago
Hi,


ACKed basically everything; in perhaps one or two patches I found
something worth talking about, but nothing game-breaking. Logic-wise
everything made sense to me, but I believe someone else with a deeper
understanding of the storage backend in libvirt might know better.

I am not sure how hard it is to test the changes you're making here, but
it would be good to have new coverage in virstoragetest.c (seems like the
best place) for this new data_file support as well.

On a side note: from patch 20 onward I got the impression that I was
reviewing the same patches over and over again. I think it's a good idea
to have a look at the code repetition between the files in the
src/security dir, or at the very least between the security_dac.c and
security_selinux.c files.


Thanks,


DHB


On 10/7/19 6:49 PM, Cole Robinson wrote:
> This series is the first steps to teaching libvirt about qcow2
> data_file support, aka external data files or qcow2 external metadata.
>
> A bit about the feature: it was added in qemu 4.0. It essentially
> creates a two part image file: a qcow2 layer that just tracks the
> image metadata, and a separate data file which stores the VM
> disk contents. AFAICT the driving use case is to keep a fully coherent
> raw disk image on disk, and only use qcow2 as an intermediate metadata
> layer when necessary, for things like incremental backup support.
>
> The original qemu patch posting is here:
> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>
> For testing, you can create a new qcow2+raw data_file image from an
> existing image, like:
>
>      qemu-img convert -O qcow2 \
>          -o data_file=NEW.raw,data_file_raw=yes \
>          EXISTING.raw NEW.qcow2
>
> The goal of this series is to teach libvirt enough about this case
> so that we can correctly relabel the data_file on VM startup/shutdown.
> The main functional changes are
>
>    * Teach storagefile how to parse out data_file from the qcow2 header
>    * Store the raw string as virStorageSource->externalDataStoreRaw
>    * Track that as its own virStorageSource in externalDataStore
>    * dac/selinux relabel externalDataStore as needed
>
> From libvirt's perspective, externalDataStore is conceptually pretty
> close to a backingStore, but the main difference is its read/write
> permissions should match its parent image, rather than being readonly
> like backingStore.
>
> This series has only been tested on top of the -blockdev enablement
> series, but I don't think it actually interacts with that work at
> the moment.
>
>
> Future work:
>    * Exposing this in the runtime XML. We need to figure out an XML
>      schema. It will reuse virStorageSource obviously, but the main
>      thing to figure out is probably 1) what the top element name
>      should be ('dataFile' maybe?), 2) where it sits in the XML
>      hierarchy (under <disk> or under <source> I guess)
>
>    * Exposing this on the qemu -blockdev command line. Similar to how
>      in the blockdev world we are explicitly putting the disk backing
>      chain on the command line, we can do that for data_file too. Then
>      like persistent <backingStore> XML the user will have the power
>      to overwrite the data_file location for an individual VM run.
>
>    * Figure out how we expect ovirt/rhev to be using this at runtime.
>      Possibly taking a running VM using a raw image, doing blockdev-*
>      magic to pivot it to qcow2+raw data_file, so it can initiate
>      incremental backup on top of a previously raw only VM?
>
>
> Known issues:
>    * In the qemu driver, the qcow2 image metadata is only parsed
>      in -blockdev world if no <backingStore> is specified in the
>      persistent XML. So basically if there's a <backingStore> listed,
>      we never parse the qcow2 header and detect the presence of
>      data_file. Fixable I'm sure but I didn't look into it much yet.
>
> Most of this is cleanups and refactorings to simplify the actual
> functional changes.
>
> Cole Robinson (30):
>    storagefile: Make GetMetadataInternal static
>    storagefile: qcow1: Check for BACKING_STORE_OK
>    storagefile: qcow1: Fix check for empty backing file
>    storagefile: qcow1: Let qcowXGetBackingStore fill in format
>    storagefile: Check version to determine if qcow2 or not
>    storagefile: Drop now unused isQCow2 argument
>    storagefile: Use qcowXGetBackingStore directly
>    storagefile: Push 'start' into qcow2GetBackingStoreFormat
>    storagefile: Push extension_end calc to qcow2GetBackingStoreFormat
>    storagefile: Rename qcow2GetBackingStoreFormat
>    storagefile: Rename qcow2GetExtensions 'format' argument
>    storagefile: Fix backing format \0 check
>    storagefile: Add externalDataStoreRaw member
>    storagefile: Parse qcow2 external data file
>    storagefile: Fill in meta->externalDataStoreRaw
>    storagefile: Don't access backingStoreRaw directly in
>      FromBackingRelative
>    storagefile: Split out virStorageSourceNewFromChild
>    storagefile: Add externalDataStore member
>    storagefile: Fill in meta->externalDataStore
>    security: dac: Drop !parent handling in SetImageLabelInternal
>    security: dac: Add is_toplevel to SetImageLabelInternal
>    security: dac: Restore image label for externalDataStore
>    security: dac: break out SetImageLabelRelative
>    security: dac: Label externalDataStore
>    security: selinux: Simplify SetImageLabelInternal
>    security: selinux: Drop !parent handling in SetImageLabelInternal
>    security: selinux: Add is_toplevel to SetImageLabelInternal
>    security: selinux: Restore image label for externalDataStore
>    security: selinux: break out SetImageLabelRelative
>    security: selinux: Label externalDataStore
>
>   src/libvirt_private.syms        |   1 -
>   src/security/security_dac.c     |  63 +++++--
>   src/security/security_selinux.c |  97 +++++++----
>   src/util/virstoragefile.c       | 281 ++++++++++++++++++++------------
>   src/util/virstoragefile.h       |  11 +-
>   5 files changed, 290 insertions(+), 163 deletions(-)
>

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Christian Ehrhardt 4 years, 6 months ago
On Mon, Oct 7, 2019 at 11:49 PM Cole Robinson <crobinso@redhat.com> wrote:
>
> This series is the first steps to teaching libvirt about qcow2
> data_file support, aka external data files or qcow2 external metadata.
>
> A bit about the feature: it was added in qemu 4.0. It essentially
> creates a two part image file: a qcow2 layer that just tracks the
> image metadata, and a separate data file which stores the VM
> disk contents. AFAICT the driving use case is to keep a fully coherent
> raw disk image on disk, and only use qcow2 as an intermediate metadata
> layer when necessary, for things like incremental backup support.
>
> The original qemu patch posting is here:
> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>
> For testing, you can create a new qcow2+raw data_file image from an
> existing image, like:
>
>     qemu-img convert -O qcow2 \
>         -o data_file=NEW.raw,data_file_raw=yes \
>         EXISTING.raw NEW.qcow2
>
> The goal of this series is to teach libvirt enough about this case
> so that we can correctly relabel the data_file on VM startup/shutdown.
> The main functional changes are
>
>   * Teach storagefile how to parse out data_file from the qcow2 header
>   * Store the raw string as virStorageSource->externalDataStoreRaw
>   * Track that as its own virStorageSource in externalDataStore
>   * dac/selinux relabel externalDataStore as needed
>
> From libvirt's perspective, externalDataStore is conceptually pretty
> close to a backingStore, but the main difference is its read/write
> permissions should match its parent image, rather than being readonly
> like backingStore.
>
> This series has only been tested on top of the -blockdev enablement
> series, but I don't think it actually interacts with that work at
> the moment.
>
>
> Future work:
>   * Exposing this in the runtime XML. We need to figure out an XML
>     schema. It will reuse virStorageSource obviously, but the main
>     thing to figure out is probably 1) what the top element name
>     should be ('dataFile' maybe?), 2) where it sits in the XML
>     hierarchy (under <disk> or under <source> I guess)
>
>   * Exposing this on the qemu -blockdev command line. Similar to how
>     in the blockdev world we are explicitly putting the disk backing
>     chain on the command line, we can do that for data_file too. Then
>     like persistent <backingStore> XML the user will have the power
>     to overwrite the data_file location for an individual VM run.
>
>   * Figure out how we expect ovirt/rhev to be using this at runtime.
>     Possibly taking a running VM using a raw image, doing blockdev-*
>     magic to pivot it to qcow2+raw data_file, so it can initiate
>     incremental backup on top of a previously raw only VM?
>
>
> Known issues:
>   * In the qemu driver, the qcow2 image metadata is only parsed
>     in -blockdev world if no <backingStore> is specified in the
>     persistent XML. So basically if there's a <backingStore> listed,
>     we never parse the qcow2 header and detect the presence of
>     data_file. Fixable I'm sure but I didn't look into it much yet.
>
> Most of this is cleanups and refactorings to simplify the actual
> functional changes.
>
> Cole Robinson (30):
>   storagefile: Make GetMetadataInternal static
>   storagefile: qcow1: Check for BACKING_STORE_OK
>   storagefile: qcow1: Fix check for empty backing file
>   storagefile: qcow1: Let qcowXGetBackingStore fill in format
>   storagefile: Check version to determine if qcow2 or not
>   storagefile: Drop now unused isQCow2 argument
>   storagefile: Use qcowXGetBackingStore directly
>   storagefile: Push 'start' into qcow2GetBackingStoreFormat
>   storagefile: Push extension_end calc to qcow2GetBackingStoreFormat
>   storagefile: Rename qcow2GetBackingStoreFormat
>   storagefile: Rename qcow2GetExtensions 'format' argument
>   storagefile: Fix backing format \0 check
>   storagefile: Add externalDataStoreRaw member
>   storagefile: Parse qcow2 external data file
>   storagefile: Fill in meta->externalDataStoreRaw
>   storagefile: Don't access backingStoreRaw directly in
>     FromBackingRelative
>   storagefile: Split out virStorageSourceNewFromChild
>   storagefile: Add externalDataStore member
>   storagefile: Fill in meta->externalDataStore
>   security: dac: Drop !parent handling in SetImageLabelInternal
>   security: dac: Add is_toplevel to SetImageLabelInternal
>   security: dac: Restore image label for externalDataStore
>   security: dac: break out SetImageLabelRelative
>   security: dac: Label externalDataStore
>   security: selinux: Simplify SetImageLabelInternal
>   security: selinux: Drop !parent handling in SetImageLabelInternal
>   security: selinux: Add is_toplevel to SetImageLabelInternal
>   security: selinux: Restore image label for externalDataStore
>   security: selinux: break out SetImageLabelRelative
>   security: selinux: Label externalDataStore

Hi Cole,
it seems the changes to dac/selinux follow a common pattern; in the
past such changes have mostly applied to security-apparmor as well.
Are you going to add patches for that security backend as well before
this is final, or do you expect the apparmor users (Debian/Suse/Ubuntu)
to do so later on?

Furthermore, for the static XML->AppArmor part it will most likely need
an extension to [1] as well.
Fortunately, as you said, it is very similar to backingDevices, which
means it should be rather easy to add this.

[1]: https://libvirt.org/git/?p=libvirt.git;a=blob;f=src/security/virt-aa-helper.c;h=5853ad985fe17c91af3f1dc39d179f22a1dca5b7;hb=HEAD#l980



>  src/libvirt_private.syms        |   1 -
>  src/security/security_dac.c     |  63 +++++--
>  src/security/security_selinux.c |  97 +++++++----
>  src/util/virstoragefile.c       | 281 ++++++++++++++++++++------------
>  src/util/virstoragefile.h       |  11 +-
>  5 files changed, 290 insertions(+), 163 deletions(-)
>
> --
> 2.23.0
>



--
Christian Ehrhardt
Software Engineer, Ubuntu Server
Canonical Ltd

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Cole Robinson 4 years, 6 months ago
On 10/8/19 1:52 AM, Christian Ehrhardt wrote:
> On Mon, Oct 7, 2019 at 11:49 PM Cole Robinson <crobinso@redhat.com> wrote:
>>
>> This series is the first steps to teaching libvirt about qcow2
>> data_file support, aka external data files or qcow2 external metadata.
>>
>> A bit about the feature: it was added in qemu 4.0. It essentially
>> creates a two part image file: a qcow2 layer that just tracks the
>> image metadata, and a separate data file which stores the VM
>> disk contents. AFAICT the driving use case is to keep a fully coherent
>> raw disk image on disk, and only use qcow2 as an intermediate metadata
>> layer when necessary, for things like incremental backup support.
>>
>> The original qemu patch posting is here:
>> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>>
>> For testing, you can create a new qcow2+raw data_file image from an
>> existing image, like:
>>
>>      qemu-img convert -O qcow2 \
>>          -o data_file=NEW.raw,data_file_raw=yes \
>>          EXISTING.raw NEW.qcow2
>>
>> The goal of this series is to teach libvirt enough about this case
>> so that we can correctly relabel the data_file on VM startup/shutdown.
>> The main functional changes are
>>
>>    * Teach storagefile how to parse out data_file from the qcow2 header
>>    * Store the raw string as virStorageSource->externalDataStoreRaw
>>    * Track that as its own virStorageSource in externalDataStore
>>    * dac/selinux relabel externalDataStore as needed
>>
>> From libvirt's perspective, externalDataStore is conceptually pretty
>> close to a backingStore, but the main difference is its read/write
>> permissions should match its parent image, rather than being readonly
>> like backingStore.
>>
>> This series has only been tested on top of the -blockdev enablement
>> series, but I don't think it actually interacts with that work at
>> the moment.
>>
>>
>> Future work:
>>    * Exposing this in the runtime XML. We need to figure out an XML
>>      schema. It will reuse virStorageSource obviously, but the main
>>      thing to figure out is probably 1) what the top element name
>>      should be ('dataFile' maybe?), 2) where it sits in the XML
>>      hierarchy (under <disk> or under <source> I guess)
>>
>>    * Exposing this on the qemu -blockdev command line. Similar to how
>>      in the blockdev world we are explicitly putting the disk backing
>>      chain on the command line, we can do that for data_file too. Then
>>      like persistent <backingStore> XML the user will have the power
>>      to overwrite the data_file location for an individual VM run.
>>
>>    * Figure out how we expect ovirt/rhev to be using this at runtime.
>>      Possibly taking a running VM using a raw image, doing blockdev-*
>>      magic to pivot it to qcow2+raw data_file, so it can initiate
>>      incremental backup on top of a previously raw only VM?
>>
>>
>> Known issues:
>>    * In the qemu driver, the qcow2 image metadata is only parsed
>>      in -blockdev world if no <backingStore> is specified in the
>>      persistent XML. So basically if there's a <backingStore> listed,
>>      we never parse the qcow2 header and detect the presence of
>>      data_file. Fixable I'm sure but I didn't look into it much yet.
>>
>> Most of this is cleanups and refactorings to simplify the actual
>> functional changes.
>>
>> Cole Robinson (30):
>>    storagefile: Make GetMetadataInternal static
>>    storagefile: qcow1: Check for BACKING_STORE_OK
>>    storagefile: qcow1: Fix check for empty backing file
>>    storagefile: qcow1: Let qcowXGetBackingStore fill in format
>>    storagefile: Check version to determine if qcow2 or not
>>    storagefile: Drop now unused isQCow2 argument
>>    storagefile: Use qcowXGetBackingStore directly
>>    storagefile: Push 'start' into qcow2GetBackingStoreFormat
>>    storagefile: Push extension_end calc to qcow2GetBackingStoreFormat
>>    storagefile: Rename qcow2GetBackingStoreFormat
>>    storagefile: Rename qcow2GetExtensions 'format' argument
>>    storagefile: Fix backing format \0 check
>>    storagefile: Add externalDataStoreRaw member
>>    storagefile: Parse qcow2 external data file
>>    storagefile: Fill in meta->externalDataStoreRaw
>>    storagefile: Don't access backingStoreRaw directly in
>>      FromBackingRelative
>>    storagefile: Split out virStorageSourceNewFromChild
>>    storagefile: Add externalDataStore member
>>    storagefile: Fill in meta->externalDataStore
>>    security: dac: Drop !parent handling in SetImageLabelInternal
>>    security: dac: Add is_toplevel to SetImageLabelInternal
>>    security: dac: Restore image label for externalDataStore
>>    security: dac: break out SetImageLabelRelative
>>    security: dac: Label externalDataStore
>>    security: selinux: Simplify SetImageLabelInternal
>>    security: selinux: Drop !parent handling in SetImageLabelInternal
>>    security: selinux: Add is_toplevel to SetImageLabelInternal
>>    security: selinux: Restore image label for externalDataStore
>>    security: selinux: break out SetImageLabelRelative
>>    security: selinux: Label externalDataStore
> 
> Hi Cole,
> it seems the changes to dac/selinux follow a common pattern, in the
> past those changes then mostly applied to security-apparmor as well.
> Are you going to add patches for that security backend as well before
> this is final or do you expect the apparmor users Debian/Suse/Ubuntu
> to do so later on?
> 
> Furthermore for the static XML->Apparmor part it will most likely need
> an extension to [1] as well.
> Fortunately as you said it is very similar to backingDevices which
> means it should be rather easy to add this.
> 
> [1]: https://libvirt.org/git/?p=libvirt.git;a=blob;f=src/security/virt-aa-helper.c;h=5853ad985fe17c91af3f1dc39d179f22a1dca5b7;hb=HEAD#l980

I forgot about apparmor, sorry. I just sent a series (and CC'd you) that
does some apparmor cleanups and tweaks, so that for this series the
data_file coverage is just this extra diff:

diff --git a/src/security/virt-aa-helper.c b/src/security/virt-aa-helper.c
index d9f6b5638b..fc095d2964 100644
--- a/src/security/virt-aa-helper.c
+++ b/src/security/virt-aa-helper.c
@@ -948,6 +948,10 @@ storage_source_add_files(virStorageSourcePtr src,
          if (add_file_path(tmp, depth, buf) < 0)
              return -1;

+        if (src->externalDataStore &&
+            storage_source_add_files(src->externalDataStore, buf, depth) < 0)
+            return -1;
+
          depth++;
      }

I will add a proper patch for that when the prep work hits git.

Thanks,
Cole

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Michal Privoznik 4 years, 6 months ago
On 10/7/19 11:49 PM, Cole Robinson wrote:
> This series is the first steps to teaching libvirt about qcow2
> data_file support, aka external data files or qcow2 external metadata.
> 
> A bit about the feature: it was added in qemu 4.0. It essentially
> creates a two part image file: a qcow2 layer that just tracks the
> image metadata, and a separate data file which stores the VM
> disk contents. AFAICT the driving use case is to keep a fully coherent
> raw disk image on disk, and only use qcow2 as an intermediate metadata
> layer when necessary, for things like incremental backup support.
> 
> The original qemu patch posting is here:
> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
> 
> For testing, you can create a new qcow2+raw data_file image from an
> existing image, like:
> 
>      qemu-img convert -O qcow2 \
>          -o data_file=NEW.raw,data_file_raw=yes \
>          EXISTING.raw NEW.qcow2
> 
> The goal of this series is to teach libvirt enough about this case
> so that we can correctly relabel the data_file on VM startup/shutdown.
> The main functional changes are
> 
>    * Teach storagefile how to parse out data_file from the qcow2 header
>    * Store the raw string as virStorageSource->externalDataStoreRaw
>    * Track that as its own virStorageSource in externalDataStore
>    * dac/selinux relabel externalDataStore as needed
> 
> From libvirt's perspective, externalDataStore is conceptually pretty
> close to a backingStore, but the main difference is its read/write
> permissions should match its parent image, rather than being readonly
> like backingStore.
> 
> This series has only been tested on top of the -blockdev enablement
> series, but I don't think it actually interacts with that work at
> the moment.
> 
> 
> Future work:
>    * Exposing this in the runtime XML. We need to figure out an XML
>      schema. It will reuse virStorageSource obviously, but the main
>      thing to figure out is probably 1) what the top element name
>      should be ('dataFile' maybe?), 2) where it sits in the XML
>      hierarchy (under <disk> or under <source> I guess)
> 
>    * Exposing this on the qemu -blockdev command line. Similar to how
>      in the blockdev world we are explicitly putting the disk backing
>      chain on the command line, we can do that for data_file too. Then
>      like persistent <backingStore> XML the user will have the power
>      to overwrite the data_file location for an individual VM run.
> 
>    * Figure out how we expect ovirt/rhev to be using this at runtime.
>      Possibly taking a running VM using a raw image, doing blockdev-*
>      magic to pivot it to qcow2+raw data_file, so it can initiate
>      incremental backup on top of a previously raw only VM?
> 
> 
> Known issues:
>    * In the qemu driver, the qcow2 image metadata is only parsed
>      in -blockdev world if no <backingStore> is specified in the
>      persistent XML. So basically if there's a <backingStore> listed,
>      we never parse the qcow2 header and detect the presence of
>      data_file. Fixable I'm sure but I didn't look into it much yet.
> 
> Most of this is cleanups and refactorings to simplify the actual
> functional changes.
> 
> Cole Robinson (30):
>    storagefile: Make GetMetadataInternal static
>    storagefile: qcow1: Check for BACKING_STORE_OK
>    storagefile: qcow1: Fix check for empty backing file
>    storagefile: qcow1: Let qcowXGetBackingStore fill in format
>    storagefile: Check version to determine if qcow2 or not
>    storagefile: Drop now unused isQCow2 argument
>    storagefile: Use qcowXGetBackingStore directly
>    storagefile: Push 'start' into qcow2GetBackingStoreFormat
>    storagefile: Push extension_end calc to qcow2GetBackingStoreFormat
>    storagefile: Rename qcow2GetBackingStoreFormat
>    storagefile: Rename qcow2GetExtensions 'format' argument
>    storagefile: Fix backing format \0 check
>    storagefile: Add externalDataStoreRaw member
>    storagefile: Parse qcow2 external data file
>    storagefile: Fill in meta->externalDataStoreRaw
>    storagefile: Don't access backingStoreRaw directly in
>      FromBackingRelative
>    storagefile: Split out virStorageSourceNewFromChild
>    storagefile: Add externalDataStore member
>    storagefile: Fill in meta->externalDataStore
>    security: dac: Drop !parent handling in SetImageLabelInternal
>    security: dac: Add is_toplevel to SetImageLabelInternal
>    security: dac: Restore image label for externalDataStore
>    security: dac: break out SetImageLabelRelative
>    security: dac: Label externalDataStore
>    security: selinux: Simplify SetImageLabelInternal
>    security: selinux: Drop !parent handling in SetImageLabelInternal
>    security: selinux: Add is_toplevel to SetImageLabelInternal
>    security: selinux: Restore image label for externalDataStore
>    security: selinux: break out SetImageLabelRelative
>    security: selinux: Label externalDataStore
> 
>   src/libvirt_private.syms        |   1 -
>   src/security/security_dac.c     |  63 +++++--
>   src/security/security_selinux.c |  97 +++++++----
>   src/util/virstoragefile.c       | 281 ++++++++++++++++++++------------
>   src/util/virstoragefile.h       |  11 +-
>   5 files changed, 290 insertions(+), 163 deletions(-)
> 

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Michal

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Christophe de Dinechin 4 years, 6 months ago
Cole Robinson writes:

> This series is the first steps to teaching libvirt about qcow2
> data_file support, aka external data files or qcow2 external metadata.
>
> A bit about the feature: it was added in qemu 4.0. It essentially
> creates a two part image file: a qcow2 layer that just tracks the
> image metadata, and a separate data file which is stores the VM

s/is stores/stores/

> disk contents. AFAICT the driving use case is to keep a fully coherent
> raw disk image on disk, and only use qcow2 as an intermediate metadata
> layer when necessary, for things like incremental backup support.
>
> The original qemu patch posting is here:
> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>
> For testing, you can create a new qcow2+raw data_file image from an
> existing image, like:
>
>     qemu-img convert -O qcow2 \
>         -o data_file=NEW.raw,data_file_raw=yes \
>         EXISTING.raw NEW.qcow2

What happens if you try to do that in place, i.e. EXISTING==NEW?

>
> The goal of this series is to teach libvirt enough about this case
> so that we can correctly relabel the data_file on VM startup/shutdown.
> The main functional changes are
>
>   * Teach storagefile how to parse out data_file from the qcow2 header
>   * Store the raw string as virStorageSource->externalDataStoreRaw
>   * Track that as its own virStorageSource in externalDataStore
>   * dac/selinux relabel externalDataStore as needed
>

--
Cheers,
Christophe de Dinechin (IRC c3d)

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Cole Robinson 4 years, 6 months ago
Hmm, why wasn't I on the to/cc list? Was that intentional?

On 10/17/19 5:26 AM, Christophe de Dinechin wrote:
> 
> Cole Robinson writes:
> 
>> This series is the first steps to teaching libvirt about qcow2
>> data_file support, aka external data files or qcow2 external metadata.
>>
>> A bit about the feature: it was added in qemu 4.0. It essentially
>> creates a two part image file: a qcow2 layer that just tracks the
>> image metadata, and a separate data file which is stores the VM
> 
> s/is stores/stores/

Indeed, but what's the benefit of correcting grammar in a cover letter
for already applied patches?

> 
>> disk contents. AFAICT the driving use case is to keep a fully coherent
>> raw disk image on disk, and only use qcow2 as an intermediate metadata
>> layer when necessary, for things like incremental backup support.
>>
>> The original qemu patch posting is here:
>> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>>
>> For testing, you can create a new qcow2+raw data_file image from an
>> existing image, like:
>>
>>     qemu-img convert -O qcow2 \
>>         -o data_file=NEW.raw,data_file_raw=yes \
>>         EXISTING.raw NEW.qcow2
> 
> What happens if you try to do that in place, i.e. EXISTING==NEW?
> 

It results in an unbootable image in my testing. I'm guessing it
initializes the raw image first, similar to trying to use 'qemu-img
create' with an existing raw image as data_file:

https://bugzilla.redhat.com/show_bug.cgi?id=1688814#c8
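
i.e. something like the following (illustrative paths; DESTRUCTIVE, since
'create' initializes the data file rather than importing its contents):

    qemu-img create -f qcow2 \
        -o data_file=EXISTING.raw,data_file_raw=on \
        OVERLAY.qcow2 20G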

- Cole

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Christophe de Dinechin 4 years, 6 months ago

> On 17 Oct 2019, at 14:35, Cole Robinson <crobinso@redhat.com> wrote:
> 
> Hmm why wasn't I in to/cc list, was that intentional?

Not at all. I assumed a reply-to would add it automatically and did not
bother checking. Probably something wrong in my mu4e config.

> 
> On 10/17/19 5:26 AM, Christophe de Dinechin wrote:
>> 
>> Cole Robinson writes:
>> 
>>> This series is the first steps to teaching libvirt about qcow2
>>> data_file support, aka external data files or qcow2 external metadata.
>>> 
>>> A bit about the feature: it was added in qemu 4.0. It essentially
>>> creates a two part image file: a qcow2 layer that just tracks the
>>> image metadata, and a separate data file which is stores the VM
>> 
>> s/is stores/stores/
> 
> Indeed, but what's the benefit of correcting grammar in a cover letter
> for already applied patches?

Obviously, I realized you had already applied that patch only after sending
my email ;-) I saw that there was still some active discussion on that thread,
so I assumed it was still under review.

>> 
>> What happens if you try to do that in place, i.e. EXISTING==NEW?
>> 
> 
> Results in an unbootable image in my testing. I'm guessing it
> initializes the raw image first, similar to trying to use 'qemu-img
> create' with an existing raw image as data_file
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1688814#c8

OK. Might be worth catching or fixing at some point.


Christophe
Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Han Han 4 years, 6 months ago
Hello Cole, one issue was found:
the qcow2 data file XATTRs are not cleaned up on external snapshot when
-blockdev is not enabled.

Versions:
libvirt v5.8.0-134-g9d03e9adf1
qemu-kvm-4.1.0-13.module+el8.1.0+4313+ef76ec61.x86_64

Steps:
1. Convert an OS image to qcow2 & qcow2 data file:
# qemu-img convert -O qcow2 -o
data_file=/var/lib/libvirt/images/pc-data.raw,data_file_raw=on
/var/lib/libvirt/images/pc.qcow2 /var/lib/libvirt/images/pc-data.qcow2

2. Build and start libvirt source, start libvirt daemon:
# make clean && CC=/usr/lib64/ccache/cc ./autogen.sh&&./configure
--without-libssh --build=x86_64-redhat-linux-gnu
--host=x86_64-redhat-linux-gnu --program-prefix=
--disable-dependency-tracking --prefix=/usr --exec-prefix=/usr
--bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
--datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64
--libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib
--mandir=/usr/share/man --infodir=/usr/share/info --with-qemu
--without-openvz --without-lxc --without-vbox --without-libxl --with-sasl
--with-polkit --with-libvirtd --without-phyp --with-esx --without-hyperv
--without-vmware --without-xenapi --without-vz --without-bhyve
--with-interface --with-network --with-storage-fs --with-storage-lvm
--with-storage-iscsi --with-storage-iscsi-direct --with-storage-scsi
--with-storage-disk --with-storage-mpath --with-storage-rbd
--without-storage-sheepdog --with-storage-gluster --without-storage-zfs
--without-storage-vstorage --with-numactl --with-numad --with-capng
--without-fuse --with-netcf --with-selinux
--with-selinux-mount=/sys/fs/selinux --without-apparmor --without-hal
--with-udev --with-yajl --with-sanlock --with-libpcap --with-macvtap
--with-audit --with-dtrace --with-driver-modules --with-firewalld
--with-firewalld-zone --without-wireshark-dissector --without-pm-utils
--with-nss-plugin '--with-packager=Unknown, 2019-08-19-12:13:01,
lab.rhel8.me' --with-packager-version=1.el8 --with-qemu-user=qemu
--with-qemu-group=qemu --with-tls-priority=@LIBVIRT,SYSTEM --enable-werror
--enable-expensive-tests --with-init-script=systemd --without-login-shell
&& make -j8
# LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" src/.libs/virtlogd
# LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" LIBVIRT_DEBUG=3
LIBVIRT_LOG_FILTERS="1:util 1:qemu 1:security"
LIBVIRT_LOG_OUTPUTS="1:file:/tmp/libvirt_daemon.log" src/.libs/libvirtd

3. Define and start a VM with the qcow2 & qcow2 data file. Note that
-blockdev is not enabled:
# virsh define pc-data.xml
# virsh start pc-data

4. Create snapshot and check the data file XATTRs:
# virsh snapshot-create-as pc-data s1 --no-metadata --disk-only
# getfattr -m - -d /var/lib/libvirt/images/pc-data.raw
getfattr: Removing leading '/' from absolute path names
# file: var/lib/libvirt/images/pc-data.raw
security.selinux="unconfined_u:object_r:svirt_image_t:s0:c775,c1011"
trusted.libvirt.security.dac="+107:+107"
trusted.libvirt.security.ref_dac="1"
trusted.libvirt.security.ref_selinux="1"
trusted.libvirt.security.selinux="unconfined_u:object_r:svirt_image_t:s0:c284,c367"
trusted.libvirt.security.timestamp_dac="1563328069"
trusted.libvirt.security.timestamp_selinux="1563328069"

Shut down the VM. The XATTRs of the data file are not changed.
That is not expected: the XATTRs should not contain *.libvirt.*
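
As a workaround, the stale labels can be removed by hand, e.g. for the
DAC label (and likewise for the other trusted.libvirt.security.* entries):

# setfattr -x trusted.libvirt.security.dac /var/lib/libvirt/images/pc-data.raw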

The issue is not reproduced with -blockdev enabled:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
  <qemu:capabilities>
    <qemu:add capability='blockdev'/>
    <qemu:del capability='drive'/>
  </qemu:capabilities>
</domain>

See the libvirt daemon log and VM XML in the attachment.

On Tue, Oct 8, 2019 at 5:49 AM Cole Robinson <crobinso@redhat.com> wrote:

> This series is the first steps to teaching libvirt about qcow2
> data_file support, aka external data files or qcow2 external metadata.
>
> A bit about the feature: it was added in qemu 4.0. It essentially
> creates a two part image file: a qcow2 layer that just tracks the
> image metadata, and a separate data file which stores the VM
> disk contents. AFAICT the driving use case is to keep a fully coherent
> raw disk image on disk, and only use qcow2 as an intermediate metadata
> layer when necessary, for things like incremental backup support.
>
> The original qemu patch posting is here:
> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>
> For testing, you can create a new qcow2+raw data_file image from an
> existing image, like:
>
>     qemu-img convert -O qcow2 \
>         -o data_file=NEW.raw,data_file_raw=yes \
>         EXISTING.raw NEW.qcow2
>
> The goal of this series is to teach libvirt enough about this case
> so that we can correctly relabel the data_file on VM startup/shutdown.
> The main functional changes are
>
>   * Teach storagefile how to parse out data_file from the qcow2 header
>   * Store the raw string as virStorageSource->externalDataStoreRaw
>   * Track that as its own virStorageSource in externalDataStore
>   * dac/selinux relabel externalDataStore as needed
>
> From libvirt's perspective, externalDataStore is conceptually pretty
> close to a backingStore, but the main difference is its read/write
> permissions should match its parent image, rather than being readonly
> like backingStore.
>
> This series has only been tested on top of the -blockdev enablement
> series, but I don't think it actually interacts with that work at
> the moment.
>
>
> Future work:
>   * Exposing this in the runtime XML. We need to figure out an XML
>     schema. It will reuse virStorageSource obviously, but the main
>     thing to figure out is probably 1) what the top element name
>     should be ('dataFile' maybe?), 2) where it sits in the XML
>     hierarchy (under <disk> or under <source> I guess)
>
>   * Exposing this on the qemu -blockdev command line. Similar to how
>     in the blockdev world we are explicitly putting the disk backing
>     chain on the command line, we can do that for data_file too. Then
>     like persistent <backingStore> XML the user will have the power
>     to overwrite the data_file location for an individual VM run.
>
>   * Figure out how we expect ovirt/rhev to be using this at runtime.
>     Possibly taking a running VM using a raw image, doing blockdev-*
>     magic to pivot it to qcow2+raw data_file, so it can initiate
>     incremental backup on top of a previously raw only VM?
>
>
> Known issues:
>   * In the qemu driver, the qcow2 image metadata is only parsed
>     in -blockdev world if no <backingStore> is specified in the
>     persistent XML. So basically if there's a <backingStore> listed,
>     we never parse the qcow2 header and detect the presence of
>     data_file. Fixable I'm sure but I didn't look into it much yet.
>
> Most of this is cleanups and refactorings to simplify the actual
> functional changes.
>
> Cole Robinson (30):
>   storagefile: Make GetMetadataInternal static
>   storagefile: qcow1: Check for BACKING_STORE_OK
>   storagefile: qcow1: Fix check for empty backing file
>   storagefile: qcow1: Let qcowXGetBackingStore fill in format
>   storagefile: Check version to determine if qcow2 or not
>   storagefile: Drop now unused isQCow2 argument
>   storagefile: Use qcowXGetBackingStore directly
>   storagefile: Push 'start' into qcow2GetBackingStoreFormat
>   storagefile: Push extension_end calc to qcow2GetBackingStoreFormat
>   storagefile: Rename qcow2GetBackingStoreFormat
>   storagefile: Rename qcow2GetExtensions 'format' argument
>   storagefile: Fix backing format \0 check
>   storagefile: Add externalDataStoreRaw member
>   storagefile: Parse qcow2 external data file
>   storagefile: Fill in meta->externalDataStoreRaw
>   storagefile: Don't access backingStoreRaw directly in
>     FromBackingRelative
>   storagefile: Split out virStorageSourceNewFromChild
>   storagefile: Add externalDataStore member
>   storagefile: Fill in meta->externalDataStore
>   security: dac: Drop !parent handling in SetImageLabelInternal
>   security: dac: Add is_toplevel to SetImageLabelInternal
>   security: dac: Restore image label for externalDataStore
>   security: dac: break out SetImageLabelRelative
>   security: dac: Label externalDataStore
>   security: selinux: Simplify SetImageLabelInternal
>   security: selinux: Drop !parent handling in SetImageLabelInternal
>   security: selinux: Add is_toplevel to SetImageLabelInternal
>   security: selinux: Restore image label for externalDataStore
>   security: selinux: break out SetImageLabelRelative
>   security: selinux: Label externalDataStore
>
>  src/libvirt_private.syms        |   1 -
>  src/security/security_dac.c     |  63 +++++--
>  src/security/security_selinux.c |  97 +++++++----
>  src/util/virstoragefile.c       | 281 ++++++++++++++++++++------------
>  src/util/virstoragefile.h       |  11 +-
>  5 files changed, 290 insertions(+), 163 deletions(-)
>
> --
> 2.23.0
>


-- 
Best regards,
-----------------------------------
Han Han
Quality Engineer
Redhat.

Email: hhan@redhat.com
Phone: +861065339333
Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Cole Robinson 4 years, 6 months ago
On 10/15/19 3:56 AM, Han Han wrote:
> Hello Cole, one issue was found:
> the qcow2 data file XATTRs are not cleaned up on external snapshot when
> -blockdev is not enabled.
> 
> Versions:
> libvirt v5.8.0-134-g9d03e9adf1
> qemu-kvm-4.1.0-13.module+el8.1.0+4313+ef76ec61.x86_64
> 
> Steps:
> 1. Convert an OS image to qcow2 & qcow2 data file:
> # qemu-img convert -O qcow2 -o
> data_file=/var/lib/libvirt/images/pc-data.raw,data_file_raw=on
> /var/lib/libvirt/images/pc.qcow2 /var/lib/libvirt/images/pc-data.qcow2
> 
> 2. Build and start libvirt source, start libvirt daemon:
> # make clean && CC=/usr/lib64/ccache/cc ./autogen.sh&&./configure
> --without-libssh --build=x86_64-redhat-linux-gnu
> --host=x86_64-redhat-linux-gnu --program-prefix=
> --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr
> --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
> --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64
> --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib
> --mandir=/usr/share/man --infodir=/usr/share/info --with-qemu
> --without-openvz --without-lxc --without-vbox --without-libxl
> --with-sasl --with-polkit --with-libvirtd --without-phyp --with-esx
> --without-hyperv --without-vmware --without-xenapi --without-vz
> --without-bhyve --with-interface --with-network --with-storage-fs
> --with-storage-lvm --with-storage-iscsi --with-storage-iscsi-direct
> --with-storage-scsi --with-storage-disk --with-storage-mpath
> --with-storage-rbd --without-storage-sheepdog --with-storage-gluster
> --without-storage-zfs --without-storage-vstorage --with-numactl
> --with-numad --with-capng --without-fuse --with-netcf --with-selinux
> --with-selinux-mount=/sys/fs/selinux --without-apparmor --without-hal
> --with-udev --with-yajl --with-sanlock --with-libpcap --with-macvtap
> --with-audit --with-dtrace --with-driver-modules --with-firewalld
> --with-firewalld-zone --without-wireshark-dissector --without-pm-utils
> --with-nss-plugin '--with-packager=Unknown, 2019-08-19-12:13:01,
> lab.rhel8.me <http://lab.rhel8.me>' --with-packager-version=1.el8
> --with-qemu-user=qemu --with-qemu-group=qemu
> --with-tls-priority=@LIBVIRT,SYSTEM --enable-werror
> --enable-expensive-tests --with-init-script=systemd
> --without-login-shell && make -j8
> # LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" src/.libs/virtlogd
> # LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" LIBVIRT_DEBUG=3
> LIBVIRT_LOG_FILTERS="1:util 1:qemu 1:security"
> LIBVIRT_LOG_OUTPUTS="1:file:/tmp/libvirt_daemon.log" src/.libs/libvirtd
> 
> 3. Define and start a VM with the qcow2 & qcow2 data file. Note that
> -blockdev is not enabled:
> # virsh define pc-data.xml
> # virsh start pc-data
> 
> 4. Create snapshot and check the data file XATTRs:
> # virsh snapshot-create-as pc-data s1 --no-metadata --disk-only
> # getfattr -m - -d /var/lib/libvirt/images/pc-data.raw
> getfattr: Removing leading '/' from absolute path names
> # file: var/lib/libvirt/images/pc-data.raw
> security.selinux="unconfined_u:object_r:svirt_image_t:s0:c775,c1011"
> trusted.libvirt.security.dac="+107:+107"
> trusted.libvirt.security.ref_dac="1"
> trusted.libvirt.security.ref_selinux="1"
> trusted.libvirt.security.selinux="unconfined_u:object_r:svirt_image_t:s0:c284,c367"
> trusted.libvirt.security.timestamp_dac="1563328069"
> trusted.libvirt.security.timestamp_selinux="1563328069"
> 
> Shut down the VM. The XATTRs of the data file are not changed.
> That is not expected: the XATTRs should not contain *.libvirt.*
> 
> Issue is not reproduced with -blockdev enabled:
> <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
> ...
>   <qemu:capabilities>
>     <qemu:add capability='blockdev'/>
>     <qemu:del capability='drive'/>
>   </qemu:capabilities>
> </domain>
> 
> See the libvirt daemon log and vm xml in attachment.

Nice catch! I will need to dig into this to figure out where the issue
is. Can you put this info into an upstream bug report in
product=Virtualization Tools, and I will get to it when I can.

Thanks,
Cole

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Han Han 4 years, 6 months ago
On Wed, Oct 16, 2019 at 1:04 AM Cole Robinson <crobinso@redhat.com> wrote:

> On 10/15/19 3:56 AM, Han Han wrote:
> > Hello Cole, one issue was found:
> > the qcow2 data file XATTRs are not cleaned up on external snapshot when
> > -blockdev is not enabled.
> >
> > Versions:
> > libvirt v5.8.0-134-g9d03e9adf1
> > qemu-kvm-4.1.0-13.module+el8.1.0+4313+ef76ec61.x86_64
> >
> > Steps:
> > 1. Convert an OS image to qcow2 & qcow2 data file:
> > # qemu-img convert -O qcow2 -o
> > data_file=/var/lib/libvirt/images/pc-data.raw,data_file_raw=on
> > /var/lib/libvirt/images/pc.qcow2 /var/lib/libvirt/images/pc-data.qcow2
> >
> > 2. Build and start libvirt source, start libvirt daemon:
> > # make clean && CC=/usr/lib64/ccache/cc ./autogen.sh&&./configure
> > --without-libssh --build=x86_64-redhat-linux-gnu
> > --host=x86_64-redhat-linux-gnu --program-prefix=
> > --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr
> > --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
> > --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64
> > --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib
> > --mandir=/usr/share/man --infodir=/usr/share/info --with-qemu
> > --without-openvz --without-lxc --without-vbox --without-libxl
> > --with-sasl --with-polkit --with-libvirtd --without-phyp --with-esx
> > --without-hyperv --without-vmware --without-xenapi --without-vz
> > --without-bhyve --with-interface --with-network --with-storage-fs
> > --with-storage-lvm --with-storage-iscsi --with-storage-iscsi-direct
> > --with-storage-scsi --with-storage-disk --with-storage-mpath
> > --with-storage-rbd --without-storage-sheepdog --with-storage-gluster
> > --without-storage-zfs --without-storage-vstorage --with-numactl
> > --with-numad --with-capng --without-fuse --with-netcf --with-selinux
> > --with-selinux-mount=/sys/fs/selinux --without-apparmor --without-hal
> > --with-udev --with-yajl --with-sanlock --with-libpcap --with-macvtap
> > --with-audit --with-dtrace --with-driver-modules --with-firewalld
> > --with-firewalld-zone --without-wireshark-dissector --without-pm-utils
> > --with-nss-plugin '--with-packager=Unknown, 2019-08-19-12:13:01,
> > lab.rhel8.me <http://lab.rhel8.me>' --with-packager-version=1.el8
> > --with-qemu-user=qemu --with-qemu-group=qemu
> > --with-tls-priority=@LIBVIRT,SYSTEM --enable-werror
> > --enable-expensive-tests --with-init-script=systemd
> > --without-login-shell && make -j8
> > # LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" src/.libs/virtlogd
> > # LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" LIBVIRT_DEBUG=3
> > LIBVIRT_LOG_FILTERS="1:util 1:qemu 1:security"
> > LIBVIRT_LOG_OUTPUTS="1:file:/tmp/libvirt_daemon.log" src/.libs/libvirtd
> >
> > 3. Define and start a VM with the qcow2 & qcow2 data file. Note that
> > -blockdev is not enabled:
> > # virsh define pc-data.xml
> > # virsh start pc-data
> >
> > 4. Create snapshot and check the data file XATTRs:
> > # virsh snapshot-create-as pc-data s1 --no-metadata --disk-only
> > # getfattr -m - -d /var/lib/libvirt/images/pc-data.raw
> > getfattr: Removing leading '/' from absolute path names
> > # file: var/lib/libvirt/images/pc-data.raw
> > security.selinux="unconfined_u:object_r:svirt_image_t:s0:c775,c1011"
> > trusted.libvirt.security.dac="+107:+107"
> > trusted.libvirt.security.ref_dac="1"
> > trusted.libvirt.security.ref_selinux="1"
> >
> trusted.libvirt.security.selinux="unconfined_u:object_r:svirt_image_t:s0:c284,c367"
> > trusted.libvirt.security.timestamp_dac="1563328069"
> > trusted.libvirt.security.timestamp_selinux="1563328069"
> >
> > Shut down the VM. The XATTRs of the data file are not changed.
> > That is not expected: the XATTRs should not contain *.libvirt.*
> >
> > Issue is not reproduced with -blockdev enabled:
> > <domain type='kvm' xmlns:qemu='
> http://libvirt.org/schemas/domain/qemu/1.0'>
> > ...
> >   <qemu:capabilities>
> >     <qemu:add capability='blockdev'/>
> >     <qemu:del capability='drive'/>
> >   </qemu:capabilities>
> > </domain>
> >
> > See the libvirt daemon log and vm xml in attachment.
>
> Nice catch! I will need to dig into this to figure out where the issue
> is. Can you put this info into an upstream bug report in
>
Sure. https://bugzilla.redhat.com/show_bug.cgi?id=1762135

> product=Virtualization Tools  and I will get to it when I can
>
> Thanks,
> Cole
>


-- 
Best regards,
-----------------------------------
Han Han
Quality Engineer
Redhat.

Email: hhan@redhat.com
Phone: +861065339333
Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Ján Tomko 4 years, 6 months ago
On Mon, Oct 07, 2019 at 05:49:14PM -0400, Cole Robinson wrote:
>This series is the first steps to teaching libvirt about qcow2
>data_file support, aka external data files or qcow2 external metadata.
>
>A bit about the feature: it was added in qemu 4.0. It essentially
>creates a two part image file: a qcow2 layer that just tracks the
>image metadata, and a separate data file which stores the VM
>disk contents. AFAICT the driving use case is to keep a fully coherent
>raw disk image on disk, and only use qcow2 as an intermediate metadata
>layer when necessary, for things like incremental backup support.
>
>The original qemu patch posting is here:
>https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>
>For testing, you can create a new qcow2+raw data_file image from an
>existing image, like:
>
>    qemu-img convert -O qcow2 \
>        -o data_file=NEW.raw,data_file_raw=yes \
>        EXISTING.raw NEW.qcow2
>
>The goal of this series is to teach libvirt enough about this case
>so that we can correctly relabel the data_file on VM startup/shutdown.
>The main functional changes are
>
>  * Teach storagefile how to parse out data_file from the qcow2 header
>  * Store the raw string as virStorageSource->externalDataStoreRaw
>  * Track that as its own virStorageSource in externalDataStore
>  * dac/selinux relabel externalDataStore as needed
>
>From libvirt's perspective, externalDataStore is conceptually pretty
>close to a backingStore, but the main difference is its read/write
>permissions should match its parent image, rather than being readonly
>like backingStore.
>
>This series has only been tested on top of the -blockdev enablement
>series, but I don't think it actually interacts with that work at
>the moment.
>
>
>Future work:
>  * Exposing this in the runtime XML. We need to figure out an XML

This also belongs in the persistent XML.

>    schema. It will reuse virStorageSource obviously, but the main
>    thing to figure out is probably 1) what the top element name
>    should be ('dataFile' maybe?), 2) where it sits in the XML
>    hierarchy (under <disk> or under <source> I guess)
>

<metadataStore> maybe?

>  * Exposing this on the qemu -blockdev command line. Similar to how
>    in the blockdev world we are explicitly putting the disk backing
>    chain on the command line, we can do that for data_file too.

Historically, not being explicit on the command line and letting QEMU
do the right thing has bitten us, so yes, we have to do it for data_file
too.

>Then
>    like persistent <backingStore> XML the user will have the power
>    to overwrite the data_file location for an individual VM run.
>

If the point of the thin qcow2 layer is to contain the dirty bitmaps for
incremental backup, then after overriding the data_file you might as
well use a different metadata file? Otherwise the metadata won't match
the actual data.

OTOH, I can imagine throwing away the metadata file and starting over.

>  * Figure out how we expect ovirt/rhev to be using this at runtime.
>    Possibly taking a running VM using a raw image, doing blockdev-*
>    magic to pivot it to qcow2+raw data_file, so it can initiate
>    incremental backup on top of a previously raw only VM?
>
>
>Known issues:
>  * In the qemu driver, the qcow2 image metadata is only parsed
>    in -blockdev world if no <backingStore> is specified in the
>    persistent XML. So basically if there's a <backingStore> listed,
>    we never parse the qcow2 header and detect the presence of
>    data_file. Fixable I'm sure but I didn't look into it much yet.

This will be fixed by introducing an XML element for it.

Jano

>
>Most of this is cleanups and refactorings to simplify the actual
>functional changes.
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Cole Robinson 4 years, 6 months ago
On 10/11/19 9:45 AM, Ján Tomko wrote:
> On Mon, Oct 07, 2019 at 05:49:14PM -0400, Cole Robinson wrote:
>> This series is the first steps to teaching libvirt about qcow2
>> data_file support, aka external data files or qcow2 external metadata.
>>
>> A bit about the feature: it was added in qemu 4.0. It essentially
>> creates a two part image file: a qcow2 layer that just tracks the
>> image metadata, and a separate data file which is stores the VM
>> disk contents. AFAICT the driving use case is to keep a fully coherent
>> raw disk image on disk, and only use qcow2 as an intermediate metadata
>> layer when necessary, for things like incremental backup support.
>>
>> The original qemu patch posting is here:
>> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>>
>> For testing, you can create a new qcow2+raw data_file image from an
>> existing image, like:
>>
>>    qemu-img convert -O qcow2 \
>>        -o data_file=NEW.raw,data_file_raw=yes
>>        EXISTING.raw NEW.qcow2
>>
>> The goal of this series is to teach libvirt enough about this case
>> so that we can correctly relabel the data_file on VM startup/shutdown.
>> The main functional changes are
>>
>>  * Teach storagefile how to parse out data_file from the qcow2 header
>>  * Store the raw string as virStorageSource->externalDataStoreRaw
>>  * Track that as its out virStorageSource in externalDataStore
>>  * dac/selinux relabel externalDataStore as needed
>>
>>> From libvirt's perspective, externalDataStore is conceptually pretty
>> close to a backingStore, but the main difference is its read/write
>> permissions should match its parent image, rather than being readonly
>> like backingStore.
>>
>> This series has only been tested on top of the -blockdev enablement
>> series, but I don't think it actually interacts with that work at
>> the moment.
>>
>>
>> Future work:
>>  * Exposing this in the runtime XML. We need to figure out an XML
> 
> This also belongs in the persistent XML.
> 

Agreed

>>    schema. It will reuse virStorageSource obviously, but the main
>>    thing to figure out is probably 1) what the top element name
>>    should be ('dataFile' maybe?), 2) where it sits in the XML
>>    hierarchy (under <disk> or under <source> I guess)
>>
> 
> <metadataStore> maybe?

The way this code is structured, we have

src->path = FOO.qcow2
src->externalDataStore->path = FOO.raw

FOO.raw contains the disk/OS contents, FOO.qcow2 just the qcow2 
metadata. If we reflect that layout in the XML, we have

<disk>
   <source file='FOO.qcow2'>
     <externalDataStore>
       <source file='FOO.raw'/>
     </externalDataStore>
   </source>
</disk>

If we called it metadataStore it sounds like the layout is inverted.

>>  * Exposing this on the qemu -blockdev command line. Similar to how
>>    in the blockdev world we are explicitly putting the disk backing
>>    chain on the command line, we can do that for data_file too.
> 
> Historically, not being explicit on the command line and letting QEMU
> do the right thing has bitten us, so yes, we have to do it for data_file
> too.
> 
>> Then
>>    like persistent <backingStore> XML the user will have the power
>>    to overwrite the data_file location for an individual VM run.
>>
> 
> If the point of the thin qcow2 layer is to contain the dirty bitmaps for
> incremental backup then running this then you might as well use a
> different metadata_file? Otherwise the metadata won't match the actual
> data.
> 

I'm not sure I follow this part, but maybe that's due to a data_file 
naming mixup

> OTOH, I can imagine throwing away the metadata file and starting over.
> 

Yes, this is one of the main drivers I think: the qcow2 layer gives 
qcow2-native features like dirty bitmaps, but if it ever comes to it, 
the data is still in raw format, which simplifies processing the image 
with other tools. Plus raw is less of a bogeyman than qcow2 for some 
people, so I think there's some marketing opportunity behind it to say 
'see, your data is still there in FOO.raw'.

There are probably cases where the user would want to ditch the top-level 
layer and use the raw data layer directly, but similar to writing to a 
backing image, that invalidates the top layer, and there's no rebase 
operation for data_file AFAICT. But the persistent XML will allow 
configuring that if someone wants it

>>  * Figure out how we expect ovirt/rhev to be using this at runtime.
>>    Possibly taking a running VM using a raw image, doing blockdev-*
>>    magic to pivot it to qcow2+raw data_file, so it can initiate
>>    incremental backup on top of a previously raw only VM?
>>
>>
>> Known issues:
>>  * In the qemu driver, the qcow2 image metadata is only parsed
>>    in -blockdev world if no <backingStore> is specified in the
>>    persistent XML. So basically if there's a <backingStore> listed,
>>    we never parse the qcow2 header and detect the presence of
>>    data_file. Fixable I'm sure but I didn't look into it much yet.
> 
> This will be fixed by introducing an XML element for it.
> 

It's part of the fix I think. We will still need to change qemu_block.c 
logic to accommodate this in some way. Right now, whether we probe the 
qcow2 file metadata depends only on whether <backingStore> is present in 
the persistent XML. But now the probing provides info on both 
backingStore and externalDataStore, so tying probing only to the 
presence of backingStore XML isn't sufficient.

I'm thinking of extending the storage_file.c entry points with options 
like 'skipBackingStore' and 'skipExternalDataStore' or similar, so we 
only probe what we want, and probing is skipped entirely only if both 
backingStore and externalDataStore are in the XML. That's just an idea; 
I'll look into it more next week, and if there's no clear answer I'll 
start a separate thread
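
Roughly the kind of thing I mean, as a sketch (the flag and helper names
below are invented for illustration; they are not the real storage_file.c
API):

    #include <stdbool.h>

    typedef enum {
        /* hypothetical flags saying which parts the qcow2 header
         * probe should fill in */
        STORAGE_PROBE_BACKING_STORE       = 1 << 0,
        STORAGE_PROBE_EXTERNAL_DATA_STORE = 1 << 1,
    } storageProbeFlags;

    static unsigned int
    storageProbeFlagsForDisk(bool haveBackingStoreXML,
                             bool haveExternalDataStoreXML)
    {
        unsigned int flags = 0;

        /* only probe the pieces the persistent XML didn't provide */
        if (!haveBackingStoreXML)
            flags |= STORAGE_PROBE_BACKING_STORE;
        if (!haveExternalDataStoreXML)
            flags |= STORAGE_PROBE_EXTERNAL_DATA_STORE;

        /* flags == 0 means both pieces came from the XML, so the
         * header probe can be skipped entirely */
        return flags;
    }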

Thanks,
Cole

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Han Han 4 years, 6 months ago
Hi Cole,
I merged the crobinso/qcow2-data_file branch onto 37b565c00, reserving
the new capabilities introduced by these two branches to resolve
conflicts. Then I built and tested as follows:
# ./autogen.sh&& ./configure --without-libssh
--build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu
--program-prefix= --disable-dependency-tracking --prefix=/usr
--exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
--datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64
--libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib
--mandir=/usr/share/man --infodir=/usr/share/info --with-qemu
--without-openvz --without-lxc --without-vbox --without-libxl --with-sasl
--with-polkit --with-libvirtd --without-phyp --with-esx --without-hyperv
--without-vmware --without-xenapi --without-vz --without-bhyve
--with-interface --with-network --with-storage-fs --with-storage-lvm
--with-storage-iscsi --with-storage-iscsi-direct --with-storage-scsi
--with-storage-disk --with-storage-mpath --with-storage-rbd
--without-storage-sheepdog --with-storage-gluster --without-storage-zfs
--without-storage-vstorage --with-numactl --with-numad --with-capng
--without-fuse --with-netcf --with-selinux
--with-selinux-mount=/sys/fs/selinux --without-apparmor --without-hal
--with-udev --with-yajl --with-sanlock --with-libpcap --with-macvtap
--with-audit --with-dtrace --with-driver-modules --with-firewalld
--with-firewalld-zone --without-wireshark-dissector --without-pm-utils
--with-nss-plugin '--with-packager=Unknown, 2019-08-19-12:13:01,
lab.rhel8.me' --with-packager-version=1.el8 --with-qemu-user=qemu
--with-qemu-group=qemu --with-tls-priority=@LIBVIRT,SYSTEM --enable-werror
--enable-expensive-tests --with-init-script=systemd --without-login-shell
&& make

Start libvirtd and virtlogd
# LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" src/.libs/libvirtd
# LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" ./src/virtlogd

Then try to list all domains:
# virsh list --all

Libvirtd exits with a segmentation fault:
[1]    30104 segmentation fault (core dumped)  LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" src/.libs/libvirtd

Version:
qemu-4.1

Backtrace:
(gdb) bt
#0  0x00007fbe57a0d1b9 in virDomainVirtioSerialAddrSetAddControllers (def=<optimized out>, def=<optimized out>, addrs=<optimized out>) at conf/domain_addr.c:1656
#1  virDomainVirtioSerialAddrSetCreateFromDomain (def=def@entry=0x7fbde81cc3f0) at conf/domain_addr.c:1753
#2  0x00007fbe0179897e in qemuDomainAssignVirtioSerialAddresses (def=0x7fbde81cc3f0) at qemu/qemu_domain_address.c:3174
#3  qemuDomainAssignAddresses (def=0x7fbde81cc3f0, qemuCaps=0x7fbde81d2210, driver=0x7fbde8126850, obj=0x0, newDomain=<optimized out>) at qemu/qemu_domain_address.c:3174
#4  0x00007fbe57a39e0d in virDomainDefPostParse (def=def@entry=0x7fbde81cc3f0, caps=caps@entry=0x7fbde8154d20, parseFlags=parseFlags@entry=4610, xmlopt=xmlopt@entry=0x7fbde83ce070, parseOpaque=parseOpaque@entry=0x0) at conf/domain_conf.c:5858
#5  0x00007fbe57a525c5 in virDomainDefParseNode (xml=<optimized out>, root=0x7fbde83c5ff0, caps=0x7fbde8154d20, xmlopt=0x7fbde83ce070, parseOpaque=0x0, flags=4610) at conf/domain_conf.c:21677
#6  0x00007fbe57a526c8 in virDomainDefParse (xmlStr=xmlStr@entry=0x0, filename=<optimized out>, caps=caps@entry=0x7fbde8154d20, xmlopt=xmlopt@entry=0x7fbde83ce070, parseOpaque=parseOpaque@entry=0x0, flags=flags@entry=4610) at conf/domain_conf.c:21628
#7  0x00007fbe57a528f6 in virDomainDefParseFile (filename=<optimized out>, caps=caps@entry=0x7fbde8154d20, xmlopt=xmlopt@entry=0x7fbde83ce070, parseOpaque=parseOpaque@entry=0x0, flags=flags@entry=4610) at conf/domain_conf.c:21653
#8  0x00007fbe57a5e16a in virDomainObjListLoadConfig (opaque=0x0, notify=0x0, name=0x7fbde81d7ff3 "pc", autostartDir=0x7fbde8124070 "/etc/libvirt/qemu/autostart", configDir=0x7fbde8124050 "/etc/libvirt/qemu", xmlopt=0x7fbde83ce070, caps=0x7fbde8154d20, doms=0x7fbde8126940) at conf/virdomainobjlist.c:503
#9  virDomainObjListLoadAllConfigs (doms=0x7fbde8126940, configDir=0x7fbde8124050 "/etc/libvirt/qemu", autostartDir=0x7fbde8124070 "/etc/libvirt/qemu/autostart", liveStatus=liveStatus@entry=false, caps=0x7fbde8154d20, xmlopt=0x7fbde83ce070, notify=0x0, opaque=0x0) at conf/virdomainobjlist.c:625
#10 0x00007fbe017f57e2 in qemuStateInitialize (privileged=true, callback=<optimized out>, opaque=<optimized out>) at qemu/qemu_driver.c:1007
#11 0x00007fbe57b8033d in virStateInitialize (privileged=true, mandatory=mandatory@entry=false, callback=callback@entry=0x55dfb702ecc0 <daemonInhibitCallback>, opaque=opaque@entry=0x55dfb8869d60) at libvirt.c:666
#12 0x000055dfb702ed1d in daemonRunStateInit (opaque=0x55dfb8869d60) at remote/remote_daemon.c:846
#13 0x00007fbe579f4be2 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
#14 0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
#15 0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Could you please check this issue?
The full backtrace of all threads is attached.
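To inspect the dumped core yourself, something like this should work,
assuming systemd-coredump is collecting cores on the host:

# coredumpctl gdb libvirtd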

On Tue, Oct 8, 2019 at 5:49 AM Cole Robinson <crobinso@redhat.com> wrote:

> This series is the first steps to teaching libvirt about qcow2
> data_file support, aka external data files or qcow2 external metadata.
>
> A bit about the feature: it was added in qemu 4.0. It essentially
> creates a two part image file: a qcow2 layer that just tracks the
> image metadata, and a separate data file which is stores the VM
> disk contents. AFAICT the driving use case is to keep a fully coherent
> raw disk image on disk, and only use qcow2 as an intermediate metadata
> layer when necessary, for things like incremental backup support.
>
> The original qemu patch posting is here:
> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg07496.html
>
> For testing, you can create a new qcow2+raw data_file image from an
> existing image, like:
>
>     qemu-img convert -O qcow2 \
>         -o data_file=NEW.raw,data_file_raw=yes
>         EXISTING.raw NEW.qcow2
>
> The goal of this series is to teach libvirt enough about this case
> so that we can correctly relabel the data_file on VM startup/shutdown.
> The main functional changes are
>
>   * Teach storagefile how to parse out data_file from the qcow2 header
>   * Store the raw string as virStorageSource->externalDataStoreRaw
>   * Track that as its out virStorageSource in externalDataStore
>   * dac/selinux relabel externalDataStore as needed
>
> >From libvirt's perspective, externalDataStore is conceptually pretty
> close to a backingStore, but the main difference is its read/write
> permissions should match its parent image, rather than being readonly
> like backingStore.
>
> This series has only been tested on top of the -blockdev enablement
> series, but I don't think it actually interacts with that work at
> the moment.
>
>
> Future work:
>   * Exposing this in the runtime XML. We need to figure out an XML
>     schema. It will reuse virStorageSource obviously, but the main
>     thing to figure out is probably 1) what the top element name
>     should be ('dataFile' maybe?), 2) where it sits in the XML
>     hierarchy (under <disk> or under <source> I guess)
>
>   * Exposing this on the qemu -blockdev command line. Similar to how
>     in the blockdev world we are explicitly putting the disk backing
>     chain on the command line, we can do that for data_file too. Then
>     like persistent <backingStore> XML the user will have the power
>     to overwrite the data_file location for an individual VM run.
>
>   * Figure out how we expect ovirt/rhev to be using this at runtime.
>     Possibly taking a running VM using a raw image, doing blockdev-*
>     magic to pivot it to qcow2+raw data_file, so it can initiate
>     incremental backup on top of a previously raw only VM?
>
>
> Known issues:
>   * In the qemu driver, the qcow2 image metadata is only parsed
>     in -blockdev world if no <backingStore> is specified in the
>     persistent XML. So basically if there's a <backingStore> listed,
>     we never parse the qcow2 header and detect the presence of
>     data_file. Fixable I'm sure but I didn't look into it much yet.
>
> Most of this is cleanups and refactorings to simplify the actual
> functional changes.
>
> Cole Robinson (30):
>   storagefile: Make GetMetadataInternal static
>   storagefile: qcow1: Check for BACKING_STORE_OK
>   storagefile: qcow1: Fix check for empty backing file
>   storagefile: qcow1: Let qcowXGetBackingStore fill in format
>   storagefile: Check version to determine if qcow2 or not
>   storagefile: Drop now unused isQCow2 argument
>   storagefile: Use qcowXGetBackingStore directly
>   storagefile: Push 'start' into qcow2GetBackingStoreFormat
>   storagefile: Push extension_end calc to qcow2GetBackingStoreFormat
>   storagefile: Rename qcow2GetBackingStoreFormat
>   storagefile: Rename qcow2GetExtensions 'format' argument
>   storagefile: Fix backing format \0 check
>   storagefile: Add externalDataStoreRaw member
>   storagefile: Parse qcow2 external data file
>   storagefile: Fill in meta->externalDataStoreRaw
>   storagefile: Don't access backingStoreRaw directly in
>     FromBackingRelative
>   storagefile: Split out virStorageSourceNewFromChild
>   storagefile: Add externalDataStore member
>   storagefile: Fill in meta->externalDataStore
>   security: dac: Drop !parent handling in SetImageLabelInternal
>   security: dac: Add is_toplevel to SetImageLabelInternal
>   security: dac: Restore image label for externalDataStore
>   security: dac: break out SetImageLabelRelative
>   security: dac: Label externalDataStore
>   security: selinux: Simplify SetImageLabelInternal
>   security: selinux: Drop !parent handling in SetImageLabelInternal
>   security: selinux: Add is_toplevel to SetImageLabelInternal
>   security: selinux: Restore image label for externalDataStore
>   security: selinux: break out SetImageLabelRelative
>   security: selinux: Label externalDataStore
>
>  src/libvirt_private.syms        |   1 -
>  src/security/security_dac.c     |  63 +++++--
>  src/security/security_selinux.c |  97 +++++++----
>  src/util/virstoragefile.c       | 281 ++++++++++++++++++++------------
>  src/util/virstoragefile.h       |  11 +-
>  5 files changed, 290 insertions(+), 163 deletions(-)
>
> --
> 2.23.0
>
> --
> libvir-list mailing list
> libvir-list@redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list
>


-- 
Best regards,
-----------------------------------
Han Han
Quality Engineer
Redhat.

Email: hhan@redhat.com
Phone: +861065339333

Thread 18 (Thread 0x7fbe38ff9700 (LWP 30116)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb886a2a0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb886a1b0, cond=0x55dfb886a278) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbe38ff8990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 6, cond = 0x55dfb886a278, mutex = 0x55dfb886a1b0, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 6
        seq = 3
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb886a278, mutex=mutex@entry=0x55dfb886a1b0) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb886a278, m=m@entry=0x55dfb886a1b0) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5834 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb8852830) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb886a170
        cond = 0x55dfb886a278
        priority = true
        curWorkers = 0x55dfb886a268
        maxLimit = 0x55dfb886a260
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb8852830}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140454976788224, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140454976785280, 7135777789718557002, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 17 (Thread 0x7fbe397fa700 (LWP 30115)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb886a2a0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb886a1b0, cond=0x55dfb886a278) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbe397f9990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 4, cond = 0x55dfb886a278, mutex = 0x55dfb886a1b0, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 4
        seq = 2
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb886a278, mutex=mutex@entry=0x55dfb886a1b0) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb886a278, m=m@entry=0x55dfb886a1b0) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5834 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb88528f0) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb886a170
        cond = 0x55dfb886a278
        priority = true
        curWorkers = 0x55dfb886a268
        maxLimit = 0x55dfb886a260
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb88528f0}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140454985180928, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140454985177984, 7135781088790311242, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 16 (Thread 0x7fbe016ff700 (LWP 30118)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb88cefe0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb88cef90, cond=0x55dfb88cefb8) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbe016fe990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 0, cond = 0x55dfb88cefb8, mutex = 0x55dfb88cef90, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 0
        seq = 0
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb88cefb8, mutex=mutex@entry=0x55dfb88cef90) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb88cefb8, m=m@entry=0x55dfb88cef90) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5883 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb88cf090) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb88cef50
        cond = 0x55dfb88cefb8
        priority = false
        curWorkers = 0x55dfb88cf030
        maxLimit = 0x55dfb88cf018
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb88cf090}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140454044628736, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140454044625792, 7135693261540946250, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 15 (Thread 0x7fbe00efe700 (LWP 30119)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb88cefe0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb88cef90, cond=0x55dfb88cefb8) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbe00efd990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 2, cond = 0x55dfb88cefb8, mutex = 0x55dfb88cef90, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 2
        seq = 1
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb88cefb8, mutex=mutex@entry=0x55dfb88cef90) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb88cefb8, m=m@entry=0x55dfb88cef90) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5883 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb88b11b0) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb88cef50
        cond = 0x55dfb88cefb8
        priority = false
        curWorkers = 0x55dfb88cf030
        maxLimit = 0x55dfb88cf018
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb88b11b0}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140454036236032, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140454036233088, 7135689964616675658, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 14 (Thread 0x7fbdfaffd700 (LWP 30122)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb88cefe0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb88cef90, cond=0x55dfb88cefb8) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbdfaffc990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 8, cond = 0x55dfb88cefb8, mutex = 0x55dfb88cef90, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 8
        seq = 4
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb88cefb8, mutex=mutex@entry=0x55dfb88cef90) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb88cefb8, m=m@entry=0x55dfb88cef90) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5883 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb8886fb0) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb88cef50
        cond = 0x55dfb88cefb8
        priority = false
        curWorkers = 0x55dfb88cf030
        maxLimit = 0x55dfb88cf018
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb8886fb0}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140453936617216, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140453936614272, 7133952598268965194, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 13 (Thread 0x7fbdf91d9700 (LWP 30174)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x7fbde80fbd98) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x7fbde80fbd30, cond=0x7fbde80fbd70) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbdf91d89a0, __canceltype = 1365485888, __prev = 0x0}
        cbuffer = {wseq = 8, cond = 0x7fbde80fbd70, mutex = 0x7fbde80fbd30, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 8
        seq = 4
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x7fbde80fbd70, mutex=mutex@entry=0x7fbde80fbd30) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x7fbde80fbd70, m=m@entry=0x7fbde80fbd30) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe02143e94 in udevEventHandleThread (opaque=<optimized out>) at node_device/node_device_udev.c:1590
        priv = 0x7fbde80fbd20
        device = <optimized out>
        __FUNCTION__ = "udevEventHandleThread"
#5  0x00007fbe579f4be2 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe02143e40 <udevEventHandleThread>, funcName = 0x7fbe02158229 "udevEventHandleThread", worker = false, opaque = 0x0}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140453905012480, -7171118289940385462, 140453928220782, 140453928220783, 140453928220960, 140453905009536, 7133952308358672714, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 12 (Thread 0x7fbdfb7fe700 (LWP 30121)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb88cefe0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb88cef90, cond=0x55dfb88cefb8) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbdfb7fd990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 6, cond = 0x55dfb88cefb8, mutex = 0x55dfb88cef90, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 6
        seq = 3
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb88cefb8, mutex=mutex@entry=0x55dfb88cef90) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb88cefb8, m=m@entry=0x55dfb88cef90) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5883 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb88b3510) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb88cef50
        cond = 0x55dfb88cefb8
        priority = false
        curWorkers = 0x55dfb88cf030
        maxLimit = 0x55dfb88cf018
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb88b3510}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140453945009920, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140453945006976, 7133955897340719434, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 11 (Thread 0x7fbdfbfff700 (LWP 30120)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb88cefe0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb88cef90, cond=0x55dfb88cefb8) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbdfbffe990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 4, cond = 0x55dfb88cefb8, mutex = 0x55dfb88cef90, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 4
        seq = 2
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb88cefb8, mutex=mutex@entry=0x55dfb88cef90) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb88cefb8, m=m@entry=0x55dfb88cef90) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5883 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb88a3cf0) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb88cef50
        cond = 0x55dfb88cefb8
        priority = false
        curWorkers = 0x55dfb88cf030
        maxLimit = 0x55dfb88cf018
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb88a3cf0}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140453953402624, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140453953399680, 7133954796218478922, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 10 (Thread 0x7fbe39ffb700 (LWP 30114)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb886a2a0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb886a1b0, cond=0x55dfb886a278) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbe39ffa990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 2, cond = 0x55dfb886a278, mutex = 0x55dfb886a1b0, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 2
        seq = 1
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb886a278, mutex=mutex@entry=0x55dfb886a1b0) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb886a278, m=m@entry=0x55dfb886a1b0) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5834 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb8852470) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb886a170
        cond = 0x55dfb886a278
        priority = true
        curWorkers = 0x55dfb886a268
        maxLimit = 0x55dfb886a260
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb8852470}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140454993573632, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140454993570688, 7135779987668070730, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 9 (Thread 0x7fbe27fff700 (LWP 30117)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb886a2a0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb886a1b0, cond=0x55dfb886a278) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbe27ffe990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 8, cond = 0x55dfb886a278, mutex = 0x55dfb886a1b0, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 8
        seq = 4
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb886a278, mutex=mutex@entry=0x55dfb886a1b0) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb886a278, m=m@entry=0x55dfb886a1b0) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5834 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb8852770) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb886a170
        cond = 0x55dfb886a278
        priority = true
        curWorkers = 0x55dfb886a268
        maxLimit = 0x55dfb886a260
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb8852770}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140454691600128, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140454691597184, 7135775587474075978, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

Thread 8 (Thread 0x7fbe58319b80 (LWP 30104)):
#0  0x00007fbe55758211 in __GI___poll (fds=0x55dfb88d5810, nfds=nfds@entry=6, timeout=timeout@entry=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
        resultvar = 18446744073709551100
        sc_cancel_oldtype = 0
        sc_ret = <optimized out>
#1  0x00007fbe57998aab in poll (__timeout=-1, __nfds=6, __fds=<optimized out>) at /usr/include/bits/poll2.h:46
No locals.
#2  virEventPollRunOnce () at util/vireventpoll.c:636
        fds = 0x55dfb88d5810
        ret = <optimized out>
        timeout = <optimized out>
        nfds = 6
        __func__ = "virEventPollRunOnce"
        __FUNCTION__ = "virEventPollRunOnce"
#3  0x00007fbe57997661 in virEventRunDefaultImpl () at util/virevent.c:322
        __func__ = "virEventRunDefaultImpl"
#4  0x00007fbe57acf71d in virNetDaemonRun (dmn=dmn@entry=0x55dfb8869d60) at rpc/virnetdaemon.c:836
        timerid = -1
        timerActive = false
        __FUNCTION__ = "virNetDaemonRun"
        __func__ = "virNetDaemonRun"
#5  0x000055dfb702d312 in main (argc=<optimized out>, argv=<optimized out>) at remote/remote_daemon.c:1448
        dmn = 0x55dfb8869d60
        srv = <optimized out>
        srvAdm = <optimized out>
        adminProgram = <optimized out>
        lxcProgram = <optimized out>
        remote_config_file = 0x55dfb8867620 "/etc/libvirt/libvirtd.conf"
        statuswrite = -1
        ret = 1
        pid_file_fd = 3
        pid_file = 0x55dfb8856270 "/var/run/libvirtd.pid"
        sock_file = 0x55dfb8869910 "/var/run/libvirt/libvirt-sock"
        sock_file_ro = 0x55dfb8860ec0 "/var/run/libvirt/libvirt-sock-ro"
        sock_file_adm = 0x55dfb8861840 "/var/run/libvirt/libvirt-admin-sock"
        timeout = -1
        verbose = 0
        godaemon = 0
        ipsock = 0
        config = 0x55dfb8866200
        privileged = <optimized out>
        implicit_conf = <optimized out>
        run_dir = 0x55dfb88529b0 "/var/run/libvirt"
        old_umask = <optimized out>
        opts = {{name = 0x55dfb70592ae "verbose", has_arg = 0, flag = 0x7fffb190c72c, val = 118}, {name = 0x55dfb70592ce "daemon", has_arg = 0, flag = 0x7fffb190c730, val = 100}, {name = 0x55dfb70592dc "listen", has_arg = 0, flag = 0x7fffb190c734, val = 108}, {name = 0x55dfb70595f8 "config", has_arg = 1, flag = 0x0, val = 102}, {name = 0x55dfb7059641 "timeout", has_arg = 1, flag = 0x0, val = 116}, {name = 0x55dfb70595ff "pid-file", has_arg = 1, flag = 0x0, val = 112}, {name = 0x55dfb7059360 "version", has_arg = 0, flag = 0x0, val = 86}, {name = 0x55dfb7059274 "help", has_arg = 0, flag = 0x0, val = 104}, {name = 0x0, has_arg = 0, flag = 0x0, val = 0}}
        __func__ = "main"

Thread 7 (Thread 0x7fbe3affd700 (LWP 30112)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55dfb886a200) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
        __ret = -512
        oldtype = 0
        err = <optimized out>
        oldtype = <optimized out>
        err = <optimized out>
        __ret = <optimized out>
        resultvar = <optimized out>
        __arg4 = <optimized out>
        __arg3 = <optimized out>
        __arg2 = <optimized out>
        __arg1 = <optimized out>
        _a4 = <optimized out>
        _a3 = <optimized out>
        _a2 = <optimized out>
        _a1 = <optimized out>
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55dfb886a1b0, cond=0x55dfb886a1d8) at pthread_cond_wait.c:502
        spin = 0
        buffer = {__routine = 0x7fbe55a381f0 <__condvar_cleanup_waiting>, __arg = 0x7fbe3affc990, __canceltype = 7, __prev = 0x0}
        cbuffer = {wseq = 8, cond = 0x55dfb886a1d8, mutex = 0x55dfb886a1b0, private = 0}
        rt = <optimized out>
        err = <optimized out>
        g = 0
        flags = <optimized out>
        g1_start = <optimized out>
        signals = <optimized out>
        result = 0
        wseq = 8
        seq = 4
        private = 0
        maxspin = <optimized out>
        err = <optimized out>
        result = <optimized out>
        wseq = <optimized out>
        g = <optimized out>
        seq = <optimized out>
        flags = <optimized out>
        private = <optimized out>
        signals = <optimized out>
        g1_start = <optimized out>
        spin = <optimized out>
        buffer = <optimized out>
        cbuffer = <optimized out>
        rt = <optimized out>
        s = <optimized out>
#2  __pthread_cond_wait (cond=cond@entry=0x55dfb886a1d8, mutex=mutex@entry=0x55dfb886a1b0) at pthread_cond_wait.c:655
No locals.
#3  0x00007fbe579f4df6 in virCondWait (c=c@entry=0x55dfb886a1d8, m=m@entry=0x55dfb886a1b0) at util/virthread.c:144
        ret = <optimized out>
#4  0x00007fbe579f5883 in virThreadPoolWorker (opaque=opaque@entry=0x55dfb8851470) at util/virthreadpool.c:120
        data = 0x0
        pool = 0x55dfb886a170
        cond = 0x55dfb886a1d8
        priority = false
        curWorkers = 0x55dfb886a250
        maxLimit = 0x55dfb886a238
        job = 0x0
#5  0x00007fbe579f4bb8 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x7fbe579f56a0 <virThreadPoolWorker>, funcName = 0x7fbe57c28b3b "virNetServerHandleJob", worker = true, opaque = 0x55dfb8851470}
#6  0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140455010359040, -7171118289940385462, 140736172442718, 140736172442719, 140736172442896, 140455010356096, 7135782185617584458, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.

[Threads 6-2 trimmed: Thread 6 (LWP 30113), Thread 5 (LWP 30110),
Thread 4 (LWP 30111), Thread 3 (LWP 30109) and Thread 2 (LWP 30108) are
all idle virThreadPoolWorker threads parked in virCondWait ->
__pthread_cond_wait, with backtraces identical to the worker thread
above except for addresses; Thread 6 waits on the pool's priority-job
condition, the others on the normal-job condition.]

Thread 1 (Thread 0x7fbdfa7fc700 (LWP 30123)):
#0  0x00007fbe57a0d1b9 in virDomainVirtioSerialAddrSetAddControllers (def=<optimized out>, def=<optimized out>, addrs=<optimized out>) at conf/domain_addr.c:1656
        i = 0
        i = <optimized out>
#1  virDomainVirtioSerialAddrSetCreateFromDomain (def=def@entry=0x7fbde81cc3f0) at conf/domain_addr.c:1753
        addrs = 0x7fbde81e3790
        ret = 0x0
#2  0x00007fbe0179897e in qemuDomainAssignVirtioSerialAddresses (def=0x7fbde81cc3f0) at qemu/qemu_domain_address.c:3174
        ret = -1
        i = <optimized out>
        addrs = 0x0
        ret = <optimized out>
        i = <optimized out>
        addrs = <optimized out>
        __func__ = "qemuDomainAssignVirtioSerialAddresses"
        chr = <optimized out>
        chr = <optimized out>
#3  qemuDomainAssignAddresses (def=0x7fbde81cc3f0, qemuCaps=0x7fbde81d2210, driver=0x7fbde8126850, obj=0x0, newDomain=<optimized out>) at qemu/qemu_domain_address.c:3174
No locals.
#4  0x00007fbe57a39e0d in virDomainDefPostParse (def=def@entry=0x7fbde81cc3f0, caps=caps@entry=0x7fbde8154d20, parseFlags=parseFlags@entry=4610, xmlopt=xmlopt@entry=0x7fbde83ce070, parseOpaque=parseOpaque@entry=0x0) at conf/domain_conf.c:5858
        ret = 0
        localParseOpaque = true
        data = {caps = 0x7fbde8154d20, xmlopt = 0x7fbde83ce070, parseOpaque = 0x7fbde81d2210, parseFlags = 4610}
#5  0x00007fbe57a525c5 in virDomainDefParseNode (xml=<optimized out>, root=0x7fbde83c5ff0, caps=0x7fbde8154d20, xmlopt=0x7fbde83ce070, parseOpaque=0x0, flags=4610) at conf/domain_conf.c:21677
        ctxt = 0x7fbde83ba390
        def = 0x7fbde81cc3f0
        virTemporaryReturnPointer = <optimized out>
#6  0x00007fbe57a526c8 in virDomainDefParse (xmlStr=xmlStr@entry=0x0, filename=<optimized out>, caps=caps@entry=0x7fbde8154d20, xmlopt=xmlopt@entry=0x7fbde83ce070, parseOpaque=parseOpaque@entry=0x0, flags=flags@entry=4610) at conf/domain_conf.c:21628
        xml = 0x7fbde83b1b30
        def = 0x0
        keepBlanksDefault = 1
        root = 0x7fbde83c5ff0
        __FUNCTION__ = "virDomainDefParse"
#7  0x00007fbe57a528f6 in virDomainDefParseFile (filename=<optimized out>, caps=caps@entry=0x7fbde8154d20, xmlopt=xmlopt@entry=0x7fbde83ce070, parseOpaque=parseOpaque@entry=0x0, flags=flags@entry=4610) at conf/domain_conf.c:21653
No locals.
#8  0x00007fbe57a5e16a in virDomainObjListLoadConfig (opaque=0x0, notify=0x0, name=0x7fbde81d7ff3 "pc", autostartDir=0x7fbde8124070 "/etc/libvirt/qemu/autostart", configDir=0x7fbde8124050 "/etc/libvirt/qemu", xmlopt=0x7fbde83ce070, caps=0x7fbde8154d20, doms=0x7fbde8126940) at conf/virdomainobjlist.c:503
        configFile = 0x7fbde81d7f30 "/etc/libvirt/qemu/pc.xml"
        autostartLink = 0x0
        def = 0x0
        autostart = <optimized out>
        oldDef = 0x0
        dom = <optimized out>
        configFile = <optimized out>
        autostartLink = <optimized out>
        def = <optimized out>
        dom = <optimized out>
        autostart = <optimized out>
        oldDef = <optimized out>
#9  virDomainObjListLoadAllConfigs (doms=0x7fbde8126940, configDir=0x7fbde8124050 "/etc/libvirt/qemu", autostartDir=0x7fbde8124070 "/etc/libvirt/qemu/autostart", liveStatus=liveStatus@entry=false, caps=0x7fbde8154d20, xmlopt=0x7fbde83ce070, notify=0x0, opaque=0x0) at conf/virdomainobjlist.c:625
        dom = 0x0
        dir = 0x7fbde81d7f60
        entry = 0x7fbde81d7fe0
        ret = <optimized out>
        rc = <optimized out>
        __func__ = "virDomainObjListLoadAllConfigs"
#10 0x00007fbe017f57e2 in qemuStateInitialize (privileged=true, callback=<optimized out>, opaque=<optimized out>) at qemu/qemu_driver.c:1007
        driverConf = 0x0
        cfg = 0x7fbde8123830
        run_uid = <optimized out>
        run_gid = <optimized out>
        hugepagePath = 0x0
        memoryBackingPath = 0x0
        autostart = true
        i = 1
        __FUNCTION__ = "qemuStateInitialize"
#11 0x00007fbe57b8033d in virStateInitialize (privileged=true, mandatory=mandatory@entry=false, callback=callback@entry=0x55dfb702ecc0 <daemonInhibitCallback>, opaque=opaque@entry=0x55dfb8869d60) at libvirt.c:666
        ret = <optimized out>
        i = 7
        __func__ = "virStateInitialize"
#12 0x000055dfb702ed1d in daemonRunStateInit (opaque=0x55dfb8869d60) at remote/remote_daemon.c:846
        dmn = 0x55dfb8869d60
        sysident = 0x7fbde8000b70
        __func__ = "daemonRunStateInit"
#13 0x00007fbe579f4be2 in virThreadHelper (data=<optimized out>) at util/virthread.c:196
        args = 0x0
        local = {func = 0x55dfb702ece0 <daemonRunStateInit>, funcName = 0x55dfb7059955 "daemonRunStateInit", worker = false, opaque = 0x55dfb8869d60}
#14 0x00007fbe55a322de in start_thread (arg=<optimized out>) at pthread_create.c:486
        ret = <optimized out>
        pd = <optimized out>
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140453928224512, -7171118289940385462, 140736172443038, 140736172443039, 140736172443216, 140453928221568, 7133953699391205706, 7135877393514885450}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#15 0x00007fbe55763133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
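
(For reference, a per-thread dump like the one above can be regenerated
from the core file with a single batch-mode gdb command, for example:

    gdb --batch -ex 'thread apply all bt full' src/.libs/libvirtd coredump

The binary and core file names here are assumptions that match the gdb
script attached later in the thread.)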
Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Cole Robinson 4 years, 6 months ago
On 10/10/19 11:25 PM, Han Han wrote:
> Hi Cole,
> I merged the crobinso/qcow2-data_file branch onto 37b565c00, reserving
> the new capabilities introduced by these two branches to resolve
> conflicts. Then I built and tested as follows:
> # ./autogen.sh&& ./configure --without-libssh 
> --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu 
> --program-prefix= --disable-dependency-tracking --prefix=/usr 
> --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin 
> --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include 
> --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var 
> --sharedstatedir=/var/lib --mandir=/usr/share/man 
> --infodir=/usr/share/info --with-qemu --without-openvz --without-lxc 
> --without-vbox --without-libxl --with-sasl --with-polkit --with-libvirtd 
> --without-phyp --with-esx --without-hyperv --without-vmware 
> --without-xenapi --without-vz --without-bhyve --with-interface 
> --with-network --with-storage-fs --with-storage-lvm --with-storage-iscsi 
> --with-storage-iscsi-direct --with-storage-scsi --with-storage-disk 
> --with-storage-mpath --with-storage-rbd --without-storage-sheepdog 
> --with-storage-gluster --without-storage-zfs --without-storage-vstorage 
> --with-numactl --with-numad --with-capng --without-fuse --with-netcf 
> --with-selinux --with-selinux-mount=/sys/fs/selinux --without-apparmor 
> --without-hal --with-udev --with-yajl --with-sanlock --with-libpcap 
> --with-macvtap --with-audit --with-dtrace --with-driver-modules 
> --with-firewalld --with-firewalld-zone --without-wireshark-dissector 
> --without-pm-utils --with-nss-plugin '--with-packager=Unknown, 
> 2019-08-19-12:13:01, lab.rhel8.me' 
> --with-packager-version=1.el8 --with-qemu-user=qemu 
> --with-qemu-group=qemu --with-tls-priority=@LIBVIRT,SYSTEM 
> --enable-werror --enable-expensive-tests --with-init-script=systemd 
> --without-login-shell && make
> 
> Start libvirtd and virtlogd
> # LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" src/.libs/libvirtd
> # LD_PRELOAD="$(find src -name '*.so.*'|tr '\n' ' ')" ./src/virtlogd
> 
> Then try to list all domains:
> # virsh list --all
> 
> Libvirtd exits with a segmentation fault:
> [1]    30104 segmentation fault (core dumped)  LD_PRELOAD="$(find src 
> -name '*.so.*'|tr '\n' ' ')" src/.libs/libvirtd
> 
> Version:
> qemu-4.1
> 
> Backtrace:
> (gdb) bt
> [15-frame backtrace trimmed; it is identical to Thread 1 in the full
> thread dump above]
> 
> Could you please check this issue?
> The full threads backtrace is in the attachment
> 


Thanks for checking. I'm struggling to see how that backtrace could be 
related to this series, or even determine what the issue is. Can you 
confirm that the master branch doesn't have the issue? If it doesn't, can 
you bisect the series to figure out where the offending patch is?

Thanks,
Cole

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Han Han 4 years, 6 months ago
On Sat, Oct 12, 2019 at 1:05 AM Cole Robinson <crobinso@redhat.com> wrote:

> On 10/10/19 11:25 PM, Han Han wrote:
> > Hi Cole,
> > [build configuration, reproduction steps, and backtrace trimmed; see
> > the full report quoted earlier in the thread]

Hello, git bisect shows this as the first bad commit:
192229f3a76ccc1b98a2c9e24f1feb0465b87a0b is the first bad commit
commit 192229f3a76ccc1b98a2c9e24f1feb0465b87a0b
Author: Cole Robinson <crobinso@redhat.com>
Date:   Fri Oct 4 19:57:55 2019 -0400

    storagefile: Push extension_end calc to qcow2GetBackingStoreFormat

    This is a step towards making this qcow2GetBackingStoreFormat into
    a generic qcow2 extensions parser

    Signed-off-by: Cole Robinson <crobinso@redhat.com>


Steps:
1. Merge the crobinso/qcow2-data_file branch onto 37b565c00.
2. Copy .gdbinit to the libvirt source dir and adjust the argument values
in check-segv.sh (a reconstructed sketch of such a script is below).
3. Set v5.8.0 as the starting point of the bisect, then run it:
# git bisect start HEAD v5.8.0
# git bisect run /tmp/check-segv.sh
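
The check-segv.sh itself is not included in the thread, so the following
is only a reconstruction: a minimal bisect-run script with hypothetical
timing, reusing the LD_PRELOAD startup from the reproduction steps.
git bisect run treats exit 0 as good, 1-124 as bad, and 125 as "cannot
test this commit":

    #!/bin/sh
    # Reconstructed sketch of check-segv.sh -- not the original script.

    # Build the commit under test; treat build failures as untestable.
    make -j"$(nproc)" >/dev/null 2>&1 || exit 125

    # Start the freshly built daemon against the just-built libraries.
    LD_PRELOAD="$(find src -name '*.so.*' | tr '\n' ' ')" \
        src/.libs/libvirtd &
    pid=$!

    # The reported crash happens during driver initialization, so a
    # short wait is likely enough for it to trigger; 5 seconds is a guess.
    sleep 5

    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"   # daemon survived startup: good commit
        exit 0
    fi
    exit 1            # daemon died during startup: bad commit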


> Thanks for checking. I'm struggling to see how that backtrace could be
> related to this series, or even determine what the issue is. Can you
> confirm that the master branch doesn't have the issue? If it doesn't, can
> you bisect the series to figure out where the offending patch is?
>
> Thanks,
> Cole
>


-- 
Best regards,
-----------------------------------
Han Han
Quality Engineer
Red Hat.

Email: hhan@redhat.com
Phone: +861065339333
file src/.libs/libvirtd
symbol-file src/.libs/libvirtd
core-file coredump
bt
quit
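
(The five lines above are the gdb command script attached to the report;
it loads the libvirtd binary, points gdb at the core dump, and prints the
backtrace. Assuming the script is saved as 'gdbinit' in the build tree,
a batch-mode run would be:

    gdb --batch -x gdbinit

gdb exits by itself afterwards because of the trailing 'quit'.)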
Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Cole Robinson 4 years, 6 months ago
On 10/12/19 11:07 AM, Han Han wrote:
> 
> 
> On Sat, Oct 12, 2019 at 1:05 AM Cole Robinson <crobinso@redhat.com> wrote:
> 
>     On 10/10/19 11:25 PM, Han Han wrote:
>     > Hi Cole,
>     > [original report and backtrace trimmed; see earlier in the thread]
> 
> Hello, git bisect shows this as the first bad commit:
> 192229f3a76ccc1b98a2c9e24f1feb0465b87a0b is the first bad commit
> commit 192229f3a76ccc1b98a2c9e24f1feb0465b87a0b
> Author: Cole Robinson <crobinso@redhat.com>
> Date:   Fri Oct 4 19:57:55 2019 -0400
> 
>     storagefile: Push extension_end calc to qcow2GetBackingStoreFormat
> 
>     This is a step towards making this qcow2GetBackingStoreFormat into
>     a generic qcow2 extensions parser
> 
> Signed-off-by: Cole Robinson <crobinso@redhat.com>
> 
> 
> Steps:
> 1. Merge the crobinso/qcow2-data_file branch onto 37b565c00.
> 2. Copy .gdbinit to the libvirt source dir and adjust the argument
> values in check-segv.sh.
> 3. Set v5.8.0 as the starting point of the bisect, then run it:
> # git bisect start HEAD v5.8.0
> # git bisect run /tmp/check-segv.sh
> 

I'm still quite confused. Maybe something I'm missing in one of these
commits is causing memory corruption that is manifesting elsewhere?

Can you provide full LIBVIRT_DEBUG=1 output when starting libvirtd? You
can use git master now because the patches have been pushed. I suggest
hosting the output somewhere rather than attaching it here, because it
will probably be large.
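
(A hypothetical way to capture that, reusing the LD_PRELOAD startup from
the reproduction steps; the log file name is arbitrary:

    LIBVIRT_DEBUG=1 \
        LD_PRELOAD="$(find src -name '*.so.*' | tr '\n' ' ')" \
        src/.libs/libvirtd 2>/tmp/libvirtd-debug.log

LIBVIRT_DEBUG=1 sends the debug stream to stderr, hence the redirect.)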

Also, if you can pinpoint which VM XML is being parsed when this
crashes, and post it too, it might help.

Thanks,
Cole

Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Han Han 4 years, 6 months ago
I find the issue cannot be reproduced if I run `make clean` before
building the source.
It is not proper to build in an unclean source dir, right?

On Tue, Oct 15, 2019 at 3:55 AM Cole Robinson <crobinso@redhat.com> wrote:

> On 10/12/19 11:07 AM, Han Han wrote:
> > [earlier report, backtrace, and bisect details trimmed; see above]
>
> I'm still quite confused. Maybe something I'm missing in one of these
> commits is causing memory corruption that is manifesting elsewhere?
>
> Can you provide full LIBVIRT_DEBUG=1 output when starting libvirtd? You
> can use git master now because the patches have been pushed. I suggest
> hosting the output somewhere rather than attaching it here, because it
> will probably be large.
>
> Also, if you can pinpoint which VM XML is being parsed when this
> crashes, and post it too, it might help.
>
> Thanks,
> Cole
>


-- 
Best regards,
-----------------------------------
Han Han
Quality Engineer
Red Hat.

Email: hhan@redhat.com
Phone: +861065339333
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 00/30] storagefile, security: qcow2 data_file support
Posted by Cole Robinson 4 years, 6 months ago
On 10/15/19 12:29 AM, Han Han wrote:
> I find the issue cannot be reproduced if I run `make clean` before
> building the source.
> It is not proper to build in an unclean source dir, right?
> 

I don't use 'make clean' in libvirt.git, but in other projects I have hit
issues that required it; in qemu, for example, the build system often gets
confused when running old commits. I dip in and out of libvirt development
though, so maybe regular devs have hit similar issues.
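
For what it's worth, the reliable ways to get back to a pristine tree are
the standard git/make ones, nothing libvirt-specific:

    # Rebuild after throwing away objects, keeping configure output:
    make clean && make

    # Or start truly from scratch; '-x' also deletes ignored files,
    # so stash anything you care about first:
    git clean -xdf
    ./autogen.sh && make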

- Cole
