[Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support

Marc-André Lureau posted 8 patches 6 years, 9 months ago
git fetch https://github.com/patchew-project/qemu tags/patchew/20170714182012.4595-1-marcandre.lureau@redhat.com
[Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Marc-André Lureau 6 years, 9 months ago
Recent Linux kernels enable KASLR to randomize phys/virt memory
addresses. This series aims to provide enough information in QEMU
dumps so that the crash utility can work with a randomized kernel too
(it hasn't been tested on architectures other than x86, though; help
welcome).

The vmcoreinfo device is an emulated ACPI device that exposes a 4k
memory range to the guest for storing various information useful for
debugging the guest OS (it is greatly inspired by the VMGENID device
implementation). A version field with value 0 means the page gives the
paddr/size of the VMCOREINFO ELF PT_NOTE; other values can be used for
different purposes or OSes. (Note: some wanted to see pvpanic somehow
merged with this device. I have no clear idea how to do that, nor do I
think it is a good idea, since the devices are quite different and used
at different times for different purposes. It can be done as a future
iteration if appropriate; feel free to send patches.)
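To make the layout concrete, here is a minimal C sketch of what the
guest-writable area and its update protocol could look like; the field
names, sizes, and offsets here are assumptions for illustration only,
not the authoritative layout from docs/specs/vmcoreinfo.txt:

```c
#include <stdint.h>

/*
 * Hypothetical sketch of the guest-writable vmcoreinfo area. Field
 * names and layout are illustrative only; docs/specs/vmcoreinfo.txt
 * in this series is the authoritative description. All fields are
 * little-endian from the guest's point of view.
 */
typedef struct {
    uint32_t version;     /* 0: note_paddr/note_size give a VMCOREINFO ELF PT_NOTE */
    uint64_t note_paddr;  /* guest-physical address of the ELF note */
    uint32_t note_size;   /* size of the note in bytes */
} __attribute__((packed)) VmcoreinfoGuestData;

/*
 * Guest-side publication: write the data fields before the version
 * marker, so a dump taken mid-update sees either nothing usable or a
 * complete record (the host sanity checks reject inconsistent values).
 */
static void vmcoreinfo_publish(volatile VmcoreinfoGuestData *area,
                               uint64_t paddr, uint32_t size)
{
    area->note_paddr = paddr;
    area->note_size = size;
    __sync_synchronize();  /* order the data before the version write */
    area->version = 0;     /* format 0: ELF PT_NOTE paddr/size */
}
```

This write ordering only narrows the race window discussed later in
the thread; it does not remove the need for host-side validation.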

Crash 7.1.9 will parse the "phys_base" value from the VMCOREINFO note,
and thus will work with the KASLR dumps produced by this series.

In order of priority, the VMCOREINFO "phys_base" value is the most
accurate. If it is not available, QEMU will keep the current guessed
value.

The series implements the VMCOREINFO note addition in QEMU ELF/kdump
dumps, as well as in the Python script scripts/dump-guest-memory.py.

To test:

Compile and run a guest kernel with CONFIG_RANDOMIZE_BASE=y.

Run qemu with -device vmcoreinfo.

Load the experimental vmcoreinfo module in the guest:
https://github.com/elmarco/vmgenid-test/blob/master/qemuvmci-test.c.

Produce an ELF dump:
{ "execute": "dump-guest-memory", "arguments": { "protocol": "file:dump", "paging": false } }

Produce a kdump:
{ "execute": "dump-guest-memory", "arguments": { "protocol": "file:dump", "paging": false, "format": "kdump-zlib" } }

Or use the (gdb) dump-guest-memory command provided by the
scripts/dump-guest-memory.py script.

Analyze with crash >= 7.1.9:
$ crash vmlinux dump

v4: from Laszlo review
- switch to warn_report*()
- update test to follow vmgenid and use boot-sector infrastructure
- fix range checks in the python script
- add vmcoreinfo_get() stub

v3: from Laszlo review
- change vmcoreinfo offset to 36
- reset err to null after report
- use PRIu32
- change name_size and desc_size against MAX_VMCOREINFO_SIZE
- python code simplification
- check boundaries of blocks in phys_memory_read()
- fix some vmgi vs vmci names
- add more comments in code
- fix comment indentation
- add r-b tags

v2: from Laszlo review
- vmci: fix guest endianness handling
- vmci: fix wrong sizeof()
- vmci: add back reset logic from vmgenid
- dump: have 1MB size limit for vmcoreinfo
- dump: fix potential off-by-1 buffer manipulation
- dump: use temporary variable for qemu_strtou64
- dump: fixed VMCOREINFO duplication in kdump
- update gdb script to not call into qemu process
- update MAINTAINERS with some new files

Marc-André Lureau (8):
  vmgenid: replace x-write-pointer-available hack
  acpi: add vmcoreinfo device
  stubs: add vmcoreinfo_get() stub
  tests: add simple vmcoreinfo test
  dump: add vmcoreinfo ELF note
  kdump: add vmcoreinfo ELF note
  scripts/dump-guest-memory.py: add vmcoreinfo
  MAINTAINERS: add Dump maintainers

 scripts/dump-guest-memory.py         |  47 ++++++++
 include/hw/acpi/aml-build.h          |   1 +
 include/hw/acpi/bios-linker-loader.h |   2 +
 include/hw/acpi/vmcoreinfo.h         |  37 ++++++
 include/hw/compat.h                  |   4 -
 include/sysemu/dump.h                |   2 +
 dump.c                               | 154 +++++++++++++++++++++++++
 hw/acpi/aml-build.c                  |   2 +
 hw/acpi/bios-linker-loader.c         |  10 ++
 hw/acpi/vmcoreinfo.c                 | 211 +++++++++++++++++++++++++++++++++++
 hw/acpi/vmgenid.c                    |   9 +-
 hw/i386/acpi-build.c                 |  14 +++
 stubs/vmcoreinfo.c                   |   9 ++
 tests/vmcoreinfo-test.c              | 141 +++++++++++++++++++++++
 MAINTAINERS                          |   9 ++
 default-configs/arm-softmmu.mak      |   1 +
 default-configs/i386-softmmu.mak     |   1 +
 default-configs/x86_64-softmmu.mak   |   1 +
 docs/specs/vmcoreinfo.txt            | 138 +++++++++++++++++++++++
 hw/acpi/Makefile.objs                |   1 +
 stubs/Makefile.objs                  |   1 +
 tests/Makefile.include               |   2 +
 22 files changed, 785 insertions(+), 12 deletions(-)
 create mode 100644 include/hw/acpi/vmcoreinfo.h
 create mode 100644 hw/acpi/vmcoreinfo.c
 create mode 100644 stubs/vmcoreinfo.c
 create mode 100644 tests/vmcoreinfo-test.c
 create mode 100644 docs/specs/vmcoreinfo.txt

-- 
2.13.1.395.gf7b71de06


Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Michael S. Tsirkin 6 years, 9 months ago
On Fri, Jul 14, 2017 at 08:20:03PM +0200, Marc-André Lureau wrote:
> Recent linux kernels enable KASLR to randomize phys/virt memory
> addresses. This series aims to provide enough information in qemu
> dumps so that crash utility can work with randomized kernel too (it
> hasn't been tested on other archs than x86 though, help welcome).
> 
> The vmcoreinfo device is an emulated ACPI device that exposes a 4k
> memory range to the guest to store various informations useful to
> debug the guest OS. (it is greatly inspired by the VMGENID device
> implementation). The version field with value 0 is meant to give
> paddr/size of the VMCOREINFO ELF PT_NOTE, other values can be used for
> different purposes or OSes. (note: some wanted to see pvpanic somehow
> merged with this device, I have no clear idea how to do that, nor do I
> think this is a good idea since the devices are quite different, used
> at different time for different purposes. And it can be done as a
> future iteration if it is appropriate, feel free to send patches)

First, I think you underestimate the difficulty of maintaining
compatibility.

Second, this seems racy - how do you know when the guest is done
writing out the data?

Given you have very little data to export (PA, size - do you even need
size?), how about just using an ACPI method to do it, instead of
exporting a physical address and storing the address there? This way
you can add more methods as you add functionality.

VMGENID has very specific requirements around performance,
and does not care about consistency at all.
This does not apply here.


> Crash 7.1.9 will parse the "phys_base" value from the VMCOREINFO note,
> and thus will work with KASLR-dump produced by this series.
> 
> By priority, VMCOREINFO "phys_base" value is the most accurate. If not
> available, qemu will keep the current guessed value.
> 
> The series implements the VMCOREINFO note addition in qemu ELF/kdump,
> as well as the python scripts/dump-guest-memory.py.
> 
> To test:
> 
> Compile and run a guest kernel with CONFIG_RANDOMIZE_BASE=y.
> 
> Run qemu with -device vmcoreinfo.
> 
> Load the experimental vmcoreinfo module in guest
> https://github.com/elmarco/vmgenid-test/blob/master/qemuvmci-test.c.
> 
> Produce an ELF dump:
> { "execute": "dump-guest-memory", "arguments": { "protocol": "file:dump", "paging": false } }
> 
> Produce a kdump:
> { "execute": "dump-guest-memory", "arguments": { "protocol": "file:dump", "paging": false, "format": "kdump-zlib" } }
> 
> Or with (gdb) dump-guest-memory, with scripts/dump-guest-memory.py script.
> 
> Analyze with crash >= 7.1.9
> $ crash vmlinux dump
> 
> v4: from Laszlo review
> - switch to warn_report*()
> - update test to follow vmgenid and use boot-sector infrastructure
> - fix range checks in the python script
> - add vmcoreinfo_get() stub
> 
> v3: from Laszlo review
> - change vmcoreinfo offset to 36
> - reset err to null after report
> - use PRIu32
> - change name_size and desc_size against MAX_VMCOREINFO_SIZE
> - python code simplification
> - check boundaries of blocks in phys_memory_read()
> - fix some vmgi vs vmci names
> - add more comments in code
> - fix comment indentation
> - add r-b tags
> 
> v2: from Laszlo review
> - vmci: fix guest endianess handling
> - vmci: fix wrong sizeof()
> - vmci: add back reset logic from vmgenid
> - dump: have 1MB size limit for vmcoreinfo
> - dump: fix potential off-by-1 buffer manipulation
> - dump: use temporary variable for qemu_strtou64
> - dump: fixed VMCOREINFO duplication in kdump
> - update gdb script to not call into qemu process
> - update MAINTAINERS with some new files
> 
> Marc-André Lureau (8):
>   vmgenid: replace x-write-pointer-available hack
>   acpi: add vmcoreinfo device
>   stubs: add vmcoreinfo_get() stub
>   tests: add simple vmcoreinfo test
>   dump: add vmcoreinfo ELF note
>   kdump: add vmcoreinfo ELF note
>   scripts/dump-guest-memory.py: add vmcoreinfo
>   MAINTAINERS: add Dump maintainers
> 
>  scripts/dump-guest-memory.py         |  47 ++++++++
>  include/hw/acpi/aml-build.h          |   1 +
>  include/hw/acpi/bios-linker-loader.h |   2 +
>  include/hw/acpi/vmcoreinfo.h         |  37 ++++++
>  include/hw/compat.h                  |   4 -
>  include/sysemu/dump.h                |   2 +
>  dump.c                               | 154 +++++++++++++++++++++++++
>  hw/acpi/aml-build.c                  |   2 +
>  hw/acpi/bios-linker-loader.c         |  10 ++
>  hw/acpi/vmcoreinfo.c                 | 211 +++++++++++++++++++++++++++++++++++
>  hw/acpi/vmgenid.c                    |   9 +-
>  hw/i386/acpi-build.c                 |  14 +++
>  stubs/vmcoreinfo.c                   |   9 ++
>  tests/vmcoreinfo-test.c              | 141 +++++++++++++++++++++++
>  MAINTAINERS                          |   9 ++
>  default-configs/arm-softmmu.mak      |   1 +
>  default-configs/i386-softmmu.mak     |   1 +
>  default-configs/x86_64-softmmu.mak   |   1 +
>  docs/specs/vmcoreinfo.txt            | 138 +++++++++++++++++++++++
>  hw/acpi/Makefile.objs                |   1 +
>  stubs/Makefile.objs                  |   1 +
>  tests/Makefile.include               |   2 +
>  22 files changed, 785 insertions(+), 12 deletions(-)
>  create mode 100644 include/hw/acpi/vmcoreinfo.h
>  create mode 100644 hw/acpi/vmcoreinfo.c
>  create mode 100644 stubs/vmcoreinfo.c
>  create mode 100644 tests/vmcoreinfo-test.c
>  create mode 100644 docs/specs/vmcoreinfo.txt
> 
> -- 
> 2.13.1.395.gf7b71de06
> 

Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Laszlo Ersek 6 years, 9 months ago
On 07/14/17 21:59, Michael S. Tsirkin wrote:
> On Fri, Jul 14, 2017 at 08:20:03PM +0200, Marc-André Lureau wrote:
>> Recent linux kernels enable KASLR to randomize phys/virt memory
>> addresses. This series aims to provide enough information in qemu
>> dumps so that crash utility can work with randomized kernel too (it
>> hasn't been tested on other archs than x86 though, help welcome).
>>
>> The vmcoreinfo device is an emulated ACPI device that exposes a 4k
>> memory range to the guest to store various informations useful to
>> debug the guest OS. (it is greatly inspired by the VMGENID device
>> implementation). The version field with value 0 is meant to give
>> paddr/size of the VMCOREINFO ELF PT_NOTE, other values can be used for
>> different purposes or OSes. (note: some wanted to see pvpanic somehow
>> merged with this device, I have no clear idea how to do that, nor do I
>> think this is a good idea since the devices are quite different, used
>> at different time for different purposes. And it can be done as a
>> future iteration if it is appropriate, feel free to send patches)
> 
> First, I think you underestimate the difficulty of maintaining
> compatibility.
> 
> Second, this seems racy - how do you know when is guest done writing out
> the data?

What data exactly?

The guest kernel module points the fields in the "vmcoreinfo page" to
the then-existent vmcoreinfo ELF note. At that point, the ELF note is
complete.

If we catch the guest with a dump request while the kernel module is
setting up the fields (i.e., the fields are not consistent), then we'll
catch that in our sanity checks, and the note won't be extracted. This
is no different from the case when you simply dump the guest RAM before
the module got invoked.

> Given you have very little data to export (PA, size - do
> you even need size?)

Yes, it tells us in advance how much memory to allocate before we copy
out the vmcoreinfo ELF note (and we enforce a sanity limit on the size).
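As an illustration of those sanity checks, the host-side validation
could look like this sketch; the function and constant names are
invented here (the v2 changelog mentions a 1 MB size limit), and the
real checks live in dump.c:

```c
#include <stdbool.h>
#include <stdint.h>

/* 1 MB cap on the note size, matching the limit mentioned in the v2
 * changelog; the constant name is invented for this sketch. */
#define VMCOREINFO_NOTE_SIZE_MAX (1024 * 1024)

/*
 * Hypothetical validation run before allocating memory and copying
 * the note out of guest RAM. If the guest was caught mid-update, the
 * values fail these checks and the note is simply not extracted.
 */
static bool vmcoreinfo_note_sane(uint64_t paddr, uint64_t size,
                                 uint64_t guest_ram_size)
{
    if (size == 0 || size > VMCOREINFO_NOTE_SIZE_MAX) {
        return false;  /* unset, or not yet consistent */
    }
    if (paddr >= guest_ram_size || guest_ram_size - paddr < size) {
        return false;  /* note would fall outside guest RAM */
    }
    return true;
}
```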

> - how about just using an ACPI method do it,

Do what exactly?

> instead of exporting a physical addess and storing address there.  This
> way you can add more methods as you add functionality.

I'm not saying this is a bad idea (especially because I don't fully
understand your point), but I will say that I'm quite sad that you are
sending Marc-André back to the drawing board after he posted v4 -- also
invalidating my review efforts. :/

Laszlo

Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Michael S. Tsirkin 6 years, 9 months ago
On Fri, Jul 14, 2017 at 10:21:43PM +0200, Laszlo Ersek wrote:
> On 07/14/17 21:59, Michael S. Tsirkin wrote:
> > On Fri, Jul 14, 2017 at 08:20:03PM +0200, Marc-André Lureau wrote:
> >> Recent linux kernels enable KASLR to randomize phys/virt memory
> >> addresses. This series aims to provide enough information in qemu
> >> dumps so that crash utility can work with randomized kernel too (it
> >> hasn't been tested on other archs than x86 though, help welcome).
> >>
> >> The vmcoreinfo device is an emulated ACPI device that exposes a 4k
> >> memory range to the guest to store various informations useful to
> >> debug the guest OS. (it is greatly inspired by the VMGENID device
> >> implementation). The version field with value 0 is meant to give
> >> paddr/size of the VMCOREINFO ELF PT_NOTE, other values can be used for
> >> different purposes or OSes. (note: some wanted to see pvpanic somehow
> >> merged with this device, I have no clear idea how to do that, nor do I
> >> think this is a good idea since the devices are quite different, used
> >> at different time for different purposes. And it can be done as a
> >> future iteration if it is appropriate, feel free to send patches)
> > 
> > First, I think you underestimate the difficulty of maintaining
> > compatibility.
> > 
> > Second, this seems racy - how do you know when is guest done writing out
> > the data?
> 
> What data exactly?
> 
> The guest kernel module points the fields in the "vmcoreinfo page" to
> the then-existent vmcoreinfo ELF note. At that point, the ELF note is
> complete.

When does this happen?

> If we catch the guest with a dump request while the kernel module is
> setting up the fields (i.e., the fields are not consistent), then we'll
> catch that in our sanity checks, and the note won't be extracted.

Are there assumptions about e.g. in which order pa and size
are written out then? Atomicity of these writes?

> This
> is no different from the case when you simply dump the guest RAM before
> the module got invoked.
> 
> > Given you have very little data to export (PA, size - do
> > you even need size?)
> 
> Yes, it tells us in advance how much memory to allocate before we copy
> out the vmcoreinfo ELF note (and we enforce a sanity limit on the size).
> 
> > - how about just using an ACPI method do it,
> 
> Do what exactly?

Pass address + size to host - that's what the interface is doing,
isn't it?

> > instead of exporting a physical addess and storing address there.  This
> > way you can add more methods as you add functionality.
> 
> I'm not saying this is a bad idea (especially because I don't fully
> understand your point), but I will say that I'm quite sad that you are
> sending Marc-André back to the drawing board after he posted v4 -- also
> invalidating my review efforts. :/
> 
> Laszlo

You are right, I should have looked at this sooner. An early RFC
suggested writing into fw_cfg directly. I couldn't find any
discussion around this - why was this abandoned?

-- 
MST

Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Marc-André Lureau 6 years, 9 months ago
Hi

On Sat, Jul 15, 2017 at 12:23 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Fri, Jul 14, 2017 at 10:21:43PM +0200, Laszlo Ersek wrote:
>> On 07/14/17 21:59, Michael S. Tsirkin wrote:
>> > On Fri, Jul 14, 2017 at 08:20:03PM +0200, Marc-André Lureau wrote:
>> >> Recent linux kernels enable KASLR to randomize phys/virt memory
>> >> addresses. This series aims to provide enough information in qemu
>> >> dumps so that crash utility can work with randomized kernel too (it
>> >> hasn't been tested on other archs than x86 though, help welcome).
>> >>
>> >> The vmcoreinfo device is an emulated ACPI device that exposes a 4k
>> >> memory range to the guest to store various informations useful to
>> >> debug the guest OS. (it is greatly inspired by the VMGENID device
>> >> implementation). The version field with value 0 is meant to give
>> >> paddr/size of the VMCOREINFO ELF PT_NOTE, other values can be used for
>> >> different purposes or OSes. (note: some wanted to see pvpanic somehow
>> >> merged with this device, I have no clear idea how to do that, nor do I
>> >> think this is a good idea since the devices are quite different, used
>> >> at different time for different purposes. And it can be done as a
>> >> future iteration if it is appropriate, feel free to send patches)
>> >
>> > First, I think you underestimate the difficulty of maintaining
>> > compatibility.
>> >
>> > Second, this seems racy - how do you know when is guest done writing out
>> > the data?
>>
>> What data exactly?
>>
>> The guest kernel module points the fields in the "vmcoreinfo page" to
>> the then-existent vmcoreinfo ELF note. At that point, the ELF note is
>> complete.
>
> When does this happen?

Very early boot afaik. But the exact details on when to expose it are
left to the kernel side. For now, it's a test module I load manually.

>
>> If we catch the guest with a dump request while the kernel module is
>> setting up the fields (i.e., the fields are not consistent), then we'll
>> catch that in our sanity checks, and the note won't be extracted.
>
> Are there assumptions about e.g. in which order pa and size
> are written out then? Atomicity of these writes?

I expect it to be atomic, but as Laszlo said, the code should be quite
careful when trying to read the data.

>
>> This
>> is no different from the case when you simply dump the guest RAM before
>> the module got invoked.
>>
>> > Given you have very little data to export (PA, size - do
>> > you even need size?)
>>
>> Yes, it tells us in advance how much memory to allocate before we copy
>> out the vmcoreinfo ELF note (and we enforce a sanity limit on the size).
>>
>> > - how about just using an ACPI method do it,
>>
>> Do what exactly?
>
> Pass address + size to host - that's what the interface is doing,
> isn't it?
>


The memory region is meant to be usable by other OSes, or to export
more details in the future. I think if we added a method, it would be
to tell QEMU that the memory has been written, but it may still be
corrupted by the time we read it, so I am not sure it would really help.


>> > instead of exporting a physical addess and storing address there.  This
>> > way you can add more methods as you add functionality.
>>
>> I'm not saying this is a bad idea (especially because I don't fully
>> understand your point), but I will say that I'm quite sad that you are
>> sending Marc-André back to the drawing board after he posted v4 -- also
>> invalidating my review efforts. :/
>>
>> Laszlo
>
> You are right, I should have looked at this sooner. Early RFC
> suggested writing into fw cfg directly. I couldn't find any
> discussion around this - why was this abandoned?

Violation (or rather abuse) of layers iirc



-- 
Marc-André Lureau

Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Michael S. Tsirkin 6 years, 9 months ago
On Sat, Jul 15, 2017 at 12:31:36AM +0200, Marc-André Lureau wrote:
> Hi
> 
> On Sat, Jul 15, 2017 at 12:23 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > On Fri, Jul 14, 2017 at 10:21:43PM +0200, Laszlo Ersek wrote:
> >> On 07/14/17 21:59, Michael S. Tsirkin wrote:
> >> > On Fri, Jul 14, 2017 at 08:20:03PM +0200, Marc-André Lureau wrote:
> >> >> Recent linux kernels enable KASLR to randomize phys/virt memory
> >> >> addresses. This series aims to provide enough information in qemu
> >> >> dumps so that crash utility can work with randomized kernel too (it
> >> >> hasn't been tested on other archs than x86 though, help welcome).
> >> >>
> >> >> The vmcoreinfo device is an emulated ACPI device that exposes a 4k
> >> >> memory range to the guest to store various informations useful to
> >> >> debug the guest OS. (it is greatly inspired by the VMGENID device
> >> >> implementation). The version field with value 0 is meant to give
> >> >> paddr/size of the VMCOREINFO ELF PT_NOTE, other values can be used for
> >> >> different purposes or OSes. (note: some wanted to see pvpanic somehow
> >> >> merged with this device, I have no clear idea how to do that, nor do I
> >> >> think this is a good idea since the devices are quite different, used
> >> >> at different time for different purposes. And it can be done as a
> >> >> future iteration if it is appropriate, feel free to send patches)
> >> >
> >> > First, I think you underestimate the difficulty of maintaining
> >> > compatibility.
> >> >
> >> > Second, this seems racy - how do you know when is guest done writing out
> >> > the data?
> >>
> >> What data exactly?
> >>
> >> The guest kernel module points the fields in the "vmcoreinfo page" to
> >> the then-existent vmcoreinfo ELF note. At that point, the ELF note is
> >> complete.
> >
> > When does this happen?
> 
> Very early boot afaik. But the exact details on when to expose it is
> left to the kernel side. For now, it's a test module I load manually.
> 
> >
> >> If we catch the guest with a dump request while the kernel module is
> >> setting up the fields (i.e., the fields are not consistent), then we'll
> >> catch that in our sanity checks, and the note won't be extracted.
> >
> > Are there assumptions about e.g. in which order pa and size
> > are written out then? Atomicity of these writes?
> 
> I expect it to be atomic, but as Laszlo said, the code should be quite
> careful when trying to read the data.

Same when writing it out.  Worth documenting too.

> >
> >> This
> >> is no different from the case when you simply dump the guest RAM before
> >> the module got invoked.
> >>
> >> > Given you have very little data to export (PA, size - do
> >> > you even need size?)
> >>
> >> Yes, it tells us in advance how much memory to allocate before we copy
> >> out the vmcoreinfo ELF note (and we enforce a sanity limit on the size).
> >>
> >> > - how about just using an ACPI method do it,
> >>
> >> Do what exactly?
> >
> > Pass address + size to host - that's what the interface is doing,
> > isn't it?
> >
> 
> 
> The memory region is meant to be usable for other OS, or to export
> more details in the future.

That's the issue. You will never be able to increment the version
just to add more data, because that will break old hypervisors.

> I think if we add a method, it would be to
> tell qemu that the memory has been written, but it may still be
> corrupted at the time we read it. So I am not sure it will really help

So don't. Just pass the PA and size to the method as arguments and let
it figure out how to pass them to QEMU. To extend, you will simply add
another method - which one is present tells you what the hypervisor
supports, and which one is called tells you what the guest supports.

What to do there internally? I don't mind if it sticks this data in
reserved memory like you did here. And then you won't need to reserve
a full 4K, just a couple of bytes, as it will be safe to extend.

> 
> >> > instead of exporting a physical addess and storing address there.  This
> >> > way you can add more methods as you add functionality.
> >>
> >> I'm not saying this is a bad idea (especially because I don't fully
> >> understand your point), but I will say that I'm quite sad that you are
> >> sending Marc-André back to the drawing board after he posted v4 -- also
> >> invalidating my review efforts. :/
> >>
> >> Laszlo
> >
> > You are right, I should have looked at this sooner. Early RFC
> > suggested writing into fw cfg directly. I couldn't find any
> > discussion around this - why was this abandoned?
> 
> Violation (or rather abuse) of layers iirc

Hmm.  I tried to find discussion about that and failed.  It is available
through QEMU0002 in ACPI - would it be OK if guests went through that?
I do not mind the extra indirection so much, but I don't think
host/guest interface compatibility issues are fully thought through.

> 
> 
> -- 
> Marc-André Lureau

Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Marc-André Lureau 6 years, 9 months ago
Hi

On Fri, Jul 14, 2017 at 4:29 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Sat, Jul 15, 2017 at 12:31:36AM +0200, Marc-André Lureau wrote:
>> Hi
>>
>> On Sat, Jul 15, 2017 at 12:23 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
>> > On Fri, Jul 14, 2017 at 10:21:43PM +0200, Laszlo Ersek wrote:
>> >> On 07/14/17 21:59, Michael S. Tsirkin wrote:
>> >> > On Fri, Jul 14, 2017 at 08:20:03PM +0200, Marc-André Lureau wrote:
>> >> >> Recent linux kernels enable KASLR to randomize phys/virt memory
>> >> >> addresses. This series aims to provide enough information in qemu
>> >> >> dumps so that crash utility can work with randomized kernel too (it
>> >> >> hasn't been tested on other archs than x86 though, help welcome).
>> >> >>
>> >> >> The vmcoreinfo device is an emulated ACPI device that exposes a 4k
>> >> >> memory range to the guest to store various informations useful to
>> >> >> debug the guest OS. (it is greatly inspired by the VMGENID device
>> >> >> implementation). The version field with value 0 is meant to give
>> >> >> paddr/size of the VMCOREINFO ELF PT_NOTE, other values can be used for
>> >> >> different purposes or OSes. (note: some wanted to see pvpanic somehow
>> >> >> merged with this device, I have no clear idea how to do that, nor do I
>> >> >> think this is a good idea since the devices are quite different, used
>> >> >> at different time for different purposes. And it can be done as a
>> >> >> future iteration if it is appropriate, feel free to send patches)
>> >> >
>> >> > First, I think you underestimate the difficulty of maintaining
>> >> > compatibility.
>> >> >
>> >> > Second, this seems racy - how do you know when is guest done writing out
>> >> > the data?
>> >>
>> >> What data exactly?
>> >>
>> >> The guest kernel module points the fields in the "vmcoreinfo page" to
>> >> the then-existent vmcoreinfo ELF note. At that point, the ELF note is
>> >> complete.
>> >
>> > When does this happen?
>>
>> Very early boot afaik. But the exact details on when to expose it is
>> left to the kernel side. For now, it's a test module I load manually.
>>
>> >
>> >> If we catch the guest with a dump request while the kernel module is
>> >> setting up the fields (i.e., the fields are not consistent), then we'll
>> >> catch that in our sanity checks, and the note won't be extracted.
>> >
>> > Are there assumptions about e.g. in which order pa and size
>> > are written out then? Atomicity of these writes?
>>
>> I expect it to be atomic, but as Laszlo said, the code should be quite
>> careful when trying to read the data.
>
> Same when writing it out.  Worth documenting too.
>
>> >
>> >> This
>> >> is no different from the case when you simply dump the guest RAM before
>> >> the module got invoked.
>> >>
>> >> > Given you have very little data to export (PA, size - do
>> >> > you even need size?)
>> >>
>> >> Yes, it tells us in advance how much memory to allocate before we copy
>> >> out the vmcoreinfo ELF note (and we enforce a sanity limit on the size).
>> >>
>> >> > - how about just using an ACPI method do it,
>> >>
>> >> Do what exactly?
>> >
>> > Pass address + size to host - that's what the interface is doing,
>> > isn't it?
>> >
>>
>>
>> The memory region is meant to be usable for other OS, or to export
>> more details in the future.
>
> That's the issue. You will never be able to increment version
> just to add more data because that will break old hypervisors.

Could you be more explicit on what will break?

>
>> I think if we add a method, it would be to
>> tell qemu that the memory has been written, but it may still be
>> corrupted at the time we read it. So I am not sure it will really help
>
> So don't. Just pass PA and size to method as arguments and let it figure
> out how to pass it to QEMU. To extend, you will simply add another
> method - which one is present tells you what does hypervisor
> support, which one is called tells you what does guest support.
>
> What to do there internally? I don't mind it if it sticks this data
> in reserved memory like you did here. And then you won't need to
> reserve a full 4K, just a couple of bytes as it will be safe to extend.
>

I can see how, for example, nvdimm methods are implemented: there is a
memory region reserved for data exchange, and an IO NTFY to give QEMU
execution context. Is this how we should design the interface?
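A rough sketch of that "reserved buffer plus IO doorbell" pattern,
with every name, port, and field invented for illustration (the real
nvdimm _DSM interface differs):

```c
#include <stdint.h>

/*
 * Invented layout for a "method call" page in reserved memory. The
 * guest fills in func and the arguments, then writes to a doorbell IO
 * port; QEMU handles the request synchronously inside the port-write
 * handler and sets status before the guest's OUT instruction returns.
 */
typedef struct {
    uint32_t func;       /* which operation the guest requests */
    uint64_t arg_paddr;  /* e.g. PA of the vmcoreinfo ELF note */
    uint32_t arg_size;   /* e.g. its size in bytes */
    uint32_t status;     /* set by QEMU before the notify completes */
} __attribute__((packed)) MethodCallPage;
```

Because QEMU runs the port-write handler while the vCPU is stopped in
the IO exit, the guest gets a synchronous completion point, which is
the "when is the guest done writing" signal debated above.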

I would like to hear from Ladi how he intended to use the device in
the future, and if he would also prefer ACPI methods and what those
methods should be.

>>
>> >> > instead of exporting a physical addess and storing address there.  This
>> >> > way you can add more methods as you add functionality.
>> >>
>> >> I'm not saying this is a bad idea (especially because I don't fully
>> >> understand your point), but I will say that I'm quite sad that you are
>> >> sending Marc-André back to the drawing board after he posted v4 -- also
>> >> invalidating my review efforts. :/
>> >>
>> >> Laszlo
>> >
>> > You are right, I should have looked at this sooner. Early RFC
>> > suggested writing into fw cfg directly. I couldn't find any
>> > discussion around this - why was this abandoned?
>>
>> Violation (or rather abuse) of layers iirc
>
> Hmm.  I tried to find discussion about that and failed.  It is available
> through QEMU0002 in ACPI - would it be OK if guests went through that?

I wouldn't mind, although merging functionality in a single device
isn't what I would think of first. I guess Ladi would be happier with
a single device. I suppose it shouldn't break drivers if we added
memory, io, methods etc. to the device?

Laszlo, what do you think if we add vmcoreinfo methods to QEMU0002?

> I do not mind the extra indirection so much, but I don't think
> host/guest interface compatibility issues are fully thought through.
>
>>
>>
>> --
>> Marc-André Lureau



-- 
Marc-André Lureau

Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Ladi Prosek 6 years, 9 months ago
Hi Marc-Andre,

On Tue, Jul 18, 2017 at 3:29 PM, Marc-André Lureau
<marcandre.lureau@gmail.com> wrote:
> Hi
>
> On Fri, Jul 14, 2017 at 4:29 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>> On Sat, Jul 15, 2017 at 12:31:36AM +0200, Marc-André Lureau wrote:
>>> Hi
>>>
>>> On Sat, Jul 15, 2017 at 12:23 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
>>> > On Fri, Jul 14, 2017 at 10:21:43PM +0200, Laszlo Ersek wrote:
>>> >> On 07/14/17 21:59, Michael S. Tsirkin wrote:
>>> >> > On Fri, Jul 14, 2017 at 08:20:03PM +0200, Marc-André Lureau wrote:
>>> >> >> Recent linux kernels enable KASLR to randomize phys/virt memory
>>> >> >> addresses. This series aims to provide enough information in qemu
>>> >> >> dumps so that crash utility can work with randomized kernel too (it
>>> >> >> hasn't been tested on other archs than x86 though, help welcome).
>>> >> >>
>>> >> >> The vmcoreinfo device is an emulated ACPI device that exposes a 4k
>>> >> >> memory range to the guest to store various information useful to
>>> >> >> debug the guest OS. (it is greatly inspired by the VMGENID device
>>> >> >> implementation). The version field with value 0 is meant to give
>>> >> >> paddr/size of the VMCOREINFO ELF PT_NOTE, other values can be used for
>>> >> >> different purposes or OSes. (note: some wanted to see pvpanic somehow
>>> >> >> merged with this device, I have no clear idea how to do that, nor do I
>>> >> >> think this is a good idea since the devices are quite different, used
>>> >> >> at different time for different purposes. And it can be done as a
>>> >> >> future iteration if it is appropriate, feel free to send patches)
>>> >> >
>>> >> > First, I think you underestimate the difficulty of maintaining
>>> >> > compatibility.
>>> >> >
>>> >> > Second, this seems racy - how do you know when is guest done writing out
>>> >> > the data?
>>> >>
>>> >> What data exactly?
>>> >>
>>> >> The guest kernel module points the fields in the "vmcoreinfo page" to
>>> >> the then-existent vmcoreinfo ELF note. At that point, the ELF note is
>>> >> complete.
>>> >
>>> > When does this happen?
>>>
>>> Very early boot afaik. But the exact details of when to expose it are
>>> left to the kernel side. For now, it's a test module I load manually.
>>>
>>> >
>>> >> If we catch the guest with a dump request while the kernel module is
>>> >> setting up the fields (i.e., the fields are not consistent), then we'll
>>> >> catch that in our sanity checks, and the note won't be extracted.
>>> >
>>> > Are there assumptions about e.g. in which order pa and size
>>> > are written out then? Atomicity of these writes?
>>>
>>> I expect it to be atomic, but as Laszlo said, the code should be quite
>>> careful when trying to read the data.
>>
>> Same when writing it out.  Worth documenting too.
>>
>>> >
>>> >> This
>>> >> is no different from the case when you simply dump the guest RAM before
>>> >> the module got invoked.
>>> >>
>>> >> > Given you have very little data to export (PA, size - do
>>> >> > you even need size?)
>>> >>
>>> >> Yes, it tells us in advance how much memory to allocate before we copy
>>> >> out the vmcoreinfo ELF note (and we enforce a sanity limit on the size).
>>> >>
>>> >> > - how about just using an ACPI method do it,
>>> >>
>>> >> Do what exactly?
>>> >
>>> > Pass address + size to host - that's what the interface is doing,
>>> > isn't it?
>>> >
>>>
>>>
>>> The memory region is meant to be usable for other OS, or to export
>>> more details in the future.
>>
>> That's the issue. You will never be able to increment version
>> just to add more data because that will break old hypervisors.
>
> Could you be more explicit on what will break?
>
>>
>>> I think if we add a method, it would be to
>>> tell qemu that the memory has been written, but it may still be
>>> corrupted at the time we read it. So I am not sure it will really help.
>>
>> So don't. Just pass PA and size to method as arguments and let it figure
>> out how to pass it to QEMU. To extend, you will simply add another
>> method - which one is present tells you what the hypervisor
>> supports, which one is called tells you what the guest supports.
>>
>> What to do there internally? I don't mind it if it sticks this data
>> in reserved memory like you did here. And then you won't need to
>> reserve a full 4K, just a couple of bytes as it will be safe to extend.
>>
>
> I can see how, for example, the nvdimm methods are implemented: there is
> a memory region reserved for data exchange, and an IO NTFY to give qemu
> execution context. Is this how we should design the interface?
>
> I would like to hear from Ladi how he intended to use the device in
> the future, and if he would also prefer ACPI methods and what those
> methods should be.

We should be able to drive pretty much anything from Windows. I wrote
a dummy driver for your earlier prototype just to be sure that ACPI
methods are fine, as I had not done or seen that before.

There are constraints which may be unique to Windows, though. If the
dump-support data is kept in guest-allocated memory, the guest must be
able to revoke it (set / call the method with NULL PA?) because
Windows drivers must free everything on unload. Unlike Linux, I don't
think we can get a piece of memory at startup and keep it for as long
as the OS is running. It would be flagged as a memory leak. But that
should be easy to have. Can't really think of anything else.

>>>
>>> >> > instead of exporting a physical address and storing address there.  This
>>> >> > way you can add more methods as you add functionality.
>>> >>
>>> >> I'm not saying this is a bad idea (especially because I don't fully
>>> >> understand your point), but I will say that I'm quite sad that you are
>>> >> sending Marc-André back to the drawing board after he posted v4 -- also
>>> >> invalidating my review efforts. :/
>>> >>
>>> >> Laszlo
>>> >
>>> > You are right, I should have looked at this sooner. Early RFC
>>> > suggested writing into fw cfg directly. I couldn't find any
>>> > discussion around this - why was this abandoned?
>>>
>>> Violation (or rather abuse) of layers iirc
>>
>> Hmm.  I tried to find discussion about that and failed.  It is available
>> through QEMU0002 in ACPI - would it be OK if guests went through that?
>
> I wouldn't mind, although merging functionality in a single device
> isn't what I would think of first. I guess Ladi would be happier with
> a single device. I suppose it shouldn't break drivers if we added
> memory, io, methods etc. to the device?

Yeah, it would be nice if this was part of pvpanic. Even something
really simple like:

 /* The bit of supported pv event */
 #define PVPANIC_F_PANICKED      0
+#define PVPANIC_F_DUMP_INFO_SET      1

 -     memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s, "pvpanic", 1);
 +    memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s,
 +                          "pvpanic", 32); // or whatever

The guest writes to two or three registers: PA, size, type?, then sets
the PVPANIC_F_DUMP_INFO_SET bit.

Although not sure if extending the I/O region is OK. And of course I
only need this on x86 :p

Thanks!

> Laszlo, what do you think if we add vmcoreinfo methods to QEMU0002?
>
>> I do not mind the extra indirection so much, but I don't think
>> host/guest interface compatibility issues are fully thought through.
>>
>>>
>>>
>>> --
>>> Marc-André Lureau
>
>
>
> --
> Marc-André Lureau

Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Marc-André Lureau 6 years, 9 months ago
Hi

On Tue, Jul 18, 2017 at 6:05 PM Ladi Prosek <lprosek@redhat.com> wrote:

>
> > I would like to hear from Ladi how he intended to use the device in
> > the future, and if he would also prefer ACPI methods and what those
> > methods should be.
>
> We should be able to drive pretty much anything from Windows. I wrote
> a dummy driver for your earlier prototype just to be sure that ACPI
> methods are fine, as I had not done or seen that before.
>
> There are constraints which may be unique to Windows, though. If the
> dump-support data is kept in guest-allocated memory, the guest must be
> able to revoke it (set / call the method with NULL PA?) because
> Windows drivers must free everything on unload. Unlike Linux, I don't
>

Well, the currently proposed vmcoreinfo device has a 4k memory region to
put anything you want, Windows shouldn't be allowed to touch it directly
(e820 regions iirc)

> think we can get a piece of memory at startup and keep it for as long
> as the OS is running. It would be flagged as a memory leak. But that
> should be easy to have. Can't really think of anything else.
>

The question is what kind of data you want to give to the host to help with
debug. Is this something that can be as simple as addr/size, or you would
rather have a 4k page to put various things there.


>
> >>>
> >>> >> > instead of exporting a physical address and storing address
> there.  This
> >>> >> > way you can add more methods as you add functionality.
> >>> >>
> >>> >> I'm not saying this is a bad idea (especially because I don't fully
> >>> >> understand your point), but I will say that I'm quite sad that you
> are
> >>> >> sending Marc-André back to the drawing board after he posted v4 --
> also
> >>> >> invalidating my review efforts. :/
> >>> >>
> >>> >> Laszlo
> >>> >
> >>> > You are right, I should have looked at this sooner. Early RFC
> >>> > suggested writing into fw cfg directly. I couldn't find any
> >>> > discussion around this - why was this abandoned?
> >>>
> >>> Violation (or rather abuse) of layers iirc
> >>
> >> Hmm.  I tried to find discussion about that and failed.  It is available
> >> through QEMU0002 in ACPI - would it be OK if guests went through that?
> >
> > I wouldn't mind, although merging functionality in a single device
> > isn't what I would think of first. I guess Ladi would be happier with
> > a single device. I suppose it shouldn't break drivers if we added
> > memory, io, methods etc. to the device?
>
> Yeah, it would be nice if this was part of pvpanic. Even something
> really simple like:
>
>  /* The bit of supported pv event */
>  #define PVPANIC_F_PANICKED      0
> +#define PVPANIC_F_DUMP_INFO_SET      1
>

QEMU0002 is fw_cfg


>
> -     memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s, "pvpanic", 1);
> +    memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s,
> +                          "pvpanic", 32); // or whatever
>
> The guest writes to two or three registers: PA, size, type?, then sets
> the PVPANIC_F_DUMP_INFO_SET bit.
>
> Although not sure if extending the I/O region is OK. And of course I
> only need this on x86 :p
>
>
I would rather have a solution that works on various archs. It's a shame
pvpanic was designed with x86 only in mind imho.

-- 
Marc-André Lureau
Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by Ladi Prosek 6 years, 9 months ago
On Tue, Jul 18, 2017 at 6:18 PM, Marc-André Lureau
<marcandre.lureau@gmail.com> wrote:
> Hi
>
> On Tue, Jul 18, 2017 at 6:05 PM Ladi Prosek <lprosek@redhat.com> wrote:
>>
>>
>> > I would like to hear from Ladi how he intended to use the device in
>> > the future, and if he would also prefer ACPI methods and what those
>> > methods should be.
>>
>> We should be able to drive pretty much anything from Windows. I wrote
>> a dummy driver for your earlier prototype just to be sure that ACPI
>> methods are fine, as I had not done or seen that before.
>>
>> There are constraints which may be unique to Windows, though. If the
>> dump-support data is kept in guest-allocated memory, the guest must be
>> able to revoke it (set / call the method with NULL PA?) because
>> Windows drivers must free everything on unload. Unlike Linux, I don't
>
>
> Well, the currently proposed vmcoreinfo device has a 4k memory region to put
> anything you want, Windows shouldn't be allowed to touch it directly (e820
> regions iirc)

Got it. And we would likely use it to put just addr/size there to make
updates atomic. I think we're supposed to update our dump-support data
on memory hot-plug for example.

>> think we can get a piece of memory at startup and keep it for as long
>> as the OS is running. It would be flagged as a memory leak. But that
>> should be easy to have. Can't really think of anything else.
>
>
> The question is what kind of data you want to give to the host to help with
> debug. Is this something that can be as simple as addr/size, or you would
> rather have a 4k page to put various things there.

The size of the dump header as currently provided by Windows (that's
the dump-support data we want to pass to the host) is 4k. So I
wouldn't put it directly in the page anyway. That, plus the desire to
update the data at least somewhat atomically, would make me prefer a
simple addr/size pair I think.

>>
>>
>> >>>
>> >>> >> > instead of exporting a physical address and storing address there.
>> >>> >> > This
>> >>> >> > way you can add more methods as you add functionality.
>> >>> >>
>> >>> >> I'm not saying this is a bad idea (especially because I don't fully
>> >>> >> understand your point), but I will say that I'm quite sad that you
>> >>> >> are
>> >>> >> sending Marc-André back to the drawing board after he posted v4 --
>> >>> >> also
>> >>> >> invalidating my review efforts. :/
>> >>> >>
>> >>> >> Laszlo
>> >>> >
>> >>> > You are right, I should have looked at this sooner. Early RFC
>> >>> > suggested writing into fw cfg directly. I couldn't find any
>> >>> > discussion around this - why was this abandoned?
>> >>>
>> >>> Violation (or rather abuse) of layers iirc
>> >>
>> >> Hmm.  I tried to find discussion about that and failed.  It is
>> >> available
>> >> through QEMU0002 in ACPI - would it be OK if guests went through that?
>> >
>> > I wouldn't mind, although merging functionality in a single device
>> > isn't what I would think of first. I guess Ladi would be happier with
>> > a single device. I suppose it shouldn't break drivers if we added
>> > memory, io, methods etc. to the device?
>>
>> Yeah, it would be nice if this was part of pvpanic. Even something
>> really simple like:
>>
>>  /* The bit of supported pv event */
>>  #define PVPANIC_F_PANICKED      0
>> +#define PVPANIC_F_DUMP_INFO_SET      1
>
>
> QEMU0002 is fw_cfg

Ah, sorry, I got confused.

>>
>>
>> -     memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s, "pvpanic", 1);
>> +    memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s,
>> +                          "pvpanic", 32); // or whatever
>>
>> The guest writes to two or three registers: PA, size, type?, then sets
>> the PVPANIC_F_DUMP_INFO_SET bit.
>>
>> Although not sure if extending the I/O region is OK. And of course I
>> only need this on x86 :p
>>
>
> I would rather have a solution that works on various archs. It's a shame
> pvpanic was designed with x86 only in mind imho.

Understood. Thanks!

>
> --
> Marc-André Lureau

Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Posted by no-reply@patchew.org 6 years, 9 months ago
Hi,

This series seems to have some coding style problems. See output below for
more information:

Message-id: 20170714182012.4595-1-marcandre.lureau@redhat.com
Subject: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Type: series

=== TEST SCRIPT BEGIN ===
#!/bin/bash

BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0

git config --local diff.renamelimit 0
git config --local diff.renames True

commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
    echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
    if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
        failed=1
        echo
    fi
    n=$((n+1))
done

exit $failed
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
04e62b8 MAINTAINERS: add Dump maintainers
947599d scripts/dump-guest-memory.py: add vmcoreinfo
7f07c2f kdump: add vmcoreinfo ELF note
dffc775 dump: add vmcoreinfo ELF note
5f6bf26 tests: add simple vmcoreinfo test
6dd3154 stubs: add vmcoreinfo_get() stub
232c73c acpi: add vmcoreinfo device
e13fbf6 vmgenid: replace x-write-pointer-available hack

=== OUTPUT BEGIN ===
Checking PATCH 1/8: vmgenid: replace x-write-pointer-available hack...
Checking PATCH 2/8: acpi: add vmcoreinfo device...
WARNING: line over 80 characters
#388: FILE: hw/acpi/vmcoreinfo.c:153:
+        VMSTATE_UINT8_ARRAY(vmcoreinfo_addr_le, VMCoreInfoState, sizeof(uint64_t)),

total: 0 errors, 1 warnings, 470 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
Checking PATCH 3/8: stubs: add vmcoreinfo_get() stub...
Checking PATCH 4/8: tests: add simple vmcoreinfo test...
Checking PATCH 5/8: dump: add vmcoreinfo ELF note...
Checking PATCH 6/8: kdump: add vmcoreinfo ELF note...
WARNING: line over 80 characters
#30: FILE: dump.c:847:
+        get_note_sizes(s, s->vmcoreinfo, &hsize, &name_size, &size_vmcoreinfo_desc);

WARNING: line over 80 characters
#47: FILE: dump.c:957:
+        get_note_sizes(s, s->vmcoreinfo, &hsize, &name_size, &size_vmcoreinfo_desc);

total: 0 errors, 2 warnings, 32 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
Checking PATCH 7/8: scripts/dump-guest-memory.py: add vmcoreinfo...
ERROR: Use of volatile is usually wrong: see Documentation/volatile-considered-harmful.txt
#26: FILE: hw/acpi/vmcoreinfo.c:168:
+    static volatile VMCoreInfoState *vmcoreinfo_gdb_helper G_GNUC_UNUSED;

total: 1 errors, 0 warnings, 86 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 8/8: MAINTAINERS: add Dump maintainers...
=== OUTPUT END ===

Test command exited with code: 1


---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@freelists.org