[Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence

Ashijeet Acharya posted 7 patches 6 years, 11 months ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/1493280397-9622-1-git-send-email-ashijeetacharya@gmail.com
Test checkpatch passed
Test docker passed
Test s390x passed
[Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Ashijeet Acharya 6 years, 11 months ago
Previously posted series patches:
v1: http://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg04641.html

This series provides chunk size independence for the DMG driver to prevent
denial-of-service in cases where untrusted files are accessed by the user.

This task is mentioned on the public block ToDo page:
http://wiki.qemu.org/ToDo/Block/DmgChunkSizeIndependence

Patch 1 introduces a new data structure to aid caching of random access points
within a compressed stream.

Patch 2 is an extension of patch 1 and introduces a new function to
initialize/update/reset our cached random access point.

Patch 3 limits the output buffer size to a maximum of 2MB to prevent QEMU from
allocating huge amounts of memory.

Patch 4 is a simple preparatory patch to aid handling of various types of chunks.

Patches 5 & 6 help to handle various types of chunks.

Patch 7 simply refactors dmg_co_preadv() to read multiple sectors at once.

Patch 8 finally removes the error messages QEMU used to throw when an image with
chunk sizes above 64MB was accessed by the user.

->Testing procedure:
Convert a DMG file to raw format using the "qemu-img convert" tool from
v2.9.0.
Next, convert the same image again after applying these patches.
Compare the two images using the "qemu-img compare" tool to check that they
are identical.
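
For illustration, the commands would look roughly like this (the image names
are just placeholders):

    qemu-img convert -f dmg -O raw image.dmg before.raw   # with qemu-img from v2.9.0
    qemu-img convert -f dmg -O raw image.dmg after.raw    # with this series applied
    qemu-img compare before.raw after.raw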

You can pick up any DMG image from the collection listed here:
https://lists.gnu.org/archive/html/qemu-devel/2014-12/msg03606.html

->Important note:
These patches treat the terms "chunk" and "block" as synonyms when talking about
bz2 compressed streams. According to the bz2 docs[1], the maximum uncompressed
size of a chunk/block can reach 46MB, which is less than the previously allowed
size of 64MB, so we can continue decompressing the whole chunk/block at once, as
we do now, instead of decompressing it partially.

This limitation is forced by the fact that bz2 compressed streams do not allow
random access midway through a chunk/block, as the BZ2_bzDecompress() API in bzlib
looks for the magic key "BZh" before starting decompression.[2] This magic key is
only present at the start of every chunk/block, and since our cached random access
points do not necessarily point to the start of a chunk/block, BZ2_bzDecompress()
fails with the error value BZ_DATA_ERROR_MAGIC.[3]
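
As a rough illustration of that failure mode (a minimal sketch with placeholder
buffer names, not code from this series):

    #include <bzlib.h>
    #include <stddef.h>

    /* Feeding BZ2_bzDecompress() input that does not start at a chunk/block
     * boundary fails, because bzlib expects the "BZh" magic first. */
    static int try_decompress_mid_block(char *mid_block_data, size_t in_len,
                                        char *out_buf, size_t out_len)
    {
        bz_stream strm = { 0 };
        int ret = BZ2_bzDecompressInit(&strm, 0 /* verbosity */, 0 /* small */);
        if (ret != BZ_OK) {
            return ret;
        }
        strm.next_in = mid_block_data;
        strm.avail_in = in_len;
        strm.next_out = out_buf;
        strm.avail_out = out_len;
        ret = BZ2_bzDecompress(&strm);   /* returns BZ_DATA_ERROR_MAGIC here */
        BZ2_bzDecompressEnd(&strm);
        return ret;
    }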

[1] https://en.wikipedia.org/wiki/Bzip2#File_format
[2] https://blastedbio.blogspot.in/2011/11/random-access-to-bzip2.html
[3] http://linux.math.tifr.res.in/manuals/html/manual_3.html#SEC17

Special thanks to Peter Wu for helping me understand and tackle the bz2
compressed chunks.

Changes in v2:
- limit the buffer size to 2MB after fixing the buffering problems (john/fam)

Ashijeet Acharya (7):
  dmg: Introduce a new struct to cache random access points
  dmg: New function to help us cache random access point
  dmg: Refactor and prepare dmg_read_chunk() to cache random access
    points
  dmg: Handle zlib compressed chunks
  dmg: Handle bz2 compressed/raw/zeroed chunks
  dmg: Refactor dmg_co_preadv() to start reading multiple sectors
  dmg: Limit the output buffer size to a max of 2MB

 block/dmg.c | 214 +++++++++++++++++++++++++++++++++++++++---------------------
 block/dmg.h |  10 +++
 2 files changed, 148 insertions(+), 76 deletions(-)

-- 
2.6.2


Re: [Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Ashijeet Acharya 6 years, 11 months ago
On Thu, Apr 27, 2017 at 1:36 PM, Ashijeet Acharya
<ashijeetacharya@gmail.com> wrote:
> Previously posted series patches:
> v1: http://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg04641.html
>
> This series helps to provide chunk size independence for DMG driver to prevent
> denial-of-service in cases where untrusted files are being accessed by the user.
>
> This task is mentioned on the public block ToDo
> Here -> http://wiki.qemu.org/ToDo/Block/DmgChunkSizeIndependence
>
> Patch 1 introduces a new data structure to aid caching of random access points
> within a compressed stream.
>
> Patch 2 is an extension of patch 1 and introduces a new function to
> initialize/update/reset our cached random access point.
>
> Patch 3 limits the output buffer size to a max of 2MB to avoid QEMU allocate
> huge amounts of memory.
>
> Patch 4 is a simple preparatory patch to aid handling of various types of chunks.
>
> Patches 5 & 6 help to handle various types of chunks.
>
> Patch 7 simply refactors dmg_co_preadv() to read multiple sectors at once.
>
> Patch 8 finally removes the error messages QEMU used to throw when an image with
> chunk sizes above 64MB were accessed by the user.

John, I have actually squashed patches 3 and 8 (as in v1), and that
change is represented in patch 7 (as in v2). The cover letter here is
quite misleading, as I forgot to update it and simply did a careless
ctrl-c, ctrl-v.

Ashijeet

> Ashijeet Acharya (7):
>   dmg: Introduce a new struct to cache random access points
>   dmg: New function to help us cache random access point
>   dmg: Refactor and prepare dmg_read_chunk() to cache random access
>     points
>   dmg: Handle zlib compressed chunks
>   dmg: Handle bz2 compressed/raw/zeroed chunks
>   dmg: Refactor dmg_co_preadv() to start reading multiple sectors
>   dmg: Limit the output buffer size to a max of 2MB
>
>  block/dmg.c | 214 +++++++++++++++++++++++++++++++++++++++---------------------
>  block/dmg.h |  10 +++
>  2 files changed, 148 insertions(+), 76 deletions(-)
>
> --
> 2.6.2
>

Re: [Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Stefan Hajnoczi 6 years, 11 months ago
On Thu, Apr 27, 2017 at 01:36:30PM +0530, Ashijeet Acharya wrote:
> ->Testing procedure:
> Convert a DMG file to raw format using the "qemu-img convert" tool present in
> v2.9.0
> Next convert the same image again after applying these patches.
> Compare the two images using "qemu-img compare" tool to check if they are
> identical.
> 
> You can pickup any DMG image from the collection present
> Here -> https://lists.gnu.org/archive/html/qemu-devel/2014-12/msg03606.html

Please add a qemu-iotests test case:
http://wiki.qemu.org/Testing/QemuIoTests

Image files can be included in tests/qemu-iotests/sample_images/.  They
must be small.
Re: [Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Stefan Hajnoczi 6 years, 11 months ago
On Thu, Apr 27, 2017 at 01:36:30PM +0530, Ashijeet Acharya wrote:
> This series helps to provide chunk size independence for DMG driver to prevent
> denial-of-service in cases where untrusted files are being accessed by the user.

The core of the chunk size dependence problem lies in these lines:

    s->compressed_chunk = qemu_try_blockalign(bs->file->bs,
                                              ds.max_compressed_size + 1);
    s->uncompressed_chunk = qemu_try_blockalign(bs->file->bs,
                                                512 * ds.max_sectors_per_chunk);

The refactoring needs to eliminate these buffers because their size is
controlled by the untrusted input file.

After applying your patches these lines remain unchanged and we still
cannot use input files that have a 250 MB chunk size, for example.  So
I'm not sure how this series is supposed to work.

Here is the approach I would take:

In order to achieve this, dmg_read_chunk() needs to be scrapped.  It is
designed to read a full chunk.  The new model does not read full chunks
anymore.

Uncompressed reads or zeroes should operate directly on qiov, not
s->uncompressed_chunk.  This code will be dropped:

    data = s->uncompressed_chunk + sector_offset_in_chunk * 512;
    qemu_iovec_from_buf(qiov, i * 512, data, 512);

Compressed reads still need buffers.  I suggest the following buffers:

1. compressed_buf - compressed data is read into this buffer from file
2. uncompressed_buf - a place to discard decompressed data while
                      simulating a seek operation

Data is read from compressed chunks by reading a reasonable amount
(64k?) into compressed_buf.  If the user wishes to read at an offset
into this chunk then a loop decompresses data we are seeking over into
uncompressed_buf (and refills compressed_buf if it becomes empty) until
the desired offset is reached.  Then decompression can continue
directly into the user's qiov and uncompressed_buf isn't used to
decompress the data requested by the user.
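
A rough sketch of that seek loop for the zlib case (refill() and the 256 KB
scratch size are assumptions, not the driver's actual code):

    #include <zlib.h>
    #include <stdint.h>
    #include <errno.h>

    #define SEEK_BUF_SIZE (256 * 1024)   /* assumed size of uncompressed_buf */

    /* Decompress and discard data until the requested offset inside the chunk
     * is reached.  refill() stands in for reading the next piece of compressed
     * data from the image file into compressed_buf. */
    static int skip_into_chunk(z_stream *strm, uint64_t bytes_to_skip,
                               uint8_t *uncompressed_buf,
                               int (*refill)(z_stream *strm, void *opaque),
                               void *opaque)
    {
        while (bytes_to_skip > 0) {
            uInt want;
            int ret;

            if (strm->avail_in == 0) {
                ret = refill(strm, opaque);      /* refill compressed_buf */
                if (ret < 0) {
                    return ret;
                }
            }
            want = bytes_to_skip < SEEK_BUF_SIZE ? bytes_to_skip : SEEK_BUF_SIZE;
            strm->next_out = uncompressed_buf;
            strm->avail_out = want;
            ret = inflate(strm, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END) {
                return -EIO;
            }
            bytes_to_skip -= want - strm->avail_out;
            if (ret == Z_STREAM_END && bytes_to_skip > 0) {
                return -EIO;   /* chunk ended before the offset was reached */
            }
        }
        return 0;   /* offset reached: decompress into the caller's qiov next */
    }

Once the loop finishes, next_out/avail_out can be pointed at the memory
described by the caller's qiov so the requested data is decompressed in place.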

Sequential compressed reads can be optimized by keeping the compression
state across read calls.  That means the zlib/bz2 state plus
compressed_buf and the current offset.  That way we don't need to
re-seek into the current compressed chunk to handle sequential reads.
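
The cached state could be a small struct along these lines (a sketch only; the
field names are assumptions, not the ones used in this series):

    /* Per-chunk decompression state kept across read calls
     * (requires <zlib.h>, <bzlib.h> and <stdint.h>). */
    typedef struct DmgDecompressCache {
        uint32_t chunk;               /* which chunk this state belongs to */
        z_stream zstrm;               /* zlib state (bz2 chunks would keep a
                                       * bz_stream instead) */
        uint8_t *compressed_buf;      /* fixed-size compressed input buffer */
        uint64_t file_offset;         /* next byte of the chunk to read from file */
        uint64_t uncompressed_offset; /* how far into the chunk we have
                                       * decompressed so far */
    } DmgDecompressCache;
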
Re: [Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Ashijeet Acharya 6 years, 8 months ago
On Fri, May 5, 2017 at 7:29 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Thu, Apr 27, 2017 at 01:36:30PM +0530, Ashijeet Acharya wrote:
> > This series helps to provide chunk size independence for DMG driver to
> > prevent denial-of-service in cases where untrusted files are being
> > accessed by the user.
>
> The core of the chunk size dependence problem are these lines:
>
>     s->compressed_chunk = qemu_try_blockalign(bs->file->bs,
>                                               ds.max_compressed_size + 1);
>     s->uncompressed_chunk = qemu_try_blockalign(bs->file->bs,
>                                                 512 * ds.max_sectors_per_chunk);
>
> The refactoring needs to eliminate these buffers because their size is
> controlled by the untrusted input file.
>

Oh okay, I understand now. But wouldn't I still need to allocate some
memory for these buffers to be able to use them for the compressed chunks
case you mentioned below? Instead of letting the DMG images control the
size of these buffers, maybe I can hard-code their size instead?


>
> After applying your patches these lines remain unchanged and we still
> cannot use input files that have a 250 MB chunk size, for example.  So
> I'm not sure how this series is supposed to work.
>
> Here is the approach I would take:
>
> In order to achieve this dmg_read_chunk() needs to be scrapped.  It is
> designed to read a full chunk.  The new model does not read full chunks
> anymore.
>
> Uncompressed reads or zeroes should operate directly on qiov, not
> s->uncompressed_chunk.  This code will be dropped:
>
>     data = s->uncompressed_chunk + sector_offset_in_chunk * 512;
>     qemu_iovec_from_buf(qiov, i * 512, data, 512);
>

I have never worked with qiov before; are there any places inside other
drivers that I can refer to for an idea of how to use it directly (I am
searching by myself in the meantime...)? I clearly got what you are trying
to say, but don't know how to implement it. Come to think of it, don't we
already do that for the zeroed chunks in DMG in dmg_co_preadv()?


>
> Compressed reads still buffers.  I suggest the following buffers:
>
> 1. compressed_buf - compressed data is read into this buffer from file
> 2. uncompressed_buf - a place to discard decompressed data while
>                       simulating a seek operation
>

Yes, these are the buffers whose size I can hard-code as discussed above?
You can suggest the preferred size to me.


> Data is read from compressed chunks by reading a reasonable amount
> (64k?) into compressed_buf.  If the user wishes to read at an offset
> into this chunk then a loop decompresses data we are seeking over into
> uncompressed_buf (and refills compressed_buf if it becomes empty) until
> the desired offset is reached.  Then decompression can continue
> directly into the user's qiov and uncompressed_buf isn't used to
> decompress the data requested by the user.
>

Yes, this series does exactly that but keeps using the "uncompressed"
buffer once we reach the desired offset. Once I understand how to use qiov
directly, we can do this. Also, Kevin did suggest to me (as I remember
vaguely) that in reality we never actually get a read request at a
particular offset, because the DMG driver is generally used with "qemu-img
convert", which means all read requests come sequentially from the top.


>
> Sequential compressed reads can be optimized by keeping the compression
> state across read calls.  That means the zlib/bz2 state plus
> compressed_buf and the current offset.  That way we don't need to
> re-seek into the current compressed chunk to handle sequential reads.
>

I guess that's what I implemented with this series, so now I can reuse the
"caching access point" part in the next series to implement this
optimization.

Thanks
Ashijeet
Re: [Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Stefan Hajnoczi 6 years, 7 months ago
On Sun, Aug 20, 2017 at 1:47 PM, Ashijeet Acharya
<ashijeetacharya@gmail.com> wrote:
> On Fri, May 5, 2017 at 7:29 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>
>> On Thu, Apr 27, 2017 at 01:36:30PM +0530, Ashijeet Acharya wrote:
>> > This series helps to provide chunk size independence for DMG driver to
>> > prevent
>> > denial-of-service in cases where untrusted files are being accessed by
>> > the user.
>>
>> The core of the chunk size dependence problem are these lines:
>>
>>     s->compressed_chunk = qemu_try_blockalign(bs->file->bs,
>>                                               ds.max_compressed_size + 1);
>>     s->uncompressed_chunk = qemu_try_blockalign(bs->file->bs,
>>                                                 512 *
>> ds.max_sectors_per_chunk);
>>
>> The refactoring needs to eliminate these buffers because their size is
>> controlled by the untrusted input file.
>
>
> Oh okay, I understand now. But wouldn't I still need to allocate some memory
> for these buffers to be able to use them for the compressed chunks case you
> mentioned below. Instead of letting the DMG images control the size of these
> buffers, maybe I can hard-code the size of these buffers instead?
>
>>
>>
>> After applying your patches these lines remain unchanged and we still
>> cannot use input files that have a 250 MB chunk size, for example.  So
>> I'm not sure how this series is supposed to work.
>>
>> Here is the approach I would take:
>>
>> In order to achieve this dmg_read_chunk() needs to be scrapped.  It is
>> designed to read a full chunk.  The new model does not read full chunks
>> anymore.
>>
>> Uncompressed reads or zeroes should operate directly on qiov, not
>> s->uncompressed_chunk.  This code will be dropped:
>>
>>     data = s->uncompressed_chunk + sector_offset_in_chunk * 512;
>>     qemu_iovec_from_buf(qiov, i * 512, data, 512);
>
>
> I have never worked with qiov before, are there any places where I can refer
> to inside other drivers to get the idea of how to use it directly (I am
> searching by myself in the meantime...)?

A QEMUIOVector is a utility type for struct iovec iov[] processing.
See util/iov.c.  This is called "vectored" or "scatter-gather" I/O.

Instead of transferring data to/from a single <buffer, length> tuple,
they take an array [<buffer, length>].  For example, the buffer "Hello
world" could be split into two elements:
[{"Hello ", strlen("Hello ")},
 {"world", strlen("world")}]

Vectored I/O is often used because it eliminates memory copies.  Say
you have a network packet header struct and also a data payload array.
Traditionally you would have to allocate a new buffer large enough for
both header and payload, copy the header and payload into the buffer,
and finally give this temporary buffer to the I/O function.  This is
inefficient.  With vectored I/O you can create a vector with two
elements, the header and the payload, and the I/O function can process
them without needing a temporary buffer copy.
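
As a toy illustration of the "Hello world" split above, using the
qemu_iovec_*() helpers from util/iov.c (not code from this series):

    #include "qemu/osdep.h"
    #include "qemu/iov.h"

    static void hello_world_qiov(void)
    {
        QEMUIOVector qiov;
        char hello[] = "Hello ";
        char world[] = "world";
        char out[16] = "";

        qemu_iovec_init(&qiov, 2);                 /* room for two elements */
        qemu_iovec_add(&qiov, hello, strlen(hello));
        qemu_iovec_add(&qiov, world, strlen(world));

        /* Flatten the scattered data back into one buffer: "Hello world" */
        qemu_iovec_to_buf(&qiov, 0, out, qiov.size);

        qemu_iovec_destroy(&qiov);
    }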

> I got clearly what you are trying
> to say, but don't know how to implement it. I think, don't we already do
> that for the zeroed chunks in DMG in dmg_co_preadv()?

Yes, dmg_co_preadv() directly zeroes the qiov.  It doesn't use
s->compressed_chunk/s->uncompressed_chunk.
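
For reference, the zero-chunk path in dmg_co_preadv() looks roughly like this
(paraphrased, not a verbatim quote of block/dmg.c):

    if (s->types[chunk] == 2) {   /* all-zeroes block entry */
        qemu_iovec_memset(qiov, i * 512, 0, 512);
        continue;
    }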

>
>>
>>
>> Compressed reads still buffers.  I suggest the following buffers:
>>
>> 1. compressed_buf - compressed data is read into this buffer from file
>> 2. uncompressed_buf - a place to discard decompressed data while
>>                       simulating a seek operation
>
>
> Yes, these are the buffers whose size I can hard-code as discussed above?
> You can suggest the preferred size to me.

Try starting with 256 KB for both buffers.

>> Data is read from compressed chunks by reading a reasonable amount
>> (64k?) into compressed_buf.  If the user wishes to read at an offset
>> into this chunk then a loop decompresses data we are seeking over into
>> uncompressed_buf (and refills compressed_buf if it becomes empty) until
>> the desired offset is reached.  Then decompression can continue
>> directly into the user's qiov and uncompressed_buf isn't used to
>> decompress the data requested by the user.
>
>
> Yes, this series does exactly that but keeps using the "uncompressed" buffer
> once we reach the desired offset. Once, I understand to use qiov directly,
> we can do this. Also, Kevin did suggest me (as I remember vaguely) that in
> reality we never actually get the read request at a particular offset
> because DMG driver is generally used with "qemu-img convert", which means
> all read requests are from the top.

For performance it's fine to assume a sequential access pattern.  The
block driver should still support random access I/O though.

Re: [Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Ashijeet Acharya 6 years, 7 months ago
On Tue, Aug 29, 2017 at 8:55 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Sun, Aug 20, 2017 at 1:47 PM, Ashijeet Acharya
> <ashijeetacharya@gmail.com> wrote:
> > On Fri, May 5, 2017 at 7:29 PM, Stefan Hajnoczi <stefanha@gmail.com>
> wrote:
> >>
> >> On Thu, Apr 27, 2017 at 01:36:30PM +0530, Ashijeet Acharya wrote:
> >> > This series helps to provide chunk size independence for DMG driver to
> >> > prevent
> >> > denial-of-service in cases where untrusted files are being accessed by
> >> > the user.
> >>
> >> The core of the chunk size dependence problem are these lines:
> >>
> >>     s->compressed_chunk = qemu_try_blockalign(bs->file->bs,
> >>                                               ds.max_compressed_size + 1);
> >>     s->uncompressed_chunk = qemu_try_blockalign(bs->file->bs,
> >>                                                 512 * ds.max_sectors_per_chunk);
> >>
> >> The refactoring needs to eliminate these buffers because their size is
> >> controlled by the untrusted input file.
> >
> >
> > Oh okay, I understand now. But wouldn't I still need to allocate some
> > memory for these buffers to be able to use them for the compressed chunks
> > case you mentioned below. Instead of letting the DMG images control the
> > size of these buffers, maybe I can hard-code the size of these buffers
> > instead?
> >
> >>
> >>
> >> After applying your patches these lines remain unchanged and we still
> >> cannot use input files that have a 250 MB chunk size, for example.  So
> >> I'm not sure how this series is supposed to work.
> >>
> >> Here is the approach I would take:
> >>
> >> In order to achieve this dmg_read_chunk() needs to be scrapped.  It is
> >> designed to read a full chunk.  The new model does not read full chunks
> >> anymore.
> >>
> >> Uncompressed reads or zeroes should operate directly on qiov, not
> >> s->uncompressed_chunk.  This code will be dropped:
> >>
> >>     data = s->uncompressed_chunk + sector_offset_in_chunk * 512;
> >>     qemu_iovec_from_buf(qiov, i * 512, data, 512);
> >
> >
> > I have never worked with qiov before, are there any places where I can
> > refer to inside other drivers to get the idea of how to use it directly (I
> > am searching by myself in the meantime...)?
>
> A QEMUIOVector is a utility type for struct iovec iov[] processing.
> See util/iov.c.  This is called "vectored" or "scatter-gather" I/O.
>
> Instead of transferring data to/from a single <buffer, length> tuple,
> they take an array [<buffer, length>].  For example, the buffer "Hello
> world" could be split into two elements:
> [{"Hello ", strlen("Hello ")},
>  {"world", strlen("world")}]
>
> Vectored I/O is often used because it eliminates memory copies.  Say
> you have a network packet header struct and also a data payload array.
> Traditionally you would have to allocate a new buffer large enough for
> both header and payload, copy the header and payload into the buffer,
> and finally give this temporary buffer to the I/O function.  This is
> inefficient.  With vectored I/O you can create a vector with two
> elements, the header and the payload, and the I/O function can process
> them without needing a temporary buffer copy.
>

Thanks for the detailed explanation; I think I understand the concept now
and how to use qiov efficiently.
Correct me if I am wrong here. In order to use qiov directly for
uncompressed chunks:

1. Declare a new local_qiov inside dmg_co_preadv (let's say)
2. Initialize it with qemu_iovec_init()
3. Reset it with qemu_iovec_reset() (this is because we will perform this
action in a loop and thus need to reset it before every iteration?)
4. Declare a buffer "uncompressed_buf" and allocate it with
qemu_try_blockalign()
5. Add this buffer to our local_qiov using qemu_iovec_add()
6. Read data from file directly into local_qiov using bdrv_co_preadv()
7. On success concatenate it with the qiov passed into the main
dmg_co_preadv() function.

I think this method only works for uncompressed chunks. For the compressed
ones, I believe we will still need to do it in the existing way, i.e. read
the chunk from the file -> decompress into an output buffer -> use
qemu_iovec_from_buf(), because we cannot read directly since the data is in
compressed form. Makes sense?


> > I got clearly what you are trying
> > to say, but don't know how to implement it. I think, don't we already do
> > that for the zeroed chunks in DMG in dmg_co_preadv()?
>
> Yes, dmg_co_preadv() directly zeroes the qiov.  It doesn't use
> s->compressed_chunk/s->uncompressed_chunk.
>
> >
> >>
> >>
> >> Compressed reads still buffers.  I suggest the following buffers:
> >>
> >> 1. compressed_buf - compressed data is read into this buffer from file
> >> 2. uncompressed_buf - a place to discard decompressed data while
> >>                       simulating a seek operation
> >
> >
> > Yes, these are the buffers whose size I can hard-code as discussed above?
> > You can suggest the preferred size to me.
>
> Try starting with 256 KB for both buffers.
>

Okay, I will do that. But I think we cannot use these buffer sizes for bz2
chunks (see below)


> >> Data is read from compressed chunks by reading a reasonable amount
> >> (64k?) into compressed_buf.  If the user wishes to read at an offset
> >> into this chunk then a loop decompresses data we are seeking over into
> >> uncompressed_buf (and refills compressed_buf if it becomes empty) until
> >> the desired offset is reached.  Then decompression can continue
> >> directly into the user's qiov and uncompressed_buf isn't used to
> >> decompress the data requested by the user.
> >
> >
> > Yes, this series does exactly that but keeps using the "uncompressed"
> > buffer once we reach the desired offset. Once, I understand to use qiov
> > directly, we can do this. Also, Kevin did suggest me (as I remember
> > vaguely) that in reality we never actually get the read request at a
> > particular offset because DMG driver is generally used with "qemu-img
> > convert", which means all read requests are from the top.
>
> For performance it's fine to assume a sequential access pattern.  The
> block driver should still support random access I/O though.
>

Yes, I agree. But I don't think we can add random access for the bz2 chunks
because they need to be decompressed as a whole and not in parts. I tried
to explain it in my cover letter as an important note (
https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg05327.html). Due
to the limitation explained there, I guess we cannot use a loop to seek to
the desired offset inside bz2 chunks. Also, since they need to be
decompressed at once, we will ultimately need to allocate large buffers.

If you feel I am right so far, I suggest that we allocate small buffers for
the other cases as discussed above and separately allocate huge buffers
according to the size of the bz2 chunk we are currently reading into.
This can be done because bz2 chunks normally expand to a max size of 46 MB,
which is below our current limit of 64 MB.
See this -> https://en.wikipedia.org/wiki/Bzip2#File_format

Thanks
Ashijeet
Re: [Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Stefan Hajnoczi 6 years, 7 months ago
On Wed, Aug 30, 2017 at 06:32:52PM +0530, Ashijeet Acharya wrote:
> On Tue, Aug 29, 2017 at 8:55 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> 
> > On Sun, Aug 20, 2017 at 1:47 PM, Ashijeet Acharya
> > <ashijeetacharya@gmail.com> wrote:
> > > On Fri, May 5, 2017 at 7:29 PM, Stefan Hajnoczi <stefanha@gmail.com>
> > wrote:
> > >>
> > >> On Thu, Apr 27, 2017 at 01:36:30PM +0530, Ashijeet Acharya wrote:
> > >> > This series helps to provide chunk size independence for DMG driver to
> > >> > prevent
> > >> > denial-of-service in cases where untrusted files are being accessed by
> > >> > the user.
> > >>
> > >> The core of the chunk size dependence problem are these lines:
> > >>
> > >>     s->compressed_chunk = qemu_try_blockalign(bs->file->bs,
> > >>                                               ds.max_compressed_size +
> > 1);
> > >>     s->uncompressed_chunk = qemu_try_blockalign(bs->file->bs,
> > >>                                                 512 *
> > >> ds.max_sectors_per_chunk);
> > >>
> > >> The refactoring needs to eliminate these buffers because their size is
> > >> controlled by the untrusted input file.
> > >
> > >
> > > Oh okay, I understand now. But wouldn't I still need to allocate some
> > memory
> > > for these buffers to be able to use them for the compressed chunks case
> > you
> > > mentioned below. Instead of letting the DMG images control the size of
> > these
> > > buffers, maybe I can hard-code the size of these buffers instead?
> > >
> > >>
> > >>
> > >> After applying your patches these lines remain unchanged and we still
> > >> cannot use input files that have a 250 MB chunk size, for example.  So
> > >> I'm not sure how this series is supposed to work.
> > >>
> > >> Here is the approach I would take:
> > >>
> > >> In order to achieve this dmg_read_chunk() needs to be scrapped.  It is
> > >> designed to read a full chunk.  The new model does not read full chunks
> > >> anymore.
> > >>
> > >> Uncompressed reads or zeroes should operate directly on qiov, not
> > >> s->uncompressed_chunk.  This code will be dropped:
> > >>
> > >>     data = s->uncompressed_chunk + sector_offset_in_chunk * 512;
> > >>     qemu_iovec_from_buf(qiov, i * 512, data, 512);
> > >
> > >
> > > I have never worked with qiov before, are there any places where I can
> > refer
> > > to inside other drivers to get the idea of how to use it directly (I am
> > > searching by myself in the meantime...)?
> >
> > A QEMUIOVector is a utility type for struct iovec iov[] processing.
> > See util/iov.c.  This is called "vectored" or "scatter-gather" I/O.
> >
> > Instead of transferring data to/from a single <buffer, length> tuple,
> > they take an array [<buffer, length>].  For example, the buffer "Hello
> > world" could be split into two elements:
> > [{"Hello ", strlen("Hello ")},
> >  {"world", strlen("world")}]
> >
> > Vectored I/O is often used because it eliminates memory copies.  Say
> > you have a network packet header struct and also a data payload array.
> > Traditionally you would have to allocate a new buffer large enough for
> > both header and payload, copy the header and payload into the buffer,
> > and finally give this temporary buffer to the I/O function.  This is
> > inefficient.  With vectored I/O you can create a vector with two
> > elements, the header and the payload, and the I/O function can process
> > them without needing a temporary buffer copy.
> >
> 
> Thanks for the detailed explanation, I think I understood the concept now
> and how to use qiov efficiently.
> Correct me if I am wrong here. In order to use qiov directly for
> uncompressed chunks:
> 
> 1. Declare a new local_qiov inside dmg_co_preadv (let's say)

No, the operation should use qiov directly if the chunk is uncompressed.

A temporary buffer is only needed if the data needs to be decompressed.
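
For the simplest case, where the whole request lies inside one raw (type 1)
chunk, that could look roughly like this (offset_in_chunk and bytes are
placeholder locals; this is only a sketch, not the final code):

    /* Read straight from the image file into the caller's qiov, no bounce
     * buffer.  Assumes bytes == qiov->size, i.e. the whole request is
     * served from this raw chunk. */
    if (s->types[chunk] == 1) {   /* raw/uncompressed chunk */
        ret = bdrv_co_preadv(bs->file, s->offsets[chunk] + offset_in_chunk,
                             bytes, qiov, 0);
        if (ret < 0) {
            return ret;
        }
    }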

> 2. Initialize it with qemu_iovec_init()
> 3. Reset it with qemu_iovec_reset() (this is because we will perform this
> action in a loop and thus need to reset it before every loop?)
> 4. Declare a buffer "uncompressed_buf" and allocate it with
> qemu_try_blockalign()
> 5. Add this buffer to our local_qiov using qemu_iovec_add()
> 6. Read data from file directly into local_qiov using bdrv_co_preadv()
> 7. On success concatenate it with the qiov passed into the main
> dmg_co_preadv() function.
> 
> I think this method only works for uncompressed chunks. For the compressed
> ones, I believe we will still need to do it in the existing way, i.e. read
> chunk from file -> decompress into output buffer -> use
> qemu_iovec_from_buf() because we cannot read directly since data is in
> compressed form. Makes sense?
> 
> 
> > > I got clearly what you are trying
> > > to say, but don't know how to implement it. I think, don't we already do
> > > that for the zeroed chunks in DMG in dmg_co_preadv()?
> >
> > Yes, dmg_co_preadv() directly zeroes the qiov.  It doesn't use
> > s->compressed_chunk/s->uncompressed_chunk.
> >
> > >
> > >>
> > >>
> > >> Compressed reads still buffers.  I suggest the following buffers:
> > >>
> > >> 1. compressed_buf - compressed data is read into this buffer from file
> > >> 2. uncompressed_buf - a place to discard decompressed data while
> > >>                       simulating a seek operation
> > >
> > >
> > > Yes, these are the buffers whose size I can hard-code as discussed above?
> > > You can suggest the preferred size to me.
> >
> > Try starting with 256 KB for both buffers.
> >
> 
> Okay, I will do that. But I think we cannot use these buffer sizes for bz2
> chunks (see below)
> 
> 
> > >> Data is read from compressed chunks by reading a reasonable amount
> > >> (64k?) into compressed_buf.  If the user wishes to read at an offset
> > >> into this chunk then a loop decompresses data we are seeking over into
> > >> uncompressed_buf (and refills compressed_buf if it becomes empty) until
> > >> the desired offset is reached.  Then decompression can continue
> > >> directly into the user's qiov and uncompressed_buf isn't used to
> > >> decompress the data requested by the user.
> > >
> > >
> > > Yes, this series does exactly that but keeps using the "uncompressed"
> > buffer
> > > once we reach the desired offset. Once, I understand to use qiov
> > directly,
> > > we can do this. Also, Kevin did suggest me (as I remember vaguely) that
> > in
> > > reality we never actually get the read request at a particular offset
> > > because DMG driver is generally used with "qemu-img convert", which means
> > > all read requests are from the top.
> >
> > For performance it's fine to assume a sequential access pattern.  The
> > block driver should still support random access I/O though.
> >
> 
> Yes, I agree. But I don't think we can add random access for the bz2 chunks
> because they need to be decompressed as a whole and not in parts. I tried
> to explain it in my cover letter as an important note (
> https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg05327.html).

Here is what you said:

  "bz2 compressed streams do not allow random access midway through a
  chunk/block as the BZ2_bzDecompress() API in bzlib seeks for the
  magic key "BZh" before starting decompression.[2] This magic key is
  present at the start of every chunk/block only and since our cached
  random access points need not necessarily point to the start of a
  chunk/block, BZ2_bzDecompress() fails with an error value
  BZ_DATA_ERROR_MAGIC"

The block driver can simulate random access.  Take a look at the
BZ2_bzDecompress() API docs:

  "You may provide and remove as little or as much data as you like on
  each call of BZ2_bzDecompress. In the limit, it is acceptable to
  supply and remove data one byte at a time, although this would be
  terribly inefficient. You should always ensure that at least one byte
  of output space is available at each call."

In other words, bz2 supports streaming.  Therefore you can use the
buffer sizes I suggested in a loop instead of reading the whole bz2
block upfront.

If you keep the bz_stream between dmg_uncompress_bz2_do() calls then
sequential reads can be optimized.  They do not need to reread from the
beginning of the bz2 block.  That's important because sequential I/O is
the main access pattern expected by this block driver.
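
A rough sketch of that streaming loop (fill_input() and the 256 KB input
buffer size are placeholders, only meant to show the bzlib calls):

    #include <bzlib.h>
    #include <stddef.h>

    #define BZ_IN_BUF_SIZE (256 * 1024)   /* assumed compressed_buf size */

    /* Produce up to out_len decompressed bytes from a persistent bz_stream,
     * refilling the compressed input buffer in fixed-size pieces. */
    static int bz2_stream_some(bz_stream *strm, char *compressed_buf,
                               int (*fill_input)(char *buf, size_t len,
                                                 void *opaque),
                               void *opaque, char *out, size_t out_len)
    {
        strm->next_out = out;
        strm->avail_out = out_len;

        while (strm->avail_out > 0) {
            if (strm->avail_in == 0) {
                int n = fill_input(compressed_buf, BZ_IN_BUF_SIZE, opaque);
                if (n <= 0) {
                    return -1;             /* read error or unexpected EOF */
                }
                strm->next_in = compressed_buf;
                strm->avail_in = n;
            }
            int ret = BZ2_bzDecompress(strm);
            if (ret == BZ_STREAM_END) {
                break;                     /* end of this bz2 chunk/block */
            }
            if (ret != BZ_OK) {
                return -1;
            }
        }
        return out_len - strm->avail_out;  /* bytes produced this call */
    }

Keeping strm, and any unconsumed part of compressed_buf, in the driver state
between calls gives the sequential-read optimization described above.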

Stefan

Re: [Qemu-devel] [PATCH v2 0/7] Refactor DMG driver to have chunk size independence
Posted by Ashijeet Acharya 6 years, 7 months ago
On Tue, Sep 5, 2017 at 4:28 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Wed, Aug 30, 2017 at 06:32:52PM +0530, Ashijeet Acharya wrote:
> > On Tue, Aug 29, 2017 at 8:55 PM, Stefan Hajnoczi <stefanha@gmail.com>
> wrote:
> >
> > > On Sun, Aug 20, 2017 at 1:47 PM, Ashijeet Acharya
> > > <ashijeetacharya@gmail.com> wrote:
> > > > On Fri, May 5, 2017 at 7:29 PM, Stefan Hajnoczi <stefanha@gmail.com>
> > > wrote:
> > > >>
> > > >> On Thu, Apr 27, 2017 at 01:36:30PM +0530, Ashijeet Acharya wrote:
> > > >> > This series helps to provide chunk size independence for DMG
> driver to
> > > >> > prevent
> > > >> > denial-of-service in cases where untrusted files are being
> accessed by
> > > >> > the user.
> > > >>
> > > >> The core of the chunk size dependence problem are these lines:
> > > >>
> > > >>     s->compressed_chunk = qemu_try_blockalign(bs->file->bs,
> > > >>                                               ds.max_compressed_size + 1);
> > > >>     s->uncompressed_chunk = qemu_try_blockalign(bs->file->bs,
> > > >>                                                 512 * ds.max_sectors_per_chunk);
> > > >>
> > > >> The refactoring needs to eliminate these buffers because their size is
> > > >> controlled by the untrusted input file.
> > > >
> > > >
> > > > Oh okay, I understand now. But wouldn't I still need to allocate some
> > > memory
> > > > for these buffers to be able to use them for the compressed chunks
> case
> > > you
> > > > mentioned below. Instead of letting the DMG images control the size
> of
> > > these
> > > > buffers, maybe I can hard-code the size of these buffers instead?
> > > >
> > > >>
> > > >>
> > > >> After applying your patches these lines remain unchanged and we
> still
> > > >> cannot use input files that have a 250 MB chunk size, for example.
> So
> > > >> I'm not sure how this series is supposed to work.
> > > >>
> > > >> Here is the approach I would take:
> > > >>
> > > >> In order to achieve this dmg_read_chunk() needs to be scrapped.  It
> is
> > > >> designed to read a full chunk.  The new model does not read full
> chunks
> > > >> anymore.
> > > >>
> > > >> Uncompressed reads or zeroes should operate directly on qiov, not
> > > >> s->uncompressed_chunk.  This code will be dropped:
> > > >>
> > > >>     data = s->uncompressed_chunk + sector_offset_in_chunk * 512;
> > > >>     qemu_iovec_from_buf(qiov, i * 512, data, 512);
> > > >
> > > >
> > > > I have never worked with qiov before, are there any places where I
> can
> > > refer
> > > > to inside other drivers to get the idea of how to use it directly (I
> am
> > > > searching by myself in the meantime...)?
> > >
> > > A QEMUIOVector is a utility type for struct iovec iov[] processing.
> > > See util/iov.c.  This is called "vectored" or "scatter-gather" I/O.
> > >
> > > Instead of transferring data to/from a single <buffer, length> tuple,
> > > they take an array [<buffer, length>].  For example, the buffer "Hello
> > > world" could be split into two elements:
> > > [{"Hello ", strlen("Hello ")},
> > >  {"world", strlen("world")}]
> > >
> > > Vectored I/O is often used because it eliminates memory copies.  Say
> > > you have a network packet header struct and also a data payload array.
> > > Traditionally you would have to allocate a new buffer large enough for
> > > both header and payload, copy the header and payload into the buffer,
> > > and finally give this temporary buffer to the I/O function.  This is
> > > inefficient.  With vectored I/O you can create a vector with two
> > > elements, the header and the payload, and the I/O function can process
> > > them without needing a temporary buffer copy.
> > >
> >
> > Thanks for the detailed explanation, I think I understood the concept now
> > and how to use qiov efficiently.
> > Correct me if I am wrong here. In order to use qiov directly for
> > uncompressed chunks:
> >
> > 1. Declare a new local_qiov inside dmg_co_preadv (let's say)
>
> No, the operation should use qiov directly if the chunk is uncompressed.
>
> A temporary buffer is only needed if the data needs to be uncompressed.
>

Yes, I had a chat with John and he cleared most of my doubts on how to
approach it correctly now without using a temporary buffer.


>
> > 2. Initialize it with qemu_iovec_init()
> > 3. Reset it with qemu_iovec_reset() (this is because we will perform this
> > action in a loop and thus need to reset it before every loop?)
> > 4. Declare a buffer "uncompressed_buf" and allocate it with
> > qemu_try_blockalign()
> > 5. Add this buffer to our local_qiov using qemu_iovec_add()
> > 6. Read data from file directly into local_qiov using bdrv_co_preadv()
> > 7. On success concatenate it with the qiov passed into the main
> > dmg_co_preadv() function.
> >
> > I think this method only works for uncompressed chunks. For the
> compressed
> > ones, I believe we will still need to do it in the existing way, i.e.
> read
> > chunk from file -> decompress into output buffer -> use
> > qemu_iovec_from_buf() because we cannot read directly since data is in
> > compressed form. Makes sense?
> >
> >
> > > > I got clearly what you are trying
> > > > to say, but don't know how to implement it. I think, don't we
> already do
> > > > that for the zeroed chunks in DMG in dmg_co_preadv()?
> > >
> > > Yes, dmg_co_preadv() directly zeroes the qiov.  It doesn't use
> > > s->compressed_chunk/s->uncompressed_chunk.
> > >
> > > >
> > > >>
> > > >>
> > > >> Compressed reads still buffers.  I suggest the following buffers:
> > > >>
> > > >> 1. compressed_buf - compressed data is read into this buffer from
> file
> > > >> 2. uncompressed_buf - a place to discard decompressed data while
> > > >>                       simulating a seek operation
> > > >
> > > >
> > > > Yes, these are the buffers whose size I can hard-code as discussed
> above?
> > > > You can suggest the preferred size to me.
> > >
> > > Try starting with 256 KB for both buffers.
> > >
> >
> > Okay, I will do that. But I think we cannot use these buffer sizes for
> bz2
> > chunks (see below)
> >
> >
> > > >> Data is read from compressed chunks by reading a reasonable amount
> > > >> (64k?) into compressed_buf.  If the user wishes to read at an offset
> > > >> into this chunk then a loop decompresses data we are seeking over
> into
> > > >> uncompressed_buf (and refills compressed_buf if it becomes empty)
> until
> > > >> the desired offset is reached.  Then decompression can continue
> > > >> directly into the user's qiov and uncompressed_buf isn't used to
> > > >> decompress the data requested by the user.
> > > >
> > > >
> > > > Yes, this series does exactly that but keeps using the "uncompressed"
> > > buffer
> > > > once we reach the desired offset. Once, I understand to use qiov
> > > directly,
> > > > we can do this. Also, Kevin did suggest me (as I remember vaguely)
> that
> > > in
> > > > reality we never actually get the read request at a particular offset
> > > > because DMG driver is generally used with "qemu-img convert", which
> means
> > > > all read requests are from the top.
> > >
> > > For performance it's fine to assume a sequential access pattern.  The
> > > block driver should still support random access I/O though.
> > >
> >
> > Yes, I agree. But I don't think we can add random access for the bz2
> chunks
> > because they need to be decompressed as a whole and not in parts. I tried
> > to explain it in my cover letter as an important note (
> > https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg05327.html).
>
> Here is what you said:
>
>   "bz2 compressed streams do not allow random access midway through a
>   chunk/block as the BZ2_bzDecompress() API in bzlib seeks for the
>   magic key "BZh" before starting decompression.[2] This magic key is
>   present at the start of every chunk/block only and since our cached
>   random access points need not necessarily point to the start of a
>   chunk/block, BZ2_bzDecompress() fails with an error value
>   BZ_DATA_ERROR_MAGIC"
>
> The block driver can simulate random access.  Take a look at the
> BZ2_bzDecompress() API docs:
>
>   "You may provide and remove as little or as much data as you like on
>   each call of BZ2_bzDecompress. In the limit, it is acceptable to
>   supply and remove data one byte at a time, although this would be
>   terribly inefficient. You should always ensure that at least one byte
>   of output space is available at each call.
>
> In other words, bz2 supports streaming.  Therefore you can use the
> buffer sizes I suggested in a loop instead of reading the whole bz2
> block upfront.
>
> If you keep the bz_stream between dmg_uncompress_bz2_do() calls then
> sequential reads can be optimized.  They do not need to reread from the
> beginning of the bz2 block.  That's important because sequential I/O is
> the main access pattern expected by this block driver.
>

Yes, I am trying to implement it exactly like that. It failed the last time
I tried it in v2, but maybe I did something wrong, because the docs say
otherwise. As far as the optimization is concerned, I am caching the access
points to resume reading from there on the next call, so that should work.

Ashijeet

>
> Stefan
>