[PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel

Marek Marczykowski-Górecki posted 1 patch 2 months, 2 weeks ago
Patches applied successfully
git fetch https://gitlab.com/xen-project/patchew/xen tags/patchew/20241008213225.728922-1-marmarek@invisiblethingslab.com
[PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Marek Marczykowski-Górecki 2 months, 2 weeks ago
Linux 6.12-rc2 fails to decompress with the current 128MiB limit, contrary
to the code comment. It results in a failure like this:

    domainbuilder: detail: xc_dom_kernel_file: filename="/var/lib/qubes/vm-kernels/6.12-rc2-1.1.fc37/vmlinuz"
    domainbuilder: detail: xc_dom_malloc_filemap    : 12104 kB
    domainbuilder: detail: xc_dom_module_file: filename="/var/lib/qubes/vm-kernels/6.12-rc2-1.1.fc37/initramfs"
    domainbuilder: detail: xc_dom_malloc_filemap    : 7711 kB
    domainbuilder: detail: xc_dom_boot_xen_init: ver 4.19, caps xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
    domainbuilder: detail: xc_dom_parse_image: called
    domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
    domainbuilder: detail: loader probe failed
    domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
    domainbuilder: detail: loader probe failed
    domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
    domainbuilder: detail: _xc_try_lzma_decode: XZ decompression error: Memory usage limit reached
    xc: error: panic: xg_dom_bzimageloader.c:761: xc_dom_probe_bzimage_kernel unable to XZ decompress kernel: Invalid kernel
    domainbuilder: detail: loader probe failed
    domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ...
    domainbuilder: detail: loader probe failed
    xc: error: panic: xg_dom_core.c:689: xc_dom_find_loader: no loader found: Invalid kernel
    libxl: error: libxl_dom.c:566:libxl__build_dom: xc_dom_parse_image failed

The important part: XZ decompression error: Memory usage limit reached

This looks to be related to the following change in Linux:
8653c909922743bceb4800e5cc26087208c9e0e6 ("xz: use 128 MiB dictionary and force single-threaded mode")

Fix this by increasing the block size to 256MiB, and remove the misleading
comment (for lack of a better idea).

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 tools/libs/guest/xg_dom_bzimageloader.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tools/libs/guest/xg_dom_bzimageloader.c b/tools/libs/guest/xg_dom_bzimageloader.c
index c6ee6d83e7c6..1fb4e5a1f728 100644
--- a/tools/libs/guest/xg_dom_bzimageloader.c
+++ b/tools/libs/guest/xg_dom_bzimageloader.c
@@ -272,8 +272,7 @@ static int _xc_try_lzma_decode(
     return retval;
 }
 
-/* 128 Mb is the minimum size (half-way) documented to work for all inputs. */
-#define LZMA_BLOCK_SIZE (128*1024*1024)
+#define LZMA_BLOCK_SIZE (256*1024*1024)
 
 static int xc_try_xz_decode(
     struct xc_dom_image *dom, void **blob, size_t *size)
-- 
2.46.0
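
For context: despite its name, LZMA_BLOCK_SIZE is not a block size at all; it is passed to liblzma as the decoder's memory usage limit. A minimal, self-contained sketch of that pattern follows (not the exact xg code; xz_decode() and the flat in/out buffers are illustrative):

    #include <lzma.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LZMA_BLOCK_SIZE ((uint64_t)256 * 1024 * 1024)

    static int xz_decode(const uint8_t *in, size_t in_size,
                         uint8_t *out, size_t out_size)
    {
        lzma_stream strm = LZMA_STREAM_INIT;
        lzma_ret ret;

        /* The second argument is "memlimit": bytes the decoder may use. */
        if ( lzma_stream_decoder(&strm, LZMA_BLOCK_SIZE, 0) != LZMA_OK )
            return -1;

        strm.next_in = in;
        strm.avail_in = in_size;
        strm.next_out = out;
        strm.avail_out = out_size;

        /* With the old 128MiB limit, a 128MiB dictionary already tripped
         * this call, as the decoder needs the dictionary plus overhead. */
        ret = lzma_code(&strm, LZMA_FINISH);
        lzma_end(&strm);

        if ( ret == LZMA_MEMLIMIT_ERROR )
            fprintf(stderr, "XZ decompression error: Memory usage limit reached\n");

        return ret == LZMA_STREAM_END ? 0 : -1;
    }

The failure in the domain builder log above corresponds to LZMA_MEMLIMIT_ERROR from this decoder setup.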


Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Roger Pau Monné 3 days, 12 hours ago
On Tue, Oct 08, 2024 at 11:32:23PM +0200, Marek Marczykowski-Górecki wrote:
> Linux 6.12-rc2 fails to decompress with the current 128MiB limit, contrary
> to the code comment. It results in a failure like this:
> 
>     [...]
> The important part: XZ decompression error: Memory usage limit reached
> 
> This looks to be related to the following change in Linux:
> 8653c909922743bceb4800e5cc26087208c9e0e6 ("xz: use 128 MiB dictionary and force single-threaded mode")
> 
> Fix this by increasing the block size to 256MiB, and remove the misleading
> comment (for lack of a better idea).
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I assumed I had already RB'd this, but it seems not.

Could we get an Ack from the tools or libs maintainer for this to go
in?  It's not the best solution, but we need to get this sorted in
time for 4.20, and backported to stable branches.

Thanks, Roger.

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Anthony PERARD 2 days, 15 hours ago
On Wed, Dec 18, 2024 at 06:58:25PM +0100, Roger Pau Monné wrote:
> [...]
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> I assumed I had already RB'd this, but it seems not.
> 
> Could we get an Ack from the tools or libs maintainer for this to go
> in?  It's not the best solution, but we need to get this sorted in
> time for 4.20, and backported to stable branches.

Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Andrew Cooper 3 days, 11 hours ago
On 18/12/2024 5:58 pm, Roger Pau Monné wrote:
> On Tue, Oct 08, 2024 at 11:32:23PM +0200, Marek Marczykowski-Górecki wrote:
>> Linux 6.12-rc2 fails to decompress with the current 128MiB limit, contrary
>> to the code comment. It results in a failure like this:
>>
>>     [...]
>> The important part: XZ decompression error: Memory usage limit reached
>>
>> This looks to be related to the following change in Linux:
>> 8653c909922743bceb4800e5cc26087208c9e0e6 ("xz: use 128 MiB dictionary and force single-threaded mode")
>>
>> Fix this by increasing the block size to 256MiB, and remove the misleading
>> comment (for lack of a better idea).
>>
>> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
> I assumed I had already RB'd this, but it seems not.
>
> Could we get an Ack from the tools or libs maintainer for this to go
> in?  It's not the best solution, but we need to get this sorted in
> time for 4.20, and backported to stable branches.

I agree.  This isn't great, but it's far better than doing nothing, and
no other viable alternative has been proposed.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

~Andrew

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Jan Beulich 2 months, 1 week ago
On 08.10.2024 23:32, Marek Marczykowski-Górecki wrote:
> --- a/tools/libs/guest/xg_dom_bzimageloader.c
> +++ b/tools/libs/guest/xg_dom_bzimageloader.c
> @@ -272,8 +272,7 @@ static int _xc_try_lzma_decode(
>      return retval;
>  }
>  
> -/* 128 Mb is the minimum size (half-way) documented to work for all inputs. */
> -#define LZMA_BLOCK_SIZE (128*1024*1024)
> +#define LZMA_BLOCK_SIZE (256*1024*1024)

That's as arbitrary as before, just now without even a comment hinting at
it being arbitrary. Quoting from one of the LZMA API headers:

	 * Decoder already supports dictionaries up to 4 GiB - 1 B (i.e.
	 * UINT32_MAX), so increasing the maximum dictionary size of the
	 * encoder won't cause problems for old decoders.

IOW - what if the Linux folks decided to increase the dictionary size
further? I therefore wonder whether we don't need to make this more
dynamic, perhaps by peeking into the header to obtain the dictionary
size used. The one thing I'm not sure about is whether there can't be
multiple such headers throughout the file, and hence (in principle)
differing dictionary sizes.

Jan
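
For reference, a sketch of the "peek into the header" idea using only public liblzma calls; xz_first_block_memusage() is an illustrative name, and, matching the caveat above, it inspects the first block header only, while a .xz file may contain several blocks with differing filter settings:

    #include <lzma.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Bytes needed to decode the first block, or UINT64_MAX on failure. */
    static uint64_t xz_first_block_memusage(const uint8_t *in, size_t in_size)
    {
        lzma_stream_flags sf;
        lzma_block block;
        lzma_filter filters[LZMA_FILTERS_MAX + 1];
        uint64_t mem;

        if ( in_size < LZMA_STREAM_HEADER_SIZE + 1 )
            return UINT64_MAX;

        /* 12-byte stream header: magic, stream flags, CRC32. */
        if ( lzma_stream_header_decode(&sf, in) != LZMA_OK )
            return UINT64_MAX;

        memset(&block, 0, sizeof(block));
        block.version = 0;
        block.check = sf.check;
        block.filters = filters;

        /* The byte after the stream header encodes the block header size. */
        block.header_size =
            lzma_block_header_size_decode(in[LZMA_STREAM_HEADER_SIZE]);
        if ( in_size < LZMA_STREAM_HEADER_SIZE + block.header_size )
            return UINT64_MAX;

        if ( lzma_block_header_decode(&block, NULL,
                                      in + LZMA_STREAM_HEADER_SIZE) != LZMA_OK )
            return UINT64_MAX;

        /* Memory (dictionary included) needed to decode this filter chain. */
        mem = lzma_raw_decoder_memusage(filters);

        /* lzma_block_header_decode() allocated the filter options. */
        for ( size_t i = 0; filters[i].id != LZMA_VLI_UNKNOWN; i++ )
            free(filters[i].options);

        return mem;
    }

Whether a value obtained this way can be trusted at all is the question raised next in the thread.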

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Marek Marczykowski-Górecki 2 months, 1 week ago
On Wed, Oct 09, 2024 at 09:19:57AM +0200, Jan Beulich wrote:
> [...]
> 
> That's as arbitrary as before, just now without even a comment hinting at
> it being arbitrary. Quoting from one of the LZMA API headers:
> 
> 	 * Decoder already supports dictionaries up to 4 GiB - 1 B (i.e.
> 	 * UINT32_MAX), so increasing the maximum dictionary size of the
> 	 * encoder won't cause problems for old decoders.
> 
> IOW - what if the Linux folks decided to increase the dictionary size
> further? I therefore wonder whether we don't need to make this more
> dynamic, perhaps by peeking into the header to obtain the dictionary
> size used. The one thing I'm not sure about is whether there can't be
> multiple such headers throughout the file, and hence (in principle)
> differing dictionary sizes.

What is the purpose of this block size limit? From the error message, it
seems to be there to avoid excessive memory usage during decompression
(which could be a DoS via OOM). If that's the case, then taking the limit
from the kernel binary itself will miss this point (especially in the case
of pygrub or similar, but there may be other cases of not-fully-trusted
kernel binaries).

I realize replacing one arbitrary number with another is not really
future-proof, but also the last one lasted for over 10 years, so maybe
it isn't really a big issue.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Jan Beulich 2 months, 1 week ago
On 09.10.2024 11:52, Marek Marczykowski-Górecki wrote:
> On Wed, Oct 09, 2024 at 09:19:57AM +0200, Jan Beulich wrote:
>> [...]
>>
>> That's as arbitrary as before, just now without even a comment hinting at
>> it being arbitrary. Quoting from one of the LZMA API headers:
>>
>> 	 * Decoder already supports dictionaries up to 4 GiB - 1 B (i.e.
>> 	 * UINT32_MAX), so increasing the maximum dictionary size of the
>> 	 * encoder won't cause problems for old decoders.
>>
>> IOW - what if the Linux folks decided to increase the dictionary size
>> further? I therefore wonder whether we don't need to make this more
>> dynamic, perhaps by peeking into the header to obtain the dictionary
>> size used. The one thing I'm not sure about is whether there can't be
>> multiple such headers throughout the file, and hence (in principle)
>> differing dictionary sizes.
> 
> What is the purpose of this block size limit? From the error message, it
> seems to be there to avoid excessive memory usage during decompression
> (which could be a DoS via OOM). If that's the case, then taking the limit
> from the kernel binary itself will miss this point (especially in the case
> of pygrub or similar, but there may be other cases of not-fully-trusted
> kernel binaries).

Indeed. The question then simply is: Where do we want to draw the line
between what we permit and what we reject?

Jan

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Juergen Gross 2 months, 1 week ago
On 09.10.24 12:19, Jan Beulich wrote:
> On 09.10.2024 11:52, Marek Marczykowski-Górecki wrote:
>> [...]
>>
>> What is the purpose of this block size limit? From the error message, it
>> seems to be there to avoid excessive memory usage during decompression
>> (which could be a DoS via OOM). If that's the case, then taking the limit
>> from the kernel binary itself will miss this point (especially in the case
>> of pygrub or similar, but there may be other cases of not-fully-trusted
>> kernel binaries).
> 
> Indeed. The question then simply is: Where do we want to draw the line
> between what we permit and what we reject?

IMHO the most natural solution would be to use guest memory for this purpose.
OTOH this probably would require a significant rework of libxenguest.


Juergen

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Andrew Cooper 2 months, 1 week ago
On 09/10/2024 11:26 am, Juergen Gross wrote:
> On 09.10.24 12:19, Jan Beulich wrote:
>> On 09.10.2024 11:52, Marek Marczykowski-Górecki wrote:
>>> [...]
>>>
>>> What is the purpose of this block size limit? From the error message, it
>>> seems to be there to avoid excessive memory usage during decompression
>>> (which could be a DoS via OOM). If that's the case, then taking the limit
>>> from the kernel binary itself will miss this point (especially in the case
>>> of pygrub or similar, but there may be other cases of not-fully-trusted
>>> kernel binaries).
>>
>> Indeed. The question then simply is: Where do we want to draw the line
>> between what we permit and what we reject?
>
> IMHO the most natural solution would be to use guest memory for this
> purpose.
> OTOH this probably would require a significant rework of libxenguest.


That was XSA-25.  There are toolstack-provided limits on kernel&initrd
sizes.

~Andrew

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Jan Beulich 2 months, 1 week ago
On 09.10.2024 13:08, Andrew Cooper wrote:
> On 09/10/2024 11:26 am, Juergen Gross wrote:
>> On 09.10.24 12:19, Jan Beulich wrote:
>>> [...]
>>>
>>> Indeed. The question then simply is: Where do we want to draw the line
>>> between what we permit and what we reject?
>>
>> IMHO the most natural solution would be to use guest memory for this
>> purpose.
>> OTOH this probably would require a significant rework of libxenguest.
> 
> That was XSA-25.  There are toolstack-provided limits on kernel&initrd
> sizes.

Which probably can't be directly applied to the dictionary size used
during (de)compression.

Jan

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Jürgen Groß 2 months, 1 week ago
On 09.10.24 13:15, Jan Beulich wrote:
> On 09.10.2024 13:08, Andrew Cooper wrote:
>> On 09/10/2024 11:26 am, Juergen Gross wrote:
>>> On 09.10.24 12:19, Jan Beulich wrote:
>>>> [...]
>>>>
>>>> Indeed. The question then simply is: Where do we want to draw the line
>>>> between what we permit and what we reject?
>>>
>>> IMHO the most natural solution would be to use guest memory for this
>>> purpose.
>>> OTOH this probably would require a significant rework of libxenguest.
>>
>> That was XSA-25.  There are toolstack-provided limits on kernel&initrd
>> sizes.
> 
> Which probably can't be directly applied to the dictionary size used
> during (de)compression.

My point still stands: using GUEST memory for all the decompression work
would avoid all these problems. If the guest memory isn't sufficient, a
decompression by e.g. grub wouldn't work either.


Juergen

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Marek Marczykowski-Górecki 2 months, 1 week ago
On Wed, Oct 09, 2024 at 01:38:32PM +0200, Jürgen Groß wrote:
> [...]
> 
> My point still stands: using GUEST memory for all the decompression work
> would avoid all these problems. If the guest memory isn't sufficient, a
> decompression by e.g. grub wouldn't work either.

Doing that would probably require mapping guest memory to dom0 for this
purpose, and probably quite severe changes to the decompression code
(liblzma?) to actually use that memory instead of the standard heap. I
don't think it's a feasible short-term fix.
Theoretically this could be made configurable (if nothing else, then via
an env variable or even a build-time setting...), but honestly it feels
like overkill.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Roger Pau Monné 5 days, 21 hours ago
Adding Oleksii, as this IMO wants to be a blocker for 4.20.

On Wed, Oct 09, 2024 at 03:03:28PM +0200, Marek Marczykowski-Górecki wrote:
> On Wed, Oct 09, 2024 at 01:38:32PM +0200, Jürgen Groß wrote:
> > [...]
> > 
> > My point still stands: using GUEST memory for all the decompression work
> > would avoid all these problems. If the guest memory isn't sufficient, a
> > decompression by e.g. grub wouldn't work either.
> 
> Doing that would probably require mapping guest memory to dom0 for this
> purpose, and probably quite severe changes to the decompression code
> (liblzma?) to actually use that memory instead of the standard heap. I
> don't think it's a feasible short-term fix.
> Theoretically this could be made configurable (if nothing else, then via
> an env variable or even a build-time setting...), but honestly it feels
> like overkill.

As a compromise that could likely be done in time for the release,
would it be feasible to fetch the dictionary size from the header and
cap it at a certain boundary using max(<header val>, <boundary>)?

Thanks, Roger.
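
For illustration, a minimal sketch of this compromise, with hypothetical names (XZ_MEMLIMIT_BOUNDARY, choose_memlimit); note that, taken literally, max() would keep the larger of the two values, so a cap on an untrusted header-derived value presumably means clamping it downwards as sketched:

    #include <lzma.h>
    #include <stdint.h>

    #define XZ_MEMLIMIT_BOUNDARY ((uint64_t)256 * 1024 * 1024) /* hypothetical */

    /* Honour the header-derived requirement, but never beyond the boundary. */
    static uint64_t choose_memlimit(uint64_t from_header)
    {
        return from_header < XZ_MEMLIMIT_BOUNDARY ? from_header
                                                  : XZ_MEMLIMIT_BOUNDARY;
    }

    static lzma_ret init_decoder(lzma_stream *strm, uint64_t header_memusage)
    {
        /* An image demanding more than the boundary still fails with
         * LZMA_MEMLIMIT_ERROR, exactly as with the fixed constant. */
        return lzma_stream_decoder(strm, choose_memlimit(header_memusage), 0);
    }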

Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
Posted by Marek Marczykowski-Górecki 5 days, 18 hours ago
On Mon, Dec 16, 2024 at 09:35:37AM +0100, Roger Pau Monné wrote:
> Adding Oleksii, as this IMO wants to be a blocker for 4.20.
> 
> On Wed, Oct 09, 2024 at 03:03:28PM +0200, Marek Marczykowski-Górecki wrote:
> > [...]
> > 
> > Doing that would probably require mapping guest memory to dom0 for this
> > purpose, and probably quite severe changes to the decompression code
> > (liblzma?) to actually use that memory instead of the standard heap. I
> > don't think it's a feasible short-term fix.
> > Theoretically this could be made configurable (if nothing else, then via
> > an env variable or even a build-time setting...), but honestly it feels
> > like overkill.
> 
> As a compromise that could likely be done in time for the release,
> would it be feasible to fetch the dictionary size from the header and
> cap it at a certain boundary using max(<header val>, <boundary>)?

Isn't the current constant more or less that already? It's named
LZMA_BLOCK_SIZE, but the lzma_stream_decoder() argument it's used for is
"memlimit", described as "Memory usage limit as bytes".

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
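
Marek's reading matches the liblzma documentation: the second argument of lzma_stream_decoder() is a memory usage limit in bytes, and exceeding it is what lzma_code() reports as LZMA_MEMLIMIT_ERROR. A final hedged sketch, assuming a hypothetical HARD_CAP policy: on that error, lzma_memusage() reports what the decoder actually wanted, and lzma_memlimit_set() can raise the limit so decoding continues:

    #include <lzma.h>
    #include <stdint.h>

    #define HARD_CAP ((uint64_t)256 * 1024 * 1024)    /* hypothetical policy */

    static lzma_ret code_with_soft_limit(lzma_stream *strm)
    {
        lzma_ret ret = lzma_code(strm, LZMA_FINISH);

        if ( ret == LZMA_MEMLIMIT_ERROR )
        {
            /* How much memory the decoder actually asked for. */
            uint64_t needed = lzma_memusage(strm);

            if ( needed != 0 && needed <= HARD_CAP &&
                 lzma_memlimit_set(strm, needed) == LZMA_OK )
                ret = lzma_code(strm, LZMA_FINISH);   /* resume decoding */
        }

        return ret;
    }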