This @size parameter is often itself read from the file descriptor. We'd
better check it before doing the read and the allocation.

1G was chosen as a high enough empirical bound.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---
hw/core/loader.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/hw/core/loader.c b/hw/core/loader.c
index 4dd5a71fb7..4b67543046 100644
--- a/hw/core/loader.c
+++ b/hw/core/loader.c
@@ -281,11 +281,26 @@ ssize_t load_aout(const char *filename, hwaddr addr, int max_sz,
/* ELF loader */
+#define ELF_LOAD_MAX (1024 * 1024 * 1024)
+
static void *load_at(int fd, off_t offset, size_t size)
{
void *ptr;
- if (lseek(fd, offset, SEEK_SET) < 0)
+
+ /*
+ * We often come here with a @size that was itself read from the file
+ * descriptor, and it's not good to read and allocate an unchecked
+ * number of bytes. Coverity doesn't like it either and reports problems.
+ * So let's at least limit all load_at() calls to ELF_LOAD_MAX.
+ */
+ if (size > ELF_LOAD_MAX) {
return NULL;
+ }
+
+ if (lseek(fd, offset, SEEK_SET) < 0) {
+ return NULL;
+ }
+
ptr = g_malloc(size);
if (read(fd, ptr, size) != size) {
g_free(ptr);
--
2.34.1
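For readers who want to try the pattern outside the QEMU tree, here is a self-contained sketch of what the patch does. It is an illustration only, not the QEMU code: it assumes plain POSIX I/O and malloc() instead of QEMU's GLib helpers, and LOAD_MAX here simply mirrors the ELF_LOAD_MAX cap above.

/*
 * Standalone sketch of the bounded load_at() pattern: cap the untrusted
 * size before seeking, allocating and reading.  No QEMU or GLib
 * dependencies.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define LOAD_MAX (1024 * 1024 * 1024)   /* mirrors ELF_LOAD_MAX */

static void *load_at(int fd, off_t offset, size_t size)
{
    void *ptr;

    /* Reject sizes that came from untrusted file contents. */
    if (size > LOAD_MAX) {
        return NULL;
    }

    if (lseek(fd, offset, SEEK_SET) < 0) {
        return NULL;
    }

    ptr = malloc(size);
    if (!ptr) {
        return NULL;
    }

    if (read(fd, ptr, size) != (ssize_t)size) {
        free(ptr);
        return NULL;
    }

    return ptr;
}

int main(int argc, char **argv)
{
    int fd;
    void *buf;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Read the first 64 bytes (say, an ELF header) through the helper. */
    buf = load_at(fd, 0, 64);
    printf("load_at: %s\n", buf ? "ok" : "failed");

    free(buf);
    close(fd);
    return 0;
}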
On Mon, 25 Sept 2023 at 20:41, Vladimir Sementsov-Ogievskiy
<vsementsov@yandex-team.ru> wrote:
>
> This @size parameter is often itself read from the file descriptor. We'd
> better check it before doing the read and the allocation.
>
> 1G was chosen as a high enough empirical bound.
Empirical for who?
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
> ---
> hw/core/loader.c | 17 ++++++++++++++++-
> 1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/hw/core/loader.c b/hw/core/loader.c
> index 4dd5a71fb7..4b67543046 100644
> --- a/hw/core/loader.c
> +++ b/hw/core/loader.c
> @@ -281,11 +281,26 @@ ssize_t load_aout(const char *filename, hwaddr addr, int max_sz,
>
> /* ELF loader */
>
> +#define ELF_LOAD_MAX (1024 * 1024 * 1024)
> +
> static void *load_at(int fd, off_t offset, size_t size)
> {
> void *ptr;
> - if (lseek(fd, offset, SEEK_SET) < 0)
> +
> + /*
> + * We often come here with a @size that was itself read from the file
> + * descriptor, and it's not good to read and allocate an unchecked
> + * number of bytes. Coverity doesn't like it either and reports problems.
> + * So let's at least limit all load_at() calls to ELF_LOAD_MAX.
> + */
> + if (size > ELF_LOAD_MAX) {
> return NULL;
> + }
> +
> + if (lseek(fd, offset, SEEK_SET) < 0) {
> + return NULL;
> + }
> +
> ptr = g_malloc(size);
> if (read(fd, ptr, size) != size) {
> g_free(ptr);
This doesn't really help anything:
(1) if the value is really big, it doesn't cause any terrible
consequences -- QEMU will just exit because the allocation
fails, which is fine because this will be at QEMU startup
and only happens if the user running QEMU gives us a silly file
(2) we do a lot of other "allocate and abort on failure"
elsewhere in the ELF loader, for instance the allocations of
the symbol table and relocs in the load_symbols and
elf_reloc functions, and then on a bigger scale when we
work with the actual data in the ELF file
thanks
-- PMM
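Peter's point (1) hinges on GLib's allocation behaviour: g_malloc() never returns NULL, it aborts the process when the allocation cannot be satisfied, while g_try_malloc() returns NULL and lets the caller recover. The small sketch below is not from the patch or the thread; it only illustrates that difference, assuming GLib is available (e.g. built with gcc demo.c $(pkg-config --cflags --libs glib-2.0)).

#include <glib.h>
#include <stdio.h>

int main(void)
{
    /* An absurd size, as if taken from a corrupt ELF header. */
    gsize silly = G_MAXSIZE / 2;

    /* g_try_malloc() reports failure by returning NULL... */
    gpointer p = g_try_malloc(silly);
    printf("g_try_malloc(%zu) -> %p\n", (size_t)silly, p);
    g_free(p);

    /*
     * ...whereas g_malloc() never returns NULL: if the allocation fails
     * it aborts the whole process, which for QEMU means exiting at
     * startup, as described above.  Uncomment to see the abort:
     *
     * p = g_malloc(silly);
     * g_free(p);
     */
    return 0;
}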
On 26.09.23 13:33, Peter Maydell wrote:
> On Mon, 25 Sept 2023 at 20:41, Vladimir Sementsov-Ogievskiy
> <vsementsov@yandex-team.ru> wrote:
>>
>> This @size parameter is often itself read from the file descriptor. We'd
>> better check it before doing the read and the allocation.
>>
>> 1G was chosen as a high enough empirical bound.
>
> Empirical for who?
>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
>> ---
>> hw/core/loader.c | 17 ++++++++++++++++-
>> 1 file changed, 16 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/core/loader.c b/hw/core/loader.c
>> index 4dd5a71fb7..4b67543046 100644
>> --- a/hw/core/loader.c
>> +++ b/hw/core/loader.c
>> @@ -281,11 +281,26 @@ ssize_t load_aout(const char *filename, hwaddr addr, int max_sz,
>>
>> /* ELF loader */
>>
>> +#define ELF_LOAD_MAX (1024 * 1024 * 1024)
>> +
>> static void *load_at(int fd, off_t offset, size_t size)
>> {
>> void *ptr;
>> - if (lseek(fd, offset, SEEK_SET) < 0)
>> +
>> + /*
>> + * We often come here with a @size that was itself read from the file
>> + * descriptor, and it's not good to read and allocate an unchecked
>> + * number of bytes. Coverity doesn't like it either and reports problems.
>> + * So let's at least limit all load_at() calls to ELF_LOAD_MAX.
>> + */
>> + if (size > ELF_LOAD_MAX) {
>> return NULL;
>> + }
>> +
>> + if (lseek(fd, offset, SEEK_SET) < 0) {
>> + return NULL;
>> + }
>> +
>> ptr = g_malloc(size);
>> if (read(fd, ptr, size) != size) {
>> g_free(ptr);
>
> This doesn't really help anything:
> (1) if the value is really big, it doesn't cause any terrible
> consequences -- QEMU will just exit because the allocation
> fails, which is fine because this will be at QEMU startup
> and only happens if the user running QEMU gives us a silly file
> (2) we do a lot of other "allocate and abort on failure"
> elsewhere in the ELF loader, for instance the allocations of
> the symbol table and relocs in the load_symbols and
> elf_reloc functions, and then on a bigger scale when we
> work with the actual data in the ELF file
Reasonable..
Do you have any idea how to mark the value as "trusted" for Coverity?
--
Best regards,
Vladimir
On Tue, 26 Sept 2023 at 11:51, Vladimir Sementsov-Ogievskiy
<vsementsov@yandex-team.ru> wrote:
>
> On 26.09.23 13:33, Peter Maydell wrote:
> > On Mon, 25 Sept 2023 at 20:41, Vladimir Sementsov-Ogievskiy
> > <vsementsov@yandex-team.ru> wrote:
> >>
> >> This @size parameter is often itself read from the file descriptor. We'd
> >> better check it before doing the read and the allocation.
> >>
> >> 1G was chosen as a high enough empirical bound.
> >
> > Empirical for who?
> >
> >> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
> >> ---
> >> hw/core/loader.c | 17 ++++++++++++++++-
> >> 1 file changed, 16 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/hw/core/loader.c b/hw/core/loader.c
> >> index 4dd5a71fb7..4b67543046 100644
> >> --- a/hw/core/loader.c
> >> +++ b/hw/core/loader.c
> >> @@ -281,11 +281,26 @@ ssize_t load_aout(const char *filename, hwaddr addr, int max_sz,
> >>
> >> /* ELF loader */
> >>
> >> +#define ELF_LOAD_MAX (1024 * 1024 * 1024)
> >> +
> >> static void *load_at(int fd, off_t offset, size_t size)
> >> {
> >> void *ptr;
> >> - if (lseek(fd, offset, SEEK_SET) < 0)
> >> +
> >> + /*
> >> + * We often come here with a @size that was itself read from the file
> >> + * descriptor, and it's not good to read and allocate an unchecked
> >> + * number of bytes. Coverity doesn't like it either and reports problems.
> >> + * So let's at least limit all load_at() calls to ELF_LOAD_MAX.
> >> + */
> >> + if (size > ELF_LOAD_MAX) {
> >> return NULL;
> >> + }
> >> +
> >> + if (lseek(fd, offset, SEEK_SET) < 0) {
> >> + return NULL;
> >> + }
> >> +
> >> ptr = g_malloc(size);
> >> if (read(fd, ptr, size) != size) {
> >> g_free(ptr);
> >
> > This doesn't really help anything:
> > (1) if the value is really big, it doesn't cause any terrible
> > consequences -- QEMU will just exit because the allocation
> > fails, which is fine because this will be at QEMU startup
> > and only happens if the user running QEMU gives us a silly file
> > (2) we do a lot of other "allocate and abort on failure"
> > elsewhere in the ELF loader, for instance the allocations of
> > the symbol table and relocs in the load_symbols and
> > elf_reloc functions, and then on a bigger scale when we
> > work with the actual data in the ELF file
>
> Reasonable..
>
> Do you have any idea how to mark the value as "trusted" for Coverity?
In the web UI, I just mark it "false positive" in the dropdown, and
move on. Coverity has an absolute ton of false positives, and you
really can't work with it unless you have a workflow for ignoring them.
thanks
-- PMM
On 26.09.23 13:54, Peter Maydell wrote:
> On Tue, 26 Sept 2023 at 11:51, Vladimir Sementsov-Ogievskiy
> <vsementsov@yandex-team.ru> wrote:
>>
>> On 26.09.23 13:33, Peter Maydell wrote:
>>> On Mon, 25 Sept 2023 at 20:41, Vladimir Sementsov-Ogievskiy
>>> <vsementsov@yandex-team.ru> wrote:
>>>>
>>>> This @size parameter is often itself read from the file descriptor. We'd
>>>> better check it before doing the read and the allocation.
>>>>
>>>> 1G was chosen as a high enough empirical bound.
>>>
>>> Empirical for who?
>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
>>>> ---
>>>> hw/core/loader.c | 17 ++++++++++++++++-
>>>> 1 file changed, 16 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/hw/core/loader.c b/hw/core/loader.c
>>>> index 4dd5a71fb7..4b67543046 100644
>>>> --- a/hw/core/loader.c
>>>> +++ b/hw/core/loader.c
>>>> @@ -281,11 +281,26 @@ ssize_t load_aout(const char *filename, hwaddr addr, int max_sz,
>>>>
>>>> /* ELF loader */
>>>>
>>>> +#define ELF_LOAD_MAX (1024 * 1024 * 1024)
>>>> +
>>>> static void *load_at(int fd, off_t offset, size_t size)
>>>> {
>>>> void *ptr;
>>>> - if (lseek(fd, offset, SEEK_SET) < 0)
>>>> +
>>>> + /*
>>>> + * We often come here with a @size that was itself read from the file
>>>> + * descriptor, and it's not good to read and allocate an unchecked
>>>> + * number of bytes. Coverity doesn't like it either and reports problems.
>>>> + * So let's at least limit all load_at() calls to ELF_LOAD_MAX.
>>>> + */
>>>> + if (size > ELF_LOAD_MAX) {
>>>> return NULL;
>>>> + }
>>>> +
>>>> + if (lseek(fd, offset, SEEK_SET) < 0) {
>>>> + return NULL;
>>>> + }
>>>> +
>>>> ptr = g_malloc(size);
>>>> if (read(fd, ptr, size) != size) {
>>>> g_free(ptr);
>>>
>>> This doesn't really help anything:
>>> (1) if the value is really big, it doesn't cause any terrible
>>> consequences -- QEMU will just exit because the allocation
>>> fails, which is fine because this will be at QEMU startup
>>> and only happens if the user running QEMU gives us a silly file
>>> (2) we do a lot of other "allocate and abort on failure"
>>> elsewhere in the ELF loader, for instance the allocations of
>>> the symbol table and relocs in the load_symbols and
>>> elf_reloc functions, and then on a bigger scale when we
>>> work with the actual data in the ELF file
>>
>> Reasonable..
>>
>> Do you have any idea how to mark the value as "trusted" for Coverity?
>
> In the web UI, I just mark it "false positive" in the dropdown, and
> move on. Coverity has an absolute ton of false positives, and you
> really can't work with it unless you have a workflow for ignoring them.
>
Yes, I can mark them as "false positives", but I tried to fix some things in the code that seemed reasonable to me. Thanks a lot for reviewing and explaining!
--
Best regards,
Vladimir