[PATCH] pci_dma_rw: return correct value instead of 0

Emanuele Giuseppe Esposito posted 1 patch 3 years, 9 months ago
git fetch https://github.com/patchew-project/qemu tags/patchew/20200729221732.29041-1-e.emanuelegiuseppe@gmail.com
Maintainers: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, "Michael S. Tsirkin" <mst@redhat.com>
[PATCH] pci_dma_rw: return correct value instead of 0
Posted by Emanuele Giuseppe Esposito 3 years, 9 months ago
pci_dma_rw currently always returns 0, regardless
of the result of dma_memory_rw. Adjusted to return
the correct value.

Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
---
 include/hw/pci/pci.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index c1bf7d5356..41c4ab5932 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -787,8 +787,7 @@ static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
 static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                              void *buf, dma_addr_t len, DMADirection dir)
 {
-    dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
-    return 0;
+    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
 }
 
 static inline int pci_dma_read(PCIDevice *dev, dma_addr_t addr,
-- 
2.17.1


Re: [PATCH] pci_dma_rw: return correct value instead of 0
Posted by Stefano Garzarella 3 years, 9 months ago
On Thu, Jul 30, 2020 at 12:17:32AM +0200, Emanuele Giuseppe Esposito wrote:
> pci_dma_rw currently always returns 0, regardless
> of the result of dma_memory_rw. Adjusted to return
> the correct value.
> 
> Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
> ---
>  include/hw/pci/pci.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> index c1bf7d5356..41c4ab5932 100644
> --- a/include/hw/pci/pci.h
> +++ b/include/hw/pci/pci.h
> @@ -787,8 +787,7 @@ static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
>  static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
>                               void *buf, dma_addr_t len, DMADirection dir)
>  {
> -    dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
> -    return 0;
> +    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
>  }

I think it's a leftover from when we used "void cpu_physical_memory_rw()".

I agree that it is better to return the dma_memory_rw() return value, but
at first look, no one seems to check the return value of pci_dma_rw(),
pci_dma_read(), and pci_dma_write().

Should we make them void?
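
(For what it's worth, if we keep the return value, a caller could check
it along these lines -- hypothetical device code, just to illustrate the
shape of the check:)

    uint32_t val;

    if (pci_dma_read(dev, addr, &val, sizeof(val))) {
        /* DMA failed: report it instead of consuming a bogus value */
        s->status |= MYDEV_STATUS_DMA_ERROR;  /* hypothetical status bit */
        return;
    }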


Anyway, for this patch:

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>

Thanks,
Stefano


Re: [PATCH] pci_dma_rw: return correct value instead of 0
Posted by Emanuele Giuseppe Esposito 3 years, 9 months ago

On 30/07/2020 09:41, Stefano Garzarella wrote:
> On Thu, Jul 30, 2020 at 12:17:32AM +0200, Emanuele Giuseppe Esposito wrote:
>> pci_dma_rw currently always returns 0, regardless
>> of the result of dma_memory_rw. Adjusted to return
>> the correct value.
>>
>> Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
>> ---
>>   include/hw/pci/pci.h | 3 +--
>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
>> index c1bf7d5356..41c4ab5932 100644
>> --- a/include/hw/pci/pci.h
>> +++ b/include/hw/pci/pci.h
>> @@ -787,8 +787,7 @@ static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
>>   static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
>>                                void *buf, dma_addr_t len, DMADirection dir)
>>   {
>> -    dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
>> -    return 0;
>> +    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
>>   }
> 
> I think it's a leftover from when we used "void cpu_physical_memory_rw()".
> 
> I agree that it is better to return the dma_memory_rw() return value, but
> at first look, no one seems to check the return value of pci_dma_rw(),
> pci_dma_read(), and pci_dma_write().
> 
> Should we make them void?

I noticed that nobody checks the return value of those functions, but I 
think checking for possible errors is always useful. I am using the edu 
device and clearly doing something wrong, since with this fix I 
discovered that the pci_dma_read call returns nonzero.

Keeping the function as it is, or making it void, would make it harder 
to spot such errors in the future.
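
(With the patch applied, a check as simple as this -- simplified from 
what I'm experimenting with, not the actual hw/misc/edu.c code -- is 
what caught my bug:)

    /* with the fix, a failed DMA read is now visible here */
    if (pci_dma_read(&edu->pdev, edu->dma.src, edu->dma_buf, edu->dma.cnt)) {
        fprintf(stderr, "edu: DMA read from 0x%" PRIx64 " failed\n",
                edu->dma.src);
    }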

Thank you,
Emanuele

> 
> 
> Anyway, for this patch:
> 
> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
> 
> Thanks,
> Stefano
> 

Re: [PATCH] pci_dma_rw: return correct value instead of 0
Posted by Stefano Garzarella 3 years, 9 months ago
On Thu, Jul 30, 2020 at 10:50:43AM +0200, Emanuele Giuseppe Esposito wrote:
> 
> 
> On 30/07/2020 09:41, Stefano Garzarella wrote:
> > On Thu, Jul 30, 2020 at 12:17:32AM +0200, Emanuele Giuseppe Esposito wrote:
> > > pci_dma_rw currently always returns 0, regardless
> > > of the result of dma_memory_rw. Adjusted to return
> > > the correct value.
> > > 
> > > Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
> > > ---
> > >   include/hw/pci/pci.h | 3 +--
> > >   1 file changed, 1 insertion(+), 2 deletions(-)
> > > 
> > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> > > index c1bf7d5356..41c4ab5932 100644
> > > --- a/include/hw/pci/pci.h
> > > +++ b/include/hw/pci/pci.h
> > > @@ -787,8 +787,7 @@ static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
> > >   static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
> > >                                void *buf, dma_addr_t len, DMADirection dir)
> > >   {
> > > -    dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
> > > -    return 0;
> > > +    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
> > >   }
> > 
> > I think it's a leftover from when we used "void cpu_physical_memory_rw()".
> > 
> > I agree that it is better to return the dma_memory_rw() return value, but
> > at first look, no one seems to check the return value of pci_dma_rw(),
> > pci_dma_read(), and pci_dma_write().
> > 
> > Should we make them void?
> 
> I noticed that nobody checks the return value of those functions, but I
> think checking for possible errors is always useful. I am using the edu
> device and clearly doing something wrong, since with this fix I
> discovered that the pci_dma_read call returns nonzero.
> 
> Keeping the function as it is, or making it void, would make it harder
> to spot such errors in the future.

I agree; I was just worried that no one checks the return value.

Thanks,
Stefano


Re: [PATCH] pci_dma_rw: return correct value instead of 0
Posted by Peter Maydell 3 years, 9 months ago
On Thu, 30 Jul 2020 at 08:42, Stefano Garzarella <sgarzare@redhat.com> wrote:
> I agree that it is better to return the dma_memory_rw() return value, but
> at first look, no one seems to check the return value of pci_dma_rw(),
> pci_dma_read(), and pci_dma_write().
>
> Should we make them void?

In general, code (e.g. device models) that issues memory transactions
needs to have a mechanism for finding out whether the transaction
succeeded. Traditionally QEMU didn't have the concept of a
transaction failing, but we have added it, starting with the
APIs at the bottom level (the address_space_* ones). We haven't
always plumbed the error-handling (or the memory-transaction
input, for that matter) through to some of these other APIs.
I think for consistency we should do that, and ideally we
should make all these APIs look the same as the base-level
address_space_* ones, which would mean returning a MemTxResult
rather than a bool.
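
Concretely, the PCI-level wrapper would then look something like this
(only a sketch of the proposed shape -- dma_memory_rw() does not take
a MemTxAttrs argument today, so that plumbing would be part of the
work):

    static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                                         void *buf, dma_addr_t len,
                                         DMADirection dir, MemTxAttrs attrs)
    {
        /* assumes dma_memory_rw() grows a MemTxAttrs parameter;
         * propagate the transaction result instead of collapsing it to 0 */
        return dma_memory_rw(pci_get_address_space(dev), addr,
                             buf, len, dir, attrs);
    }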

We should also figure out why the dma_* functions exist at all:
they include some calls to dma_barrier(), but not all devices
do DMA with the dma_* functions, so we have an inconsistency
that should be sorted out...
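
(For reference, the wrapper in question looks roughly like this in
include/sysemu/dma.h -- quoting from memory, so check the tree:)

    static inline int dma_memory_rw(AddressSpace *as, dma_addr_t addr,
                                    void *buf, dma_addr_t len,
                                    DMADirection dir)
    {
        /* the barrier is the main thing the dma_* layer adds here */
        dma_barrier(as, dir);

        return dma_memory_rw_relaxed(as, addr, buf, len, dir);
    }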

thanks
-- PMM

Re: [PATCH] pci_dma_rw: return correct value instead of 0
Posted by Stefano Garzarella 3 years, 9 months ago
On Thu, Jul 30, 2020 at 09:58:21AM +0100, Peter Maydell wrote:
> On Thu, 30 Jul 2020 at 08:42, Stefano Garzarella <sgarzare@redhat.com> wrote:
> > I agree that it is better to return the dma_memory_rw() return value, but
> > at first look, no one seems to check the return value of pci_dma_rw(),
> > pci_dma_read(), and pci_dma_write().
> >
> > Should we make them void?
> 
> In general, code (e.g. device models) that issues memory transactions
> needs to have a mechanism for finding out whether the transaction
> succeeded. Traditionally QEMU didn't have the concept of a
> transaction failing, but we have added it, starting with the
> APIs at the bottom level (the address_space_* ones). We haven't
> always plumbed the error-handling (or the memory-transaction
> input, for that matter) through to some of these other APIs.
> I think for consistency we should do that, and ideally we
> should make all these APIs look the same as the base-level
> address_space_* ones, which would mean returning a MemTxResult
> rather than a bool.

Yeah, that makes a lot of sense to me.

> 
> We should also figure out why the dma_* functions exist at all:
> they include some calls to dma_barrier(), but not all devices
> do DMA with the dma_* functions, so we have an inconsistency
> that should be sorted out...
> 

I've never looked in detail, but I agree we should have more consistency.

Thanks for the details!
Stefano


Re: [PATCH] pci_dma_rw: return correct value instead of 0
Posted by Peter Maydell 3 years, 9 months ago
On Wed, 29 Jul 2020 at 23:19, Emanuele Giuseppe Esposito
<e.emanuelegiuseppe@gmail.com> wrote:
>
> pci_dma_rw currently always returns 0, regardless
> of the result of dma_memory_rw. Adjusted to return
> the correct value.
>
> Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>

We also have the equivalent patch from Klaus Jensen back in 2019
which got reviewed at the time but seems to have gotten lost along the way:
https://patchwork.kernel.org/patch/11184911/

thanks
-- PMM

Re: [PATCH] pci_dma_rw: return correct value instead of 0
Posted by Klaus Jensen 3 years, 8 months ago
On Jul 30 09:48, Peter Maydell wrote:
> On Wed, 29 Jul 2020 at 23:19, Emanuele Giuseppe Esposito
> <e.emanuelegiuseppe@gmail.com> wrote:
> >
> > pci_dma_rw currently always returns 0, regardless
> > of the result of dma_memory_rw. Adjusted to return
> > the correct value.
> >
> > Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
> 
> We also have the equivalent patch from Klaus Jensen back in 2019
> which got reviewed at the time but seems to have gotten lost along the way:
> https://patchwork.kernel.org/patch/11184911/
> 
> thanks
> -- PMM
> 

Yes, I posted this a while back because I need that return value in the
emulated nvme device - so please don't make it void ;)

My patch was part of a series that has not gone in yet, but I can resend
separately - or you can just apply the patch from Emanuele. It's no
biggie to me as long as the fix is there!
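
(For context, what I need in nvme is essentially to turn a DMA failure
into a Data Transfer Error for the command -- roughly like this, a
sketch rather than the exact code from my series:)

    if (pci_dma_read(&n->parent_obj, addr, ptr, len)) {
        /* map the DMA failure to an NVMe Data Transfer Error status */
        return NVME_DATA_TRAS_ERROR;
    }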


Klaus