pci_dma_rw currently always returns 0, regardless
of the result of dma_memory_rw. Adjusted to return
the correct value.
Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
---
include/hw/pci/pci.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index c1bf7d5356..41c4ab5932 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -787,8 +787,7 @@ static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
void *buf, dma_addr_t len, DMADirection dir)
{
- dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
- return 0;
+ return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
}
static inline int pci_dma_read(PCIDevice *dev, dma_addr_t addr,
--
2.17.1
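
For illustration, a caller that acts on the now-propagated return value
could look roughly like this (a hand-written sketch, not part of the patch;
the device name, helper and register layout are made up):

    #include "qemu/osdep.h"
    #include "qemu/log.h"
    #include "hw/pci/pci.h"

    /* Hypothetical device code: write a completion record back to guest
     * memory and report a failed transaction instead of ignoring it.
     * Before this fix the helpers always returned 0, so callers could
     * not tell the difference. */
    static void mydev_post_completion(PCIDevice *pdev, dma_addr_t addr,
                                      const void *rec, dma_addr_t len)
    {
        if (pci_dma_write(pdev, addr, rec, len) != 0) {
            qemu_log_mask(LOG_GUEST_ERROR,
                          "mydev: completion DMA write to 0x%" PRIx64
                          " failed\n", (uint64_t)addr);
        }
    }
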
On Thu, Jul 30, 2020 at 12:17:32AM +0200, Emanuele Giuseppe Esposito wrote:
> pci_dma_rw currently always returns 0, regardless
> of the result of dma_memory_rw. Adjusted to return
> the correct value.
>
> Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
> ---
> include/hw/pci/pci.h | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> index c1bf7d5356..41c4ab5932 100644
> --- a/include/hw/pci/pci.h
> +++ b/include/hw/pci/pci.h
> @@ -787,8 +787,7 @@ static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
> static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
> void *buf, dma_addr_t len, DMADirection dir)
> {
> - dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
> - return 0;
> + return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
> }
I think it's a leftover from when we used "void cpu_physical_memory_rw()".
I agree that it is better to return the dma_memory_rw() return value, but
at first look, no one seems to check the return value of pci_dma_rw(),
pci_dma_read(), and pci_dma_write().
Should we make them void?
Anyway, for this patch:
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Thanks,
Stefano
On 30/07/2020 09:41, Stefano Garzarella wrote:
> On Thu, Jul 30, 2020 at 12:17:32AM +0200, Emanuele Giuseppe Esposito wrote:
>> pci_dma_rw currently always returns 0, regardless
>> of the result of dma_memory_rw. Adjusted to return
>> the correct value.
>>
>> Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
>> ---
>> include/hw/pci/pci.h | 3 +--
>> 1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
>> index c1bf7d5356..41c4ab5932 100644
>> --- a/include/hw/pci/pci.h
>> +++ b/include/hw/pci/pci.h
>> @@ -787,8 +787,7 @@ static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
>> static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
>> void *buf, dma_addr_t len, DMADirection dir)
>> {
>> - dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
>> - return 0;
>> + return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
>> }
>
> I think it's a leftover from when we used "void cpu_physical_memory_rw()".
>
> I agree that it is better to return the dma_memory_rw() return value, but
> at first look, no one seems to check the return value of pci_dma_rw(),
> pci_dma_read(), and pci_dma_write().
>
> Should we make them void?
I noticed that nobody checks the return value of those functions, but I
think checking for possible errors is always useful. I am using the edu
device and clearly doing something wrong, since with this fix I discovered
that the pci_dma_read call returns nonzero.

Keeping the function as it is, or making it void, would make it harder to
spot such errors in the future.
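
For concreteness, the kind of check meant here looks roughly like this
(a sketch from memory, not the actual hw/misc/edu.c code; the state
struct and field names are illustrative and may not match):

    #include "qemu/osdep.h"
    #include "qemu/log.h"
    #include "hw/pci/pci.h"

    /* Cut-down, edu-like device state; the real layout may differ. */
    typedef struct EduLikeState {
        PCIDevice pdev;
        struct {
            dma_addr_t src, dst, cnt, cmd;
        } dma;
        char dma_buf[4096];
    } EduLikeState;

    static void edu_like_dma_read(EduLikeState *edu)
    {
        if (pci_dma_read(&edu->pdev, edu->dma.src, edu->dma_buf,
                         edu->dma.cnt)) {
            /* With pci_dma_rw() propagating the dma_memory_rw() result,
             * a bad source address shows up here instead of looking
             * like success. */
            qemu_log_mask(LOG_GUEST_ERROR,
                          "edu: DMA read from 0x%" PRIx64 " failed\n",
                          (uint64_t)edu->dma.src);
        }
    }
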
Thank you,
Emanuele
>
>
> Anyway, for this patch:
>
> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>
> Thanks,
> Stefano
>
On Thu, Jul 30, 2020 at 10:50:43AM +0200, Emanuele Giuseppe Esposito wrote:
>
>
> On 30/07/2020 09:41, Stefano Garzarella wrote:
> > On Thu, Jul 30, 2020 at 12:17:32AM +0200, Emanuele Giuseppe Esposito wrote:
> > > pci_dma_rw currently always returns 0, regardless
> > > of the result of dma_memory_rw. Adjusted to return
> > > the correct value.
> > >
> > > Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
> > > ---
> > > include/hw/pci/pci.h | 3 +--
> > > 1 file changed, 1 insertion(+), 2 deletions(-)
> > >
> > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> > > index c1bf7d5356..41c4ab5932 100644
> > > --- a/include/hw/pci/pci.h
> > > +++ b/include/hw/pci/pci.h
> > > @@ -787,8 +787,7 @@ static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
> > > static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
> > > void *buf, dma_addr_t len, DMADirection dir)
> > > {
> > > - dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
> > > - return 0;
> > > + return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
> > > }
> >
> > I think it's a leftover from when we used "void cpu_physical_memory_rw()".
> >
> > I agree that it is better to return the dma_memory_rw() return value, but
> > at first look, no one seems to check the return value of pci_dma_rw(),
> > pci_dma_read(), and pci_dma_write().
> >
> > Should we make them void?
>
> I noticed that nobody checks the return value of those functions, but I
> think checking for possible errors is always useful. I am using the edu
> device and clearly doing something wrong, since with this fix I discovered
> that the pci_dma_read call returns nonzero.
>
> Keeping the function as it is, or making it void, would make it harder to
> spot such errors in the future.
I agree; I was just worried that no one checks the return value.
Thanks,
Stefano
On Thu, 30 Jul 2020 at 08:42, Stefano Garzarella <sgarzare@redhat.com> wrote:
> I agree that it is better to return the dma_memory_rw() return value, but
> at first look, no one seems to check the return value of pci_dma_rw(),
> pci_dma_read(), and pci_dma_write().
>
> Should we make them void?

In general, code (eg device models) that issues memory transactions
needs to have a mechanism for finding out whether the transaction
succeeds. Traditionally QEMU didn't have the concept of a transaction
failing, but we have added it, starting with the APIs at the bottom
level (the address_space_* ones). We haven't always plumbed the
error-handling (or the memory-transaction input, for that matter)
through to some of these other APIs. I think for consistency we should
do that, and ideally we should make all these APIs look the same as the
base-level address_space_* ones, which would mean returning a
MemTxResult rather than a bool.

We should also figure out why the dma_* functions exist at all:
they include some calls to dma_barrier(), but not all devices
do DMA with the dma_* functions, so we have an inconsistency
that should be sorted out...

thanks
-- PMM
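
The direction described above might look roughly like this for the PCI
helper (a sketch only, not a posted or agreed interface; the name
pci_dma_rw_attrs is made up, and the dma_barrier() question is left aside):

    #include "qemu/osdep.h"
    #include "exec/memory.h"
    #include "sysemu/dma.h"
    #include "hw/pci/pci.h"

    /* Sketch: mirror the base-level address_space_* calls by taking
     * MemTxAttrs and returning a MemTxResult.  The barrier handling done
     * by dma_memory_rw() is deliberately not shown here. */
    static inline MemTxResult pci_dma_rw_attrs(PCIDevice *dev,
                                               dma_addr_t addr,
                                               void *buf, dma_addr_t len,
                                               DMADirection dir,
                                               MemTxAttrs attrs)
    {
        return address_space_rw(pci_get_address_space(dev), addr, attrs,
                                buf, len,
                                dir == DMA_DIRECTION_FROM_DEVICE);
    }
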
On Thu, Jul 30, 2020 at 09:58:21AM +0100, Peter Maydell wrote:
> On Thu, 30 Jul 2020 at 08:42, Stefano Garzarella <sgarzare@redhat.com> wrote:
> > I agree that it is better to return the dma_memory_rw() return value, but
> > at first look, no one seems to check the return value of pci_dma_rw(),
> > pci_dma_read(), and pci_dma_write().
> >
> > Should we make them void?
>
> In general, code (eg device models) that issues memory transactions
> needs to have a mechanism for finding out whether the transaction
> succeeds. Traditionally QEMU didn't have the concept of a transaction
> failing, but we have added it, starting with the APIs at the bottom
> level (the address_space_* ones). We haven't always plumbed the
> error-handling (or the memory-transaction input, for that matter)
> through to some of these other APIs. I think for consistency we should
> do that, and ideally we should make all these APIs look the same as the
> base-level address_space_* ones, which would mean returning a
> MemTxResult rather than a bool.

Yeah, that makes a lot of sense to me.

> We should also figure out why the dma_* functions exist at all:
> they include some calls to dma_barrier(), but not all devices
> do DMA with the dma_* functions, so we have an inconsistency
> that should be sorted out...

I've never looked in detail, but I agree we should have more
consistency.

Thanks for the details!
Stefano
On Wed, 29 Jul 2020 at 23:19, Emanuele Giuseppe Esposito
<e.emanuelegiuseppe@gmail.com> wrote:
>
> pci_dma_rw currently always returns 0, regardless
> of the result of dma_memory_rw. Adjusted to return
> the correct value.
>
> Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>

We also have the equivalent patch from Klaus Jensen back in 2019
which got reviewed at the time but seems to have gotten lost along
the way:
https://patchwork.kernel.org/patch/11184911/

thanks
-- PMM
On Jul 30 09:48, Peter Maydell wrote:
> On Wed, 29 Jul 2020 at 23:19, Emanuele Giuseppe Esposito
> <e.emanuelegiuseppe@gmail.com> wrote:
> >
> > pci_dma_rw currently always returns 0, regardless
> > of the result of dma_memory_rw. Adjusted to return
> > the correct value.
> >
> > Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
>
> We also have the equivalent patch from Klaus Jensen back in 2019
> which got reviewed at the time but seems to have gotten lost along the way:
> https://patchwork.kernel.org/patch/11184911/
>
> thanks
> -- PMM

Yes, I posted this a while back because I need that return value in the
emulated nvme device - so please don't make it void ;)

My patch was part of a series that has not gone in yet, but I can resend
separately - or you can just apply the patch from Emanuele. It's no
biggie to me as long as the fix is there!

Klaus