[RFC PATCH 2/5] system/memory: support unaligned access

Posted by Tomoyuki HIROSE 1 year, 3 months ago
The previous code ignored 'impl.unaligned' and handled unaligned
accesses as-is. As a result, it could not emulate specific registers
of some devices that allow unaligned access, such as the xHCI Host
Controller Capability Registers.

This commit emulates an unaligned access with multiple aligned
accesses. Additionally, the overwriting of the max access size is
removed to retrieve the actual max access size.

Signed-off-by: Tomoyuki HIROSE <tomoyuki.hirose@igel.co.jp>
---
 system/memory.c  | 147 ++++++++++++++++++++++++++++++++++++++---------
 system/physmem.c |   8 ---
 2 files changed, 119 insertions(+), 36 deletions(-)

diff --git a/system/memory.c b/system/memory.c
index 85f6834cb3..c2164e6478 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -518,27 +518,118 @@ static MemTxResult memory_region_write_with_attrs_accessor(MemoryRegion *mr,
     return mr->ops->write_with_attrs(mr->opaque, addr, tmp, size, attrs);
 }
 
+typedef MemTxResult (*MemoryRegionAccessFn)(MemoryRegion *mr,
+                                            hwaddr addr,
+                                            uint64_t *value,
+                                            unsigned size,
+                                            signed shift,
+                                            uint64_t mask,
+                                            MemTxAttrs attrs);
+
+static MemTxResult access_emulation(hwaddr addr,
+                                    uint64_t *value,
+                                    unsigned int size,
+                                    unsigned int access_size_min,
+                                    unsigned int access_size_max,
+                                    MemoryRegion *mr,
+                                    MemTxAttrs attrs,
+                                    MemoryRegionAccessFn access_fn_read,
+                                    MemoryRegionAccessFn access_fn_write,
+                                    bool is_write)
+{
+    hwaddr a;
+    uint8_t *d;
+    uint64_t v;
+    MemTxResult r = MEMTX_OK;
+    bool is_big_endian = memory_region_big_endian(mr);
+    void (*store)(void *, int, uint64_t) = is_big_endian ? stn_be_p : stn_le_p;
+    uint64_t (*load)(const void *, int) = is_big_endian ? ldn_be_p : ldn_le_p;
+    size_t access_size = MAX(MIN(size, access_size_max), access_size_min);
+    uint64_t access_mask = MAKE_64BIT_MASK(0, access_size * 8);
+    hwaddr round_down = mr->ops->impl.unaligned && addr + size <= mr->size ?
+        0 : addr % access_size;
+    hwaddr start = addr - round_down;
+    hwaddr tail = addr + size <= mr->size ? addr + size : mr->size;
+    uint8_t data[16] = {0};
+    g_assert(size <= 8);
+
+    for (a = start, d = data, v = 0; a < tail;
+         a += access_size, d += access_size, v = 0) {
+        r |= access_fn_read(mr, a, &v, access_size, 0, access_mask,
+                            attrs);
+        store(d, access_size, v);
+    }
+    if (is_write) {
+        stn_he_p(&data[round_down], size, load(value, size));
+        for (a = start, d = data; a < tail;
+             a += access_size, d += access_size) {
+            v = load(d, access_size);
+            r |= access_fn_write(mr, a, &v, access_size, 0, access_mask,
+                                 attrs);
+        }
+    } else {
+        store(value, size, ldn_he_p(&data[round_down], size));
+    }
+
+    return r;
+}
+
+static bool is_access_fastpath(hwaddr addr,
+                               unsigned int size,
+                               unsigned int access_size_min,
+                               unsigned int access_size_max,
+                               MemoryRegion *mr)
+{
+    size_t access_size = MAX(MIN(size, access_size_max), access_size_min);
+    hwaddr round_down = mr->ops->impl.unaligned && addr + size <= mr->size ?
+        0 : addr % access_size;
+
+    return round_down == 0 && access_size <= size;
+}
+
+static MemTxResult access_fastpath(hwaddr addr,
+                                   uint64_t *value,
+                                   unsigned int size,
+                                   unsigned int access_size_min,
+                                   unsigned int access_size_max,
+                                   MemoryRegion *mr,
+                                   MemTxAttrs attrs,
+                                   MemoryRegionAccessFn fastpath)
+{
+    MemTxResult r = MEMTX_OK;
+    size_t access_size = MAX(MIN(size, access_size_max), access_size_min);
+    uint64_t access_mask = MAKE_64BIT_MASK(0, access_size * 8);
+
+    if (memory_region_big_endian(mr)) {
+        for (size_t i = 0; i < size; i += access_size) {
+            r |= fastpath(mr, addr + i, value, access_size,
+                          (size - access_size - i) * 8, access_mask, attrs);
+        }
+    } else {
+        for (size_t i = 0; i < size; i += access_size) {
+            r |= fastpath(mr, addr + i, value, access_size,
+                          i * 8, access_mask, attrs);
+        }
+    }
+
+    return r;
+}
+
 static MemTxResult access_with_adjusted_size(hwaddr addr,
                                       uint64_t *value,
                                       unsigned size,
                                       unsigned access_size_min,
                                       unsigned access_size_max,
-                                      MemTxResult (*access_fn)
-                                                  (MemoryRegion *mr,
-                                                   hwaddr addr,
-                                                   uint64_t *value,
-                                                   unsigned size,
-                                                   signed shift,
-                                                   uint64_t mask,
-                                                   MemTxAttrs attrs),
+                                      MemoryRegionAccessFn access_fn_read,
+                                      MemoryRegionAccessFn access_fn_write,
+                                      bool is_write,
                                       MemoryRegion *mr,
                                       MemTxAttrs attrs)
 {
-    uint64_t access_mask;
-    unsigned access_size;
-    unsigned i;
     MemTxResult r = MEMTX_OK;
     bool reentrancy_guard_applied = false;
+    MemoryRegionAccessFn access_fn_fastpath =
+        is_write ? access_fn_write : access_fn_read;
 
     if (!access_size_min) {
         access_size_min = 1;
@@ -560,20 +651,16 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
         reentrancy_guard_applied = true;
     }
 
-    /* FIXME: support unaligned access? */
-    access_size = MAX(MIN(size, access_size_max), access_size_min);
-    access_mask = MAKE_64BIT_MASK(0, access_size * 8);
-    if (memory_region_big_endian(mr)) {
-        for (i = 0; i < size; i += access_size) {
-            r |= access_fn(mr, addr + i, value, access_size,
-                        (size - access_size - i) * 8, access_mask, attrs);
-        }
+    if (is_access_fastpath(addr, size, access_size_min, access_size_max, mr)) {
+        r |= access_fastpath(addr, value, size,
+                             access_size_min, access_size_max, mr, attrs,
+                             access_fn_fastpath);
     } else {
-        for (i = 0; i < size; i += access_size) {
-            r |= access_fn(mr, addr + i, value, access_size, i * 8,
-                        access_mask, attrs);
-        }
+        r |= access_emulation(addr, value, size,
+                              access_size_min, access_size_max, mr, attrs,
+                              access_fn_read, access_fn_write, is_write);
     }
+
     if (mr->dev && reentrancy_guard_applied) {
         mr->dev->mem_reentrancy_guard.engaged_in_io = false;
     }
@@ -1459,13 +1546,15 @@ static MemTxResult memory_region_dispatch_read1(MemoryRegion *mr,
                                          mr->ops->impl.min_access_size,
                                          mr->ops->impl.max_access_size,
                                          memory_region_read_accessor,
-                                         mr, attrs);
+                                         memory_region_write_accessor,
+                                         false, mr, attrs);
     } else {
         return access_with_adjusted_size(addr, pval, size,
                                          mr->ops->impl.min_access_size,
                                          mr->ops->impl.max_access_size,
                                          memory_region_read_with_attrs_accessor,
-                                         mr, attrs);
+                                         memory_region_write_with_attrs_accessor,
+                                         false, mr, attrs);
     }
 }
 
@@ -1553,15 +1642,17 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
         return access_with_adjusted_size(addr, &data, size,
                                          mr->ops->impl.min_access_size,
                                          mr->ops->impl.max_access_size,
-                                         memory_region_write_accessor, mr,
-                                         attrs);
+                                         memory_region_read_accessor,
+                                         memory_region_write_accessor,
+                                         true, mr, attrs);
     } else {
         return
             access_with_adjusted_size(addr, &data, size,
                                       mr->ops->impl.min_access_size,
                                       mr->ops->impl.max_access_size,
+                                      memory_region_read_with_attrs_accessor,
                                       memory_region_write_with_attrs_accessor,
-                                      mr, attrs);
+                                      true, mr, attrs);
     }
 }
 
diff --git a/system/physmem.c b/system/physmem.c
index dc1db3a384..ff444140a8 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2693,14 +2693,6 @@ int memory_access_size(MemoryRegion *mr, unsigned l, hwaddr addr)
         access_size_max = 4;
     }
 
-    /* Bound the maximum access by the alignment of the address.  */
-    if (!mr->ops->impl.unaligned) {
-        unsigned align_size_max = addr & -addr;
-        if (align_size_max != 0 && align_size_max < access_size_max) {
-            access_size_max = align_size_max;
-        }
-    }
-
     /* Don't attempt accesses larger than the maximum.  */
     if (l > access_size_max) {
         l = access_size_max;
-- 
2.43.0
Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Peter Xu 1 year, 2 months ago
On Fri, Nov 08, 2024 at 12:29:46PM +0900, Tomoyuki HIROSE wrote:
> The previous code ignored 'impl.unaligned' and handled unaligned
> accesses as is. But this implementation could not emulate specific
> registers of some devices that allow unaligned access such as xHCI
> Host Controller Capability Registers.

I have some comments that may be naive, please bear with me..

Firstly, could you provide an example in the commit message, of what would
start working after this patch?

IIUC things like read(addr=0x2, size=8) should already be working before,
but it'll be cut into four 2-byte read()s for unaligned=false, am I
right?

> 
> This commit emulates an unaligned access with multiple aligned
> accesses. Additionally, the overwriting of the max access size is
> removed to retrieve the actual max access size.
> 
> Signed-off-by: Tomoyuki HIROSE <tomoyuki.hirose@igel.co.jp>
> ---
>  system/memory.c  | 147 ++++++++++++++++++++++++++++++++++++++---------
>  system/physmem.c |   8 ---
>  2 files changed, 119 insertions(+), 36 deletions(-)
> 
> diff --git a/system/memory.c b/system/memory.c
> index 85f6834cb3..c2164e6478 100644
> --- a/system/memory.c
> +++ b/system/memory.c
> @@ -518,27 +518,118 @@ static MemTxResult memory_region_write_with_attrs_accessor(MemoryRegion *mr,
>      return mr->ops->write_with_attrs(mr->opaque, addr, tmp, size, attrs);
>  }
>  
> +typedef MemTxResult (*MemoryRegionAccessFn)(MemoryRegion *mr,
> +                                            hwaddr addr,
> +                                            uint64_t *value,
> +                                            unsigned size,
> +                                            signed shift,
> +                                            uint64_t mask,
> +                                            MemTxAttrs attrs);
> +
> +static MemTxResult access_emulation(hwaddr addr,
> +                                    uint64_t *value,
> +                                    unsigned int size,
> +                                    unsigned int access_size_min,
> +                                    unsigned int access_size_max,
> +                                    MemoryRegion *mr,
> +                                    MemTxAttrs attrs,
> +                                    MemoryRegionAccessFn access_fn_read,
> +                                    MemoryRegionAccessFn access_fn_write,
> +                                    bool is_write)
> +{
> +    hwaddr a;
> +    uint8_t *d;
> +    uint64_t v;
> +    MemTxResult r = MEMTX_OK;
> +    bool is_big_endian = memory_region_big_endian(mr);
> +    void (*store)(void *, int, uint64_t) = is_big_endian ? stn_be_p : stn_le_p;
> +    uint64_t (*load)(const void *, int) = is_big_endian ? ldn_be_p : ldn_le_p;
> +    size_t access_size = MAX(MIN(size, access_size_max), access_size_min);
> +    uint64_t access_mask = MAKE_64BIT_MASK(0, access_size * 8);
> +    hwaddr round_down = mr->ops->impl.unaligned && addr + size <= mr->size ?
> +        0 : addr % access_size;
> +    hwaddr start = addr - round_down;
> +    hwaddr tail = addr + size <= mr->size ? addr + size : mr->size;

There're plenty of special considerations on addr+size over mr->size.  It
was confusing to me at the 1st glance, because after we have MR pointer
logically we should have clamped the size to make sure it won't get more
than the mr->size, e.g. for address space accesses it should have happened
in address_space_translate_internal(), translating IOs in flatviews.

Then I noticed b242e0e0e2 ("exec: skip MMIO regions correctly in
cpu_physical_memory_write_rom_internal"), also the special handling of MMIO
in access sizes where it won't be clamped.  Is this relevant to why
mr->size needs to be checked here, and is it intended to allow it to have
addr+size > mr->size?

If it's intended, IMHO it would be nice to add some comment explicitly or
mention it in the commit message.  It might not be very straightforward to
see..

> +    uint8_t data[16] = {0};
> +    g_assert(size <= 8);
> +
> +    for (a = start, d = data, v = 0; a < tail;
> +         a += access_size, d += access_size, v = 0) {
> +        r |= access_fn_read(mr, a, &v, access_size, 0, access_mask,
> +                            attrs);
> +        store(d, access_size, v);

I'm slightly confused about the endianness of data[].  It uses store(),
so I think that means it follows the MR's endianness.  But then..

> +    }
> +    if (is_write) {
> +        stn_he_p(&data[round_down], size, load(value, size));

... here stn_he_p() should imply that data[] is using host endianness...
Meanwhile I wonder why value should be loaded by load() - value should
point to a u64 which is, IIUC, host-endian, while load() uses the MR's
endianness..

I wonder if we could have data[] use host endianness always, then here:

           stn_he_p(&data[round_down], size, *value);
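
For instance, assuming a little-endian host and a big-endian MR (a sketch,
names as in the patch):

    uint64_t value = 0x11223344;        /* host-endian u64 from the caller */
    uint64_t a = ldn_be_p(&value, 4);   /* 0x44332211: byte-swapped */
    uint64_t b = value;                 /* 0x11223344: unchanged */

so load(value, size) and *value differ whenever host and MR endianness
differ.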

> +        for (a = start, d = data; a < tail;
> +             a += access_size, d += access_size) {
> +            v = load(d, access_size);
> +            r |= access_fn_write(mr, a, &v, access_size, 0, access_mask,
> +                                 attrs);
> +        }
> +    } else {
> +        store(value, size, ldn_he_p(&data[round_down], size));
> +    }
> +
> +    return r;

Now on an unaligned write, it'll read at most 16 bytes out into data[],
apply the changes, and write all 16 bytes back down even if only 8 are new.

Is this the intended behavior?  When I was thinking impl.unaligned=true, I
thought the device should be able to process unaligned addresses in the MR
ops directly.  But I could be totally wrong here, hence more of a pure
question..

> +}
> +
> +static bool is_access_fastpath(hwaddr addr,
> +                               unsigned int size,
> +                               unsigned int access_size_min,
> +                               unsigned int access_size_max,
> +                               MemoryRegion *mr)
> +{
> +    size_t access_size = MAX(MIN(size, access_size_max), access_size_min);
> +    hwaddr round_down = mr->ops->impl.unaligned && addr + size <= mr->size ?
> +        0 : addr % access_size;
> +
> +    return round_down == 0 && access_size <= size;

Would it be more readable to rewrite this with some if clauses?  Something
like:

is_access_fastpath()
{
  size_t access_size = MAX(MIN(size, access_size_max), access_size_min);

  if (access_size > size) {
    return false;
  }

  if (mr->ops->impl.unaligned && (addr + size <= mr->size)) {
    return true;
  }

  return (addr % access_size) == 0;
}

> +}
> +
> +static MemTxResult access_fastpath(hwaddr addr,
> +                                   uint64_t *value,
> +                                   unsigned int size,
> +                                   unsigned int access_size_min,
> +                                   unsigned int access_size_max,
> +                                   MemoryRegion *mr,
> +                                   MemTxAttrs attrs,
> +                                   MemoryRegionAccessFn fastpath)
> +{
> +    MemTxResult r = MEMTX_OK;
> +    size_t access_size = MAX(MIN(size, access_size_max), access_size_min);
> +    uint64_t access_mask = MAKE_64BIT_MASK(0, access_size * 8);
> +
> +    if (memory_region_big_endian(mr)) {
> +        for (size_t i = 0; i < size; i += access_size) {
> +            r |= fastpath(mr, addr + i, value, access_size,
> +                          (size - access_size - i) * 8, access_mask, attrs);
> +        }
> +    } else {
> +        for (size_t i = 0; i < size; i += access_size) {
> +            r |= fastpath(mr, addr + i, value, access_size,
> +                          i * 8, access_mask, attrs);
> +        }
> +    }
> +
> +    return r;
> +}
> +
>  static MemTxResult access_with_adjusted_size(hwaddr addr,
>                                        uint64_t *value,
>                                        unsigned size,
>                                        unsigned access_size_min,
>                                        unsigned access_size_max,
> -                                      MemTxResult (*access_fn)
> -                                                  (MemoryRegion *mr,
> -                                                   hwaddr addr,
> -                                                   uint64_t *value,
> -                                                   unsigned size,
> -                                                   signed shift,
> -                                                   uint64_t mask,
> -                                                   MemTxAttrs attrs),
> +                                      MemoryRegionAccessFn access_fn_read,
> +                                      MemoryRegionAccessFn access_fn_write,
> +                                      bool is_write,
>                                        MemoryRegion *mr,
>                                        MemTxAttrs attrs)
>  {
> -    uint64_t access_mask;
> -    unsigned access_size;
> -    unsigned i;
>      MemTxResult r = MEMTX_OK;
>      bool reentrancy_guard_applied = false;
> +    MemoryRegionAccessFn access_fn_fastpath =
> +        is_write ? access_fn_write : access_fn_read;
>  
>      if (!access_size_min) {
>          access_size_min = 1;
> @@ -560,20 +651,16 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
>          reentrancy_guard_applied = true;
>      }
>  
> -    /* FIXME: support unaligned access? */
> -    access_size = MAX(MIN(size, access_size_max), access_size_min);
> -    access_mask = MAKE_64BIT_MASK(0, access_size * 8);
> -    if (memory_region_big_endian(mr)) {
> -        for (i = 0; i < size; i += access_size) {
> -            r |= access_fn(mr, addr + i, value, access_size,
> -                        (size - access_size - i) * 8, access_mask, attrs);
> -        }
> +    if (is_access_fastpath(addr, size, access_size_min, access_size_max, mr)) {
> +        r |= access_fastpath(addr, value, size,
> +                             access_size_min, access_size_max, mr, attrs,
> +                             access_fn_fastpath);
>      } else {
> -        for (i = 0; i < size; i += access_size) {
> -            r |= access_fn(mr, addr + i, value, access_size, i * 8,
> -                        access_mask, attrs);
> -        }
> +        r |= access_emulation(addr, value, size,
> +                              access_size_min, access_size_max, mr, attrs,
> +                              access_fn_read, access_fn_write, is_write);
>      }
> +
>      if (mr->dev && reentrancy_guard_applied) {
>          mr->dev->mem_reentrancy_guard.engaged_in_io = false;
>      }
> @@ -1459,13 +1546,15 @@ static MemTxResult memory_region_dispatch_read1(MemoryRegion *mr,
>                                           mr->ops->impl.min_access_size,
>                                           mr->ops->impl.max_access_size,
>                                           memory_region_read_accessor,
> -                                         mr, attrs);
> +                                         memory_region_write_accessor,
> +                                         false, mr, attrs);
>      } else {
>          return access_with_adjusted_size(addr, pval, size,
>                                           mr->ops->impl.min_access_size,
>                                           mr->ops->impl.max_access_size,
>                                           memory_region_read_with_attrs_accessor,
> -                                         mr, attrs);
> +                                         memory_region_write_with_attrs_accessor,
> +                                         false, mr, attrs);
>      }
>  }
>  
> @@ -1553,15 +1642,17 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
>          return access_with_adjusted_size(addr, &data, size,
>                                           mr->ops->impl.min_access_size,
>                                           mr->ops->impl.max_access_size,
> -                                         memory_region_write_accessor, mr,
> -                                         attrs);
> +                                         memory_region_read_accessor,
> +                                         memory_region_write_accessor,
> +                                         true, mr, attrs);
>      } else {
>          return
>              access_with_adjusted_size(addr, &data, size,
>                                        mr->ops->impl.min_access_size,
>                                        mr->ops->impl.max_access_size,
> +                                      memory_region_read_with_attrs_accessor,
>                                        memory_region_write_with_attrs_accessor,
> -                                      mr, attrs);
> +                                      true, mr, attrs);
>      }
>  }
>  
> diff --git a/system/physmem.c b/system/physmem.c
> index dc1db3a384..ff444140a8 100644
> --- a/system/physmem.c
> +++ b/system/physmem.c
> @@ -2693,14 +2693,6 @@ int memory_access_size(MemoryRegion *mr, unsigned l, hwaddr addr)
>          access_size_max = 4;
>      }
>  
> -    /* Bound the maximum access by the alignment of the address.  */
> -    if (!mr->ops->impl.unaligned) {
> -        unsigned align_size_max = addr & -addr;
> -        if (align_size_max != 0 && align_size_max < access_size_max) {
> -            access_size_max = align_size_max;
> -        }
> -    }

Could you explain why this needs to be removed?

Again, I was expecting the change to be for a device that will have
unaligned==true first, so this shouldn't matter.  Then I wonder why this
behavior needs to change.  But I could be missing something.

Thanks,

> -
>      /* Don't attempt accesses larger than the maximum.  */
>      if (l > access_size_max) {
>          l = access_size_max;
> -- 
> 2.43.0
> 

-- 
Peter Xu
Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Tomoyuki HIROSE 1 year, 2 months ago
In this email, I explain what this patch set will resolve and give an
overview of it. I will respond to your specific code review comments
in a separate email.

On 2024/12/03 6:23, Peter Xu wrote:
> On Fri, Nov 08, 2024 at 12:29:46PM +0900, Tomoyuki HIROSE wrote:
>> The previous code ignored 'impl.unaligned' and handled unaligned
>> accesses as is. But this implementation could not emulate specific
>> registers of some devices that allow unaligned access such as xHCI
>> Host Controller Capability Registers.
> I have some comments that may be naive, please bear with me..
>
> Firstly, could you provide an example in the commit message, of what would
> start working after this patch?
Sorry, I'll describe what will start working in the next version of
this patch set. I'll also provide an example here.  After applying
this patch set, a read(addr=0x2, size=2) in the xHCI Host Controller
Capability Registers region will work correctly. For example, the read
result will return 0x0110 (version 1.1.0). Previously, a
read(addr=0x2, size=2) in the Capability Register region would return
0, which is incorrect. According to the xHCI specification, the
Capability Register region does not prohibit accesses of any size or
unaligned accesses.
> IIUC things like read(addr=0x2, size=8) should already be working before,
> but it'll be cut into four 2-byte read()s for unaligned=false, am I
> right?
Yes, I also think so. I think the operation read(addr=0x2, size=8) in
a MemoryRegion with impl.unaligned==false should be split into
multiple aligned read() operations. The access size should depend on
the region's 'impl.max_access_size' and 'impl.min_access_size'.
Actually, the comments in 'include/exec/memory.h' seem to confirm
this behavior:

```
     /* If true, unaligned accesses are supported.  Otherwise all accesses
      * are converted to (possibly multiple) naturally aligned accesses.
      */
     bool unaligned;
```
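
For example, here is a sketch of how that split could look for
read(addr=0x2, size=8) with 'impl.max_access_size == 2' on a
little-endian region ('ops_read_function' and 'opaque' are hypothetical
names, not a real API):

```c
/* Sketch only: split one 8-byte read into four aligned 2-byte impl
 * reads and assemble the result in little-endian order, mirroring the
 * existing fastpath loop in access_with_adjusted_size(). */
uint64_t result = 0;
for (unsigned i = 0; i < 8; i += 2) {
    uint64_t v = ops_read_function(opaque, 0x2 + i, 2);
    result |= (v & 0xffff) << (i * 8);
}
```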

The MemoryRegionOps struct in a MemoryRegion has two members, 'valid'
and 'impl'. I think 'valid' determines the behavior of the
MemoryRegion exposed to the guest, and 'impl' determines the behavior
of the MemoryRegion exposed to the QEMU memory region manager.

Consider the situation where we have a MemoryRegion with the following
parameters:

```
MemoryRegion mr = (MemoryRegion){
    //...
    .ops = &(const MemoryRegionOps){
        //...
        .read = ops_read_function,
        .write = ops_write_function,
        .valid.min_access_size = 4,
        .valid.max_access_size = 4,
        .valid.unaligned = true,
        .impl.min_access_size = 2,
        .impl.max_access_size = 2,
        .impl.unaligned = false,
    },
};
```

With this MemoryRegion 'mr', the guest can read(addr=0x1, size=4)
because 'valid.unaligned' is true.  But 'impl.unaligned' is false, so
the 'mr.ops->read()' function does not support addr=0x1, which is
unaligned. In this situation, we need to convert the unaligned access
to multiple aligned accesses, such as:

- mr.ops->read(addr=0x0, size=2)
- mr.ops->read(addr=0x2, size=2)
- mr.ops->read(addr=0x4, size=2)

After that, we should assemble the result of read(addr=0x1, size=4) from
the above mr.ops->read() results, I think.
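
A minimal sketch of that recombination, reusing the hypothetical
'ops_read_function' from the example above:

```c
/* Sketch only: emulate read(addr=0x1, size=4) on a little-endian
 * region whose ops handle only aligned 2-byte accesses. */
static uint32_t emulate_read_0x1_size4(void *opaque)
{
    uint8_t buf[6];                 /* covers the aligned bytes [0x0, 0x6) */

    for (hwaddr a = 0; a < 6; a += 2) {
        uint64_t v = ops_read_function(opaque, a, 2);  /* aligned read */
        stw_le_p(&buf[a], v);       /* keep the region's byte order */
    }
    /* Reassemble the 4 unaligned bytes starting at offset 0x1. */
    return ldl_le_p(&buf[1]);
}
```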

I will respond to the remaining points in a separate email.

Thanks,
Tomoyuki HIROSE
[snip]

Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Peter Xu 1 year, 2 months ago
On Fri, Dec 06, 2024 at 05:31:33PM +0900, Tomoyuki HIROSE wrote:
> In this email, I explain what this patch set will resolve and an
> overview of this patch set. I will respond to your specific code
> review comments in a separate email.

Yes, that's OK.

> 
> On 2024/12/03 6:23, Peter Xu wrote:
> > On Fri, Nov 08, 2024 at 12:29:46PM +0900, Tomoyuki HIROSE wrote:
> > > The previous code ignored 'impl.unaligned' and handled unaligned
> > > accesses as is. But this implementation could not emulate specific
> > > registers of some devices that allow unaligned access such as xHCI
> > > Host Controller Capability Registers.
> > I have some comments that may be naive, please bear with me..
> > 
> > Firstly, could you provide an example in the commit message, of what would
> > start working after this patch?
> Sorry, I'll describe what will start working in the next version of
> this patch set. I'll also provide an example here.  After applying
> this patch set, a read(addr=0x2, size=2) in the xHCI Host Controller
> Capability Registers region will work correctly. For example, the read
> result will return 0x0110 (version 1.1.0). Previously, a
> read(addr=0x2, size=2) in the Capability Register region would return
> 0, which is incorrect. According to the xHCI specification, the
> Capability Register region does not prohibit accesses of any size or
> unaligned accesses.

Thanks for the context, Tomoyuki.

I assume it's about xhci_cap_ops then.  If you agree we can also mention
xhci_cap_ops when dscribing it, so readers can easily reference the MR
attributes from the code alongside with understanding the use case.

Does it mean that it could also work if xhci_cap_ops.impl.min_access_size
can be changed to 2 (together with additional xhci_cap_read/write support)?

Note that I'm not saying it must do so even if it would work for xHCI, but
if the memory API change is only for one device, then it can still be
discussed about which option would be better on changing the device or the
core.

Meanwhile, if there are more use cases for impl.unaligned, it'll be nice
to share them together when describing the issue.  That would be very
persuasive input that a generic solution is needed.

> > IIUC things like read(addr=0x2, size=8) should already be working before,
> > but it'll be cut into four 2-byte read()s for unaligned=false, am I
> > right?
> Yes, I also think so. I think the operation read(addr=0x2, size=8) in
> a MemoryRegion with impl.unaligned==false should be split into
> multiple aligned read() operations. The access size should depend on
> the region's 'impl.max_access_size' and 'impl.min_access_size'.
> Actually, the comments in 'include/exec/memory.h' seem to confirm
> this behavior:
> 
> ```
>     /* If true, unaligned accesses are supported.  Otherwise all accesses
>      * are converted to (possibly multiple) naturally aligned accesses.
>      */
>     bool unaligned;
> ```
> 
> MemoryRegionOps struct in the MemoryRegion has two members, 'valid'
> and 'impl' . I think 'valid' determines the behavior of the
> MemoryRegion exposed to the guest, and 'impl' determines the behavior
> of the MemoryRegion exposed to the QEMU memory region manager.
> 
> Consider the situation where we have a MemoryRegion with the following
> parameters:
> 
> ```
> MemoryRegion mr = (MemoryRegion){
>     //...
>     .ops = &(const MemoryRegionOps){
>         //...
>         .read = ops_read_function,
>         .write = ops_write_function,
>         .valid.min_access_size = 4,
>         .valid.max_access_size = 4,
>         .valid.unaligned = true,
>         .impl.min_access_size = 2,
>         .impl.max_access_size = 2,
>         .impl.unaligned = false,
>     },
> };
> ```
> 
> With this MemoryRegion 'mr', the guest can read(addr=0x1, size=4)
> because 'valid.unaligned' is true.  But 'impl.unaligned' is false, so
> 'mr.ops->read()' function does not support addr=0x1, which is
> unaligned. In this situation, we need to convert the unaligned access
> to multiple aligned accesses, such as:
> 
> - mr.ops->read(addr=0x0, size=2)
> - mr.ops->read(addr=0x2, size=2)
> - mr.ops->read(addr=0x4, size=2)
> 
> After that, we should return a result of read(addr=0x1, size=4) from
> above mr.ops->read() results, I think.

Yes.  I agree with your analysis and understanding.

Thanks,

-- 
Peter Xu


Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Tomoyuki HIROSE 1 year, 2 months ago
Sorry for the late reply.

On 2024/12/07 1:42, Peter Xu wrote:
> On Fri, Dec 06, 2024 at 05:31:33PM +0900, Tomoyuki HIROSE wrote:
>> In this email, I explain what this patch set will resolve and an
>> overview of this patch set. I will respond to your specific code
>> review comments in a separate email.
> Yes, that's OK.
>
>> On 2024/12/03 6:23, Peter Xu wrote:
>>> On Fri, Nov 08, 2024 at 12:29:46PM +0900, Tomoyuki HIROSE wrote:
>>>> The previous code ignored 'impl.unaligned' and handled unaligned
>>>> accesses as is. But this implementation could not emulate specific
>>>> registers of some devices that allow unaligned access such as xHCI
>>>> Host Controller Capability Registers.
>>> I have some comments that may be naive, please bear with me..
>>>
>>> Firstly, could you provide an example in the commit message, of what would
>>> start working after this patch?
>> Sorry, I'll describe what will start working in the next version of
>> this patch set. I'll also provide an example here.  After applying
>> this patch set, a read(addr=0x2, size=2) in the xHCI Host Controller
>> Capability Registers region will work correctly. For example, the read
>> result will return 0x0110 (version 1.1.0). Previously, a
>> read(addr=0x2, size=2) in the Capability Register region would return
>> 0, which is incorrect. According to the xHCI specification, the
>> Capability Register region does not prohibit accesses of any size or
>> unaligned accesses.
> Thanks for the context, Tomoyuki.
>
> I assume it's about xhci_cap_ops then.  If you agree we can also mention
> xhci_cap_ops when describing it, so readers can easily reference the MR
> attributes from the code alongside with understanding the use case.
>
> Does it mean that it could also work if xhci_cap_ops.impl.min_access_size
> can be changed to 2 (together with additional xhci_cap_read/write support)?
>
> Note that I'm not saying it must do so even if it would work for xHCI, but
> if the memory API change is only for one device, then it can still be
> discussed about which option would be better on changing the device or the
> core.
>
> Meanwhile, if there's more use cases on the impl.unaligned, it'll be nice
> to share together when describing the issue.  That will be very persuasive
> input that a generic solution is needed.
OK, I understand. I will try to describe 'xhci_cap_ops' and related topics.
Currently, the actual 'xhci_cap_ops' code is as follows:

```
static const MemoryRegionOps xhci_cap_ops = {
     .read = xhci_cap_read,
     .write = xhci_cap_write,
     .valid.min_access_size = 1,
     .valid.max_access_size = 4,
     .impl.min_access_size = 4,
     .impl.max_access_size = 4,
     .endianness = DEVICE_LITTLE_ENDIAN,
};
```

According to the above code, the guest can access this MemoryRegion
with 1-4 bytes.  'valid.unaligned' is also not explicitly defined, so
it is treated as 'false'. This means the guest can access this MR with
1-4 bytes, as long as the access is aligned. However, the xHCI
specification does not prohibit unaligned accesses.

Simply adding '.valid.unaligned = true' will not resolve this problem
because 'impl.unaligned' is also 'false'. In this situation, where
'valid.unaligned' is 'true' but 'impl.unaligned' is 'false', we need
to emulate unaligned accesses by splitting them into multiple aligned
accesses.
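
For instance, an emulated read(addr=0x2, size=2) (HCIVERSION) could be
composed from one aligned 4-byte read, roughly as below (sketch only;
the scaffolding around the real xhci_cap_read() is illustrative):

```c
/* Sketch: satisfy the guest's read(addr=0x2, size=2) given
 * impl.min/max_access_size == 4 on this little-endian region. */
static uint16_t read_hciversion(void *opaque)
{
    /* Round down to the 4-byte boundary the implementation requires. */
    uint64_t dword = xhci_cap_read(opaque, 0x0, 4);

    /* HCIVERSION occupies bytes 2-3 of the first dword. */
    return (dword >> 16) & 0xffff;   /* 0x0110 for xHCI 1.1.0 */
}
```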

An alternative solution would be to fix 'xhci_cap_{read,write}',
update '.impl.min_access_size = 1', and set '.impl.unaligned = true'
to allow the guest to perform unaligned accesses with 1-4 bytes. With
this solution, we wouldn't need to modify core memory code.

However, applying this approach throughout the QEMU codebase would
increase the complexity of device implementations. If a device allows
unaligned guest access to its register region, the device implementer
would need to handle unaligned accesses explicitly. Additionally,
the distinction between 'valid' and 'impl' would become almost
meaningless, making it unclear why they are separated.

"Ideally", we could consider one of the following changes:

1. Introduce an emulation mechanism for unaligned accesses using
    multiple aligned accesses.
2. Remove either 'valid' or 'impl' and unify their functionality.

Solution 2 would require extensive changes to the codebase and memory
API, making it impractical.  Solution 1 seems to align with QEMU's
original intentions. Actually, there is a comment in 'memory.c' that
states:

`/* FIXME: support unaligned access? */`

This patch set implements solution 1. If there is a better way to
resolve these issues, I would greatly appreciate your suggestions.

Thanks,
Tomoyuki HIROSE
[snip]

Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Peter Xu 1 year, 2 months ago
On Wed, Dec 11, 2024 at 06:35:57PM +0900, Tomoyuki HIROSE wrote:
> Sorry for late reply.
> 
> On 2024/12/07 1:42, Peter Xu wrote:
> > On Fri, Dec 06, 2024 at 05:31:33PM +0900, Tomoyuki HIROSE wrote:
> > > In this email, I explain what this patch set will resolve and an
> > > overview of this patch set. I will respond to your specific code
> > > review comments in a separate email.
> > Yes, that's OK.
> > 
> > > On 2024/12/03 6:23, Peter Xu wrote:
> > > > On Fri, Nov 08, 2024 at 12:29:46PM +0900, Tomoyuki HIROSE wrote:
> > > > > The previous code ignored 'impl.unaligned' and handled unaligned
> > > > > accesses as is. But this implementation could not emulate specific
> > > > > registers of some devices that allow unaligned access such as xHCI
> > > > > Host Controller Capability Registers.
> > > > I have some comments that may be naive, please bear with me..
> > > > 
> > > > Firstly, could you provide an example in the commit message, of what would
> > > > start working after this patch?
> > > Sorry, I'll describe what will start working in the next version of
> > > this patch set. I'll also provide an example here.  After applying
> > > this patch set, a read(addr=0x2, size=2) in the xHCI Host Controller
> > > Capability Registers region will work correctly. For example, the read
> > > result will return 0x0110 (version 1.1.0). Previously, a
> > > read(addr=0x2, size=2) in the Capability Register region would return
> > > 0, which is incorrect. According to the xHCI specification, the
> > > Capability Register region does not prohibit accesses of any size or
> > > unaligned accesses.
> > Thanks for the context, Tomoyuki.
> > 
> > I assume it's about xhci_cap_ops then.  If you agree we can also mention
> > xhci_cap_ops when describing it, so readers can easily reference the MR
> > attributes from the code alongside with understanding the use case.
> > 
> > Does it mean that it could also work if xhci_cap_ops.impl.min_access_size
> > can be changed to 2 (together with additional xhci_cap_read/write support)?
> > 
> > Note that I'm not saying it must do so even if it would work for xHCI, but
> > if the memory API change is only for one device, then it can still be
> > discussed about which option would be better on changing the device or the
> > core.
> > 
> > Meanwhile, if there's more use cases on the impl.unaligned, it'll be nice
> > to share together when describing the issue.  That will be very persuasive
> > input that a generic solution is needed.
> OK, I understand. I will try to describe 'xhci_cap_ops' and related topics.

Thanks.

> Currently, the actual 'xhci_cap_ops' code is as follows:
> 
> ```
> static const MemoryRegionOps xhci_cap_ops = {
>     .read = xhci_cap_read,
>     .write = xhci_cap_write,
>     .valid.min_access_size = 1,
>     .valid.max_access_size = 4,
>     .impl.min_access_size = 4,
>     .impl.max_access_size = 4,
>     .endianness = DEVICE_LITTLE_ENDIAN,
> };
> ```
> 
> According to the above code, the guest can access this MemoryRegion
> with 1-4 bytes.  'valid.unaligned' is also not explicitly defined, so
> it is treated as 'false'. This means the guest can access this MR with
> 1-4 bytes, as long as the access is aligned. However, the xHCI
> specification does not prohibit unaligned accesses.
> 
> Simply adding '.valid.unaligned = true' will not resolve this problem
> because 'impl.unaligned' is also 'false'. In this situation, where
> 'valid.unaligned' is 'true' but 'impl.unaligned' is 'false', we need
> to emulate unaligned accesses by splitting them into multiple aligned
> accesses.

Correct.

> 
> An alternative solution would be to fix 'xhci_cap_{read,write}',
> update '.impl.min_access_size = 1', and set '.impl.unaligned = true'
> to allow the guest to perform unaligned accesses with 1-4 bytes. With
> this solution, we wouldn't need to modify core memory code.
> 
> However, applying this approach throughout the QEMU codebase would
> increase the complexity of device implementations. If a device allows
> unaligned guest access to its register region, the device implementer
> would need to handle unaligned accesses explicitly. Additionally,
> the distinction between 'valid' and 'impl' would become almost
> meaningless, making it unclear why they are separated.

I get it now, let's stick with the core memory change.

> 
> "Ideally", we could consider one of the following changes:
> 
> 1. Introduce an emulation mechanism for unaligned accesses using
>    multiple aligned accesses.
> 2. Remove either 'valid' or 'impl' and unify their functionality.
> 
> Solution 2 would require extensive changes to the codebase and memory
> API, making it impractical. 

Why is it impractical?  Let me explain my question..

Firstly, valid.unaligned makes perfect sense to me.  That describes whether
the device emulation allows unaligned access at all.  So I do think we need
this, and yes when xHCI controller supports unaligned access, this is the
flag to be set TRUE instead of FALSE.

However, impl.unaligned is confusing to me.

From literal POV, it says, "the MR ops implemented unaligned access".

If you check my initial reply to this patch, I had a similar question: from
such definition, whenever a device emulation sets impl.unaligned=true, I
think it means we should simply pass over the MR request to the ops, no
matter if it's aligned or not, especially when it's not aligned memory core
shouldn't need to do any trick on amplifying the MR access, simply because
the device said it supports unaligned access in its implementation.  That's
the only meaningful definition of impl.unaligned that I can think of so far.

However, after reading more about the problem, I don't think any MR ops
would want to implement such complicated logic; the norm should be like
the xHCI MR ops, which support only aligned access, and then the memory
core can hopefully always convert an unaligned access into one or
multiple aligned accesses internally.

IOW, it makes more sense to me that we keep valid.unaligned, but drop
impl.unaligned.  Would that make sense to you (and Peter)?  That kind of
matches the comment you quoted below saying that unaligned access
is broken - I'm not 100% sure whether it's talking about impl.unaligned,
but it would make sense if so.

Meanwhile, I do see that we already have two impl.unaligned=true users:

hw/pci-host/raven.c:    .impl.unaligned = true,
system/ioport.c:    .impl.unaligned = true,

I actually have no idea whether they're working at all if accesses can be
unaligned internally, and how they work, given that impl.unaligned seems
to be totally broken.

> Solution 1 seems to align with QEMU's
> original intentions. Actually, there is a comment in 'memory.c' that
> states:
> 
> `/* FIXME: support unaligned access? */`
> 
> This patch set implements solution 1. If there is a better way to
> resolve these issues, I would greatly appreciate your suggestions.

I think if my above understanding is correct, I can kind of understand your
solution now.  But then I wonder whether we should already drop
impl.unaligned with your solution.

Also, I am not 100% sure yet how the amplification of the
accesses (as proposed in your patch) would have side effects on the device
emulation.  For example, read(0x2, 0x4) with impl.access_size_min=4 now
will be amplified to two continuous:

  read(0x0, 0x4)
  read(0x4, 0x4)

Then there will be side effects of reading (addr=0x0, size=0x2) portion,
and (addr=0x6, size=0x2) portion, that is not part of the request.  Maybe
it's as simple as: when device emulation has such side effect, it should
always set valid.unaligned=false already.
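
To be concrete about the recombination, something like this is what I
have in mind (a rough sketch, not your patch's code; 'aligned_read' is
a hypothetical stand-in for the MR ops read callback, little-endian
layout assumed):

```
static uint64_t emulated_unaligned_read(hwaddr addr, unsigned size)
{
    hwaddr base = addr & ~(hwaddr)0x3;        /* 0x0 for addr 0x2 */
    uint64_t lo = aligned_read(base, 4);      /* covers 0x0..0x3 */
    uint64_t hi = aligned_read(base + 4, 4);  /* covers 0x4..0x7 */
    uint64_t both = lo | (hi << 32);
    unsigned shift = (addr - base) * 8;       /* 16 bits here */

    return (both >> shift) & MAKE_64BIT_MASK(0, size * 8);
}
```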

Thanks,

-- 
Peter Xu


Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Tomoyuki HIROSE 1 year, 1 month ago
On 2024/12/12 7:54, Peter Xu wrote:
> On Wed, Dec 11, 2024 at 06:35:57PM +0900, Tomoyuki HIROSE wrote:
>> Sorry for late reply.
>>
>> On 2024/12/07 1:42, Peter Xu wrote:
>>> On Fri, Dec 06, 2024 at 05:31:33PM +0900, Tomoyuki HIROSE wrote:
>>>> In this email, I explain what this patch set will resolve and an
>>>> overview of this patch set. I will respond to your specific code
>>>> review comments in a separate email.
>>> Yes, that's OK.
>>>
>>>> On 2024/12/03 6:23, Peter Xu wrote:
>>>>> On Fri, Nov 08, 2024 at 12:29:46PM +0900, Tomoyuki HIROSE wrote:
>>>>>> The previous code ignored 'impl.unaligned' and handled unaligned
>>>>>> accesses as is. But this implementation could not emulate specific
>>>>>> registers of some devices that allow unaligned access such as xHCI
>>>>>> Host Controller Capability Registers.
>>>>> I have some comments that may be naive, please bear with me..
>>>>>
>>>>> Firstly, could you provide an example in the commit message, of what would
>>>>> start working after this patch?
>>>> Sorry, I'll describe what will start working in the next version of
>>>> this patch set. I'll also provide an example here.  After applying
>>>> this patch set, a read(addr=0x2, size=2) in the xHCI Host Controller
>>>> Capability Registers region will work correctly. For example, the read
>>>> result will return 0x0110 (version 1.1.0). Previously, a
>>>> read(addr=0x2, size=2) in the Capability Register region would return
>>>> 0, which is incorrect. According to the xHCI specification, the
>>>> Capability Register region does not prohibit accesses of any size or
>>>> unaligned accesses.
>>> Thanks for the context, Tomoyuki.
>>>
>>> I assume it's about xhci_cap_ops then.  If you agree we can also mention
>>> xhci_cap_ops when describing it, so readers can easily reference the MR
>>> attributes from the code alongside with understanding the use case.
>>>
>>> Does it mean that it could also work if xhci_cap_ops.impl.min_access_size
>>> can be changed to 2 (together with additional xhci_cap_read/write support)?
>>>
>>> Note that I'm not saying it must do so even if it would work for xHCI, but
>>> if the memory API change is only for one device, then it can still be
>>> discussed about which option would be better on changing the device or the
>>> core.
>>>
>>> Meanwhile, if there's more use cases on the impl.unaligned, it'll be nice
>>> to share together when describing the issue.  That will be very persuasive
>>> input that a generic solution is needed.
>> OK, I understand. I will try to describe 'xhci_cap_ops' and related topics.
> Thanks.
>
>> Currently, the actual 'xhci_cap_ops' code is as follows:
>>
>> ```
>> static const MemoryRegionOps xhci_cap_ops = {
>>      .read = xhci_cap_read,
>>      .write = xhci_cap_write,
>>      .valid.min_access_size = 1,
>>      .valid.max_access_size = 4,
>>      .impl.min_access_size = 4,
>>      .impl.max_access_size = 4,
>>      .endianness = DEVICE_LITTLE_ENDIAN,
>> };
>> ```
>>
>> According to the above code, the guest can access this MemoryRegion
>> with 1-4 bytes.  'valid.unaligned' is also not explicitly defined, so
>> it is treated as 'false'. This means the guest can access this MR with
>> 1-4 bytes, as long as the access is aligned. However, the xHCI
>> specification does not prohibit unaligned accesses.
>>
>> Simply adding '.valid.unaligned = true' will not resolve this problem
>> because 'impl.unaligned' is also 'false'. In this situation, where
>> 'valid.unaligned' is 'true' but 'impl.unaligned' is 'false', we need
>> to emulate unaligned accesses by splitting them into multiple aligned
>> accesses.
> Correct.
>
>> An alternative solution would be to fix 'xhci_cap_{read,write}',
>> update '.impl.min_access_size = 1', and set '.impl.unaligned = true'
>> to allow the guest to perform unaligned accesses with 1-4 bytes. With
>> this solution, we wouldn't need to modify core memory code.
>>
>> However, applying this approach throughout the QEMU codebase would
>> increase the complexity of device implementations. If a device allows
>> unaligned guest access to its register region, the device implementer
>> would need to handle unaligned accesses explicitly. Additionally,
>> the distinction between 'valid' and 'impl' would become almost
>> meaningless, making it unclear why they are separated.
> I get it now, let's stick with the core memory change.
>
>> "Ideally", we could consider one of the following changes:
>>
>> 1. Introduce an emulation mechanism for unaligned accesses using
>>     multiple aligned accesses.
>> 2. Remove either 'valid' or 'impl' and unify their functionality.
>>
>> Solution 2 would require extensive changes to the codebase and memory
>> API, making it impractical.
> Why is it impractical?  Let me explain my question..
>
> Firstly, valid.unaligned makes perfect sense to me.  That describes whether
> the device emulation allows unaligned access at all.  So I do think we need
> this, and yes when xHCI controller supports unaligned access, this is the
> flag to be set TRUE instead of FALSE.
>
> However, impl.unaligned is confusing to me.
>
>  From literal POV, it says, "the MR ops implemented unaligned access".
>
> If you check my initial reply to this patch, I had a similar question: from
> such definition, whenever a device emulation sets impl.unaligned=true, I
> think it means we should simply pass over the MR request to the ops, no
> matter if it's aligned or not; especially when it's not aligned, the memory
> core shouldn't need any trick to amplify the MR access, simply because
> the device said it supports unaligned access in its implementation.  That's
> the only meaningful definition of impl.unaligned that I can think of so far.

I have the same understanding.  I found a relevant section in the
documentation at 'docs/devel/memory.rst':

```
In addition various constraints can be supplied to control how these
callbacks are called:

- .valid.min_access_size, .valid.max_access_size define the access sizes
   (in bytes) which the device accepts; accesses outside this range will
   have device and bus specific behaviour (ignored, or machine check)
- .valid.unaligned specifies that the *device being modelled* supports
   unaligned accesses; if false, unaligned accesses will invoke the
   appropriate bus or CPU specific behaviour.
- .impl.min_access_size, .impl.max_access_size define the access sizes
   (in bytes) supported by the *implementation*; other access sizes will be
   emulated using the ones available.  For example a 4-byte write will be
   emulated using four 1-byte writes, if .impl.max_access_size = 1.
- .impl.unaligned specifies that the *implementation* supports unaligned
   accesses; if false, unaligned accesses will be emulated by two aligned
   accesses.
```
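
As a concrete illustration of the size rule quoted above, the 4-byte
write example could look like this (a sketch only; 'byte_write' is a
hypothetical stand-in for a 1-byte ops write callback, little-endian
order assumed):

```
static void emulated_write4(hwaddr addr, uint32_t val)
{
    /* A 4-byte write split into four 1-byte writes, as happens
     * when impl.max_access_size == 1. */
    for (unsigned i = 0; i < 4; i++) {
        byte_write(addr + i, (val >> (i * 8)) & 0xff);
    }
}
```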

> However, after reading more about the problem, I don't think any MR ops
> would want to implement such complicated logic; the norm should be like
> the xHCI MR ops, which support only aligned access, and then the memory
> core can hopefully always convert an unaligned access into one or
> multiple aligned accesses internally.
>
> IOW, it makes more sense to me that we keep valid.unaligned, but drop
> impl.unaligned.  Would that make sense to you (and Peter)?  That kind of
> matches the comment you quoted below saying that unaligned access
> is broken - I'm not 100% sure whether it's talking about impl.unaligned,
> but it would make sense if so.

I agree with you.

> Meanwhile, I do see that we already have two impl.unaligned=true users:
>
> hw/pci-host/raven.c:    .impl.unaligned = true,
> system/ioport.c:    .impl.unaligned = true,
>
> I actually have no idea whether they're working at all if accesses can be
> unaligned internally, and how they work, given that impl.unaligned seems
> to be totally broken.

I initially assumed there would be more users, so I expected that a
lot of changes would be needed.  MR can be categorized into the
following patterns:

1. `impl.unaligned == true`
2. `impl.unaligned == false` and `valid.unaligned == false`
3. `impl.unaligned == false` and `valid.unaligned == true`

- Pattern 1: No special handling is required since the implementation
   supports unaligned accesses. The MR can handle both aligned and
   unaligned accesses seamlessly.
- Pattern 2: No additional handling is needed because unaligned
   accesses are invalid in this MR. Any unaligned access is treated as
   an illegal operation.
- Pattern 3: This is the only pattern that requires consideration. We
   must emulate unaligned accesses using aligned accesses.
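
Roughly, the dispatch over these patterns that I have in mind looks
like this (a sketch, not the actual QEMU code; 'dispatch_to_ops' and
'emulate_unaligned' are hypothetical helpers, and in real QEMU the
validity check happens earlier in the access path):

```
static MemTxResult dispatch_access(MemoryRegion *mr, hwaddr addr,
                                   uint64_t *value, unsigned size,
                                   MemTxAttrs attrs)
{
    bool aligned = (addr % size) == 0;

    if (aligned || mr->ops->impl.unaligned) {
        /* Pattern 1, or any aligned access: pass straight through. */
        return dispatch_to_ops(mr, addr, value, size, attrs);
    }
    if (mr->ops->valid.unaligned) {
        /* Pattern 3: emulate with multiple aligned accesses. */
        return emulate_unaligned(mr, addr, value, size, attrs);
    }
    /* Pattern 2: an unaligned access is simply invalid here. */
    return MEMTX_DECODE_ERROR;
}
```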

I searched by keyword "unaligned = true" and got the following result:

```
$ rg "unaligned = true"
system/memory.c
1398:        .unaligned = true,
1403:        .unaligned = true,

system/ioport.c
223:    .valid.unaligned = true,
224:    .impl.unaligned = true,

hw/xtensa/mx_pic.c
271:        .unaligned = true,

hw/pci-host/raven.c
203:    .impl.unaligned = true,
204:    .valid.unaligned = true,

hw/riscv/riscv-iommu.c
2108:        .unaligned = true,

hw/ssi/npcm7xx_fiu.c
256:        .unaligned = true,

hw/cxl/cxl-host.c
285:        .unaligned = true,
290:        .unaligned = true,

hw/i386/xen/xen_platform.c
412:        .unaligned = true,
417:        .unaligned = true,

hw/display/vmware_vga.c
1306:        .unaligned = true,
1309:        .unaligned = true,
```

In these results, I found two instances of pattern 3 in the codebase:

- hw/xtensa/mx_pic.c
- hw/ssi/npcm7xx_fiu.c

```
static const MemoryRegionOps xtensa_mx_pic_ops = {
     .read = xtensa_mx_pic_ext_reg_read,
     .write = xtensa_mx_pic_ext_reg_write,
     .endianness = DEVICE_NATIVE_ENDIAN,
     .valid = {
         .unaligned = true,
     },
};
```

```
static const MemoryRegionOps npcm7xx_fiu_flash_ops = {
     .read = npcm7xx_fiu_flash_read,
     .write = npcm7xx_fiu_flash_write,
     .endianness = DEVICE_LITTLE_ENDIAN,
     .valid = {
         .min_access_size = 1,
         .max_access_size = 8,
         .unaligned = true,
     },
};
```

Note that these implementations are implicitly 'impl.unaligned ==
false'; the 'impl.unaligned' field simply does not exist in these
cases. However, it is possible that these implementations inherently
support unaligned accesses.

To summarize, if we decide to remove the 'impl' field, we might need
to revisit and change the MR implementations in these files.

>> Solution 1 seems to align with QEMU's
>> original intentions. Actually, there is a comment in 'memory.c' that
>> states:
>>
>> `/* FIXME: support unaligned access? */`
>>
>> This patch set implements solution 1. If there is a better way to
>> resolve these issues, I would greatly appreciate your suggestions.
> I think if my above understanding is correct, I can kind of understand your
> solution now.  But then I wonder whether we should already drop
> impl.unaligned with your solution.
>
> Also, I am not 100% sure yet how the amplification of the
> accesses (as proposed in your patch) would have side effects on the device
> emulation.  For example, read(0x2, 0x4) with impl.access_size_min=4 now
> will be amplified to two continuous:
>
>    read(0x0, 0x4)
>    read(0x4, 0x4)
>
> Then there will be side effects of reading (addr=0x0, size=0x2) portion,
> and (addr=0x6, size=0x2) portion, that is not part of the request.  Maybe
> it's as simple as: when device emulation has such side effect, it should
> always set valid.unaligned=false already.

There is also a potential issue regarding side effects. Consider a
device where a register value changes upon a read access. Assume the
device has the following register map:

```
31                       8        0 (bit)
+---------------------------------+
|         Reg1(lo)       |  Reg0  | 0 byte
+---------------------------------+
|                        |Reg1(hi)| 4 byte
```

In this case, let’s assume that Reg0 is a register whose value
changes whenever it is read.
Now, if the guest issues a read(addr=0x1, size=4) on this device's
MR(impl.unaligned=false, valid.unaligned=true), the unaligned access
must be split into two aligned accesses:

1. read(addr=0x0, size=4)
2. read(addr=0x4, size=4)

However, this results in Reg0 being read as part of the first aligned
access, potentially triggering its side effect. This unintended side
effect violates the semantics of the original unaligned read. If we
don't want to allow this, we should set 'valid.unaligned = false'.
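
For illustration, a hypothetical read callback with such a side effect
could look like this ('DemoState' and its fields are made up for this
example):

```
static uint64_t demo_read(void *opaque, hwaddr addr, unsigned size)
{
    DemoState *s = opaque;

    if (addr == 0x0) {                /* covers Reg0 and Reg1(lo) */
        uint64_t val = s->reg0 | ((uint64_t)s->reg1_lo << 8);
        s->reg0 = 0;                  /* read-to-clear side effect */
        return val;
    }
    return s->reg1_hi;                /* addr == 0x4, Reg1(hi) */
}
```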

Thanks,
Tomoyuki HIROSE

> Thanks,
>

Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Peter Xu 1 year, 1 month ago
On Thu, Dec 12, 2024 at 02:39:41PM +0900, Tomoyuki HIROSE wrote:
> On 2024/12/12 7:54, Peter Xu wrote:
> > On Wed, Dec 11, 2024 at 06:35:57PM +0900, Tomoyuki HIROSE wrote:
> > > Sorry for late reply.
> > > 
> > > On 2024/12/07 1:42, Peter Xu wrote:
> > > > On Fri, Dec 06, 2024 at 05:31:33PM +0900, Tomoyuki HIROSE wrote:
> > > > > In this email, I explain what this patch set will resolve and an
> > > > > overview of this patch set. I will respond to your specific code
> > > > > review comments in a separate email.
> > > > Yes, that's OK.
> > > > 
> > > > > On 2024/12/03 6:23, Peter Xu wrote:
> > > > > > On Fri, Nov 08, 2024 at 12:29:46PM +0900, Tomoyuki HIROSE wrote:
> > > > > > > The previous code ignored 'impl.unaligned' and handled unaligned
> > > > > > > accesses as is. But this implementation could not emulate specific
> > > > > > > registers of some devices that allow unaligned access such as xHCI
> > > > > > > Host Controller Capability Registers.
> > > > > > I have some comments that may be naive, please bear with me..
> > > > > > 
> > > > > > Firstly, could you provide an example in the commit message, of what would
> > > > > > start working after this patch?
> > > > > Sorry, I'll describe what will start working in the next version of
> > > > > this patch set. I'll also provide an example here.  After applying
> > > > > this patch set, a read(addr=0x2, size=2) in the xHCI Host Controller
> > > > > Capability Registers region will work correctly. For example, the read
> > > > > result will return 0x0110 (version 1.1.0). Previously, a
> > > > > read(addr=0x2, size=2) in the Capability Register region would return
> > > > > 0, which is incorrect. According to the xHCI specification, the
> > > > > Capability Register region does not prohibit accesses of any size or
> > > > > unaligned accesses.
> > > > Thanks for the context, Tomoyuki.
> > > > 
> > > > I assume it's about xhci_cap_ops then.  If you agree we can also mention
> > > > xhci_cap_ops when describing it, so readers can easily reference the MR
> > > > attributes from the code alongside with understanding the use case.
> > > > 
> > > > Does it mean that it could also work if xhci_cap_ops.impl.min_access_size
> > > > can be changed to 2 (together with additional xhci_cap_read/write support)?
> > > > 
> > > > Note that I'm not saying it must do so even if it would work for xHCI, but
> > > > if the memory API change is only for one device, then it can still be
> > > > discussed about which option would be better on changing the device or the
> > > > core.
> > > > 
> > > > Meanwhile, if there's more use cases on the impl.unaligned, it'll be nice
> > > > to share together when describing the issue.  That will be very persuasive
> > > > input that a generic solution is needed.
> > > OK, I understand. I will try to describe 'xhci_cap_ops' and related topics.
> > Thanks.
> > 
> > > Currently, the actual 'xhci_cap_ops' code is as follows:
> > > 
> > > ```
> > > static const MemoryRegionOps xhci_cap_ops = {
> > >      .read = xhci_cap_read,
> > >      .write = xhci_cap_write,
> > >      .valid.min_access_size = 1,
> > >      .valid.max_access_size = 4,
> > >      .impl.min_access_size = 4,
> > >      .impl.max_access_size = 4,
> > >      .endianness = DEVICE_LITTLE_ENDIAN,
> > > };
> > > ```
> > > 
> > > According to the above code, the guest can access this MemoryRegion
> > > with 1-4 bytes.  'valid.unaligned' is also not explicitly defined, so
> > > it is treated as 'false'. This means the guest can access this MR with
> > > 1-4 bytes, as long as the access is aligned. However, the xHCI
> > > specification does not prohibit unaligned accesses.
> > > 
> > > Simply adding '.valid.unaligned = true' will not resolve this problem
> > > because 'impl.unaligned' is also 'false'. In this situation, where
> > > 'valid.unaligned' is 'true' but 'impl.unaligned' is 'false', we need
> > > to emulate unaligned accesses by splitting them into multiple aligned
> > > accesses.
> > Correct.
> > 
> > > An alternative solution would be to fix 'xhci_cap_{read,write}',
> > > update '.impl.min_access_size = 1', and set '.impl.unaligned = true'
> > > to allow the guest to perform unaligned accesses with 1-4 bytes. With
> > > this solution, we wouldn't need to modify core memory code.
> > > 
> > > However, applying this approach throughout the QEMU codebase would
> > > increase the complexity of device implementations. If a device allows
> > > unaligned guest access to its register region, the device implementer
> > > would need to handle unaligned accesses explicitly. Additionally,
> > > the distinction between 'valid' and 'impl' would become almost
> > > meaningless, making it unclear why they are separated.
> > I get it now, let's stick with the core memory change.
> > 
> > > "Ideally", we could consider one of the following changes:
> > > 
> > > 1. Introduce an emulation mechanism for unaligned accesses using
> > >     multiple aligned accesses.
> > > 2. Remove either 'valid' or 'impl' and unify their functionality.
> > > 
> > > Solution 2 would require extensive changes to the codebase and memory
> > > API, making it impractical.
> > Why is it impractical?  Let me explain my question..
> > 
> > Firstly, valid.unaligned makes perfect sense to me.  That describes whether
> > the device emulation allows unaligned access at all.  So I do think we need
> > this, and yes when xHCI controller supports unaligned access, this is the
> > flag to be set TRUE instead of FALSE.
> > 
> > However, impl.unaligned is confusing to me.
> > 
> >  From literal POV, it says, "the MR ops implemented unaligned access".
> > 
> > If you check my initial reply to this patch, I had a similar question: from
> > such definition, whenever a device emulation sets impl.unaligned=true, I
> > think it means we should simply pass over the MR request to the ops, no
> > matter if it's aligned or not; especially when it's not aligned, the memory
> > core shouldn't need any trick to amplify the MR access, simply because
> > the device said it supports unaligned access in its implementation.  That's
> > the only meaningful definition of impl.unaligned that I can think of so far.
> 
> I have the same understanding.  I found a relevant section in the
> documentation at 'docs/devel/memory.rst':
> 
> ```
> In addition various constraints can be supplied to control how these
> callbacks are called:
> 
> - .valid.min_access_size, .valid.max_access_size define the access sizes
>   (in bytes) which the device accepts; accesses outside this range will
>   have device and bus specific behaviour (ignored, or machine check)
> - .valid.unaligned specifies that the *device being modelled* supports
>   unaligned accesses; if false, unaligned accesses will invoke the
>   appropriate bus or CPU specific behaviour.
> - .impl.min_access_size, .impl.max_access_size define the access sizes
>   (in bytes) supported by the *implementation*; other access sizes will be
>   emulated using the ones available.  For example a 4-byte write will be
>   emulated using four 1-byte writes, if .impl.max_access_size = 1.
> - .impl.unaligned specifies that the *implementation* supports unaligned
>   accesses; if false, unaligned accesses will be emulated by two aligned
>   accesses.
> ```

Ah yes.

> 
> > However, after reading more about the problem, I don't think any MR ops
> > would want to implement such complicated logic; the norm should be like
> > the xHCI MR ops, which support only aligned access, and then the memory
> > core can hopefully always convert an unaligned access into one or
> > multiple aligned accesses internally.
> > 
> > IOW, it makes more sense to me that we keep valid.unaligned, but drop
> > impl.unaligned.  Would that make sense to you (and Peter)?  That kind of
> > matches the comment you quoted below saying that unaligned access
> > is broken - I'm not 100% sure whether it's talking about impl.unaligned,
> > but it would make sense if so.
> 
> I agree with you.
> 
> > Meanwhile, I do see that we already have two impl.unaligned=true users:
> > 
> > hw/pci-host/raven.c:    .impl.unaligned = true,
> > system/ioport.c:    .impl.unaligned = true,
> > 
> > I actually have no idea whether they're working at all if accesses can be
> > unaligned internally, and how they work, given that impl.unaligned seems
> > to be totally broken.
> 
> I initially assumed there would be more users, so I expected that a
> lot of changes would be needed.  MR can be categorized into the
> following patterns:
> 
> 1. `impl.unaligned == true`

From your description below, I suppose you meant:

  1. `impl.unaligned == true` and `valid.unaligned == true`

That may still be worth spelling out, because I do see one instance of
pattern 4, which is:

  4. `impl.unaligned == true` and `valid.unaligned == false`

See:

static const MemoryRegionOps riscv_iommu_trap_ops = {
    .read_with_attrs = riscv_iommu_trap_read,
    .write_with_attrs = riscv_iommu_trap_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .impl = {
        .min_access_size = 4,
        .max_access_size = 8,
        .unaligned = true,
    },
    .valid = {
        .min_access_size = 4,
        .max_access_size = 8,
    }
};

Even though I don't think it's a valid pattern..  I don't see how that
could differ in behavior from pattern 2 you listed below, if the upper
layer should always have rejected unaligned access.  So maybe it really
should have reported impl.unaligned=false.
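
Concretely (a sketch, not a tested change), that would mean dropping
the '.unaligned = true' line from its .impl block:

```
    .impl = {
        .min_access_size = 4,
        .max_access_size = 8,
        /* .unaligned left false: unaligned accesses never reach the
         * ops anyway, since valid.unaligned is false. */
    },
```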

> 2. `impl.unaligned == false` and `valid.unaligned == false`
> 3. `impl.unaligned == false` and `valid.unaligned == true`
> 
> - Pattern 1: No special handling is required since the implementation
>   supports unaligned accesses. The MR can handle both aligned and
>   unaligned accesses seamlessly.
> - Pattern 2: No additional handling is needed because unaligned
>   accesses are invalid in this MR. Any unaligned access is treated as
>   an illegal operation.
> - Pattern 3: This is the only pattern that requires consideration. We
>   must emulate unaligned accesses using aligned accesses.
> 
> I searched by keyword "unaligned = true" and got the following result:

Indeed I missed the ".impl = { .unaligned = XXX ... }" cases..

> 
> ```
> $ rg "unaligned = true"
> system/memory.c
> 1398:        .unaligned = true,
> 1403:        .unaligned = true,
> 
> system/ioport.c
> 223:    .valid.unaligned = true,
> 224:    .impl.unaligned = true,
> 
> hw/xtensa/mx_pic.c
> 271:        .unaligned = true,
> 
> hw/pci-host/raven.c
> 203:    .impl.unaligned = true,
> 204:    .valid.unaligned = true,
> 
> hw/riscv/riscv-iommu.c
> 2108:        .unaligned = true,
> 
> hw/ssi/npcm7xx_fiu.c
> 256:        .unaligned = true,
> 
> hw/cxl/cxl-host.c
> 285:        .unaligned = true,
> 290:        .unaligned = true,
> 
> hw/i386/xen/xen_platform.c
> 412:        .unaligned = true,
> 417:        .unaligned = true,
> 
> hw/display/vmware_vga.c
> 1306:        .unaligned = true,
> 1309:        .unaligned = true,
> ```
> 
> In these results, I found two instances of pattern 3 in the codebase:
> 
> - hw/xtensa/mx_pic.c
> - hw/ssi/npcm7xx_fiu.c
> 
> ```
> static const MemoryRegionOps xtensa_mx_pic_ops = {
>     .read = xtensa_mx_pic_ext_reg_read,
>     .write = xtensa_mx_pic_ext_reg_write,
>     .endianness = DEVICE_NATIVE_ENDIAN,
>     .valid = {
>         .unaligned = true,
>     },
> };
> ```
> 
> ```
> static const MemoryRegionOps npcm7xx_fiu_flash_ops = {
>     .read = npcm7xx_fiu_flash_read,
>     .write = npcm7xx_fiu_flash_write,
>     .endianness = DEVICE_LITTLE_ENDIAN,
>     .valid = {
>         .min_access_size = 1,
>         .max_access_size = 8,
>         .unaligned = true,
>     },
> };
> ```
> 
> Note that these implementations are implicitly 'impl.unaligned ==
> false'; the 'impl.unaligned' field simply does not exist in these
> cases. However, it is possible that these implementations inherently
> support unaligned accesses.
> 
> To summarize, if we decide to remove the 'impl' field, we might need
> to revisit and change the MR implementations in these files.

IIUC what we need to change should be adding impl.unaligned=true into the above
two use cases, am I right?

I said that because IIUC QEMU has processed pattern 3 (valid.unaligned=true,
impl.unaligned=false) exactly like what it should do with pattern 1
(valid.unaligned=true, impl.unaligned=true).

That is, if I read it right, the current access_with_adjusted_size() should
always pass an unaligned address into MR ops (as long as addr is unaligned,
and also if valid.unaligned=true), assuming they'll be able to tackle
it, even if impl.unaligned can be reported false.  That's exactly what
needs fixing then.

So.. it turns out we shouldn't drop impl.unaligned?  Because the above two
seem to be the real users of it.  What we may want to do is:

  - Change above two use cases, adding impl.unaligned=true.

    This step should hopefully have zero effect in reality on the two
    devices.  One thing to mention is that both of them do not appear to have
    an upper bound of max_access_size (either 8 which is the maximum, or
    not specified).

  - Implement the real pattern 3 (which is what this patch wanted to do)

  - Declare pattern 3 for whatever device wants to support it (which
    will differ from above two examples).
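
For the first step, I imagine the change would look roughly like this
for hw/xtensa/mx_pic.c (a sketch of the intent, not a tested patch):

```
static const MemoryRegionOps xtensa_mx_pic_ops = {
    .read = xtensa_mx_pic_ext_reg_read,
    .write = xtensa_mx_pic_ext_reg_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
    .valid = {
        .unaligned = true,
    },
    .impl = {
        .unaligned = true,    /* new: declare what already happens */
    },
};
```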

> 
> > > Solution 1 seems to align with QEMU's
> > > original intentions. Actually, there is a comment in 'memory.c' that
> > > states:
> > > 
> > > `/* FIXME: support unaligned access? */`
> > > 
> > > This patch set implements solution 1. If there is a better way to
> > > resolve these issues, I would greatly appreciate your suggestions.
> > I think if my above understanding is correct, I can kind of understand your
> > solution now.  But then I wonder whether we should already drop
> > impl.unaligned with your solution.
> > 
> > Also, I am not 100% sure yet how the amplification of the
> > accesses (as proposed in your patch) would have side effects on the device
> > emulation.  For example, read(0x2, 0x4) with impl.access_size_min=4 now
> > will be amplified to two continuous:
> > 
> >    read(0x0, 0x4)
> >    read(0x4, 0x4)
> > 
> > Then there will be side effects of reading (addr=0x0, size=0x2) portion,
> > and (addr=0x6, size=0x2) portion, that is not part of the request.  Maybe
> > it's as simple as: when device emulation has such side effect, it should
> > always set valid.unaligned=false already.
> 
> There is also a potential issue regarding side effects. Consider a
> device where a register value changes upon a read access. Assume the
> device has the following register map:
> 
> ```
> 31                       8        0 (bit)
> +---------------------------------+
> |         Reg1(lo)       |  Reg0  | 0 byte
> +---------------------------------+
> |                        |Reg1(hi)| 4 byte
> ```
> 
> In this case, let’s assume that Reg0 is a register whose value
> changes whenever it is read.
> Now, if the guest issues a read(addr=0x1, size=4) on this device's
> MR(impl.unaligned=false, valid.unaligned=true), the unaligned access
> must be split into two aligned accesses:
> 
> 1. read(addr=0x0, size=4)
> 2. read(addr=0x4, size=4)
> 
> However, this results in Reg0 being read as part of the first aligned
> access, potentially triggering its side effect. This unintended side
> effect violates the semantics of the original unaligned read. If we
> don't want to allow this, we should set 'valid.unaligned = false'.

Right.  I guess we're on the same page now on the side effect part of
things..  We may want to document this after implementation of pattern 3
somewhere so that the device emulation developers are aware of it.

Thanks,

-- 
Peter Xu


Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Tomoyuki HIROSE 1 year, 1 month ago
Happy new year, Peter.
I had another job and was late in replying to your email, sorry.

On 2024/12/13 0:46, Peter Xu wrote:

> On Thu, Dec 12, 2024 at 02:39:41PM +0900, Tomoyuki HIROSE wrote:
>> On 2024/12/12 7:54, Peter Xu wrote:
>>> On Wed, Dec 11, 2024 at 06:35:57PM +0900, Tomoyuki HIROSE wrote:
>>>> Sorry for late reply.
>>>>
>>>> On 2024/12/07 1:42, Peter Xu wrote:
>>>>> On Fri, Dec 06, 2024 at 05:31:33PM +0900, Tomoyuki HIROSE wrote:
>>>>>> In this email, I explain what this patch set will resolve and an
>>>>>> overview of this patch set. I will respond to your specific code
>>>>>> review comments in a separate email.
>>>>> Yes, that's OK.
>>>>>
>>>>>> On 2024/12/03 6:23, Peter Xu wrote:
>>>>>>> On Fri, Nov 08, 2024 at 12:29:46PM +0900, Tomoyuki HIROSE wrote:
>>>>>>>> The previous code ignored 'impl.unaligned' and handled unaligned
>>>>>>>> accesses as is. But this implementation could not emulate specific
>>>>>>>> registers of some devices that allow unaligned access such as xHCI
>>>>>>>> Host Controller Capability Registers.
>>>>>>> I have some comments that may be naive, please bear with me..
>>>>>>>
>>>>>>> Firstly, could you provide an example in the commit message, of what would
>>>>>>> start working after this patch?
>>>>>> Sorry, I'll describe what will start working in the next version of
>>>>>> this patch set. I'll also provide an example here.  After applying
>>>>>> this patch set, a read(addr=0x2, size=2) in the xHCI Host Controller
>>>>>> Capability Registers region will work correctly. For example, the read
>>>>>> result will return 0x0110 (version 1.1.0). Previously, a
>>>>>> read(addr=0x2, size=2) in the Capability Register region would return
>>>>>> 0, which is incorrect. According to the xHCI specification, the
>>>>>> Capability Register region does not prohibit accesses of any size or
>>>>>> unaligned accesses.
>>>>> Thanks for the context, Tomoyuki.
>>>>>
>>>>> I assume it's about xhci_cap_ops then.  If you agree we can also mention
>>>>> xhci_cap_ops when describing it, so readers can easily reference the MR
>>>>> attributes from the code alongside with understanding the use case.
>>>>>
>>>>> Does it mean that it could also work if xhci_cap_ops.impl.min_access_size
>>>>> can be changed to 2 (together with additional xhci_cap_read/write support)?
>>>>>
>>>>> Note that I'm not saying it must do so even if it would work for xHCI, but
>>>>> if the memory API change is only for one device, then it can still be
>>>>> discussed about which option would be better on changing the device or the
>>>>> core.
>>>>>
>>>>> Meanwhile, if there's more use cases on the impl.unaligned, it'll be nice
>>>>> to share together when describing the issue.  That will be very persuasive
>>>>> input that a generic solution is needed.
>>>> OK, I understand. I will try to describe 'xhci_cap_ops' and related topics.
>>> Thanks.
>>>
>>>> Currently, the actual 'xhci_cap_ops' code is as follows:
>>>>
>>>> ```
>>>> static const MemoryRegionOps xhci_cap_ops = {
>>>>       .read = xhci_cap_read,
>>>>       .write = xhci_cap_write,
>>>>       .valid.min_access_size = 1,
>>>>       .valid.max_access_size = 4,
>>>>       .impl.min_access_size = 4,
>>>>       .impl.max_access_size = 4,
>>>>       .endianness = DEVICE_LITTLE_ENDIAN,
>>>> };
>>>> ```
>>>>
>>>> According to the above code, the guest can access this MemoryRegion
>>>> with 1-4 bytes.  'valid.unaligned' is also not explicitly defined, so
>>>> it is treated as 'false'. This means the guest can access this MR with
>>>> 1-4 bytes, as long as the access is aligned. However, the xHCI
>>>> specification does not prohibit unaligned accesses.
>>>>
>>>> Simply adding '.valid.unaligned = true' will not resolve this problem
>>>> because 'impl.unaligned' is also 'false'. In this situation, where
>>>> 'valid.unaligned' is 'true' but 'impl.unaligned' is 'false', we need
>>>> to emulate unaligned accesses by splitting them into multiple aligned
>>>> accesses.
>>> Correct.
>>>
>>>> An alternative solution would be to fix 'xhci_cap_{read,write}',
>>>> update '.impl.min_access_size = 1', and set '.impl.unaligned = true'
>>>> to allow the guest to perform unaligned accesses with 1-4 bytes. With
>>>> this solution, we wouldn't need to modify core memory code.
>>>>
>>>> However, applying this approach throughout the QEMU codebase would
>>>> increase the complexity of device implementations. If a device allows
>>>> unaligned guest access to its register region, the device implementer
>>>> would need to handle unaligned accesses explicitly. Additionally,
>>>> the distinction between 'valid' and 'impl' would become almost
>>>> meaningless, making it unclear why they are separated.
>>> I get it now, let's stick with the core memory change.
>>>
>>>> "Ideally", we could consider one of the following changes:
>>>>
>>>> 1. Introduce an emulation mechanism for unaligned accesses using
>>>>      multiple aligned accesses.
>>>> 2. Remove either 'valid' or 'impl' and unify their functionality.
>>>>
>>>> Solution 2 would require extensive changes to the codebase and memory
>>>> API, making it impractical.
>>> Why is it impractical?  Let me explain my question..
>>>
>>> Firstly, valid.unaligned makes perfect sense to me.  That describes whether
>>> the device emulation allows unaligned access at all.  So I do think we need
>>> this, and yes when xHCI controller supports unaligned access, this is the
>>> flag to be set TRUE instead of FALSE.
>>>
>>> However, impl.unaligned is confusing to me.
>>>
>>>   From literal POV, it says, "the MR ops implemented unaligned access".
>>>
>>> If you check my initial reply to this patch, I had a similar question: from
>>> such definition, whenever a device emulation sets impl.unaligned=true, I
>>> think it means we should simply pass over the MR request to the ops, no
>>> matter if it's aligned or not; especially when it's not aligned, the memory
>>> core shouldn't need any trick to amplify the MR access, simply because
>>> the device said it supports unaligned access in its implementation.  That's
>>> the only meaningful definition of impl.unaligned that I can think of so far.
>> I have the same understanding.  I found a relevant section in the
>> documentation at 'docs/devel/memory.rst':
>>
>> ```
>> In addition various constraints can be supplied to control how these
>> callbacks are called:
>>
>> - .valid.min_access_size, .valid.max_access_size define the access sizes
>>    (in bytes) which the device accepts; accesses outside this range will
>>    have device and bus specific behaviour (ignored, or machine check)
>> - .valid.unaligned specifies that the *device being modelled* supports
>>    unaligned accesses; if false, unaligned accesses will invoke the
>>    appropriate bus or CPU specific behaviour.
>> - .impl.min_access_size, .impl.max_access_size define the access sizes
>>    (in bytes) supported by the *implementation*; other access sizes will be
>>    emulated using the ones available.  For example a 4-byte write will be
>>    emulated using four 1-byte writes, if .impl.max_access_size = 1.
>> - .impl.unaligned specifies that the *implementation* supports unaligned
>>    accesses; if false, unaligned accesses will be emulated by two aligned
>>    accesses.
>> ```
> Ah yes.
>
>>> However, after reading more about the problem, I don't think any MR ops
>>> would want to implement such complicated logic; the norm should be like
>>> the xHCI MR ops, which support only aligned access, and then the memory
>>> core can hopefully always convert an unaligned access into one or
>>> multiple aligned accesses internally.
>>>
>>> IOW, it makes more sense to me that we keep valid.unaligned, but drop
>>> impl.unaligned.  Would that make sense to you (and Peter)?  That kind of
>>> matches the comment you quoted below saying that unaligned access
>>> is broken - I'm not 100% sure whether it's talking about impl.unaligned,
>>> but it would make sense if so.
>> I agree with you.
>>
>>> Meanwhile, I do see that we already have two impl.unaligned=true users:
>>>
>>> hw/pci-host/raven.c:    .impl.unaligned = true,
>>> system/ioport.c:    .impl.unaligned = true,
>>>
>>> I actually have no idea whether they're working at all if accesses can be
>>> unaligned internally, and how they work, given that impl.unaligned seems
>>> to be totally broken.
>> I initially assumed there would be more users, so I expected that a
>> lot of changes would be needed.  MR can be categorized into the
>> following patterns:
>>
>> 1. `impl.unaligned == true`
>  From your description below, I suppose you meant:
>
>    1. `impl.unaligned == true` and `valid.unaligned == true`
>
> That may still be worth spelling out, because I do see one instance of
> pattern 4, which is:
>
>    4. `impl.unaligned == true` and `valid.unaligned == false`
>
> See:
>
> static const MemoryRegionOps riscv_iommu_trap_ops = {
>      .read_with_attrs = riscv_iommu_trap_read,
>      .write_with_attrs = riscv_iommu_trap_write,
>      .endianness = DEVICE_LITTLE_ENDIAN,
>      .impl = {
>          .min_access_size = 4,
>          .max_access_size = 8,
>          .unaligned = true,
>      },
>      .valid = {
>          .min_access_size = 4,
>          .max_access_size = 8,
>      }
> };
>
> Even though I don't think it's a valid pattern..  I don't see how that
> could differ in behavior from pattern 2 you listed below, if the upper
> layer should always have rejected unaligned access.  So maybe it really
> should have reported impl.unaligned=false.
>
>> 2. `impl.unaligned == false` and `valid.unaligned == false`
>> 3. `impl.unaligned == false` and `valid.unaligned == true`
>>
>> - Pattern 1: No special handling is required since the implementation
>>    supports unaligned accesses. The MR can handle both aligned and
>>    unaligned accesses seamlessly.
>> - Pattern 2: No additional handling is needed because unaligned
>>    accesses are invalid in this MR. Any unaligned access is treated as
>>    an illegal operation.
>> - Pattern 3: This is the only pattern that requires consideration. We
>>    must emulate unaligned accesses using aligned accesses.
>>
>> I searched by keyword "unaligned = true" and got the following result:
> Indeed I missed the ".impl = { .unaligned = XXX ... }" cases..
>
>> ```
>> $ rg "unaligned = true"
>> system/memory.c
>> 1398:        .unaligned = true,
>> 1403:        .unaligned = true,
>>
>> system/ioport.c
>> 223:    .valid.unaligned = true,
>> 224:    .impl.unaligned = true,
>>
>> hw/xtensa/mx_pic.c
>> 271:        .unaligned = true,
>>
>> hw/pci-host/raven.c
>> 203:    .impl.unaligned = true,
>> 204:    .valid.unaligned = true,
>>
>> hw/riscv/riscv-iommu.c
>> 2108:        .unaligned = true,
>>
>> hw/ssi/npcm7xx_fiu.c
>> 256:        .unaligned = true,
>>
>> hw/cxl/cxl-host.c
>> 285:        .unaligned = true,
>> 290:        .unaligned = true,
>>
>> hw/i386/xen/xen_platform.c
>> 412:        .unaligned = true,
>> 417:        .unaligned = true,
>>
>> hw/display/vmware_vga.c
>> 1306:        .unaligned = true,
>> 1309:        .unaligned = true,
>> ```
>>
>> In these results, I found two instances of pattern 3 in the codebase:
>>
>> - hw/xtensa/mx_pic.c
>> - hw/ssi/npcm7xx_fiu.c
>>
>> ```
>> static const MemoryRegionOps xtensa_mx_pic_ops = {
>>      .read = xtensa_mx_pic_ext_reg_read,
>>      .write = xtensa_mx_pic_ext_reg_write,
>>      .endianness = DEVICE_NATIVE_ENDIAN,
>>      .valid = {
>>          .unaligned = true,
>>      },
>> };
>> ```
>>
>> ```
>> static const MemoryRegionOps npcm7xx_fiu_flash_ops = {
>>      .read = npcm7xx_fiu_flash_read,
>>      .write = npcm7xx_fiu_flash_write,
>>      .endianness = DEVICE_LITTLE_ENDIAN,
>>      .valid = {
>>          .min_access_size = 1,
>>          .max_access_size = 8,
>>          .unaligned = true,
>>      },
>> };
>> ```
>>
>> Note that these implementations are implicitly 'impl.unaligned ==
>> false'; the 'impl.unaligned' field simply does not exist in these
>> cases. However, it is possible that these implementations inherently
>> support unaligned accesses.
>>
>> To summarize, if we decide to remove the 'impl' field, we might need
>> to revisit and change the MR implementations in these files.
> IIUC what we need to change should be adding impl.unaligned=true into the above
> two use cases, am I right?
>
> I said that because IIUC QEMU has processed pattern 3 (valid.unaligned=true,
> impl.unaligned=false) exactly like what it should do with pattern 1
> (valid.unaligned=true, impl.unaligned=true).
>
> That is, if I read it right, the current access_with_adjusted_size() should
> always pass an unaligned address into MR ops (as long as addr is unaligned,
> and also if valid.unaligned=true), assuming they'll be able to tackle
> it, even if impl.unaligned can be reported false.  That's exactly what
> needs fixing then.
>
> So.. it turns out we shouldn't drop impl.unaligned?  Because the above two
> seem to be the real users of it.  What we may want to do is:
>
>    - Change above two use cases, adding impl.unaligned=true.
>
>      This step should hopefully have zero effect in reality on the two
>      devices.  One thing to mention is that both of them do not appear to have
>      an upper bound of max_access_size (either 8 which is the maximum, or
>      not specified).

This might be a good way. In this way, we need to add 'impl.unaligned
= true' to the xHCI Capability Register's MR. We also need to fix the
MR implementation to be safe under unaligned access (the current xHCI
implementation does not handle unaligned accesses, but the spec allows
them).

In addition, maybe it would be better to document the constraint that
the situation where 'valid.unaligned = true' and 'impl.unaligned =
false' is not supported.

Thanks,

Tomoyuki HIROSE
>
>    - Implement the real pattern 3 (which is what this patch wanted to do)
>
>    - Declare pattern 3 for whatever device wants to support it (which
>      will differ from above two examples).
>
>>>> Solution 1 seems to align with QEMU's
>>>> original intentions. Actually, there is a comment in 'memory.c' that
>>>> states:
>>>>
>>>> `/* FIXME: support unaligned access? */`
>>>>
>>>> This patch set implements solution 1. If there is a better way to
>>>> resolve these issues, I would greatly appreciate your suggestions.
>>> I think if my above understanding is correct, I can kind of understand your
>>> solution now.  But then I wonder whether we should already drop
>>> impl.unaligned with your solution.
>>>
>>> Also, I am not 100% sure yet how the amplification of the
>>> accesses (as proposed in your patch) would have side effects on the device
>>> emulation.  For example, read(0x2, 0x4) with impl.access_size_min=4 now
>>> will be amplified to two continuous:
>>>
>>>     read(0x0, 0x4)
>>>     read(0x4, 0x4)
>>>
>>> Then there will be side effects of reading (addr=0x0, size=0x2) portion,
>>> and (addr=0x6, size=0x2) portion, that is not part of the request.  Maybe
>>> it's as simple as: when device emulation has such side effect, it should
>>> always set valid.unaligned=false already.
>> There is also a potential issue regarding side effects. Consider a
>> device where a register value changes upon a read access. Assume the
>> device has the following register map:
>>
>> ```
>> 31                       8        0 (bit)
>> +---------------------------------+
>> |         Reg1(lo)       |  Reg0  | 0 byte
>> +---------------------------------+
>> |                        |Reg1(hi)| 4 byte
>> ```
>>
>> In this case, let’s assume that Reg0 is a register whose value
>> changes whenever it is read.
>> Now, if the guest issues a read(addr=0x1, size=4) on this device's
>> MR(impl.unaligned=false, valid.unaligned=true), the unaligned access
>> must be split into two aligned accesses:
>>
>> 1. read(addr=0x0, size=4)
>> 2. read(addr=0x4, size=4)
>>
>> However, this results in Reg0 being read as part of the first aligned
>> access, potentially triggering its side effect. This unintended side
>> effect violates the semantics of the original unaligned read. If we
>> don't want to allow this, we should set 'valid.unaligned = false'.
> Right.  I guess we're on the same page now on the side effect part of
> things..  We may want to document this after implementation of pattern 3
> somewhere so that the device emulation developers are aware of it.
>
> Thanks,
>

Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Peter Xu 1 year, 1 month ago
Hi, Tomoyuki,

On Wed, Jan 08, 2025 at 11:58:10AM +0900, Tomoyuki HIROSE wrote:
> Happy new year, Peter.
> I had another job and was late in replying to your email, sorry.

Happy new year.  That's fine. :)

[...]

> > So.. it turns out we shouldn't drop impl.unaligned?  Because the above two
> > seem to be the real users of it.  What we may want to do is:
> > 
> >    - Change above two use cases, adding impl.unaligned=true.
> > 
> >      This step should hopefully have zero effect in reality on the two
> >      devices.  One thing to mention is that both of them do not appear to have
> >      an upper bound of max_access_size (either 8 which is the maximum, or
> >      not specified).
> 
> This might be a good way. In this way, we need to add 'impl.unaligned
> = true' to the xHCI Capability Register's MR. We also need to fix the

We need to keep xHCI's impl.unaligned to FALSE?  IIUC only if it's FALSE
would it start to use your new code in this series to automatically convert
the unaligned access request into one or multiple aligned accesses (without
changing xHCI's MR ops implementation, IOW resolve this in memory core).

I just had another look at your last patch:

https://lore.kernel.org/qemu-devel/20241108032952.56692-6-tomoyuki.hirose@igel.co.jp/

index d85adaca0d..f35cbe526f 100644
--- a/hw/usb/hcd-xhci.c
+++ b/hw/usb/hcd-xhci.c
@@ -3165,9 +3165,11 @@ static const MemoryRegionOps xhci_cap_ops = {
     .read = xhci_cap_read,
     .write = xhci_cap_write,
     .valid.min_access_size = 1,
-    .valid.max_access_size = 4,
+    .valid.max_access_size = 8,
+    .valid.unaligned = true,
     .impl.min_access_size = 4,
     .impl.max_access_size = 4,
+    .impl.unaligned = false,
     .endianness = DEVICE_LITTLE_ENDIAN,
 };

I think that should keep being valid.  So "valid.unaligned = true" will
start enable unaligned accesses from the API level which will start to
follow the xHCI controller's spec, then ".impl.unaligned = false" tells the
memory core to _not_ pass unaligned accesses to MR ops, instead break them
down properly.

> MR implementation to be safe under unaligned access (the current xHCI
> implementation does not handle unaligned accesses, but the spec allows
> them).
> 
> In addition, maybe it would be better to document the constraint that
> the situation where 'valid.unaligned = true' and 'impl.unaligned =
> false' is not supported.

Do you perhaps mean this instead?

  valid.unaligned = FALSE && impl.unaligned == TRUE

If so, I agree.  I think we could even consider adding an assertion into
memory_region_init_io() to make sure it won't be set.
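
Something as small as this along the memory_region_init_io() path
would probably do (a sketch; the exact form and placement is up to the
eventual patch):

```
    /* Reject the combination that can never be exercised:
     * impl.unaligned=true with valid.unaligned=false. */
    assert(!(ops->impl.unaligned && !ops->valid.unaligned));
```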

Thanks,

-- 
Peter Xu
Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Tomoyuki HIROSE 1 year, 1 month ago
On 2025/01/09 1:50, Peter Xu wrote:
> Hi, Tomoyuki,
>
> On Wed, Jan 08, 2025 at 11:58:10AM +0900, Tomoyuki HIROSE wrote:
>> Happy new year, Peter.
>> I had another job and was late in replying to your email, sorry.
> Happy new year.  That's fine. :)
>
> [...]
>
>>> So.. it turns out we shouldn't drop impl.unaligned?  Because the above two
>>> seem to be the real users of it.  What we may want to do is:
>>>
>>>     - Change above two use cases, adding impl.unaligned=true.
>>>
>>>       This step should hopefully have zero effect in reality on the two
>>>       devices.  One thing to mention is that both of them do not appear to have
>>>       an upper bound of max_access_size (either 8 which is the maximum, or
>>>       not specified).
>> This might be a good way. In this way, we need to add 'impl.unaligned
>> = true' to the xHCI Capability Register's MR. We also need to fix the
> We need to keep xHCI's impl.unaligned to FALSE?  IIUC only if it's FALSE
> would it start to use your new code in this series to automatically convert
> the unaligned access request into one or multiple aligned accesses (without
> changing xHCI's MR ops implementation, IOW resolve this in memory core).

Yes, we need to keep it 'false' because xHCI's MR implementation
does not support unaligned accesses.

> I just had another look at your last patch:
>
> https://lore.kernel.org/qemu-devel/20241108032952.56692-6-tomoyuki.hirose@igel.co.jp/
>
> index d85adaca0d..f35cbe526f 100644
> --- a/hw/usb/hcd-xhci.c
> +++ b/hw/usb/hcd-xhci.c
> @@ -3165,9 +3165,11 @@ static const MemoryRegionOps xhci_cap_ops = {
>       .read = xhci_cap_read,
>       .write = xhci_cap_write,
>       .valid.min_access_size = 1,
> -    .valid.max_access_size = 4,
> +    .valid.max_access_size = 8,
> +    .valid.unaligned = true,
>       .impl.min_access_size = 4,
>       .impl.max_access_size = 4,
> +    .impl.unaligned = false,
>       .endianness = DEVICE_LITTLE_ENDIAN,
>   };
>
> I think that should keep being valid.  So "valid.unaligned = true" will
> start enable unaligned accesses from the API level which will start to
> follow the xHCI controller's spec, then ".impl.unaligned = false" tells the
> memory core to _not_ pass unaligned accesses to MR ops, instead break them
> down properly.
>
>> MR implementation to be safe under unaligned access (the current xHCI
>> implementation does not handle unaligned accesses, but the spec allows
>> them).
>>
>> In addition, maybe it would be better to document the constraint that
>> the situation where 'valid.unaligned = true' and 'impl.unaligned =
>> false' is not supported.
> Do you perhaps mean this instead?
>
>    valid.unaligned = FALSE && impl.unaligned == TRUE
>
> If so, I agree.  I think we could even consider adding an assertion into
> memory_region_init_io() to make sure it won't be set.
>
> Thanks,
>

I'm sorry if I've misunderstood, but are the following understandings
correct?:
- Need to merge my patch that converts an unaligned access to aligned
   accesses.
- Need to add 'impl.unaligned = true' in the following two places.
   - hw/xtensa/mx_pic.c
   - hw/ssi/npcm7xx_fiu.c
- Additionally, add an assertion to check for invalid patterns.

Thanks,
Tomoyuki HIROSE


Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Peter Xu 1 year, 1 month ago
On Fri, Jan 10, 2025 at 07:11:27PM +0900, Tomoyuki HIROSE wrote:
> > > MR implementation to be safe under unaligned access (the current xHCI
> > > implementation does not handle unaligned accesses, but the spec allows
> > > them).
> > > 
> > > In addition, maybe it would be better to document the constraint that
> > > the situation where 'valid.unaligned = true' and 'impl.unaligned =
> > > false' is not supported.
> > Do you perhaps mean this instead?
> > 
> >    valid.unaligned = FALSE && impl.unaligned == TRUE
> > 
> > If so, I agree.  I think we could even consider adding an assertion into
> > memory_region_init_io() to make sure it won't be set.
> > 
> > Thanks,
> > 
> 
> I'm sorry if I've misunderstood, but are the following understandings
> correct?:
> - Need to merge my patch that converts an unaligned access to aligned
>   accesses.
> - Need to add 'impl.unaligned = true' in the following two places.
>   - hw/xtensa/mx_pic.c
>   - hw/ssi/npcm7xx_fiu.c
> - Additionally, add an assertion to check for invalid patterns.

Yes, all these sound good to me.

Thanks,

-- 
Peter Xu


Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Tomoyuki HIROSE 1 year ago
On 2025/01/11 0:08, Peter Xu wrote:
> On Fri, Jan 10, 2025 at 07:11:27PM +0900, Tomoyuki HIROSE wrote:
>>>> MR implementation to be safe under unaligned access (the current xHCI
>>>> implementation does not handle unaligned accesses, but the spec allows
>>>> them).
>>>>
>>>> In addition, maybe it would be better to document the constraint that
>>>> the situation where 'valid.unaligned = true' and 'impl.unaligned =
>>>> false' is not supported.
>>> Do you perhaps mean this instead?
>>>
>>>     valid.unaligned = FALSE && impl.unaligned == TRUE
>>>
>>> If so, I agree.  I think we could even consider adding an assertion into
>>> memory_region_init_io() to make sure it won't be set.
>>>
>>> Thanks,
>>>
>> I'm sorry if I've misunderstood, but are the following understandings
>> correct?:
>> - Need to merge my patch that converts an unaligned access to aligned
>>    accesses.
>> - Need to add 'impl.unaligned = true' in the following two places.
>>    - hw/xtensa/mx_pic.c
>>    - hw/ssi/npcm7xx_fiu.c
>> - Additionally, add an assertion to check for invalid patterns.
> Yes, all these sound good to me.
>
> Thanks,
>

OK, thanks.
I will prepare patch v2 according to the above understandings.

Tomoyuki HIROSE

Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Peter Maydell 1 year, 2 months ago
On Fri, 6 Dec 2024 at 16:43, Peter Xu <peterx@redhat.com> wrote:
> I assume it's about xhci_cap_ops then.  If you agree we can also mention
> xhci_cap_ops when dscribing it, so readers can easily reference the MR
> attributes from the code alongside with understanding the use case.
>
> Does it mean that it could also work if xhci_cap_ops.impl.min_access_size
> can be changed to 2 (together with additional xhci_cap_read/write support)?
>
> Note that I'm not saying it must do so even if it would work for xHCI, but
> if the memory API change is only for one device, then it can still be
> discussed about which option would be better on changing the device or the
> core.

I think the memory system core has been broken in this area
for a long time -- it purports to support impls which only
do a subset of what the valid operations are, but it actually
does buggy and wrong things in some cases. So far
we have effectively worked around it by avoiding defining
MemoryRegionOps that try to use the buggy areas, but I
think it's much better to fix the code so it really does
what it's theoretically intended to do.

thanks
-- PMM
Re: [RFC PATCH 2/5] system/memory: support unaligned access
Posted by Peter Xu 1 year, 2 months ago
On Wed, Dec 11, 2024 at 09:56:21AM +0000, Peter Maydell wrote:
> On Fri, 6 Dec 2024 at 16:43, Peter Xu <peterx@redhat.com> wrote:
> > I assume it's about xhci_cap_ops then.  If you agree we can also mention
> > xhci_cap_ops when dscribing it, so readers can easily reference the MR
> > attributes from the code alongside with understanding the use case.
> >
> > Does it mean that it could also work if xhci_cap_ops.impl.min_access_size
> > can be changed to 2 (together with additional xhci_cap_read/write support)?
> >
> > Note that I'm not saying it must do so even if it would work for xHCI, but
> > if the memory API change is only for one device, then it can still be
> > discussed about which option would be better on changing the device or the
> > core.
> 
> I think the memory system core has been broken in this area
> for a long time -- it purports to support impls which only
> do a subset of what the valid operations are, but it actually
> does buggy and wrong things in some cases. So far
> we have effectively worked around it by avoiding defining
> MemoryRegionOps that try to use the buggy areas, but I
> think it's much better to fix the code so it really does
> what it's theoretically intended to do.

Thanks, Peter.  I assume it means there're a lot of devices that can use
this model.  Then it makes perfect sense to do it in memory core.

Though I do have some confusion on why we needed impl.unaligned at all.  I
see that Tomoyuki raised a similar question, even if not exactly the same
one.  I'll try to continue the discussion there.

-- 
Peter Xu