[Qemu-devel] [PATCH 3/4] cputlb: Byte swap memory transaction attribute

Posted by tony.nguyen@bt.com 6 years, 6 months ago
Notice the new byte-swap memory transaction attribute and force the
transaction through the memory slow path.

This is required by architectures that can invert the endianness of a
memory transaction, e.g. SPARC64 with its Invert Endian TTE bit.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
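As a sketch of the intended use (not part of the patch): a target that
honours something like SPARC64's Invert Endian TTE bit could set the new
attribute when filling the TLB. The helper and the TTE_IE_BIT mask below
are illustrative, not existing QEMU API.

    /* Hypothetical target-side helper: derive the memory transaction
     * attributes for a page from a SPARC64 TTE, for use with
     * tlb_set_page_with_attrs().
     */
    static MemTxAttrs tte_to_attrs(uint64_t tte)
    {
        MemTxAttrs attrs = MEMTXATTRS_UNSPECIFIED;

        if (tte & TTE_IE_BIT) {   /* assumed Invert Endian bit mask */
            attrs.byte_swap = 1;
        }
        return attrs;
    }
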
 accel/tcg/cputlb.c      | 10 +++++++++-
 include/exec/memattrs.h |  2 ++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index baa61719ad..11debb7dda 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -731,7 +731,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
               vaddr, paddr, prot, mmu_idx);
 
     address = vaddr_page;
-    if (size < TARGET_PAGE_SIZE) {
+    if (size < TARGET_PAGE_SIZE || attrs.byte_swap) {
         /*
          * Slow-path the TLB entries; we will repeat the MMU check and TLB
          * fill on every access.
@@ -891,6 +891,10 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -933,6 +937,10 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
index d4a3477d71..a0644ebba1 100644
--- a/include/exec/memattrs.h
+++ b/include/exec/memattrs.h
@@ -37,6 +37,8 @@ typedef struct MemTxAttrs {
     unsigned int user:1;
     /* Requester ID (for MSI for example) */
     unsigned int requester_id:16;
+    /* SPARC64: TTE invert endianness */
+    unsigned int byte_swap:1;
     /*
      * The following are target-specific page-table bits.  These are not
      * related to actual memory transactions at all.  However, this structure
-- 
2.17.2

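For readers following along: MO_BSWAP is a single bit in the MemOp flags
introduced earlier in this series, and MO_LE/MO_BE differ in exactly that
bit, so the op ^= MO_BSWAP above toggles an access between its little-
and big-endian forms while leaving the access size unchanged. A minimal
standalone illustration, assuming the MemOp definitions from earlier in
the series:

    MemOp op = MO_TEUL;   /* target-endian 32-bit access */
    op ^= MO_BSWAP;       /* same size, opposite endianness */
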
Re: [Qemu-devel] [PATCH 3/4] cputlb: Byte swap memory transaction attribute
Posted by Richard Henderson 6 years, 6 months ago
On 7/16/19 11:08 PM, tony.nguyen@bt.com wrote:
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index baa61719ad..11debb7dda 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -731,7 +731,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
>                vaddr, paddr, prot, mmu_idx);
>  
>      address = vaddr_page;
> -    if (size < TARGET_PAGE_SIZE) {
> +    if (size < TARGET_PAGE_SIZE || attrs.byte_swap) {

I don't think you want to re-use TLB_RECHECK.  This operation requires the
slow-path, yes, but not another call into cpu->cc->tlb_fill.


r~
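
For illustration, one shape this suggestion could take is a dedicated TLB
flag alongside the existing TLB_NOTDIRTY/TLB_MMIO/TLB_RECHECK bits, so the
access still takes the slow path but the entry stays valid and no extra
tlb_fill is triggered. The TLB_BSWAP name and bit position here are
hypothetical:

    /* Hypothetical flag in the low bits kept clear in TLB address
     * fields (see TLB_INVALID_MASK and friends in exec/cpu-all.h).
     */
    #define TLB_BSWAP  (1 << (TARGET_PAGE_BITS - 5))  /* assumed free bit */

    /* In tlb_set_page_with_attrs(), instead of forcing the recheck path: */
    if (attrs.byte_swap) {
        address |= TLB_BSWAP;
    }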