[PATCH v3 11/12] target/ppc: Streamline construction of VRMA SLB entry

Posted by David Gibson 5 years, 11 months ago
When in VRMA mode (i.e. a guest thinks it has the MMU off, but the
hypervisor is still applying translation) we use a special SLB entry,
rather than looking up an SLBE by address as we do when guest translation
is on.

We build that special entry in ppc_hash64_update_vrma() along with some
logic for handling some non-VRMA cases.  Split the actual build of the
VRMA SLBE into a separate helper and streamline it a bit.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target/ppc/mmu-hash64.c | 79 ++++++++++++++++++++---------------------
 1 file changed, 38 insertions(+), 41 deletions(-)

diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 170a78bd2e..06cfff9860 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -789,6 +789,39 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
     }
 }
 
+static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
+{
+    CPUPPCState *env = &cpu->env;
+    target_ulong lpcr = env->spr[SPR_LPCR];
+    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
+    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
+    int i;
+
+    /*
+     * Make one up. Mostly ignore the ESID which will not be needed
+     * for translation
+     */
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const PPCHash64SegmentPageSizes *sps = &cpu->hash64_opts->sps[i];
+
+        if (!sps->page_shift) {
+            break;
+        }
+
+        if ((vsid & SLB_VSID_LLP_MASK) == sps->slb_enc) {
+            slb->esid = SLB_ESID_V;
+            slb->vsid = vsid;
+            slb->sps = sps;
+            return 0;
+        }
+    }
+
+    error_report("Bad page size encoding in LPCR[VRMASD]; LPCR=0x"
+                 TARGET_FMT_lx"\n", lpcr);
+
+    return -1;
+}
+
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                                 int rwx, int mmu_idx)
 {
@@ -1044,53 +1077,17 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
 static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
 {
     CPUPPCState *env = &cpu->env;
-    const PPCHash64SegmentPageSizes *sps = NULL;
-    target_ulong esid, vsid, lpcr;
     ppc_slb_t *slb = &env->vrma_slb;
-    uint32_t vrmasd;
-    int i;
-
-    /* First clear it */
-    slb->esid = slb->vsid = 0;
-    slb->sps = NULL;
 
     /* Is VRMA enabled ? */
-    if (!ppc_hash64_use_vrma(env)) {
-        return;
-    }
-
-    /*
-     * Make one up. Mostly ignore the ESID which will not be needed
-     * for translation
-     */
-    lpcr = env->spr[SPR_LPCR];
-    vsid = SLB_VSID_VRMA;
-    vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
-    vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
-    esid = SLB_ESID_V;
-
-    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
-        const PPCHash64SegmentPageSizes *sps1 = &cpu->hash64_opts->sps[i];
-
-        if (!sps1->page_shift) {
-            break;
-        }
-
-        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
-            sps = sps1;
-            break;
-        }
-    }
-
-    if (!sps) {
-        error_report("Bad page size encoding esid 0x"TARGET_FMT_lx
-                     " vsid 0x"TARGET_FMT_lx, esid, vsid);
-        return;
+    if (ppc_hash64_use_vrma(env)) {
+        if (build_vrma_slbe(cpu, slb) == 0) {
+            return;
+        }
     }
 
-    slb->vsid = vsid;
-    slb->esid = esid;
-    slb->sps = sps;
+    /* Otherwise, clear it to indicate error */
+    slb->esid = slb->vsid = 0;
+    slb->sps = NULL;
 }
 
 void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
-- 
2.24.1


Re: [PATCH v3 11/12] target/ppc: Streamline construction of VRMA SLB entry
Posted by Fabiano Rosas 5 years, 11 months ago
David Gibson <david@gibson.dropbear.id.au> writes:


Hi, just a nitpick, feel free to ignore.

> When in VRMA mode (i.e. a guest thinks it has the MMU off, but the
> hypervisor is still applying translation) we use a special SLB entry,
> rather than looking up an SLBE by address as we do when guest translation
> is on.
>
> We build that special entry in ppc_hash64_update_vrma() along with some
> logic for handling some non-VRMA cases.  Split the actual build of the
> VRMA SLBE into a separate helper and streamline it a bit.
>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  target/ppc/mmu-hash64.c | 79 ++++++++++++++++++++---------------------
>  1 file changed, 38 insertions(+), 41 deletions(-)
>
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 170a78bd2e..06cfff9860 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -789,6 +789,39 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
>      }
>  }
>  
> +static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    target_ulong lpcr = env->spr[SPR_LPCR];
> +    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> +    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
> +    int i;
> +
> +    /*
> +     * Make one up. Mostly ignore the ESID which will not be needed
> +     * for translation
> +     */

I find this comment a bit vague. I suggest we either leave it behind or
make it more precise. The ISA says:

"translation of effective addresses to virtual addresses use the SLBE
values in Figure 18 instead of the entry in the SLB corresponding to the
ESID"


Re: [PATCH v3 11/12] target/ppc: Streamline construction of VRMA SLB entry
Posted by David Gibson 5 years, 11 months ago
On Wed, Feb 19, 2020 at 11:34:22AM -0300, Fabiano Rosas wrote:
> David Gibson <david@gibson.dropbear.id.au> writes:
> 
> 
> Hi, just a nitpick, feel free to ignore.
> 
> > [...]
> > +    /*
> > +     * Make one up. Mostly ignore the ESID which will not be needed
> > +     * for translation
> > +     */
> 
> I find this comment a bit vague. I suggest we either leave it behind or
> make it more precise. The ISA says:
> 
> "translation of effective addresses to virtual addresses use the SLBE
> values in Figure 18 instead of the entry in the SLB corresponding to the
> ESID"

Yeah, it wasn't very helpful in its initial location, and it's even
less helpful here.  I've dropped it.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
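
For readers following the change outside the QEMU tree, here is a minimal,
self-contained sketch of the selection step that build_vrma_slbe() performs:
decode the LPCR[VRMASD] field into the L||LP bits of a VSID and match that
encoding against the CPU's table of supported segment page sizes.  The
constant values only approximate QEMU's SLB_VSID_* / LPCR_VRMASD definitions,
and the table below is a made-up stand-in for cpu->hash64_opts->sps[]; treat
it as an illustration of the technique, not as the actual implementation.

#include <stdint.h>
#include <stdio.h>

#define LPCR_VRMASD_SHIFT  16
#define LPCR_VRMASD        (0x1fULL << LPCR_VRMASD_SHIFT)
#define SLB_VSID_L         0x0000000000000100ULL
#define SLB_VSID_LP        0x0000000000000030ULL
#define SLB_VSID_LLP_MASK  (SLB_VSID_L | SLB_VSID_LP)

struct seg_page_size {           /* stand-in for PPCHash64SegmentPageSizes */
    unsigned page_shift;         /* 0 terminates the table */
    uint64_t slb_enc;            /* L||LP encoding in the SLB VSID word */
};

static const struct seg_page_size sps_table[] = {
    { 12, 0x000 },               /* 4K  */
    { 16, 0x110 },               /* 64K */
    { 24, 0x100 },               /* 16M */
    { 0,  0 },
};

/* Return the segment page shift selected by LPCR[VRMASD], or -1 if the
 * encoding does not match any supported page size (the error case in
 * which build_vrma_slbe() calls error_report() and returns -1). */
static int vrma_page_shift(uint64_t lpcr)
{
    uint64_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
    uint64_t llp = (vrmasd << 4) & SLB_VSID_LLP_MASK;

    for (int i = 0; sps_table[i].page_shift; i++) {
        if (llp == sps_table[i].slb_enc) {
            return sps_table[i].page_shift;
        }
    }
    return -1;                   /* bad VRMASD encoding */
}

int main(void)
{
    /* VRMASD = 0b10000 sets the L bit -> 16M segment pages in this table. */
    uint64_t lpcr = (uint64_t)0x10 << LPCR_VRMASD_SHIFT;

    printf("page shift: %d\n", vrma_page_shift(lpcr));
    return 0;
}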