From: Alex Bennée <alex.bennee@linaro.org>
To: peter.maydell@linaro.org
Date: Fri, 24 Feb 2017 11:21:01 +0000
Message-Id: <20170224112109.3147-17-alex.bennee@linaro.org>
In-Reply-To: <20170224112109.3147-1-alex.bennee@linaro.org>
References: <20170224112109.3147-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 16/24] cputlb and arm/sparc targets: convert mmuidx flushes from varg to bitmap
Cc: Peter Crosthwaite, Mark Cave-Ayland, qemu-devel@nongnu.org,
    "open list:ARM", Paolo Bonzini, Alex Bennée, Artyom Tarasenko,
    Richard Henderson

While the vargs approach was flexible, the original MTTCG work ended up
having to munge the bits into a bitmap so the data could be used in
deferred work helpers. Instead of hiding that in cputlb we push the change
out to the API, which now takes a bitmap of MMU indexes instead.

For ARM some of the resulting flushes end up being quite long, so to aid
readability I've tended to move the index shifting to a new line so that
all the bits being or-ed together line up nicely, for example:

    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             (1 << ARMMMUIdx_S1SE1) |
                             (1 << ARMMMUIdx_S1SE0));

Signed-off-by: Alex Bennée
[AT: SPARC parts only]
Reviewed-by: Artyom Tarasenko
Reviewed-by: Richard Henderson
[PM: ARM parts only]
Reviewed-by: Peter Maydell
---
 cputlb.c                   |  60 +++++++++--------------
 include/exec/exec-all.h    |  13 ++---
 target/arm/helper.c        | 116 ++++++++++++++++++++++++++++----------------
 target/sparc/ldst_helper.c |   8 ++--
 4 files changed, 107 insertions(+), 90 deletions(-)
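The conversion at each call site is mechanical: a -1-terminated list of MMU
indexes becomes a single uint16_t bitmap. As a rough before/after sketch of
one call, taken from the tlbiall_nsnh_write() hunk below:

    /* old API: varargs list of MMU indexes, terminated by a negative value */
    tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0,
                        ARMMMUIdx_S2NS, -1);

    /* new API: one bitmap argument, with a bit set per MMU index to flush */
    tlb_flush_by_mmuidx(cs,
                        (1 << ARMMMUIdx_S12NSE1) |
                        (1 << ARMMMUIdx_S12NSE0) |
                        (1 << ARMMMUIdx_S2NS));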
diff --git a/cputlb.c b/cputlb.c
index 5dfd3c3ba9..97e5c12de8 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -122,26 +122,25 @@ void tlb_flush(CPUState *cpu)
     }
 }
 
-static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, va_list argp)
+static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
     CPUArchState *env = cpu->env_ptr;
+    unsigned long mmu_idx_bitmask = idxmap;
+    int mmu_idx;
 
     assert_cpu_is_self(cpu);
     tlb_debug("start\n");
 
     tb_lock();
 
-    for (;;) {
-        int mmu_idx = va_arg(argp, int);
-
-        if (mmu_idx < 0) {
-            break;
-        }
+    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
 
-        tlb_debug("%d\n", mmu_idx);
+        if (test_bit(mmu_idx, &mmu_idx_bitmask)) {
+            tlb_debug("%d\n", mmu_idx);
 
-        memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
-        memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
+            memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
+            memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
+        }
     }
 
     memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
@@ -149,12 +148,9 @@ static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, va_list argp)
     tb_unlock();
 }
 
-void tlb_flush_by_mmuidx(CPUState *cpu, ...)
+void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
-    va_list argp;
-    va_start(argp, cpu);
-    v_tlb_flush_by_mmuidx(cpu, argp);
-    va_end(argp);
+    v_tlb_flush_by_mmuidx(cpu, idxmap);
 }
 
 static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
@@ -219,13 +215,11 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     }
 }
 
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
 {
     CPUArchState *env = cpu->env_ptr;
-    int i, k;
-    va_list argp;
-
-    va_start(argp, addr);
+    unsigned long mmu_idx_bitmap = idxmap;
+    int i, page, mmu_idx;
 
     assert_cpu_is_self(cpu);
     tlb_debug("addr "TARGET_FMT_lx"\n", addr);
@@ -236,31 +230,23 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
                   TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
                   env->tlb_flush_addr, env->tlb_flush_mask);
 
-        v_tlb_flush_by_mmuidx(cpu, argp);
-        va_end(argp);
+        v_tlb_flush_by_mmuidx(cpu, idxmap);
         return;
     }
 
     addr &= TARGET_PAGE_MASK;
-    i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-
-    for (;;) {
-        int mmu_idx = va_arg(argp, int);
+    page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
 
-        if (mmu_idx < 0) {
-            break;
-        }
-
-        tlb_debug("idx %d\n", mmu_idx);
-
-        tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
+    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
+        if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
+            tlb_flush_entry(&env->tlb_table[mmu_idx][page], addr);
 
-        /* check whether there are vltb entries that need to be flushed */
-        for (k = 0; k < CPU_VTLB_SIZE; k++) {
-            tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
+            /* check whether there are vltb entries that need to be flushed */
+            for (i = 0; i < CPU_VTLB_SIZE; i++) {
+                tlb_flush_entry(&env->tlb_v_table[mmu_idx][i], addr);
+            }
         }
     }
-    va_end(argp);
 
     tb_flush_jmp_cache(cpu, addr);
 }
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index c694e3482b..e94e6849dd 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -106,21 +106,22 @@ void tlb_flush(CPUState *cpu);
  * tlb_flush_page_by_mmuidx:
  * @cpu: CPU whose TLB should be flushed
  * @addr: virtual address of page to be flushed
- * @...: list of MMU indexes to flush, terminated by a negative value
+ * @idxmap: bitmap of MMU indexes to flush
  *
  * Flush one page from the TLB of the specified CPU, for the specified
  * MMU indexes.
  */
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...);
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr,
+                              uint16_t idxmap);
 /**
  * tlb_flush_by_mmuidx:
  * @cpu: CPU whose TLB should be flushed
- * @...: list of MMU indexes to flush, terminated by a negative value
+ * @idxmap: bitmap of MMU indexes to flush
  *
  * Flush all entries from the TLB of the specified CPU, for the specified
  * MMU indexes.
  */
-void tlb_flush_by_mmuidx(CPUState *cpu, ...);
+void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap);
 /**
  * tlb_set_page_with_attrs:
  * @cpu: CPU to add this TLB entry for
@@ -169,11 +170,11 @@ static inline void tlb_flush(CPUState *cpu)
 }
 
 static inline void tlb_flush_page_by_mmuidx(CPUState *cpu,
-                                            target_ulong addr, ...)
+                                            target_ulong addr, uint16_t idxmap)
 {
 }
 
-static inline void tlb_flush_by_mmuidx(CPUState *cpu, ...)
+static inline void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap) { } #endif diff --git a/target/arm/helper.c b/target/arm/helper.c index 753a69d40d..b41d0494d1 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -578,8 +578,10 @@ static void tlbiall_nsnh_write(CPUARMState *env, const= ARMCPRegInfo *ri, { CPUState *cs =3D ENV_GET_CPU(env); =20 - tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, - ARMMMUIdx_S2NS, -1); + tlb_flush_by_mmuidx(cs, + (1 << ARMMMUIdx_S12NSE1) | + (1 << ARMMMUIdx_S12NSE0) | + (1 << ARMMMUIdx_S2NS)); } =20 static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri, @@ -588,8 +590,10 @@ static void tlbiall_nsnh_is_write(CPUARMState *env, co= nst ARMCPRegInfo *ri, CPUState *other_cs; =20 CPU_FOREACH(other_cs) { - tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S12NSE1, - ARMMMUIdx_S12NSE0, ARMMMUIdx_S2NS, -1); + tlb_flush_by_mmuidx(other_cs, + (1 << ARMMMUIdx_S12NSE1) | + (1 << ARMMMUIdx_S12NSE0) | + (1 << ARMMMUIdx_S2NS)); } } =20 @@ -611,7 +615,7 @@ static void tlbiipas2_write(CPUARMState *env, const ARM= CPRegInfo *ri, =20 pageaddr =3D sextract64(value << 12, 0, 40); =20 - tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S2NS, -1); + tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S2NS)); } =20 static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri, @@ -627,7 +631,7 @@ static void tlbiipas2_is_write(CPUARMState *env, const = ARMCPRegInfo *ri, pageaddr =3D sextract64(value << 12, 0, 40); =20 CPU_FOREACH(other_cs) { - tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S2NS, -1); + tlb_flush_page_by_mmuidx(other_cs, pageaddr, (1 << ARMMMUIdx_S2NS)= ); } } =20 @@ -636,7 +640,7 @@ static void tlbiall_hyp_write(CPUARMState *env, const A= RMCPRegInfo *ri, { CPUState *cs =3D ENV_GET_CPU(env); =20 - tlb_flush_by_mmuidx(cs, ARMMMUIdx_S1E2, -1); + tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E2)); } =20 static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, @@ -645,7 +649,7 @@ static void tlbiall_hyp_is_write(CPUARMState *env, cons= t ARMCPRegInfo *ri, CPUState *other_cs; =20 CPU_FOREACH(other_cs) { - tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S1E2, -1); + tlb_flush_by_mmuidx(other_cs, (1 << ARMMMUIdx_S1E2)); } } =20 @@ -655,7 +659,7 @@ static void tlbimva_hyp_write(CPUARMState *env, const A= RMCPRegInfo *ri, CPUState *cs =3D ENV_GET_CPU(env); uint64_t pageaddr =3D value & ~MAKE_64BIT_MASK(0, 12); =20 - tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1E2, -1); + tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E2)); } =20 static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, @@ -665,7 +669,7 @@ static void tlbimva_hyp_is_write(CPUARMState *env, cons= t ARMCPRegInfo *ri, uint64_t pageaddr =3D value & ~MAKE_64BIT_MASK(0, 12); =20 CPU_FOREACH(other_cs) { - tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1E2, -1); + tlb_flush_page_by_mmuidx(other_cs, pageaddr, (1 << ARMMMUIdx_S1E2)= ); } } =20 @@ -2542,8 +2546,10 @@ static void vttbr_write(CPUARMState *env, const ARMC= PRegInfo *ri, =20 /* Accesses to VTTBR may change the VMID so we must flush the TLB. 
     if (raw_read(env, ri) != value) {
-        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0,
-                            ARMMMUIdx_S2NS, -1);
+        tlb_flush_by_mmuidx(cs,
+                            (1 << ARMMMUIdx_S12NSE1) |
+                            (1 << ARMMMUIdx_S12NSE0) |
+                            (1 << ARMMMUIdx_S2NS));
         raw_write(env, ri, value);
     }
 }
@@ -2902,9 +2908,13 @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = CPU(cpu);
 
     if (arm_is_secure_below_el3(env)) {
-        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);
+        tlb_flush_by_mmuidx(cs,
+                            (1 << ARMMMUIdx_S1SE1) |
+                            (1 << ARMMMUIdx_S1SE0));
     } else {
-        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
+        tlb_flush_by_mmuidx(cs,
+                            (1 << ARMMMUIdx_S12NSE1) |
+                            (1 << ARMMMUIdx_S12NSE0));
     }
 }
 
@@ -2916,10 +2926,13 @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     CPU_FOREACH(other_cs) {
         if (sec) {
-            tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);
+            tlb_flush_by_mmuidx(other_cs,
+                                (1 << ARMMMUIdx_S1SE1) |
+                                (1 << ARMMMUIdx_S1SE0));
         } else {
-            tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S12NSE1,
-                                ARMMMUIdx_S12NSE0, -1);
+            tlb_flush_by_mmuidx(other_cs,
+                                (1 << ARMMMUIdx_S12NSE1) |
+                                (1 << ARMMMUIdx_S12NSE0));
         }
     }
 }
@@ -2935,13 +2948,19 @@ static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = CPU(cpu);
 
     if (arm_is_secure_below_el3(env)) {
-        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);
+        tlb_flush_by_mmuidx(cs,
+                            (1 << ARMMMUIdx_S1SE1) |
+                            (1 << ARMMMUIdx_S1SE0));
     } else {
         if (arm_feature(env, ARM_FEATURE_EL2)) {
-            tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0,
-                                ARMMMUIdx_S2NS, -1);
+            tlb_flush_by_mmuidx(cs,
+                                (1 << ARMMMUIdx_S12NSE1) |
+                                (1 << ARMMMUIdx_S12NSE0) |
+                                (1 << ARMMMUIdx_S2NS));
         } else {
-            tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
+            tlb_flush_by_mmuidx(cs,
+                                (1 << ARMMMUIdx_S12NSE1) |
+                                (1 << ARMMMUIdx_S12NSE0));
         }
     }
 }
@@ -2952,7 +2971,7 @@ static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
     ARMCPU *cpu = arm_env_get_cpu(env);
     CPUState *cs = CPU(cpu);
 
-    tlb_flush_by_mmuidx(cs, ARMMMUIdx_S1E2, -1);
+    tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E2));
 }
 
 static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -2961,7 +2980,7 @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
     ARMCPU *cpu = arm_env_get_cpu(env);
     CPUState *cs = CPU(cpu);
 
-    tlb_flush_by_mmuidx(cs, ARMMMUIdx_S1E3, -1);
+    tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E3));
 }
 
 static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -2977,13 +2996,18 @@ static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     CPU_FOREACH(other_cs) {
         if (sec) {
-            tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);
+            tlb_flush_by_mmuidx(other_cs,
+                                (1 << ARMMMUIdx_S1SE1) |
+                                (1 << ARMMMUIdx_S1SE0));
         } else if (has_el2) {
-            tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S12NSE1,
-                                ARMMMUIdx_S12NSE0, ARMMMUIdx_S2NS, -1);
+            tlb_flush_by_mmuidx(other_cs,
+                                (1 << ARMMMUIdx_S12NSE1) |
+                                (1 << ARMMMUIdx_S12NSE0) |
+                                (1 << ARMMMUIdx_S2NS));
         } else {
-            tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S12NSE1,
-                                ARMMMUIdx_S12NSE0, -1);
+            tlb_flush_by_mmuidx(other_cs,
+                                (1 << ARMMMUIdx_S12NSE1) |
+                                (1 << ARMMMUIdx_S12NSE0));
         }
     }
 }
@@ -2994,7 +3018,7 @@ static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *other_cs;
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S1E2, -1);
+        tlb_flush_by_mmuidx(other_cs, (1 << ARMMMUIdx_S1E2));
     }
 }
 
@@ -3004,7 +3028,7 @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *other_cs;
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S1E3, -1);
+        tlb_flush_by_mmuidx(other_cs, (1 << ARMMMUIdx_S1E3));
     }
 }
 
@@ -3021,11 +3045,13 @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     if (arm_is_secure_below_el3(env)) {
-        tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1SE1,
-                                 ARMMMUIdx_S1SE0, -1);
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 (1 << ARMMMUIdx_S1SE1) |
+                                 (1 << ARMMMUIdx_S1SE0));
     } else {
-        tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S12NSE1,
-                                 ARMMMUIdx_S12NSE0, -1);
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 (1 << ARMMMUIdx_S12NSE1) |
+                                 (1 << ARMMMUIdx_S12NSE0));
     }
 }
 
@@ -3040,7 +3066,7 @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = CPU(cpu);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1E2, -1);
+    tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E2));
 }
 
 static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3054,7 +3080,7 @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = CPU(cpu);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1E3, -1);
+    tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E3));
 }
 
 static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3066,11 +3092,13 @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     CPU_FOREACH(other_cs) {
         if (sec) {
-            tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1SE1,
-                                     ARMMMUIdx_S1SE0, -1);
+            tlb_flush_page_by_mmuidx(other_cs, pageaddr,
+                                     (1 << ARMMMUIdx_S1SE1) |
+                                     (1 << ARMMMUIdx_S1SE0));
         } else {
-            tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S12NSE1,
-                                     ARMMMUIdx_S12NSE0, -1);
+            tlb_flush_page_by_mmuidx(other_cs, pageaddr,
+                                     (1 << ARMMMUIdx_S12NSE1) |
+                                     (1 << ARMMMUIdx_S12NSE0));
         }
     }
 }
@@ -3082,7 +3110,7 @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
    uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1E2, -1);
+        tlb_flush_page_by_mmuidx(other_cs, pageaddr, (1 << ARMMMUIdx_S1E2));
     }
 }
 
@@ -3093,7 +3121,7 @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1E3, -1);
+        tlb_flush_page_by_mmuidx(other_cs, pageaddr, (1 << ARMMMUIdx_S1E3));
     }
 }
 
@@ -3116,7 +3144,7 @@ static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     pageaddr = sextract64(value << 12, 0, 48);
 
-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S2NS, -1);
+    tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S2NS));
 }
 
 static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3132,7 +3160,7 @@ static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     pageaddr = sextract64(value << 12, 0, 48);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S2NS, -1);
+        tlb_flush_page_by_mmuidx(other_cs, pageaddr, (1 << ARMMMUIdx_S2NS));
     }
 }
 
diff --git a/target/sparc/ldst_helper.c b/target/sparc/ldst_helper.c
index 2c05d6af75..57968d9143 100644
--- a/target/sparc/ldst_helper.c
+++ b/target/sparc/ldst_helper.c
@@ -1768,13 +1768,15 @@ void helper_st_asi(CPUSPARCState *env, target_ulong addr, target_ulong val,
         case 1:
             env->dmmu.mmu_primary_context = val;
             env->immu.mmu_primary_context = val;
-            tlb_flush_by_mmuidx(CPU(cpu), MMU_USER_IDX, MMU_KERNEL_IDX, -1);
+            tlb_flush_by_mmuidx(CPU(cpu),
+                                (1 << MMU_USER_IDX) | (1 << MMU_KERNEL_IDX));
             break;
         case 2:
            env->dmmu.mmu_secondary_context = val;
            env->immu.mmu_secondary_context = val;
-            tlb_flush_by_mmuidx(CPU(cpu), MMU_USER_SECONDARY_IDX,
-                                MMU_KERNEL_SECONDARY_IDX, -1);
+            tlb_flush_by_mmuidx(CPU(cpu),
+                                (1 << MMU_USER_SECONDARY_IDX) |
+                                (1 << MMU_KERNEL_SECONDARY_IDX));
             break;
         default:
             cpu_unassigned_access(cs, addr, true, false, 1, size);
-- 
2.11.0