From: Peter Maydell
To: qemu-devel@nongnu.org
Date: Mon, 1 Jul 2019 17:39:33 +0100
Message-Id: <20190701163943.22313-37-peter.maydell@linaro.org>
In-Reply-To: <20190701163943.22313-1-peter.maydell@linaro.org>
References: <20190701163943.22313-1-peter.maydell@linaro.org>
Subject: [Qemu-devel] [PULL 36/46] target/arm: Move the DC ZVA helper into op_helper

From: Samuel Ortiz

Those helpers are a software implementation of the ARM v8 memory zeroing
op code. They should be moved to the op helper file, which is going to
eventually be built only when TCG is enabled.
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
Signed-off-by: Samuel Ortiz
Reviewed-by: Alex Bennée
Signed-off-by: Philippe Mathieu-Daudé
Message-id: 20190701132516.26392-10-philmd@redhat.com
[PMD: Rebased]
Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Peter Maydell
Signed-off-by: Peter Maydell
---
 target/arm/helper.c    | 92 -----------------------------------------
 target/arm/op_helper.c | 93 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 93 insertions(+), 92 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index c77ed852155..a87fda91914 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -13308,98 +13308,6 @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
 #endif
 }

-void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
-{
-    /*
-     * Implement DC ZVA, which zeroes a fixed-length block of memory.
-     * Note that we do not implement the (architecturally mandated)
-     * alignment fault for attempts to use this on Device memory
-     * (which matches the usual QEMU behaviour of not implementing either
-     * alignment faults or any memory attribute handling).
-     */
-
-    ARMCPU *cpu = env_archcpu(env);
-    uint64_t blocklen = 4 << cpu->dcz_blocksize;
-    uint64_t vaddr = vaddr_in & ~(blocklen - 1);
-
-#ifndef CONFIG_USER_ONLY
-    {
-        /*
-         * Slightly awkwardly, QEMU's TARGET_PAGE_SIZE may be less than
-         * the block size so we might have to do more than one TLB lookup.
-         * We know that in fact for any v8 CPU the page size is at least 4K
-         * and the block size must be 2K or less, but TARGET_PAGE_SIZE is only
-         * 1K as an artefact of legacy v5 subpage support being present in the
-         * same QEMU executable. So in practice the hostaddr[] array has
-         * two entries, given the current setting of TARGET_PAGE_BITS_MIN.
-         */
-        int maxidx = DIV_ROUND_UP(blocklen, TARGET_PAGE_SIZE);
-        void *hostaddr[DIV_ROUND_UP(2 * KiB, 1 << TARGET_PAGE_BITS_MIN)];
-        int try, i;
-        unsigned mmu_idx = cpu_mmu_index(env, false);
-        TCGMemOpIdx oi = make_memop_idx(MO_UB, mmu_idx);
-
-        assert(maxidx <= ARRAY_SIZE(hostaddr));
-
-        for (try = 0; try < 2; try++) {
-
-            for (i = 0; i < maxidx; i++) {
-                hostaddr[i] = tlb_vaddr_to_host(env,
-                                                vaddr + TARGET_PAGE_SIZE * i,
-                                                1, mmu_idx);
-                if (!hostaddr[i]) {
-                    break;
-                }
-            }
-            if (i == maxidx) {
-                /*
-                 * If it's all in the TLB it's fair game for just writing to;
-                 * we know we don't need to update dirty status, etc.
-                 */
-                for (i = 0; i < maxidx - 1; i++) {
-                    memset(hostaddr[i], 0, TARGET_PAGE_SIZE);
-                }
-                memset(hostaddr[i], 0, blocklen - (i * TARGET_PAGE_SIZE));
-                return;
-            }
-            /*
-             * OK, try a store and see if we can populate the tlb. This
-             * might cause an exception if the memory isn't writable,
-             * in which case we will longjmp out of here. We must for
-             * this purpose use the actual register value passed to us
-             * so that we get the fault address right.
-             */
-            helper_ret_stb_mmu(env, vaddr_in, 0, oi, GETPC());
-            /* Now we can populate the other TLB entries, if any */
-            for (i = 0; i < maxidx; i++) {
-                uint64_t va = vaddr + TARGET_PAGE_SIZE * i;
-                if (va != (vaddr_in & TARGET_PAGE_MASK)) {
-                    helper_ret_stb_mmu(env, va, 0, oi, GETPC());
-                }
-            }
-        }
-
-        /*
-         * Slow path (probably attempt to do this to an I/O device or
-         * similar, or clearing of a block of code we have translations
-         * cached for). Just do a series of byte writes as the architecture
-         * demands. It's not worth trying to use a cpu_physical_memory_map(),
-         * memset(), unmap() sequence here because:
-         *  + we'd need to account for the blocksize being larger than a page
-         *  + the direct-RAM access case is almost always going to be dealt
-         *    with in the fastpath code above, so there's no speed benefit
-         *  + we would have to deal with the map returning NULL because the
-         *    bounce buffer was in use
-         */
-        for (i = 0; i < blocklen; i++) {
-            helper_ret_stb_mmu(env, vaddr + i, 0, oi, GETPC());
-        }
-    }
-#else
-    memset(g2h(vaddr), 0, blocklen);
-#endif
-}
-
 /* Note that signed overflow is undefined in C.  The following routines are
    careful to use unsigned types where modulo arithmetic is required.
    Failure to do so _will_ break on newer gcc.  */
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index b1952486c61..7c835d3ce77 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -17,6 +17,7 @@
  * License along with this library; if not, see <http://www.gnu.org/licenses/>.
  */
 #include "qemu/osdep.h"
+#include "qemu/units.h"
 #include "qemu/log.h"
 #include "qemu/main-loop.h"
 #include "cpu.h"
@@ -1325,3 +1326,95 @@ uint32_t HELPER(ror_cc)(CPUARMState *env, uint32_t x, uint32_t i)
         return ((uint32_t)x >> shift) | (x << (32 - shift));
     }
 }
+
+void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
+{
+    /*
+     * Implement DC ZVA, which zeroes a fixed-length block of memory.
+     * Note that we do not implement the (architecturally mandated)
+     * alignment fault for attempts to use this on Device memory
+     * (which matches the usual QEMU behaviour of not implementing either
+     * alignment faults or any memory attribute handling).
+     */
+
+    ARMCPU *cpu = env_archcpu(env);
+    uint64_t blocklen = 4 << cpu->dcz_blocksize;
+    uint64_t vaddr = vaddr_in & ~(blocklen - 1);
+
+#ifndef CONFIG_USER_ONLY
+    {
+        /*
+         * Slightly awkwardly, QEMU's TARGET_PAGE_SIZE may be less than
+         * the block size so we might have to do more than one TLB lookup.
+         * We know that in fact for any v8 CPU the page size is at least 4K
+         * and the block size must be 2K or less, but TARGET_PAGE_SIZE is only
+         * 1K as an artefact of legacy v5 subpage support being present in the
+         * same QEMU executable. So in practice the hostaddr[] array has
+         * two entries, given the current setting of TARGET_PAGE_BITS_MIN.
+         */
+        int maxidx = DIV_ROUND_UP(blocklen, TARGET_PAGE_SIZE);
+        void *hostaddr[DIV_ROUND_UP(2 * KiB, 1 << TARGET_PAGE_BITS_MIN)];
+        int try, i;
+        unsigned mmu_idx = cpu_mmu_index(env, false);
+        TCGMemOpIdx oi = make_memop_idx(MO_UB, mmu_idx);
+
+        assert(maxidx <= ARRAY_SIZE(hostaddr));
+
+        for (try = 0; try < 2; try++) {
+
+            for (i = 0; i < maxidx; i++) {
+                hostaddr[i] = tlb_vaddr_to_host(env,
+                                                vaddr + TARGET_PAGE_SIZE * i,
+                                                1, mmu_idx);
+                if (!hostaddr[i]) {
+                    break;
+                }
+            }
+            if (i == maxidx) {
+                /*
+                 * If it's all in the TLB it's fair game for just writing to;
+                 * we know we don't need to update dirty status, etc.
+                 */
+                for (i = 0; i < maxidx - 1; i++) {
+                    memset(hostaddr[i], 0, TARGET_PAGE_SIZE);
+                }
+                memset(hostaddr[i], 0, blocklen - (i * TARGET_PAGE_SIZE));
+                return;
+            }
+            /*
+             * OK, try a store and see if we can populate the tlb. This
+             * might cause an exception if the memory isn't writable,
+             * in which case we will longjmp out of here. We must for
+             * this purpose use the actual register value passed to us
+             * so that we get the fault address right.
+             */
+            helper_ret_stb_mmu(env, vaddr_in, 0, oi, GETPC());
+            /* Now we can populate the other TLB entries, if any */
+            for (i = 0; i < maxidx; i++) {
+                uint64_t va = vaddr + TARGET_PAGE_SIZE * i;
+                if (va != (vaddr_in & TARGET_PAGE_MASK)) {
+                    helper_ret_stb_mmu(env, va, 0, oi, GETPC());
+                }
+            }
+        }
+
+        /*
+         * Slow path (probably attempt to do this to an I/O device or
+         * similar, or clearing of a block of code we have translations
+         * cached for). Just do a series of byte writes as the architecture
+         * demands. It's not worth trying to use a cpu_physical_memory_map(),
+         * memset(), unmap() sequence here because:
+         *  + we'd need to account for the blocksize being larger than a page
+         *  + the direct-RAM access case is almost always going to be dealt
+         *    with in the fastpath code above, so there's no speed benefit
+         *  + we would have to deal with the map returning NULL because the
+         *    bounce buffer was in use
+         */
+        for (i = 0; i < blocklen; i++) {
+            helper_ret_stb_mmu(env, vaddr + i, 0, oi, GETPC());
+        }
+    }
+#else
+    memset(g2h(vaddr), 0, blocklen);
+#endif
+}
-- 
2.20.1
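For context, this is roughly what guest code does with the instruction the helper above emulates. The sketch below is a minimal, hypothetical AArch64 user-space example (not part of this patch): it reads the DC ZVA block size from DCZID_EL0 and zeroes a buffer with DC ZVA, assuming GCC-style inline assembly and that DCZID_EL0.DZP reports the instruction as permitted.

#include <stddef.h>
#include <stdint.h>

/* DCZID_EL0[3:0] is log2 of the DC ZVA block size in 4-byte words, so the
 * byte size is 4 << BS -- the same "4 << dcz_blocksize" computation the
 * helper uses. Bit 4 (DZP), when set, means DC ZVA is prohibited; this
 * sketch assumes it is clear. */
static size_t dc_zva_block_size(void)
{
    uint64_t dczid;
    __asm__ volatile("mrs %0, dczid_el0" : "=r"(dczid));
    return (size_t)4 << (dczid & 0xf);
}

/* Zero [buf, buf + len) using DC ZVA for the block-aligned middle part. */
static void zero_with_dc_zva(char *buf, size_t len)
{
    size_t block = dc_zva_block_size();
    char *p = buf;
    char *end = buf + len;

    /* Byte stores until p reaches a block-aligned address. */
    while (p < end && ((uintptr_t)p & (block - 1))) {
        *p++ = 0;
    }
    /* One DC ZVA per whole block; the CPU zeroes 'block' bytes at p. */
    while ((size_t)(end - p) >= block) {
        __asm__ volatile("dc zva, %0" : : "r"(p) : "memory");
        p += block;
    }
    /* Remaining tail bytes. */
    while (p < end) {
        *p++ = 0;
    }
}

Under TCG, each DC ZVA issued by a loop like this lands in HELPER(dc_zva) above, which fast-paths the whole block through memset() when every page of the block is already present in the TLB.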