From nobody Wed Nov 12 05:29:32 2025
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, alex.bennee@linaro.org, stefanha@redhat.com
Date: Tue, 17 Sep 2019 22:26:39 -0700
Message-Id: <20190918052641.21300-2-richard.henderson@linaro.org>
In-Reply-To: <20190918052641.21300-1-richard.henderson@linaro.org>
References: <20190918052641.21300-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [RFC 1/3] exec: Adjust notdirty tracing

The memory_region_tb_read tracepoint is unreachable, since notdirty
is supposed to apply only to writes.

The memory_region_tb_write tracepoint is mis-named, because notdirty
is not only used for TB invalidation.  It is also used for e.g.
VGA RAM updates.

Replace memory_region_tb_write with memory_notdirty_write, and place
it in memory_notdirty_write_prepare where it can catch all of the
instances.  Add memory_notdirty_dirty to log when we no longer
intercept writes to a page.
Signed-off-by: Richard Henderson
---
 exec.c       | 3 +++
 memory.c     | 4 ----
 trace-events | 4 ++--
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/exec.c b/exec.c
index 8b998974f8..9babe57615 100644
--- a/exec.c
+++ b/exec.c
@@ -2755,6 +2755,8 @@ void memory_notdirty_write_prepare(NotDirtyInfo *ndi,
     ndi->size = size;
     ndi->pages = NULL;
 
+    trace_memory_notdirty_write(mem_vaddr, ram_addr, size);
+
     assert(tcg_enabled());
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
         ndi->pages = page_collection_lock(ram_addr, ram_addr + size);
@@ -2779,6 +2781,7 @@ void memory_notdirty_write_complete(NotDirtyInfo *ndi)
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (!cpu_physical_memory_is_clean(ndi->ram_addr)) {
+        trace_memory_notdirty_dirty(ndi->mem_vaddr);
         tlb_set_dirty(ndi->cpu, ndi->mem_vaddr);
     }
 }
diff --git a/memory.c b/memory.c
index b9dd6b94ca..57c44c97db 100644
--- a/memory.c
+++ b/memory.c
@@ -438,7 +438,6 @@ static MemTxResult memory_region_read_accessor(MemoryRegion *mr,
         /* Accesses to code which has previously been translated into a TB show
          * up in the MMIO path, as accesses to the io_mem_notdirty
          * MemoryRegion. */
-        trace_memory_region_tb_read(get_cpu_index(), addr, tmp, size);
     } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -465,7 +464,6 @@ static MemTxResult memory_region_read_with_attrs_accessor(MemoryRegion *mr,
         /* Accesses to code which has previously been translated into a TB show
          * up in the MMIO path, as accesses to the io_mem_notdirty
          * MemoryRegion.
          */
-        trace_memory_region_tb_read(get_cpu_index(), addr, tmp, size);
     } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -490,7 +488,6 @@ static MemTxResult memory_region_write_accessor(MemoryRegion *mr,
         /* Accesses to code which has previously been translated into a TB show
          * up in the MMIO path, as accesses to the io_mem_notdirty
          * MemoryRegion. */
-        trace_memory_region_tb_write(get_cpu_index(), addr, tmp, size);
     } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_write(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -515,7 +512,6 @@ static MemTxResult memory_region_write_with_attrs_accessor(MemoryRegion *mr,
         /* Accesses to code which has previously been translated into a TB show
          * up in the MMIO path, as accesses to the io_mem_notdirty
          * MemoryRegion. */
-        trace_memory_region_tb_write(get_cpu_index(), addr, tmp, size);
     } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_write(get_cpu_index(), mr, abs_addr, tmp, size);
diff --git a/trace-events b/trace-events
index 823a4ae64e..5c9a1631e7 100644
--- a/trace-events
+++ b/trace-events
@@ -52,14 +52,14 @@ dma_map_wait(void *dbs) "dbs=%p"
 find_ram_offset(uint64_t size, uint64_t offset) "size: 0x%" PRIx64 " @ 0x%" PRIx64
 find_ram_offset_loop(uint64_t size, uint64_t candidate, uint64_t offset, uint64_t next, uint64_t mingap) "trying size: 0x%" PRIx64 " @ 0x%" PRIx64 ", offset: 0x%" PRIx64 " next: 0x%" PRIx64 " mingap: 0x%" PRIx64
 ram_block_discard_range(const char *rbname, void *hva, size_t length, bool need_madvise, bool need_fallocate, int ret) "%s@%p + 0x%zx: madvise: %d fallocate: %d ret: %d"
+memory_notdirty_write(uint64_t vaddr, uint64_t ram_addr, unsigned size) "0x%" PRIx64 " ram_addr 0x%" PRIx64 " size %u"
+memory_notdirty_dirty(uint64_t vaddr) "0x%" PRIx64
 
 # memory.c
 memory_region_ops_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_ops_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_subpage_read(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_subpage_write(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
-memory_region_tb_read(int cpu_index, uint64_t addr, uint64_t value, unsigned size) "cpu %d addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
-memory_region_tb_write(int cpu_index, uint64_t addr, uint64_t value, unsigned size) "cpu %d addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_ram_device_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 flatview_new(void *view, void *root) "%p (root %p)"
-- 
2.17.1

From nobody Wed Nov 12 05:29:32 2025
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, alex.bennee@linaro.org, stefanha@redhat.com
Date: Tue, 17 Sep 2019 22:26:40 -0700
Message-Id: <20190918052641.21300-3-richard.henderson@linaro.org>
In-Reply-To: <20190918052641.21300-1-richard.henderson@linaro.org>
References: <20190918052641.21300-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [RFC 2/3] cputlb: Move NOTDIRTY handling from I/O path to TLB path

Pages that we want to track for NOTDIRTY are RAM.  We do not really
need to go through the I/O path to handle them.

Create cpu_notdirty_write() from the corpses of
memory_notdirty_write_prepare and memory_notdirty_write_complete.
Use this new function to implement all of the notdirty handling.

This merge is enabled by a previous patch, 9458a9a1df1a ("memory:
fix race between TCG and accesses"), which forces users of the dirty
bitmap to delay reads until all vcpus have exited any TB.  Thus we no
longer require the actual write to happen between *_prepare and
*_complete.
Signed-off-by: Richard Henderson
---
 include/exec/cpu-common.h      |  1 -
 include/exec/memory-internal.h | 53 +++---------------
 accel/tcg/cputlb.c             | 66 +++++++++++++----------
 exec.c                         | 98 ++++++----------------------------
 memory.c                       | 16 ------
 5 files changed, 61 insertions(+), 173 deletions(-)

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index f7dbe75fbc..06c60c82be 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -101,7 +101,6 @@ void qemu_flush_coalesced_mmio_buffer(void);
 void cpu_flush_icache_range(hwaddr start, hwaddr len);
 
 extern struct MemoryRegion io_mem_rom;
-extern struct MemoryRegion io_mem_notdirty;
 
 typedef int (RAMBlockIterFunc)(RAMBlock *rb, void *opaque);
 
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
index ef4fb92371..55f75e7315 100644
--- a/include/exec/memory-internal.h
+++ b/include/exec/memory-internal.h
@@ -52,67 +52,28 @@ void mtree_print_dispatch(struct AddressSpaceDispatch *d,
 
 struct page_collection;
 
-/* Opaque struct for passing info from memory_notdirty_write_prepare()
- * to memory_notdirty_write_complete(). Callers should treat all fields
- * as private, with the exception of @active.
- *
- * @active is a field which is not touched by either the prepare or
- * complete functions, but which the caller can use if it wishes to
- * track whether it has called prepare for this struct and so needs
- * to later call the complete function.
- */
-typedef struct {
-    CPUState *cpu;
-    struct page_collection *pages;
-    ram_addr_t ram_addr;
-    vaddr mem_vaddr;
-    unsigned size;
-    bool active;
-} NotDirtyInfo;
-
 /**
- * memory_notdirty_write_prepare: call before writing to non-dirty memory
- * @ndi: pointer to opaque NotDirtyInfo struct
+ * cpu_notdirty_write: call before writing to non-dirty memory
  * @cpu: CPU doing the write
  * @mem_vaddr: virtual address of write
  * @ram_addr: the ram address of the write
  * @size: size of write in bytes
  *
- * Any code which writes to the host memory corresponding to
- * guest RAM which has been marked as NOTDIRTY must wrap those
- * writes in calls to memory_notdirty_write_prepare() and
- * memory_notdirty_write_complete():
+ * Any code which writes to the host memory corresponding to guest RAM
+ * which has been marked as NOTDIRTY must call cpu_notdirty_write().
  *
- *  NotDirtyInfo ndi;
- *  memory_notdirty_write_prepare(&ndi, ....);
- *  ... perform write here ...
- *  memory_notdirty_write_complete(&ndi);
- *
- * These calls will ensure that we flush any TCG translated code for
+ * This function ensures that we flush any TCG translated code for
  * the memory being written, update the dirty bits and (if possible)
  * remove the slowpath callback for writing to the memory.
  *
  * This must only be called if we are using TCG; it will assert otherwise.
  *
- * We may take locks in the prepare call, so callers must ensure that
- * they don't exit (via longjump or otherwise) without calling complete.
- *
  * This call must only be made inside an RCU critical section.
  * (Note that while we're executing a TCG TB we're always in an
- * RCU critical section, which is likely to be the case for callers
- * of these functions.)
+ * RCU critical section, which is likely to be the case for any callers.)
  */
-void memory_notdirty_write_prepare(NotDirtyInfo *ndi,
-                                   CPUState *cpu,
-                                   vaddr mem_vaddr,
-                                   ram_addr_t ram_addr,
-                                   unsigned size);
-/**
- * memory_notdirty_write_complete: finish write to non-dirty memory
- * @ndi: pointer to the opaque NotDirtyInfo struct which was initialized
- * by memory_not_dirty_write_prepare().
- */
-void memory_notdirty_write_complete(NotDirtyInfo *ndi);
+void cpu_notdirty_write(CPUState *cpu, vaddr mem_vaddr,
+                        ram_addr_t ram_addr, unsigned size);
 
 #endif
 #endif
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 354a75927a..7c4c763b88 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -904,7 +904,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
     cpu->mem_io_pc = retaddr;
-    if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
+    if (mr != &io_mem_rom && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
 
@@ -945,7 +945,7 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
-    if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
+    if (mr != &io_mem_rom && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
     cpu->mem_io_vaddr = addr;
@@ -1117,16 +1117,26 @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
         return NULL;
     }
 
-    /* Handle watchpoints.  */
-    if (tlb_addr & TLB_WATCHPOINT) {
-        cpu_check_watchpoint(env_cpu(env), addr, size,
-                             env_tlb(env)->d[mmu_idx].iotlb[index].attrs,
-                             wp_access, retaddr);
-    }
+    if (unlikely(tlb_addr & (TLB_WATCHPOINT | TLB_NOTDIRTY | TLB_MMIO))) {
+        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
 
-    if (tlb_addr & (TLB_NOTDIRTY | TLB_MMIO)) {
-        /* I/O access */
-        return NULL;
+        /* Reject memory mapped I/O.  */
+        if (tlb_addr & TLB_MMIO) {
+            /* I/O access */
+            return NULL;
+        }
+
+        /* Handle watchpoints.  */
+        if (tlb_addr & TLB_WATCHPOINT) {
+            cpu_check_watchpoint(env_cpu(env), addr, size, iotlbentry->attrs,
+                                 wp_access, retaddr);
+        }
+
+        /* Handle clean pages.  */
+        if (tlb_addr & TLB_NOTDIRTY) {
+            cpu_notdirty_write(env_cpu(env), addr,
+                               addr + iotlbentry->addr, size);
+        }
     }
 
     return (void *)((uintptr_t)addr + entry->addend);
@@ -1185,8 +1195,7 @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
 /* Probe for a read-modify-write atomic operation.  Do not allow unaligned
  * operations, or io operations to proceed.  Return the host address.  */
 static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
-                               TCGMemOpIdx oi, uintptr_t retaddr,
-                               NotDirtyInfo *ndi)
+                               TCGMemOpIdx oi, uintptr_t retaddr)
 {
     size_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1227,7 +1236,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
         tlb_addr = tlb_addr_write(tlbe) & ~TLB_INVALID_MASK;
     }
 
-    /* Notice an IO access or a needs-MMU-lookup access */
+    /* Notice an IO access */
     if (unlikely(tlb_addr & TLB_MMIO)) {
         /* There's really nothing that can be done to support this
            apart from stop-the-world.  */
@@ -1246,12 +1255,10 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 
     hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
 
-    ndi->active = false;
     if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
-        ndi->active = true;
-        memory_notdirty_write_prepare(ndi, env_cpu(env), addr,
-                                      qemu_ram_addr_from_host_nofail(hostaddr),
-                                      1 << s_bits);
+        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        cpu_notdirty_write(env_cpu(env), addr,
+                           addr + iotlbentry->addr, 1 << s_bits);
     }
 
     return hostaddr;
@@ -1603,12 +1610,18 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     }
 
     /* Handle I/O access.  */
-    if (likely(tlb_addr & (TLB_MMIO | TLB_NOTDIRTY))) {
+    if (likely(tlb_addr & TLB_MMIO)) {
         io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
                   op ^ (tlb_addr & TLB_BSWAP ? MO_BSWAP : 0));
        return;
     }
 
+    /* Handle clean pages.  This is always RAM.  */
+    if (tlb_addr & TLB_NOTDIRTY) {
+        cpu_notdirty_write(env_cpu(env), addr,
+                           addr + iotlbentry->addr, size);
+    }
+
     if (unlikely(tlb_addr & TLB_BSWAP)) {
         haddr = (void *)((uintptr_t)addr + entry->addend);
         direct_swap(haddr, val);
@@ -1735,14 +1748,9 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 #define EXTRA_ARGS , TCGMemOpIdx oi, uintptr_t retaddr
 #define ATOMIC_NAME(X) \
     HELPER(glue(glue(glue(atomic_ ## X, SUFFIX), END), _mmu))
-#define ATOMIC_MMU_DECLS NotDirtyInfo ndi
-#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, retaddr, &ndi)
-#define ATOMIC_MMU_CLEANUP                              \
-    do {                                                \
-        if (unlikely(ndi.active)) {                     \
-            memory_notdirty_write_complete(&ndi);       \
-        }                                               \
-    } while (0)
+#define ATOMIC_MMU_DECLS
+#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, retaddr)
+#define ATOMIC_MMU_CLEANUP
 
 #define DATA_SIZE 1
 #include "atomic_template.h"
@@ -1770,7 +1778,7 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 #undef ATOMIC_MMU_LOOKUP
 #define EXTRA_ARGS         , TCGMemOpIdx oi
 #define ATOMIC_NAME(X)     HELPER(glue(glue(atomic_ ## X, SUFFIX), END))
-#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, GETPC(), &ndi)
+#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, GETPC())
 
 #define DATA_SIZE 1
 #include "atomic_template.h"
diff --git a/exec.c b/exec.c
index 9babe57615..219198e80e 100644
--- a/exec.c
+++ b/exec.c
@@ -88,7 +88,7 @@ static MemoryRegion *system_io;
 AddressSpace address_space_io;
 AddressSpace address_space_memory;
 
-MemoryRegion io_mem_rom, io_mem_notdirty;
+MemoryRegion io_mem_rom;
 static MemoryRegion io_mem_unassigned;
 #endif
 
@@ -191,8 +191,7 @@ typedef struct subpage_t {
 } subpage_t;
 
 #define PHYS_SECTION_UNASSIGNED 0
-#define PHYS_SECTION_NOTDIRTY 1
-#define PHYS_SECTION_ROM 2
+#define PHYS_SECTION_ROM 1
 
 static void io_mem_init(void);
 static void memory_map_init(void);
@@ -1473,9 +1472,7 @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
     if (memory_region_is_ram(section->mr)) {
         /* Normal RAM.  */
         iotlb = memory_region_get_ram_addr(section->mr) + xlat;
-        if (!section->readonly) {
-            iotlb |= PHYS_SECTION_NOTDIRTY;
-        } else {
+        if (section->readonly) {
             iotlb |= PHYS_SECTION_ROM;
         }
     } else {
@@ -2743,85 +2740,33 @@ ram_addr_t qemu_ram_addr_from_host(void *ptr)
 }
 
 /* Called within RCU critical section.  */
-void memory_notdirty_write_prepare(NotDirtyInfo *ndi,
-                                   CPUState *cpu,
-                                   vaddr mem_vaddr,
-                                   ram_addr_t ram_addr,
-                                   unsigned size)
+void cpu_notdirty_write(CPUState *cpu, vaddr mem_vaddr,
+                        ram_addr_t ram_addr, unsigned size)
 {
-    ndi->cpu = cpu;
-    ndi->ram_addr = ram_addr;
-    ndi->mem_vaddr = mem_vaddr;
-    ndi->size = size;
-    ndi->pages = NULL;
-
     trace_memory_notdirty_write(mem_vaddr, ram_addr, size);
 
     assert(tcg_enabled());
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
-        ndi->pages = page_collection_lock(ram_addr, ram_addr + size);
-        tb_invalidate_phys_page_fast(ndi->pages, ram_addr, size);
-    }
-}
+        struct page_collection *pages;
 
-/* Called within RCU critical section. */
-void memory_notdirty_write_complete(NotDirtyInfo *ndi)
-{
-    if (ndi->pages) {
-        assert(tcg_enabled());
-        page_collection_unlock(ndi->pages);
-        ndi->pages = NULL;
+        pages = page_collection_lock(ram_addr, ram_addr + size);
+        tb_invalidate_phys_page_fast(pages, ram_addr, size);
+        page_collection_unlock(pages);
     }
 
-    /* Set both VGA and migration bits for simplicity and to remove
+    /*
+     * Set both VGA and migration bits for simplicity and to remove
      * the notdirty callback faster.
      */
-    cpu_physical_memory_set_dirty_range(ndi->ram_addr, ndi->size,
-                                        DIRTY_CLIENTS_NOCODE);
-    /* we remove the notdirty callback only if the code has been
-       flushed */
-    if (!cpu_physical_memory_is_clean(ndi->ram_addr)) {
-        trace_memory_notdirty_dirty(ndi->mem_vaddr);
-        tlb_set_dirty(ndi->cpu, ndi->mem_vaddr);
+    cpu_physical_memory_set_dirty_range(ram_addr, size, DIRTY_CLIENTS_NOCODE);
+
+    /* We remove the notdirty callback only if the code has been flushed.  */
+    if (!cpu_physical_memory_is_clean(ram_addr)) {
+        trace_memory_notdirty_dirty(mem_vaddr);
+        tlb_set_dirty(cpu, mem_vaddr);
     }
 }
 
-/* Called within RCU critical section. */
-static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
-                               uint64_t val, unsigned size)
-{
-    NotDirtyInfo ndi;
-
-    memory_notdirty_write_prepare(&ndi, current_cpu, current_cpu->mem_io_vaddr,
-                                  ram_addr, size);
-
-    stn_p(qemu_map_ram_ptr(NULL, ram_addr), size, val);
-    memory_notdirty_write_complete(&ndi);
-}
-
-static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write,
-                                 MemTxAttrs attrs)
-{
-    return is_write;
-}
-
-static const MemoryRegionOps notdirty_mem_ops = {
-    .write = notdirty_mem_write,
-    .valid.accepts = notdirty_mem_accepts,
-    .endianness = DEVICE_NATIVE_ENDIAN,
-    .valid = {
-        .min_access_size = 1,
-        .max_access_size = 8,
-        .unaligned = false,
-    },
-    .impl = {
-        .min_access_size = 1,
-        .max_access_size = 8,
-        .unaligned = false,
-    },
-};
-
 /* Generate a debug exception if a watchpoint has been hit.  */
 void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
                           MemTxAttrs attrs, int flags, uintptr_t ra)
@@ -3051,13 +2996,6 @@ static void io_mem_init(void)
                           NULL, NULL, UINT64_MAX);
     memory_region_init_io(&io_mem_unassigned, NULL, &unassigned_mem_ops, NULL,
                           NULL, UINT64_MAX);
-
-    /* io_mem_notdirty calls tb_invalidate_phys_page_fast,
-     * which can be called without the iothread mutex.
-     */
-    memory_region_init_io(&io_mem_notdirty, NULL, &notdirty_mem_ops, NULL,
-                          NULL, UINT64_MAX);
-    memory_region_clear_global_locking(&io_mem_notdirty);
 }
 
 AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv)
@@ -3067,8 +3005,6 @@ AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv)
 
     n = dummy_section(&d->map, fv, &io_mem_unassigned);
     assert(n == PHYS_SECTION_UNASSIGNED);
-    n = dummy_section(&d->map, fv, &io_mem_notdirty);
-    assert(n == PHYS_SECTION_NOTDIRTY);
     n = dummy_section(&d->map, fv, &io_mem_rom);
     assert(n == PHYS_SECTION_ROM);
 
diff --git a/memory.c b/memory.c
index 57c44c97db..a99b8c0767 100644
--- a/memory.c
+++ b/memory.c
@@ -434,10 +434,6 @@ static MemTxResult memory_region_read_accessor(MemoryRegion *mr,
     tmp = mr->ops->read(mr->opaque, addr, size);
     if (mr->subpage) {
         trace_memory_region_subpage_read(get_cpu_index(), mr, addr, tmp, size);
-    } else if (mr == &io_mem_notdirty) {
-        /* Accesses to code which has previously been translated into a TB show
-         * up in the MMIO path, as accesses to the io_mem_notdirty
-         * MemoryRegion. */
     } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -460,10 +456,6 @@ static MemTxResult memory_region_read_with_attrs_accessor(MemoryRegion *mr,
     r = mr->ops->read_with_attrs(mr->opaque, addr, &tmp, size, attrs);
     if (mr->subpage) {
         trace_memory_region_subpage_read(get_cpu_index(), mr, addr, tmp, size);
-    } else if (mr == &io_mem_notdirty) {
-        /* Accesses to code which has previously been translated into a TB show
-         * up in the MMIO path, as accesses to the io_mem_notdirty
-         * MemoryRegion. */
     } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -484,10 +476,6 @@ static MemTxResult memory_region_write_accessor(MemoryRegion *mr,
 
     if (mr->subpage) {
         trace_memory_region_subpage_write(get_cpu_index(), mr, addr, tmp, size);
-    } else if (mr == &io_mem_notdirty) {
-        /* Accesses to code which has previously been translated into a TB show
-         * up in the MMIO path, as accesses to the io_mem_notdirty
-         * MemoryRegion. */
     } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_write(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -508,10 +496,6 @@ static MemTxResult memory_region_write_with_attrs_accessor(MemoryRegion *mr,
 
     if (mr->subpage) {
         trace_memory_region_subpage_write(get_cpu_index(), mr, addr, tmp, size);
-    } else if (mr == &io_mem_notdirty) {
-        /* Accesses to code which has previously been translated into a TB show
-         * up in the MMIO path, as accesses to the io_mem_notdirty
-         * MemoryRegion. */
     } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_write(get_cpu_index(), mr, abs_addr, tmp, size);
-- 
2.17.1
JWpuM9q1P4iPfx+TbrCEZhbNNmuBrn6Ce53ix6CIVRfMhKxrC9hCvDyFZcOqoh4rQce+ dgpdKPCBUI4oJAVsbe2dbReI2/2fiGQjQqridbCdYIXX38DVCetZnSniIcdN5S+uSuA9 tQDWMfcIsMG29iO0YQf6wvn1N+vd3wkoj7yju7jI6dTZZMiDwkL/t2paCzjcps83nTo3 kt9A== X-Gm-Message-State: APjAAAX5B1mQraRb92nN3zB8UWkXTbMgxigAdbatSo4LEXsw/+7hnpL0 LRFA1PSK8rNQ3FiPhQcET6RW+LzMlbI= X-Google-Smtp-Source: APXvYqwNuO2yOuuo+tsqTQoa7EkVzRR0KYQHjqtVVrGnEGqhntmbrG5xSJuIe0kmkqzmoXQqT28HFg== X-Received: by 2002:a17:902:aa4a:: with SMTP id c10mr2305065plr.340.1568784408240; Tue, 17 Sep 2019 22:26:48 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Date: Tue, 17 Sep 2019 22:26:41 -0700 Message-Id: <20190918052641.21300-4-richard.henderson@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190918052641.21300-1-richard.henderson@linaro.org> References: <20190918052641.21300-1-richard.henderson@linaro.org> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 2607:f8b0:4864:20::62a Subject: [Qemu-devel] [RFC 3/3] cputlb: Remove ATOMIC_MMU_DECLS X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: pbonzini@redhat.com, alex.bennee@linaro.org, stefanha@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This macro no longer has a non-empty definition. 
Signed-off-by: Richard Henderson
---
 accel/tcg/atomic_template.h | 12 ------------
 accel/tcg/cputlb.c          |  1 -
 accel/tcg/user-exec.c       |  1 -
 3 files changed, 14 deletions(-)

diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h
index 287433d809..107660d5d3 100644
--- a/accel/tcg/atomic_template.h
+++ b/accel/tcg/atomic_template.h
@@ -95,7 +95,6 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
                               ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
     DATA_TYPE ret;
 
@@ -113,7 +112,6 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
 #if HAVE_ATOMIC128
 ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
 
     ATOMIC_TRACE_LD;
@@ -125,7 +123,6 @@ ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
                      ABI_TYPE val EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
 
     ATOMIC_TRACE_ST;
@@ -137,7 +134,6 @@ void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
 ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
                            ABI_TYPE val EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
     DATA_TYPE ret;
 
@@ -151,7 +147,6 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
 ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
                         ABI_TYPE val EXTRA_ARGS)                    \
 {                                                                   \
-    ATOMIC_MMU_DECLS;                                               \
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                           \
     DATA_TYPE ret;                                                  \
                                                                     \
@@ -183,7 +178,6 @@ GEN_ATOMIC_HELPER(xor_fetch)
 ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
                         ABI_TYPE xval EXTRA_ARGS)                   \
 {                                                                   \
-    ATOMIC_MMU_DECLS;                                               \
     XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                          \
     XDATA_TYPE cmp, old, new, val = xval;                           \
                                                                     \
@@ -229,7 +223,6 @@ GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
 ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
                               ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
     DATA_TYPE ret;
 
@@ -247,7 +240,6 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
 #if HAVE_ATOMIC128
 ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
 
     ATOMIC_TRACE_LD;
@@ -259,7 +251,6 @@ ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
                      ABI_TYPE val EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
 
     ATOMIC_TRACE_ST;
@@ -272,7 +263,6 @@ void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
 ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
                            ABI_TYPE val EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
     ABI_TYPE ret;
 
@@ -286,7 +276,6 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
 ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
                         ABI_TYPE val EXTRA_ARGS)                    \
 {                                                                   \
-    ATOMIC_MMU_DECLS;                                               \
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                           \
     DATA_TYPE ret;                                                  \
                                                                     \
@@ -316,7 +305,6 @@ GEN_ATOMIC_HELPER(xor_fetch)
 ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
                         ABI_TYPE xval EXTRA_ARGS)                   \
 {                                                                   \
-    ATOMIC_MMU_DECLS;                                               \
     XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                          \
     XDATA_TYPE ldo, ldn, old, new, val = xval;                      \
                                                                     \
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 7c4c763b88..d048fc82c9 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1748,7 +1748,6 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 #define EXTRA_ARGS , TCGMemOpIdx oi, uintptr_t retaddr
 #define ATOMIC_NAME(X) \
     HELPER(glue(glue(glue(atomic_ ## X, SUFFIX), END), _mmu))
-#define ATOMIC_MMU_DECLS
 #define ATOMIC_MMU_LOOKUP  atomic_mmu_lookup(env, addr, oi, retaddr)
 #define ATOMIC_MMU_CLEANUP
 
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 71c4bf6477..c353e452ea 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -748,7 +748,6 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 }
 
 /* Macro to call the above, with local variables from the use context. */
-#define ATOMIC_MMU_DECLS do {} while (0)
 #define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, DATA_SIZE, GETPC())
 #define ATOMIC_MMU_CLEANUP do { clear_helper_retaddr(); } while (0)
 
-- 
2.17.1