From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kselftest@vger.kernel.org, peterx@redhat.com, pbonzini@redhat.com,
    corbet@lwn.net, maz@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev,
    catalin.marinas@arm.com, will@kernel.org, shuah@kernel.org,
    seanjc@google.com, drjones@redhat.com, dmatlack@google.com,
    bgardon@google.com, ricarkol@google.com, zhenyzha@redhat.com,
    shan.gavin@gmail.com
Subject: [PATCH v1 1/5] KVM: arm64: Enable ring-based dirty memory tracking
Date: Fri, 19 Aug 2022 08:55:57 +0800
Message-Id: <20220819005601.198436-2-gshan@redhat.com>
In-Reply-To: <20220819005601.198436-1-gshan@redhat.com>
References: <20220819005601.198436-1-gshan@redhat.com>

Ring-based dirty memory tracking has been available and enabled on x86
for a while. The feature is beneficial when the number of dirty pages is
small, as in a checkpointing system or live migration scenario. More
details can be found in commit fb04a1eddb1a ("KVM: X86: Implement
ring-based dirty memory tracking").

Enable ring-based dirty memory tracking on ARM64. Notably, no extra
reserved ring entries are needed on ARM64, because huge pages are always
split into base pages when page dirty tracking is enabled.
Signed-off-by: Gavin Shan
---
 Documentation/virt/kvm/api.rst    | 2 +-
 arch/arm64/include/uapi/asm/kvm.h | 1 +
 arch/arm64/kvm/Kconfig            | 1 +
 arch/arm64/kvm/arm.c              | 8 ++++++++
 4 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index abd7c32126ce..19fa1ac017ed 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8022,7 +8022,7 @@ regardless of what has actually been exposed through the CPUID leaf.
 8.29 KVM_CAP_DIRTY_LOG_RING
 ---------------------------
 
-:Architectures: x86
+:Architectures: x86, arm64
 :Parameters: args[0] - size of the dirty log ring
 
 KVM is capable of tracking dirty memory using ring buffers that are
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 3bb134355874..7e04b0b8d2b2 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -43,6 +43,7 @@
 #define __KVM_HAVE_VCPU_EVENTS
 
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
+#define KVM_DIRTY_LOG_PAGE_OFFSET 64
 
 #define KVM_REG_SIZE(id)						\
 	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 815cc118c675..0309b2d0f2da 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -32,6 +32,7 @@ menuconfig KVM
 	select KVM_VFIO
 	select HAVE_KVM_EVENTFD
 	select HAVE_KVM_IRQFD
+	select HAVE_KVM_DIRTY_RING
 	select HAVE_KVM_MSI
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_IRQ_ROUTING
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 986cee6fbc7f..3de6b9b39db7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -866,6 +866,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		if (!ret)
 			ret = 1;
 
+		/* Force vcpu exit if its dirty ring is soft-full */
+		if (unlikely(vcpu->kvm->dirty_ring_size &&
+			     kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) {
+			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
+			trace_kvm_dirty_ring_exit(vcpu);
+			ret = 0;
+		}
+
 		if (ret > 0)
 			ret = check_vcpu_requests(vcpu);
-- 
2.23.0
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v1 2/5] KVM: selftests: Use host page size to map ring buffer in dirty_log_test
Date: Fri, 19 Aug 2022 08:55:58 +0800
Message-Id: <20220819005601.198436-3-gshan@redhat.com>
In-Reply-To: <20220819005601.198436-1-gshan@redhat.com>
References: <20220819005601.198436-1-gshan@redhat.com>

In vcpu_map_dirty_ring(), the guest's page size is used to figure out
the offset in the virtual area. This works fine when the host and guest
have the same page size, but it fails when their page sizes differ, as
the error messages below indicate. The offset should instead be computed
from the host's page size; otherwise the host can't identify the virtual
area associated with the ring buffer.
  # ./dirty_log_test -M dirty-ring -m 7
  Setting log mode to: 'dirty-ring'
  Test iterations: 32, interval: 10 (ms)
  Testing guest mode: PA-bits:40, VA-bits:48, 64K pages
  guest physical test memory offset: 0xffbffc0000
  vcpu stops because vcpu is kicked out...
  Notifying vcpu to continue
  vcpu continues now.
  ==== Test Assertion Failure ====
    lib/kvm_util.c:1477: addr == MAP_FAILED
    pid=9000 tid=9000 errno=0 - Success
       1	0x0000000000405f5b: vcpu_map_dirty_ring at kvm_util.c:1477
       2	0x0000000000402ebb: dirty_ring_collect_dirty_pages at dirty_log_test.c:349
       3	0x00000000004029b3: log_mode_collect_dirty_pages at dirty_log_test.c:478
       4	 (inlined by) run_test at dirty_log_test.c:778
       5	 (inlined by) run_test at dirty_log_test.c:691
       6	0x0000000000403a57: for_each_guest_mode at guest_modes.c:105
       7	0x0000000000401ccf: main at dirty_log_test.c:921
       8	0x0000ffffb06ec79b: ?? ??:0
       9	0x0000ffffb06ec86b: ?? ??:0
      10	0x0000000000401def: _start at ??:?
    Dirty ring mapped private

Fix the issue by using the host's page size to map the ring buffer.
Signed-off-by: Gavin Shan
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9889fe0d8919..4e823cbe6b48 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1464,7 +1464,7 @@ struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu)
 
 void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu)
 {
-	uint32_t page_size = vcpu->vm->page_size;
+	uint32_t page_size = getpagesize();
 	uint32_t size = vcpu->vm->dirty_ring_size;
 
 	TEST_ASSERT(size > 0, "Should enable dirty ring first");
-- 
2.23.0
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v1 3/5] KVM: selftests: Dirty host pages in dirty_log_test
Date: Fri, 19 Aug 2022 08:55:59 +0800
Message-Id: <20220819005601.198436-4-gshan@redhat.com>
In-Reply-To: <20220819005601.198436-1-gshan@redhat.com>
References: <20220819005601.198436-1-gshan@redhat.com>
guest_code() is assumed to dirty 1024 host pages (TEST_PAGES_PER_LOOP),
rather than guest pages, in each iteration. The current implementation
misses the case where the host and guest page sizes differ. For example,
ARM64 can have a 64KB page size in the guest but a 4KB page size in the
host; in that case only (TEST_PAGES_PER_LOOP / 16) host pages, instead
of TEST_PAGES_PER_LOOP, are dirtied in every iteration.

Fix the issue by touching all sub-pages when the host and guest page
sizes differ.

Signed-off-by: Gavin Shan
---
 tools/testing/selftests/kvm/dirty_log_test.c | 50 +++++++++++++++-----
 1 file changed, 39 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 9c883c94d478..50b02186ce12 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -70,6 +70,7 @@
  * that may change.
  */
 static uint64_t host_page_size;
+static uint64_t host_num_pages;
 static uint64_t guest_page_size;
 static uint64_t guest_num_pages;
 static uint64_t random_array[TEST_PAGES_PER_LOOP];
@@ -94,8 +95,23 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
  */
 static void guest_code(void)
 {
+	uint64_t num_pages, page_size, sub_page_size;
 	uint64_t addr;
-	int i;
+	int pages_per_loop, i, j;
+
+	/*
+	 * The page sizes on host and VM could be different. We need
+	 * to perform writing on all sub-pages.
+	 */
+	if (host_page_size >= guest_page_size) {
+		num_pages = host_num_pages;
+		page_size = host_page_size;
+		sub_page_size = host_page_size;
+	} else {
+		num_pages = guest_num_pages;
+		page_size = guest_page_size;
+		sub_page_size = host_page_size;
+	}
 
 	/*
 	 * On s390x, all pages of a 1M segment are initially marked as dirty
@@ -103,18 +119,29 @@ static void guest_code(void)
 	 * To compensate this specialty in this test, we need to touch all
 	 * pages during the first iteration.
 	 */
-	for (i = 0; i < guest_num_pages; i++) {
-		addr = guest_test_virt_mem + i * guest_page_size;
-		*(uint64_t *)addr = READ_ONCE(iteration);
+	for (i = 0; i < num_pages; i++) {
+		addr = guest_test_virt_mem + i * page_size;
+		addr = align_down(addr, page_size);
+
+		for (j = 0; j < page_size / sub_page_size; j++) {
+			*(uint64_t *)(addr + j * sub_page_size) =
+				READ_ONCE(iteration);
+		}
 	}
 
+	pages_per_loop = (TEST_PAGES_PER_LOOP * sub_page_size) / page_size;
+
 	while (true) {
-		for (i = 0; i < TEST_PAGES_PER_LOOP; i++) {
+		for (i = 0; i < pages_per_loop; i++) {
 			addr = guest_test_virt_mem;
-			addr += (READ_ONCE(random_array[i]) % guest_num_pages)
-				* guest_page_size;
-			addr = align_down(addr, host_page_size);
-			*(uint64_t *)addr = READ_ONCE(iteration);
+			addr += (READ_ONCE(random_array[i]) % num_pages)
+				* page_size;
+			addr = align_down(addr, page_size);
+
+			for (j = 0; j < page_size / sub_page_size; j++) {
+				*(uint64_t *)(addr + j * sub_page_size) =
+					READ_ONCE(iteration);
+			}
 		}
 
 		/* Tell the host that we need more random numbers */
@@ -713,14 +740,14 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 				 2ul << (DIRTY_MEM_BITS - PAGE_SHIFT_4K),
 				 guest_code);
 
 	guest_page_size = vm->page_size;
+	host_page_size = getpagesize();
+
 	/*
 	 * A little more than 1G of guest page sized pages. Cover the
 	 * case where the size is not aligned to 64 pages.
 	 */
 	guest_num_pages = (1ul << (DIRTY_MEM_BITS - vm->page_shift)) + 3;
 	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
-
-	host_page_size = getpagesize();
 	host_num_pages = vm_num_host_pages(mode, guest_num_pages);
 
 	if (!p->phys_offset) {
@@ -760,6 +787,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	sync_global_to_guest(vm, host_page_size);
 	sync_global_to_guest(vm, guest_page_size);
 	sync_global_to_guest(vm, guest_test_virt_mem);
+	sync_global_to_guest(vm, host_num_pages);
 	sync_global_to_guest(vm, guest_num_pages);
 
 	/* Start the iterations */
-- 
2.23.0
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v1 4/5] KVM: selftests: Clear dirty ring states between two modes in dirty_log_test
Date: Fri, 19 Aug 2022 08:56:00 +0800
Message-Id: <20220819005601.198436-5-gshan@redhat.com>
In-Reply-To: <20220819005601.198436-1-gshan@redhat.com>
References: <20220819005601.198436-1-gshan@redhat.com>

Two pieces of state need to be cleared before the next mode is executed;
otherwise the test fails, as the messages below indicate.

- The variable 'dirty_ring_vcpu_ring_full', shared by the main and vcpu
  threads, indicates whether the vcpu exited due to a full ring buffer.
  Its value can be carried over from the previous mode
  (VM_MODE_P40V48_4K) to the current one (VM_MODE_P40V48_64K) when
  VM_MODE_P40V48_16K isn't supported.

- The current ring buffer index needs to be reset before the next mode
  (VM_MODE_P40V48_64K) is executed. Otherwise, a stale value is carried
  over from the previous mode (VM_MODE_P40V48_4K).

  # ./dirty_log_test -M dirty-ring
  Setting log mode to: 'dirty-ring'
  Test iterations: 32, interval: 10 (ms)
  Testing guest mode: PA-bits:40, VA-bits:48, 4K pages
  guest physical test memory offset: 0xffbfffc000
    :
  Dirtied 995328 pages
  Total bits checked: dirty (1012434), clear (7114123), track_next (966700)
  Testing guest mode: PA-bits:40, VA-bits:48, 64K pages
  guest physical test memory offset: 0xffbffc0000
  vcpu stops because vcpu is kicked out...
  vcpu continues now.
  Notifying vcpu to continue
  Iteration 1 collected 0 pages
  vcpu stops because dirty ring is full...
  vcpu continues now.
  vcpu stops because dirty ring is full...
  vcpu continues now.
  vcpu stops because dirty ring is full...
  ==== Test Assertion Failure ====
    dirty_log_test.c:369: cleared == count
    pid=10541 tid=10541 errno=22 - Invalid argument
       1	0x0000000000403087: dirty_ring_collect_dirty_pages at dirty_log_test.c:369
       2	0x0000000000402a0b: log_mode_collect_dirty_pages at dirty_log_test.c:492
       3	 (inlined by) run_test at dirty_log_test.c:795
       4	 (inlined by) run_test at dirty_log_test.c:705
       5	0x0000000000403a37: for_each_guest_mode at guest_modes.c:100
       6	0x0000000000401ccf: main at dirty_log_test.c:938
       7	0x0000ffff9ecd279b: ?? ??:0
       8	0x0000ffff9ecd286b: ?? ??:0
       9	0x0000000000401def: _start at ??:?
    Reset dirty pages (0) mismatch with collected (35566)

Fix the issues by clearing 'dirty_ring_vcpu_ring_full' and the ring
buffer index before a new mode is executed.

Signed-off-by: Gavin Shan
---
 tools/testing/selftests/kvm/dirty_log_test.c | 27 ++++++++++++--------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 50b02186ce12..450e97d10de7 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -252,13 +252,15 @@ static void clear_log_create_vm_done(struct kvm_vm *vm)
 }
 
 static void dirty_log_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
-					  void *bitmap, uint32_t num_pages)
+					  void *bitmap, uint32_t num_pages,
+					  uint32_t *unused)
 {
 	kvm_vm_get_dirty_log(vcpu->vm, slot, bitmap);
 }
 
 static void clear_log_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
-					  void *bitmap, uint32_t num_pages)
+					  void *bitmap, uint32_t num_pages,
+					  uint32_t *unused)
 {
 	kvm_vm_get_dirty_log(vcpu->vm, slot, bitmap);
 	kvm_vm_clear_dirty_log(vcpu->vm, slot, bitmap, 0, num_pages);
@@ -354,10 +356,9 @@ static void dirty_ring_continue_vcpu(void)
 }
 
 static void dirty_ring_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
-					   void *bitmap, uint32_t num_pages)
+					   void *bitmap, uint32_t num_pages,
+					   uint32_t *ring_buf_idx)
 {
-	/* We only have one vcpu */
-	static uint32_t fetch_index = 0;
 	uint32_t count = 0, cleared;
 	bool continued_vcpu = false;
 
@@ -374,7 +375,8 @@ static void dirty_ring_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
 
 	/* Only have one vcpu */
 	count = dirty_ring_collect_one(vcpu_map_dirty_ring(vcpu),
-				       slot, bitmap, num_pages, &fetch_index);
+				       slot, bitmap, num_pages,
+				       ring_buf_idx);
 
 	cleared = kvm_vm_reset_dirty_ring(vcpu->vm);
 
@@ -431,7 +433,8 @@ struct log_mode {
 	void (*create_vm_done)(struct kvm_vm *vm);
 	/* Hook to collect the dirty pages into the bitmap provided
 	 */
 	void (*collect_dirty_pages) (struct kvm_vcpu *vcpu, int slot,
-				     void *bitmap, uint32_t num_pages);
+				     void *bitmap, uint32_t num_pages,
+				     uint32_t *ring_buf_idx);
 	/* Hook to call when after each vcpu run */
 	void (*after_vcpu_run)(struct kvm_vcpu *vcpu, int ret, int err);
 	void (*before_vcpu_join) (void);
@@ -496,13 +499,14 @@ static void log_mode_create_vm_done(struct kvm_vm *vm)
 }
 
 static void log_mode_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
-					 void *bitmap, uint32_t num_pages)
+					 void *bitmap, uint32_t num_pages,
+					 uint32_t *ring_buf_idx)
 {
 	struct log_mode *mode = &log_modes[host_log_mode];
 
 	TEST_ASSERT(mode->collect_dirty_pages != NULL,
 		    "collect_dirty_pages() is required for any log mode!");
-	mode->collect_dirty_pages(vcpu, slot, bitmap, num_pages);
+	mode->collect_dirty_pages(vcpu, slot, bitmap, num_pages, ring_buf_idx);
 }
 
 static void log_mode_after_vcpu_run(struct kvm_vcpu *vcpu, int ret, int err)
@@ -721,6 +725,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 	unsigned long *bmap;
+	uint32_t ring_buf_idx = 0;
 
 	if (!log_mode_supported()) {
 		print_skip("Log mode '%s' not supported",
@@ -797,6 +802,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	host_dirty_count = 0;
 	host_clear_count = 0;
 	host_track_next_count = 0;
+	WRITE_ONCE(dirty_ring_vcpu_ring_full, false);
 
 	pthread_create(&vcpu_thread, NULL, vcpu_worker, vcpu);
 
@@ -804,7 +810,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		/* Give the vcpu thread some time to dirty some pages */
 		usleep(p->interval * 1000);
 		log_mode_collect_dirty_pages(vcpu, TEST_MEM_SLOT_INDEX,
-					     bmap, host_num_pages);
+					     bmap, host_num_pages,
+					     &ring_buf_idx);
 
 		/*
 		 * See vcpu_sync_stop_requested definition for details on why
-- 
2.23.0
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v1 5/5] KVM: selftests: Automate choosing dirty ring size in dirty_log_test
Date: Fri, 19 Aug 2022 08:56:01 +0800
Message-Id: <20220819005601.198436-6-gshan@redhat.com>
In-Reply-To: <20220819005601.198436-1-gshan@redhat.com>
References: <20220819005601.198436-1-gshan@redhat.com>

In the dirty ring case, we rely on a vcpu exit due to the full dirty
ring state. On an ARM64 system, there are 4096 host pages when the host
page size is 64KB. In this case, the vcpu never exits due to the full
dirty ring state: it keeps dirtying the same set of pages, but the
dirty page information is never collected in the main thread. This
leads to an infinite loop, as the following log shows.

  # ./dirty_log_test -M dirty-ring -c 65536 -m 5
  Setting log mode to: 'dirty-ring'
  Test iterations: 32, interval: 10 (ms)
  Testing guest mode: PA-bits:40, VA-bits:48, 4K pages
  guest physical test memory offset: 0xffbffe0000
  vcpu stops because vcpu is kicked out...
  Notifying vcpu to continue
  vcpu continues now.
  Iteration 1 collected 576 pages

Fix the issue by automatically choosing the best dirty ring size, to
ensure a vcpu exit due to the full dirty ring state.
The '-c' option now provides a hint for the ring size, rather than its
exact value.

Signed-off-by: Gavin Shan
---
 tools/testing/selftests/kvm/dirty_log_test.c | 24 ++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 450e97d10de7..ad31b6e3fe6a 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -23,6 +23,9 @@
 #include "guest_modes.h"
 #include "processor.h"
 
+#define DIRTY_MEM_BITS 30 /* 1G */
+#define PAGE_SHIFT_4K  12
+
 /* The memory slot index to track dirty pages */
 #define TEST_MEM_SLOT_INDEX		1
 
@@ -298,6 +301,22 @@ static bool dirty_ring_supported(void)
 
 static void dirty_ring_create_vm_done(struct kvm_vm *vm)
 {
+	uint64_t pages;
+	uint32_t limit;
+
+	/*
+	 * We rely on VM_EXIT due to full dirty ring state. Adjust
+	 * the ring buffer size to ensure we're able to reach the
+	 * full dirty ring state.
+	 */
+	pages = (1ul << (DIRTY_MEM_BITS - vm->page_shift)) + 3;
+	pages = vm_adjust_num_guest_pages(vm->mode, pages);
+	pages = vm_num_host_pages(vm->mode, pages);
+
+	limit = 1 << (31 - __builtin_clz(pages));
+	test_dirty_ring_count = 1 << (31 - __builtin_clz(test_dirty_ring_count));
+	test_dirty_ring_count = min(limit, test_dirty_ring_count);
+
 	/*
 	 * Switch to dirty ring mode after VM creation but before any
 	 * of the vcpu creation.
@@ -710,9 +729,6 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, struct kvm_vcpu **vcpu,
 	return vm;
 }
 
-#define DIRTY_MEM_BITS 30 /* 1G */
-#define PAGE_SHIFT_4K 12
-
 struct test_params {
 	unsigned long iterations;
 	unsigned long interval;
@@ -856,7 +872,7 @@ static void help(char *name)
 	printf("usage: %s [-h] [-i iterations] [-I interval] "
 	       "[-p offset] [-m mode]\n", name);
 	puts("");
-	printf(" -c: specify dirty ring size, in number of entries\n");
+	printf(" -c: hint to dirty ring size, in number of entries\n");
 	printf("     (only useful for dirty-ring test; default: %"PRIu32")\n",
 	       TEST_DIRTY_RING_COUNT);
 	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
-- 
2.23.0