From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson,
 Michael Kelley, Siddharth Chandrasekaran, Yuan Yao,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 38/39] KVM: selftests: hyperv_svm_test: Introduce L2 TLB flush test
Date: Mon, 13 Jun 2022 15:39:21 +0200
Message-Id: <20220613133922.2875594-39-vkuznets@redhat.com>
In-Reply-To: <20220613133922.2875594-1-vkuznets@redhat.com>
References: <20220613133922.2875594-1-vkuznets@redhat.com>

Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush hypercalls
from L2 don't exit to L1 unless 'TlbLockCount' is set in the Partition
assist page.
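For reference, the field the test toggles is the first 32-bit value of
the partition assist page ('TlbLockCount' in the TLFS, mirrored by
struct hv_partition_assist_pg in the kernel's hyperv-tlfs.h): L0 handles
the flush hypercall issued by L2 itself and only reflects it to L1 as a
synthetic vmexit when that count is non-zero. A minimal sketch of that
decision follows (illustrative only; the type and helper names are made
up for the example and this is not KVM's actual code):

	#include <stdbool.h>
	#include <stdint.h>

	/* First 32 bits of the partition assist page: TlbLockCount. */
	struct partition_assist_pg {
		uint32_t tlb_lock_count;
	};

	/*
	 * A TLB flush hypercall issued by L2 is handled directly by L0;
	 * it is reflected to L1 (HV_SVM_EXITCODE_ENL with exit_info_1 ==
	 * HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH) only when L1 asked for it
	 * by setting a non-zero TlbLockCount.
	 */
	static bool need_trap_after_flush(const struct partition_assist_pg *pa)
	{
		return pa->tlb_lock_count != 0;
	}

This is why the test below first runs L2 with
'*(u32 *)(svm->partition_assist) = 0' and expects no synthetic exit, then
sets the field to 1 and expects HV_SVM_EXITCODE_ENL.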
Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 .../selftests/kvm/x86_64/hyperv_svm_test.c    | 54 +++++++++++++++++--
 1 file changed, 50 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
index c5cd9835dbd6..4f4d788e1b78 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
@@ -41,6 +41,9 @@ struct hv_enlightenments {
  */
 #define VMCB_HV_NESTED_ENLIGHTENMENTS (1U << 31)
 
+#define HV_SVM_EXITCODE_ENL			0xF0000000
+#define HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH	(1)
+
 void l2_guest_code(void)
 {
 	GUEST_SYNC(3);
@@ -56,11 +59,25 @@ void l2_guest_code(void)
 
 	GUEST_SYNC(5);
 
+	/* L2 TLB flush tests */
+	hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+			 HV_HYPERCALL_FAST_BIT, 0x0,
+			 HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
+			 HV_FLUSH_ALL_PROCESSORS);
+	rdmsr(MSR_FS_BASE);
+	hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+			 HV_HYPERCALL_FAST_BIT, 0x0,
+			 HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
+			 HV_FLUSH_ALL_PROCESSORS);
+	/* Make sure we're not issuing Hyper-V TLB flush call again */
+	__asm__ __volatile__ ("mov $0xdeadbeef, %rcx");
+
 	/* Done, exit to L1 and never come back. */
 	vmmcall();
 }
 
-static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
+static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
+						    vm_vaddr_t pgs_gpa)
 {
 	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
 	struct vmcb *vmcb = svm->vmcb;
@@ -69,13 +86,23 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 
 	GUEST_SYNC(1);
 
-	wrmsr(HV_X64_MSR_GUEST_OS_ID, (u64)0x8100 << 48);
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+	enable_vp_assist(svm->vp_assist_gpa, svm->vp_assist);
 
 	GUEST_ASSERT(svm->vmcb_gpa);
 	/* Prepare for L2 execution. */
 	generic_svm_setup(svm, l2_guest_code,
 			  &l2_guest_stack[L2_GUEST_STACK_SIZE]);
 
+	/* L2 TLB flush setup */
+	hve->partition_assist_page = svm->partition_assist_gpa;
+	hve->hv_enlightenments_control.nested_flush_hypercall = 1;
+	hve->hv_vm_id = 1;
+	hve->hv_vp_id = 1;
+	current_vp_assist->nested_control.features.directhypercall = 1;
+	*(u32 *)(svm->partition_assist) = 0;
+
 	GUEST_SYNC(2);
 	run_guest(vmcb, svm->vmcb_gpa);
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
@@ -110,6 +137,20 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_MSR);
 	vmcb->save.rip += 2; /* rdmsr */
 
+
+	/*
+	 * L2 TLB flush test. First VMCALL should be handled directly by L0,
+	 * no VMCALL exit expected.
+	 */
+	run_guest(vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_MSR);
+	vmcb->save.rip += 2; /* rdmsr */
+	/* Enable synthetic vmexit */
+	*(u32 *)(svm->partition_assist) = 1;
+	run_guest(vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT(vmcb->control.exit_code == HV_SVM_EXITCODE_ENL);
+	GUEST_ASSERT(vmcb->control.exit_info_1 == HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH);
+
 	run_guest(vmcb, svm->vmcb_gpa);
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
 	GUEST_SYNC(6);
@@ -120,7 +161,7 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 int main(int argc, char *argv[])
 {
 	vm_vaddr_t nested_gva = 0;
-
+	vm_vaddr_t hcall_page;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 	struct kvm_run *run;
@@ -134,7 +175,12 @@ int main(int argc, char *argv[])
 	vcpu_set_hv_cpuid(vcpu);
 	run = vcpu->run;
 	vcpu_alloc_svm(vm, &nested_gva);
-	vcpu_args_set(vcpu, 1, nested_gva);
+
+	hcall_page = vm_vaddr_alloc_pages(vm, 1);
+	memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
+
+	vcpu_args_set(vcpu, 2, nested_gva, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vcpu, HV_X64_MSR_VP_INDEX, vcpu->id);
 
 	for (stage = 1;; stage++) {
 		vcpu_run(vcpu);
-- 
2.35.3
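The main() stage loop that consumes the GUEST_SYNC()/GUEST_DONE() calls
above is unchanged by this patch, so the diff only shows its first lines.
Roughly, such a loop in the selftests ucall style looks like the sketch
below (names taken from the selftests API this series builds on:
get_ucall(), UCALL_SYNC, UCALL_ABORT, UCALL_DONE; treat it as an
approximation, not the file's exact code):

	struct ucall uc;

	for (stage = 1;; stage++) {
		vcpu_run(vcpu);

		switch (get_ucall(vcpu, &uc)) {
		case UCALL_SYNC:
			/* GUEST_SYNC(stage) from the guest: stay in lockstep. */
			TEST_ASSERT(uc.args[1] == stage,
				    "Unexpected stage: %ld (%d expected)",
				    uc.args[1], stage);
			break;
		case UCALL_ABORT:
			/* A GUEST_ASSERT() fired in L1 or L2. */
			TEST_FAIL("%s", (const char *)uc.args[0]);
			/* NOT REACHED */
		case UCALL_DONE:
			goto done;
		}
	}
done:
	kvm_vm_free(vm);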