From nobody Wed Apr 29 02:00:19 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B726C4321E for ; Wed, 25 May 2022 09:01:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236806AbiEYJBt (ORCPT ); Wed, 25 May 2022 05:01:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45422 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230274AbiEYJBp (ORCPT ); Wed, 25 May 2022 05:01:45 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 88720703DC for ; Wed, 25 May 2022 02:01:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1653469302; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=aIJTJIDCCKp+Pf3jyzyPqkfkf4mMXGMBGd7+AGgu8no=; b=RNWe+Qlww+uCvkZXXo2qB/Abb1vC2LjxmeZ25YqoTpKfYtplyLOjtEjS0YM2u51uoOqHpS DPalscggcWe2YktYj4Qm+S66cwinHH1/HWeOb1OV0V5y0pe0oaYvgqsliJ3fziJkEBjKHT ACXP0CynyUAtT95fO2qW/cX7vlL+ug8= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-260-t7inQzRHP1O9XX8yaX4M6w-1; Wed, 25 May 2022 05:01:39 -0400 X-MC-Unique: t7inQzRHP1O9XX8yaX4M6w-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CA65680A0B5; 
Wed, 25 May 2022 09:01:38 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.40.194.186]) by smtp.corp.redhat.com (Postfix) with ESMTP id 085B840CFD0A; Wed, 25 May 2022 09:01:36 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Maxim Levitsky , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v4 01/37] KVM: x86: Rename 'enable_direct_tlbflush' to 'enable_l2_tlb_flush' Date: Wed, 25 May 2022 11:00:57 +0200 Message-Id: <20220525090133.1264239-2-vkuznets@redhat.com> In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com> References: <20220525090133.1264239-1-vkuznets@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.84 on 10.11.54.1 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" To make terminology between Hyper-V-on-KVM and KVM-on-Hyper-V consistent, rename 'enable_direct_tlbflush' to 'enable_l2_tlb_flush'. The change eliminates the use of confusing 'direct' and adds the missing underscore. No functional change. 
Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm-x86-ops.h | 2 +-
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/svm/svm_onhyperv.c | 2 +-
 arch/x86/kvm/svm/svm_onhyperv.h | 6 +++---
 arch/x86/kvm/vmx/vmx.c | 6 +++---
 arch/x86/kvm/x86.c | 6 +++---
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 5f1f8778be90..438b3042b901 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -122,7 +122,7 @@ KVM_X86_OP_OPTIONAL(vm_move_enc_context_from)
 KVM_X86_OP(get_msr_feature)
 KVM_X86_OP(can_emulate_instruction)
 KVM_X86_OP(apic_init_signal_blocked)
-KVM_X86_OP_OPTIONAL(enable_direct_tlbflush)
+KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
 KVM_X86_OP_OPTIONAL(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9cdc5bbd721f..151880cfab9e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1523,7 +1523,7 @@ struct kvm_x86_ops {
 			void *insn, int insn_len);

 	bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
-	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
+	int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu);

 	void (*migrate_timers)(struct kvm_vcpu *vcpu);
 	void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c
index 8cdc62c74a96..69a7014d1cef 100644
--- a/arch/x86/kvm/svm/svm_onhyperv.c
+++ b/arch/x86/kvm/svm/svm_onhyperv.c
@@ -14,7 +14,7 @@
 #include "kvm_onhyperv.h"
 #include "svm_onhyperv.h"

-int svm_hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
+int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu)
 {
 	struct hv_enlightenments *hve;
 	struct hv_partition_assist_pg **p_hv_pa_pg =
diff --git a/arch/x86/kvm/svm/svm_onhyperv.h b/arch/x86/kvm/svm/svm_onhyperv.h
index e2fc59380465..d6ec4aeebedb 100644
--- a/arch/x86/kvm/svm/svm_onhyperv.h
+++ b/arch/x86/kvm/svm/svm_onhyperv.h
@@ -13,7 +13,7 @@

 static struct kvm_x86_ops svm_x86_ops;

-int svm_hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu);
+int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu);

 static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
 {
@@ -51,8 +51,8 @@ static inline void svm_hv_hardware_setup(void)

 			vp_ap->nested_control.features.directhypercall = 1;
 		}
-		svm_x86_ops.enable_direct_tlbflush =
-			svm_hv_enable_direct_tlbflush;
+		svm_x86_ops.enable_l2_tlb_flush =
+			svm_hv_enable_l2_tlb_flush;
 	}
 }

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8bbcf2071faf..4380b6930647 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -464,7 +464,7 @@ static unsigned long host_idt_base;
 static bool __read_mostly enlightened_vmcs = true;
 module_param(enlightened_vmcs, bool, 0444);

-static int hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
+static int hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu)
 {
 	struct hv_enlightened_vmcs *evmcs;
 	struct hv_partition_assist_pg **p_hv_pa_pg =
@@ -8307,8 +8307,8 @@ static int __init vmx_init(void)
 		}

 		if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
-			vmx_x86_ops.enable_direct_tlbflush
-				= hv_enable_direct_tlbflush;
+			vmx_x86_ops.enable_l2_tlb_flush
+				= hv_enable_l2_tlb_flush;

 	} else {
 		enlightened_vmcs = false;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 04812eaaf61b..891507b2eca5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4380,7 +4380,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 			kvm_x86_ops.nested_ops->get_state(NULL, NULL, 0) : 0;
 		break;
 	case KVM_CAP_HYPERV_DIRECT_TLBFLUSH:
-		r = kvm_x86_ops.enable_direct_tlbflush != NULL;
+		r = kvm_x86_ops.enable_l2_tlb_flush != NULL;
 		break;
 	case KVM_CAP_HYPERV_ENLIGHTENED_VMCS:
 		r = kvm_x86_ops.nested_ops->enable_evmcs != NULL;
@@ -5290,10 +5290,10 @@
static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 		}
 		return r;
 	case KVM_CAP_HYPERV_DIRECT_TLBFLUSH:
-		if (!kvm_x86_ops.enable_direct_tlbflush)
+		if (!kvm_x86_ops.enable_l2_tlb_flush)
 			return -ENOTTY;

-		return static_call(kvm_x86_enable_direct_tlbflush)(vcpu);
+		return static_call(kvm_x86_enable_l2_tlb_flush)(vcpu);

 	case KVM_CAP_HYPERV_ENFORCE_CPUID:
 		return kvm_hv_set_enforce_cpuid(vcpu, cap->args[0]);
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/37] KVM: x86: hyper-v: Resurrect dedicated KVM_REQ_HV_TLB_FLUSH flag
Date: Wed, 25 May 2022 11:00:58 +0200
Message-Id: <20220525090133.1264239-3-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

In preparation for implementing fine-grained Hyper-V TLB flush and L2 TLB
flush, resurrect the dedicated KVM_REQ_HV_TLB_FLUSH request bit. As
KVM_REQ_TLB_FLUSH_GUEST/KVM_REQ_TLB_FLUSH_CURRENT are stronger operations,
clear the KVM_REQ_HV_TLB_FLUSH request in
kvm_service_local_tlb_flush_requests() when either of them was also
requested.

No (real) functional change intended.
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/hyperv.c | 4 ++--
 arch/x86/kvm/x86.c | 10 ++++++++--
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 151880cfab9e..92509ee6ae1b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -105,6 +105,8 @@
 	KVM_ARCH_REQ_FLAGS(30, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_MMU_FREE_OBSOLETE_ROOTS \
 	KVM_ARCH_REQ_FLAGS(31, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_HV_TLB_FLUSH \
+	KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)

 #define CR0_RESERVED_BITS \
 	(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 46f9dfb60469..b402ad059eb9 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1876,11 +1876,11 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	 * analyze it here, flush TLB regardless of the specified address space.
 	 */
 	if (all_cpus) {
-		kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH_GUEST);
+		kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
 	} else {
 		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);

-		kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST, vcpu_mask);
+		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
 	}

ret_success:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 891507b2eca5..f98503431f8d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3363,11 +3363,17 @@ static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu)
  */
 void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
 {
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) {
 		kvm_vcpu_flush_tlb_current(vcpu);
+		kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+	}

-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
+		kvm_vcpu_flush_tlb_guest(vcpu);
+		kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+	} else if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) {
 		kvm_vcpu_flush_tlb_guest(vcpu);
+	}
 }
 EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);

-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 03/37] KVM: x86: hyper-v: Introduce TLB flush fifo
Date: Wed, 25 May 2022 11:00:59 +0200
Message-Id: <20220525090133.1264239-4-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

To allow flushing individual GVAs instead of always flushing the whole
VPID, a per-vCPU structure to pass the requests is needed. Use the standard
'kfifo' to queue two types of entries: an individual GVA (GFN + up to 4095
following GFNs in the lower 12 bits) and 'flush all'. The size of the fifo
is arbitrarily set to '16'.

Note, kvm_hv_flush_tlb() only queues 'flush all' entries for now and
kvm_hv_vcpu_flush_tlb() doesn't actually read the fifo; it just resets the
queue before doing a full TLB flush. The functional change is thus very
small, but the infrastructure is prepared to handle individual GVA flush
requests.

Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h | 20 +++++++++++++++
 arch/x86/kvm/hyperv.c | 45 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/hyperv.h | 16 ++++++++++++
 arch/x86/kvm/x86.c | 5 ++--
 arch/x86/kvm/x86.h | 1 +
 5 files changed, 85 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 92509ee6ae1b..31e87c5cbf1e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -597,6 +598,23 @@ struct kvm_vcpu_hv_synic {
 	bool dont_zero_synic_pages;
 };

+/* The maximum number of entries on the TLB flush fifo. */
+#define KVM_HV_TLB_FLUSH_FIFO_SIZE (16)
+/*
+ * Note: the following 'magic' entry is made up by KVM to avoid putting
+ * anything besides GVA on the TLB flush fifo. It is theoretically possible
+ * to observe a request to flush 4095 PFNs starting from 0xfffffffffffff000
+ * which will look identical. KVM's action to 'flush everything' instead of
+ * flushing these particular addresses is, however, fully legitimate as
+ * flushing more than requested is always OK.
+ */
+#define KVM_HV_TLB_FLUSHALL_ENTRY ((u64)-1)
+
+struct kvm_vcpu_hv_tlb_flush_fifo {
+	spinlock_t write_lock;
+	DECLARE_KFIFO(entries, u64, KVM_HV_TLB_FLUSH_FIFO_SIZE);
+};
+
 /* Hyper-V per vcpu emulation context */
 struct kvm_vcpu_hv {
 	struct kvm_vcpu *vcpu;
@@ -616,6 +634,8 @@ struct kvm_vcpu_hv {
 		u32 enlightenments_ebx; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EBX */
 		u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
 	} cpuid_cache;
+
+	struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo;
 };

 /* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index b402ad059eb9..c8b22bf67577 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include

 #include
@@ -954,6 +955,9 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)

 	hv_vcpu->vp_index = vcpu->vcpu_idx;

+	INIT_KFIFO(hv_vcpu->tlb_flush_fifo.entries);
+	spin_lock_init(&hv_vcpu->tlb_flush_fifo.write_lock);
+
 	return 0;
 }

@@ -1789,6 +1793,35 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
 			      var_cnt * sizeof(*sparse_banks));
 }

+static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
+
+	if (!hv_vcpu)
+		return;
+
+	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+
+	kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1, &tlb_flush_fifo->write_lock);
+}
+
+void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+	kvm_vcpu_flush_tlb_guest(vcpu);
+
+	if (!hv_vcpu)
+		return;
+
+	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+
+	kfifo_reset_out(&tlb_flush_fifo->entries);
+}
+
 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -1797,6 +1830,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
 	u64 valid_bank_mask;
 	u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
+	struct kvm_vcpu *v;
+	unsigned long i;
 	bool all_cpus;

 	/*
@@ -1876,10 +1911,20 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	 * analyze it here, flush TLB regardless of the specified address space.
 	 */
 	if (all_cpus) {
+		kvm_for_each_vcpu(i, v, kvm)
+			hv_tlb_flush_enqueue(v);
+
 		kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
 	} else {
 		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);

+		for_each_set_bit(i, vcpu_mask, KVM_MAX_VCPUS) {
+			v = kvm_get_vcpu(kvm, i);
+			if (!v)
+				continue;
+			hv_tlb_flush_enqueue(v);
+		}
+
 		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
 	}

diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index da2737f2a956..87d0a0152ad7 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -147,4 +147,20 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
 int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 		     struct kvm_cpuid_entry2 __user *entries);

+
+static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+	if (!hv_vcpu)
+		return;
+
+	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+
+	kfifo_reset_out(&tlb_flush_fifo->entries);
+}
+void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu);
+
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f98503431f8d..ac0ed0cbd499 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3330,7 +3330,7 @@ static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
 	static_call(kvm_x86_flush_tlb_all)(vcpu);
 }

-static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
+void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;

@@ -3370,7 +3370,8 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)

 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
 		kvm_vcpu_flush_tlb_guest(vcpu);
-		kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+		if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
+			kvm_hv_vcpu_empty_flush_tlb(vcpu);
 	} else if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) {
 		kvm_vcpu_flush_tlb_guest(vcpu);
 	}
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 588792f00334..2324f496c500 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -58,6 +58,7 @@ static inline unsigned int __shrink_ple_window(unsigned int val,

 #define MSR_IA32_CR_PAT_DEFAULT 0x0007040600070406ULL

+void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu);
 void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu);
 int kvm_check_nested_events(struct kvm_vcpu *vcpu);

-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 04/37] KVM: x86: hyper-v: Add helper to read hypercall data for array
Date: Wed, 25 May 2022 11:01:00 +0200
Message-Id: <20220525090133.1264239-5-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

From: Sean Christopherson

Move the guts of kvm_get_sparse_vp_set() to a helper so that the code for
reading a guest-provided array can be reused in the future, e.g. for
getting a list of virtual addresses whose TLB entries need to be flushed.

Opportunistically swap the order of the data and XMM adjustment so that the
XMM/gpa offsets are bundled together.

No functional change intended.

Signed-off-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/hyperv.c | 53 +++++++++++++++++++++++++++----------------
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index c8b22bf67577..762b0b699fdf 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1759,38 +1759,51 @@ struct kvm_hv_hcall {
 	sse128_t xmm[HV_HYPERCALL_MAX_XMM_REGISTERS];
 };

-static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
-				 int consumed_xmm_halves,
-				 u64 *sparse_banks, gpa_t offset)
-{
-	u16 var_cnt;
-	int i;
-
-	if (hc->var_cnt > 64)
-		return -EINVAL;
-
-	/* Ignore banks that cannot possibly contain a legal VP index. */
-	var_cnt = min_t(u16, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS);
+static int kvm_hv_get_hc_data(struct kvm *kvm, struct kvm_hv_hcall *hc,
+			      u16 orig_cnt, u16 cnt_cap, u64 *data,
+			      int consumed_xmm_halves, gpa_t offset)
+{
+	/*
+	 * Preserve the original count when ignoring entries via a "cap", KVM
+	 * still needs to validate the guest input (though the non-XMM path
+	 * punts on the checks).
+	 */
+	u16 cnt = min(orig_cnt, cnt_cap);
+	int i, j;

 	if (hc->fast) {
 		/*
 		 * Each XMM holds two sparse banks, but do not count halves that
 		 * have already been consumed for hypercall parameters.
 		 */
-		if (hc->var_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - consumed_xmm_halves)
+		if (orig_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - consumed_xmm_halves)
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
-		for (i = 0; i < var_cnt; i++) {
-			int j = i + consumed_xmm_halves;
+
+		for (i = 0; i < cnt; i++) {
+			j = i + consumed_xmm_halves;
 			if (j % 2)
-				sparse_banks[i] = sse128_hi(hc->xmm[j / 2]);
+				data[i] = sse128_hi(hc->xmm[j / 2]);
 			else
-				sparse_banks[i] = sse128_lo(hc->xmm[j / 2]);
+				data[i] = sse128_lo(hc->xmm[j / 2]);
 		}
 		return 0;
 	}

-	return kvm_read_guest(kvm, hc->ingpa + offset, sparse_banks,
-			      var_cnt * sizeof(*sparse_banks));
+	return kvm_read_guest(kvm, hc->ingpa + offset, data,
+			      cnt * sizeof(*data));
+}
+
+static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
+				 u64 *sparse_banks, int consumed_xmm_halves,
+				 gpa_t offset)
+{
+	if (hc->var_cnt > 64)
+		return -EINVAL;
+
+	/* Cap var_cnt to ignore banks that cannot contain a legal VP index. */
+	return kvm_hv_get_hc_data(kvm, hc, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS,
+				  sparse_banks, consumed_xmm_halves, offset);
 }

 static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
@@ -1899,7 +1912,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		if (!hc->var_cnt)
 			goto ret_success;

-		if (kvm_get_sparse_vp_set(kvm, hc, 2, sparse_banks,
+		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2,
 					  offsetof(struct hv_tlb_flush_ex,
 						   hv_vp_set.bank_contents)))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
@@ -2010,7 +2023,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		if (!hc->var_cnt)
 			goto ret_success;

-		if (kvm_get_sparse_vp_set(kvm, hc, 1, sparse_banks,
+		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 1,
 					  offsetof(struct hv_send_ipi_ex,
 						   vp_set.bank_contents)))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 05/37] KVM: x86: hyper-v: Handle HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls gently
Date: Wed, 25 May 2022 11:01:01 +0200
Message-Id: <20220525090133.1264239-6-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Currently, HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls are handled the
exact same way as HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE{,EX}: by flushing the
whole VPID, which is sub-optimal. Switch to handling these requests with
'flush_tlb_gva()' hooks instead. Use the newly introduced TLB flush fifo to
queue the requests.
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/hyperv.c | 102 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 90 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 762b0b699fdf..576749973727 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1806,32 +1806,84 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
 				    sparse_banks, consumed_xmm_halves, offset);
 }
 
-static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
+static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[],
+					int consumed_xmm_halves, gpa_t offset)
+{
+	return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt,
+				  entries, consumed_xmm_halves, offset);
+}
+
+static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
 {
 	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
+	unsigned long flags;
 
 	if (!hv_vcpu)
 		return;
 
 	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
 
-	kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1, &tlb_flush_fifo->write_lock);
+	spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
+
+	/*
+	 * All entries should fit on the fifo leaving one free for 'flush all'
+	 * entry in case another request comes in. In case there's not enough
+	 * space, just put 'flush all' entry there.
+	 */
+	if (count && entries && count < kfifo_avail(&tlb_flush_fifo->entries)) {
+		WARN_ON(kfifo_in(&tlb_flush_fifo->entries, entries, count) != count);
+		goto out_unlock;
+	}
+
+	/*
+	 * Note: full fifo always contains 'flush all' entry, no need to check
+	 * the return value.
+	 */
+	kfifo_in(&tlb_flush_fifo->entries, &entry, 1);
+
+out_unlock:
+	spin_unlock_irqrestore(&tlb_flush_fifo->write_lock, flags);
 }
 
 void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
+	int i, j, count;
+	gva_t gva;
 
-	kvm_vcpu_flush_tlb_guest(vcpu);
-
-	if (!hv_vcpu)
+	if (!tdp_enabled || !hv_vcpu) {
+		kvm_vcpu_flush_tlb_guest(vcpu);
 		return;
+	}
 
 	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
 
+	count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
+
+	for (i = 0; i < count; i++) {
+		if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
+			goto out_flush_all;
+
+		/*
+		 * Lower 12 bits of 'address' encode the number of additional
+		 * pages to flush.
+		 */
+		gva = entries[i] & PAGE_MASK;
+		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
+			static_call(kvm_x86_flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
+
+		++vcpu->stat.tlb_flush;
+	}
+	goto out_empty_ring;
+
+out_flush_all:
+	kvm_vcpu_flush_tlb_guest(vcpu);
+
+out_empty_ring:
 	kfifo_reset_out(&tlb_flush_fifo->entries);
 }
 
@@ -1841,11 +1893,21 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	struct hv_tlb_flush_ex flush_ex;
 	struct hv_tlb_flush flush;
 	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
+	/*
+	 * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
+	 * entries on the TLB flush fifo. The last entry, however, needs to be
+	 * always left free for 'flush all' entry which gets placed when
+	 * there is not enough space to put all the requested entries.
+	 */
+	u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_FIFO_SIZE - 1];
+	u64 *tlb_flush_entries;
 	u64 valid_bank_mask;
 	u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
 	struct kvm_vcpu *v;
 	unsigned long i;
 	bool all_cpus;
+	int consumed_xmm_halves = 0;
+	gpa_t data_offset;
 
 	/*
	 * The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. the
@@ -1861,10 +1923,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 			flush.address_space = hc->ingpa;
 			flush.flags = hc->outgpa;
 			flush.processor_mask = sse128_lo(hc->xmm[0]);
+			consumed_xmm_halves = 1;
 		} else {
 			if (unlikely(kvm_read_guest(kvm, hc->ingpa,
 						    &flush, sizeof(flush))))
 				return HV_STATUS_INVALID_HYPERCALL_INPUT;
+			data_offset = sizeof(flush);
 		}
 
 		trace_kvm_hv_flush_tlb(flush.processor_mask,
@@ -1888,10 +1952,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 			flush_ex.flags = hc->outgpa;
 			memcpy(&flush_ex.hv_vp_set, &hc->xmm[0],
 			       sizeof(hc->xmm[0]));
+			consumed_xmm_halves = 2;
 		} else {
 			if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
 						    sizeof(flush_ex))))
 				return HV_STATUS_INVALID_HYPERCALL_INPUT;
+			data_offset = sizeof(flush_ex);
 		}
 
 		trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
@@ -1907,25 +1973,37 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
 
 		if (all_cpus)
-			goto do_flush;
+			goto read_flush_entries;
 
 		if (!hc->var_cnt)
 			goto ret_success;
 
-		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2,
-					  offsetof(struct hv_tlb_flush_ex,
-						   hv_vp_set.bank_contents)))
+		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, consumed_xmm_halves,
+					  data_offset))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
+		data_offset += hc->var_cnt * sizeof(sparse_banks[0]);
+		consumed_xmm_halves += hc->var_cnt;
 	}
 
+read_flush_entries:
+	if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
+	    hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX ||
+	    hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) {
+		tlb_flush_entries = NULL;
+	} else {
+		if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries,
+						 consumed_xmm_halves, data_offset))
+			return HV_STATUS_INVALID_HYPERCALL_INPUT;
+		tlb_flush_entries = __tlb_flush_entries;
	}
 
-do_flush:
 	/*
	 * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
	 * analyze it here, flush TLB regardless of the specified address space.
	 */
	if (all_cpus) {
		kvm_for_each_vcpu(i, v, kvm)
-			hv_tlb_flush_enqueue(v);
+			hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
 
		kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
	} else {
@@ -1935,7 +2013,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 			v = kvm_get_vcpu(kvm, i);
 			if (!v)
 				continue;
-			hv_tlb_flush_enqueue(v);
+			hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
 		}
 
 		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
-- 
2.35.3
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 06/37] KVM: x86: hyper-v: Expose support for extended gva ranges for flush hypercalls
Date: Wed, 25 May 2022 11:01:02 +0200
Message-Id: <20220525090133.1264239-7-vkuznets@redhat.com>

The extended GVA ranges support bit seems to indicate whether the lower 12 bits of a GVA can be used to specify up to 4095 additional consecutive GVAs to flush; this is only loosely described in the TLFS. Previously, KVM handled HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} requests by flushing the whole VPID, so extended GVA ranges were technically already supported. Now that such requests are handled more gently, advertising support for extended ranges makes sense as it reduces the size of TLB flush requests.

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/hyperv-tlfs.h | 2 ++
 arch/x86/kvm/hyperv.c              | 1 +
 2 files changed, 3 insertions(+)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 0a9407dc0859..5225a85c08c3 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -61,6 +61,8 @@
 #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE		BIT(10)
 /* Support for debug MSRs available */
 #define HV_FEATURE_DEBUG_MSRS_AVAILABLE			BIT(11)
+/* Support for extended gva ranges for flush hypercalls available */
+#define HV_FEATURE_EXT_GVA_RANGES_FLUSH			BIT(14)
 /*
  * Support for returning hypercall output block via XMM
  * registers is available
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 576749973727..f491e26ce162 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2644,6 +2644,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 			ent->ebx |= HV_DEBUGGING;
 			ent->edx |= HV_X64_GUEST_DEBUGGING_AVAILABLE;
 			ent->edx |= HV_FEATURE_DEBUG_MSRS_AVAILABLE;
+			ent->edx |= HV_FEATURE_EXT_GVA_RANGES_FLUSH;
 
 			/*
			 * Direct Synthetic timers only make sense with in-kernel
-- 
2.35.3
S238454AbiEYJB6 (ORCPT ); Wed, 25 May 2022 05:01:58 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id B81518217F for ; Wed, 25 May 2022 02:01:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1653469315; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=L9SzRD6jCyKrifxfxgvXKWwka0PVt5BLcyEV0CPXePE=; b=Hw0K8DJr0zzhkMeeEkdI4CY1uP0NcRAkNx90j7r0rf3nq5MLEWMKuy37uDAl/xdxmZzy/v dhWyxO1LWpj+63hkTr8Rog43ixgSLv81rg4eZDM9Ol0F3Au7EuOGJyZx544m5XgVD5iD+D CD1S1D4FcyK/t01UwIkHddzE14WzA5U= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-224-6XRhHbN0O3-uADAR2SMHSg-1; Wed, 25 May 2022 05:01:52 -0400 X-MC-Unique: 6XRhHbN0O3-uADAR2SMHSg-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 093B08001EA; Wed, 25 May 2022 09:01:52 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.40.194.186]) by smtp.corp.redhat.com (Postfix) with ESMTP id EA0C040CFD0A; Wed, 25 May 2022 09:01:49 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Maxim Levitsky , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v4 07/37] KVM: x86: Prepare kvm_hv_flush_tlb() to handle L2's GPAs Date: Wed, 25 May 2022 11:01:03 +0200 Message-Id: <20220525090133.1264239-8-vkuznets@redhat.com> 
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com> References: <20220525090133.1264239-1-vkuznets@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.84 on 10.11.54.1 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" To handle L2 TLB flush requests, KVM needs to translate the specified L2 GPA to L1 GPA to read hypercall arguments from there. No functional change as KVM doesn't handle VMCALL/VMMCALL from L2 yet. Reviewed-by: Maxim Levitsky Signed-off-by: Vitaly Kuznetsov --- arch/x86/kvm/hyperv.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index f491e26ce162..4973a8802e7f 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -23,6 +23,7 @@ #include "ioapic.h" #include "cpuid.h" #include "hyperv.h" +#include "mmu.h" #include "xen.h" =20 #include @@ -1917,6 +1918,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, s= truct kvm_hv_hcall *hc) */ BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > 64); =20 + if (!hc->fast && is_guest_mode(vcpu)) { + hc->ingpa =3D translate_nested_gpa(vcpu, hc->ingpa, 0, NULL); + if (unlikely(hc->ingpa =3D=3D UNMAPPED_GVA)) + return HV_STATUS_INVALID_HYPERCALL_INPUT; + } + if (hc->code =3D=3D HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST || hc->code =3D=3D HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE) { if (hc->fast) { --=20 2.35.3 From nobody Wed Apr 29 02:00:19 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D59F6C433F5 for ; Wed, 25 May 2022 09:02:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239079AbiEYJCg (ORCPT ); Wed, 25 May 2022 05:02:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46052 "EHLO lindbergh.monkeyblade.net" 
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 08/37] x86/hyperv: Introduce HV_MAX_SPARSE_VCPU_BANKS/HV_VCPUS_PER_SPARSE_BANK constants
Date: Wed, 25 May 2022 11:01:04 +0200
Message-Id: <20220525090133.1264239-9-vkuznets@redhat.com>

It may not be clear where the magic '64' value used in __cpumask_to_vpset() comes from. Moreover, '64' means both the maximum sparse bank number and the number of vCPUs per bank. Add defines to make things clear. These defines are also going to be used by KVM. No functional change.

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 include/asm-generic/hyperv-tlfs.h | 5 +++++
 include/asm-generic/mshyperv.h    | 11 ++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h
index fdce7a4cfc6f..020ca9bdbb79 100644
--- a/include/asm-generic/hyperv-tlfs.h
+++ b/include/asm-generic/hyperv-tlfs.h
@@ -399,6 +399,11 @@ struct hv_vpset {
 	u64 bank_contents[];
 } __packed;
 
+/* The maximum number of sparse vCPU banks which can be encoded by 'struct hv_vpset' */
+#define HV_MAX_SPARSE_VCPU_BANKS (64)
+/* The number of vCPUs in one sparse bank */
+#define HV_VCPUS_PER_SPARSE_BANK (64)
+
 /* HvCallSendSyntheticClusterIpi hypercall */
 struct hv_send_ipi {
 	u32 vector;
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index c08758b6b364..0abe91df1ef6 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -214,9 +214,10 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset,
 {
 	int cpu, vcpu, vcpu_bank, vcpu_offset, nr_bank = 1;
 	int this_cpu = smp_processor_id();
+	int max_vcpu_bank = hv_max_vp_index / HV_VCPUS_PER_SPARSE_BANK;
 
-	/* valid_bank_mask can represent up to 64 banks */
-	if (hv_max_vp_index / 64 >= 64)
+	/* vpset.valid_bank_mask can represent up to HV_MAX_SPARSE_VCPU_BANKS banks */
+	if (max_vcpu_bank >= HV_MAX_SPARSE_VCPU_BANKS)
 		return 0;
 
 	/*
@@ -224,7 +225,7 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset,
 	 * structs are not cleared between calls, we risk flushing unneeded
 	 * vCPUs otherwise.
	 */
-	for (vcpu_bank = 0; vcpu_bank <= hv_max_vp_index / 64; vcpu_bank++)
+	for (vcpu_bank = 0; vcpu_bank <= max_vcpu_bank; vcpu_bank++)
 		vpset->bank_contents[vcpu_bank] = 0;
 
 	/*
@@ -236,8 +237,8 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset,
 		vcpu = hv_cpu_number_to_vp_number(cpu);
 		if (vcpu == VP_INVAL)
 			return -1;
-		vcpu_bank = vcpu / 64;
-		vcpu_offset = vcpu % 64;
+		vcpu_bank = vcpu / HV_VCPUS_PER_SPARSE_BANK;
+		vcpu_offset = vcpu % HV_VCPUS_PER_SPARSE_BANK;
 		__set_bit(vcpu_offset, (unsigned long *)
 			  &vpset->bank_contents[vcpu_bank]);
 		if (vcpu_bank >= nr_bank)
-- 
2.35.3
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 09/37] KVM: x86: hyper-v: Use HV_MAX_SPARSE_VCPU_BANKS/HV_VCPUS_PER_SPARSE_BANK instead of raw '64'
Date: Wed, 25 May 2022 11:01:05 +0200
Message-Id: <20220525090133.1264239-10-vkuznets@redhat.com>

It may not be clear where the '64' limit for the maximum sparse bank number comes from; use the HV_MAX_SPARSE_VCPU_BANKS define instead. Use HV_VCPUS_PER_SPARSE_BANK in the definition of KVM_HV_MAX_SPARSE_VCPU_SET_BITS. Opportunistically adjust the comment around BUILD_BUG_ON(). No functional change.

Reviewed-by: Maxim Levitsky
Suggested-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/hyperv.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 4973a8802e7f..287eaca4db3c 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -43,7 +43,7 @@
 /* "Hv#1" signature */
 #define HYPERV_CPUID_SIGNATURE_EAX 0x31237648
 
-#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, 64)
+#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, HV_VCPUS_PER_SPARSE_BANK)
 
 static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer,
				bool vcpu_kick);
@@ -1799,7 +1799,7 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
				 u64 *sparse_banks, int consumed_xmm_halves,
				 gpa_t offset)
 {
-	if (hc->var_cnt > 64)
+	if (hc->var_cnt > HV_MAX_SPARSE_VCPU_BANKS)
 		return -EINVAL;
 
 	/* Cap var_cnt to ignore banks that cannot contain a legal VP index. */
@@ -1911,12 +1911,11 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	gpa_t data_offset;
 
 	/*
-	 * The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. the
-	 * valid mask is a u64. Fail the build if KVM's max allowed number of
-	 * vCPUs (>4096) would exceed this limit, KVM will additional changes
-	 * for Hyper-V support to avoid setting the guest up to fail.
+	 * The Hyper-V TLFS doesn't allow more than HV_MAX_SPARSE_VCPU_BANKS
+	 * sparse banks. Fail the build if KVM's max allowed number of
+	 * vCPUs (>4096) exceeds this limit.
	 */
-	BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > 64);
+	BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > HV_MAX_SPARSE_VCPU_BANKS);
 
 	if (!hc->fast && is_guest_mode(vcpu)) {
 		hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa, 0, NULL);
-- 
2.35.3
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 10/37] KVM: x86: hyper-v: Don't use sparse_set_to_vcpu_mask() in kvm_hv_send_ipi()
Date: Wed, 25 May 2022 11:01:06 +0200
Message-Id: <20220525090133.1264239-11-vkuznets@redhat.com>

Get rid of the on-stack allocation of vcpu_mask and optimize kvm_hv_send_ipi() for a smaller number of vCPUs in the request. When Hyper-V TLB flush is in use, HvSendSyntheticClusterIpi{,Ex} calls are not commonly used to send IPIs to a large number of vCPUs (and are rarely used in general). Introduce hv_is_vp_in_sparse_set() to directly check whether the specified VP_ID is present in the sparse vCPU set.
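The membership test described above can be sketched in plain C as follows (a hypothetical userspace mock of the logic — the kernel version walks the valid banks with test_bit()/for_each_set_bit() rather than using a popcount builtin):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VCPUS_PER_SPARSE_BANK 64

/*
 * A sparse vCPU set ('struct hv_vpset' in the TLFS) is a 64-bit
 * valid_bank_mask with one bit per 64-vCPU bank, plus a packed array
 * holding the contents of only the valid banks, in ascending bank order.
 */
static bool vp_in_sparse_set(uint32_t vp_id, uint64_t valid_bank_mask,
			     const uint64_t sparse_banks[])
{
	uint32_t bank = vp_id / VCPUS_PER_SPARSE_BANK;

	if (bank >= 64)
		return false;

	uint64_t bank_bit = 1ULL << bank;

	if (!(valid_bank_mask & bank_bit))
		return false;

	/* Index into the packed array = number of valid banks below ours. */
	int sbank = __builtin_popcountll(valid_bank_mask & (bank_bit - 1));

	return (sparse_banks[sbank] >> (vp_id % VCPUS_PER_SPARSE_BANK)) & 1;
}
```

Because banks absent from valid_bank_mask are simply not stored, checking membership this way avoids expanding the set into a KVM_MAX_VCPUS-sized bitmap on the stack, which is the point of the patch.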
Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/hyperv.c | 37 ++++++++++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 287eaca4db3c..dbefb492aa35 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1747,6 +1747,25 @@ static void sparse_set_to_vcpu_mask(struct kvm *kvm, u64 *sparse_banks,
 	}
 }
 
+static bool hv_is_vp_in_sparse_set(u32 vp_id, u64 valid_bank_mask, u64 sparse_banks[])
+{
+	int bank, sbank = 0;
+
+	if (!test_bit(vp_id / HV_VCPUS_PER_SPARSE_BANK,
+		      (unsigned long *)&valid_bank_mask))
+		return false;
+
+	for_each_set_bit(bank, (unsigned long *)&valid_bank_mask,
+			 KVM_HV_MAX_SPARSE_VCPU_SET_BITS) {
+		if (bank == vp_id / HV_VCPUS_PER_SPARSE_BANK)
+			break;
+		sbank++;
+	}
+
+	return test_bit(vp_id % HV_VCPUS_PER_SPARSE_BANK,
+			(unsigned long *)&sparse_banks[sbank]);
+}
+
 struct kvm_hv_hcall {
 	u64 param;
 	u64 ingpa;
@@ -2031,8 +2050,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		((u64)hc->rep_cnt << HV_HYPERCALL_REP_COMP_OFFSET);
 }
 
-static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
-				 unsigned long *vcpu_bitmap)
+static void kvm_hv_send_ipi_to_many(struct kvm *kvm, u32 vector,
+				    u64 *sparse_banks, u64 valid_bank_mask)
 {
 	struct kvm_lapic_irq irq = {
 		.delivery_mode = APIC_DM_FIXED,
@@ -2042,7 +2061,10 @@ static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
 	unsigned long i;
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (vcpu_bitmap && !test_bit(i, vcpu_bitmap))
+		if (sparse_banks &&
+		    !hv_is_vp_in_sparse_set(kvm_hv_get_vpindex(vcpu),
+					    valid_bank_mask,
+					    sparse_banks))
 			continue;
 
 		/* We fail only when APIC is disabled */
@@ -2055,7 +2077,6 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	struct kvm *kvm = vcpu->kvm;
 	struct hv_send_ipi_ex send_ipi_ex;
 	struct hv_send_ipi send_ipi;
-	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
 	unsigned long valid_bank_mask;
 	u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
 	u32 vector;
@@ -2117,13 +2138,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR))
 		return HV_STATUS_INVALID_HYPERCALL_INPUT;
 
-	if (all_cpus) {
-		kvm_send_ipi_to_many(kvm, vector, NULL);
-	} else {
-		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
-
-		kvm_send_ipi_to_many(kvm, vector, vcpu_mask);
-	}
+	kvm_hv_send_ipi_to_many(kvm, vector, all_cpus ? NULL : sparse_banks, valid_bank_mask);
 
 ret_success:
 	return HV_STATUS_SUCCESS;
-- 
2.35.3
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson,
    Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 11/37] KVM: x86: hyper-v: Create a separate fifo for L2 TLB flush
Date: Wed, 25 May 2022 11:01:07 +0200
Message-Id: <20220525090133.1264239-12-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

To handle L2 TLB flush requests, KVM needs to use a separate fifo from
regular (L1) Hyper-V TLB flush requests: e.g. when a request to flush
something in L2 is made, the target vCPU can transition from L2 to L1,
receive a request to flush a GVA for L1 and then try to enter L2 back.
The first request needs to be processed at this point. Similarly,
requests to flush GVAs in L1 must wait until L2 exits to L1.

No functional change as KVM doesn't handle L2 TLB flush requests from
L2 yet.

Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h |  8 +++++++-
 arch/x86/kvm/hyperv.c           | 11 +++++++----
 arch/x86/kvm/hyperv.h           | 17 ++++++++++++++---
 3 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 31e87c5cbf1e..e497bebe229f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -610,6 +610,12 @@ struct kvm_vcpu_hv_synic {
  */
 #define KVM_HV_TLB_FLUSHALL_ENTRY  ((u64)-1)
 
+enum hv_tlb_flush_fifos {
+	HV_L1_TLB_FLUSH_FIFO,
+	HV_L2_TLB_FLUSH_FIFO,
+	HV_NR_TLB_FLUSH_FIFOS,
+};
+
 struct kvm_vcpu_hv_tlb_flush_fifo {
 	spinlock_t write_lock;
 	DECLARE_KFIFO(entries, u64, KVM_HV_TLB_FLUSH_FIFO_SIZE);
@@ -635,7 +641,7 @@ struct kvm_vcpu_hv {
 		u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
 	} cpuid_cache;
 
-	struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo;
+	struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
 };
 
 /* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index dbefb492aa35..32bd77c65543 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -956,8 +956,10 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
 
 	hv_vcpu->vp_index = vcpu->vcpu_idx;
 
-	INIT_KFIFO(hv_vcpu->tlb_flush_fifo.entries);
-	spin_lock_init(&hv_vcpu->tlb_flush_fifo.write_lock);
+	for (i = 0; i < HV_NR_TLB_FLUSH_FIFOS; i++) {
+		INIT_KFIFO(hv_vcpu->tlb_flush_fifo[i].entries);
+		spin_lock_init(&hv_vcpu->tlb_flush_fifo[i].write_lock);
+	}
 
 	return 0;
 }
@@ -1843,7 +1845,8 @@ static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
 	if (!hv_vcpu)
 		return;
 
-	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+	/* kvm_hv_flush_tlb() is not ready to handle requests for L2s yet */
+	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo[HV_L1_TLB_FLUSH_FIFO];
 
 	spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
 
@@ -1880,7 +1883,7 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
 		return;
 	}
 
-	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+	tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
 
 	count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
 
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 87d0a0152ad7..aaced3768954 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -22,6 +22,7 @@
 #define __ARCH_X86_KVM_HYPERV_H__
 
 #include <linux/kvm_host.h>
+#include "x86.h"
 
 /*
  * The #defines related to the synthetic debugger are required by KDNet, but
@@ -147,16 +148,26 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
 int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 		     struct kvm_cpuid_entry2 __user *entries);
 
+static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	int i = !is_guest_mode(vcpu) ? HV_L1_TLB_FLUSH_FIFO :
+				       HV_L2_TLB_FLUSH_FIFO;
+
+	/* KVM does not handle L2 TLB flush requests yet */
+	WARN_ON_ONCE(i != HV_L1_TLB_FLUSH_FIFO);
+
+	return &hv_vcpu->tlb_flush_fifo[i];
+}
 
 static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
-	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 
-	if (!hv_vcpu)
+	if (!to_hv_vcpu(vcpu))
 		return;
 
-	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
+	tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
 
 	kfifo_reset_out(&tlb_flush_fifo->entries);
 }
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson,
    Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 12/37] KVM: x86: hyper-v: Use preallocated buffer in 'struct kvm_vcpu_hv' instead of on-stack 'sparse_banks'
Date: Wed, 25 May 2022 11:01:08 +0200
Message-Id: <20220525090133.1264239-13-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

To make kvm_hv_flush_tlb() ready to handle L2 TLB flush requests, KVM
needs to allow for all 64 sparse vCPU banks regardless of KVM_MAX_VCPUS
as L1 may use vCPU overcommit for L2. To avoid growing on-stack
allocation, make 'sparse_banks' part of per-vCPU 'struct kvm_vcpu_hv'
which is allocated dynamically.
Note: sparse_set_to_vcpu_mask() can't currently be used to handle L2
requests as KVM does not keep L2 VM_ID -> L2 VCPU_ID -> L1 vCPU
mappings, i.e. its vp_bitmap array is still bounded by the number of
L1 vCPUs and so can remain an on-stack allocation.

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h | 3 +++
 arch/x86/kvm/hyperv.c           | 6 ++++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e497bebe229f..7dc4ff202512 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -642,6 +642,9 @@ struct kvm_vcpu_hv {
 	} cpuid_cache;
 
 	struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
+
+	/* Preallocated buffer for handling hypercalls passing sparse vCPU set */
+	u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS];
 };
 
 /* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 32bd77c65543..7c68b355253f 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1912,6 +1912,8 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
 
 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 {
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	u64 *sparse_banks = hv_vcpu->sparse_banks;
 	struct kvm *kvm = vcpu->kvm;
 	struct hv_tlb_flush_ex flush_ex;
 	struct hv_tlb_flush flush;
@@ -1925,7 +1927,6 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_FIFO_SIZE - 1];
 	u64 *tlb_flush_entries;
 	u64 valid_bank_mask;
-	u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
 	struct kvm_vcpu *v;
 	unsigned long i;
 	bool all_cpus;
@@ -2077,11 +2078,12 @@ static void kvm_hv_send_ipi_to_many(struct kvm *kvm, u32 vector,
 
 static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 {
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	u64 *sparse_banks = hv_vcpu->sparse_banks;
 	struct kvm *kvm = vcpu->kvm;
 	struct hv_send_ipi_ex send_ipi_ex;
 	struct hv_send_ipi send_ipi;
 	unsigned long valid_bank_mask;
-	u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
 	u32 vector;
 	bool all_cpus;
 
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson,
    Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 13/37] KVM: nVMX: Keep track of hv_vm_id/hv_vp_id when eVMCS is in use
Date: Wed, 25 May 2022 11:01:09 +0200
Message-Id: <20220525090133.1264239-14-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

To handle L2 TLB flush requests, KVM needs to keep track of L2's VM_ID/
VP_IDs which are set by L1 hypervisor. 'Partition assist page' address is
also needed to handle post-flush exit to L1 upon request.
Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h |  6 ++++++
 arch/x86/kvm/vmx/nested.c       | 15 +++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7dc4ff202512..8bb224dac57f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -645,6 +645,12 @@ struct kvm_vcpu_hv {
 
 	/* Preallocated buffer for handling hypercalls passing sparse vCPU set */
 	u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS];
+
+	struct {
+		u64 pa_page_gpa;
+		u64 vm_id;
+		u32 vp_id;
+	} nested;
 };
 
 /* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index a6688663da4d..ee88921c6156 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -225,6 +225,7 @@ static void vmx_disable_shadow_vmcs(struct vcpu_vmx *vmx)
 
 static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 {
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
 	if (evmptr_is_valid(vmx->nested.hv_evmcs_vmptr)) {
@@ -233,6 +234,12 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	}
 
 	vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
+
+	if (hv_vcpu) {
+		hv_vcpu->nested.pa_page_gpa = INVALID_GPA;
+		hv_vcpu->nested.vm_id = 0;
+		hv_vcpu->nested.vp_id = 0;
+	}
 }
 
 static void vmx_sync_vmcs_host_state(struct vcpu_vmx *vmx,
@@ -1591,11 +1598,19 @@ static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx, u32 hv_clean_fields
 {
 	struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
 	struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(&vmx->vcpu);
 
 	/* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
 	vmcs12->tpr_threshold = evmcs->tpr_threshold;
 	vmcs12->guest_rip = evmcs->guest_rip;
 
+	if (unlikely(!(hv_clean_fields &
+		       HV_VMX_ENLIGHTENED_CLEAN_FIELD_ENLIGHTENMENTSCONTROL))) {
+		hv_vcpu->nested.pa_page_gpa = evmcs->partition_assist_page;
+		hv_vcpu->nested.vm_id = evmcs->hv_vm_id;
+		hv_vcpu->nested.vp_id = evmcs->hv_vp_id;
+	}
+
 	if (unlikely(!(hv_clean_fields &
 		       HV_VMX_ENLIGHTENED_CLEAN_FIELD_GUEST_BASIC))) {
 		vmcs12->guest_rsp = evmcs->guest_rsp;
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson,
    Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 14/37] KVM: nSVM: Keep track of Hyper-V hv_vm_id/hv_vp_id
Date: Wed, 25 May 2022 11:01:10 +0200
Message-Id: <20220525090133.1264239-15-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Similar to nVMX, KVM needs to know L2's VM_ID/VP_ID and Partition
assist page address to handle L2 TLB flush requests.
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/svm/hyperv.h | 16 ++++++++++++++++
 arch/x86/kvm/svm/nested.c |  2 ++
 2 files changed, 18 insertions(+)

diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index 7d6d97968fb9..8cf702fed7e5 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -9,6 +9,7 @@
 #include <asm/mshyperv.h>
 
 #include "../hyperv.h"
+#include "svm.h"
 
 /*
  * Hyper-V uses the software reserved 32 bytes in VMCB
@@ -32,4 +33,19 @@ struct hv_enlightenments {
  */
 #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
 
+static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct hv_enlightenments *hve =
+		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+	if (!hv_vcpu)
+		return;
+
+	hv_vcpu->nested.pa_page_gpa = hve->partition_assist_page;
+	hv_vcpu->nested.vm_id = hve->hv_vm_id;
+	hv_vcpu->nested.vp_id = hve->hv_vp_id;
+}
+
 #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index bed5e1692cef..91174f0120a2 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -761,6 +761,8 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
 	if (kvm_vcpu_apicv_active(vcpu))
 		kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
 
+	nested_svm_hv_update_vm_vp_ids(vcpu);
+
 	return 0;
 }
 
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson,
    Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 15/37] KVM: x86: Introduce .hv_inject_synthetic_vmexit_post_tlb_flush()
 nested hook
Date: Wed, 25 May 2022 11:01:11 +0200
Message-Id: <20220525090133.1264239-16-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Hyper-V supports injecting synthetic L2->L1 exit after performing
L2 TLB flush operation but the procedure is vendor specific.
Introduce .hv_inject_synthetic_vmexit_post_tlb_flush nested hook for it.

Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/Makefile           |  3 ++-
 arch/x86/kvm/svm/hyperv.c       | 11 +++++++++++
 arch/x86/kvm/svm/hyperv.h       |  2 ++
 arch/x86/kvm/svm/nested.c       |  1 +
 arch/x86/kvm/vmx/evmcs.c        |  4 ++++
 arch/x86/kvm/vmx/evmcs.h        |  1 +
 arch/x86/kvm/vmx/nested.c       |  1 +
 8 files changed, 23 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kvm/svm/hyperv.c

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8bb224dac57f..19b62589bb2c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1593,6 +1593,7 @@ struct kvm_x86_nested_ops {
 	int (*enable_evmcs)(struct kvm_vcpu *vcpu,
 			    uint16_t *vmcs_version);
 	uint16_t (*get_evmcs_version)(struct kvm_vcpu *vcpu);
+	void (*hv_inject_synthetic_vmexit_post_tlb_flush)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_x86_init_ops {
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 30f244b64523..b6d53b045692 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -25,7 +25,8 @@ kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
 	       vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
 kvm-intel-$(CONFIG_X86_SGX_KVM) += vmx/sgx.o
 
-kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
+kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o \
+	     svm/sev.o svm/hyperv.o
 
 ifdef CONFIG_HYPERV
 kvm-amd-y += svm/svm_onhyperv.o
diff --git a/arch/x86/kvm/svm/hyperv.c b/arch/x86/kvm/svm/hyperv.c
new file mode 100644
index 000000000000..911f51021af1
--- /dev/null
+++ b/arch/x86/kvm/svm/hyperv.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD SVM specific code for Hyper-V on KVM.
+ *
+ * Copyright 2022 Red Hat, Inc. and/or its affiliates.
+ */
+#include "hyperv.h"
+
+void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
+{
+}
diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index 8cf702fed7e5..dd2e393f84a0 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -48,4 +48,6 @@ static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
 	hv_vcpu->nested.vp_id = hve->hv_vp_id;
 }
 
+void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
+
 #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 91174f0120a2..3b243abe0121 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1665,4 +1665,5 @@ struct kvm_x86_nested_ops svm_nested_ops = {
 	.get_nested_state_pages = svm_get_nested_state_pages,
 	.get_state = svm_get_nested_state,
 	.set_state = svm_set_nested_state,
+	.hv_inject_synthetic_vmexit_post_tlb_flush = svm_hv_inject_synthetic_vmexit_post_tlb_flush,
 };
diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
index 6a61b1ae7942..805afc170b5b 100644
--- a/arch/x86/kvm/vmx/evmcs.c
+++ b/arch/x86/kvm/vmx/evmcs.c
@@ -439,3 +439,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
 
 	return 0;
 }
+
+void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
+{
+}
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index f886a8ff0342..584741b85eb6 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -245,5 +245,6 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
 			uint16_t *vmcs_version);
 void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata);
 int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
+void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
 
 #endif /* __KVM_X86_VMX_EVMCS_H */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ee88921c6156..c18495098834 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6850,4 +6850,5 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
 	.write_log_dirty = nested_vmx_write_pml_buffer,
 	.enable_evmcs = nested_enable_evmcs,
 	.get_evmcs_version = nested_get_evmcs_version,
+	.hv_inject_synthetic_vmexit_post_tlb_flush = vmx_hv_inject_synthetic_vmexit_post_tlb_flush,
 };
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson,
    Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 16/37] KVM: x86: hyper-v: Introduce kvm_hv_is_tlb_flush_hcall()
Date: Wed, 25 May 2022 11:01:12 +0200
Message-Id: <20220525090133.1264239-17-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

The newly introduced helper checks whether a vCPU is performing a
Hyper-V TLB flush hypercall. This is required to filter out L2 TLB
flush hypercalls for processing.
Reviewed-by: Maxim Levitsky Signed-off-by: Vitaly Kuznetsov --- arch/x86/kvm/hyperv.h | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h index aaced3768954..10c5aaa99f9f 100644 --- a/arch/x86/kvm/hyperv.h +++ b/arch/x86/kvm/hyperv.h @@ -171,6 +171,24 @@ static inline void kvm_hv_vcpu_empty_flush_tlb(struct = kvm_vcpu *vcpu) =20 kfifo_reset_out(&tlb_flush_fifo->entries); } + +static inline bool kvm_hv_is_tlb_flush_hcall(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_hv *hv_vcpu =3D to_hv_vcpu(vcpu); + u16 code; + + if (!hv_vcpu) + return false; + + code =3D is_64_bit_hypercall(vcpu) ? kvm_rcx_read(vcpu) : + kvm_rax_read(vcpu); + + return (code =3D=3D HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE || + code =3D=3D HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST || + code =3D=3D HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX || + code =3D=3D HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX); +} + void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu); =20 =20 --=20 2.35.3 From nobody Wed Apr 29 02:00:19 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AECE8C433F5 for ; Wed, 25 May 2022 09:04:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232405AbiEYJEi (ORCPT ); Wed, 25 May 2022 05:04:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47696 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241367AbiEYJDq (ORCPT ); Wed, 25 May 2022 05:03:46 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 851DA98085 for ; Wed, 25 May 2022 02:02:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1653469337; 
From: Vitaly Kuznetsov
Subject: [PATCH v4 17/37] KVM: x86: hyper-v: L2 TLB flush
Date: Wed, 25 May 2022 11:01:13 +0200
Message-Id: <20220525090133.1264239-18-vkuznets@redhat.com>

Handle L2 TLB flush requests by going through all vCPUs
and checking whether any vCPU is running the same VM_ID with a VP_ID specified in the request. Perform a synthetic exit to L2 upon completion.

Note, while checking VM_ID/VP_ID of running vCPUs seems to be a bit racy, we count on the fact that KVM flushes the whole L2 VPID upon transition. Also, a KVM_REQ_HV_TLB_FLUSH request needs to be made upon transition between L1 and L2 to make sure all pending requests are always processed.

For reference, the Hyper-V TLFS refers to the feature as "Direct Virtual Flush".

Note, nVMX/nSVM code does not handle VMCALL/VMMCALL from L2 yet.

Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/hyperv.c | 78 ++++++++++++++++++++++++++++++++++++-------
 arch/x86/kvm/hyperv.h |  3 ---
 arch/x86/kvm/trace.h  | 21 +++++++-----
 3 files changed, 79 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 7c68b355253f..e3fedc89d84b 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -34,6 +34,7 @@
 #include
 
 #include
+#include
 #include
 
 #include "trace.h"
@@ -1835,9 +1836,10 @@ static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc
 					entries, consumed_xmm_halves, offset);
 }
 
-static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
+static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu,
+				 struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo,
+				 u64 *entries, int count)
 {
-	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
 	unsigned long flags;
@@ -1845,9 +1847,6 @@ static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
 	if (!hv_vcpu)
 		return;
 
-	/* kvm_hv_flush_tlb() is not ready to handle requests for L2s yet */
-	tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo[HV_L1_TLB_FLUSH_FIFO];
-
 	spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
 
 	/*
@@ -1918,6 +1917,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	struct hv_tlb_flush_ex flush_ex;
 	struct hv_tlb_flush flush;
 	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
+	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
 
 	/*
	 * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
	 * entries on the TLB flush fifo. The last entry, however, needs to be
@@ -1961,7 +1961,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		}
 
 		trace_kvm_hv_flush_tlb(flush.processor_mask,
-				       flush.address_space, flush.flags);
+				       flush.address_space, flush.flags,
+				       is_guest_mode(vcpu));
 
 		valid_bank_mask = BIT_ULL(0);
 		sparse_banks[0] = flush.processor_mask;
@@ -1992,7 +1993,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
					  flush_ex.hv_vp_set.format,
					  flush_ex.address_space,
-					  flush_ex.flags);
+					  flush_ex.flags, is_guest_mode(vcpu));
 
 		valid_bank_mask = flush_ex.hv_vp_set.valid_bank_mask;
 		all_cpus = flush_ex.hv_vp_set.format !=
@@ -2026,23 +2027,59 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		tlb_flush_entries = __tlb_flush_entries;
 	}
 
+	tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu);
+
 	/*
	 * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
	 * analyze it here, flush TLB regardless of the specified address space.
	 */
-	if (all_cpus) {
+	if (all_cpus && !is_guest_mode(vcpu)) {
 		kvm_for_each_vcpu(i, v, kvm)
-			hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
+			hv_tlb_flush_enqueue(v, tlb_flush_fifo,
+					     tlb_flush_entries, hc->rep_cnt);
 
 		kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
-	} else {
+	} else if (!is_guest_mode(vcpu)) {
 		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
 
 		for_each_set_bit(i, vcpu_mask, KVM_MAX_VCPUS) {
 			v = kvm_get_vcpu(kvm, i);
 			if (!v)
 				continue;
-			hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
+			hv_tlb_flush_enqueue(v, tlb_flush_fifo,
+					     tlb_flush_entries, hc->rep_cnt);
+		}
+
+		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
+	} else {
+		struct kvm_vcpu_hv *hv_v;
+
+		bitmap_zero(vcpu_mask, KVM_MAX_VCPUS);
+
+		kvm_for_each_vcpu(i, v, kvm) {
+			hv_v = to_hv_vcpu(v);
+
+			/*
+			 * The following check races with nested vCPUs entering/exiting
+			 * and/or migrating between L1's vCPUs, however the only case when
+			 * KVM *must* flush the TLB is when the target L2 vCPU keeps
+			 * running on the same L1 vCPU from the moment of the request until
+			 * kvm_hv_flush_tlb() returns. TLB is fully flushed in all other
+			 * cases, e.g. when the target L2 vCPU migrates to a different L1
+			 * vCPU or when the corresponding L1 vCPU temporarily switches to a
+			 * different L2 vCPU while the request is being processed.
+			 */
+			if (!hv_v || hv_v->nested.vm_id != hv_vcpu->nested.vm_id)
+				continue;
+
+			if (!all_cpus &&
+			    !hv_is_vp_in_sparse_set(hv_v->nested.vp_id, valid_bank_mask,
+						    sparse_banks))
+				continue;
+
+			__set_bit(i, vcpu_mask);
+			hv_tlb_flush_enqueue(v, tlb_flush_fifo,
+					     tlb_flush_entries, hc->rep_cnt);
 		}
 
 		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
@@ -2230,10 +2267,27 @@ static void kvm_hv_hypercall_set_result(struct kvm_vcpu *vcpu, u64 result)
 
 static int kvm_hv_hypercall_complete(struct kvm_vcpu *vcpu, u64 result)
 {
+	int ret;
+
 	trace_kvm_hv_hypercall_done(result);
 	kvm_hv_hypercall_set_result(vcpu, result);
 	++vcpu->stat.hypercalls;
-	return kvm_skip_emulated_instruction(vcpu);
+	ret = kvm_skip_emulated_instruction(vcpu);
+
+	if (unlikely(hv_result_success(result) && is_guest_mode(vcpu)
+	    && kvm_hv_is_tlb_flush_hcall(vcpu))) {
+		struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+		u32 tlb_lock_count;
+
+		if (unlikely(kvm_read_guest(vcpu->kvm, hv_vcpu->nested.pa_page_gpa,
+					    &tlb_lock_count, sizeof(tlb_lock_count))))
+			kvm_inject_gp(vcpu, 0);
+
+		if (tlb_lock_count)
+			kvm_x86_ops.nested_ops->hv_inject_synthetic_vmexit_post_tlb_flush(vcpu);
+	}
+
+	return ret;
 }
 
 static int kvm_hv_hypercall_complete_userspace(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 10c5aaa99f9f..b6583d02b2ea 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -154,9 +154,6 @@ static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struc
 	int i = !is_guest_mode(vcpu) ?
	    HV_L1_TLB_FLUSH_FIFO : HV_L2_TLB_FLUSH_FIFO;
 
-	/* KVM does not handle L2 TLB flush requests yet */
-	WARN_ON_ONCE(i != HV_L1_TLB_FLUSH_FIFO);
-
 	return &hv_vcpu->tlb_flush_fifo[i];
 }
 
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index de4762517569..e173f03bc27a 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -1499,38 +1499,41 @@ TRACE_EVENT(kvm_hv_timer_state,
 * Tracepoint for kvm_hv_flush_tlb.
 */
TRACE_EVENT(kvm_hv_flush_tlb,
-	TP_PROTO(u64 processor_mask, u64 address_space, u64 flags),
-	TP_ARGS(processor_mask, address_space, flags),
+	TP_PROTO(u64 processor_mask, u64 address_space, u64 flags, bool guest_mode),
+	TP_ARGS(processor_mask, address_space, flags, guest_mode),
 
 	TP_STRUCT__entry(
 		__field(u64, processor_mask)
 		__field(u64, address_space)
 		__field(u64, flags)
+		__field(bool, guest_mode)
 	),
 
 	TP_fast_assign(
 		__entry->processor_mask = processor_mask;
 		__entry->address_space = address_space;
 		__entry->flags = flags;
+		__entry->guest_mode = guest_mode;
 	),
 
-	TP_printk("processor_mask 0x%llx address_space 0x%llx flags 0x%llx",
+	TP_printk("processor_mask 0x%llx address_space 0x%llx flags 0x%llx %s",
 		  __entry->processor_mask, __entry->address_space,
-		  __entry->flags)
+		  __entry->flags, __entry->guest_mode ? "(L2)" : "")
);
 
/*
 * Tracepoint for kvm_hv_flush_tlb_ex.
 */
TRACE_EVENT(kvm_hv_flush_tlb_ex,
-	TP_PROTO(u64 valid_bank_mask, u64 format, u64 address_space, u64 flags),
-	TP_ARGS(valid_bank_mask, format, address_space, flags),
+	TP_PROTO(u64 valid_bank_mask, u64 format, u64 address_space, u64 flags, bool guest_mode),
+	TP_ARGS(valid_bank_mask, format, address_space, flags, guest_mode),
 
 	TP_STRUCT__entry(
 		__field(u64, valid_bank_mask)
 		__field(u64, format)
 		__field(u64, address_space)
 		__field(u64, flags)
+		__field(bool, guest_mode)
 	),
 
 	TP_fast_assign(
@@ -1538,12 +1541,14 @@ TRACE_EVENT(kvm_hv_flush_tlb_ex,
 		__entry->format = format;
 		__entry->address_space = address_space;
 		__entry->flags = flags;
+		__entry->guest_mode = guest_mode;
 	),
 
 	TP_printk("valid_bank_mask 0x%llx format 0x%llx "
-		  "address_space 0x%llx flags 0x%llx",
+		  "address_space 0x%llx flags 0x%llx %s",
 		  __entry->valid_bank_mask, __entry->format,
-		  __entry->address_space, __entry->flags)
+		  __entry->address_space, __entry->flags,
+		  __entry->guest_mode ? "(L2)" : "")
);
 
/*
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
Subject: [PATCH v4 18/37] KVM: x86: hyper-v: Introduce fast guest_hv_cpuid_has_l2_tlb_flush() check
Date: Wed, 25 May 2022 11:01:14 +0200
Message-Id: <20220525090133.1264239-19-vkuznets@redhat.com>

Introduce a
helper to quickly check if KVM needs to handle VMCALL/VMMCALL from L2 in L0 to process L2 TLB flush requests.

Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/hyperv.c           | 6 ++++++
 arch/x86/kvm/hyperv.h           | 7 +++++++
 3 files changed, 14 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 19b62589bb2c..c8e75f529b9d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -639,6 +639,7 @@ struct kvm_vcpu_hv {
 		u32 enlightenments_eax; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EAX */
 		u32 enlightenments_ebx; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EBX */
 		u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
+		u32 nested_features_eax; /* HYPERV_CPUID_NESTED_FEATURES.EAX */
 	} cpuid_cache;
 
 	struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index e3fedc89d84b..9a41835ff4bc 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2229,6 +2229,12 @@ void kvm_hv_set_cpuid(struct kvm_vcpu *vcpu)
 		hv_vcpu->cpuid_cache.syndbg_cap_eax = entry->eax;
 	else
 		hv_vcpu->cpuid_cache.syndbg_cap_eax = 0;
+
+	entry = kvm_find_cpuid_entry(vcpu, HYPERV_CPUID_NESTED_FEATURES, 0);
+	if (entry)
+		hv_vcpu->cpuid_cache.nested_features_eax = entry->eax;
+	else
+		hv_vcpu->cpuid_cache.nested_features_eax = 0;
 }
 
 int kvm_hv_set_enforce_cpuid(struct kvm_vcpu *vcpu, bool enforce)
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index b6583d02b2ea..9c9b842bbd73 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -169,6 +169,13 @@ static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
 	kfifo_reset_out(&tlb_flush_fifo->entries);
 }
 
+static inline bool guest_hv_cpuid_has_l2_tlb_flush(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+	return hv_vcpu &&
+	       (hv_vcpu->cpuid_cache.nested_features_eax & HV_X64_NESTED_DIRECT_FLUSH);
+}
+
 static inline bool kvm_hv_is_tlb_flush_hcall(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
Subject: [PATCH v4 19/37] x86/hyperv: Fix 'struct hv_enlightened_vmcs' definition
Date: Wed, 25 May 2022 11:01:15 +0200
Message-Id: <20220525090133.1264239-20-vkuznets@redhat.com>

Section 1.9 of TLFS v6.0b says: "All structures are padded in such a way that fields are aligned naturally (that is, an 8-byte field is aligned to an offset of 8 bytes and so on)". 'struct hv_enlightened_vmcs' has a glitch:

	...
	struct {
		u32 nested_flush_hypercall:1; /*   836: 0  4 */
		u32 msr_bitmap:1;             /*   836: 1  4 */
		u32 reserved:30;              /*   836: 2  4 */
	} hv_enlightenments_control;          /*   836     4 */
	u32 hv_vp_id;                         /*   840     4 */
	u64 hv_vm_id;                         /*   844     8 */
	u64 partition_assist_page;            /*   852     8 */
	...

And the observed values in 'partition_assist_page' make no sense at all. Fix the layout by padding the structure properly.
Fixes: 68d1eb72ee99 ("x86/hyper-v: define struct hv_enlightened_vmcs and clean field bits")
Reviewed-by: Maxim Levitsky
Reviewed-by: Michael Kelley
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/hyperv-tlfs.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 5225a85c08c3..e7ddae8e02c6 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -548,7 +548,7 @@ struct hv_enlightened_vmcs {
 	u64 guest_rip;
 
 	u32 hv_clean_fields;
-	u32 hv_padding_32;
+	u32 padding32_1;
 	u32 hv_synthetic_controls;
 	struct {
 		u32 nested_flush_hypercall:1;
@@ -556,7 +556,7 @@ struct hv_enlightened_vmcs {
 		u32 reserved:30;
 	} __packed hv_enlightenments_control;
 	u32 hv_vp_id;
-
+	u32 padding32_2;
 	u64 hv_vm_id;
 	u64 partition_assist_page;
 	u64 padding64_4[4];
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
Subject: [PATCH v4 20/37] KVM: nVMX: hyper-v: Enable L2 TLB flush
Date: Wed, 25 May 2022 11:01:16 +0200
Message-Id: <20220525090133.1264239-21-vkuznets@redhat.com>

Enable L2 TLB flush feature on nVMX when:
- Enlightened VMCS is in use.
- The feature flag is enabled in eVMCS.
- The feature flag is enabled in partition assist page.

Perform synthetic vmexit to L1 after processing TLB flush call upon request (HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH).

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/vmx/evmcs.c  | 20 ++++++++++++++++++++
 arch/x86/kvm/vmx/evmcs.h  | 10 ++++++++++
 arch/x86/kvm/vmx/nested.c | 16 ++++++++++++++++
 3 files changed, 46 insertions(+)

diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
index 805afc170b5b..7c537a4f602e 100644
--- a/arch/x86/kvm/vmx/evmcs.c
+++ b/arch/x86/kvm/vmx/evmcs.c
@@ -6,6 +6,7 @@
 #include "../hyperv.h"
 #include "../cpuid.h"
 #include "evmcs.h"
+#include "nested.h"
 #include "vmcs.h"
 #include "vmx.h"
 #include "trace.h"
@@ -440,6 +441,25 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
+	struct hv_vp_assist_page assist_page;
+
+	if (!evmcs)
+		return false;
+
+	if (!evmcs->hv_enlightenments_control.nested_flush_hypercall)
+		return false;
+
+	if (unlikely(!kvm_hv_get_assist_page(vcpu, &assist_page)))
+		return false;
+
+	return assist_page.nested_control.features.directhypercall;
+}
+
 void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
 {
+	nested_vmx_vmexit(vcpu, HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH, 0, 0);
 }
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index 584741b85eb6..be34b68b3f02 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -66,6 +66,15 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs);
 #define EVMCS1_UNSUPPORTED_VMENTRY_CTRL (VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
 #define EVMCS1_UNSUPPORTED_VMFUNC (VMX_VMFUNC_EPTP_SWITCHING)
 
+/*
+ * Note, Hyper-V isn't actually stealing bit 28 from Intel, just abusing it by
+ * pairing it with architecturally impossible exit reasons.
+ * Bit 28 is set only
+ * on SMI exits to a SMI transfer monitor (STM) and if and only if a MTF VM-Exit
+ * is pending. I.e. it will never be set by hardware for non-SMI exits (there
+ * are only three), nor will it ever be set unless the VMM is an STM.
+ */
+#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
+
 struct evmcs_field {
 	u16 offset;
 	u16 clean_field;
@@ -245,6 +254,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
 			uint16_t *vmcs_version);
 void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata);
 int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
+bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu);
 void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
 
 #endif /* __KVM_X86_VMX_EVMCS_H */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c18495098834..b1a47db07761 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1170,6 +1170,17 @@ static void nested_vmx_transition_tlb_flush(struct kvm_vcpu *vcpu,
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
+	/*
+	 * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or
+	 * L2's VP_ID upon request from the guest. Make sure we check for
+	 * pending entries for the case when the request got misplaced (e.g.
+	 * a transition from L2->L1 happened while processing L2 TLB flush
+	 * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
+	 * anything if there are no requests in the corresponding buffer.
+	 */
+	if (to_hv_vcpu(vcpu))
+		kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+
 	/*
	 * If vmcs12 doesn't use VPID, L1 expects linear and combined mappings
	 * for *all* contexts to be flushed on VM-Enter/VM-Exit, i.e. it's a
@@ -5997,6 +6008,11 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
		 * Handle L2's bus locks in L0 directly.
		 */
		return true;
+	case EXIT_REASON_VMCALL:
+		/* Hyper-V L2 TLB flush hypercall is handled by L0 */
+		return guest_hv_cpuid_has_l2_tlb_flush(vcpu) &&
+			nested_evmcs_l2_tlb_flush_enabled(vcpu) &&
+			kvm_hv_is_tlb_flush_hcall(vcpu);
	default:
		break;
	}
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
Subject: [PATCH v4 21/37] KVM: nSVM: hyper-v: Enable L2 TLB flush
Date: Wed, 25 May 2022 11:01:17 +0200
Message-Id: <20220525090133.1264239-22-vkuznets@redhat.com>

Implement Hyper-V L2 TLB flush for nSVM. The feature needs to be enabled both in extended 'nested controls' in VMCB and partition assist page. According to Hyper-V TLFS, synthetic vmexit to L1 is performed with:
- HV_SVM_EXITCODE_ENL exit_code.
- HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH exit_info_1.
Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/svm/hyperv.c |  7 +++++++
 arch/x86/kvm/svm/hyperv.h | 19 +++++++++++++++++++
 arch/x86/kvm/svm/nested.c | 27 +++++++++++++++++++++++++--
 3 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/hyperv.c b/arch/x86/kvm/svm/hyperv.c
index 911f51021af1..088f6429b24c 100644
--- a/arch/x86/kvm/svm/hyperv.c
+++ b/arch/x86/kvm/svm/hyperv.c
@@ -8,4 +8,11 @@
 
 void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	svm->vmcb->control.exit_code = HV_SVM_EXITCODE_ENL;
+	svm->vmcb->control.exit_code_hi = 0;
+	svm->vmcb->control.exit_info_1 = HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH;
+	svm->vmcb->control.exit_info_2 = 0;
+	nested_svm_vmexit(svm);
 }
diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index dd2e393f84a0..6ea78499e21b 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -33,6 +33,9 @@ struct hv_enlightenments {
 */
 #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
 
+#define HV_SVM_EXITCODE_ENL 0xF0000000
+#define HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH (1)
+
 static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -48,6 +51,22 @@ static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
 	hv_vcpu->nested.vp_id = hve->hv_vp_id;
 }
 
+static inline bool nested_svm_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct hv_enlightenments *hve =
+		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
+	struct hv_vp_assist_page assist_page;
+
+	if (unlikely(!kvm_hv_get_assist_page(vcpu, &assist_page)))
+		return false;
+
+	if (!hve->hv_enlightenments_control.nested_flush_hypercall)
+		return false;
+
+	return assist_page.nested_control.features.directhypercall;
+}
+
 void svm_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
 
 #endif /*
__ARCH_X86_KVM_SVM_HYPERV_H__ */ diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 3b243abe0121..864d4690ded4 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -171,8 +171,12 @@ void recalc_intercepts(struct vcpu_svm *svm) vmcb_clr_intercept(c, INTERCEPT_VINTR); } =20 - /* We don't want to see VMMCALLs from a nested guest */ - vmcb_clr_intercept(c, INTERCEPT_VMMCALL); + /* + * We want to see VMMCALLs from a nested guest only when Hyper-V L2 TLB + * flush feature is enabled. + */ + if (!nested_svm_l2_tlb_flush_enabled(&svm->vcpu)) + vmcb_clr_intercept(c, INTERCEPT_VMMCALL); =20 for (i =3D 0; i < MAX_INTERCEPT; i++) c->intercepts[i] |=3D g->intercepts[i]; @@ -488,6 +492,17 @@ static void nested_save_pending_event_to_vmcb12(struct= vcpu_svm *svm, =20 static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu) { + /* + * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or + * L2's VP_ID upon request from the guest. Make sure we check for + * pending entries for the case when the request got misplaced (e.g. + * a transition from L2->L1 happened while processing L2 TLB flush + * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush + * anything if there are no requests in the corresponding buffer. + */ + if (to_hv_vcpu(vcpu)) + kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu); + /* * TODO: optimize unconditional TLB flush/MMU sync. 
	 * A partial list of
	 * things to fix before this can be conditional:
@@ -1357,6 +1372,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 int nested_svm_exit_special(struct vcpu_svm *svm)
 {
 	u32 exit_code = svm->vmcb->control.exit_code;
+	struct kvm_vcpu *vcpu = &svm->vcpu;

 	switch (exit_code) {
 	case SVM_EXIT_INTR:
@@ -1375,6 +1391,13 @@ int nested_svm_exit_special(struct vcpu_svm *svm)
 			return NESTED_EXIT_HOST;
 		break;
 	}
+	case SVM_EXIT_VMMCALL:
+		/* Hyper-V L2 TLB flush hypercall is handled by L0 */
+		if (guest_hv_cpuid_has_l2_tlb_flush(vcpu) &&
+		    nested_svm_l2_tlb_flush_enabled(vcpu) &&
+		    kvm_hv_is_tlb_flush_hcall(vcpu))
+			return NESTED_EXIT_HOST;
+		break;
 	default:
 		break;
 	}
--
2.35.3

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 22/37] KVM: x86: Expose Hyper-V L2 TLB flush feature
Date: Wed, 25 May 2022 11:01:18 +0200
Message-Id: <20220525090133.1264239-23-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

With both nSVM and nVMX implementations in place, KVM can now expose the
Hyper-V L2 TLB flush feature to userspace.
Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/hyperv.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 9a41835ff4bc..7cdad050a5b7 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2774,6 +2774,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,

 	case HYPERV_CPUID_NESTED_FEATURES:
 		ent->eax = evmcs_ver;
+		ent->eax |= HV_X64_NESTED_DIRECT_FLUSH;
 		ent->eax |= HV_X64_NESTED_MSR_BITMAP;

 		break;
--
2.35.3

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 23/37] KVM: selftests: Better XMM read/write helpers
Date: Wed, 25 May 2022 11:01:19 +0200
Message-Id: <20220525090133.1264239-24-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

set_xmm()/get_xmm() helpers are fairly useless as they only read 64 bits
from 128-bit registers. Moreover, these helpers are not used. Borrow
_kvm_read_sse_reg()/_kvm_write_sse_reg() from KVM, limiting them to
XMM0-XMM7 for now.
Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 .../selftests/kvm/include/x86_64/processor.h  | 70 ++++++++++---------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 37db341d4cc5..9ad7602a257b 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -296,71 +296,73 @@ static inline void cpuid(uint32_t *eax, uint32_t *ebx,
 		      : "memory");
 }

-#define SET_XMM(__var, __xmm) \
-	asm volatile("movq %0, %%"#__xmm : : "r"(__var) : #__xmm)
+typedef u32 __attribute__((vector_size(16))) sse128_t;
+#define __sse128_u union { sse128_t vec; u64 as_u64[2]; u32 as_u32[4]; }
+#define sse128_lo(x) ({ __sse128_u t; t.vec = x; t.as_u64[0]; })
+#define sse128_hi(x) ({ __sse128_u t; t.vec = x; t.as_u64[1]; })

-static inline void set_xmm(int n, unsigned long val)
+static inline void read_sse_reg(int reg, sse128_t *data)
 {
-	switch (n) {
+	switch (reg) {
 	case 0:
-		SET_XMM(val, xmm0);
+		asm("movdqa %%xmm0, %0" : "=m"(*data));
 		break;
 	case 1:
-		SET_XMM(val, xmm1);
+		asm("movdqa %%xmm1, %0" : "=m"(*data));
 		break;
 	case 2:
-		SET_XMM(val, xmm2);
+		asm("movdqa %%xmm2, %0" : "=m"(*data));
 		break;
 	case 3:
-		SET_XMM(val, xmm3);
+		asm("movdqa %%xmm3, %0" : "=m"(*data));
 		break;
 	case 4:
-		SET_XMM(val, xmm4);
+		asm("movdqa %%xmm4, %0" : "=m"(*data));
 		break;
 	case 5:
-		SET_XMM(val, xmm5);
+		asm("movdqa %%xmm5, %0" : "=m"(*data));
 		break;
 	case 6:
-		SET_XMM(val, xmm6);
+		asm("movdqa %%xmm6, %0" : "=m"(*data));
 		break;
 	case 7:
-		SET_XMM(val, xmm7);
+		asm("movdqa %%xmm7, %0" : "=m"(*data));
 		break;
+	default:
+		BUG();
 	}
 }

-#define GET_XMM(__xmm) \
-({ \
-	unsigned long __val; \
-	asm volatile("movq %%"#__xmm", %0" : "=r"(__val)); \
-	__val; \
-})
-
-static inline unsigned long get_xmm(int n)
+static inline void write_sse_reg(int reg, const sse128_t *data)
 {
-	assert(n >= 0 && n <= 7);
-
-	switch (n) {
+	switch (reg) {
 	case 0:
-		return GET_XMM(xmm0);
+		asm("movdqa %0, %%xmm0" : : "m"(*data));
+		break;
 	case 1:
-		return GET_XMM(xmm1);
+		asm("movdqa %0, %%xmm1" : : "m"(*data));
+		break;
 	case 2:
-		return GET_XMM(xmm2);
+		asm("movdqa %0, %%xmm2" : : "m"(*data));
+		break;
 	case 3:
-		return GET_XMM(xmm3);
+		asm("movdqa %0, %%xmm3" : : "m"(*data));
+		break;
 	case 4:
-		return GET_XMM(xmm4);
+		asm("movdqa %0, %%xmm4" : : "m"(*data));
+		break;
 	case 5:
-		return GET_XMM(xmm5);
+		asm("movdqa %0, %%xmm5" : : "m"(*data));
+		break;
 	case 6:
-		return GET_XMM(xmm6);
+		asm("movdqa %0, %%xmm6" : : "m"(*data));
+		break;
 	case 7:
-		return GET_XMM(xmm7);
+		asm("movdqa %0, %%xmm7" : : "m"(*data));
+		break;
+	default:
+		BUG();
 	}
-
-	/* never reached */
-	return 0;
 }

 static inline void cpu_relax(void)
--
2.35.3

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 24/37] KVM: selftests: Move HYPERV_LINUX_OS_ID definition to a common header
Date: Wed, 25 May 2022 11:01:20 +0200
Message-Id: <20220525090133.1264239-25-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

HYPERV_LINUX_OS_ID needs to be written to HV_X64_MSR_GUEST_OS_ID by
each Hyper-V specific selftest.
Signed-off-by: Vitaly Kuznetsov
---
 tools/testing/selftests/kvm/include/x86_64/hyperv.h  | 3 +++
 tools/testing/selftests/kvm/x86_64/hyperv_features.c | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index b66910702c0a..f0a8a93694b2 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -185,4 +185,7 @@
 /* hypercall options */
 #define HV_HYPERCALL_FAST_BIT BIT(16)

+/* Proper HV_X64_MSR_GUEST_OS_ID value */
+#define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
+
 #endif /* !SELFTEST_KVM_HYPERV_H */
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
index 672915ce73d8..98c020356925 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
@@ -14,7 +14,6 @@
 #include "hyperv.h"

 #define VCPU_ID 0
-#define LINUX_OS_ID ((u64)0x8100 << 48)

 extern unsigned char rdmsr_start;
 extern unsigned char rdmsr_end;
@@ -127,7 +126,7 @@ static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
 	int i = 0;
 	u64 res, input, output;

-	wrmsr(HV_X64_MSR_GUEST_OS_ID, LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
 	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);

 	while (hcall->control) {
@@ -230,7 +229,7 @@ static void guest_test_msrs_access(void)
 		 */
 		msr->idx = HV_X64_MSR_GUEST_OS_ID;
 		msr->write = 1;
-		msr->write_val = LINUX_OS_ID;
+		msr->write_val = HYPERV_LINUX_OS_ID;
 		msr->available = 1;
 		break;
 	case 3:
--
2.35.3

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 25/37] KVM: selftests: Move the function doing Hyper-V hypercall to a common header
Date: Wed, 25 May 2022 11:01:21 +0200
Message-Id: <20220525090133.1264239-26-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

All Hyper-V specific tests issuing hypercalls need this.

Signed-off-by: Vitaly Kuznetsov
---
 .../selftests/kvm/include/x86_64/hyperv.h    | 15 +++++++++++++++
 .../selftests/kvm/x86_64/hyperv_features.c   | 17 +----------------
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index f0a8a93694b2..e0a1b4c2fbbc 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -185,6 +185,21 @@
 /* hypercall options */
 #define HV_HYPERCALL_FAST_BIT BIT(16)

+static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
+				   vm_vaddr_t output_address)
+{
+	u64 hv_status;
+
+	asm volatile("mov %3, %%r8\n"
+		     "vmcall"
+		     : "=a" (hv_status),
+		       "+c" (control), "+d" (input_address)
+		     : "r" (output_address)
+		     : "cc", "memory", "r8", "r9", "r10", "r11");
+
+	return hv_status;
+}
+
 /* Proper HV_X64_MSR_GUEST_OS_ID value */
 #define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)

diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
index 98c020356925..788d570e991e 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
@@ -48,21 +48,6 @@ static void do_wrmsr(u32 idx, u64 val)
 static int nr_gp;
 static int nr_ud;

-static inline u64 hypercall(u64 control, vm_vaddr_t input_address,
-			    vm_vaddr_t output_address)
-{
-	u64 hv_status;
-
-	asm volatile("mov %3, %%r8\n"
-		     "vmcall"
-		     : "=a" (hv_status),
-		       "+c" (control), "+d" (input_address)
-		     : "r" (output_address)
-		     : "cc", "memory", "r8", "r9", "r10", "r11");
-
-	return hv_status;
-}
-
 static void guest_gp_handler(struct ex_regs *regs)
 {
 	unsigned char *rip = (unsigned char *)regs->rip;
@@ -138,7 +123,7 @@ static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
 		input = output = 0;
 	}

-	res = hypercall(hcall->control, input, output);
+	res = hyperv_hypercall(hcall->control, input, output);
 	if (hcall->ud_expected)
 		GUEST_ASSERT(nr_ud == 1);
 	else
--
2.35.3

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 26/37] KVM: selftests: Hyper-V PV IPI selftest
Date: Wed, 25 May 2022 11:01:22 +0200
Message-Id: <20220525090133.1264239-27-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Introduce a selftest for Hyper-V PV IPI hypercalls
(HvCallSendSyntheticClusterIpi, HvCallSendSyntheticClusterIpiEx).
The test creates one 'sender' vCPU and two 'receiver' vCPUs and then
issues various combinations of send-IPI hypercalls in both 'normal'
and 'fast' (with XMM input where necessary) mode. Later, the test
checks whether IPIs were delivered to the expected destination vCPU(s).

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/hyperv.h     |  12 +
 .../testing/selftests/kvm/x86_64/hyperv_ipi.c | 352 ++++++++++++++++++
 4 files changed, 366 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_ipi.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 4f48f9c2411d..103faed95771 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -23,6 +23,7 @@
 /x86_64/hyperv_clock
 /x86_64/hyperv_cpuid
 /x86_64/hyperv_features
+/x86_64/hyperv_ipi
 /x86_64/hyperv_svm_test
 /x86_64/max_vcpuid_cap_test
 /x86_64/mmio_warning_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 8c3db2f75315..d504b177b510 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -52,6 +52,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/fix_hypercall_test
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_clock
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_features
+TEST_GEN_PROGS_x86_64 += x86_64/hyperv_ipi
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_svm_test
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index e0a1b4c2fbbc..1b467626be58 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -9,6 +9,8 @@
 #ifndef SELFTEST_KVM_HYPERV_H
 #define SELFTEST_KVM_HYPERV_H

+#include "processor.h"
+
 #define HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS 0x40000000
 #define HYPERV_CPUID_INTERFACE 0x40000001
 #define HYPERV_CPUID_VERSION 0x40000002
@@ -184,6 +186,7 @@

 /* hypercall options */
 #define HV_HYPERCALL_FAST_BIT BIT(16)
+#define HV_HYPERCALL_VARHEAD_OFFSET 17

 static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
 				   vm_vaddr_t output_address)
@@ -200,6 +203,15 @@ static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
 	return hv_status;
 }

+/* Write 'Fast' hypercall input 'data' to the first 'n_sse_regs' SSE regs */
+static inline void hyperv_write_xmm_input(void *data, int n_sse_regs)
+{
+	int i;
+
+	for (i = 0; i < n_sse_regs; i++)
+		write_sse_reg(i, (sse128_t *)(data + sizeof(sse128_t) * i));
+}
+
 /* Proper HV_X64_MSR_GUEST_OS_ID value */
 #define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)

diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
new file mode 100644
index 000000000000..a8e834be62bc
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
@@ -0,0 +1,352 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hyper-V HvCallSendSyntheticClusterIpi{,Ex} tests
+ *
+ * Copyright (C) 2022, Red Hat, Inc.
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <pthread.h>
+#include <inttypes.h>
+
+#include "kvm_util.h"
+#include "hyperv.h"
+#include "test_util.h"
+#include "vmx.h"
+
+#define SENDER_VCPU_ID 1
+#define RECEIVER_VCPU_ID_1 2
+#define RECEIVER_VCPU_ID_2 65
+
+#define IPI_VECTOR 0xfe
+
+static volatile uint64_t ipis_rcvd[RECEIVER_VCPU_ID_2 + 1];
+
+struct thread_params {
+	struct kvm_vm *vm;
+	uint32_t vcpu_id;
+};
+
+struct hv_vpset {
+	u64 format;
+	u64 valid_bank_mask;
+	u64 bank_contents[2];
+};
+
+enum HV_GENERIC_SET_FORMAT {
+	HV_GENERIC_SET_SPARSE_4K,
+	HV_GENERIC_SET_ALL,
+};
+
+/* HvCallSendSyntheticClusterIpi hypercall */
+struct hv_send_ipi {
+	u32 vector;
+	u32 reserved;
+	u64 cpu_mask;
+};
+
+/* HvCallSendSyntheticClusterIpiEx hypercall */
+struct hv_send_ipi_ex {
+	u32 vector;
+	u32 reserved;
+	struct hv_vpset vp_set;
+};
+
+static inline void hv_init(vm_vaddr_t pgs_gpa)
+{
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+}
+
+static void receiver_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+{
+	u32 vcpu_id;
+
+	x2apic_enable();
+	hv_init(pgs_gpa);
+
+	vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+
+	/* Signal sender vCPU we're ready */
+	ipis_rcvd[vcpu_id] = (u64)-1;
+
+	for (;;)
+		asm volatile("sti; hlt; cli");
+}
+
+static void guest_ipi_handler(struct ex_regs *regs)
+{
+	u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+
+	ipis_rcvd[vcpu_id]++;
+	wrmsr(HV_X64_MSR_EOI, 1);
+}
+
+static inline void nop_loop(void)
+{
+	int i;
+
+	for (i = 0; i < 100000000; i++)
+		asm volatile("nop");
+}
+
+static void sender_guest_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+{
+	struct hv_send_ipi *ipi = (struct hv_send_ipi *)hcall_page;
+	struct hv_send_ipi_ex *ipi_ex = (struct hv_send_ipi_ex *)hcall_page;
+	int stage = 1, ipis_expected[2] = {0};
+	u64 res;
+
+	hv_init(pgs_gpa);
+	GUEST_SYNC(stage++);
+
+	/* Wait for receiver vCPUs to come up */
+	while (!ipis_rcvd[RECEIVER_VCPU_ID_1] || !ipis_rcvd[RECEIVER_VCPU_ID_2])
+		nop_loop();
+	ipis_rcvd[RECEIVER_VCPU_ID_1] = ipis_rcvd[RECEIVER_VCPU_ID_2] = 0;
+
+	/* 'Slow' HvCallSendSyntheticClusterIpi to RECEIVER_VCPU_ID_1 */
+	ipi->vector = IPI_VECTOR;
+	ipi->cpu_mask = 1 << RECEIVER_VCPU_ID_1;
+	res = hyperv_hypercall(HVCALL_SEND_IPI, pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+	GUEST_SYNC(stage++);
+	/* 'Fast' HvCallSendSyntheticClusterIpi to RECEIVER_VCPU_ID_1 */
+	res = hyperv_hypercall(HVCALL_SEND_IPI | HV_HYPERCALL_FAST_BIT,
+			       IPI_VECTOR, 1 << RECEIVER_VCPU_ID_1);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Slow' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_1 */
+	memset(hcall_page, 0, 4096);
+	ipi_ex->vector = IPI_VECTOR;
+	ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+	ipi_ex->vp_set.valid_bank_mask = 1 << 0;
+	ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_1);
+	res = hyperv_hypercall(HVCALL_SEND_IPI_EX | (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+			       pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+	GUEST_SYNC(stage++);
+	/* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_1 */
+	hyperv_write_xmm_input(&ipi_ex->vp_set.valid_bank_mask, 1);
+	res = hyperv_hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+			       (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+			       IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Slow' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_2 */
+	memset(hcall_page, 0, 4096);
+	ipi_ex->vector = IPI_VECTOR;
+	ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+	ipi_ex->vp_set.valid_bank_mask = 1 << 1;
+	ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_2 - 64);
+	res = hyperv_hypercall(HVCALL_SEND_IPI_EX | (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+			       pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+	/* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_2 */
+	hyperv_write_xmm_input(&ipi_ex->vp_set.valid_bank_mask, 1);
+	res = hyperv_hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+			       (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+			       IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Slow' HvCallSendSyntheticClusterIpiEx to both RECEIVER_VCPU_ID_{1,2} */
+	memset(hcall_page, 0, 4096);
+	ipi_ex->vector = IPI_VECTOR;
+	ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+	ipi_ex->vp_set.valid_bank_mask = 1 << 1 | 1;
+	ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_1);
+	ipi_ex->vp_set.bank_contents[1] = BIT(RECEIVER_VCPU_ID_2 - 64);
+	res = hyperv_hypercall(HVCALL_SEND_IPI_EX | (2 << HV_HYPERCALL_VARHEAD_OFFSET),
+			       pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+	/* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to both RECEIVER_VCPU_ID_{1,2} */
+	hyperv_write_xmm_input(&ipi_ex->vp_set.valid_bank_mask, 2);
+	res = hyperv_hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+			       (2 << HV_HYPERCALL_VARHEAD_OFFSET),
+			       IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Slow' HvCallSendSyntheticClusterIpiEx to HV_GENERIC_SET_ALL */
+	memset(hcall_page, 0, 4096);
+	ipi_ex->vector = IPI_VECTOR;
+	ipi_ex->vp_set.format = HV_GENERIC_SET_ALL;
+	res = hyperv_hypercall(HVCALL_SEND_IPI_EX, pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+	/*
+	 * 'XMM Fast' HvCallSendSyntheticClusterIpiEx to HV_GENERIC_SET_ALL.
+	 * No need to write anything to XMM regs.
+	 */
+	res = hyperv_hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT,
+			       IPI_VECTOR, HV_GENERIC_SET_ALL);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	GUEST_DONE();
+}
+
+static void *vcpu_thread(void *arg)
+{
+	struct thread_params *params = (struct thread_params *)arg;
+	struct ucall uc;
+	int old;
+	int r;
+	unsigned int exit_reason;
+
+	r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old);
+	TEST_ASSERT(r == 0,
+		    "pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
+		    params->vcpu_id, r);
+
+	vcpu_run(params->vm, params->vcpu_id);
+	exit_reason = vcpu_state(params->vm, params->vcpu_id)->exit_reason;
+
+	TEST_ASSERT(exit_reason == KVM_EXIT_IO,
+		    "vCPU %u exited with unexpected exit reason %u-%s, expected KVM_EXIT_IO",
+		    params->vcpu_id, exit_reason, exit_reason_str(exit_reason));
+
+	if (get_ucall(params->vm, params->vcpu_id, &uc) == UCALL_ABORT) {
+		TEST_ASSERT(false,
+			    "vCPU %u exited with error: %s.\n",
+			    params->vcpu_id, (const char *)uc.args[0]);
+	}
+
+	return NULL;
+}
+
+static void cancel_join_vcpu_thread(pthread_t thread, uint32_t vcpu_id)
+{
+	void *retval;
+	int r;
+
+	r = pthread_cancel(thread);
+	TEST_ASSERT(r == 0,
+		    "pthread_cancel on vcpu_id=%d failed with errno=%d",
+		    vcpu_id, r);
+
+	r = pthread_join(thread, &retval);
+	TEST_ASSERT(r == 0,
+		    "pthread_join on vcpu_id=%d failed with errno=%d",
+		    vcpu_id, r);
+	TEST_ASSERT(retval == PTHREAD_CANCELED,
+		    "expected retval=%p, got %p", PTHREAD_CANCELED,
+		    retval);
+}
+
+int main(int argc, char *argv[])
+{
+	int r;
+	pthread_t threads[2];
+	struct thread_params params[2];
+	struct kvm_vm *vm;
+	struct kvm_run *run;
+	vm_vaddr_t hcall_page;
+	struct ucall uc;
+	int stage = 1;
+
+	vm = vm_create_default(SENDER_VCPU_ID, 0, sender_guest_code);
params[0].vm =3D vm; + params[1].vm =3D vm; + + /* Hypercall input/output */ + hcall_page =3D vm_vaddr_alloc_pages(vm, 2); + memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize()); + + vm_init_descriptor_tables(vm); + + vm_vcpu_add_default(vm, RECEIVER_VCPU_ID_1, receiver_code); + vcpu_init_descriptor_tables(vm, RECEIVER_VCPU_ID_1); + vcpu_args_set(vm, RECEIVER_VCPU_ID_1, 2, hcall_page, addr_gva2gpa(vm, hca= ll_page)); + vcpu_set_msr(vm, RECEIVER_VCPU_ID_1, HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_I= D_1); + vcpu_set_hv_cpuid(vm, RECEIVER_VCPU_ID_1); + + vm_vcpu_add_default(vm, RECEIVER_VCPU_ID_2, receiver_code); + vcpu_init_descriptor_tables(vm, RECEIVER_VCPU_ID_2); + vcpu_args_set(vm, RECEIVER_VCPU_ID_2, 2, hcall_page, addr_gva2gpa(vm, hca= ll_page)); + vcpu_set_msr(vm, RECEIVER_VCPU_ID_2, HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_I= D_2); + vcpu_set_hv_cpuid(vm, RECEIVER_VCPU_ID_2); + + vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler); + + vcpu_args_set(vm, SENDER_VCPU_ID, 2, hcall_page, addr_gva2gpa(vm, hcall_p= age)); + vcpu_set_hv_cpuid(vm, SENDER_VCPU_ID); + + params[0].vcpu_id =3D RECEIVER_VCPU_ID_1; + r =3D pthread_create(&threads[0], NULL, vcpu_thread, ¶ms[0]); + TEST_ASSERT(r =3D=3D 0, + "pthread_create halter failed errno=3D%d", errno); + + params[1].vcpu_id =3D RECEIVER_VCPU_ID_2; + r =3D pthread_create(&threads[1], NULL, vcpu_thread, ¶ms[1]); + TEST_ASSERT(r =3D=3D 0, + "pthread_create halter failed errno=3D%d", errno); + + run =3D vcpu_state(vm, SENDER_VCPU_ID); + + while (true) { + r =3D _vcpu_run(vm, SENDER_VCPU_ID); + TEST_ASSERT(!r, "vcpu_run failed: %d\n", r); + TEST_ASSERT(run->exit_reason =3D=3D KVM_EXIT_IO, + "unexpected exit reason: %u (%s)", + run->exit_reason, exit_reason_str(run->exit_reason)); + + switch (get_ucall(vm, SENDER_VCPU_ID, &uc)) { + case UCALL_SYNC: + TEST_ASSERT(uc.args[1] =3D=3D stage, + "Unexpected stage: %ld (%d expected)\n", + uc.args[1], stage); + break; + case UCALL_ABORT: + TEST_FAIL("%s at %s:%ld", (const 
char *)uc.args[0], + __FILE__, uc.args[1]); + return 1; + case UCALL_DONE: + return 0; + } + + stage++; + } + + cancel_join_vcpu_thread(threads[0], RECEIVER_VCPU_ID_1); + cancel_join_vcpu_thread(threads[1], RECEIVER_VCPU_ID_2); + kvm_vm_free(vm); + + return 0; +} --=20 2.35.3 From nobody Wed Apr 29 02:00:19 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E74CC433EF for ; Wed, 25 May 2022 09:07:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235587AbiEYJHs (ORCPT ); Wed, 25 May 2022 05:07:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48650 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242680AbiEYJF0 (ORCPT ); Wed, 25 May 2022 05:05:26 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 907E69346C for ; Wed, 25 May 2022 02:03:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1653469356; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ry63xFrwO6O2lt3NumR064ix6uPD5MMi+epdiiFne5w=; b=Z14VQoT4c+SvXFHzbNkYth54p9URrH+0lo7XH8pbu6wEA89gEv1MPDPcGKJMq8zhsW5fcy xgnYTidRHZBKHZcNYGlVuJJaa79m2IL7Yv3Tp/uF3WMjf7DBEywFCAobl6YHDYlxZ378EP m8ZLeIYDjXCF2EwgnHeCHoADmGnAZtI= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-191-GoX2hHSvPSqiz4xr-fkiHg-1; Wed, 25 May 2022 05:02:33 -0400 X-MC-Unique: 
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 27/37] KVM: selftests: Fill in vm->vpages_mapped bitmap in virt_map() too
Date: Wed, 25 May 2022 11:01:23 +0200
Message-Id: <20220525090133.1264239-28-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Similar to vm_vaddr_alloc(), virt_map() needs to reflect the mapping
in vm->vpages_mapped.
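The bookkeeping being added can be illustrated with a stand-alone sketch: a toy bitmap plays the role of the selftest's `vpages_mapped` sparsebit, and a `virt_map()`-style loop records every virtual page it maps. The names (`virt_map_sketch`, `set_vpage`) and the fixed size limit are hypothetical, not the selftest's actual API.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)
#define MAX_VPAGES 512	/* toy limit, stands in for the sparsebit */

/* one bit per virtual page, like vm->vpages_mapped */
static uint8_t vpages_mapped[MAX_VPAGES / 8];

static void set_vpage(uint64_t vpage)
{
	vpages_mapped[vpage / 8] |= 1u << (vpage % 8);
}

static int vpage_is_mapped(uint64_t vpage)
{
	return !!(vpages_mapped[vpage / 8] & (1u << (vpage % 8)));
}

/* Mirrors the patched virt_map() loop: record each page as it is mapped */
static void virt_map_sketch(uint64_t vaddr, unsigned int npages)
{
	while (npages--) {
		/* the real code would call virt_pg_map(vm, vaddr, paddr) here */
		set_vpage(vaddr >> PAGE_SHIFT);
		vaddr += PAGE_SIZE;
	}
}
```

Without this record, a later allocation could hand out a vaddr range that `virt_map()` had already mapped, which is exactly what the patch prevents.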
Signed-off-by: Vitaly Kuznetsov
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1665a220abcb..936be9c9f870 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1445,6 +1445,9 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		virt_pg_map(vm, vaddr, paddr);
 		vaddr += page_size;
 		paddr += page_size;
+
+		sparsebit_set(vm->vpages_mapped,
+			      vaddr >> vm->page_shift);
 	}
 }
 
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 28/37] KVM: selftests: Export vm_vaddr_unused_gap() to make it possible to request unmapped ranges
Date: Wed, 25 May 2022 11:01:24 +0200
Message-Id: <20220525090133.1264239-29-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Currently, tests can only request a new vaddr range by using
vm_vaddr_alloc()/vm_vaddr_alloc_page()/vm_vaddr_alloc_pages() but these
functions allocate and map physical pages too. Make it possible to
request an unmapped range too.
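As a rough model of what the exported helper does, the sketch below performs a first-fit scan for a run of consecutive unmapped virtual pages at or above a minimum page number, over a toy bitmap standing in for `vm->vpages_mapped`. The function name `find_unused_gap()` and the fixed limits are hypothetical; the real `vm_vaddr_unused_gap()` works on sparsebits and byte sizes.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_VPAGES 512

/* toy stand-in for vm->vpages_mapped: one bit per virtual page */
static uint8_t mapped[MAX_VPAGES / 8];

static int is_mapped(uint64_t vpage)
{
	return !!(mapped[vpage / 8] & (1u << (vpage % 8)));
}

static void mark_mapped(uint64_t vpage)
{
	mapped[vpage / 8] |= 1u << (vpage % 8);
}

/*
 * First-fit search: return the lowest vpage >= min_vpage starting a run
 * of npages unmapped pages, or UINT64_MAX when no such gap exists.
 */
static uint64_t find_unused_gap(uint64_t min_vpage, unsigned int npages)
{
	uint64_t start;
	unsigned int run;

	for (start = min_vpage; start + npages <= MAX_VPAGES; start++) {
		for (run = 0; run < npages; run++)
			if (is_mapped(start + run))
				break;
		if (run == npages)
			return start;
	}
	return UINT64_MAX;
}
```

Exporting such a search lets a test reserve a guest-virtual range without backing it with physical pages, which the TLB flush test later uses.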
Signed-off-by: Vitaly Kuznetsov
---
 tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
 tools/testing/selftests/kvm/lib/kvm_util.c          | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 92cef0ffb19e..8273fe93c4f6 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -169,6 +169,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
+vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 936be9c9f870..37df67780787 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1263,8 +1263,8 @@ void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid)
  * TEST_ASSERT failure occurs for invalid input or no area of at least
  * sz unallocated bytes >= vaddr_min is available.
  */
-static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
-				      vm_vaddr_t vaddr_min)
+vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
+			       vm_vaddr_t vaddr_min)
 {
 	uint64_t pages = (sz + vm->page_size - 1) >> vm->page_shift;
 
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 29/37] KVM: selftests: Export _vm_get_page_table_entry() and struct pageTableEntry/pageUpperEntry definitions
Date: Wed, 25 May 2022 11:01:25 +0200
Message-Id: <20220525090133.1264239-30-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Make it possible for tests to mangle the guest's page table entries.
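Outside the selftest harness, the bitfield approach can be sketched like this: a struct mirroring the patch's `pageTableEntry` layout is overlaid on a raw 64-bit x86-64 PTE via a union, so a test can flip, say, the writable bit in place. The `pte_bits`/`make_readonly` names are hypothetical, and bitfield ordering is implementation-defined; this relies on GCC/Clang little-endian allocation (LSB first), which is what the selftests themselves assume.

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the pageTableEntry layout exported by the patch */
struct pte_bits {
	uint64_t present:1;
	uint64_t writable:1;
	uint64_t user:1;
	uint64_t write_through:1;
	uint64_t cache_disable:1;
	uint64_t accessed:1;
	uint64_t dirty:1;
	uint64_t reserved_07:1;
	uint64_t global:1;
	uint64_t ignored_11_09:3;
	uint64_t pfn:40;
	uint64_t ignored_62_52:11;
	uint64_t execute_disable:1;
};

/* Overlay the bitfields on the raw 64-bit PTE value */
union pte {
	uint64_t raw;
	struct pte_bits e;
};

/* Toy "mangling" helper: clear the writable bit of a raw PTE */
static uint64_t make_readonly(uint64_t raw_pte)
{
	union pte p = { .raw = raw_pte };

	p.e.writable = 0;
	return p.raw;
}
```

This is the kind of in-place edit the TLB flush test performs through `_vm_get_page_table_entry()` when it swaps the PFNs of its two test pages.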
Signed-off-by: Vitaly Kuznetsov
---
 .../selftests/kvm/include/x86_64/processor.h | 34 ++++++++++++++++++
 .../selftests/kvm/lib/x86_64/processor.c     | 36 ++-----------------
 2 files changed, 36 insertions(+), 34 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 9ad7602a257b..046807b8ea4f 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -441,6 +441,40 @@ void vcpu_init_descriptor_tables(struct kvm_vm *vm, uint32_t vcpuid);
 void vm_install_exception_handler(struct kvm_vm *vm, int vector,
 			void (*handler)(struct ex_regs *));
 
+/* Virtual translation table structure declarations */
+struct pageUpperEntry {
+	uint64_t present:1;
+	uint64_t writable:1;
+	uint64_t user:1;
+	uint64_t write_through:1;
+	uint64_t cache_disable:1;
+	uint64_t accessed:1;
+	uint64_t ignored_06:1;
+	uint64_t page_size:1;
+	uint64_t ignored_11_08:4;
+	uint64_t pfn:40;
+	uint64_t ignored_62_52:11;
+	uint64_t execute_disable:1;
+};
+
+struct pageTableEntry {
+	uint64_t present:1;
+	uint64_t writable:1;
+	uint64_t user:1;
+	uint64_t write_through:1;
+	uint64_t cache_disable:1;
+	uint64_t accessed:1;
+	uint64_t dirty:1;
+	uint64_t reserved_07:1;
+	uint64_t global:1;
+	uint64_t ignored_11_09:3;
+	uint64_t pfn:40;
+	uint64_t ignored_62_52:11;
+	uint64_t execute_disable:1;
+};
+
+struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
+						uint64_t vaddr);
 uint64_t vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr);
 void vm_set_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr,
 			     uint64_t pte);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..f8090b521357 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -19,38 +19,6 @@
 
 vm_vaddr_t exception_handlers;
 
-/* Virtual translation table structure declarations */
-struct pageUpperEntry {
-	uint64_t present:1;
-	uint64_t writable:1;
-	uint64_t user:1;
-	uint64_t write_through:1;
-	uint64_t cache_disable:1;
-	uint64_t accessed:1;
-	uint64_t ignored_06:1;
-	uint64_t page_size:1;
-	uint64_t ignored_11_08:4;
-	uint64_t pfn:40;
-	uint64_t ignored_62_52:11;
-	uint64_t execute_disable:1;
-};
-
-struct pageTableEntry {
-	uint64_t present:1;
-	uint64_t writable:1;
-	uint64_t user:1;
-	uint64_t write_through:1;
-	uint64_t cache_disable:1;
-	uint64_t accessed:1;
-	uint64_t dirty:1;
-	uint64_t reserved_07:1;
-	uint64_t global:1;
-	uint64_t ignored_11_09:3;
-	uint64_t pfn:40;
-	uint64_t ignored_62_52:11;
-	uint64_t execute_disable:1;
-};
-
 void regs_dump(FILE *stream, struct kvm_regs *regs,
	       uint8_t indent)
 {
@@ -282,8 +250,8 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
 }
 
-static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
-						       uint64_t vaddr)
+struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
+						uint64_t vaddr)
 {
 	uint16_t index[4];
 	struct pageUpperEntry *pml4e, *pdpe, *pde;
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 30/37] KVM: selftests: Hyper-V PV TLB flush selftest
Date: Wed, 25 May 2022 11:01:26 +0200
Message-Id: <20220525090133.1264239-31-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Introduce a selftest for Hyper-V PV TLB flush hypercalls
(HvFlushVirtualAddressSpace/HvFlushVirtualAddressSpaceEx,
HvFlushVirtualAddressList/HvFlushVirtualAddressListEx).

The test creates one 'sender' vCPU and two 'worker' vCPUs which do busy
loops reading from a certain GVA, checking the observed value. The sender
vCPU drops to the host to swap the data page with another page filled
with a different value. The expectation for the workers is also altered.
Without a TLB flush on the worker vCPUs, they may continue to observe the
old value. To guard against accidental TLB flushes on the worker vCPUs,
the test is repeated 100 times.

Hyper-V TLB flush hypercalls are tested in both 'normal' and 'XMM fast'
modes.

Signed-off-by: Vitaly Kuznetsov
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/hyperv.h     |   1 +
 .../selftests/kvm/x86_64/hyperv_tlb_flush.c   | 663 ++++++++++++++++++
 4 files changed, 666 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 103faed95771..c8da539ea4a6 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -25,6 +25,7 @@
 /x86_64/hyperv_features
 /x86_64/hyperv_ipi
 /x86_64/hyperv_svm_test
+/x86_64/hyperv_tlb_flush
 /x86_64/max_vcpuid_cap_test
 /x86_64/mmio_warning_test
 /x86_64/mmu_role_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index d504b177b510..5a96649fcc24 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -54,6 +54,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_features
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_ipi
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_svm_test
+TEST_GEN_PROGS_x86_64 += x86_64/hyperv_tlb_flush
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index 1b467626be58..c302027fa6d5 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -187,6 +187,7 @@
 /* hypercall options */
 #define HV_HYPERCALL_FAST_BIT		BIT(16)
 #define HV_HYPERCALL_VARHEAD_OFFSET	17
+#define HV_HYPERCALL_REP_COMP_OFFSET	32
 
 static inline u64 hyperv_hypercall(u64 control, vm_vaddr_t input_address,
				   vm_vaddr_t output_address)
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
new file mode 100644
index 000000000000..7d7392341988
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
@@ -0,0 +1,663 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hyper-V HvFlushVirtualAddress{List,Space}{,Ex} tests
+ *
+ * Copyright (C) 2022, Red Hat, Inc.
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <pthread.h>
+#include <inttypes.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "hyperv.h"
+#include "test_util.h"
+#include "vmx.h"
+
+#define SENDER_VCPU_ID   1
+#define WORKER_VCPU_ID_1 2
+#define WORKER_VCPU_ID_2 65
+
+#define NTRY 100
+#define NTEST_PAGES 2
+
+#define PAGE_SIZE 4096
+#define PAGE_MASK (~(PAGE_SIZE - 1))
+
+struct thread_params {
+	struct kvm_vm *vm;
+	uint32_t vcpu_id;
+};
+
+struct hv_vpset {
+	u64 format;
+	u64 valid_bank_mask;
+	u64 bank_contents[];
+};
+
+enum HV_GENERIC_SET_FORMAT {
+	HV_GENERIC_SET_SPARSE_4K,
+	HV_GENERIC_SET_ALL,
+};
+
+#define HV_FLUSH_ALL_PROCESSORS			BIT(0)
+#define HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES	BIT(1)
+#define HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY	BIT(2)
+#define HV_FLUSH_USE_EXTENDED_RANGE_FORMAT	BIT(3)
+
+/* HvFlushVirtualAddressSpace, HvFlushVirtualAddressList hypercalls */
+struct hv_tlb_flush {
+	u64 address_space;
+	u64 flags;
+	u64 processor_mask;
+	u64 gva_list[];
+} __packed;
+
+/* HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressListEx hypercalls */
+struct hv_tlb_flush_ex {
+	u64 address_space;
+	u64 flags;
+	struct hv_vpset hv_vp_set;
+	u64 gva_list[];
+} __packed;
+
+/*
+ * Pass the following info to 'workers' and 'sender':
+ * - Hypercall page's GVA
+ * - Hypercall page's GPA
+ * - Test pages GVA
+ * - GVAs of the test pages' PTEs
+ */
+struct test_data {
+	vm_vaddr_t hcall_gva;
+	vm_paddr_t hcall_gpa;
+	vm_vaddr_t test_pages;
+	vm_vaddr_t test_pages_pte[NTEST_PAGES];
+};
+
+/* 'Worker' vCPU code checking the contents of the test page */
+static void worker_guest_code(vm_vaddr_t test_data)
+{
+	struct test_data *data = (struct test_data *)test_data;
+	u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+	unsigned char chr_exp1, chr_exp2, chr_cur;
+
+	x2apic_enable();
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+
+	for (;;) {
+		/*
+		 * Read the expected char, then check what's in the test
+		 * pages and then check the expectation again to make sure
+		 * it wasn't updated in the meantime.
+		 */
+		chr_exp1 = READ_ONCE(*(unsigned char *)
+				     (data->test_pages + PAGE_SIZE * NTEST_PAGES + vcpu_id));
+		asm volatile("lfence");
+		chr_cur = *(unsigned char *)data->test_pages;
+		asm volatile("lfence");
+		chr_exp2 = READ_ONCE(*(unsigned char *)
+				     (data->test_pages + PAGE_SIZE * NTEST_PAGES + vcpu_id));
+		if (chr_exp1 && chr_exp1 == chr_exp2)
+			GUEST_ASSERT(chr_cur == chr_exp1);
+		asm volatile("nop");
+	}
+}
+
+/*
+ * Write per-CPU info indicating what each 'worker' CPU is supposed to see in
+ * the test page. '0' means don't check.
+ */
+static void set_expected_char(void *addr, unsigned char chr, int vcpu_id)
+{
+	asm volatile("mfence");
+	*(unsigned char *)(addr + NTEST_PAGES * PAGE_SIZE + vcpu_id) = chr;
+}
+
+/* Update PTEs swapping two test pages */
+static void swap_two_test_pages(vm_paddr_t pte_gva1, vm_paddr_t pte_gva2)
+{
+	uint64_t pte[2];
+
+	pte[0] = *(uint64_t *)pte_gva1;
+	pte[1] = *(uint64_t *)pte_gva2;
+
+	*(uint64_t *)pte_gva1 = pte[1];
+	*(uint64_t *)pte_gva2 = pte[0];
+}
+
+/* Delay */
+static inline void rep_nop(void)
+{
+	int i;
+
+	for (i = 0; i < 1000000; i++)
+		asm volatile("nop");
+}
+
+/*
+ * Prepare to test: 'disable' workers by setting the expectation to '0',
+ * clear the hypercall input page and then swap the two test pages.
+ */
+static inline void prepare_to_test(struct test_data *data)
+{
+	/* Clear hypercall input page */
+	memset((void *)data->hcall_gva, 0, PAGE_SIZE);
+
+	/* 'Disable' workers */
+	set_expected_char((void *)data->test_pages, 0x0, WORKER_VCPU_ID_1);
+	set_expected_char((void *)data->test_pages, 0x0, WORKER_VCPU_ID_2);
+
+	/* Make sure workers have enough time to notice */
+	asm volatile("mfence");
+	rep_nop();
+
+	/* Swap test page mappings */
+	swap_two_test_pages(data->test_pages_pte[0], data->test_pages_pte[1]);
+}
+
+/*
+ * Finalize the test: check the hypercall result, set the expected char for
+ * 'worker' CPUs and give them some time to test.
+ */
+static inline void post_test(struct test_data *data, u64 res,
+			     char exp_char1, char exp_char2)
+{
+	/* Check hypercall return code */
+	GUEST_ASSERT((res & 0xffff) == 0);
+
+	/* Set the expectation for workers, '0' means don't test */
+	set_expected_char((void *)data->test_pages, exp_char1, WORKER_VCPU_ID_1);
+	set_expected_char((void *)data->test_pages, exp_char2, WORKER_VCPU_ID_2);
+
+	/* Make sure workers have enough time to test */
+	asm volatile("mfence");
+	rep_nop();
+}
+
+/* Main vCPU doing the test */
+static void sender_guest_code(vm_vaddr_t test_data)
+{
+	struct test_data *data = (struct test_data *)test_data;
+	struct hv_tlb_flush *flush = (struct hv_tlb_flush *)data->hcall_gva;
+	struct hv_tlb_flush_ex *flush_ex = (struct hv_tlb_flush_ex *)data->hcall_gva;
+	vm_paddr_t hcall_gpa = data->hcall_gpa;
+	u64 res;
+	int i, stage = 1;
+
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, data->hcall_gpa);
+
+	/* "Slow" hypercalls */
+
+	GUEST_SYNC(stage++);
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for WORKER_VCPU_ID_1 */
+	for (i = 0; i < NTRY; i++) {
+		prepare_to_test(data);
+		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+		res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE, hcall_gpa,
+				       hcall_gpa + PAGE_SIZE);
+		post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
+	}
+
+	GUEST_SYNC(stage++);
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for WORKER_VCPU_ID_1 */
+	for (i = 0; i < NTRY; i++) {
+		prepare_to_test(data);
+		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+		flush->gva_list[0] = (u64)data->test_pages;
+		res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
+				       (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				       hcall_gpa, hcall_gpa + PAGE_SIZE);
+		post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0);
+	}
+
+	GUEST_SYNC(stage++);
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for HV_FLUSH_ALL_PROCESSORS */
+	for (i = 0; i < NTRY; i++) {
+		prepare_to_test(data);
+		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS;
+		flush->processor_mask = 0;
+		res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE, hcall_gpa,
+				       hcall_gpa + PAGE_SIZE);
+		post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+	}
+
+	GUEST_SYNC(stage++);
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for HV_FLUSH_ALL_PROCESSORS */
+	for (i = 0; i < NTRY; i++) {
+		prepare_to_test(data);
+		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS;
+		flush->gva_list[0] = (u64)data->test_pages;
+		res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
+				       (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				       hcall_gpa, hcall_gpa + PAGE_SIZE);
+		post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2);
+	}
+
+	GUEST_SYNC(stage++);
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for WORKER_VCPU_ID_2 */
+	for (i = 0; i < NTRY; i++) {
+		prepare_to_test(data);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+				       (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+				       hcall_gpa, hcall_gpa + PAGE_SIZE);
+		post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
+	}
+
+	GUEST_SYNC(stage++);
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for WORKER_VCPU_ID_2 */
+	for (i = 0; i < NTRY; i++) {
+		prepare_to_test(data);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		/* bank_contents and gva_list occupy the same space, thus [1] */
+		flush_ex->gva_list[1] = (u64)data->test_pages;
+		res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+				       (1 << HV_HYPERCALL_VARHEAD_OFFSET) |
+				       (1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				       hcall_gpa, hcall_gpa + PAGE_SIZE);
+		post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2);
+	}
+
+	GUEST_SYNC(stage++);
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for both vCPUs */
+	for (i = 0; i < NTRY; i++) {
+		prepare_to_test(data);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64) |
+			BIT_ULL(WORKER_VCPU_ID_1 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+		flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		res = hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+				       (2 << HV_HYPERCALL_VARHEAD_OFFSET),
+				       hcall_gpa, hcall_gpa + PAGE_SIZE);
+		post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ?
0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for both vCPUs */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->flags =3D HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES; + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_SPARSE_4K; + flush_ex->hv_vp_set.valid_bank_mask =3D BIT_ULL(WORKER_VCPU_ID_1 / 64) | + BIT_ULL(WORKER_VCPU_ID_2 / 64); + flush_ex->hv_vp_set.bank_contents[0] =3D BIT_ULL(WORKER_VCPU_ID_1 % 64); + flush_ex->hv_vp_set.bank_contents[1] =3D BIT_ULL(WORKER_VCPU_ID_2 % 64); + /* bank_contents and gva_list occupy the same space, thus [2] */ + flush_ex->gva_list[2] =3D (u64)data->test_pages; + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX | + (2 << HV_HYPERCALL_VARHEAD_OFFSET) | + (1UL << HV_HYPERCALL_REP_COMP_OFFSET), + hcall_gpa, hcall_gpa + PAGE_SIZE); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for HV_GENERIC_SET_ALL */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->flags =3D HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES; + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_ALL; + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX, + hcall_gpa, hcall_gpa + PAGE_SIZE); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for HV_GENERIC_SET_ALL */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->flags =3D HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES; + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_ALL; + flush_ex->gva_list[0] =3D (u64)data->test_pages; + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX | + (1UL << HV_HYPERCALL_REP_COMP_OFFSET), + hcall_gpa, hcall_gpa + PAGE_SIZE); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 
0x1 : 0x2); + } + + /* "Fast" hypercalls */ + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for WORKER_VCPU_ID_1 */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush->processor_mask =3D BIT(WORKER_VCPU_ID_1); + hyperv_write_xmm_input(&flush->processor_mask, 1); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | + HV_HYPERCALL_FAST_BIT, 0x0, + HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES); + post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for WORKER_VCPU_ID_1 */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush->processor_mask =3D BIT(WORKER_VCPU_ID_1); + flush->gva_list[0] =3D (u64)data->test_pages; + hyperv_write_xmm_input(&flush->processor_mask, 1); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST | + HV_HYPERCALL_FAST_BIT | + (1UL << HV_HYPERCALL_REP_COMP_OFFSET), + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES); + post_test(data, res, i % 2 ? 0x1 : 0x2, 0x0); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for HV_FLUSH_ALL_PROCESSORS */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + hyperv_write_xmm_input(&flush->processor_mask, 1); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | + HV_HYPERCALL_FAST_BIT, 0x0, + HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | + HV_FLUSH_ALL_PROCESSORS); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for HV_FLUSH_ALL_PROCESSORS */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush->gva_list[0] =3D (u64)data->test_pages; + hyperv_write_xmm_input(&flush->processor_mask, 1); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST | + HV_HYPERCALL_FAST_BIT | + (1UL << HV_HYPERCALL_REP_COMP_OFFSET), 0x0, + HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | + HV_FLUSH_ALL_PROCESSORS); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 
0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for WORKER_VCPU_ID_2 */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_SPARSE_4K; + flush_ex->hv_vp_set.valid_bank_mask =3D BIT_ULL(WORKER_VCPU_ID_2 / 64); + flush_ex->hv_vp_set.bank_contents[0] =3D BIT_ULL(WORKER_VCPU_ID_2 % 64); + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX | + HV_HYPERCALL_FAST_BIT | + (1 << HV_HYPERCALL_VARHEAD_OFFSET), + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES); + post_test(data, res, 0x0, i % 2 ? 0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for WORKER_VCPU_ID_2 */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_SPARSE_4K; + flush_ex->hv_vp_set.valid_bank_mask =3D BIT_ULL(WORKER_VCPU_ID_2 / 64); + flush_ex->hv_vp_set.bank_contents[0] =3D BIT_ULL(WORKER_VCPU_ID_2 % 64); + /* bank_contents and gva_list occupy the same space, thus [1] */ + flush_ex->gva_list[1] =3D (u64)data->test_pages; + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX | + HV_HYPERCALL_FAST_BIT | + (1 << HV_HYPERCALL_VARHEAD_OFFSET) | + (1UL << HV_HYPERCALL_REP_COMP_OFFSET), + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES); + post_test(data, res, 0x0, i % 2 ? 
0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for both vCPUs */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_SPARSE_4K; + flush_ex->hv_vp_set.valid_bank_mask =3D BIT_ULL(WORKER_VCPU_ID_2 / 64) | + BIT_ULL(WORKER_VCPU_ID_1 / 64); + flush_ex->hv_vp_set.bank_contents[0] =3D BIT_ULL(WORKER_VCPU_ID_1 % 64); + flush_ex->hv_vp_set.bank_contents[1] =3D BIT_ULL(WORKER_VCPU_ID_2 % 64); + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX | + HV_HYPERCALL_FAST_BIT | + (2 << HV_HYPERCALL_VARHEAD_OFFSET), + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for both vCPUs */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_SPARSE_4K; + flush_ex->hv_vp_set.valid_bank_mask =3D BIT_ULL(WORKER_VCPU_ID_1 / 64) | + BIT_ULL(WORKER_VCPU_ID_2 / 64); + flush_ex->hv_vp_set.bank_contents[0] =3D BIT_ULL(WORKER_VCPU_ID_1 % 64); + flush_ex->hv_vp_set.bank_contents[1] =3D BIT_ULL(WORKER_VCPU_ID_2 % 64); + /* bank_contents and gva_list occupy the same space, thus [2] */ + flush_ex->gva_list[2] =3D (u64)data->test_pages; + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX | + HV_HYPERCALL_FAST_BIT | + (2 << HV_HYPERCALL_VARHEAD_OFFSET) | + (1UL << HV_HYPERCALL_REP_COMP_OFFSET), + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 
0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for HV_GENERIC_SET_ALL */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->flags =3D HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES; + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_ALL; + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX | + HV_HYPERCALL_FAST_BIT, + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 0x1 : 0x2); + } + + GUEST_SYNC(stage++); + + /* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for HV_GENERIC_SET_ALL */ + for (i =3D 0; i < NTRY; i++) { + prepare_to_test(data); + flush_ex->flags =3D HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES; + flush_ex->hv_vp_set.format =3D HV_GENERIC_SET_ALL; + flush_ex->gva_list[0] =3D (u64)data->test_pages; + hyperv_write_xmm_input(&flush_ex->hv_vp_set, 2); + res =3D hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX | + HV_HYPERCALL_FAST_BIT | + (1UL << HV_HYPERCALL_REP_COMP_OFFSET), + 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES); + post_test(data, res, i % 2 ? 0x1 : 0x2, i % 2 ? 
0x1 : 0x2); + } + + GUEST_DONE(); +} + +static void *vcpu_thread(void *arg) +{ + struct thread_params *params =3D (struct thread_params *)arg; + struct ucall uc; + int old; + int r; + unsigned int exit_reason; + + r =3D pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old); + TEST_ASSERT(r =3D=3D 0, + "pthread_setcanceltype failed on vcpu_id=3D%u with errno=3D%d", + params->vcpu_id, r); + + vcpu_run(params->vm, params->vcpu_id); + exit_reason =3D vcpu_state(params->vm, params->vcpu_id)->exit_reason; + + TEST_ASSERT(exit_reason =3D=3D KVM_EXIT_IO, + "vCPU %u exited with unexpected exit reason %u-%s, expected KVM_EXIT= _IO", + params->vcpu_id, exit_reason, exit_reason_str(exit_reason)); + + if (get_ucall(params->vm, params->vcpu_id, &uc) =3D=3D UCALL_ABORT) { + TEST_ASSERT(false, + "vCPU %u exited with error: %s.\n", + params->vcpu_id, (const char *)uc.args[0]); + } + + return NULL; +} + +static void cancel_join_vcpu_thread(pthread_t thread, uint32_t vcpu_id) +{ + void *retval; + int r; + + r =3D pthread_cancel(thread); + TEST_ASSERT(r =3D=3D 0, + "pthread_cancel on vcpu_id=3D%d failed with errno=3D%d", + vcpu_id, r); + + r =3D pthread_join(thread, &retval); + TEST_ASSERT(r =3D=3D 0, + "pthread_join on vcpu_id=3D%d failed with errno=3D%d", + vcpu_id, r); + TEST_ASSERT(retval =3D=3D PTHREAD_CANCELED, + "expected retval=3D%p, got %p", PTHREAD_CANCELED, + retval); +} + +int main(int argc, char *argv[]) +{ + pthread_t threads[2]; + struct thread_params params[2]; + struct kvm_vm *vm; + struct kvm_run *run; + vm_vaddr_t test_data_page, gva; + vm_paddr_t gpa; + struct pageTableEntry *pte; + struct test_data *data; + struct ucall uc; + int stage =3D 1, r, i; + + vm =3D vm_create_default(SENDER_VCPU_ID, 0, sender_guest_code); + params[0].vm =3D vm; + params[1].vm =3D vm; + + /* Test data page */ + test_data_page =3D vm_vaddr_alloc_page(vm); + data =3D (struct test_data *)addr_gva2hva(vm, test_data_page); + + /* Hypercall input/output */ + data->hcall_gva =3D 
vm_vaddr_alloc_pages(vm, 2);
+	data->hcall_gpa = addr_gva2gpa(vm, data->hcall_gva);
+	memset(addr_gva2hva(vm, data->hcall_gva), 0x0, 2 * PAGE_SIZE);
+
+	/*
+	 * Test pages: the first one is filled with '0x1's, the second with
+	 * '0x2's and the test will swap their mappings. The third page keeps
+	 * the indication about the current state of mappings.
+	 */
+	data->test_pages = vm_vaddr_alloc_pages(vm, NTEST_PAGES + 1);
+	for (i = 0; i < NTEST_PAGES; i++)
+		memset(addr_gva2hva(vm, data->test_pages + PAGE_SIZE * i),
+		       (char)(i + 1), PAGE_SIZE);
+	set_expected_char(addr_gva2hva(vm, data->test_pages), 0x0, WORKER_VCPU_ID_1);
+	set_expected_char(addr_gva2hva(vm, data->test_pages), 0x0, WORKER_VCPU_ID_2);
+
+	/*
+	 * Get PTE pointers for test pages and map them inside the guest.
+	 * Use separate page for each PTE for simplicity.
+	 */
+	gva = vm_vaddr_unused_gap(vm, NTEST_PAGES * PAGE_SIZE, KVM_UTIL_MIN_VADDR);
+	for (i = 0; i < NTEST_PAGES; i++) {
+		pte = _vm_get_page_table_entry(vm, SENDER_VCPU_ID,
+					       data->test_pages + i * PAGE_SIZE);
+		gpa = addr_hva2gpa(vm, pte);
+		__virt_pg_map(vm, gva + PAGE_SIZE * i, gpa & PAGE_MASK, X86_PAGE_SIZE_4K);
+		data->test_pages_pte[i] = gva + (gpa & ~PAGE_MASK);
+	}
+
+	/*
+	 * Sender vCPU which performs the test: swaps test pages, sets
+	 * expectation for 'workers' and issues TLB flush hypercalls.
+	 */
+	vcpu_args_set(vm, SENDER_VCPU_ID, 1, test_data_page);
+	vcpu_set_hv_cpuid(vm, SENDER_VCPU_ID);
+
+	/* Create worker vCPUs which check the contents of the test pages */
+	vm_vcpu_add_default(vm, WORKER_VCPU_ID_1, worker_guest_code);
+	vcpu_args_set(vm, WORKER_VCPU_ID_1, 1, test_data_page);
+	vcpu_set_msr(vm, WORKER_VCPU_ID_1, HV_X64_MSR_VP_INDEX, WORKER_VCPU_ID_1);
+	vcpu_set_hv_cpuid(vm, WORKER_VCPU_ID_1);
+
+	vm_vcpu_add_default(vm, WORKER_VCPU_ID_2, worker_guest_code);
+	vcpu_args_set(vm, WORKER_VCPU_ID_2, 1, test_data_page);
+	vcpu_set_msr(vm, WORKER_VCPU_ID_2, HV_X64_MSR_VP_INDEX, WORKER_VCPU_ID_2);
+	vcpu_set_hv_cpuid(vm, WORKER_VCPU_ID_2);
+
+	params[0].vcpu_id = WORKER_VCPU_ID_1;
+	r = pthread_create(&threads[0], NULL, vcpu_thread, &params[0]);
+	TEST_ASSERT(r == 0,
+		    "pthread_create failed errno=%d", errno);
+
+	params[1].vcpu_id = WORKER_VCPU_ID_2;
+	r = pthread_create(&threads[1], NULL, vcpu_thread, &params[1]);
+	TEST_ASSERT(r == 0,
+		    "pthread_create failed errno=%d", errno);
+
+	run = vcpu_state(vm, SENDER_VCPU_ID);
+
+	while (true) {
+		r = _vcpu_run(vm, SENDER_VCPU_ID);
+		TEST_ASSERT(!r, "vcpu_run failed: %d\n", r);
+		TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+			    "unexpected exit reason: %u (%s)",
+			    run->exit_reason, exit_reason_str(run->exit_reason));
+
+		switch (get_ucall(vm, SENDER_VCPU_ID, &uc)) {
+		case UCALL_SYNC:
+			TEST_ASSERT(uc.args[1] == stage,
+				    "Unexpected stage: %ld (%d expected)\n",
+				    uc.args[1], stage);
+			break;
+		case UCALL_ABORT:
+			TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
+				  __FILE__, uc.args[1]);
+			return 1;
+		case UCALL_DONE:
+			goto done;
+		}
+
+		stage++;
+	}
+
+done:
+	cancel_join_vcpu_thread(threads[0], WORKER_VCPU_ID_1);
+	cancel_join_vcpu_thread(threads[1], WORKER_VCPU_ID_2);
+	kvm_vm_free(vm);
+
+	return 0;
+}
-- 
2.35.3

From nobody Wed Apr 29 02:00:19 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 31/37] KVM: selftests: Sync 'struct hv_enlightened_vmcs' definition with hyperv-tlfs.h
Date: Wed, 25 May 2022 11:01:27 +0200
Message-Id: <20220525090133.1264239-32-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

The 'struct hv_enlightened_vmcs' definition in selftests is not '__packed',
so we rely on the compiler doing the right padding. This is not obvious, so
it seems beneficial to use the same definition as in the kernel.
Reviewed-by: Maxim Levitsky Signed-off-by: Vitaly Kuznetsov --- tools/testing/selftests/kvm/include/x86_64/evmcs.h | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/tes= ting/selftests/kvm/include/x86_64/evmcs.h index cc5d14a45702..b6067b555110 100644 --- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h +++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h @@ -41,6 +41,8 @@ struct hv_enlightened_vmcs { u16 host_gs_selector; u16 host_tr_selector; =20 + u16 padding16_1; + u64 host_ia32_pat; u64 host_ia32_efer; =20 @@ -159,7 +161,7 @@ struct hv_enlightened_vmcs { u64 ept_pointer; =20 u16 virtual_processor_id; - u16 padding16[3]; + u16 padding16_2[3]; =20 u64 padding64_2[5]; u64 guest_physical_address; @@ -195,15 +197,15 @@ struct hv_enlightened_vmcs { u64 guest_rip; =20 u32 hv_clean_fields; - u32 hv_padding_32; + u32 padding32_1; u32 hv_synthetic_controls; struct { u32 nested_flush_hypercall:1; u32 msr_bitmap:1; u32 reserved:30; - } hv_enlightenments_control; + } __packed hv_enlightenments_control; u32 hv_vp_id; - + u32 padding32_2; u64 hv_vm_id; u64 partition_assist_page; u64 padding64_4[4]; @@ -211,7 +213,7 @@ struct hv_enlightened_vmcs { u64 padding64_5[7]; u64 xss_exit_bitmap; u64 padding64_6[7]; -}; +} __packed; =20 #define HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE 0 #define HV_VMX_ENLIGHTENED_CLEAN_FIELD_IO_BITMAP BIT(0) --=20 2.35.3 From nobody Wed Apr 29 02:00:19 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36D82C433EF for ; Wed, 25 May 2022 09:13:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231698AbiEYJNT (ORCPT ); Wed, 25 May 2022 05:13:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57488 "EHLO lindbergh.monkeyblade.net" 
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 32/37] KVM: selftests: nVMX: Allocate Hyper-V partition assist page
Date: Wed, 25 May 2022 11:01:28 +0200
Message-Id: <20220525090133.1264239-33-vkuznets@redhat.com> In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com> References: <20220525090133.1264239-1-vkuznets@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.84 on 10.11.54.1 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" In preparation to testing Hyper-V L2 TLB flush hypercalls, allocate so-called Partition assist page and link it to 'struct vmx_pages'. Reviewed-by: Maxim Levitsky Signed-off-by: Vitaly Kuznetsov --- tools/testing/selftests/kvm/include/x86_64/vmx.h | 4 ++++ tools/testing/selftests/kvm/lib/x86_64/vmx.c | 7 +++++++ 2 files changed, 11 insertions(+) diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testi= ng/selftests/kvm/include/x86_64/vmx.h index 583ceb0d1457..f99922ca8259 100644 --- a/tools/testing/selftests/kvm/include/x86_64/vmx.h +++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h @@ -567,6 +567,10 @@ struct vmx_pages { uint64_t enlightened_vmcs_gpa; void *enlightened_vmcs; =20 + void *partition_assist_hva; + uint64_t partition_assist_gpa; + void *partition_assist; + void *eptp_hva; uint64_t eptp_gpa; void *eptp; diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/s= elftests/kvm/lib/x86_64/vmx.c index d089d8b850b5..3db21e0e1a8f 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -124,6 +124,13 @@ vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gv= a) vmx->enlightened_vmcs_gpa =3D addr_gva2gpa(vm, (uintptr_t)vmx->enlightened_vmcs); =20 + /* Setup of a region of guest memory for the partition assist page. 
*/ + vmx->partition_assist =3D (void *)vm_vaddr_alloc_page(vm); + vmx->partition_assist_hva =3D + addr_gva2hva(vm, (uintptr_t)vmx->partition_assist); + vmx->partition_assist_gpa =3D + addr_gva2gpa(vm, (uintptr_t)vmx->partition_assist); + *p_vmx_gva =3D vmx_gva; return vmx; } --=20 2.35.3 From nobody Wed Apr 29 02:00:19 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CBD5C433F5 for ; Wed, 25 May 2022 09:06:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231134AbiEYJGV (ORCPT ); Wed, 25 May 2022 05:06:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49536 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236350AbiEYJFT (ORCPT ); Wed, 25 May 2022 05:05:19 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 6E66B8FFB5 for ; Wed, 25 May 2022 02:03:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1653469368; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=yw6Y/YJaz6MWDgAzhUNIURhMRuLfI7K2vVFM83d4yck=; b=H8H/j5a0Bt5ZxXqrOPC8XgcDCxo0sAtSF+l/pJszOH9PREx8mToFknbM77WLAPkIuVhz4C WB4Rt67xAY5stQUvg6dNHh2IrnNzHGsuFW2xnNQia4I5wQGeRU9+i9sQdRT4U9yxqqRu7e 6kIV1PXNTAEZuZV6asJUjmZYL0MoWfs= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-498-MOFx0w6nNjuppNV8fMaVIQ-1; Wed, 25 May 2022 05:02:45 -0400 
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 33/37] KVM: selftests: nSVM: Allocate Hyper-V partition assist and VP assist pages
Date: Wed, 25 May 2022 11:01:29 +0200
Message-Id: <20220525090133.1264239-34-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

In preparation for testing Hyper-V L2 TLB flush hypercalls, allocate VP
assist and Partition assist pages and link them to 'struct svm_test_data'.
Reviewed-by: Maxim Levitsky Signed-off-by: Vitaly Kuznetsov --- tools/testing/selftests/kvm/include/x86_64/svm_util.h | 10 ++++++++++ tools/testing/selftests/kvm/lib/x86_64/svm.c | 10 ++++++++++ 2 files changed, 20 insertions(+) diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/= testing/selftests/kvm/include/x86_64/svm_util.h index a25aabd8f5e7..640859b58fd6 100644 --- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h @@ -34,6 +34,16 @@ struct svm_test_data { void *msr; /* gva */ void *msr_hva; uint64_t msr_gpa; + + /* Hyper-V VP assist page */ + void *vp_assist; /* gva */ + void *vp_assist_hva; + uint64_t vp_assist_gpa; + + /* Hyper-V Partition assist page */ + void *partition_assist; /* gva */ + void *partition_assist_hva; + uint64_t partition_assist_gpa; }; =20 struct svm_test_data *vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_= gva); diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/s= elftests/kvm/lib/x86_64/svm.c index 736ee4a23df6..c284e8f87f5c 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/svm.c +++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c @@ -48,6 +48,16 @@ vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva) svm->msr_gpa =3D addr_gva2gpa(vm, (uintptr_t)svm->msr); memset(svm->msr_hva, 0, getpagesize()); =20 + svm->vp_assist =3D (void *)vm_vaddr_alloc_page(vm); + svm->vp_assist_hva =3D addr_gva2hva(vm, (uintptr_t)svm->vp_assist); + svm->vp_assist_gpa =3D addr_gva2gpa(vm, (uintptr_t)svm->vp_assist); + memset(svm->vp_assist_hva, 0, getpagesize()); + + svm->partition_assist =3D (void *)vm_vaddr_alloc_page(vm); + svm->partition_assist_hva =3D addr_gva2hva(vm, (uintptr_t)svm->partition_= assist); + svm->partition_assist_gpa =3D addr_gva2gpa(vm, (uintptr_t)svm->partition_= assist); + memset(svm->partition_assist_hva, 0, getpagesize()); + *p_svm_gva =3D svm_gva; return svm; } --=20 2.35.3 From nobody Wed Apr 29 02:00:19 2026 
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 34/37] KVM: selftests: Sync 'struct hv_vp_assist_page' definition with hyperv-tlfs.h
Date: Wed, 25 May 2022 11:01:30 +0200
Message-Id: <20220525090133.1264239-35-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

The 'struct hv_vp_assist_page' definition doesn't match TLFS. Also, define
'struct hv_nested_enlightenments_control' and use it instead of an opaque
'__u64'.

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 .../selftests/kvm/include/x86_64/evmcs.h | 22 ++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index b6067b555110..9c965ba73dec 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -20,14 +20,26 @@
 
 extern bool enable_evmcs;
 
+struct hv_nested_enlightenments_control {
+	struct {
+		__u32 directhypercall:1;
+		__u32 reserved:31;
+	} features;
+	struct {
+		__u32 reserved;
+	} hypercallControls;
+} __packed;
+
+/* Define virtual processor assist page structure. */
 struct hv_vp_assist_page {
 	__u32 apic_assist;
-	__u32 reserved;
-	__u64 vtl_control[2];
-	__u64 nested_enlightenments_control[2];
-	__u32 enlighten_vmentry;
+	__u32 reserved1;
+	__u64 vtl_control[3];
+	struct hv_nested_enlightenments_control nested_control;
+	__u8 enlighten_vmentry;
+	__u8 reserved2[7];
 	__u64 current_nested_vmcs;
-};
+} __packed;
 
 struct hv_enlightened_vmcs {
 	u32 revision_id;
-- 
2.35.3
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 35/37] KVM: selftests: evmcs_test: Introduce L2 TLB flush test
Date: Wed, 25 May 2022 11:01:31 +0200
Message-Id: <20220525090133.1264239-36-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush hypercalls
from L2 don't exit to L1 unless 'TlbLockCount' is set in the Partition
assist page.

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 .../selftests/kvm/include/x86_64/evmcs.h      |  2 +
 .../testing/selftests/kvm/x86_64/evmcs_test.c | 42 ++++++++++++++++++-
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index 9c965ba73dec..36c0a67d8602 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -252,6 +252,8 @@ struct hv_enlightened_vmcs {
 #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \
 	(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
 
+#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
+
 extern struct hv_enlightened_vmcs *current_evmcs;
 extern struct hv_vp_assist_page *current_vp_assist;
 
diff --git a/tools/testing/selftests/kvm/x86_64/evmcs_test.c b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
index d12e043aa2ee..38de7d8c378a 100644
--- a/tools/testing/selftests/kvm/x86_64/evmcs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
@@ -16,6 +16,7 @@
 
 #include "kvm_util.h"
 
+#include "hyperv.h"
 #include "vmx.h"
 
 #define VCPU_ID 5
@@ -67,15 +68,27 @@ void l2_guest_code(void)
 	vmcall();
 	rdmsr_gs_base(); /* intercepted */
 
+	/* L2 TLB flush tests */
+	hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+			 HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+	rdmsr_fs_base();
+	hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+			 HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+	/* Make sure we're not issuing Hyper-V TLB flush call again */
+	__asm__ __volatile__ ("mov $0xdeadbeef, %rcx");
+
 	/* Done, exit to L1 and never come back. */
 	vmcall();
 }
 
-void guest_code(struct vmx_pages *vmx_pages)
+void guest_code(struct vmx_pages *vmx_pages, vm_vaddr_t pgs_gpa)
 {
 #define L2_GUEST_STACK_SIZE 64
 	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
 
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+
 	x2apic_enable();
 
 	GUEST_SYNC(1);
@@ -105,6 +118,14 @@ void guest_code(struct vmx_pages *vmx_pages)
 	vmwrite(PIN_BASED_VM_EXEC_CONTROL, vmreadz(PIN_BASED_VM_EXEC_CONTROL) |
 		PIN_BASED_NMI_EXITING);
 
+	/* L2 TLB flush setup */
+	current_evmcs->partition_assist_page = vmx_pages->partition_assist_gpa;
+	current_evmcs->hv_enlightenments_control.nested_flush_hypercall = 1;
+	current_evmcs->hv_vm_id = 1;
+	current_evmcs->hv_vp_id = 1;
+	current_vp_assist->nested_control.features.directhypercall = 1;
+	*(u32 *)(vmx_pages->partition_assist) = 0;
+
 	GUEST_ASSERT(!vmlaunch());
 	GUEST_ASSERT(vmptrstz() == vmx_pages->enlightened_vmcs_gpa);
 
@@ -149,6 +170,18 @@ void guest_code(struct vmx_pages *vmx_pages)
 	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_MSR_READ);
 	current_evmcs->guest_rip += 2; /* rdmsr */
 
+	/*
+	 * L2 TLB flush test. First VMCALL should be handled directly by L0,
+	 * no VMCALL exit expected.
+	 */
+	GUEST_ASSERT(!vmresume());
+	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_MSR_READ);
+	current_evmcs->guest_rip += 2; /* rdmsr */
+	/* Enable synthetic vmexit */
+	*(u32 *)(vmx_pages->partition_assist) = 1;
+	GUEST_ASSERT(!vmresume());
+	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH);
+
 	GUEST_ASSERT(!vmresume());
 	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL);
 	GUEST_SYNC(11);
@@ -201,6 +234,7 @@ static void save_restore_vm(struct kvm_vm *vm)
 int main(int argc, char *argv[])
 {
 	vm_vaddr_t vmx_pages_gva = 0;
+	vm_vaddr_t hcall_page;
 
 	struct kvm_vm *vm;
 	struct kvm_run *run;
@@ -217,11 +251,15 @@ int main(int argc, char *argv[])
 		exit(KSFT_SKIP);
 	}
 
+	hcall_page = vm_vaddr_alloc_pages(vm, 1);
+	memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
+
 	vcpu_set_hv_cpuid(vm, VCPU_ID);
 	vcpu_enable_evmcs(vm, VCPU_ID);
 
 	vcpu_alloc_vmx(vm, &vmx_pages_gva);
-	vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva);
+	vcpu_args_set(vm, VCPU_ID, 2, vmx_pages_gva, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vm, VCPU_ID, HV_X64_MSR_VP_INDEX, VCPU_ID);
 
 	vm_init_descriptor_tables(vm);
 	vcpu_init_descriptor_tables(vm, VCPU_ID);
-- 
2.35.3
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 36/37] KVM: selftests: Move Hyper-V VP assist page enablement out of evmcs.h
Date: Wed, 25 May 2022 11:01:32 +0200
Message-Id: <20220525090133.1264239-37-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

The Hyper-V VP assist page is not eVMCS specific, it is also used for
enlightened nSVM. Move the code to a vendor-neutral place.

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 tools/testing/selftests/kvm/Makefile          |  2 +-
 .../selftests/kvm/include/x86_64/evmcs.h      | 40 +------------------
 .../selftests/kvm/include/x86_64/hyperv.h     | 31 ++++++++++++++
 .../testing/selftests/kvm/lib/x86_64/hyperv.c | 21 ++++++++++
 .../testing/selftests/kvm/x86_64/evmcs_test.c |  1 +
 5 files changed, 56 insertions(+), 39 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/hyperv.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 5a96649fcc24..fe4166fbe26a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -38,7 +38,7 @@ ifeq ($(ARCH),riscv)
 endif
 
 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
-LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
+LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/hyperv.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
 LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c
 LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
 LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index 36c0a67d8602..026586b53013 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -10,6 +10,7 @@
 #define SELFTEST_KVM_EVMCS_H
 
 #include
+#include "hyperv.h"
 #include "vmx.h"
 
 #define u16 uint16_t
@@ -20,27 +21,6 @@
 
 extern bool enable_evmcs;
 
-struct hv_nested_enlightenments_control {
-	struct {
-		__u32 directhypercall:1;
-		__u32 reserved:31;
-	} features;
-	struct {
-		__u32 reserved;
-	} hypercallControls;
-} __packed;
-
-/* Define virtual processor assist page structure. */
-struct hv_vp_assist_page {
-	__u32 apic_assist;
-	__u32 reserved1;
-	__u64 vtl_control[3];
-	struct hv_nested_enlightenments_control nested_control;
-	__u8 enlighten_vmentry;
-	__u8 reserved2[7];
-	__u64 current_nested_vmcs;
-} __packed;
-
 struct hv_enlightened_vmcs {
 	u32 revision_id;
 	u32 abort;
@@ -246,31 +226,15 @@ struct hv_enlightened_vmcs {
 #define HV_VMX_ENLIGHTENED_CLEAN_FIELD_ENLIGHTENMENTSCONTROL	BIT(15)
 #define HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL	0xFFFF
 
-#define HV_X64_MSR_VP_ASSIST_PAGE		0x40000073
-#define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE	0x00000001
-#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT	12
-#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK	\
-	(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
-
 #define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
 
 extern struct hv_enlightened_vmcs *current_evmcs;
-extern struct hv_vp_assist_page *current_vp_assist;
 
 int vcpu_enable_evmcs(struct kvm_vm *vm, int vcpu_id);
 
-static inline int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist)
+static inline void evmcs_enable(void)
 {
-	u64 val = (vp_assist_pa & HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK) |
-		HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
-
-	wrmsr(HV_X64_MSR_VP_ASSIST_PAGE, val);
-
-	current_vp_assist = vp_assist;
-
 	enable_evmcs = true;
-
-	return 0;
 }
 
 static inline int evmcs_vmptrld(uint64_t vmcs_pa, void *vmcs)
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index c302027fa6d5..a2561f31dabb 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -216,4 +216,35 @@ static inline void hyperv_write_xmm_input(void *data, int n_sse_regs)
 /* Proper HV_X64_MSR_GUEST_OS_ID value */
 #define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
 
+#define HV_X64_MSR_VP_ASSIST_PAGE		0x40000073
+#define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE	0x00000001
+#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT	12
+#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK	\
+	(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
+
+struct hv_nested_enlightenments_control {
+	struct {
+		__u32 directhypercall:1;
+		__u32 reserved:31;
+	} features;
+	struct {
+		__u32 reserved;
+	} hypercallControls;
+} __packed;
+
+/* Define virtual processor assist page structure. */
+struct hv_vp_assist_page {
+	__u32 apic_assist;
+	__u32 reserved1;
+	__u64 vtl_control[3];
+	struct hv_nested_enlightenments_control nested_control;
+	__u8 enlighten_vmentry;
+	__u8 reserved2[7];
+	__u64 current_nested_vmcs;
+} __packed;
+
+extern struct hv_vp_assist_page *current_vp_assist;
+
+int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist);
+
 #endif /* !SELFTEST_KVM_HYPERV_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/hyperv.c b/tools/testing/selftests/kvm/lib/x86_64/hyperv.c
new file mode 100644
index 000000000000..32dc0afd9e5b
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/hyperv.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Hyper-V specific functions.
+ *
+ * Copyright (C) 2021, Red Hat Inc.
+ */
+#include
+#include "processor.h"
+#include "hyperv.h"
+
+int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist)
+{
+	uint64_t val = (vp_assist_pa & HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK) |
+		HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
+
+	wrmsr(HV_X64_MSR_VP_ASSIST_PAGE, val);
+
+	current_vp_assist = vp_assist;
+
+	return 0;
+}
diff --git a/tools/testing/selftests/kvm/x86_64/evmcs_test.c b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
index 38de7d8c378a..6627d3814670 100644
--- a/tools/testing/selftests/kvm/x86_64/evmcs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
@@ -95,6 +95,7 @@ void guest_code(struct vmx_pages *vmx_pages, vm_vaddr_t pgs_gpa)
 	GUEST_SYNC(2);
 
 	enable_vp_assist(vmx_pages->vp_assist_gpa, vmx_pages->vp_assist);
+	evmcs_enable();
 
 	GUEST_ASSERT(vmx_pages->vmcs_gpa);
 	GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages));
-- 
2.35.3
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Maxim Levitsky, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 37/37] KVM: selftests: hyperv_svm_test: Introduce L2 TLB flush test
Date: Wed, 25 May 2022 11:01:33 +0200
Message-Id: <20220525090133.1264239-38-vkuznets@redhat.com>
In-Reply-To: <20220525090133.1264239-1-vkuznets@redhat.com>
References: <20220525090133.1264239-1-vkuznets@redhat.com>

Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush hypercalls
from L2 don't exit to L1 unless 'TlbLockCount' is set in the Partition
assist page.

Reviewed-by: Maxim Levitsky
Signed-off-by: Vitaly Kuznetsov
---
 .../selftests/kvm/x86_64/hyperv_svm_test.c | 54 +++++++++++++++++--
 1 file changed, 50 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
index 21f5ca9197da..cd4969da58a0 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
@@ -42,6 +42,9 @@ struct hv_enlightenments {
  */
 #define VMCB_HV_NESTED_ENLIGHTENMENTS (1U << 31)
 
+#define HV_SVM_EXITCODE_ENL 0xF0000000
+#define HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH (1)
+
 static inline void vmmcall(void)
 {
 	__asm__ __volatile__("vmmcall");
@@ -62,11 +65,25 @@ void l2_guest_code(void)
 
 	GUEST_SYNC(5);
 
+	/* L2 TLB flush tests */
+	hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+			 HV_HYPERCALL_FAST_BIT, 0x0,
+			 HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
+			 HV_FLUSH_ALL_PROCESSORS);
+	rdmsr(MSR_FS_BASE);
+	hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+			 HV_HYPERCALL_FAST_BIT, 0x0,
+			 HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES |
+			 HV_FLUSH_ALL_PROCESSORS);
+	/* Make sure we're not issuing Hyper-V TLB flush call again */
+	__asm__ __volatile__ ("mov $0xdeadbeef, %rcx");
+
 	/* Done, exit to L1 and never come back. */
 	vmmcall();
 }
 
-static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
+static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
+						    vm_vaddr_t pgs_gpa)
 {
 	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
 	struct vmcb *vmcb = svm->vmcb;
@@ -75,13 +92,23 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 
 	GUEST_SYNC(1);
 
-	wrmsr(HV_X64_MSR_GUEST_OS_ID, (u64)0x8100 << 48);
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+	enable_vp_assist(svm->vp_assist_gpa, svm->vp_assist);
 
 	GUEST_ASSERT(svm->vmcb_gpa);
 	/* Prepare for L2 execution. */
 	generic_svm_setup(svm, l2_guest_code,
 			  &l2_guest_stack[L2_GUEST_STACK_SIZE]);
 
+	/* L2 TLB flush setup */
+	hve->partition_assist_page = svm->partition_assist_gpa;
+	hve->hv_enlightenments_control.nested_flush_hypercall = 1;
+	hve->hv_vm_id = 1;
+	hve->hv_vp_id = 1;
+	current_vp_assist->nested_control.features.directhypercall = 1;
+	*(u32 *)(svm->partition_assist) = 0;
+
 	GUEST_SYNC(2);
 	run_guest(vmcb, svm->vmcb_gpa);
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
@@ -116,6 +143,20 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_MSR);
 	vmcb->save.rip += 2; /* rdmsr */
 
+
+	/*
+	 * L2 TLB flush test. First VMCALL should be handled directly by L0,
+	 * no VMCALL exit expected.
+	 */
+	run_guest(vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_MSR);
+	vmcb->save.rip += 2; /* rdmsr */
+	/* Enable synthetic vmexit */
+	*(u32 *)(svm->partition_assist) = 1;
+	run_guest(vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT(vmcb->control.exit_code == HV_SVM_EXITCODE_ENL);
+	GUEST_ASSERT(vmcb->control.exit_info_1 == HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH);
+
 	run_guest(vmcb, svm->vmcb_gpa);
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
 	GUEST_SYNC(6);
@@ -126,7 +167,7 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 int main(int argc, char *argv[])
 {
 	vm_vaddr_t nested_gva = 0;
-
+	vm_vaddr_t hcall_page;
 	struct kvm_vm *vm;
 	struct kvm_run *run;
 	struct ucall uc;
@@ -141,7 +182,12 @@ int main(int argc, char *argv[])
 	vcpu_set_hv_cpuid(vm, VCPU_ID);
 	run = vcpu_state(vm, VCPU_ID);
 	vcpu_alloc_svm(vm, &nested_gva);
-	vcpu_args_set(vm, VCPU_ID, 1, nested_gva);
+
+	hcall_page = vm_vaddr_alloc_pages(vm, 1);
+	memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
+
+	vcpu_args_set(vm, VCPU_ID, 2, nested_gva, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vm, VCPU_ID, HV_X64_MSR_VP_INDEX, VCPU_ID);
 
 	for (stage = 1;; stage++) {
 		_vcpu_run(vm, VCPU_ID);
-- 
2.35.3