From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
 Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 01/34] KVM: x86: hyper-v: Resurrect dedicated KVM_REQ_HV_TLB_FLUSH flag
Date: Thu, 14 Apr 2022 15:19:40 +0200
Message-Id: <20220414132013.1588929-2-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>
Content-Type: text/plain; charset="utf-8"

In preparation for implementing fine-grained Hyper-V TLB flush and L2 TLB
flush, resurrect the dedicated KVM_REQ_HV_TLB_FLUSH request bit. As
KVM_REQ_TLB_FLUSH_GUEST is the stronger operation, clear the
KVM_REQ_HV_TLB_FLUSH request in kvm_service_local_tlb_flush_requests()
when KVM_REQ_TLB_FLUSH_GUEST was also requested.

No functional change intended.
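The intended semantics above — a guest-wide flush subsumes the Hyper-V flush, so one flush may service both pending requests — can be modeled outside the kernel. A minimal Python sketch (illustrative only; the bit constants and helper merely mirror the shape of kvm_check_request()/kvm_clear_request(), they are not KVM code):

```python
TLB_FLUSH_GUEST = 1 << 0   # stands in for KVM_REQ_TLB_FLUSH_GUEST
HV_TLB_FLUSH = 1 << 1      # stands in for KVM_REQ_HV_TLB_FLUSH

def service_flush_requests(requests):
    """Return (remaining requests, number of guest TLB flushes done).

    Mirrors kvm_service_local_tlb_flush_requests(): the guest-wide
    flush is the stronger operation, so it also clears the Hyper-V
    request instead of flushing a second time.
    """
    flushes = 0
    if requests & TLB_FLUSH_GUEST:               # kvm_check_request()
        requests &= ~(TLB_FLUSH_GUEST | HV_TLB_FLUSH)
        flushes += 1                             # kvm_vcpu_flush_tlb_guest()
    elif requests & HV_TLB_FLUSH:
        requests &= ~HV_TLB_FLUSH
        flushes += 1
    return requests, flushes

# Both bits pending: a single flush services both requests.
print(service_flush_requests(TLB_FLUSH_GUEST | HV_TLB_FLUSH))  # (0, 1)
```

With only the Hyper-V bit set the else-branch still flushes exactly once, which is why the patch is a no-op functionally.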
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/hyperv.c           | 4 ++--
 arch/x86/kvm/x86.c              | 6 +++++-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c20f715f009..1de3ad9308d8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -105,6 +105,8 @@
 	KVM_ARCH_REQ_FLAGS(30, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_MMU_FREE_OBSOLETE_ROOTS \
 	KVM_ARCH_REQ_FLAGS(31, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_HV_TLB_FLUSH \
+	KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 
 #define CR0_RESERVED_BITS \
 	(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 46f9dfb60469..b402ad059eb9 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1876,11 +1876,11 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	 * analyze it here, flush TLB regardless of the specified address space.
 	 */
 	if (all_cpus) {
-		kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH_GUEST);
+		kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
 	} else {
 		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
 
-		kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST, vcpu_mask);
+		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
 	}
 
 ret_success:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ab336f7c82e4..f633cff8cd7f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3360,8 +3360,12 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
 		kvm_vcpu_flush_tlb_current(vcpu);
 
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
 		kvm_vcpu_flush_tlb_guest(vcpu);
+		kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+	} else if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) {
+		kvm_vcpu_flush_tlb_guest(vcpu);
+	}
 }
 EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
 
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
 Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 02/34] KVM: x86: hyper-v: Introduce TLB flush ring
Date: Thu, 14 Apr 2022 15:19:41 +0200
Message-Id: <20220414132013.1588929-3-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>
Content-Type: text/plain; charset="utf-8"

To allow flushing individual GVAs instead of always flushing the whole
VPID, a per-vCPU structure for passing the requests is needed. Introduce
a simple write-locked ring holding two types of entries: an individual
GVA (a GFN plus, in the lower 12 bits, up to 4095 following GFNs) and
'flush all'. The queuing rule is: if there is not enough space on the
ring to put the request while leaving at least one entry free for
'flush all', put a 'flush all' entry instead. The size of the ring is
arbitrarily set to 16.

Note, kvm_hv_flush_tlb() only queues 'flush all' entries for now, so the
functional change is very small, but the infrastructure is prepared to
handle individual GVA flush requests.

Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h | 16 +++++++
 arch/x86/kvm/hyperv.c           | 83 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/hyperv.h           | 13 ++++++
 arch/x86/kvm/x86.c              |  5 +-
 arch/x86/kvm/x86.h              |  1 +
 5 files changed, 116 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1de3ad9308d8..b4dd2ff61658 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -578,6 +578,20 @@ struct kvm_vcpu_hv_synic {
 	bool dont_zero_synic_pages;
 };
 
+#define KVM_HV_TLB_FLUSH_RING_SIZE (16)
+
+struct kvm_vcpu_hv_tlb_flush_entry {
+	u64 addr;
+	u64 flush_all:1;
+	u64 pad:63;
+};
+
+struct kvm_vcpu_hv_tlb_flush_ring {
+	int read_idx, write_idx;
+	spinlock_t write_lock;
+	struct kvm_vcpu_hv_tlb_flush_entry entries[KVM_HV_TLB_FLUSH_RING_SIZE];
+};
+
 /* Hyper-V per vcpu emulation context */
 struct kvm_vcpu_hv {
 	struct kvm_vcpu *vcpu;
@@ -597,6 +611,8 @@
 		u32 enlightenments_ebx; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EBX */
 		u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
 	} cpuid_cache;
+
+	struct kvm_vcpu_hv_tlb_flush_ring tlb_flush_ring;
 };
 
 /* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index b402ad059eb9..fb716cf919ed 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 
@@ -954,6 +955,8 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
 
 	hv_vcpu->vp_index = vcpu->vcpu_idx;
 
+	spin_lock_init(&hv_vcpu->tlb_flush_ring.write_lock);
+
 	return 0;
 }
 
@@ -1789,6 +1792,74 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
 			      var_cnt * sizeof(*sparse_banks));
 }
 
+static inline int hv_tlb_flush_ring_free(struct kvm_vcpu_hv *hv_vcpu,
+					 int read_idx, int write_idx)
+{
+	if (write_idx >= read_idx)
+		return KVM_HV_TLB_FLUSH_RING_SIZE - (write_idx - read_idx) - 1;
+
+	return read_idx - write_idx - 1;
+}
+
+static void hv_tlb_flush_ring_enqueue(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv_tlb_flush_ring *tlb_flush_ring;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	int ring_free, write_idx, read_idx;
+	unsigned long flags;
+
+	if (!hv_vcpu)
+		return;
+
+	tlb_flush_ring = &hv_vcpu->tlb_flush_ring;
+
+	spin_lock_irqsave(&tlb_flush_ring->write_lock, flags);
+
+	/*
+	 * 'read_idx' is updated by the vCPU which does the flush, this
+	 * happens without 'tlb_flush_ring->write_lock' being held; make
+	 * sure we read it once.
+	 */
+	read_idx = READ_ONCE(tlb_flush_ring->read_idx);
+	/*
+	 * 'write_idx' is only updated here, under 'tlb_flush_ring->write_lock'.
+	 * Allow the compiler to re-read it, it can't change.
+	 */
+	write_idx = tlb_flush_ring->write_idx;
+
+	ring_free = hv_tlb_flush_ring_free(hv_vcpu, read_idx, write_idx);
+	/* Full ring always contains 'flush all' entry */
+	if (!ring_free)
+		goto out_unlock;
+
+	tlb_flush_ring->entries[write_idx].addr = 0;
+	tlb_flush_ring->entries[write_idx].flush_all = 1;
+	/*
+	 * Advance write index only after filling in the entry to
+	 * synchronize with lockless reader.
+	 */
+	smp_wmb();
+	tlb_flush_ring->write_idx = (write_idx + 1) % KVM_HV_TLB_FLUSH_RING_SIZE;
+
+out_unlock:
+	spin_unlock_irqrestore(&tlb_flush_ring->write_lock, flags);
+}
+
+void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv_tlb_flush_ring *tlb_flush_ring;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+	kvm_vcpu_flush_tlb_guest(vcpu);
+
+	if (!hv_vcpu)
+		return;
+
+	tlb_flush_ring = &hv_vcpu->tlb_flush_ring;
+
+	tlb_flush_ring->read_idx = tlb_flush_ring->write_idx;
+}
+
 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -1797,6 +1868,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
 	u64 valid_bank_mask;
 	u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
+	struct kvm_vcpu *v;
+	unsigned long i;
 	bool all_cpus;
 
 	/*
@@ -1876,10 +1949,20 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	 * analyze it here, flush TLB regardless of the specified address space.
 	 */
 	if (all_cpus) {
+		kvm_for_each_vcpu(i, v, kvm)
+			hv_tlb_flush_ring_enqueue(v);
+
 		kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
 	} else {
 		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
 
+		for_each_set_bit(i, vcpu_mask, KVM_MAX_VCPUS) {
+			v = kvm_get_vcpu(kvm, i);
+			if (!v)
+				continue;
+			hv_tlb_flush_ring_enqueue(v);
+		}
+
 		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
 	}
 
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index da2737f2a956..6847caeaaf84 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -147,4 +147,17 @@
 int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
 int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 		     struct kvm_cpuid_entry2 __user *entries);
 
+
+static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+	if (!hv_vcpu)
+		return;
+
+	hv_vcpu->tlb_flush_ring.read_idx = hv_vcpu->tlb_flush_ring.write_idx;
+}
+void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu);
+
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f633cff8cd7f..e5aec386d299 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3324,7 +3324,7 @@ static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
 	static_call(kvm_x86_flush_tlb_all)(vcpu);
 }
 
-static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
+void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
 
@@ -3362,7 +3362,8 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
 
 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
 		kvm_vcpu_flush_tlb_guest(vcpu);
-		kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+		if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
+			kvm_hv_vcpu_empty_flush_tlb(vcpu);
 	} else if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) {
 		kvm_vcpu_flush_tlb_guest(vcpu);
 	}
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 588792f00334..2324f496c500 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -58,6 +58,7 @@
 
 #define MSR_IA32_CR_PAT_DEFAULT 0x0007040600070406ULL
 
+void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu);
 void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu);
 int kvm_check_nested_events(struct kvm_vcpu *vcpu);
 
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
 Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 03/34] KVM: x86: hyper-v: Add helper to read hypercall data for array
Date: Thu, 14 Apr 2022 15:19:42 +0200
Message-Id: <20220414132013.1588929-4-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>
Content-Type: text/plain; charset="utf-8"

From: Sean Christopherson

Move the guts of kvm_get_sparse_vp_set() to a helper so that the code
for reading a guest-provided array can be reused in the future, e.g.
for getting a list of virtual addresses whose TLB entries need to be
flushed.

Opportunistically swap the order of the data and XMM adjustment so that
the XMM/gpa offsets are bundled together.

No functional change intended.
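The index arithmetic for pulling entries out of the XMM registers is the subtle part of the helper: each 128-bit register holds two 64-bit entries, and halves already consumed for hypercall parameters shift the starting point. A Python model (illustrative only; each XMM register is represented here as a (lo, hi) pair of 64-bit halves, and the function merely mirrors the shape of kvm_hv_get_hc_data(), it is not KVM code):

```python
def get_hc_data(xmm, orig_cnt, cnt_cap, consumed_xmm_halves):
    """Read up to cnt_cap 64-bit entries from fast-hypercall XMM state.

    The validity check uses the guest's original count (orig_cnt), not
    the capped one, mirroring the "preserve the original count when
    ignoring entries via a cap" comment in the patch.
    """
    if orig_cnt > 2 * len(xmm) - consumed_xmm_halves:
        raise ValueError("invalid hypercall input")
    cnt = min(orig_cnt, cnt_cap)
    data = []
    for i in range(cnt):
        j = i + consumed_xmm_halves      # skip already-consumed halves
        lo, hi = xmm[j // 2]             # two entries per register
        data.append(hi if j % 2 else lo) # odd half -> sse128_hi()
    return data

# Two registers = four halves; the first two halves carried hypercall
# parameters, so the array starts at the low half of the second register.
xmm = [(0x11, 0x22), (0x33, 0x44)]
print(get_hc_data(xmm, orig_cnt=2, cnt_cap=2, consumed_xmm_halves=2))
```

The cap only limits how much is copied; a guest passing an oversized count still fails validation even when the cap would have truncated the read.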
Signed-off-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/hyperv.c | 53 +++++++++++++++++++++++++++----------------
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index fb716cf919ed..d66c27fd1e8a 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1758,38 +1758,51 @@ struct kvm_hv_hcall {
 	sse128_t xmm[HV_HYPERCALL_MAX_XMM_REGISTERS];
 };
 
-static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
-				 int consumed_xmm_halves,
-				 u64 *sparse_banks, gpa_t offset)
-{
-	u16 var_cnt;
-	int i;
 
-	if (hc->var_cnt > 64)
-		return -EINVAL;
-
-	/* Ignore banks that cannot possibly contain a legal VP index. */
-	var_cnt = min_t(u16, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS);
+static int kvm_hv_get_hc_data(struct kvm *kvm, struct kvm_hv_hcall *hc,
+			      u16 orig_cnt, u16 cnt_cap, u64 *data,
+			      int consumed_xmm_halves, gpa_t offset)
+{
+	/*
+	 * Preserve the original count when ignoring entries via a "cap", KVM
+	 * still needs to validate the guest input (though the non-XMM path
+	 * punts on the checks).
+	 */
+	u16 cnt = min(orig_cnt, cnt_cap);
+	int i, j;
 
 	if (hc->fast) {
 		/*
 		 * Each XMM holds two sparse banks, but do not count halves that
 		 * have already been consumed for hypercall parameters.
 		 */
-		if (hc->var_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - consumed_xmm_halves)
+		if (orig_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - consumed_xmm_halves)
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
-		for (i = 0; i < var_cnt; i++) {
-			int j = i + consumed_xmm_halves;
+
+		for (i = 0; i < cnt; i++) {
+			j = i + consumed_xmm_halves;
 			if (j % 2)
-				sparse_banks[i] = sse128_hi(hc->xmm[j / 2]);
+				data[i] = sse128_hi(hc->xmm[j / 2]);
 			else
-				sparse_banks[i] = sse128_lo(hc->xmm[j / 2]);
+				data[i] = sse128_lo(hc->xmm[j / 2]);
 		}
 		return 0;
 	}
 
-	return kvm_read_guest(kvm, hc->ingpa + offset, sparse_banks,
-			      var_cnt * sizeof(*sparse_banks));
+	return kvm_read_guest(kvm, hc->ingpa + offset, data,
+			      cnt * sizeof(*data));
+}
+
+static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
+				 u64 *sparse_banks, int consumed_xmm_halves,
+				 gpa_t offset)
+{
+	if (hc->var_cnt > 64)
+		return -EINVAL;
+
+	/* Cap var_cnt to ignore banks that cannot contain a legal VP index. */
+	return kvm_hv_get_hc_data(kvm, hc, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS,
+				  sparse_banks, consumed_xmm_halves, offset);
 }
 
 static inline int hv_tlb_flush_ring_free(struct kvm_vcpu_hv *hv_vcpu,
@@ -1937,7 +1950,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		if (!hc->var_cnt)
 			goto ret_success;
 
-		if (kvm_get_sparse_vp_set(kvm, hc, 2, sparse_banks,
+		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2,
 					  offsetof(struct hv_tlb_flush_ex,
 						   hv_vp_set.bank_contents)))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
@@ -2048,7 +2061,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		if (!hc->var_cnt)
 			goto ret_success;
 
-		if (kvm_get_sparse_vp_set(kvm, hc, 1, sparse_banks,
+		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 1,
 					  offsetof(struct hv_send_ipi_ex,
 						   vp_set.bank_contents)))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
 Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 04/34] KVM: x86: hyper-v: Handle HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls gently
Date: Thu, 14 Apr 2022 15:19:43 +0200
Message-Id: <20220414132013.1588929-5-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>
Content-Type: text/plain; charset="utf-8"

Currently, HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls are handled the
exact same way as HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE{,EX}: by flushing
the whole VPID, which is sub-optimal. Switch to handling these requests
with 'flush_tlb_gva()' hooks instead. Use the newly introduced TLB flush
ring to queue the requests.
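The ring discipline this relies on — a single consumer (the target vCPU), writers serialized by a lock, one slot always kept in reserve so a 'flush all' entry can be placed when per-GVA entries don't fit — can be modeled compactly. A Python sketch with illustrative names (it deliberately omits the smp_wmb()/smp_rmb() pairing the kernel code needs; this is a model of the queueing rule, not KVM code):

```python
RING_SIZE = 16  # mirrors KVM_HV_TLB_FLUSH_RING_SIZE

class FlushRing:
    def __init__(self):
        self.entries = [None] * RING_SIZE
        self.read_idx = 0    # advanced only by the target vCPU
        self.write_idx = 0   # advanced only under the writers' lock

    def free(self):
        # read_idx == write_idx means empty, so one slot is never used.
        return (self.read_idx - self.write_idx - 1) % RING_SIZE

    def enqueue(self, entries):
        """Queue per-GVA entries; degrade to 'flush all' when they do
        not fit while leaving one slot spare for a later 'flush all'."""
        if self.free() == 0:
            return  # a full ring already ends in a 'flush all' entry
        if not entries or len(entries) >= self.free() - 1:
            self.entries[self.write_idx] = 'flush_all'
            self.write_idx = (self.write_idx + 1) % RING_SIZE
            return
        for gva in entries:
            self.entries[self.write_idx] = ('gva', gva)
            self.write_idx = (self.write_idx + 1) % RING_SIZE

    def consume(self):
        """Drain on the target vCPU: a list of GVAs, or 'flush_all'."""
        result = []
        i = self.read_idx
        while i != self.write_idx:
            e = self.entries[i]
            if e == 'flush_all':
                result = 'flush_all'
                break
            result.append(e[1])
            i = (i + 1) % RING_SIZE
        self.read_idx = self.write_idx  # ring emptied either way
        return result

ring = FlushRing()
ring.enqueue([0x1000, 0x2000])   # fits: queued individually
ring.enqueue(list(range(20)))    # too many: degrades to 'flush all'
print(ring.consume())            # the 'flush all' entry wins
```

Note how consume() always advances read_idx to write_idx: once a 'flush all' entry is seen, any per-GVA entries behind it are subsumed by the full flush.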
Signed-off-by: Vitaly Kuznetsov --- arch/x86/kvm/hyperv.c | 132 ++++++++++++++++++++++++++++++++++++------ 1 file changed, 115 insertions(+), 17 deletions(-) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index d66c27fd1e8a..759e1a16e5c3 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -1805,6 +1805,13 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, st= ruct kvm_hv_hcall *hc, sparse_banks, consumed_xmm_halves, offset); } =20 +static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hca= ll *hc, u64 entries[], + int consumed_xmm_halves, gpa_t offset) +{ + return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt, + entries, consumed_xmm_halves, offset); +} + static inline int hv_tlb_flush_ring_free(struct kvm_vcpu_hv *hv_vcpu, int read_idx, int write_idx) { @@ -1814,12 +1821,13 @@ static inline int hv_tlb_flush_ring_free(struct kvm= _vcpu_hv *hv_vcpu, return read_idx - write_idx - 1; } =20 -static void hv_tlb_flush_ring_enqueue(struct kvm_vcpu *vcpu) +static void hv_tlb_flush_ring_enqueue(struct kvm_vcpu *vcpu, u64 *entries,= int count) { struct kvm_vcpu_hv_tlb_flush_ring *tlb_flush_ring; struct kvm_vcpu_hv *hv_vcpu =3D to_hv_vcpu(vcpu); int ring_free, write_idx, read_idx; unsigned long flags; + int i; =20 if (!hv_vcpu) return; @@ -1845,14 +1853,34 @@ static void hv_tlb_flush_ring_enqueue(struct kvm_vc= pu *vcpu) if (!ring_free) goto out_unlock; =20 - tlb_flush_ring->entries[write_idx].addr =3D 0; - tlb_flush_ring->entries[write_idx].flush_all =3D 1; /* - * Advance write index only after filling in the entry to - * synchronize with lockless reader. + * All entries should fit on the ring leaving one free for 'flush all' + * entry in case another request comes in. In case there's not enough + * space, just put 'flush all' entry there. 
+ */ + if (!count || count >=3D ring_free - 1 || !entries) { + tlb_flush_ring->entries[write_idx].addr =3D 0; + tlb_flush_ring->entries[write_idx].flush_all =3D 1; + /* + * Advance write index only after filling in the entry to + * synchronize with lockless reader. + */ + smp_wmb(); + tlb_flush_ring->write_idx =3D (write_idx + 1) % KVM_HV_TLB_FLUSH_RING_SI= ZE; + goto out_unlock; + } + + for (i =3D 0; i < count; i++) { + tlb_flush_ring->entries[write_idx].addr =3D entries[i]; + tlb_flush_ring->entries[write_idx].flush_all =3D 0; + write_idx =3D (write_idx + 1) % KVM_HV_TLB_FLUSH_RING_SIZE; + } + /* + * Advance write index only after filling in the entry to synchronize + * with lockless reader. */ smp_wmb(); - tlb_flush_ring->write_idx =3D (write_idx + 1) % KVM_HV_TLB_FLUSH_RING_SIZ= E; + tlb_flush_ring->write_idx =3D write_idx; =20 out_unlock: spin_unlock_irqrestore(&tlb_flush_ring->write_lock, flags); @@ -1862,15 +1890,58 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu) { struct kvm_vcpu_hv_tlb_flush_ring *tlb_flush_ring; struct kvm_vcpu_hv *hv_vcpu =3D to_hv_vcpu(vcpu); + struct kvm_vcpu_hv_tlb_flush_entry *entry; + int read_idx, write_idx; + u64 address; + u32 count; + int i, j; =20 - kvm_vcpu_flush_tlb_guest(vcpu); - - if (!hv_vcpu) + if (!tdp_enabled || !hv_vcpu) { + kvm_vcpu_flush_tlb_guest(vcpu); return; + } =20 tlb_flush_ring =3D &hv_vcpu->tlb_flush_ring; =20 - tlb_flush_ring->read_idx =3D tlb_flush_ring->write_idx; + /* + * TLB flush must be performed on the target vCPU so 'read_idx' + * (AKA 'tail') cannot change underneath, the compiler is free + * to re-read it. + */ + read_idx =3D tlb_flush_ring->read_idx; + + /* + * 'write_idx' (AKA 'head') can be concurently updated by a different + * vCPU so we must be sure it's read once. 
+ */ + write_idx =3D READ_ONCE(tlb_flush_ring->write_idx); + + /* Pairs with smp_wmb() in hv_tlb_flush_ring_enqueue() */ + smp_rmb(); + + for (i =3D read_idx; i !=3D write_idx; i =3D (i + 1) % KVM_HV_TLB_FLUSH_R= ING_SIZE) { + entry =3D &tlb_flush_ring->entries[i]; + + if (entry->flush_all) + goto out_flush_all; + + /* + * Lower 12 bits of 'address' encode the number of additional + * pages to flush. + */ + address =3D entry->addr & PAGE_MASK; + count =3D (entry->addr & ~PAGE_MASK) + 1; + for (j =3D 0; j < count; j++) + static_call(kvm_x86_flush_tlb_gva)(vcpu, address + j * PAGE_SIZE); + } + ++vcpu->stat.tlb_flush; + goto out_empty_ring; + +out_flush_all: + kvm_vcpu_flush_tlb_guest(vcpu); + +out_empty_ring: + tlb_flush_ring->read_idx =3D write_idx; } =20 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) @@ -1879,11 +1950,22 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, = struct kvm_hv_hcall *hc) struct hv_tlb_flush_ex flush_ex; struct hv_tlb_flush flush; DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS); + /* + * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_RING_SIZE - 1' + * entries on the TLB Flush ring as when 'read_idx =3D=3D write_idx' the + * ring is considered as empty. The last entry on the ring, however, + * needs to be always left free for 'flush all' entry which gets placed + * when there is not enough space to put all the requested entries. + */ + u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_RING_SIZE - 2]; + u64 *tlb_flush_entries; u64 valid_bank_mask; u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS]; struct kvm_vcpu *v; unsigned long i; bool all_cpus; + int consumed_xmm_halves =3D 0; + gpa_t data_offset; =20 /* * The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. 
the @@ -1899,10 +1981,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, = struct kvm_hv_hcall *hc) flush.address_space =3D hc->ingpa; flush.flags =3D hc->outgpa; flush.processor_mask =3D sse128_lo(hc->xmm[0]); + consumed_xmm_halves =3D 1; } else { if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush, sizeof(flush)))) return HV_STATUS_INVALID_HYPERCALL_INPUT; + data_offset =3D sizeof(flush); } =20 trace_kvm_hv_flush_tlb(flush.processor_mask, @@ -1926,10 +2010,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, = struct kvm_hv_hcall *hc) flush_ex.flags =3D hc->outgpa; memcpy(&flush_ex.hv_vp_set, &hc->xmm[0], sizeof(hc->xmm[0])); + consumed_xmm_halves =3D 2; } else { if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex, sizeof(flush_ex)))) return HV_STATUS_INVALID_HYPERCALL_INPUT; + data_offset =3D sizeof(flush_ex); } =20 trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask, @@ -1945,25 +2031,37 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, = struct kvm_hv_hcall *hc) return HV_STATUS_INVALID_HYPERCALL_INPUT; =20 if (all_cpus) - goto do_flush; + goto read_flush_entries; =20 if (!hc->var_cnt) goto ret_success; =20 - if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2, - offsetof(struct hv_tlb_flush_ex, - hv_vp_set.bank_contents))) + if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, consumed_xmm_halves, + data_offset)) + return HV_STATUS_INVALID_HYPERCALL_INPUT; + data_offset +=3D hc->var_cnt * sizeof(sparse_banks[0]); + consumed_xmm_halves +=3D hc->var_cnt; + } + +read_flush_entries: + if (hc->code =3D=3D HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE || + hc->code =3D=3D HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX || + hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) { + tlb_flush_entries =3D NULL; + } else { + if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries, + consumed_xmm_halves, data_offset)) return HV_STATUS_INVALID_HYPERCALL_INPUT; + tlb_flush_entries =3D __tlb_flush_entries; } =20 -do_flush: /* * vcpu->arch.cr3 may not be up-to-date for running 
vCPUs so we can't * analyze it here, flush TLB regardless of the specified address space. */ if (all_cpus) { kvm_for_each_vcpu(i, v, kvm) - hv_tlb_flush_ring_enqueue(v); + hv_tlb_flush_ring_enqueue(v, tlb_flush_entries, hc->rep_cnt); kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH); } else { @@ -1973,7 +2071,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) v = kvm_get_vcpu(kvm, i); if (!v) continue; - hv_tlb_flush_ring_enqueue(v); + hv_tlb_flush_ring_enqueue(v, tlb_flush_entries, hc->rep_cnt); kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
-- 2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 05/34] KVM: x86: hyper-v: Expose support for extended gva ranges for flush hypercalls
Date: Thu, 14 Apr 2022 15:19:44 +0200
Message-Id: <20220414132013.1588929-6-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

The extended GVA ranges support bit seems to indicate whether the lower 12 bits of a GVA can be used to specify up to 4095 additional consecutive GVAs to flush. This is somewhat described in the TLFS. Previously, KVM handled HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} requests by flushing the whole VPID, so technically, extended GVA ranges were already supported. As such requests are now handled more gently, advertising support for extended ranges starts making sense as a way to reduce the size of TLB flush requests.

Signed-off-by: Vitaly Kuznetsov Reviewed-by: Maxim Levitsky --- arch/x86/include/asm/hyperv-tlfs.h | 2 ++ arch/x86/kvm/hyperv.c | 1 + 2 files changed, 3 insertions(+)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h index 0a9407dc0859..5225a85c08c3 100644 --- a/arch/x86/include/asm/hyperv-tlfs.h +++ b/arch/x86/include/asm/hyperv-tlfs.h @@ -61,6 +61,8 @@ #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE BIT(10) /* Support for debug MSRs available */ #define HV_FEATURE_DEBUG_MSRS_AVAILABLE BIT(11) +/* Support for extended gva ranges for flush hypercalls available */ +#define HV_FEATURE_EXT_GVA_RANGES_FLUSH BIT(14) /* * Support for returning hypercall output block via XMM * registers is available diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 759e1a16e5c3..1a6f9628cee9 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -2702,6 +2702,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid, ent->ebx |= HV_DEBUGGING; ent->edx |= HV_X64_GUEST_DEBUGGING_AVAILABLE; ent->edx |= HV_FEATURE_DEBUG_MSRS_AVAILABLE; + ent->edx |= HV_FEATURE_EXT_GVA_RANGES_FLUSH; /* * Direct Synthetic timers only make sense with in-kernel
-- 2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 06/34] KVM: x86: Prepare kvm_hv_flush_tlb() to handle L2's GPAs
Date: Thu, 14 Apr 2022 15:19:45 +0200
Message-Id: <20220414132013.1588929-7-vkuznets@redhat.com>
In-Reply-To:
<20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

To handle L2 TLB flush requests, KVM needs to translate the specified L2 GPA to an L1 GPA in order to read the hypercall arguments from there. No functional change as KVM doesn't handle VMCALL/VMMCALL from L2 yet.

Signed-off-by: Vitaly Kuznetsov Reviewed-by: Maxim Levitsky --- arch/x86/kvm/hyperv.c | 7 +++++++ 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 1a6f9628cee9..fc4bb0ead9fa 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -23,6 +23,7 @@ #include "ioapic.h" #include "cpuid.h" #include "hyperv.h" +#include "mmu.h" #include "xen.h" #include @@ -1975,6 +1976,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) */ BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > 64); + if (!hc->fast && is_guest_mode(vcpu)) { + hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa, 0, NULL); + if (unlikely(hc->ingpa == UNMAPPED_GVA)) + return HV_STATUS_INVALID_HYPERCALL_INPUT; + } + if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST || hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE) { if (hc->fast) {
-- 2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 07/34] x86/hyperv: Introduce HV_MAX_SPARSE_VCPU_BANKS/HV_VCPUS_PER_SPARSE_BANK constants
Date: Thu, 14 Apr 2022 15:19:46
+0200
Message-Id: <20220414132013.1588929-8-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

It may not be clear where the magical '64' value used in __cpumask_to_vpset() comes from. Moreover, '64' means both the maximum sparse bank number and the number of vCPUs per bank. Add defines to make things clear. These defines are also going to be used by KVM. No functional change.

Signed-off-by: Vitaly Kuznetsov Acked-by: Wei Liu Reviewed-by: Maxim Levitsky --- include/asm-generic/hyperv-tlfs.h | 5 +++++ include/asm-generic/mshyperv.h | 11 ++++++----- 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h index fdce7a4cfc6f..020ca9bdbb79 100644 --- a/include/asm-generic/hyperv-tlfs.h +++ b/include/asm-generic/hyperv-tlfs.h @@ -399,6 +399,11 @@ struct hv_vpset { u64 bank_contents[]; } __packed; +/* The maximum number of sparse vCPU banks which can be encoded by 'struct hv_vpset' */ +#define HV_MAX_SPARSE_VCPU_BANKS (64) +/* The number of vCPUs in one sparse bank */ +#define HV_VCPUS_PER_SPARSE_BANK (64) + /* HvCallSendSyntheticClusterIpi hypercall */ struct hv_send_ipi { u32 vector; diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index c08758b6b364..0abe91df1ef6 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -214,9 +214,10 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset, { int cpu, vcpu, vcpu_bank, vcpu_offset, nr_bank = 1; int this_cpu = smp_processor_id(); + int max_vcpu_bank = hv_max_vp_index / HV_VCPUS_PER_SPARSE_BANK; - /* valid_bank_mask can represent up to 64 banks */ - if (hv_max_vp_index / 64 >= 64) + /* vpset.valid_bank_mask can represent up to HV_MAX_SPARSE_VCPU_BANKS banks */ + if (max_vcpu_bank >= HV_MAX_SPARSE_VCPU_BANKS) return 0; /* @@ -224,7 +225,7 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset, * structs are not cleared between calls, we risk flushing unneeded * vCPUs otherwise. */ - for (vcpu_bank = 0; vcpu_bank <= hv_max_vp_index / 64; vcpu_bank++) + for (vcpu_bank = 0; vcpu_bank <= max_vcpu_bank; vcpu_bank++) vpset->bank_contents[vcpu_bank] = 0; /* @@ -236,8 +237,8 @@ static inline int __cpumask_to_vpset(struct hv_vpset *vpset, vcpu = hv_cpu_number_to_vp_number(cpu); if (vcpu == VP_INVAL) return -1; - vcpu_bank = vcpu / 64; - vcpu_offset = vcpu % 64; + vcpu_bank = vcpu / HV_VCPUS_PER_SPARSE_BANK; + vcpu_offset = vcpu % HV_VCPUS_PER_SPARSE_BANK; __set_bit(vcpu_offset, (unsigned long *) &vpset->bank_contents[vcpu_bank]); if (vcpu_bank >= nr_bank)
-- 2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 08/34] KVM: x86: hyper-v: Use HV_MAX_SPARSE_VCPU_BANKS/HV_VCPUS_PER_SPARSE_BANK instead of raw '64'
Date: Thu, 14 Apr 2022 15:19:47 +0200
Message-Id: <20220414132013.1588929-9-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

It may not be clear where the '64' limit for the maximum sparse bank number comes from; use the HV_MAX_SPARSE_VCPU_BANKS define instead. Use HV_VCPUS_PER_SPARSE_BANK in the definition of KVM_HV_MAX_SPARSE_VCPU_SET_BITS. Opportunistically adjust the comment around BUILD_BUG_ON(). No functional change.

Suggested-by: Sean Christopherson Signed-off-by: Vitaly Kuznetsov Reviewed-by: Maxim Levitsky --- arch/x86/kvm/hyperv.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index fc4bb0ead9fa..3cf68645a2e6 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -43,7 +43,7 @@ /* "Hv#1" signature */ #define HYPERV_CPUID_SIGNATURE_EAX 0x31237648 -#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, 64) +#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, HV_VCPUS_PER_SPARSE_BANK) static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer, bool vcpu_kick); @@ -1798,7 +1798,7 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 *sparse_banks, int consumed_xmm_halves, gpa_t offset) { - if (hc->var_cnt > 64) + if (hc->var_cnt > HV_MAX_SPARSE_VCPU_BANKS) return -EINVAL; /* Cap var_cnt to ignore banks that cannot contain a legal VP index. */ @@ -1969,12 +1969,11 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) gpa_t data_offset; /* - * The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. the - * valid mask is a u64. Fail the build if KVM's max allowed number of - * vCPUs (>4096) would exceed this limit, KVM will additional changes - * for Hyper-V support to avoid setting the guest up to fail. + * The Hyper-V TLFS doesn't allow more than HV_MAX_SPARSE_VCPU_BANKS + * sparse banks. Fail the build if KVM's max allowed number of + * vCPUs (>4096) exceeds this limit. */ - BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > 64); + BUILD_BUG_ON(KVM_HV_MAX_SPARSE_VCPU_SET_BITS > HV_MAX_SPARSE_VCPU_BANKS); if (!hc->fast && is_guest_mode(vcpu)) { hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa, 0, NULL);
-- 2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 09/34] KVM: x86: hyper-v: Don't use sparse_set_to_vcpu_mask() in kvm_hv_send_ipi()
Date: Thu, 14 Apr 2022 15:19:48 +0200
Message-Id: <20220414132013.1588929-10-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

Get rid of the on-stack allocation of vcpu_mask and optimize kvm_hv_send_ipi() for a smaller number of vCPUs in the request. When Hyper-V TLB flush is in use, HvSendSyntheticClusterIpi{,Ex} calls are not commonly used to send IPIs to a large number of vCPUs (and are rarely used in general). Introduce hv_is_vp_in_sparse_set() to directly check whether the specified VP_ID is present in the sparse vCPU set.
Signed-off-by: Vitaly Kuznetsov Reviewed-by: Maxim Levitsky --- arch/x86/kvm/hyperv.c | 37 ++++++++++++++++++++++++++----------- 1 file changed, 26 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 3cf68645a2e6..aebbb598ad1d 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -1746,6 +1746,25 @@ static void sparse_set_to_vcpu_mask(struct kvm *kvm, u64 *sparse_banks, } } +static bool hv_is_vp_in_sparse_set(u32 vp_id, u64 valid_bank_mask, u64 sparse_banks[]) +{ + int bank, sbank = 0; + + if (!test_bit(vp_id / HV_VCPUS_PER_SPARSE_BANK, + (unsigned long *)&valid_bank_mask)) + return false; + + for_each_set_bit(bank, (unsigned long *)&valid_bank_mask, + KVM_HV_MAX_SPARSE_VCPU_SET_BITS) { + if (bank == vp_id / HV_VCPUS_PER_SPARSE_BANK) + break; + sbank++; + } + + return test_bit(vp_id % HV_VCPUS_PER_SPARSE_BANK, + (unsigned long *)&sparse_banks[sbank]); +} + struct kvm_hv_hcall { u64 param; u64 ingpa; @@ -2089,8 +2108,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) ((u64)hc->rep_cnt << HV_HYPERCALL_REP_COMP_OFFSET); } -static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector, - unsigned long *vcpu_bitmap) +static void kvm_hv_send_ipi_to_many(struct kvm *kvm, u32 vector, + u64 *sparse_banks, u64 valid_bank_mask) { struct kvm_lapic_irq irq = { .delivery_mode = APIC_DM_FIXED, @@ -2100,7 +2119,10 @@ static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector, unsigned long i; kvm_for_each_vcpu(i, vcpu, kvm) { - if (vcpu_bitmap && !test_bit(i, vcpu_bitmap)) + if (sparse_banks && + !hv_is_vp_in_sparse_set(kvm_hv_get_vpindex(vcpu), + valid_bank_mask, + sparse_banks)) continue; /* We fail only when APIC is disabled */ @@ -2113,7 +2135,6 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) struct kvm *kvm = vcpu->kvm; struct hv_send_ipi_ex send_ipi_ex; struct hv_send_ipi send_ipi; - DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS); unsigned long valid_bank_mask; u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS]; u32 vector; @@ -2175,13 +2196,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR)) return HV_STATUS_INVALID_HYPERCALL_INPUT; - if (all_cpus) { - kvm_send_ipi_to_many(kvm, vector, NULL); - } else { - sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask); - - kvm_send_ipi_to_many(kvm, vector, vcpu_mask); - } + kvm_hv_send_ipi_to_many(kvm, vector, all_cpus ? NULL : sparse_banks, valid_bank_mask); ret_success: return HV_STATUS_SUCCESS;
-- 2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 10/34] KVM: x86: hyper-v: Create a separate ring for L2 TLB flush
Date: Thu, 14 Apr 2022 15:19:49 +0200
Message-Id: <20220414132013.1588929-11-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

To handle L2 TLB flush requests, KVM needs to use a separate ring from regular (L1) Hyper-V TLB flush requests: e.g. when a request to flush something in L2 is made, the target vCPU can transition from L2 to L1, receive a request to flush a GVA for L1 and then try to enter L2 back.
The first request needs to be processed at this point. Similarly, requests
to flush GVAs in L1 must wait until L2 exits to L1.

No functional change as KVM doesn't handle L2 TLB flush requests from L2
yet.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm_host.h |  8 +++++++-
 arch/x86/kvm/hyperv.c           |  8 +++++---
 arch/x86/kvm/hyperv.h           | 19 ++++++++++++++++---
 3 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b4dd2ff61658..058061621872 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -580,6 +580,12 @@ struct kvm_vcpu_hv_synic {
 
 #define KVM_HV_TLB_FLUSH_RING_SIZE (16)
 
+enum hv_tlb_flush_rings {
+	HV_L1_TLB_FLUSH_RING,
+	HV_L2_TLB_FLUSH_RING,
+	HV_NR_TLB_FLUSH_RINGS,
+};
+
 struct kvm_vcpu_hv_tlb_flush_entry {
 	u64 addr;
 	u64 flush_all:1;
@@ -612,7 +618,7 @@ struct kvm_vcpu_hv {
 		u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
 	} cpuid_cache;
 
-	struct kvm_vcpu_hv_tlb_flush_ring tlb_flush_ring;
+	struct kvm_vcpu_hv_tlb_flush_ring tlb_flush_ring[HV_NR_TLB_FLUSH_RINGS];
 };
 
 /* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index aebbb598ad1d..1cef2b8f7001 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -956,7 +956,8 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
 
 	hv_vcpu->vp_index = vcpu->vcpu_idx;
 
-	spin_lock_init(&hv_vcpu->tlb_flush_ring.write_lock);
+	for (i = 0; i < HV_NR_TLB_FLUSH_RINGS; i++)
+		spin_lock_init(&hv_vcpu->tlb_flush_ring[i].write_lock);
 
 	return 0;
 }
@@ -1852,7 +1853,8 @@ static void hv_tlb_flush_ring_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int c
 	if (!hv_vcpu)
 		return;
 
-	tlb_flush_ring = &hv_vcpu->tlb_flush_ring;
+	/* kvm_hv_flush_tlb() is not ready to handle requests for L2s yet */
+	tlb_flush_ring = &hv_vcpu->tlb_flush_ring[HV_L1_TLB_FLUSH_RING];
 
 	spin_lock_irqsave(&tlb_flush_ring->write_lock, flags);
 
@@ -1921,7 +1923,7 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
 		return;
 	}
 
-	tlb_flush_ring = &hv_vcpu->tlb_flush_ring;
+	tlb_flush_ring = kvm_hv_get_tlb_flush_ring(vcpu);
 
 	/*
 	 * TLB flush must be performed on the target vCPU so 'read_idx'
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 6847caeaaf84..d59f96700104 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -22,6 +22,7 @@
 #define __ARCH_X86_KVM_HYPERV_H__
 
 #include
+#include "x86.h"
 
 /*
  * The #defines related to the synthetic debugger are required by KDNet, but
@@ -147,15 +148,27 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args);
 int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 		     struct kvm_cpuid_entry2 __user *entries);
 
+static inline struct kvm_vcpu_hv_tlb_flush_ring *kvm_hv_get_tlb_flush_ring(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	int i = !is_guest_mode(vcpu) ? HV_L1_TLB_FLUSH_RING :
+				       HV_L2_TLB_FLUSH_RING;
+
+	/* KVM does not handle L2 TLB flush requests yet */
+	WARN_ON_ONCE(i != HV_L1_TLB_FLUSH_RING);
+
+	return &hv_vcpu->tlb_flush_ring[i];
+}
 
 static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	struct kvm_vcpu_hv_tlb_flush_ring *tlb_flush_ring;
 
-	if (!hv_vcpu)
+	if (!to_hv_vcpu(vcpu))
 		return;
 
-	hv_vcpu->tlb_flush_ring.read_idx = hv_vcpu->tlb_flush_ring.write_idx;
+	tlb_flush_ring = kvm_hv_get_tlb_flush_ring(vcpu);
+	tlb_flush_ring->read_idx = tlb_flush_ring->write_idx;
 }
 void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu);
 
-- 
2.35.1
From: Vitaly Kuznetsov
Subject: [PATCH v3 11/34] KVM: x86: hyper-v: Use preallocated buffer in 'struct kvm_vcpu_hv' instead of on-stack 'sparse_banks'
Date: Thu, 14 Apr 2022 15:19:50 +0200
Message-Id: <20220414132013.1588929-12-vkuznets@redhat.com>

To make kvm_hv_flush_tlb() ready to handle L2 TLB flush requests, KVM needs
to allow for all 64 sparse vCPU banks regardless of KVM_MAX_VCPUS as L1
may use vCPU overcommit for L2.
To avoid growing on-stack allocation, make 'sparse_banks' part of per-vCPU
'struct kvm_vcpu_hv' which is allocated dynamically.

Note: sparse_set_to_vcpu_mask() keeps using on-stack allocation as it won't
be used to handle L2 TLB flush requests.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm_host.h | 3 +++
 arch/x86/kvm/hyperv.c           | 6 ++++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 058061621872..837c07e213de 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -619,6 +619,9 @@ struct kvm_vcpu_hv {
 	} cpuid_cache;
 
 	struct kvm_vcpu_hv_tlb_flush_ring tlb_flush_ring[HV_NR_TLB_FLUSH_RINGS];
+
+	/* Preallocated buffer for handling hypercalls passing sparse vCPU set */
+	u64 sparse_banks[64];
 };
 
 /* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 1cef2b8f7001..e9793d36acca 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1968,6 +1968,8 @@ void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
 
 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 {
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	u64 *sparse_banks = hv_vcpu->sparse_banks;
 	struct kvm *kvm = vcpu->kvm;
 	struct hv_tlb_flush_ex flush_ex;
 	struct hv_tlb_flush flush;
@@ -1982,7 +1984,6 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_RING_SIZE - 2];
 	u64 *tlb_flush_entries;
 	u64 valid_bank_mask;
-	u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
 	struct kvm_vcpu *v;
 	unsigned long i;
 	bool all_cpus;
@@ -2134,11 +2135,12 @@ static void kvm_hv_send_ipi_to_many(struct kvm *kvm, u32 vector,
 
 static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 {
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	u64 *sparse_banks = hv_vcpu->sparse_banks;
 	struct kvm *kvm = vcpu->kvm;
 	struct hv_send_ipi_ex send_ipi_ex;
 	struct hv_send_ipi send_ipi;
 	unsigned long valid_bank_mask;
-	u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
 	u32 vector;
 	bool all_cpus;
 
-- 
2.35.1
From: Vitaly Kuznetsov
Subject: [PATCH v3 12/34] KVM: nVMX: Keep track of hv_vm_id/hv_vp_id when eVMCS is in use
Date: Thu, 14 Apr 2022 15:19:51 +0200
Message-Id: <20220414132013.1588929-13-vkuznets@redhat.com>

To handle L2 TLB flush requests, KVM needs to keep track of L2's VM_ID/
VP_IDs which are set by the L1 hypervisor. The 'Partition assist page'
address is also needed to handle post-flush exit to L1 upon request.
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm_host.h |  6 ++++++
 arch/x86/kvm/vmx/nested.c       | 15 +++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 837c07e213de..8b2a52bf26c0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -622,6 +622,12 @@ struct kvm_vcpu_hv {
 
 	/* Preallocated buffer for handling hypercalls passing sparse vCPU set */
 	u64 sparse_banks[64];
+
+	struct {
+		u64 pa_page_gpa;
+		u64 vm_id;
+		u32 vp_id;
+	} nested;
 };
 
 /* Xen HVM per vcpu emulation context */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index a6688663da4d..ee88921c6156 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -225,6 +225,7 @@ static void vmx_disable_shadow_vmcs(struct vcpu_vmx *vmx)
 
 static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 {
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
 	if (evmptr_is_valid(vmx->nested.hv_evmcs_vmptr)) {
@@ -233,6 +234,12 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	}
 
 	vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
+
+	if (hv_vcpu) {
+		hv_vcpu->nested.pa_page_gpa = INVALID_GPA;
+		hv_vcpu->nested.vm_id = 0;
+		hv_vcpu->nested.vp_id = 0;
+	}
 }
 
 static void vmx_sync_vmcs_host_state(struct vcpu_vmx *vmx,
@@ -1591,11 +1598,19 @@ static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx, u32 hv_clean_fields
 {
 	struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
 	struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(&vmx->vcpu);
 
 	/* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
 	vmcs12->tpr_threshold = evmcs->tpr_threshold;
 	vmcs12->guest_rip = evmcs->guest_rip;
 
+	if (unlikely(!(hv_clean_fields &
+		       HV_VMX_ENLIGHTENED_CLEAN_FIELD_ENLIGHTENMENTSCONTROL))) {
+		hv_vcpu->nested.pa_page_gpa = evmcs->partition_assist_page;
+		hv_vcpu->nested.vm_id = evmcs->hv_vm_id;
+		hv_vcpu->nested.vp_id = evmcs->hv_vp_id;
+	}
+
 	if (unlikely(!(hv_clean_fields &
 		       HV_VMX_ENLIGHTENED_CLEAN_FIELD_GUEST_BASIC))) {
 		vmcs12->guest_rsp = evmcs->guest_rsp;
-- 
2.35.1
From: Vitaly Kuznetsov
Subject: [PATCH v3 13/34] KVM: nSVM: Keep track of Hyper-V hv_vm_id/hv_vp_id
Date: Thu, 14 Apr 2022 15:19:52 +0200
Message-Id: <20220414132013.1588929-14-vkuznets@redhat.com>

Similar to nVMX, KVM needs to know L2's VM_ID/VP_ID and Partition assist
page address to handle L2 TLB flush requests.
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/svm/hyperv.h | 16 ++++++++++++++++
 arch/x86/kvm/svm/nested.c |  2 ++
 2 files changed, 18 insertions(+)

diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index 7d6d97968fb9..8cf702fed7e5 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -9,6 +9,7 @@
 #include
 
 #include "../hyperv.h"
+#include "svm.h"
 
 /*
  * Hyper-V uses the software reserved 32 bytes in VMCB
@@ -32,4 +33,19 @@ struct hv_enlightenments {
  */
 #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
 
+static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct hv_enlightenments *hve =
+		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+	if (!hv_vcpu)
+		return;
+
+	hv_vcpu->nested.pa_page_gpa = hve->partition_assist_page;
+	hv_vcpu->nested.vm_id = hve->hv_vm_id;
+	hv_vcpu->nested.vp_id = hve->hv_vp_id;
+}
+
 #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index bed5e1692cef..2d1a76343404 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -826,6 +826,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 
 	svm->nested.nested_run_pending = 1;
 
+	nested_svm_hv_update_vm_vp_ids(vcpu);
+
 	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
 		goto out_exit_err;
 
-- 
2.35.1

From: Vitaly Kuznetsov
Subject: [PATCH v3 14/34] KVM: x86: Introduce .post_hv_l2_tlb_flush() nested hook
Date: Thu, 14 Apr 2022 15:19:53 +0200
Message-Id: <20220414132013.1588929-15-vkuznets@redhat.com>

Hyper-V supports injecting a synthetic L2->L1 exit after performing an L2
TLB flush operation but the procedure is vendor specific. Introduce the
.post_hv_l2_tlb_flush() nested hook for it.

Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/Makefile           |  3 ++-
 arch/x86/kvm/svm/hyperv.c       | 11 +++++++++++
 arch/x86/kvm/svm/hyperv.h       |  2 ++
 arch/x86/kvm/svm/nested.c       |  1 +
 arch/x86/kvm/vmx/evmcs.c        |  4 ++++
 arch/x86/kvm/vmx/evmcs.h        |  1 +
 arch/x86/kvm/vmx/nested.c       |  1 +
 8 files changed, 23 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kvm/svm/hyperv.c

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8b2a52bf26c0..ce62fde5f4ff 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1558,6 +1558,7 @@ struct kvm_x86_nested_ops {
 	int (*enable_evmcs)(struct kvm_vcpu *vcpu,
 			    uint16_t *vmcs_version);
 	uint16_t (*get_evmcs_version)(struct kvm_vcpu *vcpu);
+	void (*post_hv_l2_tlb_flush)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_x86_init_ops {
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 30f244b64523..b6d53b045692 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -25,7 +25,8 @@ kvm-intel-y	+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
 			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
 kvm-intel-$(CONFIG_X86_SGX_KVM)	+= vmx/sgx.o
 
-kvm-amd-y		+= svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
+kvm-amd-y		+= svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o \
+			   svm/sev.o svm/hyperv.o
 
 ifdef CONFIG_HYPERV
 kvm-amd-y		+= svm/svm_onhyperv.o
diff --git a/arch/x86/kvm/svm/hyperv.c b/arch/x86/kvm/svm/hyperv.c
new file mode 100644
index 000000000000..c0749fc282fe
--- /dev/null
+++ b/arch/x86/kvm/svm/hyperv.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD SVM specific code for Hyper-V on KVM.
+ *
+ * Copyright 2022 Red Hat, Inc. and/or its affiliates.
+ */
+#include "hyperv.h"
+
+void svm_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu)
+{
+}
diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index 8cf702fed7e5..a2b0d7580b0d 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -48,4 +48,6 @@ static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
 	hv_vcpu->nested.vp_id = hve->hv_vp_id;
 }
 
+void svm_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu);
+
 #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 2d1a76343404..de3f27301b5c 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1665,4 +1665,5 @@ struct kvm_x86_nested_ops svm_nested_ops = {
 	.get_nested_state_pages = svm_get_nested_state_pages,
 	.get_state = svm_get_nested_state,
 	.set_state = svm_set_nested_state,
+	.post_hv_l2_tlb_flush = svm_post_hv_l2_tlb_flush,
 };
diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
index 87e3dc10edf4..e390e67496df 100644
--- a/arch/x86/kvm/vmx/evmcs.c
+++ b/arch/x86/kvm/vmx/evmcs.c
@@ -437,3 +437,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
 
 	return 0;
 }
+
+void vmx_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu)
+{
+}
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index 8d70f9aea94b..b120b0ead4f3 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -244,5 +244,6 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
 			uint16_t *vmcs_version);
 void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata);
 int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
+void vmx_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu);
 
 #endif /* __KVM_X86_VMX_EVMCS_H */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ee88921c6156..cc6c944b5815 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6850,4 +6850,5 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
 	.write_log_dirty = nested_vmx_write_pml_buffer,
 	.enable_evmcs = nested_enable_evmcs,
 	.get_evmcs_version = nested_get_evmcs_version,
+	.post_hv_l2_tlb_flush = vmx_post_hv_l2_tlb_flush,
 };
-- 
2.35.1
From: Vitaly Kuznetsov
Subject: [PATCH v3 15/34] KVM: x86: hyper-v: Introduce kvm_hv_is_tlb_flush_hcall()
Date: Thu, 14 Apr 2022 15:19:54 +0200
Message-Id: <20220414132013.1588929-16-vkuznets@redhat.com>

The newly introduced helper checks whether a vCPU is performing a Hyper-V
TLB flush hypercall. This is required to filter out L2 TLB flush hypercalls
for processing.
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/hyperv.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index d59f96700104..ca67c18cef2c 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -170,6 +170,24 @@ static inline void kvm_hv_vcpu_empty_flush_tlb(struct kvm_vcpu *vcpu)
 	tlb_flush_ring = kvm_hv_get_tlb_flush_ring(vcpu);
 	tlb_flush_ring->read_idx = tlb_flush_ring->write_idx;
 }
+
+static inline bool kvm_hv_is_tlb_flush_hcall(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	u16 code;
+
+	if (!hv_vcpu)
+		return false;
+
+	code = is_64_bit_hypercall(vcpu) ? kvm_rcx_read(vcpu) :
+					   kvm_rax_read(vcpu);
+
+	return (code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
+		code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST ||
+		code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX ||
+		code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX);
+}
+
 void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu);
 
-- 
2.35.1
From: Vitaly Kuznetsov
Subject: [PATCH v3 16/34] KVM: x86: hyper-v: L2 TLB flush
Date: Thu, 14 Apr 2022 15:19:55 +0200
Message-Id: <20220414132013.1588929-17-vkuznets@redhat.com>

Handle L2 TLB flush
requests by going through all vCPUs and checking whether there are
vCPUs running the same VM_ID with a VP_ID specified in the requests.
Perform synthetic exit to L2 upon finish.

Note, while checking VM_ID/VP_ID of running vCPUs seems to be a bit
racy, we count on the fact that KVM flushes the whole L2 VPID upon
transition. Also, KVM_REQ_HV_TLB_FLUSH request needs to be done upon
transition between L1 and L2 to make sure all pending requests are
always processed.

For the reference, Hyper-V TLFS refers to the feature as "Direct
Virtual Flush".

Note, nVMX/nSVM code does not handle VMCALL/VMMCALL from L2 yet.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/hyperv.c | 73 ++++++++++++++++++++++++++++++++++++-------
 arch/x86/kvm/hyperv.h |  3 ---
 arch/x86/kvm/trace.h  | 21 ++++++++-----
 3 files changed, 74 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index e9793d36acca..79aabe0c33ec 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -34,6 +34,7 @@
 #include
 
 #include
+#include
 #include
 
 #include "trace.h"
@@ -1842,9 +1843,10 @@ static inline int hv_tlb_flush_ring_free(struct kvm_vcpu_hv *hv_vcpu,
 	return read_idx - write_idx - 1;
 }
 
-static void hv_tlb_flush_ring_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
+static void hv_tlb_flush_ring_enqueue(struct kvm_vcpu *vcpu,
+				      struct kvm_vcpu_hv_tlb_flush_ring *tlb_flush_ring,
+				      u64 *entries, int count)
 {
-	struct kvm_vcpu_hv_tlb_flush_ring *tlb_flush_ring;
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	int ring_free, write_idx, read_idx;
 	unsigned long flags;
@@ -1853,9 +1855,6 @@ static void hv_tlb_flush_ring_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int c
 	if (!hv_vcpu)
 		return;
 
-	/* kvm_hv_flush_tlb() is not ready to handle requests for L2s yet */
-	tlb_flush_ring = &hv_vcpu->tlb_flush_ring[HV_L1_TLB_FLUSH_RING];
-
 	spin_lock_irqsave(&tlb_flush_ring->write_lock, flags);
 
 	/*
@@ -1974,6 +1973,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	struct hv_tlb_flush_ex flush_ex;
 	struct hv_tlb_flush flush;
 	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
+	struct kvm_vcpu_hv_tlb_flush_ring *tlb_flush_ring;
 	/*
 	 * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_RING_SIZE - 1'
 	 * entries on the TLB Flush ring as when 'read_idx == write_idx' the
@@ -2018,7 +2018,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	}
 
 	trace_kvm_hv_flush_tlb(flush.processor_mask,
-			       flush.address_space, flush.flags);
+			       flush.address_space, flush.flags,
+			       is_guest_mode(vcpu));
 
 	valid_bank_mask = BIT_ULL(0);
 	sparse_banks[0] = flush.processor_mask;
@@ -2049,7 +2050,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
 				  flush_ex.hv_vp_set.format,
 				  flush_ex.address_space,
-				  flush_ex.flags);
+				  flush_ex.flags, is_guest_mode(vcpu));
 
 	valid_bank_mask = flush_ex.hv_vp_set.valid_bank_mask;
 	all_cpus = flush_ex.hv_vp_set.format !=
@@ -2083,23 +2084,54 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 		tlb_flush_entries = __tlb_flush_entries;
 	}
 
+	tlb_flush_ring = kvm_hv_get_tlb_flush_ring(vcpu);
+
 	/*
 	 * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
 	 * analyze it here, flush TLB regardless of the specified address space.
 	 */
-	if (all_cpus) {
+	if (all_cpus && !is_guest_mode(vcpu)) {
 		kvm_for_each_vcpu(i, v, kvm)
-			hv_tlb_flush_ring_enqueue(v, tlb_flush_entries, hc->rep_cnt);
+			hv_tlb_flush_ring_enqueue(v, tlb_flush_ring,
+						  tlb_flush_entries, hc->rep_cnt);
 
 		kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
-	} else {
+	} else if (!is_guest_mode(vcpu)) {
 		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask, vcpu_mask);
 
 		for_each_set_bit(i, vcpu_mask, KVM_MAX_VCPUS) {
 			v = kvm_get_vcpu(kvm, i);
 			if (!v)
 				continue;
-			hv_tlb_flush_ring_enqueue(v, tlb_flush_entries, hc->rep_cnt);
+			hv_tlb_flush_ring_enqueue(v, tlb_flush_ring,
+						  tlb_flush_entries, hc->rep_cnt);
+		}
+
+		kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
+	} else {
+		struct kvm_vcpu_hv *hv_v;
+
+		bitmap_zero(vcpu_mask, KVM_MAX_VCPUS);
+
+		kvm_for_each_vcpu(i, v, kvm) {
+			hv_v = to_hv_vcpu(v);
+
+			/*
+			 * TLB is fully flushed on L2 VM change: either by KVM
+			 * (on a eVMPTR switch) or by L1 hypervisor (in case it
+			 * re-purposes the active eVMCS for a different VM/VP).
+ */ + if (!hv_v || hv_v->nested.vm_id !=3D hv_vcpu->nested.vm_id) + continue; + + if (!all_cpus && + !hv_is_vp_in_sparse_set(hv_v->nested.vp_id, valid_bank_mask, + sparse_banks)) + continue; + + __set_bit(i, vcpu_mask); + hv_tlb_flush_ring_enqueue(v, tlb_flush_ring, + tlb_flush_entries, hc->rep_cnt); } =20 kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask); @@ -2287,10 +2319,27 @@ static void kvm_hv_hypercall_set_result(struct kvm_= vcpu *vcpu, u64 result) =20 static int kvm_hv_hypercall_complete(struct kvm_vcpu *vcpu, u64 result) { + int ret; + trace_kvm_hv_hypercall_done(result); kvm_hv_hypercall_set_result(vcpu, result); ++vcpu->stat.hypercalls; - return kvm_skip_emulated_instruction(vcpu); + ret =3D kvm_skip_emulated_instruction(vcpu); + + if (unlikely(hv_result_success(result) && is_guest_mode(vcpu) + && kvm_hv_is_tlb_flush_hcall(vcpu))) { + struct kvm_vcpu_hv *hv_vcpu =3D to_hv_vcpu(vcpu); + u32 tlb_lock_count; + + if (unlikely(kvm_read_guest(vcpu->kvm, hv_vcpu->nested.pa_page_gpa, + &tlb_lock_count, sizeof(tlb_lock_count)))) + kvm_inject_gp(vcpu, 0); + + if (tlb_lock_count) + kvm_x86_ops.nested_ops->post_hv_l2_tlb_flush(vcpu); + } + + return ret; } =20 static int kvm_hv_hypercall_complete_userspace(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h index ca67c18cef2c..f593c9fd1dee 100644 --- a/arch/x86/kvm/hyperv.h +++ b/arch/x86/kvm/hyperv.h @@ -154,9 +154,6 @@ static inline struct kvm_vcpu_hv_tlb_flush_ring *kvm_hv= _get_tlb_flush_ring(struc int i =3D !is_guest_mode(vcpu) ? HV_L1_TLB_FLUSH_RING : HV_L2_TLB_FLUSH_RING; =20 - /* KVM does not handle L2 TLB flush requests yet */ - WARN_ON_ONCE(i !=3D HV_L1_TLB_FLUSH_RING); - return &hv_vcpu->tlb_flush_ring[i]; } =20 diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h index e3a24b8f04be..af7896182935 100644 --- a/arch/x86/kvm/trace.h +++ b/arch/x86/kvm/trace.h @@ -1479,38 +1479,41 @@ TRACE_EVENT(kvm_hv_timer_state, * Tracepoint for kvm_hv_flush_tlb. 
*/ TRACE_EVENT(kvm_hv_flush_tlb, - TP_PROTO(u64 processor_mask, u64 address_space, u64 flags), - TP_ARGS(processor_mask, address_space, flags), + TP_PROTO(u64 processor_mask, u64 address_space, u64 flags, bool guest_mod= e), + TP_ARGS(processor_mask, address_space, flags, guest_mode), =20 TP_STRUCT__entry( __field(u64, processor_mask) __field(u64, address_space) __field(u64, flags) + __field(bool, guest_mode) ), =20 TP_fast_assign( __entry->processor_mask =3D processor_mask; __entry->address_space =3D address_space; __entry->flags =3D flags; + __entry->guest_mode =3D guest_mode; ), =20 - TP_printk("processor_mask 0x%llx address_space 0x%llx flags 0x%llx", + TP_printk("processor_mask 0x%llx address_space 0x%llx flags 0x%llx %s", __entry->processor_mask, __entry->address_space, - __entry->flags) + __entry->flags, __entry->guest_mode ? "(L2)" : "") ); =20 /* * Tracepoint for kvm_hv_flush_tlb_ex. */ TRACE_EVENT(kvm_hv_flush_tlb_ex, - TP_PROTO(u64 valid_bank_mask, u64 format, u64 address_space, u64 flags), - TP_ARGS(valid_bank_mask, format, address_space, flags), + TP_PROTO(u64 valid_bank_mask, u64 format, u64 address_space, u64 flags, b= ool guest_mode), + TP_ARGS(valid_bank_mask, format, address_space, flags, guest_mode), =20 TP_STRUCT__entry( __field(u64, valid_bank_mask) __field(u64, format) __field(u64, address_space) __field(u64, flags) + __field(bool, guest_mode) ), =20 TP_fast_assign( @@ -1518,12 +1521,14 @@ TRACE_EVENT(kvm_hv_flush_tlb_ex, __entry->format =3D format; __entry->address_space =3D address_space; __entry->flags =3D flags; + __entry->guest_mode =3D guest_mode; ), =20 TP_printk("valid_bank_mask 0x%llx format 0x%llx " - "address_space 0x%llx flags 0x%llx", + "address_space 0x%llx flags 0x%llx %s", __entry->valid_bank_mask, __entry->format, - __entry->address_space, __entry->flags) + __entry->address_space, __entry->flags, + __entry->guest_mode ? 
"(L2)" : "") ); =20 /* --=20 2.35.1 From nobody Mon May 11 04:11:51 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7377DC433F5 for ; Thu, 14 Apr 2022 14:11:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347005AbiDNOLg (ORCPT ); Thu, 14 Apr 2022 10:11:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47730 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244871AbiDNN2N (ORCPT ); Thu, 14 Apr 2022 09:28:13 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A8143A76D0 for ; Thu, 14 Apr 2022 06:21:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649942468; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=UOg2AgFte5MATNuwhmZnhfjrYREecC41nw9W9kik+X8=; b=NYvdOHZ450hBoa/GTY4Be6uKlsaneN1uWd6EpAEgSFpaaDk6h+S2YNvqt4G20SN5nHucRi FpXDUzGW2V0VaBIUzO1wCEZG1t6AS7Vk931k5R/LiMnCpphcC28imu2Y5yItX+1Nnramy+ j7wsSnXVVsTC6DXwwXgsDdu7enMMHjs= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-660-aDOaM6_EPo2bbTwEaNYNuw-1; Thu, 14 Apr 2022 09:21:05 -0400 X-MC-Unique: aDOaM6_EPo2bbTwEaNYNuw-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com 
(Postfix) with ESMTPS id 3B9E989FF06; Thu, 14 Apr 2022 13:20:51 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.40.195.11]) by smtp.corp.redhat.com (Postfix) with ESMTP id 7C3DD7774; Thu, 14 Apr 2022 13:20:49 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 17/34] KVM: x86: hyper-v: Introduce fast kvm_hv_l2_tlb_flush_exposed() check Date: Thu, 14 Apr 2022 15:19:56 +0200 Message-Id: <20220414132013.1588929-18-vkuznets@redhat.com> In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com> References: <20220414132013.1588929-1-vkuznets@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Introduce a helper to quickly check if KVM needs to handle VMCALL/VMMCALL from L2 in L0 to process L2 TLB flush requests. 
Signed-off-by: Vitaly Kuznetsov Reviewed-by: Maxim Levitsky --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/hyperv.c | 6 ++++++ arch/x86/kvm/hyperv.h | 7 +++++++ 3 files changed, 14 insertions(+) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index ce62fde5f4ff..168600490bd1 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -616,6 +616,7 @@ struct kvm_vcpu_hv { u32 enlightenments_eax; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EAX */ u32 enlightenments_ebx; /* HYPERV_CPUID_ENLIGHTMENT_INFO.EBX */ u32 syndbg_cap_eax; /* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */ + u32 nested_features_eax; /* HYPERV_CPUID_NESTED_FEATURES.EAX */ } cpuid_cache; =20 struct kvm_vcpu_hv_tlb_flush_ring tlb_flush_ring[HV_NR_TLB_FLUSH_RINGS]; diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 79aabe0c33ec..68a0df4e3f66 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -2281,6 +2281,12 @@ void kvm_hv_set_cpuid(struct kvm_vcpu *vcpu) hv_vcpu->cpuid_cache.syndbg_cap_eax =3D entry->eax; else hv_vcpu->cpuid_cache.syndbg_cap_eax =3D 0; + + entry =3D kvm_find_cpuid_entry(vcpu, HYPERV_CPUID_NESTED_FEATURES, 0); + if (entry) + hv_vcpu->cpuid_cache.nested_features_eax =3D entry->eax; + else + hv_vcpu->cpuid_cache.nested_features_eax =3D 0; } =20 int kvm_hv_set_enforce_cpuid(struct kvm_vcpu *vcpu, bool enforce) diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h index f593c9fd1dee..d8cb6d70dbc8 100644 --- a/arch/x86/kvm/hyperv.h +++ b/arch/x86/kvm/hyperv.h @@ -168,6 +168,13 @@ static inline void kvm_hv_vcpu_empty_flush_tlb(struct = kvm_vcpu *vcpu) tlb_flush_ring->read_idx =3D tlb_flush_ring->write_idx; } =20 +static inline bool kvm_hv_l2_tlb_flush_exposed(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_hv *hv_vcpu =3D to_hv_vcpu(vcpu); + + return hv_vcpu && (hv_vcpu->cpuid_cache.nested_features_eax & HV_X64_NEST= ED_DIRECT_FLUSH); +} + static inline bool kvm_hv_is_tlb_flush_hcall(struct kvm_vcpu 
*vcpu) { struct kvm_vcpu_hv *hv_vcpu =3D to_hv_vcpu(vcpu); --=20 2.35.1 From nobody Mon May 11 04:11:51 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2DAD0C35280 for ; Thu, 14 Apr 2022 13:39:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344126AbiDNNj4 (ORCPT ); Thu, 14 Apr 2022 09:39:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49826 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244906AbiDNN2P (ORCPT ); Thu, 14 Apr 2022 09:28:15 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 997DDA7771 for ; Thu, 14 Apr 2022 06:21:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649942477; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ymMl8hA8K/esTw1tg/apm+DYb9LQJWsl4/Hevby8goQ=; b=Qc7WmYkJ/XDUFJQUxy3fUOtuVYsSOknpTY5mf6Ekz6fxbwo7MKknsLoHXAzrvg+fddAxEi x+T5fBXJmRHaGbVriX6JO3p1mCfl/TQOoMVXOKIy7lhj1rjBHtKXxSMoKd/p0LgzT5TArb yOe2idevwhJFwG1qADsUd+v4O+ABV8c= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-480-9uE9hCFMNhWH1RBS7kJ8pQ-1; Thu, 14 Apr 2022 09:21:12 -0400 X-MC-Unique: 9uE9hCFMNhWH1RBS7kJ8pQ-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) 
by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 39C212A5957A; Thu, 14 Apr 2022 13:20:53 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.40.195.11]) by smtp.corp.redhat.com (Postfix) with ESMTP id 7A01C7C28; Thu, 14 Apr 2022 13:20:51 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 18/34] x86/hyperv: Fix 'struct hv_enlightened_vmcs' definition Date: Thu, 14 Apr 2022 15:19:57 +0200 Message-Id: <20220414132013.1588929-19-vkuznets@redhat.com> In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com> References: <20220414132013.1588929-1-vkuznets@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Section 1.9 of TLFS v6.0b says: "All structures are padded in such a way that fields are aligned naturally (that is, an 8-byte field is aligned to an offset of 8 bytes and so on)". 'struct enlightened_vmcs' has a glitch: ... struct { u32 nested_flush_hypercall:1; /* 836: 0 4= */ u32 msr_bitmap:1; /* 836: 1 4 */ u32 reserved:30; /* 836: 2 4 */ } hv_enlightenments_control; /* 836 4 */ u32 hv_vp_id; /* 840 4 */ u64 hv_vm_id; /* 844 8 */ u64 partition_assist_page; /* 852 8 */ ... And the observed values in 'partition_assist_page' make no sense at all. Fix the layout by padding the structure properly. 
Fixes: 68d1eb72ee99 ("x86/hyper-v: define struct hv_enlightened_vmcs and cl= ean field bits") Reviewed-by: Michael Kelley Signed-off-by: Vitaly Kuznetsov Reviewed-by: Maxim Levitsky --- arch/x86/include/asm/hyperv-tlfs.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hype= rv-tlfs.h index 5225a85c08c3..e7ddae8e02c6 100644 --- a/arch/x86/include/asm/hyperv-tlfs.h +++ b/arch/x86/include/asm/hyperv-tlfs.h @@ -548,7 +548,7 @@ struct hv_enlightened_vmcs { u64 guest_rip; =20 u32 hv_clean_fields; - u32 hv_padding_32; + u32 padding32_1; u32 hv_synthetic_controls; struct { u32 nested_flush_hypercall:1; @@ -556,7 +556,7 @@ struct hv_enlightened_vmcs { u32 reserved:30; } __packed hv_enlightenments_control; u32 hv_vp_id; - + u32 padding32_2; u64 hv_vm_id; u64 partition_assist_page; u64 padding64_4[4]; --=20 2.35.1 From nobody Mon May 11 04:11:51 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C12D6C4707A for ; Thu, 14 Apr 2022 14:01:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346392AbiDNN4r (ORCPT ); Thu, 14 Apr 2022 09:56:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46882 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244938AbiDNN2T (ORCPT ); Thu, 14 Apr 2022 09:28:19 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E05C29233C for ; Thu, 14 Apr 2022 06:21:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649942479; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MT8vgLDxgMK5gNkAuk585zBCwERy4TI/VOrbfeHlFRQ=; b=eTKv6HZVlS8VtMdr8W90r8ookSo89ggoGl03U0YB0m2ZApncE2giEOYzWjWcvhgPvOicUt ghxWhA8cIklea6JTGaA0Pb09ablHXSLfAPs2SLwQAcWe8/neEpQMiFSvwUQ3qM9PwoGcpV 9RcTLhJs2Kh+UG78lHWrAy283g/Sa6E= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-671-oir2jDqAMtm5tEXIoFUR-w-1; Thu, 14 Apr 2022 09:21:15 -0400 X-MC-Unique: oir2jDqAMtm5tEXIoFUR-w-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 093DF803533; Thu, 14 Apr 2022 13:20:55 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.40.195.11]) by smtp.corp.redhat.com (Postfix) with ESMTP id 77BD453CD; Thu, 14 Apr 2022 13:20:53 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 19/34] KVM: nVMX: hyper-v: Enable L2 TLB flush Date: Thu, 14 Apr 2022 15:19:58 +0200 Message-Id: <20220414132013.1588929-20-vkuznets@redhat.com> In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com> References: <20220414132013.1588929-1-vkuznets@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Enable L2 TLB flush feature on nVMX when: - Enlightened VMCS is in use. - The feature flag is enabled in eVMCS. 
- The feature flag is enabled in partition assist page. Perform synthetic vmexit to L1 after processing TLB flush call upon request (HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH). Signed-off-by: Vitaly Kuznetsov Reviewed-by: Maxim Levitsky --- arch/x86/kvm/vmx/evmcs.c | 20 ++++++++++++++++++++ arch/x86/kvm/vmx/evmcs.h | 10 ++++++++++ arch/x86/kvm/vmx/nested.c | 16 ++++++++++++++++ 3 files changed, 46 insertions(+) diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c index e390e67496df..e0cb2e223daa 100644 --- a/arch/x86/kvm/vmx/evmcs.c +++ b/arch/x86/kvm/vmx/evmcs.c @@ -6,6 +6,7 @@ #include "../hyperv.h" #include "../cpuid.h" #include "evmcs.h" +#include "nested.h" #include "vmcs.h" #include "vmx.h" #include "trace.h" @@ -438,6 +439,25 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu, return 0; } =20 +bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu) +{ + struct vcpu_vmx *vmx =3D to_vmx(vcpu); + struct hv_enlightened_vmcs *evmcs =3D vmx->nested.hv_evmcs; + struct hv_vp_assist_page assist_page; + + if (!evmcs) + return false; + + if (!evmcs->hv_enlightenments_control.nested_flush_hypercall) + return false; + + if (unlikely(!kvm_hv_get_assist_page(vcpu, &assist_page))) + return false; + + return assist_page.nested_control.features.directhypercall; +} + void vmx_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu) { + nested_vmx_vmexit(vcpu, HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH, 0,= 0); } diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h index b120b0ead4f3..ddbdb557cc53 100644 --- a/arch/x86/kvm/vmx/evmcs.h +++ b/arch/x86/kvm/vmx/evmcs.h @@ -65,6 +65,15 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs); #define EVMCS1_UNSUPPORTED_VMENTRY_CTRL (VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CT= RL) #define EVMCS1_UNSUPPORTED_VMFUNC (VMX_VMFUNC_EPTP_SWITCHING) =20 +/* + * Note, Hyper-V isn't actually stealing bit 28 from Intel, just abusing i= t by + * pairing it with architecturally impossible exit reasons. 
Bit 28 is set= only + * on SMI exits to a SMI transfer monitor (STM) and if and only if a MTF V= M-Exit + * is pending. I.e. it will never be set by hardware for non-SMI exits (t= here + * are only three), nor will it ever be set unless the VMM is an STM. + */ +#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031 + struct evmcs_field { u16 offset; u16 clean_field; @@ -244,6 +253,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu, uint16_t *vmcs_version); void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata); int nested_evmcs_check_controls(struct vmcs12 *vmcs12); +bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu); void vmx_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu); =20 #endif /* __KVM_X86_VMX_EVMCS_H */ diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index cc6c944b5815..3e2ef5edad4a 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -1170,6 +1170,17 @@ static void nested_vmx_transition_tlb_flush(struct k= vm_vcpu *vcpu, { struct vcpu_vmx *vmx =3D to_vmx(vcpu); =20 + /* + * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or + * L2's VP_ID upon request from the guest. Make sure we check for + * pending entries for the case when the request got misplaced (e.g. + * a transition from L2->L1 happened while processing L2 TLB flush + * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush + * anything if there are no requests in the corresponding buffer. + */ + if (to_hv_vcpu(vcpu)) + kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu); + /* * If vmcs12 doesn't use VPID, L1 expects linear and combined mappings * for *all* contexts to be flushed on VM-Enter/VM-Exit, i.e. it's a @@ -5997,6 +6008,11 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu= *vcpu, * Handle L2's bus locks in L0 directly. 
*/ return true; + case EXIT_REASON_VMCALL: + /* Hyper-V L2 TLB flush hypercall is handled by L0 */ + return kvm_hv_l2_tlb_flush_exposed(vcpu) && + nested_evmcs_l2_tlb_flush_enabled(vcpu) && + kvm_hv_is_tlb_flush_hcall(vcpu); default: break; } --=20 2.35.1 From nobody Mon May 11 04:11:51 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2407FC35280 for ; Thu, 14 Apr 2022 14:01:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346442AbiDNN5H (ORCPT ); Thu, 14 Apr 2022 09:57:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49784 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244894AbiDNN2O (ORCPT ); Thu, 14 Apr 2022 09:28:14 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 6F485A775F for ; Thu, 14 Apr 2022 06:21:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649942474; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=8GnOa2nJ6fSpwmAR7Vt3Kl/N0xAjTqupgL1N6sCmJHI=; b=VvWXyvOKQkf7uZy6USwiMF0ho9UivF5S5DzFhw9mXu5ONCOTcrKkTVB+lj/RLwsXm/sR/L rMkk4p86RynQ9+C2kolRZU/kOXuslOQE1vNgYuhx9IhbtStfjU5DlTE+Iyk/eBJM8oF9We U3k7Cy5n3vrol+77sNXDafn9CMWLh7k= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-513-iruDZws-PEq6HuEuzVViwg-1; Thu, 14 Apr 2022 09:21:11 -0400 X-MC-Unique: 
iruDZws-PEq6HuEuzVViwg-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 05910811E83; Thu, 14 Apr 2022 13:20:57 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.40.195.11]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4521A7774; Thu, 14 Apr 2022 13:20:55 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 20/34] KVM: x86: KVM_REQ_TLB_FLUSH_CURRENT is a superset of KVM_REQ_HV_TLB_FLUSH too Date: Thu, 14 Apr 2022 15:19:59 +0200 Message-Id: <20220414132013.1588929-21-vkuznets@redhat.com> In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com> References: <20220414132013.1588929-1-vkuznets@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" KVM_REQ_TLB_FLUSH_CURRENT is an even stronger operation than KVM_REQ_TLB_FLUSH_GUEST so KVM_REQ_HV_TLB_FLUSH needs not to be processed after it. 
Signed-off-by: Vitaly Kuznetsov --- arch/x86/kvm/x86.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index e5aec386d299..d3839e648ab3 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3357,8 +3357,11 @@ static inline void kvm_vcpu_flush_tlb_current(struct= kvm_vcpu *vcpu) */ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu) { - if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) + if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) { kvm_vcpu_flush_tlb_current(vcpu); + if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu)) + kvm_hv_vcpu_empty_flush_tlb(vcpu); + } =20 if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) { kvm_vcpu_flush_tlb_guest(vcpu); --=20 2.35.1 From nobody Mon May 11 04:11:51 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49EFDC3A5A3 for ; Thu, 14 Apr 2022 13:39:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344143AbiDNNkB (ORCPT ); Thu, 14 Apr 2022 09:40:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53498 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244926AbiDNN2S (ORCPT ); Thu, 14 Apr 2022 09:28:18 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A378BA8894 for ; Thu, 14 Apr 2022 06:21:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649942478; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
bh=Eg7OCRbNJrhM4raGwDzXlbxhUPbRgxis/FVMtvmwPPE=; b=JGhpUIRnqoATYmzQn2G1y0pyPcL1+WQFptNsQiVCAMCDyKoM/E1wICkGkgBcTU8V1PkwaG 3p+PZ18YXGim+0LQZsgWxNaZHErueV63sjgaPS5Yinphjt2dui2eSSpW8XJoxoMDU16xfP bOBVNU9katQ2AKToZtX8FtXgeUIHxfw= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-118-TuULmECKMIOWxv8ABHaDGA-1; Thu, 14 Apr 2022 09:21:12 -0400 X-MC-Unique: TuULmECKMIOWxv8ABHaDGA-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6FF672A59559; Thu, 14 Apr 2022 13:20:59 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.40.195.11]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4E5F57C28; Thu, 14 Apr 2022 13:20:57 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson , Michael Kelley , Siddharth Chandrasekaran , linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 21/34] KVM: nSVM: hyper-v: Enable L2 TLB flush Date: Thu, 14 Apr 2022 15:20:00 +0200 Message-Id: <20220414132013.1588929-22-vkuznets@redhat.com> In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com> References: <20220414132013.1588929-1-vkuznets@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Implement Hyper-V L2 TLB flush for nSVM. The feature needs to be enabled both in extended 'nested controls' in VMCB and partition assist page. According to Hyper-V TLFS, synthetic vmexit to L1 is performed with - HV_SVM_EXITCODE_ENL exit_code. - HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH exit_info_1. 
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/svm/hyperv.c |  7 +++++++
 arch/x86/kvm/svm/hyperv.h | 19 +++++++++++++++++++
 arch/x86/kvm/svm/nested.c | 22 +++++++++++++++++++++-
 3 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/hyperv.c b/arch/x86/kvm/svm/hyperv.c
index c0749fc282fe..3842548bb88c 100644
--- a/arch/x86/kvm/svm/hyperv.c
+++ b/arch/x86/kvm/svm/hyperv.c
@@ -8,4 +8,11 @@

 void svm_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	svm->vmcb->control.exit_code = HV_SVM_EXITCODE_ENL;
+	svm->vmcb->control.exit_code_hi = 0;
+	svm->vmcb->control.exit_info_1 = HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH;
+	svm->vmcb->control.exit_info_2 = 0;
+	nested_svm_vmexit(svm);
 }
diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index a2b0d7580b0d..cd33e89f9f61 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -33,6 +33,9 @@ struct hv_enlightenments {
  */
 #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW

+#define HV_SVM_EXITCODE_ENL			0xF0000000
+#define HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH	(1)
+
 static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -48,6 +51,22 @@ static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
 	hv_vcpu->nested.vp_id = hve->hv_vp_id;
 }

+static inline bool nested_svm_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct hv_enlightenments *hve =
+		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
+	struct hv_vp_assist_page assist_page;
+
+	if (unlikely(!kvm_hv_get_assist_page(vcpu, &assist_page)))
+		return false;
+
+	if (!hve->hv_enlightenments_control.nested_flush_hypercall)
+		return false;
+
+	return assist_page.nested_control.features.directhypercall;
+}
+
 void svm_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu);

 #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index de3f27301b5c..a6d9807c09b1 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -172,7 +172,8 @@ void recalc_intercepts(struct vcpu_svm *svm)
 	}

 	/* We don't want to see VMMCALLs from a nested guest */
-	vmcb_clr_intercept(c, INTERCEPT_VMMCALL);
+	if (!nested_svm_l2_tlb_flush_enabled(&svm->vcpu))
+		vmcb_clr_intercept(c, INTERCEPT_VMMCALL);

 	for (i = 0; i < MAX_INTERCEPT; i++)
 		c->intercepts[i] |= g->intercepts[i];
@@ -488,6 +489,17 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,

 static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or
+	 * L2's VP_ID upon request from the guest. Make sure we check for
+	 * pending entries for the case when the request got misplaced (e.g.
+	 * a transition from L2->L1 happened while processing L2 TLB flush
+	 * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
+	 * anything if there are no requests in the corresponding buffer.
+	 */
+	if (to_hv_vcpu(vcpu))
+		kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
+
 	/*
 	 * TODO: optimize unconditional TLB flush/MMU sync. A partial list of
 	 * things to fix before this can be conditional:
@@ -1357,6 +1369,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 int nested_svm_exit_special(struct vcpu_svm *svm)
 {
 	u32 exit_code = svm->vmcb->control.exit_code;
+	struct kvm_vcpu *vcpu = &svm->vcpu;

 	switch (exit_code) {
 	case SVM_EXIT_INTR:
@@ -1375,6 +1388,13 @@ int nested_svm_exit_special(struct vcpu_svm *svm)
 			return NESTED_EXIT_HOST;
 		break;
 	}
+	case SVM_EXIT_VMMCALL:
+		/* Hyper-V L2 TLB flush hypercall is handled by L0 */
+		if (kvm_hv_l2_tlb_flush_exposed(vcpu) &&
+		    nested_svm_l2_tlb_flush_enabled(vcpu) &&
+		    kvm_hv_is_tlb_flush_hcall(vcpu))
+			return NESTED_EXIT_HOST;
+		break;
 	default:
 		break;
 	}
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 22/34] KVM: x86: Expose Hyper-V L2 TLB flush feature
Date: Thu, 14 Apr 2022 15:20:01 +0200
Message-Id: <20220414132013.1588929-23-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

With both nSVM and nVMX implementations in place, KVM can now expose the
Hyper-V L2 TLB flush feature to userspace.
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/hyperv.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 68a0df4e3f66..1d6927538bc7 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2826,6 +2826,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,

 	case HYPERV_CPUID_NESTED_FEATURES:
 		ent->eax = evmcs_ver;
+		ent->eax |= HV_X64_NESTED_DIRECT_FLUSH;
 		ent->eax |= HV_X64_NESTED_MSR_BITMAP;

 		break;
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 23/34] KVM: selftests: Better XMM read/write helpers
Date: Thu, 14 Apr 2022 15:20:02 +0200
Message-Id: <20220414132013.1588929-24-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

set_xmm()/get_xmm() helpers are fairly useless as they only read 64 bits
from 128-bit registers. Moreover, these helpers are not used. Borrow
_kvm_read_sse_reg()/_kvm_write_sse_reg() from KVM, limiting them to
XMM0-XMM7 for now.
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 .../selftests/kvm/include/x86_64/processor.h  | 70 ++++++++++---------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 37db341d4cc5..9ad7602a257b 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -296,71 +296,73 @@ static inline void cpuid(uint32_t *eax, uint32_t *ebx,
 		      : "memory");
 }

-#define SET_XMM(__var, __xmm) \
-	asm volatile("movq %0, %%"#__xmm : : "r"(__var) : #__xmm)
+typedef u32 __attribute__((vector_size(16))) sse128_t;
+#define __sse128_u union { sse128_t vec; u64 as_u64[2]; u32 as_u32[4]; }
+#define sse128_lo(x)	({ __sse128_u t; t.vec = x; t.as_u64[0]; })
+#define sse128_hi(x)	({ __sse128_u t; t.vec = x; t.as_u64[1]; })

-static inline void set_xmm(int n, unsigned long val)
+static inline void read_sse_reg(int reg, sse128_t *data)
 {
-	switch (n) {
+	switch (reg) {
 	case 0:
-		SET_XMM(val, xmm0);
+		asm("movdqa %%xmm0, %0" : "=m"(*data));
 		break;
 	case 1:
-		SET_XMM(val, xmm1);
+		asm("movdqa %%xmm1, %0" : "=m"(*data));
 		break;
 	case 2:
-		SET_XMM(val, xmm2);
+		asm("movdqa %%xmm2, %0" : "=m"(*data));
 		break;
 	case 3:
-		SET_XMM(val, xmm3);
+		asm("movdqa %%xmm3, %0" : "=m"(*data));
 		break;
 	case 4:
-		SET_XMM(val, xmm4);
+		asm("movdqa %%xmm4, %0" : "=m"(*data));
 		break;
 	case 5:
-		SET_XMM(val, xmm5);
+		asm("movdqa %%xmm5, %0" : "=m"(*data));
 		break;
 	case 6:
-		SET_XMM(val, xmm6);
+		asm("movdqa %%xmm6, %0" : "=m"(*data));
 		break;
 	case 7:
-		SET_XMM(val, xmm7);
+		asm("movdqa %%xmm7, %0" : "=m"(*data));
 		break;
+	default:
+		BUG();
 	}
 }

-#define GET_XMM(__xmm) \
-({ \
-	unsigned long __val; \
-	asm volatile("movq %%"#__xmm", %0" : "=r"(__val)); \
-	__val; \
-})
-
-static inline unsigned long get_xmm(int n)
+static inline void write_sse_reg(int reg, const sse128_t *data)
 {
-	assert(n >= 0 && n <= 7);
-
-	switch (n) {
+	switch (reg) {
 	case 0:
-		return GET_XMM(xmm0);
+		asm("movdqa %0, %%xmm0" : : "m"(*data));
+		break;
 	case 1:
-		return GET_XMM(xmm1);
+		asm("movdqa %0, %%xmm1" : : "m"(*data));
+		break;
 	case 2:
-		return GET_XMM(xmm2);
+		asm("movdqa %0, %%xmm2" : : "m"(*data));
+		break;
 	case 3:
-		return GET_XMM(xmm3);
+		asm("movdqa %0, %%xmm3" : : "m"(*data));
+		break;
 	case 4:
-		return GET_XMM(xmm4);
+		asm("movdqa %0, %%xmm4" : : "m"(*data));
+		break;
 	case 5:
-		return GET_XMM(xmm5);
+		asm("movdqa %0, %%xmm5" : : "m"(*data));
+		break;
 	case 6:
-		return GET_XMM(xmm6);
+		asm("movdqa %0, %%xmm6" : : "m"(*data));
+		break;
 	case 7:
-		return GET_XMM(xmm7);
+		asm("movdqa %0, %%xmm7" : : "m"(*data));
+		break;
+	default:
+		BUG();
 	}
-
-	/* never reached */
-	return 0;
 }

 static inline void cpu_relax(void)
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 24/34] KVM: selftests: Hyper-V PV IPI selftest
Date: Thu, 14 Apr 2022 15:20:03 +0200
Message-Id: <20220414132013.1588929-25-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

Introduce a selftest for Hyper-V PV IPI hypercalls
(HvCallSendSyntheticClusterIpi, HvCallSendSyntheticClusterIpiEx).
The test creates one 'sender' vCPU and two 'receiver' vCPUs and then issues
various combinations of send-IPI hypercalls in both 'normal' and 'fast'
(with XMM input where necessary) mode. Later, the test checks whether IPIs
were delivered to the expected destination vCPU[s].

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/hyperv.h     |   3 +
 .../selftests/kvm/x86_64/hyperv_features.c    |   5 +-
 .../testing/selftests/kvm/x86_64/hyperv_ipi.c | 374 ++++++++++++++++++
 5 files changed, 381 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_ipi.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 56140068b763..5d5fbb161d56 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -23,6 +23,7 @@
 /x86_64/hyperv_clock
 /x86_64/hyperv_cpuid
 /x86_64/hyperv_features
+/x86_64/hyperv_ipi
 /x86_64/hyperv_svm_test
 /x86_64/mmio_warning_test
 /x86_64/mmu_role_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index af582d168621..44889f897fe7 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -52,6 +52,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/fix_hypercall_test
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_clock
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_features
+TEST_GEN_PROGS_x86_64 += x86_64/hyperv_ipi
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_svm_test
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index b66910702c0a..f51d6fab8e93 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -184,5 +184,8 @@

 /* hypercall options */
 #define HV_HYPERCALL_FAST_BIT		BIT(16)
+#define HV_HYPERCALL_VARHEAD_OFFSET	17
+
+#define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)

 #endif /* !SELFTEST_KVM_HYPERV_H */
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
index 672915ce73d8..98c020356925 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
@@ -14,7 +14,6 @@
 #include "hyperv.h"

 #define VCPU_ID 0
-#define LINUX_OS_ID ((u64)0x8100 << 48)

 extern unsigned char rdmsr_start;
 extern unsigned char rdmsr_end;
@@ -127,7 +126,7 @@ static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
 	int i = 0;
 	u64 res, input, output;

-	wrmsr(HV_X64_MSR_GUEST_OS_ID, LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
 	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);

 	while (hcall->control) {
@@ -230,7 +229,7 @@ static void guest_test_msrs_access(void)
 		 */
 		msr->idx = HV_X64_MSR_GUEST_OS_ID;
 		msr->write = 1;
-		msr->write_val = LINUX_OS_ID;
+		msr->write_val = HYPERV_LINUX_OS_ID;
 		msr->available = 1;
 		break;
 	case 3:
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
new file mode 100644
index 000000000000..075963c32d45
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
@@ -0,0 +1,374 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hyper-V HvCallSendSyntheticClusterIpi{,Ex} tests
+ *
+ * Copyright (C) 2022, Red Hat, Inc.
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <pthread.h>
+#include <inttypes.h>
+
+#include "kvm_util.h"
+#include "hyperv.h"
+#include "processor.h"
+#include "test_util.h"
+#include "vmx.h"
+
+#define SENDER_VCPU_ID     1
+#define RECEIVER_VCPU_ID_1 2
+#define RECEIVER_VCPU_ID_2 65
+
+#define IPI_VECTOR 0xfe
+
+static volatile uint64_t ipis_rcvd[RECEIVER_VCPU_ID_2 + 1];
+
+struct thread_params {
+	struct kvm_vm *vm;
+	uint32_t vcpu_id;
+};
+
+struct hv_vpset {
+	u64 format;
+	u64 valid_bank_mask;
+	u64 bank_contents[2];
+};
+
+enum HV_GENERIC_SET_FORMAT {
+	HV_GENERIC_SET_SPARSE_4K,
+	HV_GENERIC_SET_ALL,
+};
+
+/* HvCallSendSyntheticClusterIpi hypercall */
+struct hv_send_ipi {
+	u32 vector;
+	u32 reserved;
+	u64 cpu_mask;
+};
+
+/* HvCallSendSyntheticClusterIpiEx hypercall */
+struct hv_send_ipi_ex {
+	u32 vector;
+	u32 reserved;
+	struct hv_vpset vp_set;
+};
+
+static inline void hv_init(vm_vaddr_t pgs_gpa)
+{
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+}
+
+static void receiver_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+{
+	u32 vcpu_id;
+
+	x2apic_enable();
+	hv_init(pgs_gpa);
+
+	vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+
+	/* Signal sender vCPU we're ready */
+	ipis_rcvd[vcpu_id] = (u64)-1;
+
+	for (;;)
+		asm volatile("sti; hlt; cli");
+}
+
+static void guest_ipi_handler(struct ex_regs *regs)
+{
+	u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+
+	ipis_rcvd[vcpu_id]++;
+	wrmsr(HV_X64_MSR_EOI, 1);
+}
+
+static inline u64 hypercall(u64 control, vm_vaddr_t arg1, vm_vaddr_t arg2)
+{
+	u64 hv_status;
+
+	asm volatile("mov %3, %%r8\n"
+		     "vmcall"
+		     : "=a" (hv_status),
+		       "+c" (control), "+d" (arg1)
+		     : "r" (arg2)
+		     : "cc", "memory", "r8", "r9", "r10", "r11");
+
+	return hv_status;
+}
+
+static inline void nop_loop(void)
+{
+	int i;
+
+	for (i = 0; i < 100000000; i++)
+		asm volatile("nop");
+}
+
+static inline void sync_to_xmm(void *data)
+{
+	int i;
+
+	for (i = 0; i < 8; i++)
+		write_sse_reg(i, (sse128_t *)(data + sizeof(sse128_t) * i));
+}
+
+static void sender_guest_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+{
+	struct hv_send_ipi *ipi = (struct hv_send_ipi *)hcall_page;
+	struct hv_send_ipi_ex *ipi_ex = (struct hv_send_ipi_ex *)hcall_page;
+	int stage = 1, ipis_expected[2] = {0};
+	u64 res;
+
+	hv_init(pgs_gpa);
+	GUEST_SYNC(stage++);
+
+	/* Wait for receiver vCPUs to come up */
+	while (!ipis_rcvd[RECEIVER_VCPU_ID_1] || !ipis_rcvd[RECEIVER_VCPU_ID_2])
+		nop_loop();
+	ipis_rcvd[RECEIVER_VCPU_ID_1] = ipis_rcvd[RECEIVER_VCPU_ID_2] = 0;
+
+	/* 'Slow' HvCallSendSyntheticClusterIpi to RECEIVER_VCPU_ID_1 */
+	ipi->vector = IPI_VECTOR;
+	ipi->cpu_mask = 1 << RECEIVER_VCPU_ID_1;
+	res = hypercall(HVCALL_SEND_IPI, pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Fast' HvCallSendSyntheticClusterIpi to RECEIVER_VCPU_ID_1 */
+	res = hypercall(HVCALL_SEND_IPI | HV_HYPERCALL_FAST_BIT,
+			IPI_VECTOR, 1 << RECEIVER_VCPU_ID_1);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Slow' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_1 */
+	memset(hcall_page, 0, 4096);
+	ipi_ex->vector = IPI_VECTOR;
+	ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+	ipi_ex->vp_set.valid_bank_mask = 1 << 0;
+	ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_1);
+	res = hypercall(HVCALL_SEND_IPI_EX | (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+			pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_1 */
+	sync_to_xmm(&ipi_ex->vp_set.valid_bank_mask);
+	res = hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+			(1 << HV_HYPERCALL_VARHEAD_OFFSET),
+			IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Slow' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_2 */
+	memset(hcall_page, 0, 4096);
+	ipi_ex->vector = IPI_VECTOR;
+	ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+	ipi_ex->vp_set.valid_bank_mask = 1 << 1;
+	ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_2 - 64);
+	res = hypercall(HVCALL_SEND_IPI_EX | (1 << HV_HYPERCALL_VARHEAD_OFFSET),
+			pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to RECEIVER_VCPU_ID_2 */
+	sync_to_xmm(&ipi_ex->vp_set.valid_bank_mask);
+	res = hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+			(1 << HV_HYPERCALL_VARHEAD_OFFSET),
+			IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Slow' HvCallSendSyntheticClusterIpiEx to both RECEIVER_VCPU_ID_{1,2} */
+	memset(hcall_page, 0, 4096);
+	ipi_ex->vector = IPI_VECTOR;
+	ipi_ex->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+	ipi_ex->vp_set.valid_bank_mask = 1 << 1 | 1;
+	ipi_ex->vp_set.bank_contents[0] = BIT(RECEIVER_VCPU_ID_1);
+	ipi_ex->vp_set.bank_contents[1] = BIT(RECEIVER_VCPU_ID_2 - 64);
+	res = hypercall(HVCALL_SEND_IPI_EX | (2 << HV_HYPERCALL_VARHEAD_OFFSET),
+			pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to both RECEIVER_VCPU_ID_{1,2} */
+	sync_to_xmm(&ipi_ex->vp_set.valid_bank_mask);
+	res = hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT |
+			(2 << HV_HYPERCALL_VARHEAD_OFFSET),
+			IPI_VECTOR, HV_GENERIC_SET_SPARSE_4K);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'Slow' HvCallSendSyntheticClusterIpiEx to HV_GENERIC_SET_ALL */
+	memset(hcall_page, 0, 4096);
+	ipi_ex->vector = IPI_VECTOR;
+	ipi_ex->vp_set.format = HV_GENERIC_SET_ALL;
+	res = hypercall(HVCALL_SEND_IPI_EX,
+			pgs_gpa, pgs_gpa + 4096);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	/* 'XMM Fast' HvCallSendSyntheticClusterIpiEx to HV_GENERIC_SET_ALL */
+	sync_to_xmm(&ipi_ex->vp_set.valid_bank_mask);
+	res = hypercall(HVCALL_SEND_IPI_EX | HV_HYPERCALL_FAST_BIT,
+			IPI_VECTOR, HV_GENERIC_SET_ALL);
+	GUEST_ASSERT((res & 0xffff) == 0);
+	nop_loop();
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_1] == ++ipis_expected[0]);
+	GUEST_ASSERT(ipis_rcvd[RECEIVER_VCPU_ID_2] == ++ipis_expected[1]);
+	GUEST_SYNC(stage++);
+
+	GUEST_DONE();
+}
+
+static void *vcpu_thread(void *arg)
+{
+	struct thread_params *params = (struct thread_params *)arg;
+	struct ucall uc;
+	int old;
+	int r;
+	unsigned int exit_reason;
+
+	r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old);
+	TEST_ASSERT(r == 0,
+		    "pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
+		    params->vcpu_id, r);
+
+	vcpu_run(params->vm, params->vcpu_id);
+	exit_reason = vcpu_state(params->vm, params->vcpu_id)->exit_reason;
+
+	TEST_ASSERT(exit_reason == KVM_EXIT_IO,
+		    "vCPU %u exited with unexpected exit reason %u-%s, expected KVM_EXIT_IO",
+		    params->vcpu_id, exit_reason, exit_reason_str(exit_reason));
+
+	if (get_ucall(params->vm, params->vcpu_id, &uc) == UCALL_ABORT) {
+		TEST_ASSERT(false,
+			    "vCPU %u exited with error: %s.\n",
+			    params->vcpu_id, (const char *)uc.args[0]);
+	}
+
+	return NULL;
+}
+
+static void cancel_join_vcpu_thread(pthread_t thread, uint32_t vcpu_id)
+{
+	void *retval;
+	int r;
+
+	r = pthread_cancel(thread);
+	TEST_ASSERT(r == 0,
+		    "pthread_cancel on vcpu_id=%d failed with errno=%d",
+		    vcpu_id, r);
+
+	r = pthread_join(thread, &retval);
+	TEST_ASSERT(r == 0,
+		    "pthread_join on vcpu_id=%d failed with errno=%d",
+		    vcpu_id, r);
+	TEST_ASSERT(retval == PTHREAD_CANCELED,
+		    "expected retval=%p, got %p", PTHREAD_CANCELED,
+		    retval);
+}
+
+int main(int argc, char *argv[])
+{
+	int r;
+	pthread_t threads[2];
+	struct thread_params params[2];
+	struct kvm_vm *vm;
+	struct kvm_run *run;
+	vm_vaddr_t hcall_page;
+	struct ucall uc;
+	int stage = 1;
+
+	vm = vm_create_default(SENDER_VCPU_ID, 0, sender_guest_code);
+	params[0].vm = vm;
+	params[1].vm = vm;
+
+	/* Hypercall input/output */
+	hcall_page = vm_vaddr_alloc_pages(vm, 2);
+	memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
+
+	vm_init_descriptor_tables(vm);
+
+	vm_vcpu_add_default(vm, RECEIVER_VCPU_ID_1, receiver_code);
+	vcpu_init_descriptor_tables(vm, RECEIVER_VCPU_ID_1);
+	vcpu_args_set(vm, RECEIVER_VCPU_ID_1, 2, hcall_page, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vm, RECEIVER_VCPU_ID_1, HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_ID_1);
+	vcpu_set_hv_cpuid(vm, RECEIVER_VCPU_ID_1);
+
+	vm_vcpu_add_default(vm, RECEIVER_VCPU_ID_2, receiver_code);
+	vcpu_init_descriptor_tables(vm, RECEIVER_VCPU_ID_2);
+	vcpu_args_set(vm, RECEIVER_VCPU_ID_2, 2, hcall_page, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vm, RECEIVER_VCPU_ID_2, HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_ID_2);
+	vcpu_set_hv_cpuid(vm, RECEIVER_VCPU_ID_2);
+
+	vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
+
+	vcpu_args_set(vm, SENDER_VCPU_ID, 2, hcall_page, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_hv_cpuid(vm, SENDER_VCPU_ID);
+
+	params[0].vcpu_id = RECEIVER_VCPU_ID_1;
+	r = pthread_create(&threads[0], NULL, vcpu_thread, &params[0]);
+	TEST_ASSERT(r == 0,
+		    "pthread_create halter failed errno=%d", errno);
+
+	params[1].vcpu_id = RECEIVER_VCPU_ID_2;
+	r = pthread_create(&threads[1], NULL, vcpu_thread, &params[1]);
+	TEST_ASSERT(r == 0,
+		    "pthread_create halter failed errno=%d", errno);
+
+	run = vcpu_state(vm, SENDER_VCPU_ID);
+
+	while (true) {
+		r = _vcpu_run(vm, SENDER_VCPU_ID);
+		TEST_ASSERT(!r, "vcpu_run failed: %d\n", r);
+		TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+			    "unexpected exit reason: %u (%s)",
			    run->exit_reason, exit_reason_str(run->exit_reason));

		switch (get_ucall(vm, SENDER_VCPU_ID, &uc)) {
		case UCALL_SYNC:
			TEST_ASSERT(uc.args[1] == stage,
				    "Unexpected stage: %ld (%d expected)\n",
				    uc.args[1], stage);
			break;
		case UCALL_ABORT:
			TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
				  __FILE__, uc.args[1]);
			return 1;
		case UCALL_DONE:
			return 0;
		}

		stage++;
	}

	cancel_join_vcpu_thread(threads[0], RECEIVER_VCPU_ID_1);
	cancel_join_vcpu_thread(threads[1], RECEIVER_VCPU_ID_2);
	kvm_vm_free(vm);

	return 0;
}
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
id B90C8C3A59D for ; Thu, 14 Apr 2022 13:39:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344087AbiDNNjm (ORCPT ); Thu, 14 Apr 2022 09:39:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49814 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244904AbiDNN2P (ORCPT ); Thu, 14 Apr 2022 09:28:15 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 6BA08A7770 for ; Thu, 14 Apr 2022 06:21:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649942477; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Zm7vWWpJoFp3QcOqQVtM2lUKGQCpQSZQDFy6GatKo98=; b=gae7PrjA0wdZUsxQlhiVf59WD6AzXJNvdwd4bEGp5JAUgLeaHRVEOuPQ2KiEHfom1TfnAs g5frFNXBIUDxx34N0uHFHf6vwYTI+ZQjAmiIeOXh1tLNWqUjJDG1ovlOpIUD/DabgLsX8P U6DEOHGTV3MaMTKiVSVoXY8I8RJM8L0= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-659-7za_fzKSNOqilB9AI3PYsA-1; Thu, 14 Apr 2022 09:21:13 -0400 X-MC-Unique: 7za_fzKSNOqilB9AI3PYsA-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E7FF91801229; Thu, 14 Apr 2022 13:21:07 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.40.195.11]) by smtp.corp.redhat.com (Postfix) with ESMTP id 56CB210725; Thu, 14 Apr 2022 13:21:06 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: 
Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 25/34] KVM: selftests: Make it possible to replace PTEs with __virt_pg_map()
Date: Thu, 14 Apr 2022 15:20:04 +0200
Message-Id: <20220414132013.1588929-26-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

__virt_pg_map() assumes that the leaf PTE is not present. This is not suitable when a test wants to replace an already present PTE; the upcoming Hyper-V PV TLB flush test is going to need exactly that.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 tools/testing/selftests/kvm/include/x86_64/processor.h | 2 +-
 tools/testing/selftests/kvm/lib/x86_64/processor.c     | 6 +++---
 tools/testing/selftests/kvm/max_guest_memory_test.c    | 2 +-
 tools/testing/selftests/kvm/x86_64/mmu_role_test.c     | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 9ad7602a257b..c20b18d05119 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -473,7 +473,7 @@ enum x86_page_size {
 	X86_PAGE_SIZE_1G,
 };
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-		   enum x86_page_size page_size);
+		   enum x86_page_size page_size, bool replace);
 
 /*
  * Basic CPU control in CR0
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..20df3e84d777 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -229,7 +229,7 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
 }
 
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-		   enum x86_page_size page_size)
+		   enum x86_page_size page_size, bool replace)
 {
 	const uint64_t pg_size = 1ull << ((page_size * 9) + 12);
 	struct pageUpperEntry *pml4e, *pdpe, *pde;
@@ -270,7 +270,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 
 	/* Fill in page table entry. */
 	pte = virt_get_pte(vm, pde->pfn, vaddr, 0);
-	TEST_ASSERT(!pte->present,
+	TEST_ASSERT(replace || !pte->present,
 		    "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
 	pte->pfn = paddr >> vm->page_shift;
 	pte->writable = true;
@@ -279,7 +279,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 
 void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
-	__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
+	__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K, false);
 }
 
 static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c
index 3875c4b23a04..437f77633b0e 100644
--- a/tools/testing/selftests/kvm/max_guest_memory_test.c
+++ b/tools/testing/selftests/kvm/max_guest_memory_test.c
@@ -244,7 +244,7 @@ int main(int argc, char *argv[])
 #ifdef __x86_64__
 	/* Identity map memory in the guest using 1gb pages. */
 	for (i = 0; i < slot_size; i += size_1gb)
-		__virt_pg_map(vm, gpa + i, gpa + i, X86_PAGE_SIZE_1G);
+		__virt_pg_map(vm, gpa + i, gpa + i, X86_PAGE_SIZE_1G, false);
 #else
 	for (i = 0; i < slot_size; i += vm_get_page_size(vm))
 		virt_pg_map(vm, gpa + i, gpa + i);
diff --git a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
index da2325fcad87..e3fdf320b9f4 100644
--- a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
+++ b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
@@ -35,7 +35,7 @@ static void mmu_role_test(u32 *cpuid_reg, u32 evil_cpuid_val)
 	run = vcpu_state(vm, VCPU_ID);
 
 	/* Map 1gb page without a backing memslot. */
-	__virt_pg_map(vm, MMIO_GPA, MMIO_GPA, X86_PAGE_SIZE_1G);
+	__virt_pg_map(vm, MMIO_GPA, MMIO_GPA, X86_PAGE_SIZE_1G, false);
 
 	r = _vcpu_run(vm, VCPU_ID);
 
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 26/34] KVM: selftests: Hyper-V PV TLB flush selftest
Date: Thu, 14 Apr 2022 15:20:05 +0200
Message-Id: <20220414132013.1588929-27-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

Introduce a selftest for Hyper-V PV TLB flush hypercalls (HvFlushVirtualAddressSpace/HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressList/HvFlushVirtualAddressListEx).
The test creates one 'sender' vCPU and two 'worker' vCPUs which run a busy loop reading from a certain GVA and checking the observed value. The sender vCPU drops to the host to swap the data page with another page filled with a different value, and the expectation for the workers is updated accordingly. Without a TLB flush on the worker vCPUs, they may keep observing the old value. To guard against accidental TLB flushes on the worker vCPUs, the test is repeated 100 times.

Hyper-V TLB flush hypercalls are tested in both 'normal' and 'XMM fast' modes.

Signed-off-by: Vitaly Kuznetsov
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/hyperv.h     |   1 +
 .../selftests/kvm/x86_64/hyperv_tlb_flush.c   | 647 ++++++++++++++++++
 4 files changed, 650 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 5d5fbb161d56..1a1d09e414d5 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -25,6 +25,7 @@
 /x86_64/hyperv_features
 /x86_64/hyperv_ipi
 /x86_64/hyperv_svm_test
+/x86_64/hyperv_tlb_flush
 /x86_64/mmio_warning_test
 /x86_64/mmu_role_test
 /x86_64/platform_info_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 44889f897fe7..8b83abc09a1a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -54,6 +54,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_features
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_ipi
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_svm_test
+TEST_GEN_PROGS_x86_64 += x86_64/hyperv_tlb_flush
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
 TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index f51d6fab8e93..1e34dd7c5075 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -185,6 +185,7 @@
 /* hypercall options */
 #define HV_HYPERCALL_FAST_BIT		BIT(16)
 #define HV_HYPERCALL_VARHEAD_OFFSET	17
+#define HV_HYPERCALL_REP_COMP_OFFSET	32
 
 #define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
 
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
new file mode 100644
index 000000000000..00bcae45ddd2
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
@@ -0,0 +1,647 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hyper-V HvFlushVirtualAddress{List,Space}{,Ex} tests
+ *
+ * Copyright (C) 2022, Red Hat, Inc.
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <pthread.h>
+#include
+
+#include "kvm_util.h"
+#include "hyperv.h"
+#include "processor.h"
+#include "test_util.h"
+#include "vmx.h"
+
+#define SENDER_VCPU_ID   1
+#define WORKER_VCPU_ID_1 2
+#define WORKER_VCPU_ID_2 65
+
+#define NTRY 100
+
+struct thread_params {
+	struct kvm_vm *vm;
+	uint32_t vcpu_id;
+};
+
+struct hv_vpset {
+	u64 format;
+	u64 valid_bank_mask;
+	u64 bank_contents[];
+};
+
+enum HV_GENERIC_SET_FORMAT {
+	HV_GENERIC_SET_SPARSE_4K,
+	HV_GENERIC_SET_ALL,
+};
+
+#define HV_FLUSH_ALL_PROCESSORS			BIT(0)
+#define HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES	BIT(1)
+#define HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY	BIT(2)
+#define HV_FLUSH_USE_EXTENDED_RANGE_FORMAT	BIT(3)
+
+/* HvFlushVirtualAddressSpace, HvFlushVirtualAddressList hypercalls */
+struct hv_tlb_flush {
+	u64 address_space;
+	u64 flags;
+	u64 processor_mask;
+	u64 gva_list[];
+} __packed;
+
+/* HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressListEx hypercalls */
+struct hv_tlb_flush_ex {
+	u64 address_space;
+	u64 flags;
+	struct hv_vpset hv_vp_set;
+	u64 gva_list[];
+} __packed;
+
+static inline void hv_init(vm_vaddr_t
pgs_gpa)
+{
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+}
+
+static void worker_code(void *test_pages, vm_vaddr_t pgs_gpa)
+{
+	u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
+	unsigned char chr;
+
+	x2apic_enable();
+	hv_init(pgs_gpa);
+
+	for (;;) {
+		chr = READ_ONCE(*(unsigned char *)(test_pages + 4096 * 2 + vcpu_id));
+		if (chr)
+			GUEST_ASSERT(*(unsigned char *)test_pages == chr);
+		asm volatile("nop");
+	}
+}
+
+static inline u64 hypercall(u64 control, vm_vaddr_t arg1, vm_vaddr_t arg2)
+{
+	u64 hv_status;
+
+	asm volatile("mov %3, %%r8\n"
+		     "vmcall"
+		     : "=a" (hv_status),
+		       "+c" (control), "+d" (arg1)
+		     : "r" (arg2)
+		     : "cc", "memory", "r8", "r9", "r10", "r11");
+
+	return hv_status;
+}
+
+static inline void nop_loop(void)
+{
+	int i;
+
+	for (i = 0; i < 10000000; i++)
+		asm volatile("nop");
+}
+
+static inline void sync_to_xmm(void *data)
+{
+	int i;
+
+	for (i = 0; i < 8; i++)
+		write_sse_reg(i, (sse128_t *)(data + sizeof(sse128_t) * i));
+}
+
+static void set_expected_char(void *addr, unsigned char chr, int vcpu_id)
+{
+	asm volatile("mfence");
+	*(unsigned char *)(addr + 2 * 4096 + vcpu_id) = chr;
+}
+
+static void sender_guest_code(void *hcall_page, void *test_pages, vm_vaddr_t pgs_gpa)
+{
+	struct hv_tlb_flush *flush = (struct hv_tlb_flush *)hcall_page;
+	struct hv_tlb_flush_ex *flush_ex = (struct hv_tlb_flush_ex *)hcall_page;
+	int stage = 1, i;
+	u64 res;
+
+	hv_init(pgs_gpa);
+
+	/* "Slow" hypercalls */
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for WORKER_VCPU_ID_1 */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE, pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff)
== 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for WORKER_VCPU_ID_1 */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+		flush->gva_list[0] = (u64)test_pages;
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for HV_FLUSH_ALL_PROCESSORS */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS;
+		flush->processor_mask = 0;
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE, pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for HV_FLUSH_ALL_PROCESSORS */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS;
+		flush->gva_list[0] = (u64)test_pages;
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for WORKER_VCPU_ID_2 */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+				(1 << HV_HYPERCALL_VARHEAD_OFFSET),
+				pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for WORKER_VCPU_ID_2 */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		/* bank_contents and gva_list occupy the same space, thus [1] */
+		flush_ex->gva_list[1] = (u64)test_pages;
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+				(1 << HV_HYPERCALL_VARHEAD_OFFSET) |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for both vCPUs */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64) |
+						      BIT_ULL(WORKER_VCPU_ID_1 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+		flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX |
+				(2 << HV_HYPERCALL_VARHEAD_OFFSET),
+				pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for both vCPUs */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_1 / 64) |
+						      BIT_ULL(WORKER_VCPU_ID_2 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+		flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		/* bank_contents and gva_list occupy the same space, thus [2] */
+		flush_ex->gva_list[2] = (u64)test_pages;
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+				(2 << HV_HYPERCALL_VARHEAD_OFFSET) |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for HV_GENERIC_SET_ALL */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX,
+				pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for HV_GENERIC_SET_ALL */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
+		flush_ex->gva_list[0] = (u64)test_pages;
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				pgs_gpa, pgs_gpa + 4096);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* "Fast" hypercalls */
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for WORKER_VCPU_ID_1 */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+		sync_to_xmm(&flush->processor_mask);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE |
+				HV_HYPERCALL_FAST_BIT, 0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_1);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for WORKER_VCPU_ID_1 */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush->processor_mask = BIT(WORKER_VCPU_ID_1);
+		flush->gva_list[0] = (u64)test_pages;
+		sync_to_xmm(&flush->processor_mask);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST | HV_HYPERCALL_FAST_BIT |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE for HV_FLUSH_ALL_PROCESSORS */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		sync_to_xmm(&flush->processor_mask);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+				HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST for HV_FLUSH_ALL_PROCESSORS */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush->gva_list[0] = (u64)test_pages;
+		sync_to_xmm(&flush->processor_mask);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST | HV_HYPERCALL_FAST_BIT |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET), 0x0,
+				HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for WORKER_VCPU_ID_2 */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		sync_to_xmm(&flush_ex->hv_vp_set);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX | HV_HYPERCALL_FAST_BIT |
+				(1 << HV_HYPERCALL_VARHEAD_OFFSET),
+				0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for WORKER_VCPU_ID_2 */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		/* bank_contents and gva_list occupy the same space, thus [1] */
+		flush_ex->gva_list[1] = (u64)test_pages;
+		sync_to_xmm(&flush_ex->hv_vp_set);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX | HV_HYPERCALL_FAST_BIT |
+				(1 << HV_HYPERCALL_VARHEAD_OFFSET) |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for both vCPUs */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_2 / 64) |
+						      BIT_ULL(WORKER_VCPU_ID_1 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+		flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		sync_to_xmm(&flush_ex->hv_vp_set);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX | HV_HYPERCALL_FAST_BIT |
+				(2 << HV_HYPERCALL_VARHEAD_OFFSET),
+				0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for both vCPUs */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
+		flush_ex->hv_vp_set.valid_bank_mask = BIT_ULL(WORKER_VCPU_ID_1 / 64) |
+						      BIT_ULL(WORKER_VCPU_ID_2 / 64);
+		flush_ex->hv_vp_set.bank_contents[0] = BIT_ULL(WORKER_VCPU_ID_1 % 64);
+		flush_ex->hv_vp_set.bank_contents[1] = BIT_ULL(WORKER_VCPU_ID_2 % 64);
+		/* bank_contents and gva_list occupy the same space, thus [2] */
+		flush_ex->gva_list[2] = (u64)test_pages;
+		sync_to_xmm(&flush_ex->hv_vp_set);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX | HV_HYPERCALL_FAST_BIT |
+				(2 << HV_HYPERCALL_VARHEAD_OFFSET) |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX for HV_GENERIC_SET_ALL */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
+		sync_to_xmm(&flush_ex->hv_vp_set);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX | HV_HYPERCALL_FAST_BIT,
+				0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ?
0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	/* HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX for HV_GENERIC_SET_ALL */
+	for (i = 0; i < NTRY; i++) {
+		memset(hcall_page, 0, 4096);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, 0x0, WORKER_VCPU_ID_2);
+		GUEST_SYNC(stage++);
+		flush_ex->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+		flush_ex->hv_vp_set.format = HV_GENERIC_SET_ALL;
+		flush_ex->gva_list[0] = (u64)test_pages;
+		sync_to_xmm(&flush_ex->hv_vp_set);
+		res = hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX | HV_HYPERCALL_FAST_BIT |
+				(1UL << HV_HYPERCALL_REP_COMP_OFFSET),
+				0x0, HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES);
+		GUEST_ASSERT((res & 0xffff) == 0);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_1);
+		set_expected_char(test_pages, i % 2 ? 0x1 : 0x2, WORKER_VCPU_ID_2);
+		nop_loop();
+	}
+
+	GUEST_DONE();
+}
+
+static void *vcpu_thread(void *arg)
+{
+	struct thread_params *params = (struct thread_params *)arg;
+	struct ucall uc;
+	int old;
+	int r;
+	unsigned int exit_reason;
+
+	r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old);
+	TEST_ASSERT(r == 0,
+		    "pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
+		    params->vcpu_id, r);
+
+	vcpu_run(params->vm, params->vcpu_id);
+	exit_reason = vcpu_state(params->vm, params->vcpu_id)->exit_reason;
+
+	TEST_ASSERT(exit_reason == KVM_EXIT_IO,
+		    "vCPU %u exited with unexpected exit reason %u-%s, expected KVM_EXIT_IO",
+		    params->vcpu_id, exit_reason, exit_reason_str(exit_reason));
+
+	if (get_ucall(params->vm, params->vcpu_id, &uc) == UCALL_ABORT) {
+		TEST_ASSERT(false,
+			    "vCPU %u exited with error: %s.\n",
+			    params->vcpu_id, (const char *)uc.args[0]);
+	}
+
+	return NULL;
+}
+
+static void cancel_join_vcpu_thread(pthread_t thread, uint32_t vcpu_id)
+{
+	void *retval;
+	int r;
+
+	r = pthread_cancel(thread);
+	TEST_ASSERT(r == 0,
+		    "pthread_cancel on vcpu_id=%d failed with errno=%d",
+		    vcpu_id, r);
+
+	r = pthread_join(thread, &retval);
+	TEST_ASSERT(r == 0,
+		    "pthread_join on vcpu_id=%d failed with errno=%d",
+		    vcpu_id, r);
+	TEST_ASSERT(retval == PTHREAD_CANCELED,
+		    "expected retval=%p, got %p", PTHREAD_CANCELED,
+		    retval);
+}
+
+int main(int argc, char *argv[])
+{
+	int r;
+	pthread_t threads[2];
+	struct thread_params params[2];
+	struct kvm_vm *vm;
+	struct kvm_run *run;
+	vm_vaddr_t hcall_page, test_pages;
+	struct ucall uc;
+	int stage = 1;
+
+	vm = vm_create_default(SENDER_VCPU_ID, 0, sender_guest_code);
+	params[0].vm = vm;
+	params[1].vm = vm;
+
+	/* Hypercall input/output */
+	hcall_page = vm_vaddr_alloc_pages(vm, 2);
+	memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
+
+	/*
+	 * Test pages: the first one is filled with '0x1's, the second with '0x2's
+	 * and the test will swap their mappings. The third page keeps the indication
+	 * about the current state of mappings.
+	 */
+	test_pages = vm_vaddr_alloc_pages(vm, 3);
+	memset(addr_gva2hva(vm, test_pages), 0x1, 4096);
+	memset(addr_gva2hva(vm, test_pages) + 4096, 0x2, 4096);
+	set_expected_char(addr_gva2hva(vm, test_pages), 0x0, WORKER_VCPU_ID_1);
+	set_expected_char(addr_gva2hva(vm, test_pages), 0x0, WORKER_VCPU_ID_2);
+
+	vm_vcpu_add_default(vm, WORKER_VCPU_ID_1, worker_code);
+	vcpu_args_set(vm, WORKER_VCPU_ID_1, 2, test_pages, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vm, WORKER_VCPU_ID_1, HV_X64_MSR_VP_INDEX, WORKER_VCPU_ID_1);
+	vcpu_set_hv_cpuid(vm, WORKER_VCPU_ID_1);
+
+	vm_vcpu_add_default(vm, WORKER_VCPU_ID_2, worker_code);
+	vcpu_args_set(vm, WORKER_VCPU_ID_2, 2, test_pages, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vm, WORKER_VCPU_ID_2, HV_X64_MSR_VP_INDEX, WORKER_VCPU_ID_2);
+	vcpu_set_hv_cpuid(vm, WORKER_VCPU_ID_2);
+
+	vcpu_args_set(vm, SENDER_VCPU_ID, 3, hcall_page, test_pages,
+		      addr_gva2gpa(vm, hcall_page));
+	vcpu_set_hv_cpuid(vm, SENDER_VCPU_ID);
+
+	params[0].vcpu_id = WORKER_VCPU_ID_1;
+	r = pthread_create(&threads[0], NULL, vcpu_thread, &params[0]);
+	TEST_ASSERT(r == 0,
+		    "pthread_create halter failed errno=%d", errno);
+
+	params[1].vcpu_id = WORKER_VCPU_ID_2;
+	r = pthread_create(&threads[1], NULL, vcpu_thread, &params[1]);
+	TEST_ASSERT(r == 0,
+		    "pthread_create halter failed errno=%d", errno);
+
+	run = vcpu_state(vm, SENDER_VCPU_ID);
+
+	while (true) {
+		r = _vcpu_run(vm, SENDER_VCPU_ID);
+		TEST_ASSERT(!r, "vcpu_run failed: %d\n", r);
+		TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+			    "unexpected exit reason: %u (%s)",
+			    run->exit_reason, exit_reason_str(run->exit_reason));
+
+		switch (get_ucall(vm, SENDER_VCPU_ID, &uc)) {
+		case UCALL_SYNC:
+			TEST_ASSERT(uc.args[1] == stage,
+				    "Unexpected stage: %ld (%d expected)\n",
+				    uc.args[1], stage);
+			break;
+		case UCALL_ABORT:
+			TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
+				  __FILE__, uc.args[1]);
+			return 1;
+		case UCALL_DONE:
+			return 0;
+		}
+
+		/* Swap test pages */
+		if (stage % 2) {
+			__virt_pg_map(vm, test_pages, addr_gva2gpa(vm, test_pages) + 4096,
+				      X86_PAGE_SIZE_4K, true);
+			__virt_pg_map(vm, test_pages + 4096, addr_gva2gpa(vm, test_pages) - 4096,
+				      X86_PAGE_SIZE_4K, true);
+		} else {
+			__virt_pg_map(vm, test_pages, addr_gva2gpa(vm, test_pages) - 4096,
+				      X86_PAGE_SIZE_4K, true);
+			__virt_pg_map(vm, test_pages + 4096, addr_gva2gpa(vm, test_pages) + 4096,
+				      X86_PAGE_SIZE_4K, true);
+		}
+
+		stage++;
+	}
+
+	cancel_join_vcpu_thread(threads[0], WORKER_VCPU_ID_1);
+	cancel_join_vcpu_thread(threads[1], WORKER_VCPU_ID_2);
+	kvm_vm_free(vm);
+
+	return 0;
+}
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
    Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 27/34] KVM: selftests: Sync 'struct hv_enlightened_vmcs' definition with hyperv-tlfs.h
Date: Thu, 14 Apr 2022 15:20:06 +0200
Message-Id: <20220414132013.1588929-28-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

The 'struct hv_enlightened_vmcs' definition in selftests is not '__packed',
so we rely on the compiler doing the right padding. This is not obvious, so
it seems beneficial to use the same definition as in the kernel.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 tools/testing/selftests/kvm/include/x86_64/evmcs.h | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index cc5d14a45702..b6067b555110 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -41,6 +41,8 @@ struct hv_enlightened_vmcs {
 	u16 host_gs_selector;
 	u16 host_tr_selector;
 
+	u16 padding16_1;
+
 	u64 host_ia32_pat;
 	u64 host_ia32_efer;
 
@@ -159,7 +161,7 @@ struct hv_enlightened_vmcs {
 	u64 ept_pointer;
 
 	u16 virtual_processor_id;
-	u16 padding16[3];
+	u16 padding16_2[3];
 
 	u64 padding64_2[5];
 	u64 guest_physical_address;
@@ -195,15 +197,15 @@ struct hv_enlightened_vmcs {
 	u64 guest_rip;
 
 	u32 hv_clean_fields;
-	u32 hv_padding_32;
+	u32 padding32_1;
 	u32 hv_synthetic_controls;
 	struct {
 		u32 nested_flush_hypercall:1;
 		u32 msr_bitmap:1;
 		u32 reserved:30;
-	} hv_enlightenments_control;
+	} __packed hv_enlightenments_control;
 	u32 hv_vp_id;
-
+	u32 padding32_2;
 	u64 hv_vm_id;
 	u64 partition_assist_page;
 	u64 padding64_4[4];
@@ -211,7 +213,7 @@ struct hv_enlightened_vmcs {
 	u64 padding64_5[7];
 	u64 xss_exit_bitmap;
 	u64 padding64_6[7];
-};
+} __packed;
 
 #define HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE 0
 #define HV_VMX_ENLIGHTENED_CLEAN_FIELD_IO_BITMAP BIT(0)
-- 
2.35.1

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
    Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 28/34] KVM: selftests: nVMX: Allocate Hyper-V partition assist page
Date: Thu, 14 Apr 2022 15:20:07 +0200
Message-Id: <20220414132013.1588929-29-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

In preparation for testing Hyper-V L2 TLB flush hypercalls, allocate the
so-called Partition assist page and link it to 'struct vmx_pages'.
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 tools/testing/selftests/kvm/include/x86_64/vmx.h | 4 ++++
 tools/testing/selftests/kvm/lib/x86_64/vmx.c     | 7 +++++++
 2 files changed, 11 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h
index 583ceb0d1457..f99922ca8259 100644
--- a/tools/testing/selftests/kvm/include/x86_64/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h
@@ -567,6 +567,10 @@ struct vmx_pages {
 	uint64_t enlightened_vmcs_gpa;
 	void *enlightened_vmcs;
 
+	void *partition_assist_hva;
+	uint64_t partition_assist_gpa;
+	void *partition_assist;
+
 	void *eptp_hva;
 	uint64_t eptp_gpa;
 	void *eptp;
diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
index d089d8b850b5..3db21e0e1a8f 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -124,6 +124,13 @@ vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva)
 	vmx->enlightened_vmcs_gpa =
 		addr_gva2gpa(vm, (uintptr_t)vmx->enlightened_vmcs);
 
+	/* Setup of a region of guest memory for the partition assist page. */
+	vmx->partition_assist = (void *)vm_vaddr_alloc_page(vm);
+	vmx->partition_assist_hva =
+		addr_gva2hva(vm, (uintptr_t)vmx->partition_assist);
+	vmx->partition_assist_gpa =
+		addr_gva2gpa(vm, (uintptr_t)vmx->partition_assist);
+
 	*p_vmx_gva = vmx_gva;
 	return vmx;
 }
-- 
2.35.1

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
    Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 29/34] KVM: selftests: nSVM: Allocate Hyper-V partition assist and VP assist pages
Date: Thu, 14 Apr 2022 15:20:08 +0200
Message-Id: <20220414132013.1588929-30-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

In preparation for testing Hyper-V L2 TLB flush hypercalls, allocate VP
assist and Partition assist pages and link them to 'struct svm_test_data'.
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 tools/testing/selftests/kvm/include/x86_64/svm_util.h | 10 ++++++++++
 tools/testing/selftests/kvm/lib/x86_64/svm.c          | 10 ++++++++++
 2 files changed, 20 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
index a25aabd8f5e7..640859b58fd6 100644
--- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
@@ -34,6 +34,16 @@ struct svm_test_data {
 	void *msr; /* gva */
 	void *msr_hva;
 	uint64_t msr_gpa;
+
+	/* Hyper-V VP assist page */
+	void *vp_assist; /* gva */
+	void *vp_assist_hva;
+	uint64_t vp_assist_gpa;
+
+	/* Hyper-V Partition assist page */
+	void *partition_assist; /* gva */
+	void *partition_assist_hva;
+	uint64_t partition_assist_gpa;
 };
 
 struct svm_test_data *vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
index 736ee4a23df6..c284e8f87f5c 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
@@ -48,6 +48,16 @@ vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva)
 	svm->msr_gpa = addr_gva2gpa(vm, (uintptr_t)svm->msr);
 	memset(svm->msr_hva, 0, getpagesize());
 
+	svm->vp_assist = (void *)vm_vaddr_alloc_page(vm);
+	svm->vp_assist_hva = addr_gva2hva(vm, (uintptr_t)svm->vp_assist);
+	svm->vp_assist_gpa = addr_gva2gpa(vm, (uintptr_t)svm->vp_assist);
+	memset(svm->vp_assist_hva, 0, getpagesize());
+
+	svm->partition_assist = (void *)vm_vaddr_alloc_page(vm);
+	svm->partition_assist_hva = addr_gva2hva(vm, (uintptr_t)svm->partition_assist);
+	svm->partition_assist_gpa = addr_gva2gpa(vm, (uintptr_t)svm->partition_assist);
+	memset(svm->partition_assist_hva, 0, getpagesize());
+
 	*p_svm_gva = svm_gva;
 	return svm;
 }
-- 
2.35.1

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
    Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 30/34] KVM: selftests: Sync 'struct hv_vp_assist_page' definition with hyperv-tlfs.h
Date: Thu, 14 Apr 2022 15:20:09 +0200
Message-Id: <20220414132013.1588929-31-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

The 'struct hv_vp_assist_page' definition doesn't match the TLFS. Also,
define 'struct hv_nested_enlightenments_control' and use it instead of an
opaque '__u64'.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 .../selftests/kvm/include/x86_64/evmcs.h | 22 ++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index b6067b555110..9c965ba73dec 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -20,14 +20,26 @@
 
 extern bool enable_evmcs;
 
+struct hv_nested_enlightenments_control {
+	struct {
+		__u32 directhypercall:1;
+		__u32 reserved:31;
+	} features;
+	struct {
+		__u32 reserved;
+	} hypercallControls;
+} __packed;
+
+/* Define virtual processor assist page structure. */
 struct hv_vp_assist_page {
 	__u32 apic_assist;
-	__u32 reserved;
-	__u64 vtl_control[2];
-	__u64 nested_enlightenments_control[2];
-	__u32 enlighten_vmentry;
+	__u32 reserved1;
+	__u64 vtl_control[3];
+	struct hv_nested_enlightenments_control nested_control;
+	__u8 enlighten_vmentry;
+	__u8 reserved2[7];
 	__u64 current_nested_vmcs;
-};
+} __packed;
 
 struct hv_enlightened_vmcs {
 	u32 revision_id;
-- 
2.35.1

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
    Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 31/34] KVM: selftests: evmcs_test: Introduce L2 TLB flush test
Date: Thu, 14 Apr 2022 15:20:10 +0200
Message-Id: <20220414132013.1588929-32-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush hypercalls
from L2 don't exit to L1 unless 'TlbLockCount' is set in the Partition
assist page.
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 .../selftests/kvm/include/x86_64/evmcs.h      |  2 +
 .../testing/selftests/kvm/x86_64/evmcs_test.c | 52 ++++++++++++++++++-
 2 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index 9c965ba73dec..36c0a67d8602 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -252,6 +252,8 @@ struct hv_enlightened_vmcs {
 #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \
 	(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
 
+#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
+
 extern struct hv_enlightened_vmcs *current_evmcs;
 extern struct hv_vp_assist_page *current_vp_assist;
 
diff --git a/tools/testing/selftests/kvm/x86_64/evmcs_test.c b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
index d12e043aa2ee..8d2aa7600d78 100644
--- a/tools/testing/selftests/kvm/x86_64/evmcs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
@@ -16,6 +16,7 @@
 
 #include "kvm_util.h"
 
+#include "hyperv.h"
 #include "vmx.h"
 
 #define VCPU_ID 5
@@ -49,6 +50,16 @@ static inline void rdmsr_gs_base(void)
 		      "r13", "r14", "r15");
 }
 
+static inline void hypercall(u64 control, vm_vaddr_t arg1, vm_vaddr_t arg2)
+{
+	asm volatile("mov %3, %%r8\n"
+		     "vmcall"
+		     : "+c" (control), "+d" (arg1)
+		     : "r" (arg2)
+		     : "cc", "memory", "rax", "rbx", "r8", "r9", "r10",
+		       "r11", "r12", "r13", "r14", "r15");
+}
+
 void l2_guest_code(void)
 {
 	GUEST_SYNC(7);
@@ -67,15 +78,27 @@ void l2_guest_code(void)
 	vmcall();
 	rdmsr_gs_base(); /* intercepted */
 
+	/* L2 TLB flush tests */
+	hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+		  HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+	rdmsr_fs_base();
+	hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+		  HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+	/* Make sure we're not issuing Hyper-V TLB flush call again */
+	__asm__ __volatile__ ("mov $0xdeadbeef, %rcx");
+
 	/* Done, exit to L1 and never come back. */
 	vmcall();
 }
 
-void guest_code(struct vmx_pages *vmx_pages)
+void guest_code(struct vmx_pages *vmx_pages, vm_vaddr_t pgs_gpa)
 {
 #define L2_GUEST_STACK_SIZE 64
 	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
 
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+
 	x2apic_enable();
 
 	GUEST_SYNC(1);
@@ -105,6 +128,14 @@ void guest_code(struct vmx_pages *vmx_pages)
 	vmwrite(PIN_BASED_VM_EXEC_CONTROL, vmreadz(PIN_BASED_VM_EXEC_CONTROL) |
 		PIN_BASED_NMI_EXITING);
 
+	/* L2 TLB flush setup */
+	current_evmcs->partition_assist_page = vmx_pages->partition_assist_gpa;
+	current_evmcs->hv_enlightenments_control.nested_flush_hypercall = 1;
+	current_evmcs->hv_vm_id = 1;
+	current_evmcs->hv_vp_id = 1;
+	current_vp_assist->nested_control.features.directhypercall = 1;
+	*(u32 *)(vmx_pages->partition_assist) = 0;
+
 	GUEST_ASSERT(!vmlaunch());
 	GUEST_ASSERT(vmptrstz() == vmx_pages->enlightened_vmcs_gpa);
 
@@ -149,6 +180,18 @@ void guest_code(struct vmx_pages *vmx_pages)
 	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_MSR_READ);
 	current_evmcs->guest_rip += 2; /* rdmsr */
 
+	/*
+	 * L2 TLB flush test. First VMCALL should be handled directly by L0,
+	 * no VMCALL exit expected.
+	 */
+	GUEST_ASSERT(!vmresume());
+	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_MSR_READ);
+	current_evmcs->guest_rip += 2; /* rdmsr */
+	/* Enable synthetic vmexit */
+	*(u32 *)(vmx_pages->partition_assist) = 1;
+	GUEST_ASSERT(!vmresume());
+	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH);
+
 	GUEST_ASSERT(!vmresume());
 	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL);
 	GUEST_SYNC(11);
@@ -201,6 +244,7 @@ static void save_restore_vm(struct kvm_vm *vm)
 int main(int argc, char *argv[])
 {
 	vm_vaddr_t vmx_pages_gva = 0;
+	vm_vaddr_t hcall_page;
 
 	struct kvm_vm *vm;
 	struct kvm_run *run;
@@ -217,11 +261,15 @@ int main(int argc, char *argv[])
 		exit(KSFT_SKIP);
 	}
 
+	hcall_page = vm_vaddr_alloc_pages(vm, 1);
+	memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
+
 	vcpu_set_hv_cpuid(vm, VCPU_ID);
 	vcpu_enable_evmcs(vm, VCPU_ID);
 
 	vcpu_alloc_vmx(vm, &vmx_pages_gva);
-	vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva);
+	vcpu_args_set(vm, VCPU_ID, 2, vmx_pages_gva, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vm, VCPU_ID, HV_X64_MSR_VP_INDEX, VCPU_ID);
 
 	vm_init_descriptor_tables(vm);
 	vcpu_init_descriptor_tables(vm, VCPU_ID);
-- 
2.35.1

From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
    Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 32/34] KVM: selftests: Move Hyper-V VP assist page enablement out of evmcs.h
Date: Thu, 14 Apr 2022 15:20:11 +0200
Message-Id: <20220414132013.1588929-33-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>
The Hyper-V VP assist page is not eVMCS specific; it is also used for
enlightened nSVM. Move the code to a vendor-neutral place.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 tools/testing/selftests/kvm/Makefile          |  2 +-
 .../selftests/kvm/include/x86_64/evmcs.h      | 40 +------------------
 .../selftests/kvm/include/x86_64/hyperv.h     | 31 ++++++++++++++
 .../testing/selftests/kvm/lib/x86_64/hyperv.c | 21 ++++++++++
 .../testing/selftests/kvm/x86_64/evmcs_test.c |  1 +
 5 files changed, 56 insertions(+), 39 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/hyperv.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 8b83abc09a1a..ae13aa32f3ce 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -38,7 +38,7 @@ ifeq ($(ARCH),riscv)
 endif
 
 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
-LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
+LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/hyperv.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
 LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c
 LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
 LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/evmcs.h b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
index 36c0a67d8602..026586b53013 100644
--- a/tools/testing/selftests/kvm/include/x86_64/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86_64/evmcs.h
@@ -10,6 +10,7 @@
 #define SELFTEST_KVM_EVMCS_H
 
 #include
+#include "hyperv.h"
 #include "vmx.h"
 
 #define u16 uint16_t
@@ -20,27 +21,6 @@
 
 extern bool enable_evmcs;
 
-struct hv_nested_enlightenments_control {
-	struct {
-		__u32 directhypercall:1;
-		__u32 reserved:31;
-	} features;
-	struct {
-		__u32 reserved;
-	} hypercallControls;
-} __packed;
-
-/* Define virtual processor assist page structure. */
-struct hv_vp_assist_page {
-	__u32 apic_assist;
-	__u32 reserved1;
-	__u64 vtl_control[3];
-	struct hv_nested_enlightenments_control nested_control;
-	__u8 enlighten_vmentry;
-	__u8 reserved2[7];
-	__u64 current_nested_vmcs;
-} __packed;
-
 struct hv_enlightened_vmcs {
 	u32 revision_id;
 	u32 abort;
@@ -246,31 +226,15 @@ struct hv_enlightened_vmcs {
 #define HV_VMX_ENLIGHTENED_CLEAN_FIELD_ENLIGHTENMENTSCONTROL	BIT(15)
 #define HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL			0xFFFF
 
-#define HV_X64_MSR_VP_ASSIST_PAGE		0x40000073
-#define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE	0x00000001
-#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT	12
-#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK	\
-	(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
-
 #define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
 
 extern struct hv_enlightened_vmcs *current_evmcs;
-extern struct hv_vp_assist_page *current_vp_assist;
 
 int vcpu_enable_evmcs(struct kvm_vm *vm, int vcpu_id);
 
-static inline int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist)
+static inline void evmcs_enable(void)
 {
-	u64 val = (vp_assist_pa & HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK) |
-		HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
-
-	wrmsr(HV_X64_MSR_VP_ASSIST_PAGE, val);
-
-	current_vp_assist = vp_assist;
 	enable_evmcs = true;
-
-	return 0;
 }
 
 static inline int evmcs_vmptrld(uint64_t vmcs_pa, void *vmcs)
diff --git a/tools/testing/selftests/kvm/include/x86_64/hyperv.h b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
index 1e34dd7c5075..095c15fc5381 100644
--- a/tools/testing/selftests/kvm/include/x86_64/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86_64/hyperv.h
@@ -189,4 +189,35 @@
 
 #define HYPERV_LINUX_OS_ID ((u64)0x8100 << 48)
 
+#define HV_X64_MSR_VP_ASSIST_PAGE		0x40000073
+#define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE	0x00000001
+#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT	12
+#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK	\
+	(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
+
+struct hv_nested_enlightenments_control {
+	struct {
+		__u32 directhypercall:1;
+		__u32 reserved:31;
+	} features;
+	struct {
+		__u32 reserved;
+	} hypercallControls;
+} __packed;
+
+/* Define virtual processor assist page structure. */
+struct hv_vp_assist_page {
+	__u32 apic_assist;
+	__u32 reserved1;
+	__u64 vtl_control[3];
+	struct hv_nested_enlightenments_control nested_control;
+	__u8 enlighten_vmentry;
+	__u8 reserved2[7];
+	__u64 current_nested_vmcs;
+} __packed;
+
+extern struct hv_vp_assist_page *current_vp_assist;
+
+int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist);
+
 #endif /* !SELFTEST_KVM_HYPERV_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/hyperv.c b/tools/testing/selftests/kvm/lib/x86_64/hyperv.c
new file mode 100644
index 000000000000..32dc0afd9e5b
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/hyperv.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Hyper-V specific functions.
+ *
+ * Copyright (C) 2021, Red Hat Inc.
+ */
+#include
+#include "processor.h"
+#include "hyperv.h"
+
+int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist)
+{
+	uint64_t val = (vp_assist_pa & HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK) |
+		HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
+
+	wrmsr(HV_X64_MSR_VP_ASSIST_PAGE, val);
+
+	current_vp_assist = vp_assist;
+
+	return 0;
+}
diff --git a/tools/testing/selftests/kvm/x86_64/evmcs_test.c b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
index 8d2aa7600d78..8fa50e76d557 100644
--- a/tools/testing/selftests/kvm/x86_64/evmcs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/evmcs_test.c
@@ -105,6 +105,7 @@ void guest_code(struct vmx_pages *vmx_pages, vm_vaddr_t pgs_gpa)
 	GUEST_SYNC(2);
 
 	enable_vp_assist(vmx_pages->vp_assist_gpa, vmx_pages->vp_assist);
+	evmcs_enable();
 
 	GUEST_ASSERT(vmx_pages->vmcs_gpa);
 	GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages));
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 33/34] KVM: selftests: hyperv_svm_test: Introduce L2 TLB flush test
Date: Thu, 14 Apr 2022 15:20:12 +0200
Message-Id: <20220414132013.1588929-34-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>

Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush
hypercalls from L2 don't exit to L1 unless
'TlbLockCount' is set in the Partition assist page.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 .../selftests/kvm/x86_64/hyperv_svm_test.c    | 60 +++++++++++++++++--
 1 file changed, 56 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
index 21f5ca9197da..99f0a2ead7df 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
@@ -42,11 +42,24 @@ struct hv_enlightenments {
  */
 #define VMCB_HV_NESTED_ENLIGHTENMENTS (1U << 31)
 
+#define HV_SVM_EXITCODE_ENL 0xF0000000
+#define HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH (1)
+
 static inline void vmmcall(void)
 {
 	__asm__ __volatile__("vmmcall");
 }
 
+static inline void hypercall(u64 control, vm_vaddr_t arg1, vm_vaddr_t arg2)
+{
+	asm volatile("mov %3, %%r8\n"
+		     "vmmcall"
+		     : "+c" (control), "+d" (arg1)
+		     : "r" (arg2)
+		     : "cc", "memory", "rax", "rbx", "r8", "r9", "r10",
+		       "r11", "r12", "r13", "r14", "r15");
+}
+
 void l2_guest_code(void)
 {
 	GUEST_SYNC(3);
@@ -62,11 +75,21 @@ void l2_guest_code(void)
 
 	GUEST_SYNC(5);
 
+	/* L2 TLB flush tests */
+	hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+		  HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+	rdmsr(MSR_FS_BASE);
+	hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
+		  HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
+	/* Make sure we're not issuing Hyper-V TLB flush call again */
+	__asm__ __volatile__ ("mov $0xdeadbeef, %rcx");
+
 	/* Done, exit to L1 and never come back.  */
 	vmmcall();
 }
 
-static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
+static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
+						    vm_vaddr_t pgs_gpa)
 {
 	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
 	struct vmcb *vmcb = svm->vmcb;
@@ -75,13 +98,23 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 
 	GUEST_SYNC(1);
 
-	wrmsr(HV_X64_MSR_GUEST_OS_ID, (u64)0x8100 << 48);
+	wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
+	wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
+	enable_vp_assist(svm->vp_assist_gpa, svm->vp_assist);
 
 	GUEST_ASSERT(svm->vmcb_gpa);
 	/* Prepare for L2 execution. */
 	generic_svm_setup(svm, l2_guest_code,
			  &l2_guest_stack[L2_GUEST_STACK_SIZE]);
 
+	/* L2 TLB flush setup */
+	hve->partition_assist_page = svm->partition_assist_gpa;
+	hve->hv_enlightenments_control.nested_flush_hypercall = 1;
+	hve->hv_vm_id = 1;
+	hve->hv_vp_id = 1;
+	current_vp_assist->nested_control.features.directhypercall = 1;
+	*(u32 *)(svm->partition_assist) = 0;
+
 	GUEST_SYNC(2);
 	run_guest(vmcb, svm->vmcb_gpa);
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
@@ -116,6 +149,20 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_MSR);
 	vmcb->save.rip += 2; /* rdmsr */
 
+
+	/*
+	 * L2 TLB flush test. First VMCALL should be handled directly by L0,
+	 * no VMCALL exit expected.
+	 */
+	run_guest(vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_MSR);
+	vmcb->save.rip += 2; /* rdmsr */
+	/* Enable synthetic vmexit */
+	*(u32 *)(svm->partition_assist) = 1;
+	run_guest(vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT(vmcb->control.exit_code == HV_SVM_EXITCODE_ENL);
+	GUEST_ASSERT(vmcb->control.exit_info_1 == HV_SVM_ENL_EXITCODE_TRAP_AFTER_FLUSH);
+
 	run_guest(vmcb, svm->vmcb_gpa);
 	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
 	GUEST_SYNC(6);
@@ -126,7 +173,7 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm)
 int main(int argc, char *argv[])
 {
 	vm_vaddr_t nested_gva = 0;
-
+	vm_vaddr_t hcall_page;
 	struct kvm_vm *vm;
 	struct kvm_run *run;
 	struct ucall uc;
@@ -141,7 +188,12 @@ int main(int argc, char *argv[])
 	vcpu_set_hv_cpuid(vm, VCPU_ID);
 	run = vcpu_state(vm, VCPU_ID);
 	vcpu_alloc_svm(vm, &nested_gva);
-	vcpu_args_set(vm, VCPU_ID, 1, nested_gva);
+
+	hcall_page = vm_vaddr_alloc_pages(vm, 1);
+	memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
+
+	vcpu_args_set(vm, VCPU_ID, 2, nested_gva, addr_gva2gpa(vm, hcall_page));
+	vcpu_set_msr(vm, VCPU_ID, HV_X64_MSR_VP_INDEX, VCPU_ID);
 
 	for (stage = 1;; stage++) {
 		_vcpu_run(vm, VCPU_ID);
-- 
2.35.1

From nobody Mon May 11 04:11:51 2026
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 34/34] KVM: x86: Rename 'enable_direct_tlbflush' to 'enable_l2_tlb_flush'
Date: Thu, 14 Apr 2022 15:20:13 +0200
Message-Id: <20220414132013.1588929-35-vkuznets@redhat.com>
In-Reply-To: <20220414132013.1588929-1-vkuznets@redhat.com>
References: <20220414132013.1588929-1-vkuznets@redhat.com>
To make terminology between Hyper-V-on-KVM and KVM-on-Hyper-V consistent,
rename 'enable_direct_tlbflush' to 'enable_l2_tlb_flush'. The change
eliminates the use of the confusing 'direct' and adds the missing
underscore. No functional change.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm-x86-ops.h | 2 +-
 arch/x86/include/asm/kvm_host.h    | 2 +-
 arch/x86/kvm/svm/svm_onhyperv.c    | 2 +-
 arch/x86/kvm/svm/svm_onhyperv.h    | 6 +++---
 arch/x86/kvm/vmx/vmx.c             | 6 +++---
 arch/x86/kvm/x86.c                 | 6 +++---
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 96e4e9842dfc..1e13612a6446 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -121,7 +121,7 @@ KVM_X86_OP_OPTIONAL(vm_move_enc_context_from)
 KVM_X86_OP(get_msr_feature)
 KVM_X86_OP(can_emulate_instruction)
 KVM_X86_OP(apic_init_signal_blocked)
-KVM_X86_OP_OPTIONAL(enable_direct_tlbflush)
+KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
 KVM_X86_OP_OPTIONAL(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 168600490bd1..f4fd6da1f565 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1526,7 +1526,7 @@ struct kvm_x86_ops {
 				       void *insn, int insn_len);
 
 	bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
-	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
+	int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu);
 
 	void (*migrate_timers)(struct kvm_vcpu *vcpu);
 	void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c
index 8cdc62c74a96..69a7014d1cef 100644
--- a/arch/x86/kvm/svm/svm_onhyperv.c
+++ b/arch/x86/kvm/svm/svm_onhyperv.c
@@ -14,7 +14,7 @@
 #include "kvm_onhyperv.h"
 #include "svm_onhyperv.h"
 
-int svm_hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
+int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu)
 {
 	struct hv_enlightenments *hve;
 	struct hv_partition_assist_pg **p_hv_pa_pg =
diff --git a/arch/x86/kvm/svm/svm_onhyperv.h b/arch/x86/kvm/svm/svm_onhyperv.h
index e2fc59380465..d6ec4aeebedb 100644
--- a/arch/x86/kvm/svm/svm_onhyperv.h
+++ b/arch/x86/kvm/svm/svm_onhyperv.h
@@ -13,7 +13,7 @@
 
 static struct kvm_x86_ops svm_x86_ops;
 
-int svm_hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu);
+int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu);
 
 static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
 {
@@ -51,8 +51,8 @@ static inline void svm_hv_hardware_setup(void)
 
 			vp_ap->nested_control.features.directhypercall = 1;
 		}
-		svm_x86_ops.enable_direct_tlbflush =
-			svm_hv_enable_direct_tlbflush;
+		svm_x86_ops.enable_l2_tlb_flush =
+			svm_hv_enable_l2_tlb_flush;
 	}
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a81e44852f54..2b3c73b49dcb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -461,7 +461,7 @@ static unsigned long host_idt_base;
 static bool __read_mostly enlightened_vmcs = true;
 module_param(enlightened_vmcs, bool, 0444);
 
-static int hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
+static int hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu)
 {
 	struct hv_enlightened_vmcs *evmcs;
 	struct hv_partition_assist_pg **p_hv_pa_pg =
@@ -8151,8 +8151,8 @@ static int __init vmx_init(void)
 	}
 
 	if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
-		vmx_x86_ops.enable_direct_tlbflush
-			= hv_enable_direct_tlbflush;
+		vmx_x86_ops.enable_l2_tlb_flush
+			= hv_enable_l2_tlb_flush;
 
 	} else {
 		enlightened_vmcs = false;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d3839e648ab3..d620c56bc526 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4365,7 +4365,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		kvm_x86_ops.nested_ops->get_state(NULL, NULL, 0) : 0;
 		break;
 	case KVM_CAP_HYPERV_DIRECT_TLBFLUSH:
-		r = kvm_x86_ops.enable_direct_tlbflush != NULL;
+		r = kvm_x86_ops.enable_l2_tlb_flush != NULL;
 		break;
 	case KVM_CAP_HYPERV_ENLIGHTENED_VMCS:
 		r = kvm_x86_ops.nested_ops->enable_evmcs != NULL;
@@ -5275,10 +5275,10 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 		}
 		return r;
 	case KVM_CAP_HYPERV_DIRECT_TLBFLUSH:
-		if (!kvm_x86_ops.enable_direct_tlbflush)
+		if (!kvm_x86_ops.enable_l2_tlb_flush)
 			return -ENOTTY;
 
-		return static_call(kvm_x86_enable_direct_tlbflush)(vcpu);
+		return static_call(kvm_x86_enable_l2_tlb_flush)(vcpu);
 
 	case KVM_CAP_HYPERV_ENFORCE_CPUID:
 		return kvm_hv_set_enforce_cpuid(vcpu, cap->args[0]);
-- 
2.35.1