From nobody Sat Feb 7 17:14:03 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
    Chao Peng, Ackerley Tng, Vishal Annapurve, Michael Roth
Subject: [RFC PATCH 1/6] KVM: selftests: Fix test_add_overlapping_private_memory_regions()
Date: Thu, 15 Jun 2023 13:12:14 -0700

From: Isaku Yamahata

The last test in test_add_overlapping_private_memory_regions() doesn't
actually use overlapping regions, so it fails.  When a region overlaps
an existing one, the error code is EEXIST, not EINVAL.  Pass genuinely
overlapping regions and check that errno is EEXIST.

Fixes: bdb645960cb5 ("KVM: selftests: Expand set_memory_region_test to validate guest_memfd()")
Signed-off-by: Isaku Yamahata
---
 .../selftests/kvm/set_memory_region_test.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index f46841843300..ea7da324c4d6 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -432,6 +432,7 @@ static void test_add_overlapping_private_memory_regions(void)
 {
 	struct kvm_vm *vm;
 	int memfd;
+	int r;
 
 	pr_info("Testing ADD of overlapping KVM_MEM_PRIVATE memory regions\n");
 
@@ -453,8 +454,19 @@ static void test_add_overlapping_private_memory_regions(void)
 	vm_set_user_memory_region2(vm, MEM_REGION_SLOT, KVM_MEM_PRIVATE,
 				   MEM_REGION_GPA, 0, NULL, -1, 0);
 
-	test_invalid_guest_memfd(vm, memfd, MEM_REGION_SIZE,
-				 "Overlapping guest_memfd() bindings should fail");
+	r = __vm_set_user_memory_region2(vm, MEM_REGION_SLOT, KVM_MEM_PRIVATE,
+					 MEM_REGION_GPA * 2 - MEM_REGION_SIZE,
+					 MEM_REGION_SIZE * 2,
+					 0, memfd, 0);
+	TEST_ASSERT(r == -1 && errno == EEXIST, "%s",
+		    "Overlapping guest_memfd() bindings should fail");
+
+	r = __vm_set_user_memory_region2(vm, MEM_REGION_SLOT, KVM_MEM_PRIVATE,
+					 MEM_REGION_GPA * 2 + MEM_REGION_SIZE,
+					 MEM_REGION_SIZE * 2,
+					 0, memfd, 0);
+	TEST_ASSERT(r == -1 && errno == EEXIST, "%s",
+		    "Overlapping guest_memfd() bindings should fail");
 
 	close(memfd);
 	kvm_vm_free(vm);
-- 
2.25.1
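
A note on the idiom: the raw double-underscore selftest wrappers return -1
and set errno on failure, so expected-failure checks like the two above
reduce to one assertion.  A minimal sketch of such a helper, assuming the
KVM selftest harness (TEST_ASSERT comes from test_util.h); the helper name
is illustrative, not part of the patch:

  #include <errno.h>

  /* Assert that a raw "__" selftest wrapper failed with the expected errno. */
  static void expect_errno(int r, int expected, const char *what)
  {
  	TEST_ASSERT(r == -1 && errno == expected,
  		    "%s: expected errno %d, got r=%d errno=%d",
  		    what, expected, r, errno);
  }

Each overlap check above would then read
expect_errno(r, EEXIST, "overlapping guest_memfd() binding").
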
From nobody Sat Feb 7 17:14:03 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
    Chao Peng, Ackerley Tng, Vishal Annapurve, Michael Roth
Subject: [RFC PATCH 2/6] KVM: selftests: Fix guest_memfd()
Date: Thu, 15 Jun 2023 13:12:15 -0700
Message-Id: <9e3e99f78fcbd7db21368b5fe1d931feeb4db567.1686858861.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

Some of the fallocate() test cases are expected to succeed, but the
assertions check ret instead of !ret.  Check !ret so that success is what
is asserted.

Fixes: 36eedd5b91e3 ("KVM: selftests: Add basic selftest for guest_memfd()")
Signed-off-by: Isaku Yamahata
---
 tools/testing/selftests/kvm/guest_memfd_test.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 3b6532b833b2..f3b99c1e5464 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -72,11 +72,11 @@ static void test_fallocate(int fd, size_t page_size, size_t total_size)
 
 	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
 			total_size, page_size);
-	TEST_ASSERT(ret, "fallocate(PUNCH_HOLE) at total_size should be fine (no-op)");
+	TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) at total_size should be fine (no-op)");
 
 	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
 			total_size + page_size, page_size);
-	TEST_ASSERT(ret, "fallocate(PUNCH_HOLE) after total_size should be fine (no-op)");
+	TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) after total_size should be fine (no-op)");
 
 	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
 			page_size, page_size - 1);
-- 
2.25.1
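
The behavior being asserted is standard fallocate() semantics: punching a
hole at or past EOF with FALLOC_FL_KEEP_SIZE succeeds as a no-op.  A
standalone sketch against a tmpfs-backed memfd, assuming it follows the
same convention as guest_memfd() here:

  #define _GNU_SOURCE
  #include <assert.h>
  #include <fcntl.h>
  #include <linux/falloc.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
  	long page_size = sysconf(_SC_PAGESIZE);
  	int fd = memfd_create("punch-demo", 0);

  	assert(fd >= 0);
  	assert(!ftruncate(fd, 4 * page_size));

  	/* Punching a hole entirely past EOF is a successful no-op. */
  	assert(!fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
  			  4 * page_size, page_size));

  	close(fd);
  	return 0;
  }
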
From nobody Sat Feb 7 17:14:03 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
    Chao Peng, Ackerley Tng, Vishal Annapurve, Michael Roth
Subject: [RFC PATCH 3/6] KVM: x86/mmu: Pass around full 64-bit error code for the KVM page fault
Date: Thu, 15 Jun 2023 13:12:16 -0700
Message-Id: <2dea4b9819d1aac4b3194b00c5effaf6d01b449f.1686858861.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

Protected VMs (TDX and SEV-SNP) will need the full 64-bit page fault
error code, i.e. more information about the fault, so update
kvm_mmu_do_page_fault() to accept a 64-bit value and pass it on to the
callbacks.

The upper 32 bits of the error code were discarded at
kvm_mmu_page_fault() by lower_32_bits().  Now the full 64 bits are
passed down.  Because only FNAME(page_fault) depends on the truncation,
move lower_32_bits() into FNAME(page_fault).

The accesses of fault->error_code are as follows:
- FNAME(page_fault): changed to explicitly use lower_32_bits()
- kvm_mmu_page_fault(): explicit mask with PFERR_RSVD_MASK,
  PFERR_NESTED_GUEST_PAGE
- mmutrace: changed u32 -> u64
- pgprintk(): changed %x -> %llx

No functional change is intended.  This is a preparation for passing
more information along with the page fault error code.

Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu.c          | 5 ++---
 arch/x86/kvm/mmu/mmu_internal.h | 4 ++--
 arch/x86/kvm/mmu/mmutrace.h     | 2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  | 4 ++--
 4 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dc2b9a2f717c..b8ba7f11c3cb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4510,7 +4510,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 static int nonpaging_page_fault(struct kvm_vcpu *vcpu,
 				struct kvm_page_fault *fault)
 {
-	pgprintk("%s: gva %lx error %x\n", __func__, fault->addr, fault->error_code);
+	pgprintk("%s: gva %llx error %llx\n", __func__, fault->addr, fault->error_code);
 
 	/* This path builds a PAE pagetable, we can map 2mb pages at maximum. */
 	fault->max_level = PG_LEVEL_2M;
@@ -5820,8 +5820,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	}
 
 	if (r == RET_PF_INVALID) {
-		r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa,
-					  lower_32_bits(error_code), false,
+		r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false,
 					  &emulation_type);
 		if (KVM_BUG_ON(r == RET_PF_INVALID, vcpu->kvm))
 			return -EIO;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index f1786698ae00..7f9ec1e5b136 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -191,7 +191,7 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
 struct kvm_page_fault {
 	/* arguments to kvm_mmu_do_page_fault.  */
 	const gpa_t addr;
-	const u32 error_code;
+	const u64 error_code;
 	const bool prefetch;
 
 	/* Derived from error_code.  */
@@ -283,7 +283,7 @@ enum {
 };
 
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
-					u32 err, bool prefetch, int *emulation_type)
+					u64 err, bool prefetch, int *emulation_type)
 {
 	struct kvm_page_fault fault = {
 		.addr = cr2_or_gpa,
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 2d7555381955..2e77883c92f6 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -261,7 +261,7 @@ TRACE_EVENT(
 	TP_STRUCT__entry(
 		__field(int, vcpu_id)
 		__field(gpa_t, cr2_or_gpa)
-		__field(u32, error_code)
+		__field(u64, error_code)
 		__field(u64 *, sptep)
 		__field(u64, old_spte)
 		__field(u64, new_spte)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0662e0278e70..ee4b881c5b39 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -758,7 +758,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	struct guest_walker walker;
 	int r;
 
-	pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code);
+	pgprintk("%s: addr %llx err %llx\n", __func__, fault->addr, fault->error_code);
 	WARN_ON_ONCE(fault->is_tdp);
 
 	/*
@@ -767,7 +767,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * The bit needs to be cleared before walking guest page tables.
 	 */
 	r = FNAME(walk_addr)(&walker, vcpu, fault->addr,
-			     fault->error_code & ~PFERR_RSVD_MASK);
+			     lower_32_bits(fault->error_code) & ~PFERR_RSVD_MASK);
 
 	/*
 	 * The page is not mapped by the guest.  Let the guest handle it.
-- 
2.25.1
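
The net effect is a split between the architectural low 32 bits, which
FNAME(page_fault) still feeds to the guest page table walker, and
KVM-defined bits above bit 31, which now survive down the call chain.  A
minimal standalone sketch of that split (the constants and lower_32_bits()
are re-declared here only for illustration; in the kernel they come from
kvm_host.h and <linux/kernel.h>):

  #include <stdint.h>
  #include <stdio.h>

  #define BIT_ULL(n)		(1ull << (n))
  #define PFERR_WRITE_MASK	BIT_ULL(1)	/* architectural, low 32 bits */
  #define PFERR_GUEST_FINAL_MASK	BIT_ULL(32)	/* KVM-defined, above bit 31 */

  static inline uint32_t lower_32_bits(uint64_t v)
  {
  	return (uint32_t)v;
  }

  int main(void)
  {
  	uint64_t error_code = PFERR_WRITE_MASK | PFERR_GUEST_FINAL_MASK;

  	/* The guest walker only understands the architectural low bits... */
  	printf("walker sees: %#x\n", lower_32_bits(error_code));
  	/* ...while the full 64-bit value stays available to KVM itself. */
  	printf("kvm sees:    %#llx\n", (unsigned long long)error_code);
  	return 0;
  }
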
From nobody Sat Feb 7 17:14:03 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
    Chao Peng, Ackerley Tng, Vishal Annapurve, Michael Roth
Subject: [RFC PATCH 4/6] KVM: x86: Introduce PFERR_GUEST_ENC_MASK to indicate fault is private
Date: Thu, 15 Jun 2023 13:12:17 -0700
Message-Id: <02471a0e41717e40f415a96a2acbd80ba9d42e2e.1686858861.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

It is unfortunate and inflexible for kvm_mmu_do_page_fault() to call
kvm_mem_is_private(), which eventually looks up memory attributes,
because __kvm_faultin_pfn() looks up the memory attributes again later.
Since the mmu lock is not held between the two lookups, other threads
can change the memory attributes in the meantime.  SEV-SNP and TDX
define their own ways to indicate that a page fault is private.  Add two
PFERR codes: one to designate that the page fault is private, and one to
indicate that it requires looking up memory attributes.

Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h | 4 ++++
 arch/x86/kvm/mmu/mmu.c          | 9 +++++++--
 arch/x86/kvm/mmu/mmu_internal.h | 4 ++--
 3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8ae131dc645d..2763f9837a0b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -255,7 +255,9 @@ enum x86_intercept_stage;
 #define PFERR_SGX_BIT 15
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
+#define PFERR_GUEST_ENC_BIT 34
 #define PFERR_IMPLICIT_ACCESS_BIT 48
+#define PFERR_HASATTR_BIT 63
 
 #define PFERR_PRESENT_MASK	BIT(PFERR_PRESENT_BIT)
 #define PFERR_WRITE_MASK	BIT(PFERR_WRITE_BIT)
@@ -266,7 +268,9 @@ enum x86_intercept_stage;
 #define PFERR_SGX_MASK		BIT(PFERR_SGX_BIT)
 #define PFERR_GUEST_FINAL_MASK	BIT_ULL(PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK	BIT_ULL(PFERR_GUEST_PAGE_BIT)
+#define PFERR_GUEST_ENC_MASK	BIT_ULL(PFERR_GUEST_ENC_BIT)
 #define PFERR_IMPLICIT_ACCESS	BIT_ULL(PFERR_IMPLICIT_ACCESS_BIT)
+#define PFERR_HASATTR_MASK	BIT_ULL(PFERR_HASATTR_BIT)
 
 #define PFERR_NESTED_GUEST_PAGE	(PFERR_GUEST_PAGE_MASK |	\
 				 PFERR_WRITE_MASK |		\
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b8ba7f11c3cb..e9c9780bab89 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4358,6 +4358,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 {
 	struct kvm_memory_slot *slot = fault->slot;
 	bool async;
+	bool is_private;
 
 	/*
 	 * Retry the page fault if the gfn hit a memslot that is being deleted
@@ -4386,8 +4387,12 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		return RET_PF_EMULATE;
 	}
 
-	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn))
-		return kvm_do_memory_fault_exit(vcpu, fault);
+	is_private = kvm_mem_is_private(vcpu->kvm, fault->gfn);
+	if (fault->error_code & PFERR_HASATTR_MASK) {
+		if (fault->is_private != is_private)
+			return kvm_do_memory_fault_exit(vcpu, fault);
+	} else
+		fault->is_private = is_private;
 
 	if (fault->is_private)
 		return kvm_faultin_pfn_private(vcpu, fault);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 7f9ec1e5b136..22f2cd60cabf 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -203,7 +203,7 @@ struct kvm_page_fault {
 
 	/* Derived from mmu and global state.  */
 	const bool is_tdp;
-	const bool is_private;
+	bool is_private;
 	const bool nx_huge_page_workaround_enabled;
 
 	/*
@@ -301,7 +301,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
-		.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
+		.is_private = err & PFERR_GUEST_ENC_MASK,
 	};
 	int r;
 
-- 
2.25.1
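
The decision __kvm_faultin_pfn() now makes can be restated compactly: if
the producer set PFERR_HASATTR_MASK, its private bit must agree with the
memory attributes or the fault exits to userspace; otherwise the
attributes are authoritative.  A standalone restatement of that policy
(the helper is illustrative, not KVM code; attr_private stands in for
what kvm_mem_is_private() would report):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define BIT_ULL(n)		(1ull << (n))
  #define PFERR_GUEST_ENC_MASK	BIT_ULL(34)
  #define PFERR_HASATTR_MASK	BIT_ULL(63)

  /* Returns true when the producer-supplied private bit disagrees with
   * the current memory attributes and the fault must exit to userspace.
   */
  static bool needs_memory_fault_exit(uint64_t err, bool attr_private,
  				    bool *is_private)
  {
  	*is_private = err & PFERR_GUEST_ENC_MASK;
  	if (err & PFERR_HASATTR_MASK)
  		return *is_private != attr_private;
  	*is_private = attr_private;	/* no claim: trust the attributes */
  	return false;
  }

  int main(void)
  {
  	bool priv;

  	/* Claims private + hasattr, attributes say shared: exit (1). */
  	printf("%d\n", needs_memory_fault_exit(PFERR_HASATTR_MASK |
  					       PFERR_GUEST_ENC_MASK,
  					       false, &priv));
  	/* No hasattr claim: attributes decide, no exit (0). */
  	printf("%d\n", needs_memory_fault_exit(0, true, &priv));
  	return 0;
  }
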
From nobody Sat Feb 7 17:14:03 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
    Chao Peng, Ackerley Tng, Vishal Annapurve, Michael Roth
Subject: [RFC PATCH 5/6] KVM: Add flags to struct kvm_gfn_range
Date: Thu, 15 Jun 2023 13:12:18 -0700

From: Isaku Yamahata

TDX and SEV-SNP need to know why kvm_unmap_gfn_range() was invoked: by
the mmu notifier, by the set-memory-attributes ioctl, or by a KVM gmem
callback.  The callback handler changes its behavior or performs
additional housekeeping depending on the reason: for the mmu notifier it
zaps shared PTEs, for set memory attributes it converts memory
attributes (private <=> shared), and for KVM gmem it punches a hole in
the range or releases the file.

Signed-off-by: Isaku Yamahata
---
 include/linux/kvm_host.h | 11 ++++++++++-
 virt/kvm/guest_mem.c     | 10 +++++++---
 virt/kvm/kvm_main.c      |  4 +++-
 3 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1a47cedae8a1..c049c0aa44d6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -256,12 +256,21 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif
 
 #ifdef CONFIG_KVM_GENERIC_MMU_NOTIFIER
+
+#define KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR	BIT(0)
+#define KVM_GFN_RANGE_FLAGS_GMEM_PUNCH_HOLE	BIT(1)
+#define KVM_GFN_RANGE_FLAGS_GMEM_RELEASE	BIT(2)
+
 struct kvm_gfn_range {
 	struct kvm_memory_slot *slot;
 	gfn_t start;
 	gfn_t end;
-	pte_t pte;
+	union {
+		pte_t pte;
+		u64 attrs;
+	};
 	bool may_block;
+	unsigned int flags;
 };
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index cdf2d84683c8..30b8f66784d4 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -99,7 +99,8 @@ static struct folio *kvm_gmem_get_folio(struct file *file, pgoff_t index)
 }
 
 static void kvm_gmem_invalidate_begin(struct kvm *kvm, struct kvm_gmem *gmem,
-				      pgoff_t start, pgoff_t end)
+				      pgoff_t start, pgoff_t end,
+				      unsigned int flags)
 {
 	struct kvm_memory_slot *slot;
 	unsigned long index;
@@ -118,6 +119,7 @@ static void kvm_gmem_invalidate_begin(struct kvm *kvm, struct kvm_gmem *gmem,
 			.slot = slot,
 			.pte = __pte(0),
 			.may_block = true,
+			.flags = flags,
 		};
 
 		kvm_mmu_invalidate_range_add(kvm, gfn_range.start, gfn_range.end);
@@ -156,7 +158,8 @@ static long kvm_gmem_punch_hole(struct file *file, loff_t offset, loff_t len)
 	 */
 	filemap_invalidate_lock(file->f_mapping);
 
-	kvm_gmem_invalidate_begin(kvm, gmem, start, end);
+	kvm_gmem_invalidate_begin(kvm, gmem, start, end,
+				  KVM_GFN_RANGE_FLAGS_GMEM_PUNCH_HOLE);
 
 	truncate_inode_pages_range(file->f_mapping, offset, offset + len - 1);
 
@@ -263,7 +266,8 @@ static int kvm_gmem_release(struct inode *inode, struct file *file)
 	 * Free the backing memory, and more importantly, zap all SPTEs that
 	 * pointed at this file.
 	 */
-	kvm_gmem_invalidate_begin(kvm, gmem, 0, -1ul);
+	kvm_gmem_invalidate_begin(kvm, gmem, 0, -1ul,
+				  KVM_GFN_RANGE_FLAGS_GMEM_RELEASE);
 	truncate_inode_pages_final(file->f_mapping);
 	kvm_gmem_invalidate_end(kvm, gmem, 0, -1ul);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 422d49634c56..9cdfa2fb675f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -613,6 +613,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 			gfn_range.start = hva_to_gfn_memslot(hva_start, slot);
 			gfn_range.end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
 			gfn_range.slot = slot;
+			gfn_range.flags = 0;
 
 			if (!locked) {
 				locked = true;
@@ -2391,8 +2392,9 @@ static void kvm_mem_attrs_changed(struct kvm *kvm, unsigned long attrs,
 	bool flush = false;
 	int i;
 
-	gfn_range.pte = __pte(0);
+	gfn_range.attrs = attrs;
 	gfn_range.may_block = true;
+	gfn_range.flags = KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR;
 
 	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
-- 
2.25.1
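
A backend such as TDX or SEV-SNP would then derive the reason for the zap
from the new flags field.  A standalone sketch of that dispatch, with the
mapping taken from the commit message above (the function itself is
illustrative, not part of the patch):

  #include <stdio.h>

  #define BIT(n)					(1u << (n))
  #define KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR	BIT(0)
  #define KVM_GFN_RANGE_FLAGS_GMEM_PUNCH_HOLE	BIT(1)
  #define KVM_GFN_RANGE_FLAGS_GMEM_RELEASE	BIT(2)

  /* Maps kvm_gfn_range.flags to the reason a backend would infer. */
  static const char *unmap_reason(unsigned int flags)
  {
  	if (flags & KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR)
  		return "attribute conversion (private <=> shared)";
  	if (flags & KVM_GFN_RANGE_FLAGS_GMEM_PUNCH_HOLE)
  		return "gmem punch hole";
  	if (flags & KVM_GFN_RANGE_FLAGS_GMEM_RELEASE)
  		return "gmem release";
  	return "mmu notifier (zap shared PTEs)";	/* flags == 0 */
  }

  int main(void)
  {
  	printf("%s\n", unmap_reason(KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR));
  	printf("%s\n", unmap_reason(0));
  	return 0;
  }
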
From nobody Sat Feb 7 17:14:03 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
    Chao Peng, Ackerley Tng, Vishal Annapurve, Michael Roth
Subject: [RFC PATCH 6/6] KVM: x86: Add is_vm_type_supported callback
Date: Thu, 15 Jun 2023 13:12:19 -0700
Message-Id: <268aa027f991eb6afbfb338f88a33d409e81fd36.1686858861.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

For TDX, allow the backend to override the set of supported VM types.
Add KVM_X86_TDX_VM to reserve the bit.

Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  1 +
 arch/x86/include/uapi/asm/kvm.h    |  1 +
 arch/x86/kvm/svm/svm.c             |  7 +++++++
 arch/x86/kvm/vmx/vmx.c             |  6 ++++++
 arch/x86/kvm/x86.c                 | 10 +++++++++-
 arch/x86/kvm/x86.h                 |  2 ++
 7 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 13bc212cd4bc..c0143906fe6d 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -20,6 +20,7 @@ KVM_X86_OP(hardware_disable)
 KVM_X86_OP(hardware_unsetup)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
+KVM_X86_OP(is_vm_type_supported)
 KVM_X86_OP(vm_init)
 KVM_X86_OP_OPTIONAL(vm_destroy)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_precreate)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2763f9837a0b..ce83e24a538d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1547,6 +1547,7 @@ struct kvm_x86_ops {
 	bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
 	void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
 
+	bool (*is_vm_type_supported)(unsigned long vm_type);
 	unsigned int vm_size;
 	int (*vm_init)(struct kvm *kvm);
 	void (*vm_destroy)(struct kvm *kvm);
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 6afbfbb32d56..53d382b3b423 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -561,5 +561,6 @@ struct kvm_pmu_event_filter {
 
 #define KVM_X86_DEFAULT_VM	0
 #define KVM_X86_PROTECTED_VM	1
+#define KVM_X86_TDX_VM		2
 
 #endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index eb308c9994f9..e9ed8729f63b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4756,6 +4756,12 @@ static void svm_vm_destroy(struct kvm *kvm)
 	sev_vm_destroy(kvm);
 }
 
+static bool svm_is_vm_type_supported(unsigned long type)
+{
+	/* FIXME: Check if CPU is capable of SEV. */
+	return __kvm_is_vm_type_supported(type);
+}
+
 static int svm_vm_init(struct kvm *kvm)
 {
 	if (!pause_filter_count || !pause_filter_thresh)
@@ -4784,6 +4790,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.vcpu_free = svm_vcpu_free,
 	.vcpu_reset = svm_vcpu_reset,
 
+	.is_vm_type_supported = svm_is_vm_type_supported,
 	.vm_size = sizeof(struct kvm_svm),
 	.vm_init = svm_vm_init,
 	.vm_destroy = svm_vm_destroy,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 44fb619803b8..b5394ba8cb9c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7469,6 +7469,11 @@ static int vmx_vcpu_create(struct kvm_vcpu *vcpu)
 	return err;
 }
 
+static bool vmx_is_vm_type_supported(unsigned long type)
+{
+	return __kvm_is_vm_type_supported(type);
+}
+
 #define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 #define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 
@@ -8138,6 +8143,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.hardware_disable = vmx_hardware_disable,
 	.has_emulated_msr = vmx_has_emulated_msr,
 
+	.is_vm_type_supported = vmx_is_vm_type_supported,
 	.vm_size = sizeof(struct kvm_vmx),
 	.vm_init = vmx_vm_init,
 	.vm_destroy = vmx_vm_destroy,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c9e1c9369be2..b5f865f39a00 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4418,12 +4418,18 @@ static int kvm_ioctl_get_supported_hv_cpuid(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-static bool kvm_is_vm_type_supported(unsigned long type)
+bool __kvm_is_vm_type_supported(unsigned long type)
 {
 	return type == KVM_X86_DEFAULT_VM ||
 	       (type == KVM_X86_PROTECTED_VM &&
 		IS_ENABLED(CONFIG_KVM_PROTECTED_VM) && tdp_enabled);
 }
+EXPORT_SYMBOL_GPL(__kvm_is_vm_type_supported);
+
+static bool kvm_is_vm_type_supported(unsigned long type)
+{
+	return static_call(kvm_x86_is_vm_type_supported)(type);
+}
 
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
@@ -4618,6 +4624,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = BIT(KVM_X86_DEFAULT_VM);
 		if (kvm_is_vm_type_supported(KVM_X86_PROTECTED_VM))
 			r |= BIT(KVM_X86_PROTECTED_VM);
+		if (kvm_is_vm_type_supported(KVM_X86_TDX_VM))
+			r |= BIT(KVM_X86_TDX_VM);
 		break;
 	default:
 		break;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index c544602d07a3..7d5aa8f0571a 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -9,6 +9,8 @@
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
 
+bool __kvm_is_vm_type_supported(unsigned long type);
+
 struct kvm_caps {
 	/* control of guest tsc rate supported? */
 	bool has_tsc_control;
-- 
2.25.1
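
From userspace, the new type would be probed and used through the existing
ioctls.  A hedged sketch, assuming KVM_CAP_VM_TYPES is the capability
served by the check-extension switch above (the guest_memfd series this
builds on defines it; the fallback value below is an assumption, take the
real one from the uapi headers of a kernel carrying these patches):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  #ifndef KVM_CAP_VM_TYPES
  #define KVM_CAP_VM_TYPES 235	/* assumed; check your uapi headers */
  #endif
  #ifndef KVM_X86_TDX_VM
  #define KVM_X86_TDX_VM 2	/* matches the uapi value added above */
  #endif

  int main(void)
  {
  	int kvm = open("/dev/kvm", O_RDWR);

  	if (kvm < 0)
  		return 1;

  	/* KVM_CHECK_EXTENSION returns a bitmap of supported VM types. */
  	int types = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_VM_TYPES);

  	if (types > 0 && (types & (1 << KVM_X86_TDX_VM))) {
  		/* KVM_CREATE_VM takes the VM type as its argument. */
  		int vm = ioctl(kvm, KVM_CREATE_VM, KVM_X86_TDX_VM);

  		printf("TDX VM fd: %d\n", vm);
  	} else {
  		printf("TDX VM type not supported\n");
  	}
  	return 0;
  }
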