From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, David Matlack
Subject: [PATCH 01/22] KVM: x86/mmu: nested EPT cannot be used in SMM
Date: Thu, 14 Apr 2022 03:39:39 -0400
Message-Id: <20220414074000.31438-2-pbonzini@redhat.com>
In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com>

The role.base.smm flag is always zero when setting up shadow EPT, so do
not bother copying it over from vcpu->arch.root_mmu.
Reviewed-by: David Matlack
Reviewed-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c623019929a7..797c51bb6cda 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4910,9 +4910,11 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
 {
 	union kvm_mmu_role role = {0};
 
-	/* SMM flag is inherited from root_mmu */
-	role.base.smm = vcpu->arch.root_mmu.mmu_role.base.smm;
-
+	/*
+	 * KVM does not support SMM transfer monitors, and consequently does not
+	 * support the "entry to SMM" control either.  role.base.smm is always 0.
+	 */
+	WARN_ON_ONCE(is_smm(vcpu));
 	role.base.level = level;
 	role.base.has_4_byte_gpte = false;
 	role.base.direct = false;
-- 
2.31.1
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, David Matlack
Subject: [PATCH 02/22] KVM: x86/mmu: constify uses of struct kvm_mmu_role_regs
Date: Thu, 14 Apr 2022 03:39:40 -0400
Message-Id: <20220414074000.31438-3-pbonzini@redhat.com>
In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com>

struct kvm_mmu_role_regs is computed just once and then accessed.  Use
const to make this clearer, even though the const fields of struct
kvm_mmu_role_regs already prevent modification of the struct's contents,
or at least make it harder.

Reviewed-by: David Matlack
Reviewed-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 797c51bb6cda..07b8550e68e9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -197,7 +197,8 @@ struct kvm_mmu_role_regs {
  * the single source of truth for the MMU's state.
  */
 #define BUILD_MMU_ROLE_REGS_ACCESSOR(reg, name, flag)			\
-static inline bool __maybe_unused ____is_##reg##_##name(struct kvm_mmu_role_regs *regs)\
+static inline bool __maybe_unused					\
+____is_##reg##_##name(const struct kvm_mmu_role_regs *regs)		\
 {									\
 	return !!(regs->reg & flag);					\
 }
@@ -244,7 +245,7 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 	return regs;
 }
 
-static int role_regs_to_root_level(struct kvm_mmu_role_regs *regs)
+static int role_regs_to_root_level(const struct kvm_mmu_role_regs *regs)
 {
 	if (!____is_cr0_pg(regs))
 		return 0;
@@ -4705,7 +4706,7 @@ static void paging32_init_context(struct kvm_mmu *context)
 }
 
 static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
-						   struct kvm_mmu_role_regs *regs)
+						   const struct kvm_mmu_role_regs *regs)
 {
 	union kvm_mmu_extended_role ext = {0};
 
@@ -4728,7 +4729,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
 }
 
 static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
-						   struct kvm_mmu_role_regs *regs,
+						   const struct kvm_mmu_role_regs *regs,
 						   bool base_only)
 {
 	union kvm_mmu_role role = {0};
@@ -4764,7 +4765,8 @@ static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
 
 static union kvm_mmu_role
 kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
-				struct kvm_mmu_role_regs *regs, bool base_only)
+				const struct kvm_mmu_role_regs *regs,
+				bool base_only)
 {
 	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs, base_only);
 
@@ -4810,7 +4812,8 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 
 static union kvm_mmu_role
 kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu,
-				      struct kvm_mmu_role_regs *regs, bool base_only)
+				      const struct kvm_mmu_role_regs *regs,
+				      bool base_only)
 {
 	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs, base_only);
 
@@ -4823,7 +4826,8 @@ kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu,
 
 static union kvm_mmu_role
 kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
-				   struct kvm_mmu_role_regs *regs, bool base_only)
+				   const struct kvm_mmu_role_regs *regs,
+				   bool base_only)
 {
 	union kvm_mmu_role role =
 		kvm_calc_shadow_root_page_role_common(vcpu, regs, base_only);
@@ -4841,7 +4845,7 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
 }
 
 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
-				    struct kvm_mmu_role_regs *regs,
+				    const struct kvm_mmu_role_regs *regs,
 				    union kvm_mmu_role new_role)
 {
 	if (new_role.as_u64 == context->mmu_role.as_u64)
@@ -4864,7 +4868,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 }
 
 static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
-				struct kvm_mmu_role_regs *regs)
+				const struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
 	union kvm_mmu_role new_role =
@@ -4875,7 +4879,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
 
 static union kvm_mmu_role
 kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu,
-				   struct kvm_mmu_role_regs *regs)
+				   const struct kvm_mmu_role_regs *regs)
 {
 	union kvm_mmu_role role =
 		kvm_calc_shadow_root_page_role_common(vcpu, regs, false);
@@ -4975,7 +4979,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
 }
 
 static union kvm_mmu_role
-kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, struct kvm_mmu_role_regs *regs)
+kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
 {
 	union kvm_mmu_role role;
 
-- 
2.31.1
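For reference, a minimal sketch of what the constified accessor macro in the
patch above expands to.  The (cr0, wp, X86_CR0_WP) triple is only an assumed
example invocation, chosen to match the ____is_cr0_wp() helper used elsewhere
in the series; it is not text from the patch itself.

/*
 * Illustrative expansion only, not part of the patch: assuming the
 * invocation BUILD_MMU_ROLE_REGS_ACCESSOR(cr0, wp, X86_CR0_WP), the
 * macro now generates a helper whose argument is a pointer-to-const,
 * so a stray write through "regs" fails to compile.
 */
static inline bool __maybe_unused
____is_cr0_wp(const struct kvm_mmu_role_regs *regs)
{
	return !!(regs->cr0 & X86_CR0_WP);
}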
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, David Matlack
Subject: [PATCH 03/22] KVM: x86/mmu: pull computation of kvm_mmu_role_regs to kvm_init_mmu
Date: Thu, 14 Apr 2022 03:39:41 -0400
Message-Id: <20220414074000.31438-4-pbonzini@redhat.com>
In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com>

The init_kvm_*mmu functions, with the exception of shadow NPT, do not
need to know the full values of CR0/CR4/EFER; they only need to know
the bits that make up the "role".  This cleanup, however, will take
quite a few incremental steps.  As a start, pull the common computation
of the struct kvm_mmu_role_regs into their caller: all of them extract
the struct from the vcpu as the very first step.

Reviewed-by: David Matlack
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 07b8550e68e9..d56875938c29 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4778,12 +4778,12 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
 	return role;
 }
 
-static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
+static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
+			     const struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
-	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
 	union kvm_mmu_role new_role =
-		kvm_calc_tdp_mmu_root_page_role(vcpu, &regs, false);
+		kvm_calc_tdp_mmu_root_page_role(vcpu, regs, false);
 
 	if (new_role.as_u64 == context->mmu_role.as_u64)
 		return;
@@ -4797,7 +4797,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
-	context->root_level = role_regs_to_root_level(&regs);
+	context->root_level = role_regs_to_root_level(regs);
 
 	if (!is_cr0_pg(context))
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
@@ -4966,12 +4966,12 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
 
-static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
+static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
+			     const struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
-	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
 
-	kvm_init_shadow_mmu(vcpu, &regs);
+	kvm_init_shadow_mmu(vcpu, regs);
 
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
@@ -4995,10 +4995,10 @@ kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *
 	return role;
 }
 
-static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
+static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
+				const struct kvm_mmu_role_regs *regs)
 {
-	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
-	union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu, &regs);
+	union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu, regs);
 	struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
 
 	if (new_role.as_u64 == g_context->mmu_role.as_u64)
@@ -5038,12 +5038,14 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
+
 	if (mmu_is_nested(vcpu))
-		init_kvm_nested_mmu(vcpu);
+		init_kvm_nested_mmu(vcpu, &regs);
 	else if (tdp_enabled)
-		init_kvm_tdp_mmu(vcpu);
+		init_kvm_tdp_mmu(vcpu, &regs);
 	else
-		init_kvm_softmmu(vcpu);
+		init_kvm_softmmu(vcpu, &regs);
 }
 EXPORT_SYMBOL_GPL(kvm_init_mmu);
 
-- 
2.31.1
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com
Subject: [PATCH 04/22] KVM: x86/mmu: rephrase unclear comment
Date: Thu, 14 Apr 2022 03:39:42 -0400
Message-Id: <20220414074000.31438-5-pbonzini@redhat.com>
In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com>

If accessed bits are not supported, there simply isn't any distinction
between accessed and non-accessed gPTEs, so the comment does not make
much sense.  Rephrase it in terms of what happens if accessed bits
*are* supported.
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 7d4377f1ef2a..07a7832f96cb 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -151,7 +151,7 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
 	if (!FNAME(is_present_gpte)(gpte))
 		goto no_present;
 
-	/* if accessed bit is not supported prefetch non accessed gpte */
+	/* Prefetch only accessed entries (unless A/D bits are disabled). */
 	if (PT_HAVE_ACCESSED_DIRTY(vcpu->arch.mmu) &&
 	    !(gpte & PT_GUEST_ACCESSED_MASK))
 		goto no_present;
-- 
2.31.1

From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com
Subject: [PATCH 05/22] KVM: x86: Clean up and document nested #PF workaround
Date: Thu, 14 Apr 2022 03:39:43 -0400
Message-Id: <20220414074000.31438-6-pbonzini@redhat.com>
In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com>

From: Sean Christopherson
Replace the per-vendor hack-a-fix for KVM's #PF => #PF => #DF workaround
with an explicit, common workaround in kvm_inject_emulated_page_fault().
Aside from being a hack, the current approach is brittle and incomplete,
e.g. nSVM's KVM_SET_NESTED_STATE fails to set ->inject_page_fault(),
and nVMX fails to apply the workaround when VMX is intercepting #PF due
to allow_smaller_maxphyaddr=1.

Signed-off-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/svm/nested.c       | 18 +++++++++---------
 arch/x86/kvm/vmx/nested.c       | 15 ++++++---------
 arch/x86/kvm/x86.c              | 21 ++++++++++++++++++++-
 4 files changed, 37 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e1c695f72b8b..e46bd289e5df 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1516,6 +1516,8 @@ struct kvm_x86_ops {
 struct kvm_x86_nested_ops {
 	void (*leave_nested)(struct kvm_vcpu *vcpu);
 	int (*check_events)(struct kvm_vcpu *vcpu);
+	bool (*handle_page_fault_workaround)(struct kvm_vcpu *vcpu,
+					     struct x86_exception *fault);
 	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
 	void (*triple_fault)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index caa691229b71..bed5e1692cef 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -55,24 +55,26 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
 	nested_svm_vmexit(svm);
 }
 
-static void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu, struct x86_exception *fault)
+static bool nested_svm_handle_page_fault_workaround(struct kvm_vcpu *vcpu,
+						    struct x86_exception *fault)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
 
- 	WARN_ON(!is_guest_mode(vcpu));
+	WARN_ON(!is_guest_mode(vcpu));
 
 	if (vmcb12_is_intercept(&svm->nested.ctl,
 				INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR) &&
-	    !svm->nested.nested_run_pending) {
-		vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR;
+	    !WARN_ON_ONCE(svm->nested.nested_run_pending)) {
+		vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR;
 		vmcb->control.exit_code_hi = 0;
 		vmcb->control.exit_info_1 = fault->error_code;
 		vmcb->control.exit_info_2 = fault->address;
 		nested_svm_vmexit(svm);
-	} else {
-		kvm_inject_page_fault(vcpu, fault);
+		return true;
 	}
+
+	return false;
 }
 
 static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
@@ -751,9 +753,6 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
 	if (ret)
 		return ret;
 
-	if (!npt_enabled)
-		vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
-
 	if (!from_vmrun)
 		kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
 
@@ -1659,6 +1658,7 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
 struct kvm_x86_nested_ops svm_nested_ops = {
 	.leave_nested = svm_leave_nested,
 	.check_events = svm_check_nested_events,
+	.handle_page_fault_workaround = nested_svm_handle_page_fault_workaround,
 	.triple_fault = nested_svm_triple_fault,
 	.get_nested_state_pages = svm_get_nested_state_pages,
 	.get_state = svm_get_nested_state,
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 838ac7ab5950..a6688663da4d 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -476,24 +476,23 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
 	return 0;
 }
 
-
-static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
-		struct x86_exception *fault)
+static bool nested_vmx_handle_page_fault_workaround(struct kvm_vcpu *vcpu,
+						    struct x86_exception *fault)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 
 	WARN_ON(!is_guest_mode(vcpu));
 
 	if (nested_vmx_is_page_fault_vmexit(vmcs12, fault->error_code) &&
-	    !to_vmx(vcpu)->nested.nested_run_pending) {
+	    !WARN_ON_ONCE(to_vmx(vcpu)->nested.nested_run_pending)) {
 		vmcs12->vm_exit_intr_error_code = fault->error_code;
 		nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,
 				  PF_VECTOR | INTR_TYPE_HARD_EXCEPTION |
 				  INTR_INFO_DELIVER_CODE_MASK | INTR_INFO_VALID_MASK,
 				  fault->address);
-	} else {
-		kvm_inject_page_fault(vcpu, fault);
+		return true;
 	}
+	return false;
 }
 
 static int nested_vmx_check_io_bitmap_controls(struct kvm_vcpu *vcpu,
@@ -2614,9 +2613,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		vmcs_write64(GUEST_PDPTR3, vmcs12->guest_pdptr3);
 	}
 
-	if (!enable_ept)
-		vcpu->arch.walk_mmu->inject_page_fault = vmx_inject_page_fault_nested;
-
 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
 	    WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
 				     vmcs12->guest_ia32_perf_global_ctrl))) {
@@ -6830,6 +6826,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
 struct kvm_x86_nested_ops vmx_nested_ops = {
 	.leave_nested = vmx_leave_nested,
 	.check_events = vmx_check_nested_events,
+	.handle_page_fault_workaround = nested_vmx_handle_page_fault_workaround,
 	.hv_timer_pending = nested_vmx_preemption_timer_pending,
 	.triple_fault = nested_vmx_triple_fault,
 	.get_state = vmx_get_nested_state,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4e7f3a8da16a..9866853ca320 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -748,6 +748,7 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 }
 EXPORT_SYMBOL_GPL(kvm_inject_page_fault);
 
+/* Returns true if the page fault was immediately morphed into a VM-Exit. */
 bool kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
 				    struct x86_exception *fault)
 {
@@ -766,8 +767,26 @@ bool kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
 	kvm_mmu_invalidate_gva(vcpu, fault_mmu, fault->address,
 			       fault_mmu->root.hpa);
 
+	/*
+	 * A workaround for KVM's bad exception handling.  If KVM injected an
+	 * exception into L2, and L2 encountered a #PF while vectoring the
+	 * injected exception, manually check to see if L1 wants to intercept
+	 * #PF, otherwise queuing the #PF will lead to #DF or a lost exception.
+	 * In all other cases, defer the check to nested_ops->check_events(),
+	 * which will correctly handle priority (this does not).  Note, other
+	 * exceptions, e.g. #GP, are theoretically affected, #PF is simply the
+	 * most problematic, e.g. when L0 and L1 are both intercepting #PF for
+	 * shadow paging.
+	 *
+	 * TODO: Rewrite exception handling to track injected and pending
+	 *       (VM-Exit) exceptions separately.
+	 */
+	if (unlikely(vcpu->arch.exception.injected && is_guest_mode(vcpu)) &&
+	    kvm_x86_ops.nested_ops->handle_page_fault_workaround(vcpu, fault))
+		return true;
+
 	fault_mmu->inject_page_fault(vcpu, fault);
-	return fault->nested_page_fault;
+	return false;
 }
 EXPORT_SYMBOL_GPL(kvm_inject_emulated_page_fault);
 
-- 
2.31.1
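To make the failure mode concrete, here is an illustrative walk-through of the
#PF => #PF => #DF case, restated from the commit message under the assumption
that L0 uses shadow paging for L2 and L1 intercepts #PF; the check shown is
the one added to kvm_inject_emulated_page_fault() above.

/*
 * Illustration only:
 *  1. KVM (L0) injects a #PF into L2, e.g. for a shadow-paging fault.
 *  2. While L2 vectors that injected #PF, the IDT or stack access
 *     faults again in the shadow page tables, producing a second #PF.
 *  3. L1 intercepts #PF, so the second fault must be reflected as a
 *     VM-Exit to L1; queueing it on top of the injected #PF would
 *     instead escalate to #DF or lose an exception.
 */
if (unlikely(vcpu->arch.exception.injected && is_guest_mode(vcpu)) &&
    kvm_x86_ops.nested_ops->handle_page_fault_workaround(vcpu, fault))
	return true;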
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com
Subject: [PATCH 06/22] KVM: x86/mmu: remove "bool base_only" arguments
Date: Thu, 14 Apr 2022 03:39:44 -0400
Message-Id: <20220414074000.31438-7-pbonzini@redhat.com>
In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com>

The argument is always false now that kvm_mmu_calc_root_page_role has
been removed.
Reviewed-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 66 +++++++++++++++---------------------------
 1 file changed, 23 insertions(+), 43 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d56875938c29..7f156da3ca93 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4705,47 +4705,30 @@ static void paging32_init_context(struct kvm_mmu *context)
 	context->direct_map = false;
 }
 
-static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
-						   const struct kvm_mmu_role_regs *regs)
-{
-	union kvm_mmu_extended_role ext = {0};
-
-	if (____is_cr0_pg(regs)) {
-		ext.cr0_pg = 1;
-		ext.cr4_pae = ____is_cr4_pae(regs);
-		ext.cr4_smep = ____is_cr4_smep(regs);
-		ext.cr4_smap = ____is_cr4_smap(regs);
-		ext.cr4_pse = ____is_cr4_pse(regs);
-
-		/* PKEY and LA57 are active iff long mode is active. */
-		ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
-		ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
-		ext.efer_lma = ____is_efer_lma(regs);
-	}
-
-	ext.valid = 1;
-
-	return ext;
-}
-
 static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
-						   const struct kvm_mmu_role_regs *regs,
-						   bool base_only)
+						   const struct kvm_mmu_role_regs *regs)
 {
 	union kvm_mmu_role role = {0};
 
 	role.base.access = ACC_ALL;
 	if (____is_cr0_pg(regs)) {
+		role.ext.cr0_pg = 1;
 		role.base.efer_nx = ____is_efer_nx(regs);
 		role.base.cr0_wp = ____is_cr0_wp(regs);
+
+		role.ext.cr4_pae = ____is_cr4_pae(regs);
+		role.ext.cr4_smep = ____is_cr4_smep(regs);
+		role.ext.cr4_smap = ____is_cr4_smap(regs);
+		role.ext.cr4_pse = ____is_cr4_pse(regs);
+
+		/* PKEY and LA57 are active iff long mode is active. */
+		role.ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
+		role.ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
+		role.ext.efer_lma = ____is_efer_lma(regs);
 	}
 	role.base.smm = is_smm(vcpu);
 	role.base.guest_mode = is_guest_mode(vcpu);
-
-	if (base_only)
-		return role;
-
-	role.ext = kvm_calc_mmu_role_ext(vcpu, regs);
+	role.ext.valid = 1;
 
 	return role;
 }
@@ -4765,10 +4748,9 @@ static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
 
 static union kvm_mmu_role
 kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
-				const struct kvm_mmu_role_regs *regs,
-				bool base_only)
+				const struct kvm_mmu_role_regs *regs)
 {
-	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs, base_only);
+	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs);
 
 	role.base.ad_disabled = (shadow_accessed_mask == 0);
 	role.base.level = kvm_mmu_get_tdp_level(vcpu);
@@ -4783,7 +4765,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
 	union kvm_mmu_role new_role =
-		kvm_calc_tdp_mmu_root_page_role(vcpu, regs, false);
+		kvm_calc_tdp_mmu_root_page_role(vcpu, regs);
 
 	if (new_role.as_u64 == context->mmu_role.as_u64)
 		return;
@@ -4812,10 +4794,9 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 
 static union kvm_mmu_role
 kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu,
-				      const struct kvm_mmu_role_regs *regs,
-				      bool base_only)
+				      const struct kvm_mmu_role_regs *regs)
 {
-	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs, base_only);
+	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs);
 
 	role.base.smep_andnot_wp = role.ext.cr4_smep && !____is_cr0_wp(regs);
 	role.base.smap_andnot_wp = role.ext.cr4_smap && !____is_cr0_wp(regs);
@@ -4826,11 +4807,10 @@ kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu,
 
 static union kvm_mmu_role
 kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
-				   const struct kvm_mmu_role_regs *regs,
-				   bool base_only)
+				   const struct kvm_mmu_role_regs *regs)
 {
 	union kvm_mmu_role role =
-		kvm_calc_shadow_root_page_role_common(vcpu, regs, base_only);
+		kvm_calc_shadow_root_page_role_common(vcpu, regs);
 
 	role.base.direct = !____is_cr0_pg(regs);
 
@@ -4872,7 +4852,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
 	union kvm_mmu_role new_role =
-		kvm_calc_shadow_mmu_root_page_role(vcpu, regs, false);
+		kvm_calc_shadow_mmu_root_page_role(vcpu, regs);
 
 	shadow_mmu_init_context(vcpu, context, regs, new_role);
 }
@@ -4882,7 +4862,7 @@ kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu,
 				   const struct kvm_mmu_role_regs *regs)
 {
 	union kvm_mmu_role role =
-		kvm_calc_shadow_root_page_role_common(vcpu, regs, false);
+		kvm_calc_shadow_root_page_role_common(vcpu, regs);
 
 	role.base.direct = false;
 	role.base.level = kvm_mmu_get_tdp_level(vcpu);
@@ -4983,7 +4963,7 @@ kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *
 {
 	union kvm_mmu_role role;
 
-	role = kvm_calc_shadow_root_page_role_common(vcpu, regs, false);
+	role = kvm_calc_shadow_root_page_role_common(vcpu, regs);
 
 	/*
 	 * Nested MMUs are used only for walking L2's gva->gpa, they never have
-- 
2.31.1
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com
Subject: [PATCH 07/22] KVM: x86/mmu: split cpu_role from mmu_role
Date: Thu, 14 Apr 2022 03:39:45 -0400
Message-Id: <20220414074000.31438-8-pbonzini@redhat.com>
In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com>

Snapshot the state of the processor registers that govern page walk into
a new field of struct kvm_mmu.  This is a more natural representation
than having it *mostly* in mmu_role but not exclusively; the delta right
now is represented in other fields, such as root_level.

The nested MMU now has only the CPU role; and in fact the new function
kvm_calc_cpu_role is analogous to the previous kvm_calc_nested_mmu_role,
except that it has role.base.direct equal to !CR0.PG.  For a walk-only
MMU, "direct" has no meaning, but we set it to !CR0.PG so that
role.ext.cr0_pg can go away in a future patch.

Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/mmu/mmu.c          | 109 ++++++++++++++++++++------------
 arch/x86/kvm/mmu/paging_tmpl.h  |   2 +-
 3 files changed, 70 insertions(+), 42 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e46bd289e5df..50edf52a3ef6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -438,6 +438,7 @@ struct kvm_mmu {
 			  struct kvm_mmu_page *sp);
 	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
 	struct kvm_mmu_root_info root;
+	union kvm_mmu_role cpu_role;
 	union kvm_mmu_role mmu_role;
 	u8 root_level;
 	u8 shadow_root_level;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7f156da3ca93..0f8c4d8f2081 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -222,7 +222,7 @@ BUILD_MMU_ROLE_REGS_ACCESSOR(efer, lma, EFER_LMA);
 #define BUILD_MMU_ROLE_ACCESSOR(base_or_ext, reg, name)		\
 static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu)	\
 {								\
-	return !!(mmu->mmu_role. base_or_ext . reg##_##name);	\
+	return !!(mmu->cpu_role. base_or_ext . reg##_##name);	\
 }
 BUILD_MMU_ROLE_ACCESSOR(ext,  cr0, pg);
 BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
@@ -4705,6 +4705,41 @@ static void paging32_init_context(struct kvm_mmu *context)
 	context->direct_map = false;
 }
 
+static union kvm_mmu_role
+kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
+{
+	union kvm_mmu_role role = {0};
+
+	role.base.access = ACC_ALL;
+	role.base.smm = is_smm(vcpu);
+	role.base.guest_mode = is_guest_mode(vcpu);
+	role.ext.valid = 1;
+
+	if (!____is_cr0_pg(regs)) {
+		role.base.direct = 1;
+		return role;
+	}
+
+	role.base.efer_nx = ____is_efer_nx(regs);
+	role.base.cr0_wp = ____is_cr0_wp(regs);
+	role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs);
+	role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs);
+	role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
+	role.base.level = role_regs_to_root_level(regs);
+
+	role.ext.cr0_pg = 1;
+	role.ext.cr4_pae = ____is_cr4_pae(regs);
+	role.ext.cr4_smep = ____is_cr4_smep(regs);
+	role.ext.cr4_smap = ____is_cr4_smap(regs);
+	role.ext.cr4_pse = ____is_cr4_pse(regs);
+
+	/* PKEY and LA57 are active iff long mode is active. */
+	role.ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
+	role.ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
+	role.ext.efer_lma = ____is_efer_lma(regs);
+	return role;
+}
+
 static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
 					const struct kvm_mmu_role_regs *regs)
 {
@@ -4764,13 +4799,16 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 			     const struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
-	union kvm_mmu_role new_role =
+	union kvm_mmu_role cpu_role = kvm_calc_cpu_role(vcpu, regs);
+	union kvm_mmu_role mmu_role =
 		kvm_calc_tdp_mmu_root_page_role(vcpu, regs);
 
-	if (new_role.as_u64 == context->mmu_role.as_u64)
+	if (cpu_role.as_u64 == context->cpu_role.as_u64 &&
+	    mmu_role.as_u64 == context->mmu_role.as_u64)
 		return;
 
-	context->mmu_role.as_u64 = new_role.as_u64;
+	context->cpu_role.as_u64 = cpu_role.as_u64;
+	context->mmu_role.as_u64 = mmu_role.as_u64;
 	context->page_fault = kvm_tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = NULL;
@@ -4825,13 +4863,15 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
 }
 
 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
-				    const struct kvm_mmu_role_regs *regs,
-				    union kvm_mmu_role new_role)
+				    union kvm_mmu_role cpu_role,
+				    union kvm_mmu_role mmu_role)
 {
-	if (new_role.as_u64 == context->mmu_role.as_u64)
+	if (cpu_role.as_u64 == context->cpu_role.as_u64 &&
+	    mmu_role.as_u64 == context->mmu_role.as_u64)
 		return;
 
-	context->mmu_role.as_u64 = new_role.as_u64;
+	context->cpu_role.as_u64 = cpu_role.as_u64;
+	context->mmu_role.as_u64 = mmu_role.as_u64;
 
 	if (!is_cr0_pg(context))
 		nonpaging_init_context(context);
@@ -4839,10 +4879,10 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 		paging64_init_context(context);
 	else
 		paging32_init_context(context);
-	context->root_level = role_regs_to_root_level(regs);
+	context->root_level = cpu_role.base.level;
 
 	reset_guest_paging_metadata(vcpu, context);
-	context->shadow_root_level = new_role.base.level;
+	context->shadow_root_level = mmu_role.base.level;
 
 	reset_shadow_zero_bits_mask(vcpu, context);
 }
@@ -4851,10 +4891,11 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
 			    const struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
-	union kvm_mmu_role new_role =
+	union kvm_mmu_role cpu_role = kvm_calc_cpu_role(vcpu, regs);
+	union kvm_mmu_role mmu_role =
 		kvm_calc_shadow_mmu_root_page_role(vcpu, regs);
 
-	shadow_mmu_init_context(vcpu, context, regs, new_role);
+	shadow_mmu_init_context(vcpu, context, cpu_role, mmu_role);
 }
 
 static union kvm_mmu_role
@@ -4879,11 +4920,10 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
 		.cr4 = cr4 & ~X86_CR4_PKE,
 		.efer = efer,
 	};
-	union kvm_mmu_role new_role;
-
-	new_role = kvm_calc_shadow_npt_root_page_role(vcpu, &regs);
+	union kvm_mmu_role cpu_role = kvm_calc_cpu_role(vcpu, &regs);
+	union kvm_mmu_role mmu_role = kvm_calc_shadow_npt_root_page_role(vcpu, &regs);;
 
-	shadow_mmu_init_context(vcpu, context, &regs, new_role);
+	shadow_mmu_init_context(vcpu, context, cpu_role, mmu_role);
 	kvm_mmu_new_pgd(vcpu, nested_cr3);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
@@ -4906,7 +4946,6 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
 	role.base.guest_mode = true;
 	role.base.access = ACC_ALL;
 
-	/* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER. */
 	role.ext.word = 0;
 	role.ext.execonly = execonly;
 	role.ext.valid = 1;
@@ -4920,12 +4959,14 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 {
 	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
 	u8 level = vmx_eptp_page_walk_level(new_eptp);
-	union kvm_mmu_role new_role =
+	union kvm_mmu_role new_mode =
 		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
 						   execonly, level);
 
-	if (new_role.as_u64 != context->mmu_role.as_u64) {
-		context->mmu_role.as_u64 = new_role.as_u64;
+	if (new_mode.as_u64 != context->cpu_role.as_u64) {
+		/* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER. */
+		context->cpu_role.as_u64 = new_mode.as_u64;
+		context->mmu_role.as_u64 = new_mode.as_u64;
 
 		context->shadow_root_level = level;
 
@@ -4958,37 +4999,20 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
 	context->inject_page_fault = kvm_inject_page_fault;
 }
 
-static union kvm_mmu_role
-kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
-{
-	union kvm_mmu_role role;
-
-	role = kvm_calc_shadow_root_page_role_common(vcpu, regs);
-
-	/*
-	 * Nested MMUs are used only for walking L2's gva->gpa, they never have
-	 * shadow pages of their own and so "direct" has no meaning.  Set it
-	 * to "true" to try to detect bogus usage of the nested MMU.
-	 */
-	role.base.direct = true;
-	role.base.level = role_regs_to_root_level(regs);
-	return role;
-}
-
 static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
 				const struct kvm_mmu_role_regs *regs)
 {
-	union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu, regs);
+	union kvm_mmu_role new_mode = kvm_calc_cpu_role(vcpu, regs);
 	struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
 
-	if (new_role.as_u64 == g_context->mmu_role.as_u64)
+	if (new_mode.as_u64 == g_context->cpu_role.as_u64)
 		return;
 
-	g_context->mmu_role.as_u64 = new_role.as_u64;
+	g_context->cpu_role.as_u64 = new_mode.as_u64;
 	g_context->get_guest_pgd = get_cr3;
 	g_context->get_pdptr = kvm_pdptr_read;
 	g_context->inject_page_fault = kvm_inject_page_fault;
-	g_context->root_level = new_role.base.level;
+	g_context->root_level = new_mode.base.level;
 
 	/*
 	 * L2 page tables are never shadowed, so there is no need to sync
@@ -5046,6 +5070,9 @@ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	vcpu->arch.root_mmu.mmu_role.ext.valid = 0;
 	vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
 	vcpu->arch.nested_mmu.mmu_role.ext.valid = 0;
+	vcpu->arch.root_mmu.cpu_role.ext.valid = 0;
+	vcpu->arch.guest_mmu.cpu_role.ext.valid = 0;
+	vcpu->arch.nested_mmu.cpu_role.ext.valid = 0;
 	kvm_mmu_reset_context(vcpu);
 
 	/*
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 07a7832f96cb..56544c542d05 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -281,7 +281,7 @@ static inline bool FNAME(is_last_gpte)(struct kvm_mmu *mmu,
 	 * is not reserved and does not indicate a large page at this level,
	 * so clear PT_PAGE_SIZE_MASK in gpte if that is the case.
 	 */
-	gpte &= level - (PT32_ROOT_LEVEL + mmu->mmu_role.ext.cr4_pse);
+	gpte &= level - (PT32_ROOT_LEVEL + mmu->cpu_role.ext.cr4_pse);
 #endif
 	/*
 	 * PG_LEVEL_4K always terminates.  The RHS has bit 7 set
-- 
2.31.1
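As a hypothetical illustration of why the two roles deserve separate fields
(derived from the hunks above, not additional code from the series): with TDP
enabled and a 32-bit PAE guest, the role describing the guest's own page walk
and the role describing KVM's page tables now legitimately disagree.

/*
 * Illustration only: 32-bit PAE guest, EPT/NPT enabled.
 *
 *   cpu_role.base.level == PT32E_ROOT_LEVEL
 *       (the three-level PAE walk performed by the guest)
 *   mmu_role.base.level == kvm_mmu_get_tdp_level(vcpu)
 *       (the four or five levels used by KVM's TDP page tables)
 *
 * Before this patch the guest-visible half of that state was spread
 * across mmu_role.ext and side fields such as root_level.
 */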
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com
Subject: [PATCH 08/22] KVM: x86/mmu: do not recompute root level from kvm_mmu_role_regs
Date: Thu, 14 Apr 2022 03:39:46 -0400
Message-Id: <20220414074000.31438-9-pbonzini@redhat.com>
In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com>

The root_level can be found in the cpu_role (in fact the field is
superfluous and could be removed, but one thing at a time).  Since there
is only one usage left of role_regs_to_root_level, inline it into
kvm_calc_cpu_role.
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 24 +++++++++---------------
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0f8c4d8f2081..1ba452df8e67 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -245,19 +245,6 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 	return regs;
 }
 
-static int role_regs_to_root_level(const struct kvm_mmu_role_regs *regs)
-{
-	if (!____is_cr0_pg(regs))
-		return 0;
-	else if (____is_efer_lma(regs))
-		return ____is_cr4_la57(regs) ? PT64_ROOT_5LEVEL :
-					       PT64_ROOT_4LEVEL;
-	else if (____is_cr4_pae(regs))
-		return PT32E_ROOT_LEVEL;
-	else
-		return PT32_ROOT_LEVEL;
-}
-
 static inline bool kvm_available_flush_tlb_with_range(void)
 {
 	return kvm_x86_ops.tlb_remote_flush_with_range;
@@ -4725,7 +4712,14 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
 	role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs);
 	role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs);
 	role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
-	role.base.level = role_regs_to_root_level(regs);
+
+	if (____is_efer_lma(regs))
+		role.base.level = ____is_cr4_la57(regs) ? PT64_ROOT_5LEVEL
+							 : PT64_ROOT_4LEVEL;
+	else if (____is_cr4_pae(regs))
+		role.base.level = PT32E_ROOT_LEVEL;
+	else
+		role.base.level = PT32_ROOT_LEVEL;
 
 	role.ext.cr0_pg = 1;
 	role.ext.cr4_pae = ____is_cr4_pae(regs);
@@ -4817,7 +4811,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
-	context->root_level = role_regs_to_root_level(regs);
+	context->root_level = cpu_role.base.level;
 
 	if (!is_cr0_pg(context))
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
-- 
2.31.1
us-mta-573-UkSJX4qpPSy2PxcdlNNEbA-1; Thu, 14 Apr 2022 03:40:03 -0400 X-MC-Unique: UkSJX4qpPSy2PxcdlNNEbA-1 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A7CB2833973; Thu, 14 Apr 2022 07:40:02 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8AEFE41D40E; Thu, 14 Apr 2022 07:40:02 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 09/22] KVM: x86/mmu: remove ept_ad field Date: Thu, 14 Apr 2022 03:39:47 -0400 Message-Id: <20220414074000.31438-10-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.9 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" The ept_ad field is used during page walk to determine if the guest PTEs have accessed and dirty bits. In the MMU role, the ad_disabled bit represents whether the *shadow* PTEs have the bits, so it would be incorrect to replace PT_HAVE_ACCESSED_DIRTY with just !mmu->mmu_role.base.ad_disabled. However, the similar field in the CPU mode, ad_disabled, is initialized correctly: to the opposite value of ept_ad for shadow EPT, and zero for non-EPT guest paging modes (which always have A/D bits). It is therefore possible to compute PT_HAVE_ACCESSED_DIRTY from the CPU mode, like other page-format fields; it just has to be inverted to account for the different polarity. In fact, now that the CPU mode is distinct from the MMU roles, it would even be possible to remove PT_HAVE_ACCESSED_DIRTY macro altogether, and use !mmu->cpu_role.base.ad_disabled instead. 
I am not doing this because the macro has a small effect in terms of dead code elimination: text data bss dec hex 103544 16665 112 120321 1d601 # as of this patch 103746 16665 112 120523 1d6cb # without PT_HAVE_ACCESSED_DIRTY Reviewed-by: Sean Christopherson Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/mmu/mmu.c | 1 - arch/x86/kvm/mmu/paging_tmpl.h | 2 +- 3 files changed, 1 insertion(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 50edf52a3ef6..a299236cfde5 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -442,7 +442,6 @@ struct kvm_mmu { union kvm_mmu_role mmu_role; u8 root_level; u8 shadow_root_level; - u8 ept_ad; bool direct_map; =20 /* diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 1ba452df8e67..fddc8a3237b0 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4964,7 +4964,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, b= ool execonly, =20 context->shadow_root_level =3D level; =20 - context->ept_ad =3D accessed_dirty; context->page_fault =3D ept_page_fault; context->gva_to_gpa =3D ept_gva_to_gpa; context->sync_page =3D ept_sync_page; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 56544c542d05..298e502286cf 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -63,7 +63,7 @@ #define PT_LEVEL_BITS PT64_LEVEL_BITS #define PT_GUEST_DIRTY_SHIFT 9 #define PT_GUEST_ACCESSED_SHIFT 8 - #define PT_HAVE_ACCESSED_DIRTY(mmu) ((mmu)->ept_ad) + #define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.ad_disabled) #ifdef CONFIG_X86_64 #define CMPXCHG "cmpxchgq" #endif --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 43083C4167E for ; Thu, 14 Apr 2022 07:40:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240521AbiDNHnI (ORCPT ); Thu, 14 Apr 2022 03:43:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59122 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240452AbiDNHm3 (ORCPT ); Thu, 14 Apr 2022 03:42:29 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A5E4D53B5E for ; Thu, 14 Apr 2022 00:40:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922004; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=StKL8wBLFuz+JBSqbcoQ4KdZ20uKl8Zl8ZBV7e3EAAw=; b=Q5eoNnOaMTwp3uu/s0q9g15VBC1oZ4/BlfjTKDGKFxsIvuTjQQiz4jT5aV8692EqH4b2g3 Pub0ZIQHHypSTDwA6iyzPDbZKZnXGHxIC/d5+6T4IwsJv8fTtg0ErpFbJAcopSzThqrKhv GC5n8zOKT3ZA7jM8LGGT9mPJ61w1CaY= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-608-RuHUpxrFNvWC1e3WwDO-ZA-1; Thu, 14 Apr 2022 03:40:03 -0400 X-MC-Unique: RuHUpxrFNvWC1e3WwDO-ZA-1 Received: from smtp.corp.redhat.com 
(int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E1854811E80; Thu, 14 Apr 2022 07:40:02 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id C3C1FC28100; Thu, 14 Apr 2022 07:40:02 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 10/22] KVM: x86/mmu: remove kvm_calc_shadow_root_page_role_common Date: Thu, 14 Apr 2022 03:39:48 -0400 Message-Id: <20220414074000.31438-11-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" kvm_calc_shadow_root_page_role_common is the same as kvm_calc_cpu_role except for the level, which is overwritten afterwards in kvm_calc_shadow_mmu_root_page_role and kvm_calc_shadow_npt_root_page_role. role.base.direct is already set correctly for the CPU role, and CR0.PG=3D1 is required for VMRUN so it will also be correct for nested NPT. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/mmu/mmu.c | 27 +++++++-------------------- 1 file changed, 7 insertions(+), 20 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index fddc8a3237b0..3f712d2de0ed 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4824,28 +4824,14 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, reset_tdp_shadow_zero_bits_mask(context); } =20 -static union kvm_mmu_role -kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) -{ - union kvm_mmu_role role =3D kvm_calc_mmu_role_common(vcpu, regs); - - role.base.smep_andnot_wp =3D role.ext.cr4_smep && !____is_cr0_wp(regs); - role.base.smap_andnot_wp =3D role.ext.cr4_smap && !____is_cr0_wp(regs); - role.base.has_4_byte_gpte =3D ____is_cr0_pg(regs) && !____is_cr4_pae(regs= ); - - return role; -} - static union kvm_mmu_role kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) { - union kvm_mmu_role role =3D - kvm_calc_shadow_root_page_role_common(vcpu, regs); - - role.base.direct =3D !____is_cr0_pg(regs); + union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); + union kvm_mmu_role role; =20 + role =3D cpu_role; if (!____is_efer_lma(regs)) role.base.level =3D PT32E_ROOT_LEVEL; else if (____is_cr4_la57(regs)) @@ -4896,10 +4882,11 @@ static union kvm_mmu_role kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) { - union kvm_mmu_role role =3D - kvm_calc_shadow_root_page_role_common(vcpu, regs); + union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); + union kvm_mmu_role role; =20 - role.base.direct =3D false; + WARN_ON_ONCE(cpu_role.base.direct); + role =3D cpu_role; role.base.level =3D kvm_mmu_get_tdp_level(vcpu); =20 return role; --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 99F3BC433EF for ; Thu, 14 Apr 2022 
07:41:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240550AbiDNHna (ORCPT ); Thu, 14 Apr 2022 03:43:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59108 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240458AbiDNHma (ORCPT ); Thu, 14 Apr 2022 03:42:30 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 3370A541B8 for ; Thu, 14 Apr 2022 00:40:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922005; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=El175h5Qgwy7pSsLDAtLc4veOZtMyXcanhu5lIgdWFE=; b=aWPoTHZm7tfFZyTDqHScJwRs0tUN8paJh6kLw3ACChHK5KsOCB4N0c17fz+Ecd0kU7pJ36 +IgRH4TVPnOGNaitZ3Hss5izM82fgX7iZsRwiSeF8j433dXUEd2CoZLXlLRJ1HUWusd8jt NfVuZGrrNlx4vwBFzVrl+qulCxtjFQs= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-573-ZTNJ5pIvOEyBEtYU5IrohA-1; Thu, 14 Apr 2022 03:40:03 -0400 X-MC-Unique: ZTNJ5pIvOEyBEtYU5IrohA-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1192419705D8; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id E9668C28100; Thu, 14 Apr 2022 07:40:02 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 11/22] KVM: x86/mmu: cleanup computation of MMU roles for two-dimensional paging Date: Thu, 14 Apr 2022 03:39:49 -0400 Message-Id: <20220414074000.31438-12-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Inline kvm_calc_mmu_role_common into its sole caller, and simplify it by removing the computation of unnecessary bits. Extended bits are unnecessary because page walking uses the CPU role, and EFER.NX/CR0.WP can be set to one unconditionally---matching the format of shadow pages rather than the format of guest pages. The MMU role for two dimensional paging does still depend on the CPU role, even if only barely so, due to SMM and guest mode; for consistency, pass it down to kvm_calc_tdp_mmu_root_page_role instead of querying the vcpu with is_smm or is_guest_mode. 
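A rough user-space model of the simplified helper follows; the field set, the stand-in structs and the ACC_ALL value are illustrative, not the kernel's unions. It shows that only SMM and guest mode are inherited from the CPU role, while every other bit is dictated by the TDP page-table format.

#include <stdbool.h>
#include <stdio.h>

/* Trimmed-down, illustrative stand-ins for the kernel's role unions. */
struct cpu_role_bits {
	bool smm, guest_mode;		/* the only CPU-role inputs used here */
};

struct tdp_root_role {
	unsigned int access;
	bool cr0_wp, efer_nx, smm, guest_mode, ad_disabled, direct, has_4_byte_gpte;
	int level;
};

#define ACC_ALL		7	/* illustrative: read+write+execute */

/*
 * Mirrors the shape of the simplified kvm_calc_tdp_mmu_root_page_role():
 * only SMM and guest mode come from the CPU role; everything else is fixed
 * by the TDP page-table format.
 */
static struct tdp_root_role tdp_root_role(struct cpu_role_bits cpu,
					  bool have_accessed_mask, int tdp_level)
{
	struct tdp_root_role role = {
		.access		 = ACC_ALL,
		.cr0_wp		 = true,
		.efer_nx	 = true,
		.smm		 = cpu.smm,
		.guest_mode	 = cpu.guest_mode,
		.ad_disabled	 = !have_accessed_mask,
		.level		 = tdp_level,
		.direct		 = true,
		.has_4_byte_gpte = false,
	};

	return role;
}

int main(void)
{
	struct cpu_role_bits cpu = { .smm = false, .guest_mode = true };
	struct tdp_root_role r = tdp_root_role(cpu, true, 4);

	printf("level=%d direct=%d guest_mode=%d\n", r.level, r.direct, r.guest_mode);
	return 0;
}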
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/mmu/mmu.c | 41 +++++++++-------------------------------- 1 file changed, 9 insertions(+), 32 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3f712d2de0ed..6c0287d60781 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4734,34 +4734,6 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struc= t kvm_mmu_role_regs *regs) return role; } =20 -static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) -{ - union kvm_mmu_role role =3D {0}; - - role.base.access =3D ACC_ALL; - if (____is_cr0_pg(regs)) { - role.ext.cr0_pg =3D 1; - role.base.efer_nx =3D ____is_efer_nx(regs); - role.base.cr0_wp =3D ____is_cr0_wp(regs); - - role.ext.cr4_pae =3D ____is_cr4_pae(regs); - role.ext.cr4_smep =3D ____is_cr4_smep(regs); - role.ext.cr4_smap =3D ____is_cr4_smap(regs); - role.ext.cr4_pse =3D ____is_cr4_pse(regs); - - /* PKEY and LA57 are active iff long mode is active. */ - role.ext.cr4_pke =3D ____is_efer_lma(regs) && ____is_cr4_pke(regs); - role.ext.cr4_la57 =3D ____is_efer_lma(regs) && ____is_cr4_la57(regs); - role.ext.efer_lma =3D ____is_efer_lma(regs); - } - role.base.smm =3D is_smm(vcpu); - role.base.guest_mode =3D is_guest_mode(vcpu); - role.ext.valid =3D 1; - - return role; -} - static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu) { /* tdp_root_level is architecture forced level, use it if nonzero */ @@ -4777,14 +4749,20 @@ static inline int kvm_mmu_get_tdp_level(struct kvm_= vcpu *vcpu) =20 static union kvm_mmu_role kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) + union kvm_mmu_role cpu_role) { - union kvm_mmu_role role =3D kvm_calc_mmu_role_common(vcpu, regs); + union kvm_mmu_role role =3D {0}; =20 + role.base.access =3D ACC_ALL; + role.base.cr0_wp =3D true; + role.base.efer_nx =3D true; + role.base.smm =3D cpu_role.base.smm; + role.base.guest_mode =3D cpu_role.base.guest_mode; role.base.ad_disabled =3D (shadow_accessed_mask =3D=3D 0); role.base.level =3D kvm_mmu_get_tdp_level(vcpu); role.base.direct =3D true; role.base.has_4_byte_gpte =3D false; + role.ext.valid =3D true; =20 return role; } @@ -4794,8 +4772,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); - union kvm_mmu_role mmu_role =3D - kvm_calc_tdp_mmu_root_page_role(vcpu, regs); + union kvm_mmu_role mmu_role =3D kvm_calc_tdp_mmu_root_page_role(vcpu, cpu= _role); =20 if (cpu_role.as_u64 =3D=3D context->cpu_role.as_u64 && mmu_role.as_u64 =3D=3D context->mmu_role.as_u64) --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 040D3C433FE for ; Thu, 14 Apr 2022 07:42:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240696AbiDNHoh (ORCPT ); Thu, 14 Apr 2022 03:44:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59264 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240496AbiDNHmd (ORCPT ); Thu, 14 Apr 2022 03:42:33 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 82C9553723 for ; Thu, 14 Apr 2022 
00:40:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922008; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=uGXfw0TwpBgMaCpL6PaUqyBneoOU1zZyXIhjUfcahI4=; b=PIifq11y0dNe3PJ4edgLWrNSHjLJgTZ0wByNmG1fuJYxcqivloy6P8o+pUMPF8vNots6F+ wyjrJ7L7IxDNHdmMdu/kAwZR11nQgianBwd7UvXF6ZQe3Y215sxvMyCStQQPgaUUREhP9y wMuDNZ2920sSUvbOk0BinF3rt211vNs= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-662-XMghqsKXN_mWOZZatSDmPw-1; Thu, 14 Apr 2022 03:40:04 -0400 X-MC-Unique: XMghqsKXN_mWOZZatSDmPw-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 36D9D804190; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 18DCEC28100; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 12/22] KVM: x86/mmu: cleanup computation of MMU roles for shadow paging Date: Thu, 14 Apr 2022 03:39:50 -0400 Message-Id: <20220414074000.31438-13-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Pass the already-computed CPU role, instead of redoing it. 
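As a minimal sketch of the resulting flow for shadow NPT, using hypothetical stand-in types rather than the kernel's packed role unions: the caller derives the CPU role once and passes it down, and the callee only swaps in the host's TDP paging level.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in types; the kernel uses packed role unions instead. */
struct cpu_role  { bool direct; int level; };
struct page_role { bool direct; int level; };

/*
 * After this patch the callee receives the CPU role the caller already
 * computed and only replaces the level with the host's TDP paging level.
 */
static struct page_role npt_root_role(struct cpu_role cpu, int host_tdp_level)
{
	struct page_role role = { .direct = cpu.direct, .level = cpu.level };

	/* VMRUN requires CR0.PG=1, so the CPU role is never direct here. */
	assert(!cpu.direct);
	role.level = host_tdp_level;
	return role;
}

int main(void)
{
	/* The caller derives the CPU role exactly once... */
	struct cpu_role cpu = { .direct = false, .level = 4 };

	/* ...and passes it down instead of recomputing it from CR0/CR4/EFER. */
	printf("NPT shadow root level: %d\n", npt_root_role(cpu, 4).level);
	return 0;
}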
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/mmu/mmu.c | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6c0287d60781..92dade92462c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4803,15 +4803,14 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, =20 static union kvm_mmu_role kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) + union kvm_mmu_role cpu_role) { - union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); union kvm_mmu_role role; =20 role =3D cpu_role; - if (!____is_efer_lma(regs)) + if (!cpu_role.ext.efer_lma) role.base.level =3D PT32E_ROOT_LEVEL; - else if (____is_cr4_la57(regs)) + else if (cpu_role.ext.cr4_la57) role.base.level =3D PT64_ROOT_5LEVEL; else role.base.level =3D PT64_ROOT_4LEVEL; @@ -4850,16 +4849,15 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vc= pu, struct kvm_mmu *context =3D &vcpu->arch.root_mmu; union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); union kvm_mmu_role mmu_role =3D - kvm_calc_shadow_mmu_root_page_role(vcpu, regs); + kvm_calc_shadow_mmu_root_page_role(vcpu, cpu_role); =20 shadow_mmu_init_context(vcpu, context, cpu_role, mmu_role); } =20 static union kvm_mmu_role kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) + union kvm_mmu_role cpu_role) { - union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); union kvm_mmu_role role; =20 WARN_ON_ONCE(cpu_role.base.direct); @@ -4879,7 +4877,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u= nsigned long cr0, .efer =3D efer, }; union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, ®s); - union kvm_mmu_role mmu_role =3D kvm_calc_shadow_npt_root_page_role(vcpu, = ®s);; + union kvm_mmu_role mmu_role =3D kvm_calc_shadow_npt_root_page_role(vcpu, = cpu_role); =20 shadow_mmu_init_context(vcpu, context, cpu_role, mmu_role); kvm_mmu_new_pgd(vcpu, nested_cr3); --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61E09C433F5 for ; Thu, 14 Apr 2022 07:41:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240469AbiDNHnq (ORCPT ); Thu, 14 Apr 2022 03:43:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59248 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240478AbiDNHmc (ORCPT ); Thu, 14 Apr 2022 03:42:32 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 2FCF240931 for ; Thu, 14 Apr 2022 00:40:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922007; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GS5GXdXYcQZI1wo1x5s5S3htNZXur5jE8g9OaqoOCOs=; b=O6KKRAEVdSkPAGJWBznDRZLOPyGbQXiqjd2yBiIaeEym2Dfa4qyXhOlsPR02gQ0NxUiPdO zl9EZULhBFeqrpe+M/ISpr0p7/c4U6rB4oebzsCmtYM9ymJyGw6YYyucocEx7PMp/U8kQD 4M6Jrus2sIG1VaEbcWSPDuj/kRzmaHg= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by 
relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-616-1dpg4S1vONy91vzehhuyqQ-1; Thu, 14 Apr 2022 03:40:03 -0400 X-MC-Unique: 1dpg4S1vONy91vzehhuyqQ-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 59AB119705DB; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3C6CCC28109; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 13/22] KVM: x86/mmu: store shadow EFER.NX in the MMU role Date: Thu, 14 Apr 2022 03:39:51 -0400 Message-Id: <20220414074000.31438-14-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Now that the MMU role is separate from the CPU role, it can be a truthful description of the format of the shadow pages. This includes whether the shadow pages use the NX bit; so force the efer_nx field of the MMU role when TDP is disabled, and remove the hardcoding it in the callers of reset_shadow_zero_bits_mask. In fact, the initialization of reserved SPTE bits can now be made common to shadow paging and shadow NPT; move it to shadow_mmu_init_context. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/mmu/mmu.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 92dade92462c..f491d3c47ac8 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4430,16 +4430,6 @@ static inline u64 reserved_hpa_bits(void) static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context) { - /* - * KVM uses NX when TDP is disabled to handle a variety of scenarios, - * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and - * to generate correct permissions for CR0.WP=3D0/CR4.SMEP=3D1/EFER.NX=3D= 0. - * The iTLB multi-hit workaround can be toggled at any time, so assume - * NX can be used by any non-nested shadow MMU to avoid having to reset - * MMU contexts. Note, KVM forces EFER.NX=3D1 when TDP is disabled. - */ - bool uses_nx =3D is_efer_nx(context) || !tdp_enabled; - /* @amd adds a check on bit of SPTEs, which KVM shouldn't use anyways. */ bool is_amd =3D true; /* KVM doesn't use 2-level page tables for the shadow MMU. */ @@ -4451,7 +4441,8 @@ static void reset_shadow_zero_bits_mask(struct kvm_vc= pu *vcpu, =20 shadow_zero_check =3D &context->shadow_zero_check; __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), - context->shadow_root_level, uses_nx, + context->shadow_root_level, + context->mmu_role.base.efer_nx, guest_can_use_gbpages(vcpu), is_pse, is_amd); =20 if (!shadow_me_mask) @@ -4815,6 +4806,16 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *= vcpu, else role.base.level =3D PT64_ROOT_4LEVEL; =20 + /* + * KVM forces EFER.NX=3D1 when TDP is disabled, reflect it in the MMU rol= e. 
+ * KVM uses NX when TDP is disabled to handle a variety of scenarios, + * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and + * to generate correct permissions for CR0.WP=3D0/CR4.SMEP=3D1/EFER.NX=3D= 0. + * The iTLB multi-hit workaround can be toggled at any time, so assume + * NX can be used by any non-nested shadow MMU to avoid having to reset + * MMU contexts. + */ + role.base.efer_nx =3D true; return role; } =20 --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D968C433EF for ; Thu, 14 Apr 2022 07:41:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240479AbiDNHoS (ORCPT ); Thu, 14 Apr 2022 03:44:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59352 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240499AbiDNHmd (ORCPT ); Thu, 14 Apr 2022 03:42:33 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 83D1A4DF65 for ; Thu, 14 Apr 2022 00:40:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922007; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=8gxavjzitBwqwp61MznMFSJ68yyjIUObHGZD4rlYWkQ=; b=bS8XcZB/QXbrbTD8fPYsclv3Bnz3/3ZxovPRuKNH4AwRVX1q4W+xF3BgOy+cCrTYdt9a2h aChcLwUVlekyHIoxIngjzARxcDQ3nswcbJhG1mRu7loPB2IacjZMpO+N8mBjoGQ8tY97Nj powtPj2kC94HYh3/EKrFAaey66SSj+c= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-127-emAmHZyMPFuTTv78Hv5gXg-1; Thu, 14 Apr 2022 03:40:03 -0400 X-MC-Unique: emAmHZyMPFuTTv78Hv5gXg-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7C7E280418A; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 60657C28109; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 14/22] KVM: x86/mmu: remove extended bits from mmu_role, rename field Date: Thu, 14 Apr 2022 03:39:52 -0400 Message-Id: <20220414074000.31438-15-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" mmu_role represents the role of the root of the page tables. It does not need any extended bits, as those govern only KVM's page table walking; the is_* functions used for page table walking always use the CPU role. 
ext.valid is not present anymore in the MMU role, but an all-zero MMU role is impossible because the level field is never zero in the MMU role. So just zap the whole mmu_role in order to force invalidation after CPUID is updated. While making this change, which requires touching almost every occurrence of "mmu_role", rename it to "root_role". Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/mmu/mmu.c | 86 ++++++++++++++++----------------- arch/x86/kvm/mmu/paging_tmpl.h | 4 +- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- 4 files changed, 46 insertions(+), 48 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index a299236cfde5..c81221d03a1b 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -439,7 +439,7 @@ struct kvm_mmu { void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa); struct kvm_mmu_root_info root; union kvm_mmu_role cpu_role; - union kvm_mmu_role mmu_role; + union kvm_mmu_page_role root_role; u8 root_level; u8 shadow_root_level; bool direct_map; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f491d3c47ac8..13eb2d40e0a3 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -193,7 +193,7 @@ struct kvm_mmu_role_regs { =20 /* * Yes, lot's of underscores. They're a hint that you probably shouldn't = be - * reading from the role_regs. Once the mmu_role is constructed, it becom= es + * reading from the role_regs. Once the root_role is constructed, it beco= mes * the single source of truth for the MMU's state. */ #define BUILD_MMU_ROLE_REGS_ACCESSOR(reg, name, flag) \ @@ -2028,7 +2028,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct k= vm_vcpu *vcpu, int collisions =3D 0; LIST_HEAD(invalid_list); =20 - role =3D vcpu->arch.mmu->mmu_role.base; + role =3D vcpu->arch.mmu->root_role; role.level =3D level; role.direct =3D direct; role.access =3D access; @@ -3272,7 +3272,7 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, s= truct kvm_mmu *mmu) * This should not be called while L2 is active, L2 can't invalidate * _only_ its own roots, e.g. INVVPID unconditionally exits. */ - WARN_ON_ONCE(mmu->mmu_role.base.guest_mode); + WARN_ON_ONCE(mmu->root_role.guest_mode); =20 for (i =3D 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) { root_hpa =3D mmu->prev_roots[i].hpa; @@ -4183,7 +4183,7 @@ static bool fast_pgd_switch(struct kvm *kvm, struct k= vm_mmu *mmu, void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd) { struct kvm_mmu *mmu =3D vcpu->arch.mmu; - union kvm_mmu_page_role new_role =3D mmu->mmu_role.base; + union kvm_mmu_page_role new_role =3D mmu->root_role; =20 if (!fast_pgd_switch(vcpu->kvm, mmu, new_pgd, new_role)) { /* kvm_mmu_ensure_valid_pgd will set up a new root. 
*/ @@ -4442,7 +4442,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vc= pu *vcpu, shadow_zero_check =3D &context->shadow_zero_check; __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), context->shadow_root_level, - context->mmu_role.base.efer_nx, + context->root_role.efer_nx, guest_can_use_gbpages(vcpu), is_pse, is_amd); =20 if (!shadow_me_mask) @@ -4738,22 +4738,21 @@ static inline int kvm_mmu_get_tdp_level(struct kvm_= vcpu *vcpu) return max_tdp_level; } =20 -static union kvm_mmu_role +static union kvm_mmu_page_role kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, union kvm_mmu_role cpu_role) { - union kvm_mmu_role role =3D {0}; + union kvm_mmu_page_role role =3D {0}; =20 - role.base.access =3D ACC_ALL; - role.base.cr0_wp =3D true; - role.base.efer_nx =3D true; - role.base.smm =3D cpu_role.base.smm; - role.base.guest_mode =3D cpu_role.base.guest_mode; - role.base.ad_disabled =3D (shadow_accessed_mask =3D=3D 0); - role.base.level =3D kvm_mmu_get_tdp_level(vcpu); - role.base.direct =3D true; - role.base.has_4_byte_gpte =3D false; - role.ext.valid =3D true; + role.access =3D ACC_ALL; + role.cr0_wp =3D true; + role.efer_nx =3D true; + role.smm =3D cpu_role.base.smm; + role.guest_mode =3D cpu_role.base.guest_mode; + role.ad_disabled =3D (shadow_accessed_mask =3D=3D 0); + role.level =3D kvm_mmu_get_tdp_level(vcpu); + role.direct =3D true; + role.has_4_byte_gpte =3D false; =20 return role; } @@ -4763,14 +4762,14 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); - union kvm_mmu_role mmu_role =3D kvm_calc_tdp_mmu_root_page_role(vcpu, cpu= _role); + union kvm_mmu_page_role root_role =3D kvm_calc_tdp_mmu_root_page_role(vcp= u, cpu_role); =20 if (cpu_role.as_u64 =3D=3D context->cpu_role.as_u64 && - mmu_role.as_u64 =3D=3D context->mmu_role.as_u64) + root_role.word =3D=3D context->root_role.word) return; =20 context->cpu_role.as_u64 =3D cpu_role.as_u64; - context->mmu_role.as_u64 =3D mmu_role.as_u64; + context->root_role.word =3D root_role.word; context->page_fault =3D kvm_tdp_page_fault; context->sync_page =3D nonpaging_sync_page; context->invlpg =3D NULL; @@ -4792,19 +4791,19 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, reset_tdp_shadow_zero_bits_mask(context); } =20 -static union kvm_mmu_role +static union kvm_mmu_page_role kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, union kvm_mmu_role cpu_role) { - union kvm_mmu_role role; + union kvm_mmu_page_role role; =20 - role =3D cpu_role; + role =3D cpu_role.base; if (!cpu_role.ext.efer_lma) - role.base.level =3D PT32E_ROOT_LEVEL; + role.level =3D PT32E_ROOT_LEVEL; else if (cpu_role.ext.cr4_la57) - role.base.level =3D PT64_ROOT_5LEVEL; + role.level =3D PT64_ROOT_5LEVEL; else - role.base.level =3D PT64_ROOT_4LEVEL; + role.level =3D PT64_ROOT_4LEVEL; =20 /* * KVM forces EFER.NX=3D1 when TDP is disabled, reflect it in the MMU rol= e. @@ -4815,20 +4814,20 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu = *vcpu, * NX can be used by any non-nested shadow MMU to avoid having to reset * MMU contexts. 
*/ - role.base.efer_nx =3D true; + role.efer_nx =3D true; return role; } =20 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu = *context, union kvm_mmu_role cpu_role, - union kvm_mmu_role mmu_role) + union kvm_mmu_page_role root_role) { if (cpu_role.as_u64 =3D=3D context->cpu_role.as_u64 && - mmu_role.as_u64 =3D=3D context->mmu_role.as_u64) + root_role.word =3D=3D context->root_role.word) return; =20 context->cpu_role.as_u64 =3D cpu_role.as_u64; - context->mmu_role.as_u64 =3D mmu_role.as_u64; + context->root_role.word =3D root_role.word; =20 if (!is_cr0_pg(context)) nonpaging_init_context(context); @@ -4839,7 +4838,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *= vcpu, struct kvm_mmu *conte context->root_level =3D cpu_role.base.level; =20 reset_guest_paging_metadata(vcpu, context); - context->shadow_root_level =3D mmu_role.base.level; + context->shadow_root_level =3D root_role.level; =20 reset_shadow_zero_bits_mask(vcpu, context); } @@ -4849,22 +4848,21 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vc= pu, { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); - union kvm_mmu_role mmu_role =3D + union kvm_mmu_page_role root_role =3D kvm_calc_shadow_mmu_root_page_role(vcpu, cpu_role); =20 - shadow_mmu_init_context(vcpu, context, cpu_role, mmu_role); + shadow_mmu_init_context(vcpu, context, cpu_role, root_role); } =20 -static union kvm_mmu_role +static union kvm_mmu_page_role kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu, union kvm_mmu_role cpu_role) { - union kvm_mmu_role role; + union kvm_mmu_page_role role; =20 WARN_ON_ONCE(cpu_role.base.direct); - role =3D cpu_role; - role.base.level =3D kvm_mmu_get_tdp_level(vcpu); - + role =3D cpu_role.base; + role.level =3D kvm_mmu_get_tdp_level(vcpu); return role; } =20 @@ -4878,9 +4876,9 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u= nsigned long cr0, .efer =3D efer, }; union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, ®s); - union kvm_mmu_role mmu_role =3D kvm_calc_shadow_npt_root_page_role(vcpu, = cpu_role); + union kvm_mmu_page_role root_role =3D kvm_calc_shadow_npt_root_page_role(= vcpu, cpu_role); =20 - shadow_mmu_init_context(vcpu, context, cpu_role, mmu_role); + shadow_mmu_init_context(vcpu, context, cpu_role, root_role); kvm_mmu_new_pgd(vcpu, nested_cr3); } EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu); @@ -4923,7 +4921,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, b= ool execonly, if (new_mode.as_u64 !=3D context->cpu_role.as_u64) { /* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER. */ context->cpu_role.as_u64 =3D new_mode.as_u64; - context->mmu_role.as_u64 =3D new_mode.as_u64; + context->root_role.word =3D new_mode.base.word; =20 context->shadow_root_level =3D level; =20 @@ -5023,9 +5021,9 @@ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu) * problem is swept under the rug; KVM's CPUID API is horrific and * it's all but impossible to solve it without introducing a new API. 
*/ - vcpu->arch.root_mmu.mmu_role.ext.valid =3D 0; - vcpu->arch.guest_mmu.mmu_role.ext.valid =3D 0; - vcpu->arch.nested_mmu.mmu_role.ext.valid =3D 0; + vcpu->arch.root_mmu.root_role.word =3D 0; + vcpu->arch.guest_mmu.root_role.word =3D 0; + vcpu->arch.nested_mmu.root_role.word =3D 0; vcpu->arch.root_mmu.cpu_role.ext.valid =3D 0; vcpu->arch.guest_mmu.cpu_role.ext.valid =3D 0; vcpu->arch.nested_mmu.cpu_role.ext.valid =3D 0; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 298e502286cf..24157f637bd7 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -988,7 +988,7 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, s= truct kvm_mmu *mmu, */ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) { - union kvm_mmu_page_role mmu_role =3D vcpu->arch.mmu->mmu_role.base; + union kvm_mmu_page_role root_role =3D vcpu->arch.mmu->root_role; int i; bool host_writable; gpa_t first_pte_gpa; @@ -1016,7 +1016,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, st= ruct kvm_mmu_page *sp) * reserved bits checks will be wrong, etc... */ if (WARN_ON_ONCE(sp->role.direct || - (sp->role.word ^ mmu_role.word) & ~sync_role_ign.word)) + (sp->role.word ^ root_role.word) & ~sync_role_ign.word)) return -1; =20 first_pte_gpa =3D FNAME(get_level1_sp_gpa)(sp); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index c472769e0300..bbd2a6dc8c20 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -310,7 +310,7 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *= child_sp, =20 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) { - union kvm_mmu_page_role role =3D vcpu->arch.mmu->mmu_role.base; + union kvm_mmu_page_role role =3D vcpu->arch.mmu->root_role; struct kvm *kvm =3D vcpu->kvm; struct kvm_mmu_page *root; =20 --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5664CC433F5 for ; Thu, 14 Apr 2022 07:42:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240505AbiDNHo2 (ORCPT ); Thu, 14 Apr 2022 03:44:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59244 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240508AbiDNHme (ORCPT ); Thu, 14 Apr 2022 03:42:34 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 4619040931 for ; Thu, 14 Apr 2022 00:40:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922010; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=6W0jJ8VdTbxWN4De3uGlaS+0Uv/tBAivAEXdHTu3Sa8=; b=aLfZwzmWS/6gj26uvvYhrr6eUG8ZEWEWnBP2s3VkL8ArkZnBWI9STCJmmnqpT9V/d7Fq4w D3D9z5jtGrsddPg6RYqc9ZjALQO+FQhSAnnDEWpPNeDkJEOBV004cRP3yc9izBaWH0lVY3 ycAGF0HvCwEdwcWttVLlRaV4NQIKtME= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-648-2va3GMacPDKOhjFRl1cAAA-1; Thu, 
14 Apr 2022 03:40:04 -0400 X-MC-Unique: 2va3GMacPDKOhjFRl1cAAA-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A075F811E76; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 844ACC28109; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 15/22] KVM: x86/mmu: rename kvm_mmu_role union Date: Thu, 14 Apr 2022 03:39:53 -0400 Message-Id: <20220414074000.31438-16-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" It is quite confusing that the "full" union is called kvm_mmu_role but is used for the "cpu_role" field of struct kvm_mmu. Rename it to kvm_cpu_role. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 6 +++--- arch/x86/kvm/mmu/mmu.c | 28 ++++++++++++++-------------- 2 files changed, 17 insertions(+), 17 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index c81221d03a1b..6bc5550ae530 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -281,7 +281,7 @@ struct kvm_kernel_irq_routing_entry; /* * kvm_mmu_page_role tracks the properties of a shadow page (where shadow = page * also includes TDP pages) to determine whether or not a page can be used= in - * the given MMU context. This is a subset of the overall kvm_mmu_role to + * the given MMU context. This is a subset of the overall kvm_cpu_role to * minimize the size of kvm_memory_slot.arch.gfn_track, i.e. allows alloca= ting * 2 bytes per gfn instead of 4 bytes per gfn. 
* @@ -378,7 +378,7 @@ union kvm_mmu_extended_role { }; }; =20 -union kvm_mmu_role { +union kvm_cpu_role { u64 as_u64; struct { union kvm_mmu_page_role base; @@ -438,7 +438,7 @@ struct kvm_mmu { struct kvm_mmu_page *sp); void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa); struct kvm_mmu_root_info root; - union kvm_mmu_role cpu_role; + union kvm_cpu_role cpu_role; union kvm_mmu_page_role root_role; u8 root_level; u8 shadow_root_level; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 13eb2d40e0a3..483a3761db81 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4683,10 +4683,10 @@ static void paging32_init_context(struct kvm_mmu *c= ontext) context->direct_map =3D false; } =20 -static union kvm_mmu_role +static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *r= egs) { - union kvm_mmu_role role =3D {0}; + union kvm_cpu_role role =3D {0}; =20 role.base.access =3D ACC_ALL; role.base.smm =3D is_smm(vcpu); @@ -4740,7 +4740,7 @@ static inline int kvm_mmu_get_tdp_level(struct kvm_vc= pu *vcpu) =20 static union kvm_mmu_page_role kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, - union kvm_mmu_role cpu_role) + union kvm_cpu_role cpu_role) { union kvm_mmu_page_role role =3D {0}; =20 @@ -4761,7 +4761,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; - union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); + union kvm_cpu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); union kvm_mmu_page_role root_role =3D kvm_calc_tdp_mmu_root_page_role(vcp= u, cpu_role); =20 if (cpu_role.as_u64 =3D=3D context->cpu_role.as_u64 && @@ -4793,7 +4793,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, =20 static union kvm_mmu_page_role kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, - union kvm_mmu_role cpu_role) + union kvm_cpu_role cpu_role) { union kvm_mmu_page_role role; =20 @@ -4819,7 +4819,7 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *v= cpu, } =20 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu = *context, - union kvm_mmu_role cpu_role, + union kvm_cpu_role cpu_role, union kvm_mmu_page_role root_role) { if (cpu_role.as_u64 =3D=3D context->cpu_role.as_u64 && @@ -4847,7 +4847,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; - union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); + union kvm_cpu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); union kvm_mmu_page_role root_role =3D kvm_calc_shadow_mmu_root_page_role(vcpu, cpu_role); =20 @@ -4856,7 +4856,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, =20 static union kvm_mmu_page_role kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu, - union kvm_mmu_role cpu_role) + union kvm_cpu_role cpu_role) { union kvm_mmu_page_role role; =20 @@ -4875,7 +4875,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u= nsigned long cr0, .cr4 =3D cr4 & ~X86_CR4_PKE, .efer =3D efer, }; - union kvm_mmu_role cpu_role =3D kvm_calc_cpu_role(vcpu, ®s); + union kvm_cpu_role cpu_role =3D kvm_calc_cpu_role(vcpu, ®s); union kvm_mmu_page_role root_role =3D kvm_calc_shadow_npt_root_page_role(= vcpu, cpu_role); =20 shadow_mmu_init_context(vcpu, context, cpu_role, root_role); @@ -4883,11 +4883,11 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu,= unsigned long cr0, } EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu); =20 
-static union kvm_mmu_role +static union kvm_cpu_role kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_di= rty, bool execonly, u8 level) { - union kvm_mmu_role role =3D {0}; + union kvm_cpu_role role =3D {0}; =20 /* * KVM does not support SMM transfer monitors, and consequently does not @@ -4914,7 +4914,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, b= ool execonly, { struct kvm_mmu *context =3D &vcpu->arch.guest_mmu; u8 level =3D vmx_eptp_page_walk_level(new_eptp); - union kvm_mmu_role new_mode =3D + union kvm_cpu_role new_mode =3D kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty, execonly, level); =20 @@ -4956,7 +4956,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu, static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) { - union kvm_mmu_role new_mode =3D kvm_calc_cpu_role(vcpu, regs); + union kvm_cpu_role new_mode =3D kvm_calc_cpu_role(vcpu, regs); struct kvm_mmu *g_context =3D &vcpu->arch.nested_mmu; =20 if (new_mode.as_u64 =3D=3D g_context->cpu_role.as_u64) @@ -6233,7 +6233,7 @@ int kvm_mmu_vendor_module_init(void) */ BUILD_BUG_ON(sizeof(union kvm_mmu_page_role) !=3D sizeof(u32)); BUILD_BUG_ON(sizeof(union kvm_mmu_extended_role) !=3D sizeof(u32)); - BUILD_BUG_ON(sizeof(union kvm_mmu_role) !=3D sizeof(u64)); + BUILD_BUG_ON(sizeof(union kvm_cpu_role) !=3D sizeof(u64)); =20 kvm_mmu_reset_all_pte_masks(); =20 --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBA6CC433EF for ; Thu, 14 Apr 2022 07:41:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240216AbiDNHoM (ORCPT ); Thu, 14 Apr 2022 03:44:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59264 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240479AbiDNHmc (ORCPT ); Thu, 14 Apr 2022 03:42:32 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 2FFF556752 for ; Thu, 14 Apr 2022 00:40:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922007; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=EYMQcCghnPOQ/f3khRU/yVpP8fA8iTyg2FpufS+FDDk=; b=PuDv3XGneiVXGsnC8KFISD9ZOV0G71tzlEmX47IZGC5lBlIH3HQDumN282KXGn17TZvQXd qPrDYe9jjZBkh1I0KgkvtGUOSjsZPKV86SU04LjvPMum1vfmj3FQ4GnCwnb2vYxAhU+HxW ouTJwkfNQqOqpzKND3pOpqgNp4hf3UE= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-320-1Y8ekmsmN3-wpETBycsWrg-1; Thu, 14 Apr 2022 03:40:04 -0400 X-MC-Unique: 1Y8ekmsmN3-wpETBycsWrg-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C42263C174D4; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com 
(virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id A7C9DC28109; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 16/22] KVM: x86/mmu: remove redundant bits from extended role Date: Thu, 14 Apr 2022 03:39:54 -0400 Message-Id: <20220414074000.31438-17-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Before the separation of the CPU and the MMU role, CR0.PG was not available in the base MMU role, because two-dimensional paging always used direct=3D1 in the MMU role. However, now that the raw role is snapshotted in mmu->cpu_role, CR0.PG *can* be found (though inverted) as !cpu_role.base.direct. There is no need to store it again in union kvm_mmu_extended_role; instead, write an is_cr0_pg accessor by hand that takes care of the inversion. Likewise, CR4.PAE is now always present in the CPU role as !cpu_role.base.has_4_byte_gpte. The inversion makes certain tests on the MMU role easier, and is easily hidden by the is_cr4_pae accessor when operating on the CPU role. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 2 -- arch/x86/kvm/mmu/mmu.c | 14 ++++++++++---- 2 files changed, 10 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 6bc5550ae530..52ceeadbed28 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -367,8 +367,6 @@ union kvm_mmu_extended_role { struct { unsigned int valid:1; unsigned int execonly:1; - unsigned int cr0_pg:1; - unsigned int cr4_pae:1; unsigned int cr4_pse:1; unsigned int cr4_pke:1; unsigned int cr4_smap:1; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 483a3761db81..cf8a41675a79 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -224,16 +224,24 @@ static inline bool __maybe_unused is_##reg##_##name(s= truct kvm_mmu *mmu) \ { \ return !!(mmu->cpu_role. base_or_ext . 
reg##_##name); \ } -BUILD_MMU_ROLE_ACCESSOR(ext, cr0, pg); BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse); -BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pae); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57); BUILD_MMU_ROLE_ACCESSOR(base, efer, nx); =20 +static inline bool is_cr0_pg(struct kvm_mmu *mmu) +{ + return !mmu->cpu_role.base.direct; +} + +static inline bool is_cr4_pae(struct kvm_mmu *mmu) +{ + return !mmu->cpu_role.base.has_4_byte_gpte; +} + static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu) { struct kvm_mmu_role_regs regs =3D { @@ -4712,8 +4720,6 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct= kvm_mmu_role_regs *regs) else role.base.level =3D PT32_ROOT_LEVEL; =20 - role.ext.cr0_pg =3D 1; - role.ext.cr4_pae =3D ____is_cr4_pae(regs); role.ext.cr4_smep =3D ____is_cr4_smep(regs); role.ext.cr4_smap =3D ____is_cr4_smap(regs); role.ext.cr4_pse =3D ____is_cr4_pse(regs); --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA05AC433FE for ; Thu, 14 Apr 2022 07:41:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240560AbiDNHnY (ORCPT ); Thu, 14 Apr 2022 03:43:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59148 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240459AbiDNHma (ORCPT ); Thu, 14 Apr 2022 03:42:30 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 3430456201 for ; Thu, 14 Apr 2022 00:40:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922005; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=eqvqj5T+LOq6yAGMZt//QgZxyQ9dWMtUkAV4xPeREAg=; b=gWDIljCKvo/KQUkxr5FoCBj6QIDHMXGDrEtRg9l9/lLw7qt9fZKrJk3qTR5TqbcQ6RI/Ef DCdoMnmRCI7Cb7vdN2kzntZjtsDk4oGDRwxjGSJC+q9ssFaI7cntzu9erj/VSXmEOj4Cwa AqAWlZO52Lap2XSQDgcy6V+N2xSV5to= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-583-MtFxEjoXPXSW_DG4jlReRg-1; Thu, 14 Apr 2022 03:40:04 -0400 X-MC-Unique: MtFxEjoXPXSW_DG4jlReRg-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E78EE38149B6; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id CB794C28109; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 17/22] KVM: x86/mmu: remove valid from extended role Date: Thu, 14 Apr 2022 03:39:55 -0400 Message-Id: 
<20220414074000.31438-18-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" The level and direct field of the CPU role can act as a marker for validity instead: exactly one of them is guaranteed to be nonzero, so a zero value for both means that the role is invalid and the MMU properties will be computed again. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 4 +--- arch/x86/kvm/mmu/mmu.c | 8 +++----- 2 files changed, 4 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 52ceeadbed28..1356959a2fe1 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -347,8 +347,7 @@ union kvm_mmu_page_role { * kvm_mmu_extended_role complements kvm_mmu_page_role, tracking properties * relevant to the current MMU configuration. When loading CR0, CR4, or = EFER, * including on nested transitions, if nothing in the full role changes th= en - * MMU re-configuration can be skipped. @valid bit is set on first usage s= o we - * don't treat all-zero structure as valid data. + * MMU re-configuration can be skipped. * * The properties that are tracked in the extended role but not the page r= ole * are for things that either (a) do not affect the validity of the shadow= page @@ -365,7 +364,6 @@ union kvm_mmu_page_role { union kvm_mmu_extended_role { u32 word; struct { - unsigned int valid:1; unsigned int execonly:1; unsigned int cr4_pse:1; unsigned int cr4_pke:1; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index cf8a41675a79..33827d1e3d5a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4699,7 +4699,6 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct= kvm_mmu_role_regs *regs) role.base.access =3D ACC_ALL; role.base.smm =3D is_smm(vcpu); role.base.guest_mode =3D is_guest_mode(vcpu); - role.ext.valid =3D 1; =20 if (!____is_cr0_pg(regs)) { role.base.direct =3D 1; @@ -4909,7 +4908,6 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *v= cpu, bool accessed_dirty, =20 role.ext.word =3D 0; role.ext.execonly =3D execonly; - role.ext.valid =3D 1; =20 return role; } @@ -5030,9 +5028,9 @@ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu) vcpu->arch.root_mmu.root_role.word =3D 0; vcpu->arch.guest_mmu.root_role.word =3D 0; vcpu->arch.nested_mmu.root_role.word =3D 0; - vcpu->arch.root_mmu.cpu_role.ext.valid =3D 0; - vcpu->arch.guest_mmu.cpu_role.ext.valid =3D 0; - vcpu->arch.nested_mmu.cpu_role.ext.valid =3D 0; + vcpu->arch.root_mmu.cpu_role.as_u64 =3D 0; + vcpu->arch.guest_mmu.cpu_role.as_u64 =3D 0; + vcpu->arch.nested_mmu.cpu_role.as_u64 =3D 0; kvm_mmu_reset_context(vcpu); =20 /* --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AEB19C433FE for ; Thu, 14 Apr 2022 07:42:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240670AbiDNHof (ORCPT ); Thu, 14 Apr 2022 03:44:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59304 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by 
vger.kernel.org with ESMTP id S240506AbiDNHme (ORCPT ); Thu, 14 Apr 2022 03:42:34 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id D6A02532F0 for ; Thu, 14 Apr 2022 00:40:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922010; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=fGx1NqlmPHgaS764h2CsqWVqQ4YK5QPPrGStgAvJ6cM=; b=aqIFWQW4cOqZzia1igi5ILUAkKnnjR3OTMeQ3a/g0J0CGy9OZ4oQBQw6qvAGljibJhog8f 2u5wHdmDEgYuXoMgOrNuyDjb1/POhsXc8E8Nsxwn6m6mJiywuowSCur31kL9q/33czOBt0 IJZJAz9gPgGaVTtT4U7ncklKvTWu/Gs= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-318-0PfYkateM8WahsqZCqsX_Q-1; Thu, 14 Apr 2022 03:40:04 -0400 X-MC-Unique: 0PfYkateM8WahsqZCqsX_Q-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 16D7580418F; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id EF236C28109; Thu, 14 Apr 2022 07:40:03 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 18/22] KVM: x86/mmu: simplify and/or inline computation of shadow MMU roles Date: Thu, 14 Apr 2022 03:39:56 -0400 Message-Id: <20220414074000.31438-19-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Shadow MMUs compute their role from cpu_role.base, simply by adjusting the root level. It's one line of code, so do not place it in a separate function. 
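[Editor's note, not part of the original mail: a minimal standalone sketch in plain C of the computation this patch inlines into kvm_init_shadow_mmu(). The demo_* struct and function names are invented stand-ins, not the real KVM types; the idea is simply "copy cpu_role.base, clamp the level to at least PAE, force efer_nx".]

/*
 * Illustration only: simplified stand-in for kvm_mmu_page_role and the
 * one-line role adjustment described above.  Compiles with any C compiler.
 */
#include <stdbool.h>
#include <stdio.h>

#define PT32_ROOT_LEVEL   2
#define PT32E_ROOT_LEVEL  3

struct demo_page_role {
	unsigned int level;
	bool direct;
	bool efer_nx;
};

/* Mirrors the inlined logic: adjust the root level, then force EFER.NX. */
static struct demo_page_role demo_shadow_root_role(struct demo_page_role cpu_base)
{
	struct demo_page_role root = cpu_base;

	/* Shadow paging uses PAE whenever the guest isn't using 64-bit paging. */
	if (root.level < PT32E_ROOT_LEVEL)
		root.level = PT32E_ROOT_LEVEL;

	/* EFER.NX is forced when TDP is disabled; see the comment in the diff. */
	root.efer_nx = true;
	return root;
}

int main(void)
{
	struct demo_page_role cpu = { .level = PT32_ROOT_LEVEL };
	struct demo_page_role root = demo_shadow_root_role(cpu);

	printf("guest level %u -> shadow root level %u, efer_nx=%d\n",
	       cpu.level, root.level, (int)root.efer_nx);
	return 0;
}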
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/mmu/mmu.c | 65 ++++++++++++++++-------------------------- 1 file changed, 24 insertions(+), 41 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 33827d1e3d5a..f22aa9970356 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -231,6 +231,7 @@ BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57); BUILD_MMU_ROLE_ACCESSOR(base, efer, nx); +BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma); =20 static inline bool is_cr0_pg(struct kvm_mmu *mmu) { @@ -4796,33 +4797,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, reset_tdp_shadow_zero_bits_mask(context); } =20 -static union kvm_mmu_page_role -kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, - union kvm_cpu_role cpu_role) -{ - union kvm_mmu_page_role role; - - role =3D cpu_role.base; - if (!cpu_role.ext.efer_lma) - role.level =3D PT32E_ROOT_LEVEL; - else if (cpu_role.ext.cr4_la57) - role.level =3D PT64_ROOT_5LEVEL; - else - role.level =3D PT64_ROOT_4LEVEL; - - /* - * KVM forces EFER.NX=3D1 when TDP is disabled, reflect it in the MMU rol= e. - * KVM uses NX when TDP is disabled to handle a variety of scenarios, - * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and - * to generate correct permissions for CR0.WP=3D0/CR4.SMEP=3D1/EFER.NX=3D= 0. - * The iTLB multi-hit workaround can be toggled at any time, so assume - * NX can be used by any non-nested shadow MMU to avoid having to reset - * MMU contexts. - */ - role.efer_nx =3D true; - return role; -} - static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu = *context, union kvm_cpu_role cpu_role, union kvm_mmu_page_role root_role) @@ -4853,22 +4827,25 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vc= pu, { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; union kvm_cpu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); - union kvm_mmu_page_role root_role =3D - kvm_calc_shadow_mmu_root_page_role(vcpu, cpu_role); + union kvm_mmu_page_role root_role; =20 - shadow_mmu_init_context(vcpu, context, cpu_role, root_role); -} + root_role =3D cpu_role.base; =20 -static union kvm_mmu_page_role -kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu, - union kvm_cpu_role cpu_role) -{ - union kvm_mmu_page_role role; + /* KVM uses PAE paging whenever the guest isn't using 64-bit paging. */ + root_role.level =3D max_t(u32, root_role.level, PT32E_ROOT_LEVEL); =20 - WARN_ON_ONCE(cpu_role.base.direct); - role =3D cpu_role.base; - role.level =3D kvm_mmu_get_tdp_level(vcpu); - return role; + /* + * KVM forces EFER.NX=3D1 when TDP is disabled, reflect it in the MMU rol= e. + * KVM uses NX when TDP is disabled to handle a variety of scenarios, + * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and + * to generate correct permissions for CR0.WP=3D0/CR4.SMEP=3D1/EFER.NX=3D= 0. + * The iTLB multi-hit workaround can be toggled at any time, so assume + * NX can be used by any non-nested shadow MMU to avoid having to reset + * MMU contexts. 
+ */ + root_role.efer_nx =3D true; + + shadow_mmu_init_context(vcpu, context, cpu_role, root_role); } =20 void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, @@ -4881,7 +4858,13 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, = unsigned long cr0, .efer =3D efer, }; union kvm_cpu_role cpu_role =3D kvm_calc_cpu_role(vcpu, ®s); - union kvm_mmu_page_role root_role =3D kvm_calc_shadow_npt_root_page_role(= vcpu, cpu_role); + union kvm_mmu_page_role root_role; + + /* NPT requires CR0.PG=3D1. */ + WARN_ON_ONCE(cpu_role.base.direct); + + root_role =3D cpu_role.base; + root_role.level =3D kvm_mmu_get_tdp_level(vcpu); =20 shadow_mmu_init_context(vcpu, context, cpu_role, root_role); kvm_mmu_new_pgd(vcpu, nested_cr3); --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23234C433F5 for ; Thu, 14 Apr 2022 07:42:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240728AbiDNHoq (ORCPT ); Thu, 14 Apr 2022 03:44:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240502AbiDNHme (ORCPT ); Thu, 14 Apr 2022 03:42:34 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 09E4356C11 for ; Thu, 14 Apr 2022 00:40:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922009; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=EWcpCvivQTVSzyazYGHjZpqxx/Mo5fOAnGDba/85XNc=; b=WLhItxs3jajTh81AWh1QdgLA+OQTNv33+wHkmCM6YrqEfrxAdz5en+HIsxPKnPl4m/J5G8 xxfJFgx0efPpM4gLHeKTDlNHKe6kfxaJpCRUMlfwUqvjy9IrTtNZ7nqqXVtvTMnr4R+8sw ZNG40MwJ8NNgqayVKN9zUHdh+vo5CLE= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-658-Fjn9UIIXP0GhGrm9QXzvxA-1; Thu, 14 Apr 2022 03:40:04 -0400 X-MC-Unique: Fjn9UIIXP0GhGrm9QXzvxA-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3A2DC19705DF; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 1E5C1C28109; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 19/22] KVM: x86/mmu: pull CPU mode computation to kvm_init_mmu Date: Thu, 14 Apr 2022 03:39:57 -0400 Message-Id: <20220414074000.31438-20-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: 
linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Do not lead init_kvm_*mmu into the temptation of poking into struct kvm_mmu_role_regs, by passing to it directly the CPU mode. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/mmu/mmu.c | 20 +++++++++----------- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f22aa9970356..b75e50f3a025 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4764,10 +4764,9 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcp= u, } =20 static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) + union kvm_cpu_role cpu_role) { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; - union kvm_cpu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); union kvm_mmu_page_role root_role =3D kvm_calc_tdp_mmu_root_page_role(vcp= u, cpu_role); =20 if (cpu_role.as_u64 =3D=3D context->cpu_role.as_u64 && @@ -4823,10 +4822,9 @@ static void shadow_mmu_init_context(struct kvm_vcpu = *vcpu, struct kvm_mmu *conte } =20 static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) + union kvm_cpu_role cpu_role) { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; - union kvm_cpu_role cpu_role =3D kvm_calc_cpu_role(vcpu, regs); union kvm_mmu_page_role root_role; =20 root_role =3D cpu_role.base; @@ -4929,11 +4927,11 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu,= bool execonly, EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu); =20 static void init_kvm_softmmu(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) + union kvm_cpu_role cpu_role) { struct kvm_mmu *context =3D &vcpu->arch.root_mmu; =20 - kvm_init_shadow_mmu(vcpu, regs); + kvm_init_shadow_mmu(vcpu, cpu_role); =20 context->get_guest_pgd =3D get_cr3; context->get_pdptr =3D kvm_pdptr_read; @@ -4941,9 +4939,8 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu, } =20 static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, - const struct kvm_mmu_role_regs *regs) + union kvm_cpu_role new_mode) { - union kvm_cpu_role new_mode =3D kvm_calc_cpu_role(vcpu, regs); struct kvm_mmu *g_context =3D &vcpu->arch.nested_mmu; =20 if (new_mode.as_u64 =3D=3D g_context->cpu_role.as_u64) @@ -4984,13 +4981,14 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vc= pu, void kvm_init_mmu(struct kvm_vcpu *vcpu) { struct kvm_mmu_role_regs regs =3D vcpu_to_role_regs(vcpu); + union kvm_cpu_role cpu_role =3D kvm_calc_cpu_role(vcpu, ®s); =20 if (mmu_is_nested(vcpu)) - init_kvm_nested_mmu(vcpu, ®s); + init_kvm_nested_mmu(vcpu, cpu_role); else if (tdp_enabled) - init_kvm_tdp_mmu(vcpu, ®s); + init_kvm_tdp_mmu(vcpu, cpu_role); else - init_kvm_softmmu(vcpu, ®s); + init_kvm_softmmu(vcpu, cpu_role); } EXPORT_SYMBOL_GPL(kvm_init_mmu); =20 --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93C3DC433F5 for ; Thu, 14 Apr 2022 07:42:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232953AbiDNHot (ORCPT ); Thu, 14 Apr 2022 03:44:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59294 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240504AbiDNHme (ORCPT ); Thu, 14 Apr 2022 03:42:34 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com 
[170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id D653D56C04 for ; Thu, 14 Apr 2022 00:40:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922008; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=6mPRUfyEDDJKDpTxdJi6hDfR8SVuVOP1DXWAh1RdB4k=; b=NdvYHm3JR0UR8XbvHp/wyOwcOBmKvH1/H/u0RMeQflJ6RLY7oYu19aiccj3FwNVePgB7KI 5EOgXdSNe51fM2FXSoGPQytKKkwUU5xL6GNEyKnQEV5plyfk7VSOMMucALMLomnmkM5ThA TYtAdY8E3W+fEXfK9cHm2XzICzrc7V4= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-658-6clDWQI8PW6YtY1lUOk36g-1; Thu, 14 Apr 2022 03:40:04 -0400 X-MC-Unique: 6clDWQI8PW6YtY1lUOk36g-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 741A21C06920; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5698B7B47; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 20/22] KVM: x86/mmu: replace shadow_root_level with root_role.level Date: Thu, 14 Apr 2022 03:39:58 -0400 Message-Id: <20220414074000.31438-21-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" root_role.level is always the same value as shadow_level: - it's kvm_mmu_get_tdp_level(vcpu) when going through init_kvm_tdp_mmu - it's the level argument when going through kvm_init_shadow_ept_mmu - it's assigned directly from new_role.base.level when going through shadow_mmu_init_context Remove the duplication and get the level directly from the role. 
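[Editor's note, not part of the original mail: the consolidation can be pictured with a tiny standalone sketch, using invented demo_* names rather than the real struct kvm_mmu layout. Once the level lives only in root_role, every reader goes through the role and there is no second copy to keep in sync.]

#include <stdio.h>

#define PT64_ROOT_4LEVEL 4

/* Invented stand-ins; the real structures live in arch/x86/include/asm/kvm_host.h. */
struct demo_page_role {
	unsigned int level;
};

struct demo_mmu {
	struct demo_page_role root_role;
	/* no separate shadow_root_level field: root_role.level is the only copy */
};

static unsigned int demo_shadow_root_level(const struct demo_mmu *mmu)
{
	return mmu->root_role.level;
}

int main(void)
{
	struct demo_mmu mmu = { .root_role = { .level = PT64_ROOT_4LEVEL } };

	printf("shadow root level: %u\n", demo_shadow_root_level(&mmu));
	return 0;
}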
Reviewed-by: Sean Christopherson Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/mmu.h | 2 +- arch/x86/kvm/mmu/mmu.c | 33 ++++++++++++++------------------- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- arch/x86/kvm/svm/svm.c | 2 +- arch/x86/kvm/vmx/vmx.c | 2 +- 6 files changed, 18 insertions(+), 24 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 1356959a2fe1..11055a213a1b 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -437,7 +437,6 @@ struct kvm_mmu { union kvm_cpu_role cpu_role; union kvm_mmu_page_role root_role; u8 root_level; - u8 shadow_root_level; bool direct_map; =20 /* diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index e6cae6f22683..671cfeccf04e 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -114,7 +114,7 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vc= pu) return; =20 static_call(kvm_x86_load_mmu_pgd)(vcpu, root_hpa, - vcpu->arch.mmu->shadow_root_level); + vcpu->arch.mmu->root_role.level); } =20 struct kvm_page_fault { diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b75e50f3a025..54cc033e0646 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2129,7 +2129,7 @@ static void shadow_walk_init_using_root(struct kvm_sh= adow_walk_iterator *iterato { iterator->addr =3D addr; iterator->shadow_addr =3D root; - iterator->level =3D vcpu->arch.mmu->shadow_root_level; + iterator->level =3D vcpu->arch.mmu->root_role.level; =20 if (iterator->level >=3D PT64_ROOT_4LEVEL && vcpu->arch.mmu->root_level < PT64_ROOT_4LEVEL && @@ -3324,7 +3324,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gf= n_t gfn, gva_t gva, static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) { struct kvm_mmu *mmu =3D vcpu->arch.mmu; - u8 shadow_root_level =3D mmu->shadow_root_level; + u8 shadow_root_level =3D mmu->root_role.level; hpa_t root; unsigned i; int r; @@ -3474,7 +3474,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vc= pu) */ if (mmu->root_level >=3D PT64_ROOT_4LEVEL) { root =3D mmu_alloc_root(vcpu, root_gfn, 0, - mmu->shadow_root_level, false); + mmu->root_role.level, false); mmu->root.hpa =3D root; goto set_root_pgd; } @@ -3490,7 +3490,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vc= pu) * the shadow page table may be a PAE or a long mode page table. 
*/ pm_mask =3D PT_PRESENT_MASK | shadow_me_mask; - if (mmu->shadow_root_level >=3D PT64_ROOT_4LEVEL) { + if (mmu->root_role.level >=3D PT64_ROOT_4LEVEL) { pm_mask |=3D PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK; =20 if (WARN_ON_ONCE(!mmu->pml4_root)) { @@ -3499,7 +3499,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vc= pu) } mmu->pml4_root[0] =3D __pa(mmu->pae_root) | pm_mask; =20 - if (mmu->shadow_root_level =3D=3D PT64_ROOT_5LEVEL) { + if (mmu->root_role.level =3D=3D PT64_ROOT_5LEVEL) { if (WARN_ON_ONCE(!mmu->pml5_root)) { r =3D -EIO; goto out_unlock; @@ -3524,9 +3524,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vc= pu) mmu->pae_root[i] =3D root | pm_mask; } =20 - if (mmu->shadow_root_level =3D=3D PT64_ROOT_5LEVEL) + if (mmu->root_role.level =3D=3D PT64_ROOT_5LEVEL) mmu->root.hpa =3D __pa(mmu->pml5_root); - else if (mmu->shadow_root_level =3D=3D PT64_ROOT_4LEVEL) + else if (mmu->root_role.level =3D=3D PT64_ROOT_4LEVEL) mmu->root.hpa =3D __pa(mmu->pml4_root); else mmu->root.hpa =3D __pa(mmu->pae_root); @@ -3542,7 +3542,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vc= pu) static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu) { struct kvm_mmu *mmu =3D vcpu->arch.mmu; - bool need_pml5 =3D mmu->shadow_root_level > PT64_ROOT_4LEVEL; + bool need_pml5 =3D mmu->root_role.level > PT64_ROOT_4LEVEL; u64 *pml5_root =3D NULL; u64 *pml4_root =3D NULL; u64 *pae_root; @@ -3554,7 +3554,7 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *v= cpu) * on demand, as running a 32-bit L1 VMM on 64-bit KVM is very rare. */ if (mmu->direct_map || mmu->root_level >=3D PT64_ROOT_4LEVEL || - mmu->shadow_root_level < PT64_ROOT_4LEVEL) + mmu->root_role.level < PT64_ROOT_4LEVEL) return 0; =20 /* @@ -4446,18 +4446,18 @@ static void reset_shadow_zero_bits_mask(struct kvm_= vcpu *vcpu, struct rsvd_bits_validate *shadow_zero_check; int i; =20 - WARN_ON_ONCE(context->shadow_root_level < PT32E_ROOT_LEVEL); + WARN_ON_ONCE(context->root_role.level < PT32E_ROOT_LEVEL); =20 shadow_zero_check =3D &context->shadow_zero_check; __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), - context->shadow_root_level, + context->root_role.level, context->root_role.efer_nx, guest_can_use_gbpages(vcpu), is_pse, is_amd); =20 if (!shadow_me_mask) return; =20 - for (i =3D context->shadow_root_level; --i >=3D 0;) { + for (i =3D context->root_role.level; --i >=3D 0;) { shadow_zero_check->rsvd_bits_mask[0][i] &=3D ~shadow_me_mask; shadow_zero_check->rsvd_bits_mask[1][i] &=3D ~shadow_me_mask; } @@ -4484,7 +4484,7 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *conte= xt) =20 if (boot_cpu_is_amd()) __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), - context->shadow_root_level, false, + context->root_role.level, false, boot_cpu_has(X86_FEATURE_GBPAGES), false, true); else @@ -4495,7 +4495,7 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *conte= xt) if (!shadow_me_mask) return; =20 - for (i =3D context->shadow_root_level; --i >=3D 0;) { + for (i =3D context->root_role.level; --i >=3D 0;) { shadow_zero_check->rsvd_bits_mask[0][i] &=3D ~shadow_me_mask; shadow_zero_check->rsvd_bits_mask[1][i] &=3D ~shadow_me_mask; } @@ -4778,7 +4778,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, context->page_fault =3D kvm_tdp_page_fault; context->sync_page =3D nonpaging_sync_page; context->invlpg =3D NULL; - context->shadow_root_level =3D kvm_mmu_get_tdp_level(vcpu); context->direct_map =3D true; context->get_guest_pgd =3D get_cr3; context->get_pdptr =3D kvm_pdptr_read; @@ -4816,8 
+4815,6 @@ static void shadow_mmu_init_context(struct kvm_vcpu *= vcpu, struct kvm_mmu *conte context->root_level =3D cpu_role.base.level; =20 reset_guest_paging_metadata(vcpu, context); - context->shadow_root_level =3D root_role.level; - reset_shadow_zero_bits_mask(vcpu, context); } =20 @@ -4908,8 +4905,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, b= ool execonly, context->cpu_role.as_u64 =3D new_mode.as_u64; context->root_role.word =3D new_mode.base.word; =20 - context->shadow_root_level =3D level; - context->page_fault =3D ept_page_fault; context->gva_to_gpa =3D ept_gva_to_gpa; context->sync_page =3D ept_sync_page; diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index bbd2a6dc8c20..566548a3efa7 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1834,7 +1834,7 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 a= ddr, u64 *sptes, gfn_t gfn =3D addr >> PAGE_SHIFT; int leaf =3D -1; =20 - *root_level =3D vcpu->arch.mmu->shadow_root_level; + *root_level =3D vcpu->arch.mmu->root_role.level; =20 tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) { leaf =3D iter.level; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 22bbd69495ad..fc1725b7d05f 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3950,7 +3950,7 @@ static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, h= pa_t root_hpa, hv_track_root_tdp(vcpu, root_hpa); =20 cr3 =3D vcpu->arch.cr3; - } else if (vcpu->arch.mmu->shadow_root_level >=3D PT64_ROOT_4LEVEL) { + } else if (vcpu->arch.mmu->root_role.level >=3D PT64_ROOT_4LEVEL) { cr3 =3D __sme_set(root_hpa) | kvm_get_active_pcid(vcpu); } else { /* PCID in the guest should be impossible with a 32-bit MMU. */ diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index df0b70ccd289..cf8581978bce 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2948,7 +2948,7 @@ static void vmx_flush_tlb_current(struct kvm_vcpu *vc= pu) =20 if (enable_ept) ept_sync_context(construct_eptp(vcpu, root_hpa, - mmu->shadow_root_level)); + mmu->root_role.level)); else vpid_sync_context(vmx_get_current_vpid(vcpu)); } --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 602F7C433EF for ; Thu, 14 Apr 2022 07:41:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234224AbiDNHoC (ORCPT ); Thu, 14 Apr 2022 03:44:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240488AbiDNHmd (ORCPT ); Thu, 14 Apr 2022 03:42:33 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 2FD6A541B8 for ; Thu, 14 Apr 2022 00:40:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922007; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=wPd6WmAB+t2TGwAjwqQzAiGdMhNaFBOyw9AqxFcIvt8=; b=UTQyOXKnPLpMfdE71l+c94IkyIoR2rrh3XnRh129R8mXi+V2yc5DZgbgqz5daJeWmq/Gex 
ghXSOdbnR3O1EDI2tyDfoxOCAX60dhdnvmN3Ua4m6g7y+NBUXdXumzuJzfCJSTTfPt04sg K3fzC6XkopRsKnplBL7bpUEOUy29fk0= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-157-LZ3vH8V9OuKXva30gQ1-KQ-1; Thu, 14 Apr 2022 03:40:05 -0400 X-MC-Unique: LZ3vH8V9OuKXva30gQ1-KQ-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 99004296A61A; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 7C6FF7B47; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 21/22] KVM: x86/mmu: replace root_level with cpu_role.base.level Date: Thu, 14 Apr 2022 03:39:59 -0400 Message-Id: <20220414074000.31438-22-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Remove another duplicate field of struct kvm_mmu. This time it's the root level for page table walking; the separate field is always initialized as cpu_role.base.level, so its users can look up the CPU mode directly instead. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/mmu/mmu.c | 18 +++++++----------- arch/x86/kvm/mmu/paging_tmpl.h | 4 ++-- 3 files changed, 9 insertions(+), 14 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 11055a213a1b..3e3fcffe1b88 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -436,7 +436,6 @@ struct kvm_mmu { struct kvm_mmu_root_info root; union kvm_cpu_role cpu_role; union kvm_mmu_page_role root_role; - u8 root_level; bool direct_map; =20 /* diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 54cc033e0646..507fcf3a5080 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2132,7 +2132,7 @@ static void shadow_walk_init_using_root(struct kvm_sh= adow_walk_iterator *iterato iterator->level =3D vcpu->arch.mmu->root_role.level; =20 if (iterator->level >=3D PT64_ROOT_4LEVEL && - vcpu->arch.mmu->root_level < PT64_ROOT_4LEVEL && + vcpu->arch.mmu->cpu_role.base.level < PT64_ROOT_4LEVEL && !vcpu->arch.mmu->direct_map) iterator->level =3D PT32E_ROOT_LEVEL; =20 @@ -3448,7 +3448,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vc= pu) * On SVM, reading PDPTRs might access guest memory, which might fault * and thus might sleep. Grab the PDPTRs before acquiring mmu_lock. */ - if (mmu->root_level =3D=3D PT32E_ROOT_LEVEL) { + if (mmu->cpu_role.base.level =3D=3D PT32E_ROOT_LEVEL) { for (i =3D 0; i < 4; ++i) { pdptrs[i] =3D mmu->get_pdptr(vcpu, i); if (!(pdptrs[i] & PT_PRESENT_MASK)) @@ -3472,7 +3472,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vc= pu) * Do we shadow a long mode page table? If so we need to * write-protect the guests page table root. 
*/ - if (mmu->root_level >=3D PT64_ROOT_4LEVEL) { + if (mmu->cpu_role.base.level >=3D PT64_ROOT_4LEVEL) { root =3D mmu_alloc_root(vcpu, root_gfn, 0, mmu->root_role.level, false); mmu->root.hpa =3D root; @@ -3511,7 +3511,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vc= pu) for (i =3D 0; i < 4; ++i) { WARN_ON_ONCE(IS_VALID_PAE_ROOT(mmu->pae_root[i])); =20 - if (mmu->root_level =3D=3D PT32E_ROOT_LEVEL) { + if (mmu->cpu_role.base.level =3D=3D PT32E_ROOT_LEVEL) { if (!(pdptrs[i] & PT_PRESENT_MASK)) { mmu->pae_root[i] =3D INVALID_PAE_ROOT; continue; @@ -3553,7 +3553,7 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *v= cpu) * equivalent level in the guest's NPT to shadow. Allocate the tables * on demand, as running a 32-bit L1 VMM on 64-bit KVM is very rare. */ - if (mmu->direct_map || mmu->root_level >=3D PT64_ROOT_4LEVEL || + if (mmu->direct_map || mmu->cpu_role.base.level >=3D PT64_ROOT_4LEVEL || mmu->root_role.level < PT64_ROOT_4LEVEL) return 0; =20 @@ -3658,7 +3658,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu) =20 vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY); =20 - if (vcpu->arch.mmu->root_level >=3D PT64_ROOT_4LEVEL) { + if (vcpu->arch.mmu->cpu_role.base.level >=3D PT64_ROOT_4LEVEL) { hpa_t root =3D vcpu->arch.mmu->root.hpa; sp =3D to_shadow_page(root); =20 @@ -4374,7 +4374,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vc= pu, { __reset_rsvds_bits_mask(&context->guest_rsvd_check, vcpu->arch.reserved_gpa_bits, - context->root_level, is_efer_nx(context), + context->cpu_role.base.level, is_efer_nx(context), guest_can_use_gbpages(vcpu), is_cr4_pse(context), guest_cpuid_is_amd_or_hygon(vcpu)); @@ -4782,7 +4782,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, context->get_guest_pgd =3D get_cr3; context->get_pdptr =3D kvm_pdptr_read; context->inject_page_fault =3D kvm_inject_page_fault; - context->root_level =3D cpu_role.base.level; =20 if (!is_cr0_pg(context)) context->gva_to_gpa =3D nonpaging_gva_to_gpa; @@ -4812,7 +4811,6 @@ static void shadow_mmu_init_context(struct kvm_vcpu *= vcpu, struct kvm_mmu *conte paging64_init_context(context); else paging32_init_context(context); - context->root_level =3D cpu_role.base.level; =20 reset_guest_paging_metadata(vcpu, context); reset_shadow_zero_bits_mask(vcpu, context); @@ -4909,7 +4907,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, b= ool execonly, context->gva_to_gpa =3D ept_gva_to_gpa; context->sync_page =3D ept_sync_page; context->invlpg =3D ept_invlpg; - context->root_level =3D level; context->direct_map =3D false; update_permission_bitmask(context, true); context->pkru_mask =3D 0; @@ -4945,7 +4942,6 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, g_context->get_guest_pgd =3D get_cr3; g_context->get_pdptr =3D kvm_pdptr_read; g_context->inject_page_fault =3D kvm_inject_page_fault; - g_context->root_level =3D new_mode.base.level; =20 /* * L2 page tables are never shadowed, so there is no need to sync diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 24157f637bd7..66f1acf153c4 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -319,7 +319,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker= *walker, =20 trace_kvm_mmu_pagetable_walk(addr, access); retry_walk: - walker->level =3D mmu->root_level; + walker->level =3D mmu->cpu_role.base.level; pte =3D mmu->get_guest_pgd(vcpu); have_ad =3D PT_HAVE_ACCESSED_DIRTY(mmu); =20 @@ -621,7 +621,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct k= vm_page_fault *fault, 
WARN_ON_ONCE(gw->gfn !=3D base_gfn); direct_access =3D gw->pte_access; =20 - top_level =3D vcpu->arch.mmu->root_level; + top_level =3D vcpu->arch.mmu->cpu_role.base.level; if (top_level =3D=3D PT32E_ROOT_LEVEL) top_level =3D PT32_ROOT_LEVEL; /* --=20 2.31.1 From nobody Sat May 18 15:08:16 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7BE0C433EF for ; Thu, 14 Apr 2022 07:42:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233340AbiDNHon (ORCPT ); Thu, 14 Apr 2022 03:44:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59298 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240501AbiDNHme (ORCPT ); Thu, 14 Apr 2022 03:42:34 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A74C557146 for ; Thu, 14 Apr 2022 00:40:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649922008; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=0BwCNQ8VtdLWkqO+3jKpyvN6wYwYXqjBMFCI2If12kA=; b=hdDL94D1BcepCrYoSgaYxM7Ym8pPx034FnOAT6NR2KqrXrq47IemD4QEPd0ZAWdYz7C+Mj UH3fwPD13R+/6mce8P+PanZa1j2AtR0BYCnVWDFdA9Nkcgqo0EtASiaVilPW6/0AJHjfZ2 YodQDTVBxe5dAKI/JTOXT9AkylB7z1Q= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-167-4u_Zl_UaPt-_9mYhdrnRLg-1; Thu, 14 Apr 2022 03:40:05 -0400 X-MC-Unique: 4u_Zl_UaPt-_9mYhdrnRLg-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BE4FF804191; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id A148B7AC3; Thu, 14 Apr 2022 07:40:04 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com Subject: [PATCH 22/22] KVM: x86/mmu: replace direct_map with root_role.direct Date: Thu, 14 Apr 2022 03:40:00 -0400 Message-Id: <20220414074000.31438-23-pbonzini@redhat.com> In-Reply-To: <20220414074000.31438-1-pbonzini@redhat.com> References: <20220414074000.31438-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" direct_map is always equal to the direct field of the root page's role: - for shadow paging, direct_map is true if CR0.PG=3D0 and root_role.direct = is copied from cpu_role.base.direct - for TDP, it is always true and root_role.direct is also always true - for shadow TDP, it is always false and root_role.direct is also always false Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/mmu/mmu.c | 27 
++++++++++++--------------- arch/x86/kvm/x86.c | 12 ++++++------ 3 files changed, 18 insertions(+), 22 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 3e3fcffe1b88..2c20f715f009 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -436,7 +436,6 @@ struct kvm_mmu { struct kvm_mmu_root_info root; union kvm_cpu_role cpu_role; union kvm_mmu_page_role root_role; - bool direct_map; =20 /* * The pkru_mask indicates if protection key checks are needed. It diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 507fcf3a5080..69a30d6d1e2b 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2028,7 +2028,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct k= vm_vcpu *vcpu, int direct, unsigned int access) { - bool direct_mmu =3D vcpu->arch.mmu->direct_map; + bool direct_mmu =3D vcpu->arch.mmu->root_role.direct; union kvm_mmu_page_role role; struct hlist_head *sp_list; unsigned quadrant; @@ -2133,7 +2133,7 @@ static void shadow_walk_init_using_root(struct kvm_sh= adow_walk_iterator *iterato =20 if (iterator->level >=3D PT64_ROOT_4LEVEL && vcpu->arch.mmu->cpu_role.base.level < PT64_ROOT_4LEVEL && - !vcpu->arch.mmu->direct_map) + !vcpu->arch.mmu->root_role.direct) iterator->level =3D PT32E_ROOT_LEVEL; =20 if (iterator->level =3D=3D PT32E_ROOT_LEVEL) { @@ -2515,7 +2515,7 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcp= u *vcpu, gva_t gva) gpa_t gpa; int r; =20 - if (vcpu->arch.mmu->direct_map) + if (vcpu->arch.mmu->root_role.direct) return 0; =20 gpa =3D kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL); @@ -3553,7 +3553,8 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *v= cpu) * equivalent level in the guest's NPT to shadow. Allocate the tables * on demand, as running a 32-bit L1 VMM on 64-bit KVM is very rare. 
*/ - if (mmu->direct_map || mmu->cpu_role.base.level >=3D PT64_ROOT_4LEVEL || + if (mmu->root_role.direct || + mmu->cpu_role.base.level >=3D PT64_ROOT_4LEVEL || mmu->root_role.level < PT64_ROOT_4LEVEL) return 0; =20 @@ -3650,7 +3651,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu) int i; struct kvm_mmu_page *sp; =20 - if (vcpu->arch.mmu->direct_map) + if (vcpu->arch.mmu->root_role.direct) return; =20 if (!VALID_PAGE(vcpu->arch.mmu->root.hpa)) @@ -3880,7 +3881,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *= vcpu, gpa_t cr2_or_gpa, =20 arch.token =3D alloc_apf_token(vcpu); arch.gfn =3D gfn; - arch.direct_map =3D vcpu->arch.mmu->direct_map; + arch.direct_map =3D vcpu->arch.mmu->root_role.direct; arch.cr3 =3D vcpu->arch.mmu->get_guest_pgd(vcpu); =20 return kvm_setup_async_pf(vcpu, cr2_or_gpa, @@ -4098,7 +4099,6 @@ static void nonpaging_init_context(struct kvm_mmu *co= ntext) context->gva_to_gpa =3D nonpaging_gva_to_gpa; context->sync_page =3D nonpaging_sync_page; context->invlpg =3D NULL; - context->direct_map =3D true; } =20 static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pg= d, @@ -4680,7 +4680,6 @@ static void paging64_init_context(struct kvm_mmu *con= text) context->gva_to_gpa =3D paging64_gva_to_gpa; context->sync_page =3D paging64_sync_page; context->invlpg =3D paging64_invlpg; - context->direct_map =3D false; } =20 static void paging32_init_context(struct kvm_mmu *context) @@ -4689,7 +4688,6 @@ static void paging32_init_context(struct kvm_mmu *con= text) context->gva_to_gpa =3D paging32_gva_to_gpa; context->sync_page =3D paging32_sync_page; context->invlpg =3D paging32_invlpg; - context->direct_map =3D false; } =20 static union kvm_cpu_role @@ -4778,7 +4776,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, context->page_fault =3D kvm_tdp_page_fault; context->sync_page =3D nonpaging_sync_page; context->invlpg =3D NULL; - context->direct_map =3D true; context->get_guest_pgd =3D get_cr3; context->get_pdptr =3D kvm_pdptr_read; context->inject_page_fault =3D kvm_inject_page_fault; @@ -4907,7 +4904,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, b= ool execonly, context->gva_to_gpa =3D ept_gva_to_gpa; context->sync_page =3D ept_sync_page; context->invlpg =3D ept_invlpg; - context->direct_map =3D false; + update_permission_bitmask(context, true); context->pkru_mask =3D 0; reset_rsvds_bits_mask_ept(vcpu, context, execonly, huge_page_level); @@ -5023,13 +5020,13 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu) { int r; =20 - r =3D mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->direct_map); + r =3D mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->root_role.direct); if (r) goto out; r =3D mmu_alloc_special_roots(vcpu); if (r) goto out; - if (vcpu->arch.mmu->direct_map) + if (vcpu->arch.mmu->root_role.direct) r =3D mmu_alloc_direct_roots(vcpu); else r =3D mmu_alloc_shadow_roots(vcpu); @@ -5286,7 +5283,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t c= r2_or_gpa, u64 error_code, void *insn, int insn_len) { int r, emulation_type =3D EMULTYPE_PF; - bool direct =3D vcpu->arch.mmu->direct_map; + bool direct =3D vcpu->arch.mmu->root_role.direct; =20 if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa))) return RET_PF_RETRY; @@ -5317,7 +5314,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t c= r2_or_gpa, u64 error_code, * paging in both guests. If true, we simply unprotect the page * and resume the guest. 
*/ - if (vcpu->arch.mmu->direct_map && + if (vcpu->arch.mmu->root_role.direct && (error_code & PFERR_NESTED_GUEST_PAGE) =3D=3D PFERR_NESTED_GUEST_PAGE= ) { kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)); return 1; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 9866853ca320..ab336f7c82e4 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8101,7 +8101,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vc= pu, gpa_t cr2_or_gpa, WARN_ON_ONCE(!(emulation_type & EMULTYPE_PF))) return false; =20 - if (!vcpu->arch.mmu->direct_map) { + if (!vcpu->arch.mmu->root_role.direct) { /* * Write permission should be allowed since only * write access need to be emulated. @@ -8134,7 +8134,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vc= pu, gpa_t cr2_or_gpa, kvm_release_pfn_clean(pfn); =20 /* The instructions are well-emulated on direct mmu. */ - if (vcpu->arch.mmu->direct_map) { + if (vcpu->arch.mmu->root_role.direct) { unsigned int indirect_shadow_pages; =20 write_lock(&vcpu->kvm->mmu_lock); @@ -8202,7 +8202,7 @@ static bool retry_instruction(struct x86_emulate_ctxt= *ctxt, vcpu->arch.last_retry_eip =3D ctxt->eip; vcpu->arch.last_retry_addr =3D cr2_or_gpa; =20 - if (!vcpu->arch.mmu->direct_map) + if (!vcpu->arch.mmu->root_role.direct) gpa =3D kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL); =20 kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)); @@ -8482,7 +8482,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gp= a_t cr2_or_gpa, ctxt->exception.address =3D cr2_or_gpa; =20 /* With shadow page tables, cr2 contains a GVA or nGPA. */ - if (vcpu->arch.mmu->direct_map) { + if (vcpu->arch.mmu->root_role.direct) { ctxt->gpa_available =3D true; ctxt->gpa_val =3D cr2_or_gpa; } @@ -12360,7 +12360,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcp= u, struct kvm_async_pf *work) { int r; =20 - if ((vcpu->arch.mmu->direct_map !=3D work->arch.direct_map) || + if ((vcpu->arch.mmu->root_role.direct !=3D work->arch.direct_map) || work->wakeup_all) return; =20 @@ -12368,7 +12368,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcp= u, struct kvm_async_pf *work) if (unlikely(r)) return; =20 - if (!vcpu->arch.mmu->direct_map && + if (!vcpu->arch.mmu->root_role.direct && work->arch.cr3 !=3D vcpu->arch.mmu->get_guest_pgd(vcpu)) return; =20 --=20 2.31.1
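[Editor's note, not part of the original mail: to close out the series, a standalone sketch (invented demo_* names, not kernel code) of the invariant patch 22 relies on. In every configuration listed in its commit message, the removed direct_map boolean was initialized to the same value as root_role.direct, so dropping the duplicate field cannot change behavior.]

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct demo_role { bool direct; };

struct demo_mmu {
	struct demo_role root_role;
	bool direct_map;	/* the field removed by the patch */
};

static void demo_check(const char *name, bool direct)
{
	struct demo_mmu mmu = {
		.root_role = { .direct = direct },
		.direct_map = direct,	/* always initialized from the same value */
	};

	assert(mmu.direct_map == mmu.root_role.direct);
	printf("%-16s direct=%d\n", name, (int)mmu.root_role.direct);
}

int main(void)
{
	demo_check("shadow CR0.PG=0", true);	/* shadow paging with guest paging off */
	demo_check("TDP", true);		/* always direct */
	demo_check("shadow TDP", false);	/* nested EPT/NPT, never direct */
	return 0;
}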