From: Fred Griffoul
To: kvm@vger.kernel.org
Cc: griffoul@gmail.com, Fred Griffoul, Sean Christopherson, Paolo Bonzini,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	x86@kernel.org, "H. Peter Anvin", Shuah Khan,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH 3/5] KVM: x86: Add nested state validation for pfncache support
Date: Mon, 8 Sep 2025 22:32:28 +0100
Message-ID: <20250908213241.3189113-4-griffoul@infradead.org>
In-Reply-To: <20250908213241.3189113-1-griffoul@infradead.org>
References: <20250908213241.3189113-1-griffoul@infradead.org>

From: Fred Griffoul

Implement state validation for nested virtualization to enable pfncache
support for L1 guest pages.
Add a new nested_ops callback, is_nested_state_invalid(), that detects
when KVM needs to reload nested virtualization state. When invalid state
is detected, a KVM_REQ_GET_NESTED_STATE_PAGES request is raised so that
the affected pages are reloaded before L2 execution.

The callback monitors L1 guest pages during guest entry/exit, while the
vCPU runs in IN_GUEST_MODE.

For now the VMX implementation simply returns false; full support is
added in the next patch.

Signed-off-by: Fred Griffoul
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/nested.c       |  6 ++++++
 arch/x86/kvm/x86.c              | 14 +++++++++++++-
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c56cc54d682a..97df7cac14e6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1939,6 +1939,7 @@ struct kvm_x86_nested_ops {
 			       struct kvm_nested_state __user *user_kvm_nested_state,
 			       struct kvm_nested_state *kvm_state);
 	bool (*get_nested_state_pages)(struct kvm_vcpu *vcpu);
+	bool (*is_nested_state_invalid)(struct kvm_vcpu *vcpu);
 	int (*write_log_dirty)(struct kvm_vcpu *vcpu, gpa_t l2_gpa);
 
 	int (*enable_evmcs)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index aa4fe1fe571d..06187b8baa19 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3472,6 +3472,11 @@ static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static bool vmx_is_nested_state_invalid(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
 static int nested_vmx_write_pml_buffer(struct kvm_vcpu *vcpu, gpa_t gpa)
 {
 	struct vmcs12 *vmcs12;
@@ -7366,6 +7371,7 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
 	.get_state = vmx_get_nested_state,
 	.set_state = vmx_set_nested_state,
 	.get_nested_state_pages = vmx_get_nested_state_pages,
+	.is_nested_state_invalid = vmx_is_nested_state_invalid,
 	.write_log_dirty = nested_vmx_write_pml_buffer,
 #ifdef CONFIG_KVM_HYPERV
 	.enable_evmcs = nested_enable_evmcs,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 33fba801b205..db4a6b62f81f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2125,12 +2125,24 @@ int kvm_emulate_monitor(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_monitor);
 
+static inline bool kvm_invalid_nested_state(struct kvm_vcpu *vcpu)
+{
+	if (is_guest_mode(vcpu) &&
+	    kvm_x86_ops.nested_ops->is_nested_state_invalid &&
+	    kvm_x86_ops.nested_ops->is_nested_state_invalid(vcpu)) {
+		kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+		return true;
+	}
+	return false;
+}
+
 static inline bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu)
 {
 	xfer_to_guest_mode_prepare();
 
 	return READ_ONCE(vcpu->mode) == EXITING_GUEST_MODE ||
-		kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending();
+		kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending() ||
+		kvm_invalid_nested_state(vcpu);
 }
 
 /*
-- 
2.51.0
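
As additional context, here is a purely hypothetical sketch of what a
non-trivial VMX callback could look like once an L1 guest page is backed
by a gfn_to_pfn_cache, which is where this series is heading. The
nested.msr_bitmap_cache field is an assumed name and is not part of this
patch; the point is only that a failing kvm_gpc_check() makes the
callback return true, which in turn triggers KVM_REQ_GET_NESTED_STATE_PAGES
via kvm_invalid_nested_state() above.

/*
 * Hypothetical example only: report stale nested state when a pfncache
 * backing an L1 page is no longer valid. The 'nested.msr_bitmap_cache'
 * field is an assumption made purely for illustration.
 */
static bool vmx_is_nested_state_invalid(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	struct gfn_to_pfn_cache *gpc = &vmx->nested.msr_bitmap_cache;
	bool invalid;

	read_lock(&gpc->lock);
	/* kvm_gpc_check() fails if the cached gpa->pfn mapping went stale. */
	invalid = !kvm_gpc_check(gpc, PAGE_SIZE);
	read_unlock(&gpc->lock);

	return invalid;
}

With something along those lines, a stale cache detected on the entry/exit
path forces the pages to be refreshed before the next L2 run, instead of
L2 running against a mapping that may have been invalidated.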