From nobody Wed Apr 8 03:07:41 2026
Reply-To: Sean Christopherson
Date: Tue, 10 Mar 2026 17:33:40 -0700
In-Reply-To: <20260311003346.2626238-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260311003346.2626238-1-seanjc@google.com>
X-Mailer: git-send-email 2.53.0.473.g4a7958ca14-goog
Message-ID: <20260311003346.2626238-2-seanjc@google.com>
Subject: [PATCH 1/7] KVM: x86: Add dedicated storage for guest RIP
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Kiryl Shutsemau
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org, "Chang S. Bae"
Content-Type: text/plain; charset="utf-8"

Add kvm_vcpu_arch.rip to track guest RIP instead of including it in the
generic regs[] array.  Decoupling RIP from regs[] will allow using a
*completely* arbitrary index for RIP, as opposed to the mostly-arbitrary
index that is currently used.  That in turn will allow using indices
16-31 to track R16-R31, which are coming with APX.

Note, although RIP can be used for addressing, it does NOT have an
architecturally defined index, and so can't be reached via flows like
get_vmx_mem_address() where KVM "blindly" reads a general purpose
register given the SIB information reported by hardware.  For
RIP-relative addressing, hardware reports the full "offset" in
vmcs.EXIT_QUALIFICATION.

Note #2, keep the available/dirty tracking as RIP is context switched
through the VMCS, i.e. needs to be cached for VMX.

Opportunistically rename NR_VCPU_REGS to NR_VCPU_GENERAL_PURPOSE_REGS to
better capture what it tracks, and so that KVM can slot in R16-R31
without running into weirdness where KVM's definition of "EXREG" doesn't
line up with APX's definition of "extended reg".

No functional change intended.

Cc: Chang S. Bae
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 10 ++++++----
 arch/x86/kvm/kvm_cache_regs.h   | 12 ++++++++----
 arch/x86/kvm/svm/sev.c          |  2 +-
 arch/x86/kvm/svm/svm.c          |  6 +++---
 arch/x86/kvm/vmx/vmx.c          |  8 ++++----
 arch/x86/kvm/vmx/vmx.h          |  2 +-
 6 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c94556fefb75..0461ba97a3be 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -191,10 +191,11 @@ enum kvm_reg {
 	VCPU_REGS_R14 = __VCPU_REGS_R14,
 	VCPU_REGS_R15 = __VCPU_REGS_R15,
 #endif
-	VCPU_REGS_RIP,
-	NR_VCPU_REGS,
+	NR_VCPU_GENERAL_PURPOSE_REGS,
 
-	VCPU_EXREG_PDPTR = NR_VCPU_REGS,
+	VCPU_REG_RIP = NR_VCPU_GENERAL_PURPOSE_REGS,
+
+	VCPU_EXREG_PDPTR,
 	VCPU_EXREG_CR0,
 	/*
 	 * Alias AMD's ERAPS (not a real register) to CR3 so that common code
@@ -799,7 +800,8 @@ struct kvm_vcpu_arch {
 	 * rip and regs accesses must go through
 	 * kvm_{register,rip}_{read,write} functions.
 	 */
-	unsigned long regs[NR_VCPU_REGS];
+	unsigned long regs[NR_VCPU_GENERAL_PURPOSE_REGS];
+	unsigned long rip;
 	u32 regs_avail;
 	u32 regs_dirty;
 
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 8ddb01191d6f..9b7df9de0e87 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -112,7 +112,7 @@ static __always_inline bool kvm_register_test_and_mark_available(struct kvm_vcpu
  */
 static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg)
 {
-	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_REGS))
+	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_GENERAL_PURPOSE_REGS))
 		return 0;
 
 	if (!kvm_register_is_available(vcpu, reg))
@@ -124,7 +124,7 @@ static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg
 static inline void kvm_register_write_raw(struct kvm_vcpu *vcpu, int reg,
					  unsigned long val)
 {
-	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_REGS))
+	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_GENERAL_PURPOSE_REGS))
 		return;
 
 	vcpu->arch.regs[reg] = val;
@@ -133,12 +133,16 @@ static inline void kvm_register_write_raw(struct kvm_vcpu *vcpu, int reg,
 
 static inline unsigned long kvm_rip_read(struct kvm_vcpu *vcpu)
 {
-	return kvm_register_read_raw(vcpu, VCPU_REGS_RIP);
+	if (!kvm_register_is_available(vcpu, VCPU_REG_RIP))
+		kvm_x86_call(cache_reg)(vcpu, VCPU_REG_RIP);
+
+	return vcpu->arch.rip;
 }
 
 static inline void kvm_rip_write(struct kvm_vcpu *vcpu, unsigned long val)
 {
-	kvm_register_write_raw(vcpu, VCPU_REGS_RIP, val);
+	vcpu->arch.rip = val;
+	kvm_register_mark_dirty(vcpu, VCPU_REG_RIP);
 }
 
 static inline unsigned long kvm_rsp_read(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index b1aa85a6ca5a..0dec619490c3 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -913,7 +913,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 	save->r14 = svm->vcpu.arch.regs[VCPU_REGS_R14];
 	save->r15 = svm->vcpu.arch.regs[VCPU_REGS_R15];
 #endif
-	save->rip = svm->vcpu.arch.regs[VCPU_REGS_RIP];
+	save->rip = svm->vcpu.arch.rip;
 
 	/* Sync some non-GPR registers before encrypting */
 	save->xcr0 = svm->vcpu.arch.xcr0;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3407deac90bd..4b9d79412da7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4436,7 +4436,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 
 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
-	svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+	svm->vmcb->save.rip = vcpu->arch.rip;
 
 	/*
 	 * Disable singlestep if we're injecting an interrupt/exception.
@@ -4522,7 +4522,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 		vcpu->arch.cr2 = svm->vmcb->save.cr2;
 		vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
 		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
-		vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
+		vcpu->arch.rip = svm->vmcb->save.rip;
 	}
 	vcpu->arch.regs_dirty = 0;
 
@@ -4954,7 +4954,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
 
 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
-	svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+	svm->vmcb->save.rip = vcpu->arch.rip;
 
 	nested_svm_simple_vmexit(svm, SVM_EXIT_SW);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9302c16571cd..802cc5d8bf43 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2604,8 +2604,8 @@ void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 	case VCPU_REGS_RSP:
 		vcpu->arch.regs[VCPU_REGS_RSP] = vmcs_readl(GUEST_RSP);
 		break;
-	case VCPU_REGS_RIP:
-		vcpu->arch.regs[VCPU_REGS_RIP] = vmcs_readl(GUEST_RIP);
+	case VCPU_REG_RIP:
+		vcpu->arch.rip = vmcs_readl(GUEST_RIP);
 		break;
 	case VCPU_EXREG_PDPTR:
 		if (enable_ept)
@@ -7536,8 +7536,8 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 
 	if (kvm_register_is_dirty(vcpu, VCPU_REGS_RSP))
 		vmcs_writel(GUEST_RSP, vcpu->arch.regs[VCPU_REGS_RSP]);
-	if (kvm_register_is_dirty(vcpu, VCPU_REGS_RIP))
-		vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]);
+	if (kvm_register_is_dirty(vcpu, VCPU_REG_RIP))
+		vmcs_writel(GUEST_RIP, vcpu->arch.rip);
 	vcpu->arch.regs_dirty = 0;
 
 	if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 70bfe81dea54..31bee8b0e4a1 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -623,7 +623,7 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64)
 * cache on demand.  Other registers not listed here are synced to
 * the cache immediately after VM-Exit.
 */
-#define VMX_REGS_LAZY_LOAD_SET	((1 << VCPU_REGS_RIP) |		\
+#define VMX_REGS_LAZY_LOAD_SET	((1 << VCPU_REG_RIP) |		\
				 (1 << VCPU_REGS_RSP) |		\
				 (1 << VCPU_EXREG_RFLAGS) |	\
				 (1 << VCPU_EXREG_PDPTR) |	\
-- 
2.53.0.473.g4a7958ca14-goog

From nobody Wed Apr 8 03:07:41 2026
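[Editor's note] The available/dirty caching that the dedicated RIP field relies on can be sketched as a small userspace C model. This is an illustrative sketch only: names such as `rip_read`, `rip_write`, `cache_reg`, and `hw_rip` are hypothetical stand-ins for KVM's `kvm_rip_read()`, `kvm_rip_write()`, the `cache_reg` callback, and `vmcs.GUEST_RIP`.

```c
#include <assert.h>

/* Hypothetical model of KVM's register-availability caching. */
enum { REG_RAX, REG_RSP, NR_GPRS, REG_RIP = NR_GPRS, NR_REGS };

struct vcpu {
	unsigned long regs[NR_GPRS];
	unsigned long rip;
	unsigned long hw_rip;     /* stand-in for vmcs.GUEST_RIP */
	unsigned int regs_avail;  /* bit set => cache holds the hardware value */
	unsigned int regs_dirty;  /* bit set => cache must be written back */
};

/* Stand-in for the vendor cache_reg() callback: pull the value
 * from "hardware" into the software cache and mark it available. */
static void cache_reg(struct vcpu *v, int reg)
{
	if (reg == REG_RIP)
		v->rip = v->hw_rip;
	v->regs_avail |= 1u << reg;
}

/* Read RIP lazily: only touch "hardware" if the cache is stale. */
static unsigned long rip_read(struct vcpu *v)
{
	if (!(v->regs_avail & (1u << REG_RIP)))
		cache_reg(v, REG_RIP);
	return v->rip;
}

/* Write RIP to the cache and mark it dirty so it is flushed to
 * "hardware" before the next VM-entry. */
static void rip_write(struct vcpu *v, unsigned long val)
{
	v->rip = val;
	v->regs_avail |= 1u << REG_RIP;
	v->regs_dirty |= 1u << REG_RIP;
}
```

The key property, mirrored from the patch, is that RIP lives outside `regs[]` but still participates in the same `regs_avail`/`regs_dirty` bitmaps, using an index at (rather than below) `NR_GPRS`.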
Reply-To: Sean Christopherson
Date: Tue, 10 Mar 2026 17:33:41 -0700
In-Reply-To: <20260311003346.2626238-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260311003346.2626238-1-seanjc@google.com>
X-Mailer: git-send-email 2.53.0.473.g4a7958ca14-goog
Message-ID: <20260311003346.2626238-3-seanjc@google.com>
Subject: [PATCH 2/7] KVM: x86: Drop the "EX" part of "EXREG" to avoid collision with APX
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Kiryl Shutsemau
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org, "Chang S. Bae"
Content-Type: text/plain; charset="utf-8"

Now that NR_VCPU_REGS is no longer a thing, drop the "EX" (for extended,
or maybe extra?) prefix from non-GPR registers to avoid a collision with
APX (Advanced Performance Extensions), which adds:

  16 additional general-purpose registers (GPRs) R16–R31, also referred
  to as Extended GPRs (EGPRs) in this document;

I.e. KVM's version of "extended" won't match APX's definition.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 18 +++++++--------
 arch/x86/kvm/kvm_cache_regs.h   | 16 ++++++-------
 arch/x86/kvm/svm/svm.c          |  6 ++---
 arch/x86/kvm/svm/svm.h          |  2 +-
 arch/x86/kvm/vmx/nested.c       |  6 ++---
 arch/x86/kvm/vmx/tdx.c          |  4 ++--
 arch/x86/kvm/vmx/vmx.c          | 40 ++++++++++++++++-----------------
 arch/x86/kvm/vmx/vmx.h          | 20 ++++++++---------
 arch/x86/kvm/x86.c              | 16 ++++++-------
 9 files changed, 64 insertions(+), 64 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0461ba97a3be..3af5e2661ade 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -195,8 +195,8 @@ enum kvm_reg {
 
 	VCPU_REG_RIP = NR_VCPU_GENERAL_PURPOSE_REGS,
 
-	VCPU_EXREG_PDPTR,
-	VCPU_EXREG_CR0,
+	VCPU_REG_PDPTR,
+	VCPU_REG_CR0,
 	/*
 	 * Alias AMD's ERAPS (not a real register) to CR3 so that common code
 	 * can trigger emulation of the RAP (Return Address Predictor) with
@@ -204,13 +204,13 @@ enum kvm_reg {
 	 * is cleared on writes to CR3, i.e. marking CR3 dirty will naturally
 	 * mark ERAPS dirty as well.
 	 */
-	VCPU_EXREG_CR3,
-	VCPU_EXREG_ERAPS = VCPU_EXREG_CR3,
-	VCPU_EXREG_CR4,
-	VCPU_EXREG_RFLAGS,
-	VCPU_EXREG_SEGMENTS,
-	VCPU_EXREG_EXIT_INFO_1,
-	VCPU_EXREG_EXIT_INFO_2,
+	VCPU_REG_CR3,
+	VCPU_REG_ERAPS = VCPU_REG_CR3,
+	VCPU_REG_CR4,
+	VCPU_REG_RFLAGS,
+	VCPU_REG_SEGMENTS,
+	VCPU_REG_EXIT_INFO_1,
+	VCPU_REG_EXIT_INFO_2,
 };
 
 enum {
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 9b7df9de0e87..ac1f9867a234 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -159,8 +159,8 @@ static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
 {
 	might_sleep();  /* on svm */
 
-	if (!kvm_register_is_available(vcpu, VCPU_EXREG_PDPTR))
-		kvm_x86_call(cache_reg)(vcpu, VCPU_EXREG_PDPTR);
+	if (!kvm_register_is_available(vcpu, VCPU_REG_PDPTR))
+		kvm_x86_call(cache_reg)(vcpu, VCPU_REG_PDPTR);
 
 	return vcpu->arch.walk_mmu->pdptrs[index];
 }
@@ -174,8 +174,8 @@ static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
 {
 	ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
 	if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
-	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
-		kvm_x86_call(cache_reg)(vcpu, VCPU_EXREG_CR0);
+	    !kvm_register_is_available(vcpu, VCPU_REG_CR0))
+		kvm_x86_call(cache_reg)(vcpu, VCPU_REG_CR0);
 	return vcpu->arch.cr0 & mask;
 }
 
@@ -196,8 +196,8 @@ static inline ulong kvm_read_cr4_bits(struct kvm_vcpu *vcpu, ulong mask)
 {
 	ulong tmask = mask & KVM_POSSIBLE_CR4_GUEST_BITS;
 	if ((tmask & vcpu->arch.cr4_guest_owned_bits) &&
-	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR4))
-		kvm_x86_call(cache_reg)(vcpu, VCPU_EXREG_CR4);
+	    !kvm_register_is_available(vcpu, VCPU_REG_CR4))
+		kvm_x86_call(cache_reg)(vcpu, VCPU_REG_CR4);
 	return vcpu->arch.cr4 & mask;
 }
 
@@ -211,8 +211,8 @@ static __always_inline bool kvm_is_cr4_bit_set(struct kvm_vcpu *vcpu,
 
 static inline ulong kvm_read_cr3(struct kvm_vcpu *vcpu)
 {
-	if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
-		kvm_x86_call(cache_reg)(vcpu, VCPU_EXREG_CR3);
+	if (!kvm_register_is_available(vcpu, VCPU_REG_CR3))
+		kvm_x86_call(cache_reg)(vcpu, VCPU_REG_CR3);
 	return vcpu->arch.cr3;
 }
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4b9d79412da7..1712c21f4128 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1512,7 +1512,7 @@ static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 	kvm_register_mark_available(vcpu, reg);
 
 	switch (reg) {
-	case VCPU_EXREG_PDPTR:
+	case VCPU_REG_PDPTR:
 		/*
 		 * When !npt_enabled, mmu->pdptrs[] is already available since
 		 * it is always updated per SDM when moving to CRs.
@@ -4197,7 +4197,7 @@ static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
 
 static void svm_flush_tlb_guest(struct kvm_vcpu *vcpu)
 {
-	kvm_register_mark_dirty(vcpu, VCPU_EXREG_ERAPS);
+	kvm_register_mark_dirty(vcpu, VCPU_REG_ERAPS);
 
 	svm_flush_tlb_asid(vcpu);
 }
@@ -4473,7 +4473,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 		svm->vmcb->save.cr2 = vcpu->arch.cr2;
 
 	if (guest_cpu_cap_has(vcpu, X86_FEATURE_ERAPS) &&
-	    kvm_register_is_dirty(vcpu, VCPU_EXREG_ERAPS))
+	    kvm_register_is_dirty(vcpu, VCPU_REG_ERAPS))
 		svm->vmcb->control.erap_ctl |= ERAP_CONTROL_CLEAR_RAP;
 
 	svm_fixup_nested_rips(vcpu);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 9909bb7d2d31..dea46130aa24 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -460,7 +460,7 @@ static inline bool svm_is_vmrun_failure(u64 exit_code)
 * KVM_REQ_LOAD_MMU_PGD is always requested when the cached vcpu->arch.cr3
 * is changed.  svm_load_mmu_pgd() then syncs the new CR3 value into the VMCB.
 */
-#define SVM_REGS_LAZY_LOAD_SET	(1 << VCPU_EXREG_PDPTR)
+#define SVM_REGS_LAZY_LOAD_SET	(1 << VCPU_REG_PDPTR)
 
 static inline void __vmcb_set_intercept(unsigned long *intercepts, u32 bit)
 {
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 101588914cbb..942acc46f91d 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1189,7 +1189,7 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 	}
 
 	vcpu->arch.cr3 = cr3;
-	kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
+	kvm_register_mark_dirty(vcpu, VCPU_REG_CR3);
 
 	/* Re-initialize the MMU, e.g. to pick up CR4 MMU role changes. */
 	kvm_init_mmu(vcpu);
@@ -4972,7 +4972,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
 
 	nested_ept_uninit_mmu_context(vcpu);
 	vcpu->arch.cr3 = vmcs_readl(GUEST_CR3);
-	kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);
+	kvm_register_mark_available(vcpu, VCPU_REG_CR3);
 
 	/*
 	 * Use ept_save_pdptrs(vcpu) to load the MMU's cached PDPTRs
@@ -5074,7 +5074,7 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 	kvm_service_local_tlb_flush_requests(vcpu);
 
 	/*
-	 * VCPU_EXREG_PDPTR will be clobbered in arch/x86/kvm/vmx/vmx.h between
+	 * VCPU_REG_PDPTR will be clobbered in arch/x86/kvm/vmx/vmx.h between
 	 * now and the new vmentry.  Ensure that the VMCS02 PDPTR fields are
 	 * up-to-date before switching to L1.
 	 */
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 1e47c194af53..c23ec4ac8bc8 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1013,8 +1013,8 @@ static fastpath_t tdx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 	return EXIT_FASTPATH_NONE;
 }
 
-#define TDX_REGS_AVAIL_SET	(BIT_ULL(VCPU_EXREG_EXIT_INFO_1) |	\
-				 BIT_ULL(VCPU_EXREG_EXIT_INFO_2) |	\
+#define TDX_REGS_AVAIL_SET	(BIT_ULL(VCPU_REG_EXIT_INFO_1) |	\
+				 BIT_ULL(VCPU_REG_EXIT_INFO_2) |	\
				 BIT_ULL(VCPU_REGS_RAX) |		\
				 BIT_ULL(VCPU_REGS_RBX) |		\
				 BIT_ULL(VCPU_REGS_RCX) |		\
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 802cc5d8bf43..ed44eb5b4349 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -843,8 +843,8 @@ static bool vmx_segment_cache_test_set(struct vcpu_vmx *vmx, unsigned seg,
 	bool ret;
 	u32 mask = 1 << (seg * SEG_FIELD_NR + field);
 
-	if (!kvm_register_is_available(&vmx->vcpu, VCPU_EXREG_SEGMENTS)) {
-		kvm_register_mark_available(&vmx->vcpu, VCPU_EXREG_SEGMENTS);
+	if (!kvm_register_is_available(&vmx->vcpu, VCPU_REG_SEGMENTS)) {
+		kvm_register_mark_available(&vmx->vcpu, VCPU_REG_SEGMENTS);
 		vmx->segment_cache.bitmask = 0;
 	}
 	ret = vmx->segment_cache.bitmask & mask;
@@ -1609,8 +1609,8 @@ unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long rflags, save_rflags;
 
-	if (!kvm_register_is_available(vcpu, VCPU_EXREG_RFLAGS)) {
-		kvm_register_mark_available(vcpu, VCPU_EXREG_RFLAGS);
+	if (!kvm_register_is_available(vcpu, VCPU_REG_RFLAGS)) {
+		kvm_register_mark_available(vcpu, VCPU_REG_RFLAGS);
 		rflags = vmcs_readl(GUEST_RFLAGS);
 		if (vmx->rmode.vm86_active) {
 			rflags &= RMODE_GUEST_OWNED_EFLAGS_BITS;
@@ -1633,7 +1633,7 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	 * if L1 runs L2 as a restricted guest.
 	 */
 	if (is_unrestricted_guest(vcpu)) {
-		kvm_register_mark_available(vcpu, VCPU_EXREG_RFLAGS);
+		kvm_register_mark_available(vcpu, VCPU_REG_RFLAGS);
 		vmx->rflags = rflags;
 		vmcs_writel(GUEST_RFLAGS, rflags);
 		return;
@@ -2607,17 +2607,17 @@ void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 	case VCPU_REG_RIP:
 		vcpu->arch.rip = vmcs_readl(GUEST_RIP);
 		break;
-	case VCPU_EXREG_PDPTR:
+	case VCPU_REG_PDPTR:
 		if (enable_ept)
 			ept_save_pdptrs(vcpu);
 		break;
-	case VCPU_EXREG_CR0:
+	case VCPU_REG_CR0:
 		guest_owned_bits = vcpu->arch.cr0_guest_owned_bits;
 
 		vcpu->arch.cr0 &= ~guest_owned_bits;
 		vcpu->arch.cr0 |= vmcs_readl(GUEST_CR0) & guest_owned_bits;
 		break;
-	case VCPU_EXREG_CR3:
+	case VCPU_REG_CR3:
 		/*
 		 * When intercepting CR3 loads, e.g. for shadowing paging, KVM's
 		 * CR3 is loaded into hardware, not the guest's CR3.
@@ -2625,7 +2625,7 @@ void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 		if (!(exec_controls_get(to_vmx(vcpu)) & CPU_BASED_CR3_LOAD_EXITING))
 			vcpu->arch.cr3 = vmcs_readl(GUEST_CR3);
 		break;
-	case VCPU_EXREG_CR4:
+	case VCPU_REG_CR4:
 		guest_owned_bits = vcpu->arch.cr4_guest_owned_bits;
 
 		vcpu->arch.cr4 &= ~guest_owned_bits;
@@ -3350,7 +3350,7 @@ void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	if (!kvm_register_is_dirty(vcpu, VCPU_EXREG_PDPTR))
+	if (!kvm_register_is_dirty(vcpu, VCPU_REG_PDPTR))
 		return;
 
 	if (is_pae_paging(vcpu)) {
@@ -3373,7 +3373,7 @@ void ept_save_pdptrs(struct kvm_vcpu *vcpu)
 	mmu->pdptrs[2] = vmcs_read64(GUEST_PDPTR2);
 	mmu->pdptrs[3] = vmcs_read64(GUEST_PDPTR3);
 
-	kvm_register_mark_available(vcpu, VCPU_EXREG_PDPTR);
+	kvm_register_mark_available(vcpu, VCPU_REG_PDPTR);
 }
 
 #define CR3_EXITING_BITS	(CPU_BASED_CR3_LOAD_EXITING | \
@@ -3416,7 +3416,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	vmcs_writel(CR0_READ_SHADOW, cr0);
 	vmcs_writel(GUEST_CR0, hw_cr0);
 	vcpu->arch.cr0 = cr0;
-	kvm_register_mark_available(vcpu, VCPU_EXREG_CR0);
+	kvm_register_mark_available(vcpu, VCPU_REG_CR0);
 
 #ifdef CONFIG_X86_64
 	if (vcpu->arch.efer & EFER_LME) {
@@ -3434,8 +3434,8 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 		 * (correctly) stop reading vmcs.GUEST_CR3 because it thinks
 		 * KVM's CR3 is installed.
 		 */
-		if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
-			vmx_cache_reg(vcpu, VCPU_EXREG_CR3);
+		if (!kvm_register_is_available(vcpu, VCPU_REG_CR3))
+			vmx_cache_reg(vcpu, VCPU_REG_CR3);
 
 		/*
 		 * When running with EPT but not unrestricted guest, KVM must
@@ -3472,7 +3472,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	 * GUEST_CR3 is still vmx->ept_identity_map_addr if EPT + !URG.
 	 */
 	if (!(old_cr0_pg & X86_CR0_PG) && (cr0 & X86_CR0_PG))
-		kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
+		kvm_register_mark_dirty(vcpu, VCPU_REG_CR3);
 }
 
 /* depends on vcpu->arch.cr0 to be set to a new value */
@@ -3501,7 +3501,7 @@ void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level)
 
 	if (!enable_unrestricted_guest && !is_paging(vcpu))
 		guest_cr3 = to_kvm_vmx(kvm)->ept_identity_map_addr;
-	else if (kvm_register_is_dirty(vcpu, VCPU_EXREG_CR3))
+	else if (kvm_register_is_dirty(vcpu, VCPU_REG_CR3))
 		guest_cr3 = vcpu->arch.cr3;
 	else /* vmcs.GUEST_CR3 is already up-to-date.
 */
		update_guest_cr3 = false;
@@ -3561,7 +3561,7 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	}
 
 	vcpu->arch.cr4 = cr4;
-	kvm_register_mark_available(vcpu, VCPU_EXREG_CR4);
+	kvm_register_mark_available(vcpu, VCPU_REG_CR4);
 
 	if (!enable_unrestricted_guest) {
 		if (enable_ept) {
@@ -5021,7 +5021,7 @@ void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vmcs_write32(GUEST_IDTR_LIMIT, 0xffff);
 
 	vmx_segment_cache_clear(vmx);
-	kvm_register_mark_available(vcpu, VCPU_EXREG_SEGMENTS);
+	kvm_register_mark_available(vcpu, VCPU_REG_SEGMENTS);
 
 	vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE);
 	vmcs_write32(GUEST_INTERRUPTIBILITY_INFO, 0);
@@ -7514,9 +7514,9 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 
 		vmx->vt.exit_reason.full = EXIT_REASON_INVALID_STATE;
 		vmx->vt.exit_reason.failed_vmentry = 1;
-		kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_1);
+		kvm_register_mark_available(vcpu, VCPU_REG_EXIT_INFO_1);
 		vmx->vt.exit_qualification = ENTRY_FAIL_DEFAULT;
-		kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_2);
+		kvm_register_mark_available(vcpu, VCPU_REG_EXIT_INFO_2);
 		vmx->vt.exit_intr_info = 0;
 		return EXIT_FASTPATH_NONE;
 	}
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 31bee8b0e4a1..d3255a054185 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -320,7 +320,7 @@ static __always_inline unsigned long vmx_get_exit_qual(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vt *vt = to_vt(vcpu);
 
-	if (!kvm_register_test_and_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_1) &&
+	if (!kvm_register_test_and_mark_available(vcpu, VCPU_REG_EXIT_INFO_1) &&
 	    !WARN_ON_ONCE(is_td_vcpu(vcpu)))
 		vt->exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
 
@@ -331,7 +331,7 @@ static __always_inline u32 vmx_get_intr_info(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vt *vt = to_vt(vcpu);
 
-	if (!kvm_register_test_and_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_2) &&
+	if (!kvm_register_test_and_mark_available(vcpu, VCPU_REG_EXIT_INFO_2) &&
 	    !WARN_ON_ONCE(is_td_vcpu(vcpu)))
 		vt->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
 
@@ -625,14 +625,14 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64)
 */
 #define VMX_REGS_LAZY_LOAD_SET	((1 << VCPU_REG_RIP) |		\
				 (1 << VCPU_REGS_RSP) |		\
-				 (1 << VCPU_EXREG_RFLAGS) |	\
-				 (1 << VCPU_EXREG_PDPTR) |	\
-				 (1 << VCPU_EXREG_SEGMENTS) |	\
-				 (1 << VCPU_EXREG_CR0) |	\
-				 (1 << VCPU_EXREG_CR3) |	\
-				 (1 << VCPU_EXREG_CR4) |	\
-				 (1 << VCPU_EXREG_EXIT_INFO_1) |	\
-				 (1 << VCPU_EXREG_EXIT_INFO_2))
+				 (1 << VCPU_REG_RFLAGS) |	\
+				 (1 << VCPU_REG_PDPTR) |	\
+				 (1 << VCPU_REG_SEGMENTS) |	\
+				 (1 << VCPU_REG_CR0) |	\
+				 (1 << VCPU_REG_CR3) |	\
+				 (1 << VCPU_REG_CR4) |	\
+				 (1 << VCPU_REG_EXIT_INFO_1) |	\
+				 (1 << VCPU_REG_EXIT_INFO_2))
 
 static inline unsigned long vmx_l1_guest_owned_cr0_bits(void)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 879cdeb6adde..dd39ccbff0d6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1090,14 +1090,14 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
 	}
 
 	/*
-	 * Marking VCPU_EXREG_PDPTR dirty doesn't work for !tdp_enabled.
+	 * Marking VCPU_REG_PDPTR dirty doesn't work for !tdp_enabled.
 	 * Shadow page roots need to be reconstructed instead.
 	 */
 	if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
 		kvm_mmu_free_roots(vcpu->kvm, mmu, KVM_MMU_ROOT_CURRENT);
 
 	memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
-	kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
+	kvm_register_mark_dirty(vcpu, VCPU_REG_PDPTR);
 	kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
 	vcpu->arch.pdptrs_from_userspace = false;
 
@@ -1478,7 +1478,7 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 		kvm_mmu_new_pgd(vcpu, cr3);
 
 	vcpu->arch.cr3 = cr3;
-	kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
+	kvm_register_mark_dirty(vcpu, VCPU_REG_CR3);
 	/* Do not call post_set_cr3, we do not get here for confidential guests. */
 
handle_tlb_flush:
@@ -12446,7 +12446,7 @@ static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs,
 	vcpu->arch.cr2 = sregs->cr2;
 	*mmu_reset_needed |= kvm_read_cr3(vcpu) != sregs->cr3;
 	vcpu->arch.cr3 = sregs->cr3;
-	kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
+	kvm_register_mark_dirty(vcpu, VCPU_REG_CR3);
 	kvm_x86_call(post_set_cr3)(vcpu, sregs->cr3);
 
 	kvm_set_cr8(vcpu, sregs->cr8);
@@ -12539,7 +12539,7 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2)
 		for (i = 0; i < 4 ; i++)
 			kvm_pdptr_write(vcpu, i, sregs2->pdptrs[i]);
 
-		kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
+		kvm_register_mark_dirty(vcpu, VCPU_REG_PDPTR);
 		mmu_reset_needed = 1;
 		vcpu->arch.pdptrs_from_userspace = true;
 	}
@@ -13084,7 +13084,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	kvm_rip_write(vcpu, 0xfff0);
 
 	vcpu->arch.cr3 = 0;
-	kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
+	kvm_register_mark_dirty(vcpu, VCPU_REG_CR3);
 
 	/*
 	 * CR0.CD/NW are set on RESET, preserved on INIT.  Note, some versions
@@ -14296,7 +14296,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
 		 * the RAP (Return Address Predicator).
 		 */
 		if (guest_cpu_cap_has(vcpu, X86_FEATURE_ERAPS))
-			kvm_register_is_dirty(vcpu, VCPU_EXREG_ERAPS);
+			kvm_register_is_dirty(vcpu, VCPU_REG_ERAPS);
 
 		kvm_invalidate_pcid(vcpu, operand.pcid);
 		return kvm_skip_emulated_instruction(vcpu);
@@ -14312,7 +14312,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
 		fallthrough;
 	case INVPCID_TYPE_ALL_INCL_GLOBAL:
 		/*
-		 * Don't bother marking VCPU_EXREG_ERAPS dirty, SVM will take
+		 * Don't bother marking VCPU_REG_ERAPS dirty, SVM will take
 		 * care of doing so when emulating the full guest TLB flush
 		 * (the RAP is cleared on all implicit TLB flushes).
		 */
-- 
2.53.0.473.g4a7958ca14-goog

From nobody Wed Apr 8 03:07:41 2026
Reply-To: Sean Christopherson
Date: Tue, 10 Mar 2026 17:33:42 -0700
In-Reply-To: <20260311003346.2626238-1-seanjc@google.com>
Mime-Version: 1.0
References: <20260311003346.2626238-1-seanjc@google.com>
Message-ID: <20260311003346.2626238-4-seanjc@google.com>
Subject: [PATCH 3/7] KVM: nVMX: Do a bitwise-AND of regs_avail when switching active VMCS
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Kiryl Shutsemau
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org, "Chang S. Bae"
Content-Type: text/plain; charset="utf-8"

When switching between vmcs01 and vmcs02, do a bitwise-AND of regs_avail
to effectively reset the mask for the new VMCS, purely to be consistent
with all other "full" writes of regs_avail.

In practice, a straight write versus a bitwise-AND will yield the same
result, as kvm_arch_vcpu_create() marks *all* registers available (and
dirty), and KVM never marks registers unavailable unless they're lazily
loaded.

This will allow adding wrapper APIs to set regs_{avail,dirty} without
having to add special handling for an nVMX use case that doesn't exist
in practice.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 942acc46f91d..af2aaef38502 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -310,7 +310,7 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
 	vmx_sync_vmcs_host_state(vmx, prev);
 	put_cpu();
 
-	vcpu->arch.regs_avail = ~VMX_REGS_LAZY_LOAD_SET;
+	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
 
 	/*
 	 * All lazily updated registers will be reloaded from VMCS12 on both
-- 
2.53.0.473.g4a7958ca14-goog

From nobody Wed Apr 8 03:07:41 2026
Reply-To: Sean Christopherson
Date: Tue, 10 Mar 2026 17:33:43 -0700
In-Reply-To: <20260311003346.2626238-1-seanjc@google.com>
Mime-Version: 1.0
References: <20260311003346.2626238-1-seanjc@google.com>
Message-ID: <20260311003346.2626238-5-seanjc@google.com>
Subject: [PATCH 4/7] KVM: x86: Add wrapper APIs to reset dirty/available register masks
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Kiryl Shutsemau
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org, "Chang S. Bae"
Content-Type: text/plain; charset="utf-8"

Add wrappers for setting regs_{avail,dirty} in anticipation of turning
the fields into proper bitmaps, at which point direct writes won't work
so well.

Deliberately leave the initialization in kvm_arch_vcpu_create() as-is,
because the regs_avail logic in particular is special in that it's the
one and only place where KVM marks eagerly synchronized registers as
available.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/kvm_cache_regs.h | 19 +++++++++++++++++++
 arch/x86/kvm/svm/svm.c        |  4 ++--
 arch/x86/kvm/vmx/nested.c     |  4 ++--
 arch/x86/kvm/vmx/tdx.c        |  2 +-
 arch/x86/kvm/vmx/vmx.c        |  4 ++--
 5 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index ac1f9867a234..94e31cf38cb8 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -105,6 +105,25 @@ static __always_inline bool kvm_register_test_and_mark_available(struct kvm_vcpu
 	return arch___test_and_set_bit(reg, (unsigned long *)&vcpu->arch.regs_avail);
 }
 
+static __always_inline void kvm_reset_available_registers(struct kvm_vcpu *vcpu,
+							  u32 available_mask)
+{
+	/*
+	 * Note the bitwise-AND!  In practice, a straight write would also work
+	 * as KVM initializes the mask to all ones and never clears registers
+	 * that are eagerly synchronized.  Using a bitwise-AND adds a bit of
+	 * sanity checking as incorrectly marking an eagerly sync'd register
+	 * unavailable will generate a WARN due to an unexpected cache request.
+	 */
+	vcpu->arch.regs_avail &= available_mask;
+}
+
+static __always_inline void kvm_reset_dirty_registers(struct kvm_vcpu *vcpu,
+						      u32 dirty_mask)
+{
+	vcpu->arch.regs_dirty = dirty_mask;
+}
+
 /*
  * The "raw" register helpers are only for cases where the full 64 bits of a
  * register are read/written irrespective of current vCPU mode.  In other words,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1712c21f4128..1a6626c32188 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4524,7 +4524,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
 		vcpu->arch.rip = svm->vmcb->save.rip;
 	}
-	vcpu->arch.regs_dirty = 0;
+	kvm_reset_dirty_registers(vcpu, 0);
 
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
@@ -4570,7 +4570,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 		vcpu->arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
 
-	vcpu->arch.regs_avail &= ~SVM_REGS_LAZY_LOAD_SET;
+	kvm_reset_available_registers(vcpu, ~SVM_REGS_LAZY_LOAD_SET);
 
 	if (!msr_write_intercepted(vcpu, MSR_AMD64_PERF_CNTR_GLOBAL_CTL))
 		rdmsrq(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, vcpu_to_pmu(vcpu)->global_ctrl);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index af2aaef38502..d4ba64bde709 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -310,13 +310,13 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
 	vmx_sync_vmcs_host_state(vmx, prev);
 	put_cpu();
 
-	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
+	kvm_reset_available_registers(vcpu, ~VMX_REGS_LAZY_LOAD_SET);
 
 	/*
 	 * All lazily updated registers will be reloaded from VMCS12 on both
 	 * vmentry and vmexit.
 	 */
-	vcpu->arch.regs_dirty = 0;
+	kvm_reset_dirty_registers(vcpu, 0);
 }
 
 static void nested_put_vmcs12_pages(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c23ec4ac8bc8..d4cb6dc8098f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1098,7 +1098,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 
 	tdx_load_host_xsave_state(vcpu);
 
-	vcpu->arch.regs_avail &= TDX_REGS_AVAIL_SET;
+	kvm_reset_available_registers(vcpu, TDX_REGS_AVAIL_SET);
 
 	if (unlikely(tdx->vp_enter_ret == EXIT_REASON_EPT_MISCONFIG))
 		return EXIT_FASTPATH_NONE;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ed44eb5b4349..217ea6e72c2f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7472,7 +7472,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 				   flags);
 
 	vcpu->arch.cr2 = native_read_cr2();
-	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
+	kvm_reset_available_registers(vcpu, ~VMX_REGS_LAZY_LOAD_SET);
 
 	vmx->idt_vectoring_info = 0;
 
@@ -7538,7 +7538,7 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 		vmcs_writel(GUEST_RSP, vcpu->arch.regs[VCPU_REGS_RSP]);
 	if (kvm_register_is_dirty(vcpu, VCPU_REG_RIP))
 		vmcs_writel(GUEST_RIP, vcpu->arch.rip);
-	vcpu->arch.regs_dirty = 0;
+	kvm_reset_dirty_registers(vcpu, 0);
 
 	if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
 		set_debugreg(vcpu->arch.dr6, 6);
-- 
2.53.0.473.g4a7958ca14-goog

From nobody Wed Apr 8 03:07:41 2026
Reply-To: Sean Christopherson
Date: Tue, 10 Mar 2026 17:33:44 -0700
In-Reply-To: <20260311003346.2626238-1-seanjc@google.com>
Mime-Version: 1.0
References: <20260311003346.2626238-1-seanjc@google.com>
Message-ID: <20260311003346.2626238-6-seanjc@google.com>
Subject: [PATCH 5/7] KVM: x86: Track available/dirty register masks as "unsigned long" values
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Kiryl Shutsemau
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org, "Chang S. Bae"
Content-Type: text/plain; charset="utf-8"

Convert regs_{avail,dirty} and all related masks to "unsigned long"
values as an intermediate step towards declaring the fields as actual
bitmaps, and as a step toward supporting APX, which will push the total
number of registers beyond 32 on 64-bit kernels.

Opportunistically convert TDX's ULL bitmask to a UL to match everything
else (TDX is 64-bit only, so it's a nop in the end).

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  4 ++--
 arch/x86/kvm/kvm_cache_regs.h   |  4 ++--
 arch/x86/kvm/svm/svm.h          |  2 +-
 arch/x86/kvm/vmx/tdx.c          | 34 ++++++++++++++++-----------------
 arch/x86/kvm/vmx/vmx.h          | 20 +++++++++----------
 5 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3af5e2661ade..734c2eee58e0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -802,8 +802,8 @@ struct kvm_vcpu_arch {
 	 */
 	unsigned long regs[NR_VCPU_GENERAL_PURPOSE_REGS];
 	unsigned long rip;
-	u32 regs_avail;
-	u32 regs_dirty;
+	unsigned long regs_avail;
+	unsigned long regs_dirty;
 
 	unsigned long cr0;
 	unsigned long cr0_guest_owned_bits;
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 94e31cf38cb8..5de6c7dfd63b 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -106,7 +106,7 @@ static __always_inline bool kvm_register_test_and_mark_available(struct kvm_vcpu
 }
 
 static __always_inline void kvm_reset_available_registers(struct kvm_vcpu *vcpu,
-							  u32 available_mask)
+							  unsigned long available_mask)
 {
 	/*
 	 * Note the bitwise-AND!  In practice, a straight write would also work
@@ -119,7 +119,7 @@ static __always_inline void kvm_reset_available_registers(struct kvm_vcpu *vcpu,
 }
 
 static __always_inline void kvm_reset_dirty_registers(struct kvm_vcpu *vcpu,
-						      u32 dirty_mask)
+						      unsigned long dirty_mask)
 {
 	vcpu->arch.regs_dirty = dirty_mask;
 }
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index dea46130aa24..7010db21e8cc 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -460,7 +460,7 @@ static inline bool svm_is_vmrun_failure(u64 exit_code)
 * KVM_REQ_LOAD_MMU_PGD is always requested when the cached vcpu->arch.cr3
 * is changed.  svm_load_mmu_pgd() then syncs the new CR3 value into the VMCB.
 */
-#define SVM_REGS_LAZY_LOAD_SET	(1 << VCPU_REG_PDPTR)
+#define SVM_REGS_LAZY_LOAD_SET	(BIT(VCPU_REG_PDPTR))
 
 static inline void __vmcb_set_intercept(unsigned long *intercepts, u32 bit)
 {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d4cb6dc8098f..1e4f59cfdc0a 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1013,23 +1013,23 @@ static fastpath_t tdx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 	return EXIT_FASTPATH_NONE;
 }
 
-#define TDX_REGS_AVAIL_SET	(BIT_ULL(VCPU_REG_EXIT_INFO_1) | \
-				 BIT_ULL(VCPU_REG_EXIT_INFO_2) | \
-				 BIT_ULL(VCPU_REGS_RAX) | \
-				 BIT_ULL(VCPU_REGS_RBX) | \
-				 BIT_ULL(VCPU_REGS_RCX) | \
-				 BIT_ULL(VCPU_REGS_RDX) | \
-				 BIT_ULL(VCPU_REGS_RBP) | \
-				 BIT_ULL(VCPU_REGS_RSI) | \
-				 BIT_ULL(VCPU_REGS_RDI) | \
-				 BIT_ULL(VCPU_REGS_R8) | \
-				 BIT_ULL(VCPU_REGS_R9) | \
-				 BIT_ULL(VCPU_REGS_R10) | \
-				 BIT_ULL(VCPU_REGS_R11) | \
-				 BIT_ULL(VCPU_REGS_R12) | \
-				 BIT_ULL(VCPU_REGS_R13) | \
-				 BIT_ULL(VCPU_REGS_R14) | \
-				 BIT_ULL(VCPU_REGS_R15))
+#define TDX_REGS_AVAIL_SET	(BIT(VCPU_REG_EXIT_INFO_1) | \
+				 BIT(VCPU_REG_EXIT_INFO_2) | \
+				 BIT(VCPU_REGS_RAX) | \
+				 BIT(VCPU_REGS_RBX) | \
+				 BIT(VCPU_REGS_RCX) | \
+				 BIT(VCPU_REGS_RDX) | \
+				 BIT(VCPU_REGS_RBP) | \
+				 BIT(VCPU_REGS_RSI) | \
+				 BIT(VCPU_REGS_RDI) | \
+				 BIT(VCPU_REGS_R8) | \
+				 BIT(VCPU_REGS_R9) | \
+				 BIT(VCPU_REGS_R10) | \
+				 BIT(VCPU_REGS_R11) | \
+				 BIT(VCPU_REGS_R12) | \
+				 BIT(VCPU_REGS_R13) | \
+				 BIT(VCPU_REGS_R14) | \
+				 BIT(VCPU_REGS_R15))
 
 static void tdx_load_host_xsave_state(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index d3255a054185..0962374c4cd3 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -623,16 +623,16 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64)
  * cache on demand.  Other registers not listed here are synced to
 * the cache immediately after VM-Exit.
 */
-#define VMX_REGS_LAZY_LOAD_SET	((1 << VCPU_REG_RIP) |		\
-				 (1 << VCPU_REGS_RSP) |		\
-				 (1 << VCPU_REG_RFLAGS) |	\
-				 (1 << VCPU_REG_PDPTR) |	\
-				 (1 << VCPU_REG_SEGMENTS) |	\
-				 (1 << VCPU_REG_CR0) |	\
-				 (1 << VCPU_REG_CR3) |	\
-				 (1 << VCPU_REG_CR4) |	\
-				 (1 << VCPU_REG_EXIT_INFO_1) |	\
-				 (1 << VCPU_REG_EXIT_INFO_2))
+#define VMX_REGS_LAZY_LOAD_SET	(BIT(VCPU_REGS_RSP) |		\
+				 BIT(VCPU_REG_RIP) |		\
+				 BIT(VCPU_REG_RFLAGS) |		\
+				 BIT(VCPU_REG_PDPTR) |		\
+				 BIT(VCPU_REG_SEGMENTS) |	\
+				 BIT(VCPU_REG_CR0) |		\
+				 BIT(VCPU_REG_CR3) |		\
+				 BIT(VCPU_REG_CR4) |		\
+				 BIT(VCPU_REG_EXIT_INFO_1) |	\
+				 BIT(VCPU_REG_EXIT_INFO_2))
 
 static inline unsigned long vmx_l1_guest_owned_cr0_bits(void)
 {
-- 
2.53.0.473.g4a7958ca14-goog

From nobody Wed Apr 8 03:07:41 2026
Reply-To: Sean Christopherson
Date: Tue, 10 Mar 2026 17:33:45 -0700
In-Reply-To: <20260311003346.2626238-1-seanjc@google.com>
Mime-Version: 1.0
References: <20260311003346.2626238-1-seanjc@google.com>
Message-ID: <20260311003346.2626238-7-seanjc@google.com>
Subject: [PATCH 6/7] KVM: x86: Use a proper bitmap for tracking available/dirty registers
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Kiryl Shutsemau
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org, "Chang S. Bae"
Content-Type: text/plain; charset="utf-8"

Define regs_{avail,dirty} as bitmaps instead of U32s to harden against
overflow, and to allow for dynamically sizing the bitmaps when APX comes
along, which will add 16 more GPRs (R16-R31) and thus increase the total
number of registers beyond 32.

Open code writes in the "reset" APIs, as the writes are hot paths and
bitmap_write() is complete overkill for what KVM needs.  Even better,
hardcoding writes to entry '0' in the array is a perfect excuse to assert
that the array contains exactly one entry, e.g. to effectively add a
guard against defining R16-R31 in 32-bit kernels.

For all intents and purposes, no functional change intended even though
using bitmap_fill() will mean "undefined" registers are no longer marked
available and dirty (KVM should never be querying those bits).

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  6 ++++--
 arch/x86/kvm/kvm_cache_regs.h   | 21 +++++++++++++--------
 arch/x86/kvm/x86.c              |  4 ++--
 3 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 734c2eee58e0..cff9023f12c7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -211,6 +211,8 @@ enum kvm_reg {
 	VCPU_REG_SEGMENTS,
 	VCPU_REG_EXIT_INFO_1,
 	VCPU_REG_EXIT_INFO_2,
+
+	NR_VCPU_TOTAL_REGS,
 };
 
 enum {
@@ -802,8 +804,8 @@ struct kvm_vcpu_arch {
 	 */
 	unsigned long regs[NR_VCPU_GENERAL_PURPOSE_REGS];
 	unsigned long rip;
-	unsigned long regs_avail;
-	unsigned long regs_dirty;
+	DECLARE_BITMAP(regs_avail, NR_VCPU_TOTAL_REGS);
+	DECLARE_BITMAP(regs_dirty, NR_VCPU_TOTAL_REGS);
 
 	unsigned long cr0;
 	unsigned long cr0_guest_owned_bits;
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 5de6c7dfd63b..782710829608 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -67,29 +67,29 @@ static inline bool kvm_register_is_available(struct kvm_vcpu *vcpu,
 					     enum kvm_reg reg)
 {
 	kvm_assert_register_caching_allowed(vcpu);
-	return test_bit(reg, (unsigned long *)&vcpu->arch.regs_avail);
+	return test_bit(reg, vcpu->arch.regs_avail);
 }
 
 static inline bool kvm_register_is_dirty(struct kvm_vcpu *vcpu,
 					 enum kvm_reg reg)
 {
 	kvm_assert_register_caching_allowed(vcpu);
-	return test_bit(reg, (unsigned long *)&vcpu->arch.regs_dirty);
+	return test_bit(reg, vcpu->arch.regs_dirty);
 }
 
 static inline void kvm_register_mark_available(struct kvm_vcpu *vcpu,
 					       enum kvm_reg reg)
 {
 	kvm_assert_register_caching_allowed(vcpu);
-	__set_bit(reg, (unsigned long *)&vcpu->arch.regs_avail);
+	__set_bit(reg, vcpu->arch.regs_avail);
 }
 
 static inline void kvm_register_mark_dirty(struct kvm_vcpu *vcpu,
 					   enum kvm_reg reg)
 {
 	kvm_assert_register_caching_allowed(vcpu);
-	__set_bit(reg, (unsigned long *)&vcpu->arch.regs_avail);
-	__set_bit(reg, (unsigned long *)&vcpu->arch.regs_dirty);
+	__set_bit(reg, vcpu->arch.regs_avail);
+	__set_bit(reg, vcpu->arch.regs_dirty);
 }
 
 /*
@@ -102,12 +102,15 @@ static __always_inline bool kvm_register_test_and_mark_available(struct kvm_vcpu
 								 enum kvm_reg reg)
 {
 	kvm_assert_register_caching_allowed(vcpu);
-	return arch___test_and_set_bit(reg, (unsigned long *)&vcpu->arch.regs_avail);
+	return arch___test_and_set_bit(reg, vcpu->arch.regs_avail);
 }
 
 static __always_inline void kvm_reset_available_registers(struct kvm_vcpu *vcpu,
 							  unsigned long available_mask)
 {
+	BUILD_BUG_ON(sizeof(available_mask) != sizeof(vcpu->arch.regs_avail[0]));
+	BUILD_BUG_ON(ARRAY_SIZE(vcpu->arch.regs_avail) != 1);
+
 	/*
 	 * Note the bitwise-AND!  In practice, a straight write would also work
 	 * as KVM initializes the mask to all ones and never clears registers
@@ -115,13 +118,15 @@ static __always_inline void kvm_reset_available_registers(struct kvm_vcpu *vcpu,
 	 * sanity checking as incorrectly marking an eagerly sync'd register
 	 * unavailable will generate a WARN due to an unexpected cache request.
 	 */
-	vcpu->arch.regs_avail &= available_mask;
+	vcpu->arch.regs_avail[0] &= available_mask;
 }
 
 static __always_inline void kvm_reset_dirty_registers(struct kvm_vcpu *vcpu,
 						      unsigned long dirty_mask)
 {
-	vcpu->arch.regs_dirty = dirty_mask;
+	BUILD_BUG_ON(sizeof(dirty_mask) != sizeof(vcpu->arch.regs_dirty[0]));
+	BUILD_BUG_ON(ARRAY_SIZE(vcpu->arch.regs_dirty) != 1);
+	vcpu->arch.regs_dirty[0] = dirty_mask;
 }
 
 /*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dd39ccbff0d6..c1e1b3030786 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12809,8 +12809,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	int r;
 
 	vcpu->arch.last_vmentry_cpu = -1;
-	vcpu->arch.regs_avail = ~0;
-	vcpu->arch.regs_dirty = ~0;
+	bitmap_fill(vcpu->arch.regs_avail, NR_VCPU_TOTAL_REGS);
+	bitmap_fill(vcpu->arch.regs_dirty, NR_VCPU_TOTAL_REGS);
 
 	kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm);
 
-- 
2.53.0.473.g4a7958ca14-goog

From nobody Wed Apr 8 03:07:41 2026
Reply-To: Sean Christopherson
Date: Tue, 10 Mar 2026 17:33:46 -0700
In-Reply-To: <20260311003346.2626238-1-seanjc@google.com>
Mime-Version: 1.0
References: <20260311003346.2626238-1-seanjc@google.com>
X-Mailer: git-send-email 2.53.0.473.g4a7958ca14-goog
Message-ID: <20260311003346.2626238-8-seanjc@google.com>
Subject: [PATCH 7/7] *** DO NOT MERGE *** KVM: x86: Pretend that APX is supported on 64-bit kernels
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Kiryl Shutsemau
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev,
	linux-kernel@vger.kernel.org, "Chang S. Bae"
Content-Type: text/plain; charset="utf-8"

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index cff9023f12c7..3d9c8cc9d515 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -190,6 +190,27 @@ enum kvm_reg {
 	VCPU_REGS_R13 = __VCPU_REGS_R13,
 	VCPU_REGS_R14 = __VCPU_REGS_R14,
 	VCPU_REGS_R15 = __VCPU_REGS_R15,
+#define CONFIG_X86_APX
+
+#endif
+
+#ifdef CONFIG_X86_APX
+	VCPU_REG_R16 = VCPU_REGS_R15 + 1,
+	VCPU_REG_R17,
+	VCPU_REG_R18,
+	VCPU_REG_R19,
+	VCPU_REG_R20,
+	VCPU_REG_R21,
+	VCPU_REG_R22,
+	VCPU_REG_R23,
+	VCPU_REG_R24,
+	VCPU_REG_R25,
+	VCPU_REG_R26,
+	VCPU_REG_R27,
+	VCPU_REG_R28,
+	VCPU_REG_R29,
+	VCPU_REG_R30,
+	VCPU_REG_R31,
 #endif
 	NR_VCPU_GENERAL_PURPOSE_REGS,
 
-- 
2.53.0.473.g4a7958ca14-goog