From nobody Wed Apr 8 04:44:49 2026
Reply-To: Sean Christopherson
Date: Tue, 10 Mar 2026 17:33:40 -0700
In-Reply-To: <20260311003346.2626238-1-seanjc@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260311003346.2626238-1-seanjc@google.com>
X-Mailer: git-send-email 2.53.0.473.g4a7958ca14-goog
Message-ID: <20260311003346.2626238-2-seanjc@google.com>
Subject: [PATCH 1/7] KVM: x86: Add dedicated storage for guest RIP
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Kiryl Shutsemau
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev,
    linux-kernel@vger.kernel.org, "Chang S. Bae"
Content-Type: text/plain; charset="utf-8"

Add kvm_vcpu_arch.rip to track guest RIP instead of including it in the
generic regs[] array.  Decoupling RIP from regs[] will allow using a
*completely* arbitrary index for RIP, as opposed to the mostly-arbitrary
index that is currently used.  That in turn will allow using indices 16-31
to track R16-R31, which are coming with APX.

Note, although RIP can be used for addressing, it does NOT have an
architecturally defined index, and so can't be reached via flows like
get_vmx_mem_address() where KVM "blindly" reads a general purpose register
given the SIB information reported by hardware.  For RIP-relative
addressing, hardware reports the full "offset" in vmcs.EXIT_QUALIFICATION.

Note #2, keep the available/dirty tracking as RIP is context switched
through the VMCS, i.e. needs to be cached for VMX.

Opportunistically rename NR_VCPU_REGS to NR_VCPU_GENERAL_PURPOSE_REGS to
better capture what it tracks, and so that KVM can slot in R16-R31 without
running into weirdness where KVM's definition of "EXREG" doesn't line up
with APX's definition of "extended reg".

No functional change intended.

Cc: Chang S. Bae
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 10 ++++++----
 arch/x86/kvm/kvm_cache_regs.h   | 12 ++++++++----
 arch/x86/kvm/svm/sev.c          |  2 +-
 arch/x86/kvm/svm/svm.c          |  6 +++---
 arch/x86/kvm/vmx/vmx.c          |  8 ++++----
 arch/x86/kvm/vmx/vmx.h          |  2 +-
 6 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c94556fefb75..0461ba97a3be 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -191,10 +191,11 @@ enum kvm_reg {
 	VCPU_REGS_R14 = __VCPU_REGS_R14,
 	VCPU_REGS_R15 = __VCPU_REGS_R15,
 #endif
-	VCPU_REGS_RIP,
-	NR_VCPU_REGS,
+	NR_VCPU_GENERAL_PURPOSE_REGS,
 
-	VCPU_EXREG_PDPTR = NR_VCPU_REGS,
+	VCPU_REG_RIP = NR_VCPU_GENERAL_PURPOSE_REGS,
+
+	VCPU_EXREG_PDPTR,
 	VCPU_EXREG_CR0,
 	/*
 	 * Alias AMD's ERAPS (not a real register) to CR3 so that common code
@@ -799,7 +800,8 @@ struct kvm_vcpu_arch {
 	 * rip and regs accesses must go through
 	 * kvm_{register,rip}_{read,write} functions.
 	 */
-	unsigned long regs[NR_VCPU_REGS];
+	unsigned long regs[NR_VCPU_GENERAL_PURPOSE_REGS];
+	unsigned long rip;
 	u32 regs_avail;
 	u32 regs_dirty;
 
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 8ddb01191d6f..9b7df9de0e87 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -112,7 +112,7 @@ static __always_inline bool kvm_register_test_and_mark_available(struct kvm_vcpu
  */
 static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg)
 {
-	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_REGS))
+	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_GENERAL_PURPOSE_REGS))
 		return 0;
 
 	if (!kvm_register_is_available(vcpu, reg))
@@ -124,7 +124,7 @@ static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg
 static inline void kvm_register_write_raw(struct kvm_vcpu *vcpu, int reg,
 					  unsigned long val)
 {
-	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_REGS))
+	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_GENERAL_PURPOSE_REGS))
 		return;
 
 	vcpu->arch.regs[reg] = val;
@@ -133,12 +133,16 @@ static inline void kvm_register_write_raw(struct kvm_vcpu *vcpu, int reg,
 
 static inline unsigned long kvm_rip_read(struct kvm_vcpu *vcpu)
 {
-	return kvm_register_read_raw(vcpu, VCPU_REGS_RIP);
+	if (!kvm_register_is_available(vcpu, VCPU_REG_RIP))
+		kvm_x86_call(cache_reg)(vcpu, VCPU_REG_RIP);
+
+	return vcpu->arch.rip;
 }
 
 static inline void kvm_rip_write(struct kvm_vcpu *vcpu, unsigned long val)
 {
-	kvm_register_write_raw(vcpu, VCPU_REGS_RIP, val);
+	vcpu->arch.rip = val;
+	kvm_register_mark_dirty(vcpu, VCPU_REG_RIP);
 }
 
 static inline unsigned long kvm_rsp_read(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index b1aa85a6ca5a..0dec619490c3 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -913,7 +913,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 	save->r14 = svm->vcpu.arch.regs[VCPU_REGS_R14];
 	save->r15 = svm->vcpu.arch.regs[VCPU_REGS_R15];
 #endif
-	save->rip = svm->vcpu.arch.regs[VCPU_REGS_RIP];
+	save->rip = svm->vcpu.arch.rip;
 
 	/* Sync some non-GPR registers before encrypting */
 	save->xcr0 = svm->vcpu.arch.xcr0;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3407deac90bd..4b9d79412da7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4436,7 +4436,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 
 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
-	svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+	svm->vmcb->save.rip = vcpu->arch.rip;
 
 	/*
 	 * Disable singlestep if we're injecting an interrupt/exception.
@@ -4522,7 +4522,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 		vcpu->arch.cr2 = svm->vmcb->save.cr2;
 		vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
 		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
-		vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
+		vcpu->arch.rip = svm->vmcb->save.rip;
 	}
 	vcpu->arch.regs_dirty = 0;
 
@@ -4954,7 +4954,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
 
 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
-	svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+	svm->vmcb->save.rip = vcpu->arch.rip;
 
 	nested_svm_simple_vmexit(svm, SVM_EXIT_SW);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9302c16571cd..802cc5d8bf43 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2604,8 +2604,8 @@ void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 	case VCPU_REGS_RSP:
 		vcpu->arch.regs[VCPU_REGS_RSP] = vmcs_readl(GUEST_RSP);
 		break;
-	case VCPU_REGS_RIP:
-		vcpu->arch.regs[VCPU_REGS_RIP] = vmcs_readl(GUEST_RIP);
+	case VCPU_REG_RIP:
+		vcpu->arch.rip = vmcs_readl(GUEST_RIP);
 		break;
 	case VCPU_EXREG_PDPTR:
 		if (enable_ept)
@@ -7536,8 +7536,8 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 
 	if (kvm_register_is_dirty(vcpu, VCPU_REGS_RSP))
 		vmcs_writel(GUEST_RSP, vcpu->arch.regs[VCPU_REGS_RSP]);
-	if (kvm_register_is_dirty(vcpu, VCPU_REGS_RIP))
-		vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]);
+	if (kvm_register_is_dirty(vcpu, VCPU_REG_RIP))
+		vmcs_writel(GUEST_RIP, vcpu->arch.rip);
 	vcpu->arch.regs_dirty = 0;
 
 	if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 70bfe81dea54..31bee8b0e4a1 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -623,7 +623,7 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64)
  * cache on demand.  Other registers not listed here are synced to
  * the cache immediately after VM-Exit.
  */
-#define VMX_REGS_LAZY_LOAD_SET	((1 << VCPU_REGS_RIP) | \
+#define VMX_REGS_LAZY_LOAD_SET	((1 << VCPU_REG_RIP) | \
				 (1 << VCPU_REGS_RSP) | \
				 (1 << VCPU_EXREG_RFLAGS) | \
				 (1 << VCPU_EXREG_PDPTR) | \
-- 
2.53.0.473.g4a7958ca14-goog