From: Quentin Perret
Date: Wed, 18 Dec 2024 19:40:50 +0000
Subject: [PATCH v4 09/18] KVM: arm64: Introduce __pkvm_vcpu_{load,put}()
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org
Message-ID: <20241218194059.3670226-10-qperret@google.com>
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
References: <20241218194059.3670226-1-qperret@google.com>

From: Marc Zyngier

Rather than look up the hyp vCPU on every run hypercall at EL2,
introduce a per-CPU 'loaded_hyp_vcpu' tracking variable which is
updated by a pair of load/put hypercalls called directly from
kvm_arch_vcpu_{load,put}() when pKVM is enabled.

Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Marc Zyngier
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h       |  2 ++
 arch/arm64/kvm/arm.c                   | 14 ++++++++
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  7 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 47 ++++++++++++++++++++------
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 29 ++++++++++++++++
 arch/arm64/kvm/vgic/vgic-v3.c          |  6 ++--
 6 files changed, 93 insertions(+), 12 deletions(-)
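Note (illustration only): the load/put scheme above amounts to a per-CPU
"currently loaded" slot plus a per-vCPU owner backpointer, with two
guards: a CPU must put its current vCPU before loading another, and a
vCPU already loaded elsewhere must refuse to load again. The user-space
sketch below models just that pattern; every name in it is made up for
the example, _Thread_local stands in for DEFINE_PER_CPU(), and the real
code additionally serialises the owner checks with vm_table_lock:

	/* Toy analogue of the per-CPU loaded-vCPU tracking; not kernel code. */
	#include <stdio.h>

	struct toy_vcpu {
		int idx;
		struct toy_vcpu **owner;	/* set while loaded on some "CPU" */
	};

	/* One slot per "CPU"; thread-local here, DEFINE_PER_CPU() for real. */
	static _Thread_local struct toy_vcpu *loaded_vcpu;

	static struct toy_vcpu *toy_vcpu_load(struct toy_vcpu *v)
	{
		if (loaded_vcpu)	/* must put the old vCPU first */
			return NULL;
		if (v->owner)		/* already loaded on another "CPU" */
			return NULL;
		v->owner = &loaded_vcpu;
		loaded_vcpu = v;
		return v;
	}

	static void toy_vcpu_put(void)
	{
		if (loaded_vcpu) {
			loaded_vcpu->owner = NULL;
			loaded_vcpu = NULL;
		}
	}

	int main(void)
	{
		struct toy_vcpu v = { .idx = 0 };

		toy_vcpu_load(&v);	/* kvm_arch_vcpu_load() -> __pkvm_vcpu_load */
		printf("loaded vcpu %d\n", loaded_vcpu->idx);
		/* ...any number of run hypercalls reuse the slot, no lookup... */
		toy_vcpu_put();		/* kvm_arch_vcpu_put() -> __pkvm_vcpu_put */
		return 0;
	}

This is what lets handle___kvm_vcpu_run() below drop the per-run
pkvm_load_hyp_vcpu() lookup: the vCPU is resolved once at load time and
fetched from the per-CPU slot on every run.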
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ca2590344313..89c0fac69551 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,6 +79,8 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..55cc62b2f469 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -619,12 +619,26 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_arch_vcpu_load_debug_state_flags(vcpu);
 
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp_nvhe(__pkvm_vcpu_load,
+				  vcpu->kvm->arch.pkvm.handle,
+				  vcpu->vcpu_idx, vcpu->arch.hcr_el2);
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+	}
+
 	if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus))
 		vcpu_set_on_unsupported_cpu(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+		kvm_call_hyp_nvhe(__pkvm_vcpu_put);
+	}
+
 	kvm_arch_vcpu_put_debug_state_flags(vcpu);
 	kvm_arch_vcpu_put_fp(vcpu);
 	if (has_vhe())
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index f361d8b91930..be52c5b15e21 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -20,6 +20,12 @@ struct pkvm_hyp_vcpu {
 
 	/* Backpointer to the host's (untrusted) vCPU instance. */
 	struct kvm_vcpu *host_vcpu;
+
+	/*
+	 * If this hyp vCPU is loaded, then this is a backpointer to the
+	 * per-cpu pointer tracking us. Otherwise, NULL if not loaded.
+	 */
+	struct pkvm_hyp_vcpu **loaded_hyp_vcpu;
 };
 
 /*
@@ -69,6 +75,7 @@ int __pkvm_teardown_vm(pkvm_handle_t handle);
 struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void);
 
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
 void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 6aa0b13d86e5..95d78db315b3 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -141,16 +141,46 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 		host_cpu_if->vgic_lr[i] = hyp_cpu_if->vgic_lr[i];
 }
 
+static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(unsigned int, vcpu_idx, host_ctxt, 2);
+	DECLARE_REG(u64, hcr_el2, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_load_hyp_vcpu(handle, vcpu_idx);
+	if (!hyp_vcpu)
+		return;
+
+	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu)) {
+		/* Propagate WFx trapping flags */
+		hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI);
+		hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI);
+	}
+}
+
+static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt)
+{
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (hyp_vcpu)
+		pkvm_put_hyp_vcpu(hyp_vcpu);
+}
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
 	int ret;
 
-	host_vcpu = kern_hyp_va(host_vcpu);
-
 	if (unlikely(is_protected_kvm_enabled())) {
-		struct pkvm_hyp_vcpu *hyp_vcpu;
-		struct kvm *host_kvm;
+		struct pkvm_hyp_vcpu *hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
 
 		/*
 		 * KVM (and pKVM) doesn't support SME guests for now, and
@@ -163,9 +193,6 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 			goto out;
 		}
 
-		host_kvm = kern_hyp_va(host_vcpu->kvm);
-		hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle,
-					      host_vcpu->vcpu_idx);
 		if (!hyp_vcpu) {
 			ret = -EINVAL;
 			goto out;
@@ -176,12 +203,10 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 		ret = __kvm_vcpu_run(&hyp_vcpu->vcpu);
 
 		sync_hyp_vcpu(hyp_vcpu);
-		pkvm_put_hyp_vcpu(hyp_vcpu);
 	} else {
 		/* The host is fully trusted, run its vCPU directly. */
-		ret = __kvm_vcpu_run(host_vcpu);
+		ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu));
 	}
-
 out:
 	cpu_reg(host_ctxt, 1) = ret;
 }
@@ -409,6 +434,8 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_init_vm),
 	HANDLE_FUNC(__pkvm_init_vcpu),
 	HANDLE_FUNC(__pkvm_teardown_vm),
+	HANDLE_FUNC(__pkvm_vcpu_load),
+	HANDLE_FUNC(__pkvm_vcpu_put),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index d46a02e24e4a..496d186efb03 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -23,6 +23,12 @@ unsigned int kvm_arm_vmid_bits;
 
 unsigned int kvm_host_sve_max_vl;
 
+/*
+ * The currently loaded hyp vCPU for each physical CPU. Used only when
+ * protected KVM is enabled, but for both protected and non-protected VMs.
+ */
+static DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu);
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
@@ -306,15 +312,30 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 	struct pkvm_hyp_vcpu *hyp_vcpu = NULL;
 	struct pkvm_hyp_vm *hyp_vm;
 
+	/* Cannot load a new vcpu without putting the old one first. */
+	if (__this_cpu_read(loaded_hyp_vcpu))
+		return NULL;
+
 	hyp_spin_lock(&vm_table_lock);
 	hyp_vm = get_vm_by_handle(handle);
 	if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx)
 		goto unlock;
 
 	hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
+
+	/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
+	if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
+		hyp_vcpu = NULL;
+		goto unlock;
+	}
+
+	hyp_vcpu->loaded_hyp_vcpu = this_cpu_ptr(&loaded_hyp_vcpu);
 	hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
 unlock:
 	hyp_spin_unlock(&vm_table_lock);
+
+	if (hyp_vcpu)
+		__this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
 	return hyp_vcpu;
 }
 
@@ -323,10 +344,18 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
 
 	hyp_spin_lock(&vm_table_lock);
+	hyp_vcpu->loaded_hyp_vcpu = NULL;
+	__this_cpu_write(loaded_hyp_vcpu, NULL);
 	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void)
+{
+	return __this_cpu_read(loaded_hyp_vcpu);
+
+}
+
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
 {
 	struct pkvm_hyp_vm *hyp_vm;
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index f267bc2486a1..c2ef41fff079 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -734,7 +734,8 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
 
 	if (has_vhe())
 		__vgic_v3_activate_traps(cpu_if);
@@ -746,7 +747,8 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
 	WARN_ON(vgic_v4_put(vcpu));
 
 	if (has_vhe())
-- 
2.47.1.613.gc27f4b7a9f-goog
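Note (illustration only): the two HANDLE_FUNC() entries added above plug
into hyp-main.c's existing hypercall dispatcher, which indexes a
function-pointer table with the function ID and lets each handler pull
its arguments out of the saved host register file, as the DECLARE_REG()
uses in handle___pkvm_vcpu_load() do. The toy program below sketches
only that dispatch pattern, with simplified, made-up types (struct
toy_context, a stand-in DECLARE_REG()); it is not the kernel's actual
context layout, and the real dispatcher also range-checks the ID:

	/* Toy model of an ID-indexed hypercall dispatch table; not kernel code. */
	#include <stdio.h>

	struct toy_context {
		unsigned long regs[8];	/* saved general-purpose registers */
	};

	/* Pull argument n out of the saved register file, as hyp-main.c does. */
	#define DECLARE_REG(type, name, ctxt, n) \
		type name = (type)(ctxt)->regs[n]

	static void handle_vcpu_load(struct toy_context *ctxt)
	{
		DECLARE_REG(unsigned int, handle, ctxt, 1);
		DECLARE_REG(unsigned int, vcpu_idx, ctxt, 2);

		printf("load: vm handle %u, vcpu %u\n", handle, vcpu_idx);
	}

	static void handle_vcpu_put(struct toy_context *ctxt)
	{
		(void)ctxt;	/* __pkvm_vcpu_put takes no arguments */
		printf("put\n");
	}

	typedef void (*toy_hcall_t)(struct toy_context *);

	static const toy_hcall_t toy_hcall[] = {
		handle_vcpu_load,	/* ID 0 */
		handle_vcpu_put,	/* ID 1 */
	};

	int main(void)
	{
		/* reg 0 carries the function ID, regs 1.. the arguments */
		struct toy_context ctxt = { .regs = { 0, 3, 1 } };

		DECLARE_REG(unsigned long, id, &ctxt, 0);
		toy_hcall[id](&ctxt);	/* cf. handle_host_hcall() */
		return 0;
	}

This is why __pkvm_vcpu_load's handler reads its three arguments from
registers 1-3: the first register is reserved for the function ID used
to pick the handler.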