From: Mark Brown
Date: Fri, 21 Mar 2025 00:10:15 +0000
Subject: [PATCH 6.13 v2 6/8] KVM: arm64: Refactor exit handlers
Message-Id: <20250321-stable-sve-6-13-v2-6-3150e3370c40@kernel.org>
References: <20250321-stable-sve-6-13-v2-0-3150e3370c40@kernel.org>
In-Reply-To: <20250321-stable-sve-6-13-v2-0-3150e3370c40@kernel.org>
To: Greg Kroah-Hartman, Marc Zyngier, Oliver Upton, Joey Gouly,
 Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org, Mark Brown,
 Mark Rutland, Fuad Tabba

From: Mark Rutland

[ Upstream commit 9b66195063c5a145843547b1d692bd189be85287 ]

The hyp exit handling logic is largely shared between VHE and nVHE/hVHE,
with common logic in arch/arm64/kvm/hyp/include/hyp/switch.h. The code in
the header depends on function definitions provided by
arch/arm64/kvm/hyp/vhe/switch.c and arch/arm64/kvm/hyp/nvhe/switch.c when
they include the header.

This is an unusual header dependency, and prevents the use of
arch/arm64/kvm/hyp/include/hyp/switch.h in other files as this would
result in compiler warnings regarding missing definitions, e.g.

| In file included from arch/arm64/kvm/hyp/nvhe/hyp-main.c:8:
| ./arch/arm64/kvm/hyp/include/hyp/switch.h:733:31: warning: 'kvm_get_exit_handler_array' used but never defined
|   733 | static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu);
|       |                               ^~~~~~~~~~~~~~~~~~~~~~~~~~
| ./arch/arm64/kvm/hyp/include/hyp/switch.h:735:13: warning: 'early_exit_filter' used but never defined
|   735 | static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code);
|       |             ^~~~~~~~~~~~~~~~~

Refactor the logic such that the header doesn't depend on anything from
the C files.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Mark Brown
Tested-by: Mark Brown
Acked-by: Will Deacon
Cc: Catalin Marinas
Cc: Fuad Tabba
Cc: Marc Zyngier
Cc: Oliver Upton
Reviewed-by: Oliver Upton
Link: https://lore.kernel.org/r/20250210195226.1215254-7-mark.rutland@arm.com
Signed-off-by: Marc Zyngier
Signed-off-by: Mark Brown
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 30 ++++++------------------------
 arch/arm64/kvm/hyp/nvhe/switch.c        | 28 ++++++++++++++++------------
 arch/arm64/kvm/hyp/vhe/switch.c         |  9 ++++-----
 3 files changed, 26 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 7601d741bc2ae77ca9f359e4901926a5feac48b9..e330a7825b56f14ccb144810bc0d31f7f400fb22 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -666,23 +666,16 @@ static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
 
 typedef bool (*exit_handler_fn)(struct kvm_vcpu *, u64 *);
 
-static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu);
-
-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code);
-
 /*
  * Allow the hypervisor to handle the exit with an exit handler if it has one.
  *
  * Returns true if the hypervisor handled the exit, and control should go back
  * to the guest, or false if it hasn't.
  */
-static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
+				       const exit_handler_fn *handlers)
 {
-	const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
-	exit_handler_fn fn;
-
-	fn = handlers[kvm_vcpu_trap_get_class(vcpu)];
-
+	exit_handler_fn fn = handlers[kvm_vcpu_trap_get_class(vcpu)];
 	if (fn)
 		return fn(vcpu, exit_code);
 
@@ -712,20 +705,9 @@ static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code
  * the guest, false when we should restore the host state and return to the
  * main run loop.
  */
-static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool __fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
+				      const exit_handler_fn *handlers)
 {
-	/*
-	 * Save PSTATE early so that we can evaluate the vcpu mode
-	 * early on.
-	 */
-	synchronize_vcpu_pstate(vcpu, exit_code);
-
-	/*
-	 * Check whether we want to repaint the state one way or
-	 * another.
-	 */
-	early_exit_filter(vcpu, exit_code);
-
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
 
@@ -755,7 +737,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 		goto exit;
 
 	/* Check if there's an exit handler and allow it to handle the exit. */
-	if (kvm_hyp_handle_exit(vcpu, exit_code))
+	if (kvm_hyp_handle_exit(vcpu, exit_code, handlers))
 		goto guest;
 exit:
 	/* Return to the host kernel and handle the exit */
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 3ce16f90fe6af7be21bc7b84a9d8b3905b8b08a7..ee74006c47bc44ca1d9bdf1ce7d4d8a41cf8e494 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -224,19 +224,21 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
 	return hyp_exit_handlers;
 }
 
-/*
- * Some guests (e.g., protected VMs) are not be allowed to run in AArch32.
- * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
- * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
- * hypervisor spots a guest in such a state ensure it is handled, and don't
- * trust the host to spot or fix it. The check below is based on the one in
- * kvm_arch_vcpu_ioctl_run().
- *
- * Returns false if the guest ran in AArch32 when it shouldn't have, and
- * thus should exit to the host, or true if a the guest run loop can continue.
- */
-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
+
+	synchronize_vcpu_pstate(vcpu, exit_code);
+
+	/*
+	 * Some guests (e.g., protected VMs) are not be allowed to run in
+	 * AArch32. The ARMv8 architecture does not give the hypervisor a
+	 * mechanism to prevent a guest from dropping to AArch32 EL0 if
+	 * implemented by the CPU. If the hypervisor spots a guest in such a
+	 * state ensure it is handled, and don't trust the host to spot or fix
+	 * it. The check below is based on the one in
+	 * kvm_arch_vcpu_ioctl_run().
+	 */
 	if (unlikely(vcpu_is_protected(vcpu) && vcpu_mode_is_32bit(vcpu))) {
 		/*
 		 * As we have caught the guest red-handed, decide that it isn't
@@ -249,6 +251,8 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
 		*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
 		*exit_code |= ARM_EXCEPTION_IL;
 	}
+
+	return __fixup_guest_exit(vcpu, exit_code, handlers);
 }
 
 /* Switch to the guest for legacy non-VHE systems */
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index e7ca0424107adec2371ae4553ebab9857c60b6d9..46c1f5caf007331cdbbc806a184e9b4721042fc0 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -423,13 +423,10 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
 };
 
-static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
+static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-	return hyp_exit_handlers;
-}
+	synchronize_vcpu_pstate(vcpu, exit_code);
 
-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
-{
 	/*
 	 * If we were in HYP context on entry, adjust the PSTATE view
 	 * so that the usual helpers work correctly.
@@ -449,6 +446,8 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
 		*vcpu_cpsr(vcpu) &= ~(PSR_MODE_MASK | PSR_MODE32_BIT);
 		*vcpu_cpsr(vcpu) |= mode;
 	}
+
+	return __fixup_guest_exit(vcpu, exit_code, hyp_exit_handlers);
 }
 
 /* Switch to the guest for VHE systems running in EL2 */
-- 
2.39.5
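
[Editor's illustration, not part of the patch: the commit is a straightforward
dependency inversion, and the standalone sketch below shows the same shape with
made-up names and a trivial handler table. It is plain C that compiles on its
own and is not the arch/arm64/kvm code: instead of a header declaring static
functions that every including C file must define, the shared helper takes the
handler table as an explicit argument, the way __fixup_guest_exit() now does.]

/* Standalone sketch of the refactor's shape -- hypothetical names, not KVM code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef bool (*exit_handler_fn)(int exit_class);

/*
 * Before: a shared header declared something like
 *     static const exit_handler_fn *get_exit_handler_array(void);
 * and relied on every including .c file to define it, so any other
 * includer hit "used but never defined" warnings.
 *
 * After: the shared helper takes the handler table as a parameter,
 * so the header stands alone.
 */
static bool handle_exit(int exit_class, const exit_handler_fn *handlers,
			size_t nr_handlers)
{
	if (exit_class < 0 || (size_t)exit_class >= nr_handlers)
		return false;
	if (handlers[exit_class])
		return handlers[exit_class](exit_class);
	return false;
}

/* Each "mode" (here just one) owns its table and passes it explicitly. */
static bool demo_handle_irq(int exit_class)
{
	printf("handled exit class %d\n", exit_class);
	return true;
}

static const exit_handler_fn demo_handlers[] = {
	demo_handle_irq,	/* class 0: handled locally */
	NULL,			/* class 1: fall back to the caller */
};

int main(void)
{
	size_t nr = sizeof(demo_handlers) / sizeof(demo_handlers[0]);

	return handle_exit(0, demo_handlers, nr) ? 0 : 1;
}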