From: Mark Brown <broonie@kernel.org>
Date: Wed, 25 Jun 2025 11:48:15 +0100
Subject: [PATCH v6 24/28] KVM: arm64: Handle SME exceptions
Message-Id: <20250625-kvm-arm64-sme-v6-24-114cff4ffe04@kernel.org>
References: <20250625-kvm-arm64-sme-v6-0-114cff4ffe04@kernel.org>
In-Reply-To: <20250625-kvm-arm64-sme-v6-0-114cff4ffe04@kernel.org>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Catalin Marinas,
 Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan
Cc: Dave Martin, Fuad Tabba, Mark Rutland, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown
The access control for SME follows the same structure as for the base
FP and SVE extensions: CPACR_ELx.SMEN and CPTR_EL2.TSM mirror the
equivalent FPSIMD and SVE controls in those registers. Add handling
for these controls and exceptions, mirroring the existing handling for
FPSIMD and SVE.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kvm/handle_exit.c            | 14 ++++++++++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 11 ++++++-----
 arch/arm64/kvm/hyp/nvhe/switch.c        |  4 +++-
 arch/arm64/kvm/hyp/vhe/switch.c         | 17 ++++++++++++-----
 4 files changed, 35 insertions(+), 11 deletions(-)
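As background for the CPACR hunks below: the CPACR_EL1 FPEN, ZEN and
SMEN fields all use the architected two-bit enable encoding (SMEN at
bits [25:24], FPEN at [21:20], ZEN at [17:16]), which is why the new
SMEN handling can mirror the existing ZEN logic. A minimal standalone
sketch of that encoding — plain C, not kernel code; the macro and
helper names here are illustrative only:

	#include <assert.h>
	#include <stdbool.h>
	#include <stdint.h>

	/* CPACR_EL1.SMEN occupies bits [25:24]. */
	#define CPACR_SMEN_SHIFT	24

	/* Does an SME instruction at the given EL trap under this CPACR? */
	static bool smen_traps(uint64_t cpacr, unsigned int el)
	{
		unsigned int smen = (cpacr >> CPACR_SMEN_SHIFT) & 0x3;

		switch (smen) {
		case 1:
			return el == 0;	/* 0b01: trap EL0 accesses only */
		case 3:
			return false;	/* 0b11: no trapping */
		default:
			return true;	/* 0b00/0b10: trap EL0 and EL1 */
		}
	}

	int main(void)
	{
		/*
		 * Bit 0 of the field clear => EL1 accesses trap, which is
		 * the condition the SYS_FIELD_GET(CPACR_EL1, SMEN, cptr) &
		 * BIT(0) test in the patch below checks.
		 */
		assert(smen_traps(0x0ULL << CPACR_SMEN_SHIFT, 1));
		assert(!smen_traps(0x3ULL << CPACR_SMEN_SHIFT, 1));
		return 0;
	}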
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 453266c96481..74e1db2e0458 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -232,6 +232,19 @@ static int handle_sve(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+/*
+ * Guest access to SME registers should be routed to this handler only
+ * when the system doesn't support SME.
+ */
+static int handle_sme(struct kvm_vcpu *vcpu)
+{
+	if (guest_hyp_sme_traps_enabled(vcpu))
+		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+	kvm_inject_undefined(vcpu);
+	return 1;
+}
+
 /*
  * Two possibilities to handle a trapping ptrauth instruction:
  *
@@ -391,6 +404,7 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_ELx_EC_SVC64]	= handle_svc,
 	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
 	[ESR_ELx_EC_SVE]	= handle_sve,
+	[ESR_ELx_EC_SME]	= handle_sme,
 	[ESR_ELx_EC_ERET]	= kvm_handle_eret,
 	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
 	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 02ff7654fa9d..9f30667e44d4 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -69,11 +69,8 @@ static inline void __activate_cptr_traps_nvhe(struct kvm_vcpu *vcpu)
 {
 	u64 val = CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA;
 
-	/*
-	 * Always trap SME since it's not supported in KVM.
-	 * TSM is RES1 if SME isn't implemented.
-	 */
-	val |= CPTR_EL2_TSM;
+	if (!vcpu_has_sme(vcpu) || !guest_owns_fp_regs())
+		val |= CPTR_EL2_TSM;
 
 	if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
 		val |= CPTR_EL2_TZ;
@@ -101,6 +98,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
 			val |= CPACR_EL1_FPEN;
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN;
+		if (vcpu_has_sme(vcpu))
+			val |= CPACR_EL1_SMEN;
 	}
 
 	if (!vcpu_has_nv(vcpu))
@@ -142,6 +141,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
 		val &= ~CPACR_EL1_FPEN;
 	if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0)))
 		val &= ~CPACR_EL1_ZEN;
+	if (!(SYS_FIELD_GET(CPACR_EL1, SMEN, cptr) & BIT(0)))
+		val &= ~CPACR_EL1_SMEN;
 
 	if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
 		val |= cptr & CPACR_EL1_E0POE;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 0e752b515d0f..3de22aff33a4 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -175,6 +175,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
 	[ESR_ELx_EC_SYS64]		= kvm_hyp_handle_sysreg,
 	[ESR_ELx_EC_SVE]		= kvm_hyp_handle_fpsimd,
+	[ESR_ELx_EC_SME]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
@@ -186,7 +187,8 @@ static const exit_handler_fn pvm_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_SYS64]		= kvm_handle_pvm_sys64,
 	[ESR_ELx_EC_SVE]		= kvm_handle_pvm_restricted,
-	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
+	[ESR_ELx_EC_SME]		= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_FP_ASIMD]		= kvm_handle_pvm_restricted,
 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
 	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 477f1580ffea..cb3062d53a5e 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -438,22 +438,28 @@ static bool kvm_hyp_handle_cpacr_el1(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return true;
 }
 
-static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
+static bool kvm_hyp_handle_vec_cr_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
 
 	if (!vcpu_has_nv(vcpu))
 		return false;
 
-	if (sysreg != SYS_ZCR_EL2)
+	switch (sysreg) {
+	case SYS_ZCR_EL2:
+	case SYS_SMCR_EL2:
+		break;
+	default:
 		return false;
+	}
 
 	if (guest_owns_fp_regs())
 		return false;
 
 	/*
-	 * ZCR_EL2 traps are handled in the slow path, with the expectation
-	 * that the guest's FP context has already been loaded onto the CPU.
+	 * ZCR_EL2 and SMCR_EL2 traps are handled in the slow path,
+	 * with the expectation that the guest's FP context has
+	 * already been loaded onto the CPU.
 	 *
 	 * Load the guest's FP context and unconditionally forward to the
 	 * slow path for handling (i.e. return false).
@@ -473,7 +479,7 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (kvm_hyp_handle_cpacr_el1(vcpu, exit_code))
 		return true;
 
-	if (kvm_hyp_handle_zcr_el2(vcpu, exit_code))
+	if (kvm_hyp_handle_vec_cr_el2(vcpu, exit_code))
 		return true;
 
 	return kvm_hyp_handle_sysreg(vcpu, exit_code);
@@ -502,6 +508,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
 	[ESR_ELx_EC_SYS64]		= kvm_hyp_handle_sysreg_vhe,
+	[ESR_ELx_EC_SME]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_SVE]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,

-- 
2.39.5
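A note on the handler tables touched above: exit dispatch in KVM/arm64
is a flat array indexed by ESR_ELx.EC (bits [31:26] of the syndrome
register), so supporting the SME exception class (0x1d) is a one-entry
addition per table. A toy standalone sketch of that pattern — plain C;
the simplified handler signature and spelled-out EC constants are
illustrative, not KVM's actual definitions:

	#include <stdint.h>
	#include <stdio.h>

	#define EC_SHIFT	26
	#define EC_FP_ASIMD	0x07
	#define EC_SVE		0x19
	#define EC_SME		0x1d	/* exception class added by this patch */
	#define EC_MAX		0x3f

	typedef int (*exit_handler_fn)(void);

	static int handle_fp(void)  { puts("FP/ASIMD trap"); return 1; }
	static int handle_sve(void) { puts("SVE trap");      return 1; }
	static int handle_sme(void) { puts("SME trap");      return 1; }
	static int handle_bad(void) { puts("unexpected EC"); return 0; }

	/* Same shape as the arm_exit_handlers[] table extended above. */
	static const exit_handler_fn handlers[EC_MAX + 1] = {
		[EC_FP_ASIMD]	= handle_fp,
		[EC_SVE]	= handle_sve,
		[EC_SME]	= handle_sme,
	};

	static int dispatch(uint64_t esr)
	{
		exit_handler_fn fn = handlers[(esr >> EC_SHIFT) & EC_MAX];

		return fn ? fn() : handle_bad();
	}

	int main(void)
	{
		/* An SME trap now finds a handler instead of falling through. */
		return dispatch((uint64_t)EC_SME << EC_SHIFT) == 1 ? 0 : 1;
	}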