From: Mark Brown <broonie@kernel.org>
Date: Tue, 23 Dec 2025 01:20:56 +0000
Subject: [PATCH v9 02/30] arm64/fpsimd: Update FA64 and ZT0 enables when loading SME state
Message-Id: <20251223-kvm-arm64-sme-v9-2-8be3867cb883@kernel.org>
References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org>
In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
    Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
    Peter Maydell, Eric Auger, Mark Brown

Currently we enable EL0 and EL1 access to FA64 and ZT0 at boot and leave
them enabled throughout the runtime of the system. When we add KVM support
we will need to make this configuration dynamic, as these features may be
disabled for some KVM guests.

Since the host kernel saves the floating point state for non-protected
guests, and we wish to avoid KVM having to reload the floating point state
needlessly on guest reentry, let's move the configuration of these enables
to the floating point state reload.

We provide a helper which does the configuration as part of a single
read/modify/write operation, together with the configuration of the task
VL, then update the floating point state load and the SME access trap to
use it. We also remove the setting of the enable bits from the CPU feature
identification and resume paths.

There will be a small overhead from setting the enables one at a time, but
this should be negligible in the context of the state load or access trap.

In order to avoid compiler warnings due to unused variables in the
!CONFIG_ARM64_SME case, we avoid storing the vector length in temporary
variables.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/fpsimd.h | 14 ++++++++++++
 arch/arm64/kernel/cpufeature.c  |  2 --
 arch/arm64/kernel/fpsimd.c      | 47 +++++++++++------------------------
 3 files changed, 26 insertions(+), 37 deletions(-)
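
For reference, sme_cond_update_smcr() boils down to a conditional
read/modify/write of SMCR: build the desired value, then write the
register only when it differs from the current contents, so that
reprogramming the enables on every state reload stays cheap when a task's
configuration is stable. A minimal standalone sketch of the idea follows
(illustrative only, not the kernel code: the register is modelled by a
shadow variable, and the bit positions are assumptions matching the
architected SMCR_ELx layout):

#include <stdbool.h>
#include <stdint.h>

#define SMCR_FA64	(1ULL << 31)	/* FEAT_SME_FA64: full A64 ISA in streaming mode */
#define SMCR_EZT0	(1ULL << 30)	/* FEAT_SME2: enable the ZT0 register */

static uint64_t smcr_shadow;		/* stands in for SMCR_EL1 */

static uint64_t read_smcr(void)      { return smcr_shadow; }
static void write_smcr(uint64_t val) { smcr_shadow = val; }

static void cond_update_smcr(uint64_t vq_minus_1, bool fa64, bool zt0)
{
	uint64_t old = read_smcr();
	uint64_t new = vq_minus_1;	/* the LEN field occupies the low bits */

	if (fa64)
		new |= SMCR_FA64;
	if (zt0)
		new |= SMCR_EZT0;

	/* Only touch the register when the configuration actually changed. */
	if (old != new)
		write_smcr(new);
}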

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 1d2e33559bd5..ece65061dea0 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -428,6 +428,18 @@ static inline size_t sme_state_size(struct task_struct const *task)
 	return __sme_state_size(task_get_sme_vl(task));
 }
 
+#define sme_cond_update_smcr(vl, fa64, zt0, reg)		\
+	do {							\
+		u64 __old = read_sysreg_s((reg));		\
+		u64 __new = vl;					\
+		if (fa64)					\
+			__new |= SMCR_ELx_FA64;			\
+		if (zt0)					\
+			__new |= SMCR_ELx_EZT0;			\
+		if (__old != __new)				\
+			write_sysreg_s(__new, (reg));		\
+	} while (0)
+
 #else
 
 static inline void sme_user_disable(void) { BUILD_BUG(); }
@@ -456,6 +468,8 @@ static inline size_t sme_state_size(struct task_struct const *task)
 	return 0;
 }
 
+#define sme_cond_update_smcr(val, fa64, zt0, reg) do { } while (0)
+
 #endif /* ! CONFIG_ARM64_SME */
 
 /* For use by EFI runtime services calls only */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c840a93b9ef9..ca9e66cc62d8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2965,7 +2965,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.capability = ARM64_SME_FA64,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_fa64,
 		ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP)
 	},
 	{
@@ -2973,7 +2972,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.capability = ARM64_SME2,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_sme2,
 		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2)
 	},
 #endif /* CONFIG_ARM64_SME */
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index c154f72634e0..be4499ff6498 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -405,11 +405,15 @@ static void task_fpsimd_load(void)
 
 	/* Restore SME, override SVE register configuration if needed */
 	if (system_supports_sme()) {
-		unsigned long sme_vl = task_get_sme_vl(current);
-
-		/* Ensure VL is set up for restoring data */
+		/*
+		 * Ensure VL is set up for restoring data. KVM might
+		 * disable subfeatures so we reset them each time.
+		 */
 		if (test_thread_flag(TIF_SME))
-			sme_set_vq(sve_vq_from_vl(sme_vl) - 1);
+			sme_cond_update_smcr(sve_vq_from_vl(task_get_sme_vl(current)) - 1,
+					     system_supports_fa64(),
+					     system_supports_sme2(),
+					     SYS_SMCR_EL1);
 
 		write_sysreg_s(current->thread.svcr, SYS_SVCR);
 
@@ -1250,26 +1254,6 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p)
 	isb();
 }
 
-void cpu_enable_sme2(const struct arm64_cpu_capabilities *__always_unused p)
-{
-	/* This must be enabled after SME */
-	BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME);
-
-	/* Allow use of ZT0 */
-	write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_EZT0_MASK,
-		       SYS_SMCR_EL1);
-}
-
-void cpu_enable_fa64(const struct arm64_cpu_capabilities *__always_unused p)
-{
-	/* This must be enabled after SME */
-	BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME);
-
-	/* Allow use of FA64 */
-	write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK,
-		       SYS_SMCR_EL1);
-}
-
 void __init sme_setup(void)
 {
 	struct vl_info *info = &vl_info[ARM64_VEC_SME];
@@ -1314,17 +1298,9 @@ void __init sme_setup(void)
 
 void sme_suspend_exit(void)
 {
-	u64 smcr = 0;
-
 	if (!system_supports_sme())
 		return;
 
-	if (system_supports_fa64())
-		smcr |= SMCR_ELx_FA64;
-	if (system_supports_sme2())
-		smcr |= SMCR_ELx_EZT0;
-
-	write_sysreg_s(smcr, SYS_SMCR_EL1);
 	write_sysreg_s(0, SYS_SMPRI_EL1);
 }
 
@@ -1439,9 +1415,10 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
 		WARN_ON(1);
 
 	if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
-		unsigned long vq_minus_one =
-			sve_vq_from_vl(task_get_sme_vl(current)) - 1;
-		sme_set_vq(vq_minus_one);
+		sme_cond_update_smcr(sve_vq_from_vl(task_get_sme_vl(current)) - 1,
+				     system_supports_fa64(),
+				     system_supports_sme2(),
+				     SYS_SMCR_EL1);
 
 		fpsimd_bind_task_to_cpu();
 	} else {

-- 
2.47.3
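
As a worked example of the vector length argument used at both call sites
(assuming sve_vq_from_vl() simply converts a vector length in bytes into a
count of 128-bit, 16-byte quadwords, with the helper taking VQ - 1):

#include <assert.h>

/* Assumed behaviour of the kernel's sve_vq_from_vl(): a vector length
 * in bytes maps to a number of 128-bit (16-byte) quadwords. */
static inline unsigned int vq_from_vl(unsigned int vl_bytes)
{
	return vl_bytes / 16;
}

int main(void)
{
	/*
	 * A 512-bit (64-byte) streaming vector length corresponds to
	 * VQ = 4, so the value passed to sme_cond_update_smcr() for the
	 * low LEN bits is VQ - 1 = 3.
	 */
	assert(vq_from_vl(64) - 1 == 3);
	return 0;
}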