From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B2096239E8B; Tue, 23 Dec 2025 01:21:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452899; cv=none; b=TScCCZC+RRweM6b/SabN/D2ypXun5DV3ARwQDhBnisYHZVDcUGzl1umHnXHMmD2FNJf/7aLjGkHJxBboApNwzaVSAWEgCGu9gv/sBIqZtBpcI8TolTcLrTw3rbBrqkNgCOzfEeicICQND3/I2MH/jG+EyBnrrUJQyqG1NxcL3uw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452899; c=relaxed/simple; bh=tQ8nCbdUJTwo8MNk2MKXqTeJ5K+/vBwe4kIZ5maLhR4=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=Fq4YtKpshZ2ZwWPhaB5dTDnv4CDa61eiowtiI5t+a6KBsKt0wCO77DCEDdFqSvY9ccfHggS5IyRGzZNLFlqyadjuQomqeBXy2RYGd8e3HLrQgJBGh67lj3isNWf8oFIXVqqRJcwcxieRu0EpxqjSOO6ku9CN/OGSeZMhIL7EDxQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=o+wMq0lM; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="o+wMq0lM" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 60E5CC4CEF1; Tue, 23 Dec 2025 01:21:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452899; bh=tQ8nCbdUJTwo8MNk2MKXqTeJ5K+/vBwe4kIZ5maLhR4=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=o+wMq0lM9/+TnKbl+gbnSg9x8/g3lu3ihq0SGAdQYaF+0bUoyrsOkzpJBpTTg2YB7 6tPMhxBwbyRvG5JB+g7fOiRNAIrtiYZUbAsainxIGTCa00nZ/XySqArv4JQ/wF2nkR tcfB/h8cRLNQ4teEl37H49eVcOD8wMJHG3EQRz9U9Y2EO43AtQ1HYdPAalb8mTx7ow aqihYKHy0ceFpTD0YvztYTipXXW3F0lUR7L5ovqEqfLVKmqXxoCDdh3KTEUicDUE0f DT9K2M3Wv68vVMR2t896QcnovfV+/nW5bGcSXGLLI9A/w3M0aReThDigP5BpMeCnPj NyC05wevFK1kQ== From: Mark Brown Date: Tue, 23 Dec 2025 01:20:55 +0000 Subject: [PATCH v9 01/30] arm64/sysreg: Update SMIDR_EL1 to DDI0601 2025-06 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-1-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=933; i=broonie@kernel.org; h=from:subject:message-id; bh=tQ8nCbdUJTwo8MNk2MKXqTeJ5K+/vBwe4kIZ5maLhR4=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6DNOnywKTsW/riZW5QxkSLSd9jk8d/Qf6pY wo50Sp4oeSJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnugwAKCRAk1otyXVSH 0GA8B/9/H1nErknV81fzXgbWjeIGEgxDVkGySJGQzFxbf2QyQQAxiZK4/ij0KRuOJ8umNWiahUE iCFHNIebt6kPq9vpkt9LXE81goKCCbaUz+ZbUuYFVIukNpu6XQb9MHZ2KfhNdS1yDYAfm7HaM1S 
vA3DdQxXbL6034/x5lXG3nNe1+iilzoh9EsL7uXziStnrXVKUeBERhloAbF+t4XozWyy8KwIXgr c2X1v6nWIIxNqqnOT7K8/K0NXLpFM1B9OkEvuoxlltV3uh1gBA2bQ/8Ci6LI1dCEPfy2VIW3oTv vtLtah/EkNAeQ1vYB6oGQ2t8YRkukVeuLxzRJHC0z4+N9cpu X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Update the definition of SMIDR_EL1 in the sysreg definition to reflect the information in DDI0601 2025-06. This includes somewhat more generic ways of describing the sharing of SMCUs, more information on supported priorities, and additional resolution for describing affinity groups. Signed-off-by: Mark Brown Reviewed-by: Alex Benn=C3=A9e Reviewed-by: Fuad Tabba --- arch/arm64/tools/sysreg | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg index 8921b51866d6..6bf143bfe57c 100644 --- a/arch/arm64/tools/sysreg +++ b/arch/arm64/tools/sysreg @@ -3660,11 +3660,15 @@ Field 3:0 BS EndSysreg =20 Sysreg SMIDR_EL1 3 1 0 0 6 -Res0 63:32 +Res0 63:60 +Field 59:56 NSMC +Field 55:52 HIP +Field 51:32 AFFINITY2 Field 31:24 IMPLEMENTER Field 23:16 REVISION Field 15 SMPS -Res0 14:12 +Field 14:13 SH +Res0 12 Field 11:0 AFFINITY EndSysreg =20 --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F2689247291; Tue, 23 Dec 2025 01:21:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452904; cv=none; b=PTxh3zfOJZzALxZ+j4hEtGqihCZFqAXS9NnIw2HgEAe7zxfuvGEgt7XhefGFwKxq998agUB3jiiH4sCnT7d/OILBfA37TobmJMNgl56BYSmYwes4ZMOTmaQFzXpo9pXlWVqXwOjqNanNDPCCE0JYW3tTg06UVDY+ry3WwN2Mv7A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452904; c=relaxed/simple; bh=B9o3mwWlphnHy9bcMkeL1ttkIEbnuvR4ZsA5NQCXSMo=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=UB18IzJ2Ia+xMkI6qcBfmLloBIyurjNVVuC54aHh2PIvjxn7GCxPHGNWDwkSfN5q15WyiPAdZfnax1gqsTthXhG+v7HqiE6o9Ov2pMWXJ1G0Xe1YsBphxAbORyu37MBlXT2ADc9cTHR2bqZIlni7/84ajGZ7hui6ci5xwKPButQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=XeSIKNy4; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="XeSIKNy4" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9E29AC19421; Tue, 23 Dec 2025 01:21:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452903; bh=B9o3mwWlphnHy9bcMkeL1ttkIEbnuvR4ZsA5NQCXSMo=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=XeSIKNy4dmJ4FITZc7Xhq/jLsrjsAMW5tvkoDMTXFQk6Ufx+3n2cHLlMWhRIpqrpJ wbCRXLqQx+v2pP5EFYRkHYk+lgQxv0F+rhqkLNP1wCHPFewIBelGbN7rEnzYPrZrH3 XFNq+dMFzd5RoIfutFWwj9IJSzwrnr23MpqXM0y/MINvZ9V6lSX1+zCOUCsQyAiJTQ Lnyc7YxL+7g0DZL44QznEB6YbNfeBTzPuXp/FwU74GPV+PaKQtB0WNCLv07kJRPo0w DH/h8e3edB96n02gcUEawdiZxPd7C/Zrqhgxj7nzh9bNpw7ccpXNtyAfCWvpyiuUV4 oZnUtGzJaBsww== From: Mark Brown Date: Tue, 23 Dec 2025 01:20:56 +0000 Subject: [PATCH v9 02/30] arm64/fpsimd: Update FA64 and ZT0 enables when loading SME state Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe:
List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-2-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=5757; i=broonie@kernel.org; h=from:subject:message-id; bh=B9o3mwWlphnHy9bcMkeL1ttkIEbnuvR4ZsA5NQCXSMo=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6EgyQHkMUp98aV6cstXtSwXycFVJRnsWwP1 st+9rs8XkGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuhAAKCRAk1otyXVSH 0Bs4B/9QfR1yEt5DLMNbseQ6/d4GVoKMEWEVd1Pv+/ZZzBK5nAJjUTRU7d+PWyp6j9KH5m/MB0s 7MHbFtcHjXU1voC0XPSGbeF9Fro07sSBtlfaQOC8Nm3MxUvSTzXfZSmx+/gulM6e3hV/0IJBndI EgX3l7UO9/+xdr89I95QtKR+2AX77L3xILve8VZttJrMOOzOHlvbO8Cr11edxKN/xaigVQj9Xcr 20MhZCGp9zBY5ELslDWTNGkg9vuGQo2SZc/1mHS+/WRNhd7JKLbLiSXchAyGsSbXAkmqIpyQO0a q82A1TE2cP6GBWPPKjwCoYjd+PofX+hQm04MnScI42Xd2qE0 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Currently we enable EL0 and EL1 access to FA64 and ZT0 at boot and leave them enabled throughout the runtime of the system. When we add KVM support we will need to make this configuration dynamic, as these features may be disabled for some KVM guests. Since the host kernel saves the floating point state for non-protected guests and we wish to avoid KVM having to reload the floating point state needlessly on guest reentry, let's move the configuration of these enables to the floating point state reload. We provide a helper which does the configuration as part of a read/modify/write operation along with the configuration of the task VL, then update the floating point state load and SME access trap to use it. We also remove the setting of the enable bits from the CPU feature identification and resume paths. There will be a small overhead from setting the enables one at a time but this should be negligible in the context of the state load or access trap. In order to avoid compiler warnings due to unused variables in !CONFIG_ARM64_SME cases we avoid storing the vector length in temporary variables.
Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 14 ++++++++++++ arch/arm64/kernel/cpufeature.c | 2 -- arch/arm64/kernel/fpsimd.c | 47 +++++++++++--------------------------= ---- 3 files changed, 26 insertions(+), 37 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsim= d.h index 1d2e33559bd5..ece65061dea0 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -428,6 +428,18 @@ static inline size_t sme_state_size(struct task_struct= const *task) return __sme_state_size(task_get_sme_vl(task)); } =20 +#define sme_cond_update_smcr(vl, fa64, zt0, reg) \ + do { \ + u64 __old =3D read_sysreg_s((reg)); \ + u64 __new =3D vl; \ + if (fa64) \ + __new |=3D SMCR_ELx_FA64; \ + if (zt0) \ + __new |=3D SMCR_ELx_EZT0; \ + if (__old !=3D __new) \ + write_sysreg_s(__new, (reg)); \ + } while (0) + #else =20 static inline void sme_user_disable(void) { BUILD_BUG(); } @@ -456,6 +468,8 @@ static inline size_t sme_state_size(struct task_struct = const *task) return 0; } =20 +#define sme_cond_update_smcr(val, fa64, zt0, reg) do { } while (0) + #endif /* ! CONFIG_ARM64_SME */ =20 /* For use by EFI runtime services calls only */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index c840a93b9ef9..ca9e66cc62d8 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2965,7 +2965,6 @@ static const struct arm64_cpu_capabilities arm64_feat= ures[] =3D { .type =3D ARM64_CPUCAP_SYSTEM_FEATURE, .capability =3D ARM64_SME_FA64, .matches =3D has_cpuid_feature, - .cpu_enable =3D cpu_enable_fa64, ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP) }, { @@ -2973,7 +2972,6 @@ static const struct arm64_cpu_capabilities arm64_feat= ures[] =3D { .type =3D ARM64_CPUCAP_SYSTEM_FEATURE, .capability =3D ARM64_SME2, .matches =3D has_cpuid_feature, - .cpu_enable =3D cpu_enable_sme2, ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2) }, #endif /* CONFIG_ARM64_SME */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index c154f72634e0..be4499ff6498 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -405,11 +405,15 @@ static void task_fpsimd_load(void) =20 /* Restore SME, override SVE register configuration if needed */ if (system_supports_sme()) { - unsigned long sme_vl =3D task_get_sme_vl(current); - - /* Ensure VL is set up for restoring data */ + /* + * Ensure VL is set up for restoring data. KVM might + * disable subfeatures so we reset them each time. 
+ */ if (test_thread_flag(TIF_SME)) - sme_set_vq(sve_vq_from_vl(sme_vl) - 1); + sme_cond_update_smcr(sve_vq_from_vl(task_get_sme_vl(current)) - 1, + system_supports_fa64(), + system_supports_sme2(), + SYS_SMCR_EL1); =20 write_sysreg_s(current->thread.svcr, SYS_SVCR); =20 @@ -1250,26 +1254,6 @@ void cpu_enable_sme(const struct arm64_cpu_capabilit= ies *__always_unused p) isb(); } =20 -void cpu_enable_sme2(const struct arm64_cpu_capabilities *__always_unused = p) -{ - /* This must be enabled after SME */ - BUILD_BUG_ON(ARM64_SME2 <=3D ARM64_SME); - - /* Allow use of ZT0 */ - write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_EZT0_MASK, - SYS_SMCR_EL1); -} - -void cpu_enable_fa64(const struct arm64_cpu_capabilities *__always_unused = p) -{ - /* This must be enabled after SME */ - BUILD_BUG_ON(ARM64_SME_FA64 <=3D ARM64_SME); - - /* Allow use of FA64 */ - write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK, - SYS_SMCR_EL1); -} - void __init sme_setup(void) { struct vl_info *info =3D &vl_info[ARM64_VEC_SME]; @@ -1314,17 +1298,9 @@ void __init sme_setup(void) =20 void sme_suspend_exit(void) { - u64 smcr =3D 0; - if (!system_supports_sme()) return; =20 - if (system_supports_fa64()) - smcr |=3D SMCR_ELx_FA64; - if (system_supports_sme2()) - smcr |=3D SMCR_ELx_EZT0; - - write_sysreg_s(smcr, SYS_SMCR_EL1); write_sysreg_s(0, SYS_SMPRI_EL1); } =20 @@ -1439,9 +1415,10 @@ void do_sme_acc(unsigned long esr, struct pt_regs *r= egs) WARN_ON(1); =20 if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) { - unsigned long vq_minus_one =3D - sve_vq_from_vl(task_get_sme_vl(current)) - 1; - sme_set_vq(vq_minus_one); + sme_cond_update_smcr(sve_vq_from_vl(task_get_sme_vl(current)) - 1, + system_supports_fa64(), + system_supports_sme2(), + SYS_SMCR_EL1); =20 fpsimd_bind_task_to_cpu(); } else { --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0E47C25F7BF; Tue, 23 Dec 2025 01:21:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452908; cv=none; b=YophfO+CTNovH/wZzgHMLyfpd5C+kZfMnqbb+9TboABA2i26YVl+nXMb9sJdoMxkaM3PK5hUNE6LRusaySxzYpbkuDkWepPmDQFlrGw+BgC/9OAeIIu+dX/MbAuvCRbPayjdIMJNGcvPNau4RIe+H1L5Uset5Xg3oqRL7Lv2DuE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452908; c=relaxed/simple; bh=AMoEF0nrqU26LQooV0kBfxDCIBukvmAs571HQFetjbk=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=dDedFHLWMfQdJSKvjdYHssjdersIk+vvUiLwDbWpoyYHOaSMJQnZGvtGa7S1o4wNR95N50JDtRV3KlxgvOGVd06fe+vGiaJXKFxfCyZ3jY+f7wyIpuJhbVVxT+uyFOmwPSer6P4F6VanmI5AaZU4AExz5vLiqVqejF9u/LGXy9c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Z9DIaYE6; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Z9DIaYE6" Received: by smtp.kernel.org (Postfix) with ESMTPSA id DD117C19425; Tue, 23 Dec 2025 01:21:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452907; bh=AMoEF0nrqU26LQooV0kBfxDCIBukvmAs571HQFetjbk=; 
h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=Z9DIaYE67tRSltM+1hG5us1paTYsOBt6mnBkbXS7Ha8VkxAoThUi82GMOF31urORP dCqVhrLmjHvjRB3jvX9i8W7jFTTAx0Z/F0CECVBMwoxzD+Pjp8lfERPTPrJ2rDTia/ HRYJ84LW6XpiN8NOnHdTBUoGsygC6xcprc7ec4bcHu7mnSjv9pyJIrb3xxVdOrJLgG GEKnqYOPYnJkK+S1WZegocC5Q6ur1AfKvBvIwjCElqDrMMa80D4PZfJBuK0Yh3qP4T cOzl3QQNz6ehDR4vzLX3m2HwQEwAHA3KuYTHTDrni9AINCDPVuF/86euOo6gTGdUCd 5eWNNh2ySeReA== From: Mark Brown Date: Tue, 23 Dec 2025 01:20:57 +0000 Subject: [PATCH v9 03/30] arm64/fpsimd: Decide to save ZT0 and streaming mode FFR at bind time Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-3-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=3253; i=broonie@kernel.org; h=from:subject:message-id; bh=AMoEF0nrqU26LQooV0kBfxDCIBukvmAs571HQFetjbk=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6EqgzzsT+BP4JPJTNb7yndoTQdPD2m7a+nK 80LYt1smamJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuhAAKCRAk1otyXVSH 0CpxB/wMQFXj7q5Ry0rGSuTXXxCQ4zusY5Q91V6JpRdifOT+ybfhMl2E+fqzmSSQuaI/45E1rsR OG8TOpMjDJhmbPCNxkJCSE2+WiDzyT82rYDLJW2cxop5mXseS2pAIEbVs+gnexyXT+m6R7svl0k uYtnStqc2N4XIeG9iULMb0Cwbb1Wl+E6jjhS13OeUCyQ+maFDk7neI9dODsg5ZLttSa/oXGaBNV vWDXQLHK41Tyxzn0HLEkcqcKwyarx0Nfknsv7bUJOBlNseiIjNM/oWKj3WeRlJO7wMO1Gg0rseN FqkImvbppiz42Cssne1b4FoYpH8iwuE7PT+Pwi6M8jtH5mJY X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Some parts of the SME state are optional, enabled by additional features on top of the base FEAT_SME and controlled with enable bits in SMCR_ELx. We unconditionally enable these for the host but for KVM we will allow the feature set exposed to guests to be restricted by the VMM. These are the FFR register (FEAT_SME_FA64) and ZT0 (FEAT_SME2). We defer saving of guest floating point state for non-protected guests to the host kernel. We also want to avoid having to reconfigure the guest floating point state if nothing used the floating point state while running the host. If the guest was running with the optional features disabled then traps will be enabled for them so the host kernel will need to skip accessing that state when saving state for the guest. Support this by moving the decision about saving this state to the point where we bind floating point state to the CPU, adding a new variable to the cpu_fp_state which uses the enable bits in SMCR_ELx to flag which features are enabled. 
Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/include/asm/fpsimd.h | 1 + arch/arm64/kernel/fpsimd.c | 10 ++++++++-- arch/arm64/kvm/fpsimd.c | 1 + 3 files changed, 10 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsim= d.h index ece65061dea0..146c1af55e22 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -87,6 +87,7 @@ struct cpu_fp_state { void *sme_state; u64 *svcr; u64 *fpmr; + u64 sme_features; unsigned int sve_vl; unsigned int sme_vl; enum fp_type *fp_type; diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index be4499ff6498..887fce177c92 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -490,12 +490,12 @@ static void fpsimd_save_user_state(void) =20 if (*svcr & SVCR_ZA_MASK) sme_save_state(last->sme_state, - system_supports_sme2()); + last->sme_features & SMCR_ELx_EZT0); =20 /* If we are in streaming mode override regular SVE. */ if (*svcr & SVCR_SM_MASK) { save_sve_regs =3D true; - save_ffr =3D system_supports_fa64(); + save_ffr =3D last->sme_features & SMCR_ELx_FA64; vl =3D last->sme_vl; } } @@ -1671,6 +1671,12 @@ static void fpsimd_bind_task_to_cpu(void) last->to_save =3D FP_STATE_CURRENT; current->thread.fpsimd_cpu =3D smp_processor_id(); =20 + last->sme_features =3D 0; + if (system_supports_fa64()) + last->sme_features |=3D SMCR_ELx_FA64; + if (system_supports_sme2()) + last->sme_features |=3D SMCR_ELx_EZT0; + /* * Toggle SVE and SME trapping for userspace if needed, these * are serialsied by ret_to_user(). diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 15e17aca1dec..9158353d8be3 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -80,6 +80,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fp_state.svcr =3D __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); fp_state.fpmr =3D __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); fp_state.fp_type =3D &vcpu->arch.fp_type; + fp_state.sme_features =3D 0; =20 if (vcpu_has_sve(vcpu)) fp_state.to_save =3D FP_STATE_SVE; --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7D1C52580FB; Tue, 23 Dec 2025 01:21:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452912; cv=none; b=Uoo2q3WWIQf1o3gpTMX57wQwfviHLKGCLPYyrIAHTe2BfSYatZsbx5cnv1PSDLi2KW7MzOAXjUUJGP+YmdpfhW5qjaGeIbvIxZ9jlW6mg/bzD2+1FFdTmoDPqclPFYX5oKM0oLzHhbwVj6RgAWZG623tN8BDg8oF01rxzC1RwUQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452912; c=relaxed/simple; bh=wS/Sr3h3O9OGRzttKyfjnEZ56/U90EAT24zyvgc9Crs=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=qeLly3uwPQoxwFmJL/lGebNxuPc4a2D054mNjiJh62zkC2UoD4k5k2SKlvQIzvUlJsc3TJRq6UBnYcHOLCF8XmEHzJZ9dYqusUZLFzxWKcAZd3fycPC9xl2A1Hr574b0dYZc41nJCa4JAYt4TgB8Obgqi34wdqqZTLYuktNppzs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=NCRgEV4K; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="NCRgEV4K" Received: by 
smtp.kernel.org (Postfix) with ESMTPSA id 275FDC19422; Tue, 23 Dec 2025 01:21:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452912; bh=wS/Sr3h3O9OGRzttKyfjnEZ56/U90EAT24zyvgc9Crs=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=NCRgEV4Kpsov+E9xvN2oqiENZrZmI3cK1QfAEGgL+6TEdEaBRQ+TkqvIlNkmt0tMY M4pFTGj4mkpfXqyEEmGM3vZt87+ZXH4YJK78gh1Vfhh1mAu1Q10GeOEXQHoV8+2YZg sxJfXBikOufoBdkliAIb4aueDqZ3Ca5A1HM4OT0opWQieykAM3gtVQ4sQOjLFhcVZx uPW/Rqh/USCV/wi7OaDPbm0vmA8DzbLzp/QJ4mLnlysI86abk2NMsBcMG0dQ3xJI+W PITJ6HsijJbh0QLe/hwLWKWLQyeZpy1l5gfg2zDoD3Fatc+K5AEPWfnuAowM4AVvYU JULYuYvv9MeSw== From: Mark Brown Date: Tue, 23 Dec 2025 01:20:58 +0000 Subject: [PATCH v9 04/30] arm64/fpsimd: Check enable bit for FA64 when saving EFI state Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-4-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=1451; i=broonie@kernel.org; h=from:subject:message-id; bh=wS/Sr3h3O9OGRzttKyfjnEZ56/U90EAT24zyvgc9Crs=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6FOp/yeFdQmWsIDTjuHRv8lf47vDI2UyZQS cSy5dLq5y+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuhQAKCRAk1otyXVSH 0FcjB/9jxj9yHS7RtIqKmD88+ux1RuhFty0fPQPxGsOatiYtfDitIClUYiqlQe5PczTHfVw8vw0 fEldJX5Zh5i77R8pJL4zR2DJ6LSqoWkxYToLw8aAw2nrhlSAWNsdGBOIRzb6YKgkzCsfr37KC1c ELgf+8Fitx4EJcFcUwPObrAWPiC7C0gMdHZob6hrtLeH1xZSDZhMoJUk0eo5jDIob/nzvG1G3Sa rOvHSqywKra3AIdZQJc0P6OsyrEGzptWIr03M5+tLzNBCkknJ8IDDf/ijO6tvgTWpim6574o0em cks9cs0ZBC42gOiUFnMZMa7WG3dn6eVmIOS4H4Lfk/zaUClB X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Currently when deciding if we need to save FFR when in streaming mode prior to EFI calls we check if FA64 is supported by the system. Since KVM guest support will mean that FA64 might be enabled and disabled at runtime switch to checking if traps for FA64 are enabled in SMCR_EL1 instead. Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/kernel/fpsimd.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 887fce177c92..f4e8cee00198 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1948,6 +1948,11 @@ static bool efi_sm_state; * either doing something wrong or you need to propose some refactoring. */ =20 +static bool fa64_enabled(void) +{ + return read_sysreg_s(SYS_SMCR_EL1) & SMCR_ELx_FA64; +} + /* * __efi_fpsimd_begin(): prepare FPSIMD for making an EFI runtime services= call */ @@ -1980,7 +1985,7 @@ void __efi_fpsimd_begin(void) * Unless we have FA64 FFR does not * exist in streaming mode. 
*/ - if (!system_supports_fa64()) + if (!fa64_enabled()) ffr =3D !(svcr & SVCR_SM_MASK); } =20 @@ -2028,7 +2033,7 @@ void __efi_fpsimd_end(void) * Unless we have FA64 FFR does not * exist in streaming mode. */ - if (!system_supports_fa64()) + if (!fa64_enabled()) ffr =3D false; } } --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CCDF52777F9; Tue, 23 Dec 2025 01:21:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452917; cv=none; b=Y2B3gBYt8vkypLLOpz6ALaq4NbWePwy5Z1qYXJopHygKCYM4qHUKwAzqAOwkiN8fzvn5q1A3/d/RonViEq9YEGytDrdluhZB5bznyO/2yKzGSGhloZRhRdwsEkMhIHWgHd3x2mwpgpqNqVYEXcfa90cxRHSaVFpCxhZl3eNtUeU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452917; c=relaxed/simple; bh=4+FJ7+9SNzMFsLU/lZOfnlAHMAOz0MdKHDvJ4fcVrRk=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=uFrCVdacrvoqJoOxtGLaMZeRc0pG5AdCV/ajrt6mObnZvXg3/1UeCYquq6NMHkPH61SsW3tQB9tokFMQ8jSYOvx26p27Wp1ohxBm+lbXXxoTgPCiL9+/ksyhOUhMd42ZEl8zsMwWEXeqDDDuc61n5QFFG6LX3srjGKkwaMTSd7g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=bI+mGMvr; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="bI+mGMvr" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6453EC4CEF1; Tue, 23 Dec 2025 01:21:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452916; bh=4+FJ7+9SNzMFsLU/lZOfnlAHMAOz0MdKHDvJ4fcVrRk=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=bI+mGMvr6H4pr6+RIaS0/WHsc0UELxttOk8dVJWS5Nj5KiDQ5QjlDUbESX/gqp37i nn4gaNLTaNR/L3s4rttanfzayaZj+zjOeQMmrVRqhkbA5FFjBIf16Hju+lRhHDuBWu mJmgFDdqBNMFab5QkSlKI/drSfNEeAgnDfMOVmeNsPJp++6JAMWSlkeePFCC5uM/86 jLicmnJIJmaveoG79JK8Nwzi+NCyC6+q38SkZT3spDsVITMSbtXvCmQqEbAzBWemnp c5SKFyLNjMlNiQLJHADQJUz5clv/LEy83QyshgnoiANMBt7JCrekEOVycqDSSBftrr XgoHwImiQIDiA== From: Mark Brown Date: Tue, 23 Dec 2025 01:20:59 +0000 Subject: [PATCH v9 05/30] arm64/fpsimd: Determine maximum virtualisable SME vector length Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-5-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=2177; i=broonie@kernel.org; h=from:subject:message-id; bh=4+FJ7+9SNzMFsLU/lZOfnlAHMAOz0MdKHDvJ4fcVrRk=; 
b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6G94IqpPKtBF6i+APEEZA50ZuCkXbJI1Xmp pNNMT4DhmeJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuhgAKCRAk1otyXVSH 0HaOB/9t0imWZqpN/qCLBUFVBTic1DgibgESyXd44hvr7rrNUUATGfqFo7hScqgfof9CTCj8b0T 0HC0+QtAtf3fPAeEJskjJ3iHr5P/Y7Gjspj8/s5XFxnb36iDbMmfWIqmciRJrA+fxvgL4MWnUXj 6R7artS98h2mfiYUmUiTI6DbVtuOdvf/JuC/keSkmAq2fUvOPCDfoau1lQaBGjg0cL6q13SL5MY 3mXBUVldRK31z4cr7dYMuRrjBh5jmOQsr4vZuunAniObziCOLEVnATLq8yeznT7qLKczgEbtV10 Tg2LzX5o6dP/i+kzgBmU66U2x3YWTqPeEsjj3Nh1Htyls4z9 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB As with SVE we can only virtualise SME vector lengths that are supported by all CPUs in the system, so implement similar checks to those for SVE. Since, unlike SVE, there are no specific vector lengths that are architecturally required, the handling is subtly different: we report a system where this happens with a maximum virtualisable vector length of -1. Signed-off-by: Mark Brown --- arch/arm64/kernel/fpsimd.c | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index f4e8cee00198..22f8397c67f0 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1257,7 +1257,8 @@ void cpu_enable_sme(const struct arm64_cpu_capabiliti= es *__always_unused p) void __init sme_setup(void) { struct vl_info *info =3D &vl_info[ARM64_VEC_SME]; - int min_bit, max_bit; + DECLARE_BITMAP(tmp_map, SVE_VQ_MAX); + int min_bit, max_bit, b; =20 if (!system_supports_sme()) return; @@ -1288,12 +1289,32 @@ void __init sme_setup(void) */ set_sme_default_vl(find_supported_vector_length(ARM64_VEC_SME, 32)); =20 + bitmap_andnot(tmp_map, info->vq_partial_map, info->vq_map, + SVE_VQ_MAX); + + b =3D find_last_bit(tmp_map, SVE_VQ_MAX); + if (b >=3D SVE_VQ_MAX) + /* All VLs virtualisable */ + info->max_virtualisable_vl =3D SVE_VQ_MAX; + else if (b =3D=3D SVE_VQ_MAX - 1) + /* No virtualisable VLs */ + info->max_virtualisable_vl =3D -1; + else + info->max_virtualisable_vl =3D sve_vl_from_vq(__bit_to_vq(b + 1)); + + if (info->max_virtualisable_vl > info->max_vl) + info->max_virtualisable_vl =3D info->max_vl; + pr_info("SME: minimum available vector length %u bytes per vector\n", info->min_vl); pr_info("SME: maximum available vector length %u bytes per vector\n", info->max_vl); pr_info("SME: default vector length %u bytes per vector\n", get_sme_default_vl()); + + /* KVM decides whether to support mismatched systems.
Just warn here: */ + if (info->max_virtualisable_vl < info->max_vl) + pr_warn("SME: unvirtualisable vector lengths present\n"); } =20 void sme_suspend_exit(void) --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 176AE275870; Tue, 23 Dec 2025 01:22:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452923; cv=none; b=WqXRBRk26nHRsSl/Bf4oJYojwQDpJr0lPpL+2WQSh3A5HRGYYFCyimfDKS5cNdkxGAW10ZriJlfsVgtn0gsRD26mwR5WK94isRK8zd2GGEpSK43hthpT+7xc7ML1f4qMeSofmdipTI94ZPKv00MO3LwY0odfx8u3U11fbOrg7tk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452923; c=relaxed/simple; bh=UcBGRJk70S1Rv85qJDISL95VGW1CnVltxuNnSLnZae0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=bCAjFEyhR6UIJyp0iAEvyDWb+ygIfBGLUyX3HYtKln5/rI1K1EZuOMqJFuQi5/p4+5cGi51LZCbAU/qm46lQQur2+81tU5zeQcJPi3mGo2I9r4m8n6xoi3AKlm6VvxVnsOrN0wTgZ3Cdf2AUrVkIueuU3iNclg+EB9aPQ/2uTlk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=E7yyC4qq; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="E7yyC4qq" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9F732C116C6; Tue, 23 Dec 2025 01:21:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452920; bh=UcBGRJk70S1Rv85qJDISL95VGW1CnVltxuNnSLnZae0=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=E7yyC4qqfA7Z/4QdPyqc7ZrKTGJdfcJg7FhYx0nX+gXWVxuFH++5TPGZzO/caQQfS XpLaLH8tmJPz9Ddfp7pwSc1+xeNTqSTx4p28gRsfnty3yeuvx+Zt0cg/CBU4NkR3TC 7gMvAcvzUXVM+NIXZxAerxOFplv/QLMDfTEbqI+8gUq6Cf8orGZ39ZwMk5OMbO5N8Z DQW5VGoVAHRtIOwCs+O1UvDf1bLodi0QHCTWbzl3zvNYLlF1Azmnd553GKsZs+m/IX kZ+Ru92Qgyf5BSZHVqgSOxK1w2kg+3//gUL9SQyB5DTvnu7olYffbXVL3iuv0mpfIR zFExHmJoqy3lQ== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:00 +0000 Subject: [PATCH v9 06/30] KVM: arm64: Pay attention to FFR parameter in SVE save and load Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-6-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=1028; i=broonie@kernel.org; h=from:subject:message-id; bh=UcBGRJk70S1Rv85qJDISL95VGW1CnVltxuNnSLnZae0=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6HE2lIx24yhrPMc49cAVdCIj5v6Ca+PxFJM 
DBjjjoyPl+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuhwAKCRAk1otyXVSH 0Km/B/9mi3IncLDznTqdkw6oHxNbhh/TkoU+szJI5TxJTKBu3cBLsGpmJdyK9E/Hk6ANuJCKcWY plu/p0/6f7/RgAPiCSYPu/dlC0s4OQ3XXlOedkB8NxCDFm/xyXGwD8XEiIY6hWF0hgNM/a7szsy fSCoTxt13+AZ54LGpyUU3dx583VmGJ2xux79Ozu1oUXf9lpVFvjNFsoxE1XYOF0Sa4gL5aAowdt itKE2Fw1RQ7XhGu9ELQ1TXGqyj9DR6zdiASflllMsKVPL2BS6IQDaVUruEi8FPMjqeAyUDlfWfb xtBPrZvvOp7Q8rX+rReGQtOmMIEgyeczAaL/tzrHYjHUdEjf X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB The hypervisor copies of the SVE save and load functions are prototyped with third arguments specifying FFR should be accessed but the assembly functions overwrite whatever is supplied to unconditionally access FFR. Remove this and use the supplied parameter. This has no effect currently since FFR is always present for SVE but will be important for SME. Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/kvm/hyp/fpsimd.S | 2 -- 1 file changed, 2 deletions(-) diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index e950875e31ce..6e16cbfc5df2 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -21,13 +21,11 @@ SYM_FUNC_START(__fpsimd_restore_state) SYM_FUNC_END(__fpsimd_restore_state) =20 SYM_FUNC_START(__sve_restore_state) - mov x2, #1 sve_load 0, x1, x2, 3 ret SYM_FUNC_END(__sve_restore_state) =20 SYM_FUNC_START(__sve_save_state) - mov x2, #1 sve_save 0, x1, x2, 3 ret SYM_FUNC_END(__sve_save_state) --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 34C8B285CB4; Tue, 23 Dec 2025 01:22:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452925; cv=none; b=iaGXEz6IZ8YesAzxFnFATZ4nWyQs0U8BDYWpxB0eKMWrB/O5XZftT5TDhhRVj0k7U3owyY7UbnIgwl9XPNwP3/kDLlNVCKs6JbJONJ7eEoBaFrGYmdEXJ46kkU3hu+M7ywNF+REQd5IVpweF9XP/f+PPY1f1NuyQA8jHo4Ml1uU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452925; c=relaxed/simple; bh=KAYp79cho7mh3G8Wh0kwU5ifHyng2Eq5/E9BbUSvCvU=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=a/kix+NG0hnPyBR2LkcIC1sdeE5EyHQhKK1Xz9HOia+QWnQiKE/ssMSKFPxRrrnuAhphGTjyeBqgqqEjV5BWI8Cp1VhZCdmC3IY7FYorFIOJqJ+FvLxshSjJZm7nV5pWCB0p0cZABJ+iYJUljhkHSSq1uNzN6Oe6hr4uA0w2paM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=c5ADR/HW; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="c5ADR/HW" Received: by smtp.kernel.org (Postfix) with ESMTPSA id DEB4FC4CEF1; Tue, 23 Dec 2025 01:22:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452924; bh=KAYp79cho7mh3G8Wh0kwU5ifHyng2Eq5/E9BbUSvCvU=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=c5ADR/HW3vb8ybeSojPTIvRoRK+qa5gbzIvZRulkoQe9ocgNPGJqPetM+1TPp435m Pe1DV9VzomswTnpSt/8ZELfHasVRH+FMaokO3yCQj+yvEmYtRWh6CuxVnn1H7jbF/L ph0fF1G6Lvih3rJNzulpBHCjUxA+9V3+qR7IYv6CEQlIAMpMm91ifwuYoBuFXMmhp4 xxc1j9g5SilmGfbfsTOLWxpLkjN4PQH6lv58gHu8VADrZYXL+qzx7cJ1//0RXV9Jco 
KFaunM6OywP4NAIi71B9JLiFppSTcv0dFcVWuqtFZmhHmx25tq5Z0kMV+EVMqyp7Ji s3g5191A7yrZQ== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:01 +0000 Subject: [PATCH v9 07/30] KVM: arm64: Pull ctxt_has_ helpers to start of sysreg-sr.h Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-7-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=3907; i=broonie@kernel.org; h=from:subject:message-id; bh=KAYp79cho7mh3G8Wh0kwU5ifHyng2Eq5/E9BbUSvCvU=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6HIVNkg9jY3hmfvKynsM3HkOpHsDiIJg+0a sAnKaVV0oiJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuhwAKCRAk1otyXVSH 0B/sB/4iTZcIgsMUIg3sMq64FGf01TLQNHGYnspqQ7JoHaS4uKozVGQAdM8mgAbT40mtgCc3KGc 1BaSx5STM6tKRy28oc0jTTxVEvf3b4bjAcvEdhbDd6NvQuS4GXV9H9S9mOAliJ33zBFvEYNrjne PRdJgujzuZgKTir7tYajjgcvIv0f0BNxRF6Ovjn/Hg9oJcOAPX4XvEuiZoKoSXToLp6/gxnNyaJ MemQ1ULZlROi3oWAXCat2jzExbpgwRDAJlaqZb5g4w6eEAIuP/NKKGjIYiM6iP3J0X1F11CJ93Y Jzlj7amVzpt+Mo9EpQpruenXGphCafm2cV7o+u6gi36HxNlV X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Rather than add earlier prototypes of specific ctxt_has_ helpers let's just pull all their definitions to the top of sysreg-sr.h so they're all available to all the individual save/restore functions. 
Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 84 +++++++++++++++-----------= ---- 1 file changed, 41 insertions(+), 43 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hy= p/include/hyp/sysreg-sr.h index a17cbe7582de..5624fd705ae3 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -16,8 +16,6 @@ #include #include =20 -static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt); - static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) { struct kvm_vcpu *vcpu =3D ctxt->__hyp_running_vcpu; @@ -28,47 +26,6 @@ static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_c= pu_context *ctxt) return vcpu; } =20 -static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) -{ - return host_data_ptr(host_ctxt) !=3D ctxt; -} - -static inline u64 *ctxt_mdscr_el1(struct kvm_cpu_context *ctxt) -{ - struct kvm_vcpu *vcpu =3D ctxt_to_vcpu(ctxt); - - if (ctxt_is_guest(ctxt) && kvm_host_owns_debug_regs(vcpu)) - return &vcpu->arch.external_mdscr_el1; - - return &ctxt_sys_reg(ctxt, MDSCR_EL1); -} - -static inline u64 ctxt_midr_el1(struct kvm_cpu_context *ctxt) -{ - struct kvm *kvm =3D kern_hyp_va(ctxt_to_vcpu(ctxt)->kvm); - - if (!(ctxt_is_guest(ctxt) && - test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &kvm->arch.flags))) - return read_cpuid_id(); - - return kvm_read_vm_id_reg(kvm, SYS_MIDR_EL1); -} - -static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) -{ - *ctxt_mdscr_el1(ctxt) =3D read_sysreg(mdscr_el1); - - // POR_EL0 can affect uaccess, so must be saved/restored early. - if (ctxt_has_s1poe(ctxt)) - ctxt_sys_reg(ctxt, POR_EL0) =3D read_sysreg_s(SYS_POR_EL0); -} - -static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) -{ - ctxt_sys_reg(ctxt, TPIDR_EL0) =3D read_sysreg(tpidr_el0); - ctxt_sys_reg(ctxt, TPIDRRO_EL0) =3D read_sysreg(tpidrro_el0); -} - static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt) { struct kvm_vcpu *vcpu =3D ctxt_to_vcpu(ctxt); @@ -131,6 +88,47 @@ static inline bool ctxt_has_sctlr2(struct kvm_cpu_conte= xt *ctxt) return kvm_has_sctlr2(kern_hyp_va(vcpu->kvm)); } =20 +static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) +{ + return host_data_ptr(host_ctxt) !=3D ctxt; +} + +static inline u64 *ctxt_mdscr_el1(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu =3D ctxt_to_vcpu(ctxt); + + if (ctxt_is_guest(ctxt) && kvm_host_owns_debug_regs(vcpu)) + return &vcpu->arch.external_mdscr_el1; + + return &ctxt_sys_reg(ctxt, MDSCR_EL1); +} + +static inline u64 ctxt_midr_el1(struct kvm_cpu_context *ctxt) +{ + struct kvm *kvm =3D kern_hyp_va(ctxt_to_vcpu(ctxt)->kvm); + + if (!(ctxt_is_guest(ctxt) && + test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &kvm->arch.flags))) + return read_cpuid_id(); + + return kvm_read_vm_id_reg(kvm, SYS_MIDR_EL1); +} + +static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) +{ + *ctxt_mdscr_el1(ctxt) =3D read_sysreg(mdscr_el1); + + // POR_EL0 can affect uaccess, so must be saved/restored early. 
+ if (ctxt_has_s1poe(ctxt)) + ctxt_sys_reg(ctxt, POR_EL0) =3D read_sysreg_s(SYS_POR_EL0); +} + +static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) +{ + ctxt_sys_reg(ctxt, TPIDR_EL0) =3D read_sysreg(tpidr_el0); + ctxt_sys_reg(ctxt, TPIDRRO_EL0) =3D read_sysreg(tpidrro_el0); +} + static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, SCTLR_EL1) =3D read_sysreg_el1(SYS_SCTLR); --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B3377279DA6; Tue, 23 Dec 2025 01:22:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452929; cv=none; b=ODdjRO2Tc3rSmSb/9oWrsNWLZBafk1UsujY57RsHgTR5Eck2HU+Gx6IXkSth1OWD2/YRJBHMv8NbIhHiPCcP6XWUE67j2gOCxicPh2VcZcciVwIimIF9nJyUd0pYbL1wrV23Qi6/3Tq0DHedz0A1PTkZY0sdWsCSRJMOIoG4DBE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452929; c=relaxed/simple; bh=E1tho8A6hfPPGjuGEPK6k9ILaXOQXFCS51jTCVXwf/E=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=PaeMnTk3TobJ3d8xJoKngJaXSl43RaIalqivE9Vfn32hhcpm3ER6zoF1v7ozyQLd5mj84AW/XhP6WGwkyoATY5W5pEh5V5K0LXKLFFuGUy6wkRoIlLGdL0l94NTQlrDvvmCe8Rlna3Yz7DXiwIp5g16vNPXeXz/wk8A8H12SR2Q= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=gExXjjOl; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="gExXjjOl" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 30898C116C6; Tue, 23 Dec 2025 01:22:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452929; bh=E1tho8A6hfPPGjuGEPK6k9ILaXOQXFCS51jTCVXwf/E=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=gExXjjOlepqN3I13y0vqBReUuuEzplfXjnjUFvjDeOlBhFhyGfU4eRAh0gMu6+4OD Amy1sOzU0+P+ZE2HRl9YMMoDJ7o5epqMgiUAvLE5D7qah7WWWg9azoijNuipTZgrES Vnt2zYye0PxiIdvQF1k3OkzHivUjs5nan+78m808PeufAJ/zpgfmyXNO8N0M7innO9 joL5iCwyH5dZ2DYdCtMCLmz1Y6WuprN8a2JKQDrTSaE43b7G9Dr4pcf1R92wEWVTTq g3+ZPfNZak26mZggNJANWMXi4W5r1dktVdQxmL/SehSbdzp+2MImOyO23QD9JONf3B uSGzrxyW7DeAg== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:02 +0000 Subject: [PATCH v9 08/30] KVM: arm64: Move SVE state access macros after feature test macros Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-8-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 
X-Developer-Signature: v=1; a=openpgp-sha256; l=2722; i=broonie@kernel.org; h=from:subject:message-id; bh=E1tho8A6hfPPGjuGEPK6k9ILaXOQXFCS51jTCVXwf/E=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6I+tag2JsffKzG88VawRRVvwEfOmEYhzfdw YuOxyEI/Y6JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuiAAKCRAk1otyXVSH 0IZsB/9nwQDCBcfXW8cBxP2zWPcZOPYHXxojDf7/BDDGG43agQTrUFKY6M/y0F5vuy9dLfdKSPl BZPbc7ZYMO7iWg3arGKdU9DJL6jbyQ8DUj45o2cXcXhaRjdtwFCedIMFWsZMZVyF+MGYz3VfAql 2LcqCDsWakugDU6g1FmooNBmgNTxHIRWsIr1lHxErTUtCR/nd/fqG6D/ZW12TLb0ObQ7bYP7dFc lQhRJqj1KCTzUVY9VbTDwFQ7lDz8R4/G4VZ5bG5na7PDEE8MBuKvoEM8haAsdexPub0+ul+4/2l uj7LIRLkgThFIHZDgsFKKBW9NIMpAOONGLCZWW6UL9jeGGqD X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB In preparation for SME support move the macros used to access SVE state after the feature test macros, we will need to test for SME subfeatures to determine the size of the SME state. Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 50 +++++++++++++++++++----------------= ---- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index ac7f970c7883..e6d25db10a6b 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1048,31 +1048,6 @@ struct kvm_vcpu_arch { #define NESTED_SERROR_PENDING __vcpu_single_flag(sflags, BIT(8)) =20 =20 -/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ -#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) - -#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) - -#define vcpu_sve_zcr_elx(vcpu) \ - (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) - -#define sve_state_size_from_vl(sve_max_vl) ({ \ - size_t __size_ret; \ - unsigned int __vq; \ - \ - if (WARN_ON(!sve_vl_valid(sve_max_vl))) { \ - __size_ret =3D 0; \ - } else { \ - __vq =3D sve_vq_from_vl(sve_max_vl); \ - __size_ret =3D SVE_SIG_REGS_SIZE(__vq); \ - } \ - \ - __size_ret; \ -}) - -#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_= max_vl) - #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \ KVM_GUESTDBG_USE_SW_BP | \ KVM_GUESTDBG_USE_HW | \ @@ -1108,6 +1083,31 @@ struct kvm_vcpu_arch { =20 #define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs) =20 +/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ +#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ + sve_ffr_offset((vcpu)->arch.sve_max_vl)) + +#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) + +#define vcpu_sve_zcr_elx(vcpu) \ + (unlikely(is_hyp_ctxt(vcpu)) ? 
ZCR_EL2 : ZCR_EL1) + +#define sve_state_size_from_vl(sve_max_vl) ({ \ + size_t __size_ret; \ + unsigned int __vq; \ + \ + if (WARN_ON(!sve_vl_valid(sve_max_vl))) { \ + __size_ret =3D 0; \ + } else { \ + __vq =3D sve_vq_from_vl(sve_max_vl); \ + __size_ret =3D SVE_SIG_REGS_SIZE(__vq); \ + } \ + \ + __size_ret; \ +}) + +#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_= max_vl) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 26D98298CB7; Tue, 23 Dec 2025 01:22:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452934; cv=none; b=jLLBqfFNWGaLn6zZvbvEMRCKNDzEJ2KOYdK9mnFdgEnalBXjtlqAAM8fkJn7hsMeKFtw7QGczdzNkidHaptDKTiajdBySkR6d24DjeNYl/Qk5ZclMfcbK0suCUoq/1vsJC94aNIWzbAOW8LgFMD2xV8h4A83iFTzJ8YHzEGYTFI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452934; c=relaxed/simple; bh=+ypBy50skWh94ap9snk65EQThjigelz45CXFL3Wzy3k=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=WpADDleRyn64HIe1PuidhIYtzK91JUmYiTJocy1ko0EbZG0AEXlLGUvBe15fbQZyljyIoCrQzprRw+RK8DytbXGWpHwopYBfh/ycC7mla9Zos1Nofw5Ja77ZxyLSN2XAyGOY+AetK7bf8rKqQjLHWyOTZoDaPpCFJdLPj5n/DMY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=CwDY/Qn/; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="CwDY/Qn/" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7203FC4CEF1; Tue, 23 Dec 2025 01:22:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452933; bh=+ypBy50skWh94ap9snk65EQThjigelz45CXFL3Wzy3k=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=CwDY/Qn/a9OsIPH6WcW9ktVh/dpBSXEkLwm6ii+u2EhP5plKkpeJ7Xvl6VSAfp/3L v7Q7GD2/V7xXSGtQl16/vuLXenCnMB8vQ6KT9aKdmzbuVmOFjhC8MlurXtZaXO2LiZ AIKAgNX69xr2jVlTwHwkopFVwxo0tnu39G1XbaUy8SovsoCPvN6N2ZIk8I6KH3OIu2 IwHuJAW3CCzy2VzI4qgwyIsvzGSK788m0MQEU4gxnpcyG71U8cxVUgrOohxVEqI01K QgRxBtUEdmgho+yNb92cp3GAk8TgnvxzyQDdLEKeWi0OgVH1UVDKbIxtUCQRrr9M/w mYvZ4EKOuS8iw== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:03 +0000 Subject: [PATCH v9 09/30] KVM: arm64: Rename SVE finalization constants to be more general Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-9-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, 
linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=7220; i=broonie@kernel.org; h=from:subject:message-id; bh=+ypBy50skWh94ap9snk65EQThjigelz45CXFL3Wzy3k=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6JvgtSS7+8smGY5d2eJdIyGeUNCvswtV8L8 a0Lpaiup+yJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuiQAKCRAk1otyXVSH 0Lj1B/9Ng7Ax+/dNoqIRhYT2QsnqaMTKtCC72vIrVvE2gHucue2BGIqnLL5zoHz22kanZEEs74K ErjFWralY5DUClLcZJDzZAWVfgq8+wH67DQBoYZBc04M5udaQEFUUQ/su2Ud2Sz6tD0aTevmqgF bl5WZr4GOUtca5pvKgi59x/eqqPrIdogPEb8PpaaC4x1Ho/j9qN6Snry1ZlDe6VHhWCmxfj718m f9eYibAcUZifTS7XrYKKETpfD3l20cykHs/5/nNVaeSnvs8ZU9fjk2yeDZcKU1INJofUWr9Nz1Y cIwVVj3p5rj1XsAmG/FDof9kXlAhkXFzeGTPFGaK+g97+A6t X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Due to the overlap between SVE and SME vector length configuration created by streaming mode SVE we will finalize both at once. Rename the existing finalization to use _VEC (vector) for the naming to avoid confusion. Since this includes the userspace API we create an alias KVM_ARM_VCPU_VEC for the existing KVM_ARM_VCPU_SVE capability, existing code which does not enable SME will be unaffected and any SME only code will not need to use SVE constants. No functional change. Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 8 +++++--- arch/arm64/include/uapi/asm/kvm.h | 6 ++++++ arch/arm64/kvm/guest.c | 10 +++++----- arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +- arch/arm64/kvm/reset.c | 20 ++++++++++---------- 5 files changed, 27 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index e6d25db10a6b..0f3d26467bf0 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -988,8 +988,8 @@ struct kvm_vcpu_arch { =20 /* KVM_ARM_VCPU_INIT completed */ #define VCPU_INITIALIZED __vcpu_single_flag(cflags, BIT(0)) -/* SVE config completed */ -#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1)) +/* Vector config completed */ +#define VCPU_VEC_FINALIZED __vcpu_single_flag(cflags, BIT(1)) /* pKVM VCPU setup completed */ #define VCPU_PKVM_FINALIZED __vcpu_single_flag(cflags, BIT(2)) =20 @@ -1062,6 +1062,8 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif =20 +#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) + #ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || \ @@ -1458,7 +1460,7 @@ struct kvm *kvm_arch_alloc_vm(void); int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); =20 -#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINA= LIZED) +#define kvm_arm_vcpu_vec_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_VEC_FINA= LIZED) =20 #define kvm_has_mte(kvm) \ (system_supports_mte() && \ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/as= m/kvm.h index a792a599b9d6..c67564f02981 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -107,6 +107,12 @@ struct kvm_regs { #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ #define KVM_ARM_VCPU_HAS_EL2_E2H0 8 /* Limit NV support to E2H RES0 */ =20 +/* + * An alias for _SVE since we finalize VL configuration for both SVE and S= ME + * simultaneously. 
+ */ +#define KVM_ARM_VCPU_VEC KVM_ARM_VCPU_SVE + struct kvm_vcpu_init { __u32 target; __u32 features[7]; diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 1c87699fd886..d15aa2da1891 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -342,7 +342,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT; =20 - if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; /* too late! */ =20 if (WARN_ON(vcpu->arch.sve_state)) @@ -497,7 +497,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (ret) return ret; =20 - if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, @@ -523,7 +523,7 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (ret) return ret; =20 - if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, @@ -599,7 +599,7 @@ static unsigned long num_sve_regs(const struct kvm_vcpu= *vcpu) return 0; =20 /* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); =20 return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */) + 1; /* KVM_REG_ARM64_SVE_VLS */ @@ -617,7 +617,7 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *= vcpu, return 0; =20 /* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); =20 /* * Enumerate this first, so that userspace can save/restore in diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 8911338961c5..b402dcb7691e 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -445,7 +445,7 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp= _vcpu, struct kvm_vcpu *h int ret =3D 0; =20 if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) { - vcpu_clear_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_clear_flag(vcpu, VCPU_VEC_FINALIZED); return 0; } =20 diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 959532422d3a..f7c63e145d54 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -92,7 +92,7 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) * Finalize vcpu's maximum SVE vector length, allocating * vcpu->arch.sve_state as necessary. 
*/ -static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) +static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { void *buf; unsigned int vl; @@ -122,21 +122,21 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcp= u) } =09 vcpu->arch.sve_state =3D buf; - vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; } =20 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) { switch (feature) { - case KVM_ARM_VCPU_SVE: - if (!vcpu_has_sve(vcpu)) + case KVM_ARM_VCPU_VEC: + if (!vcpu_has_vec(vcpu)) return -EINVAL; =20 - if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 - return kvm_vcpu_finalize_sve(vcpu); + return kvm_vcpu_finalize_vec(vcpu); } =20 return -EINVAL; @@ -144,7 +144,7 @@ int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int fe= ature) =20 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) { - if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu)) + if (vcpu_has_vec(vcpu) && !kvm_arm_vcpu_vec_finalized(vcpu)) return false; =20 return true; @@ -163,7 +163,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) kfree(vcpu->arch.ccsidr); } =20 -static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) +static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); @@ -203,11 +203,11 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (loaded) kvm_arch_vcpu_put(vcpu); =20 - if (!kvm_arm_vcpu_sve_finalized(vcpu)) { + if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); } else { - kvm_vcpu_reset_sve(vcpu); + kvm_vcpu_reset_vec(vcpu); } =20 if (vcpu_el1_is_32bit(vcpu)) --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C1F0229B8EF; Tue, 23 Dec 2025 01:22:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452938; cv=none; b=tf9UZGOgLfNe8fhpoZnNFMragzAqj2yLCShSnEqJzXb93DblctVvXE/wK25wsH5iTYGb1TF+2M2UkvHMcrurAjXzmkeIUWeNEkGdbgg/5ZcinC+4x7O14PocINsIa4xTotbh6ZmFsLbWj5/BRyITHKQYBEinxYv4QA3WQKY2gSM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452938; c=relaxed/simple; bh=QJhbQWLdEtLaI4QENrJxKRYB8ihqF+aPCRbKAA0Aeu8=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=NtSjLbqIvD1Pz/CtTQhtwpH/lWRuOYsAyEtK826Wk640e1W32sRL3AvBRJcrC+6iZXzA4l/xkRYLacKrDFvhb0mXUu92uF3CxxYp8/Z9ZoskIgrrTi53SPANHr0lFnbEeL4J3pEOFG7FFY8cLrwyWqODtLQ2GFgFVFmQkfJF1PE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=B6yq7dcR; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="B6yq7dcR" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B3947C19422; Tue, 23 Dec 2025 01:22:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452937; bh=QJhbQWLdEtLaI4QENrJxKRYB8ihqF+aPCRbKAA0Aeu8=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=B6yq7dcRVieiStvuwKdBmxQcP87EWYdbzU3+1mRsDaPrBKIq2912Kn4Nyft8t6gph 
3/VCFqvylx0GpS/uj1Ri5MXiZWWIi3nsi1rlFJ1oRBBOvqRcr/FxEJ7p4vwEG055nD cBK7BWQSjTDWm4hvSWAaQmTx4ohdX4Z6AFug0JZ7Rdjc8A5BNARWp3yH+gTVTkfRFZ avoV6Wq3/sgHTH6Ly2M17DIS5GMbGzOtYcDNXknOg+SuJP1fc9jyo+KEbVdbPMck8f f2UKdKcY6Yrm1nlGYtJcFDLBjWErztn2ZCyuNOCdvBJ27GCk4kuO7pUcRAwxTOEP3x 9K/nnsxRsUqIw== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:04 +0000 Subject: [PATCH v9 10/30] KVM: arm64: Document the KVM ABI for SME Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-10-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=12749; i=broonie@kernel.org; h=from:subject:message-id; bh=QJhbQWLdEtLaI4QENrJxKRYB8ihqF+aPCRbKAA0Aeu8=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6KUIVC2fsuYsrewvVKXdXNl018BOpe+nhuG 4ZvdMNsGUiJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuigAKCRAk1otyXVSH 0DexB/4y7g1ZctSoaiYQiaO3byJsvqlyMb46+vJbWdOllFxnZLjrEOId553JJC2i9D2El+uyKoB wvGy/7niKn0hN/px+8BbLlTSrZ1bEgbqAHNW2QhwbHihyhDRxLNc1J07poGXMQluQvH64VIGDi6 DyNGVkfSe1reUm3/NqLVZInAo4+sgSV3nwm8eIlR1vtfRTTLxOhuC/kbWEZ053tQjzMDCJUY14R oVmTnjB0m9Eu8FyWWWKfPIc0L3IfbMNuhEHUGfd6GTb2ga/r4nzAH618vTqhi40/6LY/+hzFOER 9ABc1SwuuUL6WUF+f8bU6ANmxUmzzSG/tdo2BXZDrSfDsJUn X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME, the Scalable Matrix Extension, is an arm64 extension which adds support for matrix operations, with core concepts patterned after SVE. SVE introduced some complication in the ABI since it adds new vector floating point registers with runtime configurable size, the size being controlled by a parameter called the vector length (VL). To provide control of this to VMMs we offer two phase configuration of SVE, SVE must first be enabled for the vCPU with KVM_ARM_VCPU_INIT(KVM_ARM_VCPU_SVE), after which vector length may then be configured but the configurably sized floating point registers are inaccessible until finalized with a call to KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE) after which the configurably sized registers can be accessed. SME introduces an additional independent configurable vector length which as well as controlling the size of the new ZA register also provides an alternative view of the configurably sized SVE registers (known as streaming mode) with the guest able to switch between the two modes as it pleases. There is also a fixed sized register ZT0 introduced in SME2. As well as streaming mode the guest may enable and disable ZA and (where SME2 is available) ZT0 dynamically independently of streaming mode. These modes are controlled via the system register SVCR. We handle the configuration of the vector length for SME in a similar manner to SVE, requiring initialization and finalization of the feature with a pseudo register controlling the available SME vector lengths as for SVE. 
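As a rough illustration of the two-phase flow described above, the userspace
side looks approximately like the sketch below. This is a minimal,
error-handling-free sketch against the existing SVE ABI; KVM_ARM_VCPU_VEC and
KVM_REG_ARM64_SME_VLS are only introduced later in this series, so their use
here is an assumption about the final ABI rather than released uapi.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Minimal sketch: enable the scalable vector state, optionally trim the
 * vector length set, then finalize so the Z/P/FFR (and, with SME, ZA/ZT0)
 * registers become accessible. Error handling omitted. */
static void vcpu_setup_vec(int vcpu_fd, struct kvm_vcpu_init *init)
{
	uint64_t vqs[KVM_ARM64_SVE_VLS_WORDS];
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SVE_VLS,	/* or KVM_REG_ARM64_SME_VLS */
		.addr = (uint64_t)(unsigned long)vqs,
	};
	int feature = KVM_ARM_VCPU_SVE;	/* aliased to KVM_ARM_VCPU_VEC */

	init->features[0] |= 1 << KVM_ARM_VCPU_SVE;
	ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);	/* best set the host offers */
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);	/* write back, possibly trimmed */

	ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
}
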
Further, if the guest has both SVE and SME then finalizing one prevents further configuration of the vector length for the other. Where both SVE and SME are configured for the guest we always present the SVE registers to userspace as having the larger of the configured maximum SVE and SME vector lengths, discarding extra data at load time and zero padding on read as required if the active vector length is lower. Note that this means that enabling or disabling streaming mode while the guest is stopped will not zero Zn or Pn as it will when the guest is running, but it does allow SVCR, Zn and Pn to be read and written in any order. Userspace access to ZA and (if configured) ZT0 is always available, they will be zeroed when the guest runs if disabled in SVCR and the value read will be zero if the guest stops with them disabled. This mirrors the behaviour of the architecture, enabling access causes ZA and ZT0 to be zeroed, while allowing access to SVCR, ZA and ZT0 to be performed in any order. Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 120 +++++++++++++++++++++++++++++--------= ---- 1 file changed, 86 insertions(+), 34 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 01a3abef8abb..e024b9783932 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -406,7 +406,7 @@ Errors: instructions from device memory (arm64) ENOSYS data abort outside memslots with no syndrome info and KVM_CAP_ARM_NISV_TO_USER not enabled (arm64) - EPERM SVE feature set but not finalized (arm64) + EPERM SVE or SME feature set but not finalized (arm64) =3D=3D=3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =20 This ioctl is used to run a guest virtual cpu. While there are no @@ -2606,11 +2606,11 @@ Specifically: =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =3D= =3D=3D=3D=3D=3D=3D=3D=3D =3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D =20 .. [1] These encodings are not accepted for SVE-enabled vcpus. See - :ref:`KVM_ARM_VCPU_INIT`. + :ref:`KVM_ARM_VCPU_INIT`. They are also not accepted when SME is + enabled without SVE and the vcpu is in streaming mode. =20 The equivalent register content can be accessed via bits [127:0] of - the corresponding SVE Zn registers instead for vcpus that have SVE - enabled (see below). + the corresponding SVE Zn registers in these cases (see below). =20 arm64 CCSIDR registers are demultiplexed by CSSELR value:: =20 @@ -2641,24 +2641,39 @@ arm64 SVE registers have the following bit patterns= :: 0x6050 0000 0015 060 FFR bits[256*slice + 255 : 256*sli= ce] 0x6060 0000 0015 ffff KVM_REG_ARM64_SVE_VLS pseudo-regis= ter =20 -Access to register IDs where 2048 * slice >=3D 128 * max_vq will fail with -ENOENT. max_vq is the vcpu's maximum supported vector length in 128-bit -quadwords: see [2]_ below. +arm64 SME registers have the following bit patterns: + + 0x6080 0000 0017 00 ZA.H[n] bits[2048*slice + 2047 : 2= 048*slice] + 0x6060 0000 0017 0100 ZT0 + 0x6060 0000 0017 fffe KVM_REG_ARM64_SME_VLS pseudo-regis= ter + +Access to Z, P, FFR or ZA register IDs where 2048 * slice >=3D 128 * +max_vq will fail with ENOENT. max_vq is the vcpu's current maximum +supported vector length in 128-bit quadwords: see [2]_ below. 
+ +Changing the value of SVCR.SM will result in the contents of +the Z, P and FFR registers being reset to 0. When restoring the +values of these registers for a VM with SME support it is +important that SVCR.SM be configured first. + +Access to the ZA and ZT0 registers is only available if SVCR.ZA is set +to 1. =20 These registers are only accessible on vcpus for which SVE is enabled. See KVM_ARM_VCPU_INIT for details. =20 -In addition, except for KVM_REG_ARM64_SVE_VLS, these registers are not -accessible until the vcpu's SVE configuration has been finalized -using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). See KVM_ARM_VCPU_INIT -and KVM_ARM_VCPU_FINALIZE for more information about this procedure. +In addition, except for KVM_REG_ARM64_SVE_VLS and +KVM_REG_ARM64_SME_VLS, these registers are not accessible until the +vcpu's SVE and SME configuration has been finalized using +KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC). See KVM_ARM_VCPU_INIT and +KVM_ARM_VCPU_FINALIZE for more information about this procedure. =20 -KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector -lengths supported by the vcpu to be discovered and configured by -userspace. When transferred to or from user memory via KVM_GET_ONE_REG -or KVM_SET_ONE_REG, the value of this register is of type -__u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the set of vector lengths as -follows:: +KVM_REG_ARM64_SVE_VLS and KVM_ARM64_VCPU_SME_VLS are pseudo-registers +that allows the set of vector lengths supported by the vcpu to be +discovered and configured by userspace. When transferred to or from +user memory via KVM_GET_ONE_REG or KVM_SET_ONE_REG, the value of this +register is of type __u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the +set of vector lengths as follows:: =20 __u64 vector_lengths[KVM_ARM64_SVE_VLS_WORDS]; =20 @@ -2670,19 +2685,25 @@ follows:: /* Vector length vq * 16 bytes not supported */ =20 .. [2] The maximum value vq for which the above condition is true is - max_vq. This is the maximum vector length available to the guest on - this vcpu, and determines which register slices are visible through - this ioctl interface. + max_vq. This is the maximum vector length currently available to + the guest on this vcpu, and determines which register slices are + visible through this ioctl interface. + + If SME is supported then the max_vq used for the Z and P registers + while SVCR.SM is 1 this vector length will be the maximum SME + vector length max_vq_sme available for the guest, otherwise it + will be the maximum SVE vector length max_vq_sve available. =20 (See Documentation/arch/arm64/sve.rst for an explanation of the "vq" nomenclature.) =20 -KVM_REG_ARM64_SVE_VLS is only accessible after KVM_ARM_VCPU_INIT. -KVM_ARM_VCPU_INIT initialises it to the best set of vector lengths that -the host supports. +KVM_REG_ARM64_SVE_VLS and KVM_REG_ARM_SME_VLS are only accessible +after KVM_ARM_VCPU_INIT. KVM_ARM_VCPU_INIT initialises them to the +best set of vector lengths that the host supports. =20 -Userspace may subsequently modify it if desired until the vcpu's SVE -configuration is finalized using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). +Userspace may subsequently modify these registers if desired until the +vcpu's SVE and SME configuration is finalized using +KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC). =20 Apart from simply removing all vector lengths from the host set that exceed some value, support for arbitrarily chosen sets of vector lengths @@ -2690,8 +2711,8 @@ is hardware-dependent and may not be available. 
Atte= mpting to configure an invalid set of vector lengths via KVM_SET_ONE_REG will fail with EINVAL. =20 -After the vcpu's SVE configuration is finalized, further attempts to -write this register will fail with EPERM. +After the vcpu's SVE or SME configuration is finalized, further +attempts to write these registers will fail with EPERM. =20 arm64 bitmap feature firmware pseudo-registers have the following bit patt= ern:: =20 @@ -3490,6 +3511,7 @@ The initial values are defined as: - General Purpose registers, including PC and SP: set to 0 - FPSIMD/NEON registers: set to 0 - SVE registers: set to 0 + - SME registers: set to 0 - System registers: Reset to their architecturally defined values as for a warm reset to EL1 (resp. SVC) or EL2 (in the case of EL2 being enabled). @@ -3533,7 +3555,7 @@ Possible features: =20 - KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only). Depends on KVM_CAP_ARM_SVE. - Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): =20 * After KVM_ARM_VCPU_INIT: =20 @@ -3541,7 +3563,7 @@ Possible features: initial value of this pseudo-register indicates the best set of vector lengths possible for a vcpu on this host. =20 - * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): =20 - KVM_RUN and KVM_GET_REG_LIST are not available; =20 @@ -3554,11 +3576,40 @@ Possible features: KVM_SET_ONE_REG, to modify the set of vector lengths available for the vcpu. =20 - * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): =20 - the KVM_REG_ARM64_SVE_VLS pseudo-register is immutable, and can no longer be written using KVM_SET_ONE_REG. =20 + - KVM_ARM_VCPU_SME: Enables SME for the CPU (arm64 only). + Depends on KVM_CAP_ARM_SME. + Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): + + * After KVM_ARM_VCPU_INIT: + + - KVM_REG_ARM64_SME_VLS may be read using KVM_GET_ONE_REG: the + initial value of this pseudo-register indicates the best set of + vector lengths possible for a vcpu on this host. + + * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): + + - KVM_RUN and KVM_GET_REG_LIST are not available; + + - KVM_GET_ONE_REG and KVM_SET_ONE_REG cannot be used to access + the scalable architectural SVE registers + KVM_REG_ARM64_SVE_ZREG(), KVM_REG_ARM64_SVE_PREG() or + KVM_REG_ARM64_SVE_FFR, the matrix register + KVM_REG_ARM64_SME_ZA() or the LUT register KVM_REG_ARM64_ZT(); + + - KVM_REG_ARM64_SME_VLS may optionally be written using + KVM_SET_ONE_REG, to modify the set of vector lengths available + for the vcpu. + + * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): + + - the KVM_REG_ARM64_SME_VLS pseudo-register is immutable, and can + no longer be written using KVM_SET_ONE_REG. + - KVM_ARM_VCPU_HAS_EL2: Enable Nested Virtualisation support, booting the guest from EL2 instead of EL1. Depends on KVM_CAP_ARM_EL2. 
@@ -5143,11 +5194,12 @@ Errors: =20 Recognised values for feature: =20 - =3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D - arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE) - =3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D + =3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + arm64 KVM_ARM_VCPU_VEC (requires KVM_CAP_ARM_SVE or KVM_CAP_ARM_SME) + arm64 KVM_ARM_VCPU_SVE (alias for KVM_ARM_VCPU_VEC) + =3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =20 -Finalizes the configuration of the specified vcpu feature. +Finalizes the configuration of the specified vcpu features. =20 The vcpu must already have been initialised, enabling the affected feature= , by means of a successful :ref:`KVM_ARM_VCPU_INIT ` call wi= th the --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 330982D3A75; Tue, 23 Dec 2025 01:22:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452943; cv=none; b=bZ2GP4gLSwYZ8U9vCXaTKXmNweGkKdwKZT5wYSQS4zzilYN16n/k7xBmQ94o702jmmfGuT5D3K8F36yFQkmArursi8aIS3hOqIj5y0ujK3ItcvnhbdEEqZARvqhJf+4IXrs9K8ueYeUk/pUb8Owj7YOBFed/3XXRnbZ2l1WVz7g= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452943; c=relaxed/simple; bh=A4NZGrg+TEz6932+zi/k6JqFarqVN/v9J+9+Wy94BBA=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=L0Ip4xCl5lrFYgEo7NwdXU4EyvdzyZVtyLJL3XGTNQOKPXB+IhV3LzN+IjlBkErZ6bgEzSrts9zQHvxAIIrhM6dv4cC8pMgsyF5RzBGOV8x5FGXFt4z+q4dzUuOwme7IyUIp7nsRC32EQebBtv/5ldEzK5dYSFrG/CKyt4rXdTQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=qyZ69Kcb; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="qyZ69Kcb" Received: by smtp.kernel.org (Postfix) with ESMTPSA id F1E3CC4CEF1; Tue, 23 Dec 2025 01:22:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452941; bh=A4NZGrg+TEz6932+zi/k6JqFarqVN/v9J+9+Wy94BBA=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=qyZ69Kcb0JMVMtJ9LiFOsKQ4MTUhDH6m8LY2vS1rmL+JXQcp2UKibKlIMV5kH6Jeq pd5oTKhoCeVlE/+OBJgaYQFdml3cpRaqH1tupEiu1ix3Otz/UwIO3xvAOPm8FnsdB1 ++SlGSHV/8OvucH8cMLUz2xok6ikXjV7/sI8AYVezlAmpyNdkWhaXHOiYHbAfj6ksU Eh/lynQWiBt61kJOuFvVt5bDUTUyuyojXHMlCyawMRJ9CibwA7j7a2nYQ17B2PhGiF ruF9fcBPBIEOcjmnt4oYwnl9XJ7yDp7lrgOu4pBuk0mbGlxHh4UfhXVNzqSPJry/3C Zw+8iOPms44kg== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:05 +0000 Subject: [PATCH v9 11/30] KVM: arm64: Define internal features for SME Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: 
List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-11-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=3264; i=broonie@kernel.org; h=from:subject:message-id; bh=A4NZGrg+TEz6932+zi/k6JqFarqVN/v9J+9+Wy94BBA=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6LFCe39WIuuDEtOO/K4RmDUXHyHJaBm6U/x 0hrqF0DjneJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuiwAKCRAk1otyXVSH 0G/5B/9YxeU6cxwTbk61Pby5uUM1Mk4lRYZ+QxXUVDvTuKJSTgNEp1Y9UfPhG1kE+ynnpYcX1pm 3jXPtAU01yEgKY6dIwtN88EGpYQmkUhvw8WhmGxzYGhJcgSjznOarreIeeRdz6FF3Lj9BLu2iZo prBeSHs5RDFzNjovnKJtV+UK8dokaCRfWVy7KfbV9WGY0UCHped1ZH4RoFy2gtgH8D8fWX0U1QX 8Sk2f1DUUmvsu+fX7/VPyIzCfILM83mnRNMn4tGp0LQsIvbKQmNQ47xmBHOjnHqRDSVK/qvBMaC wZP9w+ezCgGRlpqCCXkWsbeJHEuAaZxsLRK1d8InmdtIjwwA X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB In order to simplify interdependencies in the rest of the series define the feature detection for SME and it's subfeatures. Due to the need for vector length configuration we define a flag for SME like for SVE. We also have two subfeatures which add architectural state, FA64 and SME2, which are configured via the normal ID register scheme. Also provide helpers which check if the vCPU is in streaming mode or has ZA enabled. 
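To make the intended use of these helpers concrete, a hypothetical consumer
might pick the vector length that currently applies to the Z and P registers
as below; vcpu_sve_max_vl()/vcpu_sme_max_vl() are only added later in the
series, so this is a sketch of the pattern rather than code from this patch.

/* Hypothetical illustration: in streaming mode the streaming (SME) vector
 * length governs the Z/P registers, otherwise the SVE one does. */
static unsigned int vcpu_cur_z_reg_vl(struct kvm_vcpu *vcpu)
{
	if (vcpu_has_sme(vcpu) && vcpu_in_streaming_mode(vcpu))
		return vcpu_sme_max_vl(vcpu);

	return vcpu_sve_max_vl(vcpu);
}
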
Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 35 ++++++++++++++++++++++++++++++++++- arch/arm64/kvm/sys_regs.c | 2 +- 2 files changed, 35 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 0f3d26467bf0..0816180dc551 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -353,6 +353,8 @@ struct kvm_arch { #define KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS 10 /* Unhandled SEAs are taken to userspace */ #define KVM_ARCH_FLAG_EXIT_SEA 11 + /* SME exposed to guest */ +#define KVM_ARCH_FLAG_GUEST_HAS_SME 12 unsigned long flags; =20 /* VM-wide vCPU feature set */ @@ -1062,7 +1064,16 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif =20 -#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) +#define kvm_has_sme(kvm) (system_supports_sme() && \ + test_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &(kvm)->arch.flags)) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme(vcpu) kvm_has_sme(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme(vcpu) kvm_has_sme((vcpu)->kvm) +#endif + +#define vcpu_has_vec(vcpu) (vcpu_has_sve(vcpu) || vcpu_has_sme(vcpu)) =20 #ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ @@ -1602,6 +1613,28 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64= val); #define kvm_has_sctlr2(k) \ (kvm_has_feat((k), ID_AA64MMFR3_EL1, SCTLRX, IMP)) =20 +#define kvm_has_fa64(k) \ + (system_supports_fa64() && \ + kvm_has_feat((k), ID_AA64SMFR0_EL1, FA64, IMP)) + +#define kvm_has_sme2(k) \ + (system_supports_sme2() && \ + kvm_has_feat((k), ID_AA64PFR1_EL1, SME, SME2)) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme2(vcpu) kvm_has_sme2(kern_hyp_va((vcpu)->kvm)) +#define vcpu_has_fa64(vcpu) kvm_has_fa64(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme2(vcpu) kvm_has_sme2((vcpu)->kvm) +#define vcpu_has_fa64(vcpu) kvm_has_fa64((vcpu)->kvm) +#endif + +#define vcpu_in_streaming_mode(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_SM_MASK) + +#define vcpu_za_enabled(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_ZA_MASK) + static inline bool kvm_arch_has_irq_bypass(void) { return true; diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index c8fd7c6a12a1..3576e69468db 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1945,7 +1945,7 @@ static unsigned int sve_visibility(const struct kvm_v= cpu *vcpu, static unsigned int sme_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { - if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, IMP)) + if (vcpu_has_sme(vcpu)) return 0; =20 return REG_HIDDEN; --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F19252D5C83; Tue, 23 Dec 2025 01:22:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452948; cv=none; b=tb4d+QZbBi9vJSTmE2PBMQ1gyAfnnLah5BpBr+xOFMWoEokI3INgbBsjcQtfc8mtJnwqL+kmYdA3JKfBALaUuL2x1W9tcRBuLWSwkwLM4TdBd1DTK2cYg1az23AVx1GBcSPagejj2Dxvi0V9GM902iZ53nA5NqWyQi622SsM4YE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452948; c=relaxed/simple; bh=gfp0s8vq4GgXX2OhDBoI78/711AOqvXjX/EjV4hM7Ew=; 
h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=D0dsCLg3nS0p3FEA9HgHwL3h9aEVdU06UNwubGIktqZ+MtxNiD8pWrxDXBvykp0XInjgBWHjYwHFEW+4sjFh9cvk4952hhcqfmvLXklR6NdlJUPRKzfNKhGQouIVF6V7V/VCUOoyKNdF/HRqXXvw/VjohyIHJM2OKaHRS1iSBiU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=O16FGqKT; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="O16FGqKT" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3A7D9C116C6; Tue, 23 Dec 2025 01:22:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452947; bh=gfp0s8vq4GgXX2OhDBoI78/711AOqvXjX/EjV4hM7Ew=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=O16FGqKTNdczc4/lSuwlD8138gQJMp4jx7y/Ez2gACQQQQOeVbrimke1N8wcOMoWW HJa+cUenUXOtb977SVm42D9WFm96+Sc0R334clTxAw/YhFxqtOaRJ3S8Z8kNKYCi8W OmqdKB56h413grtXnz5KyTp3nwAhp1i5J/DYilqmE+LfPjR/VhQsnOp6rFO7R2Jbxp e4CEOaiE0NIR05xpxYTSFn1euO6OCZl2p99U8vTpA5+59WKBpXlieg+c6p7czbqaSL UYOF6Oetl4xHRptAo1TRRwilN6T5dgCMcCynUTzohRWk0E9OHzfSNtKIqPKbybMjmN /29wPbOYOaVlQ== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:06 +0000 Subject: [PATCH v9 12/30] KVM: arm64: Rename sve_state_reg_region Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-12-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=2352; i=broonie@kernel.org; h=from:subject:message-id; bh=gfp0s8vq4GgXX2OhDBoI78/711AOqvXjX/EjV4hM7Ew=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6LyU2hP4tGR98SafeORWHn3VqVtWLVYEDpd 9CqFj8boKKJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnuiwAKCRAk1otyXVSH 0NW2CACHCG80SVmspz99x7BNFLI22yJy0+S+koe679BIRDvbUunQDWWrfFfutWGJd1iIEAvK3lz 1/JEhESq4l3slRuO6tHz7i6C+wTnYkKCKwZhkqAD9Nc8KvxlT+cAp6VjMNREnKwIdDel2kGMJRb Bs59xm00prmh90JqAMmJiVM0+fcJrZLjkLgqhgUaAY68qq/8OkCTTBRk8lztbdxCa5tzsR/7nfE 2j6l3m3lLniNnkryNJ3tkDJpxhNmvuY3aL29KG4pGElJZ+Qo7dV6o2Gt3vkukCSzLR9wNrDBZ8s ff2/OUBH70tHxthXBek5qXQtyI4+QLubkHoYa19SIqi8gyuw X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB As for SVE we will need to pull parts of dynamically sized registers out of a block of memory for SME so we will use a similar code pattern for this. Rename the current struct sve_state_reg_region in preparation for this. No functional change. 
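The copy pattern the renamed struct supports is sketched below; it mirrors the
existing SVE get path rather than quoting it, so treat the helper name as
illustrative.

#include <linux/uaccess.h>

/* Illustrative only: copy the live part of a scalable register out of the
 * kernel backing block and zero the trailing padding userspace expects. */
static int vec_reg_copy_to_user(char __user *uptr, const void *state,
				const struct vec_state_reg_region *r)
{
	if (copy_to_user(uptr, state + r->koffset, r->klen) ||
	    clear_user(uptr + r->klen, r->upad))
		return -EFAULT;

	return 0;
}
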
Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/kvm/guest.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index d15aa2da1891..8c3405b5d7b1 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -404,9 +404,9 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) */ #define vcpu_sve_slices(vcpu) 1 =20 -/* Bounds of a single SVE register slice within vcpu->arch.sve_state */ -struct sve_state_reg_region { - unsigned int koffset; /* offset into sve_state in kernel memory */ +/* Bounds of a single register slice within vcpu->arch.s[mv]e_state */ +struct vec_state_reg_region { + unsigned int koffset; /* offset into s[mv]e_state in kernel memory */ unsigned int klen; /* length in kernel memory */ unsigned int upad; /* extra trailing padding in user memory */ }; @@ -415,7 +415,7 @@ struct sve_state_reg_region { * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy */ -static int sve_reg_to_region(struct sve_state_reg_region *region, +static int sve_reg_to_region(struct vec_state_reg_region *region, struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { @@ -485,7 +485,7 @@ static int sve_reg_to_region(struct sve_state_reg_regio= n *region, static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; char __user *uptr =3D (char __user *)reg->addr; =20 /* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ @@ -511,7 +511,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; const char __user *uptr =3D (const char __user *)reg->addr; =20 /* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 30BDB221FCC; Tue, 23 Dec 2025 01:22:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452952; cv=none; b=rLyb4eKVCyDRYnGwjiU8eXKDCdXddy7BdQ9zeZGir1vCgZFW+2J2HDKQTAqkVPChUlJtYaSchpFAJNlWCs2VHga9UGruzYzzdQURoDWQmDb2H74/D7w7OowHEINbEYCwxahf4k6aSLT9WcDLdJawMQRy31v2iX3rEk7UERQjVyw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452952; c=relaxed/simple; bh=3jPYWgE8QXiAVkyVE4qnWscHg7eNT+ByK51GsqGhdIs=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=c3LNeUq/Bq7JmDMpX3kNgg1miVzV33cOxFcjlaBXSJvtTIWPa2B4RXidd4pMOoJlOPyFLTAXFs5NoyibagvKe6NLtd5nwi46l0BCi9zaqs9A7CNL8akmnW/ZUuwtALyMIVN4ftzYaJj+nQ8oeYbnOQp84dbXDduzPj2xzFKbtJo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=TuBXtgo0; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="TuBXtgo0" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E9991C4CEF1; Tue, 23 Dec 2025 01:22:27 
+0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452951; bh=3jPYWgE8QXiAVkyVE4qnWscHg7eNT+ByK51GsqGhdIs=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=TuBXtgo0rm9XW/eNw/CQfsFuZu/KY2uB5tvdU9VM+slBsB7p1pp0jMG8wrzi+xjMj 1e1ISygjBuOlwjqLhOS/K+USr/ndHVlja9Pd8plTfHjvFzfE1JeCqGnbKtBPvKn1JD CLguQctMi7YDNO60BfpfnZ1TAB3JWwBYgNQyycrxXybkSASj18y3EKpwm0cB1RvUpG frqLp2KHF4ffCwNxHn5Lxl7l0y4WZJ97wtxbL30F2roJ+YiBKifvFTadQQRYE6zI3N Yf7od1mtD9K+rBHiNNZwr/Z1fvF2H2R0Pu9st7EjroMCExYYG3RMrs6dg2SHlCrhNj +3etV8snmy1aA== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:07 +0000 Subject: [PATCH v9 13/30] KVM: arm64: Store vector lengths in an array Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-13-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=11792; i=broonie@kernel.org; h=from:subject:message-id; bh=3jPYWgE8QXiAVkyVE4qnWscHg7eNT+ByK51GsqGhdIs=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6M9UL73xNMaycdUMejSG7qAV1QzG2HMbrJr 2z/RTE6RtGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnujAAKCRAk1otyXVSH 0DzcB/0QOAxrJ1VeLE6ZZJHcraLLmX9G8IxEAZ+d2usaxW/SixgELrV63akP86PbUP9gJq9sCsO AgJBn9d/GzyLUSt4cxOn7zFoHOKOKPgusf0m1BY4INW8nLazy1CPJodJzFBb+spzrBb7CkaOMnY 1Z5uxXSmdRpT3nTFQ03PXDPaoSKPjKm4fllVm0zbVh4/x8snIxQWOLvNSJlrqsgF3YXcYu080rO MdnC59B6GWUsAtHkGtQ61SXUjT5yWXe/VmBRkezxW8zTXkO3jC7+Mco7joosEqYPQ5RdY20DHzh yXJDZBSjN3N0CgQ7CfhJSBjRSPblMF+IFiYb3LKzxRd/MNh4 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME adds a second vector length configured in a very similar way to the SVE vector length, in order to facilitate future code sharing for SME refactor our storage of vector lengths to use an array like the host does. We do not yet take much advantage of this so the intermediate code is not as clean as might be. No functional change. 
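As a sketch of the sharing this enables, per-type setup can collapse into a
single loop over enum vec_type; the helper name below is hypothetical and not
part of this patch.

/* Hypothetical illustration: one loop covers SVE and SME now that the
 * maximum vector lengths live in an array indexed by vector type. */
static void vcpu_init_max_vls(struct kvm_vcpu *vcpu)
{
	enum vec_type type;

	for (type = 0; type < ARM64_VEC_MAX; type++)
		vcpu->arch.max_vl[type] = kvm_max_vl[type];
}
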
Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 17 +++++++++++------ arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/include/asm/kvm_pkvm.h | 2 +- arch/arm64/kvm/fpsimd.c | 2 +- arch/arm64/kvm/guest.c | 6 +++--- arch/arm64/kvm/hyp/include/hyp/switch.h | 6 +++--- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 6 +++--- arch/arm64/kvm/hyp/nvhe/pkvm.c | 7 ++++--- arch/arm64/kvm/reset.c | 22 +++++++++++----------- 9 files changed, 38 insertions(+), 32 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 0816180dc551..3a3330b2a6a9 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -77,8 +77,10 @@ enum kvm_mode kvm_get_mode(void); static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; }; #endif =20 -extern unsigned int __ro_after_init kvm_sve_max_vl; -extern unsigned int __ro_after_init kvm_host_sve_max_vl; +extern unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX]; +extern unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; +DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); + int __init kvm_arm_init_sve(void); =20 u32 __attribute_const__ kvm_target_cpu(void); @@ -811,7 +813,7 @@ struct kvm_vcpu_arch { */ void *sve_state; enum fp_type fp_type; - unsigned int sve_max_vl; + unsigned int max_vl[ARM64_VEC_MAX]; =20 /* Stage 2 paging state used by the hardware on next switch */ struct kvm_s2_mmu *hw_mmu; @@ -1098,9 +1100,12 @@ struct kvm_vcpu_arch { =20 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) + sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE])) + +#define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[typ= e]) + +#define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE) =20 -#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) =20 #define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? 
ZCR_EL2 : ZCR_EL1) @@ -1119,7 +1124,7 @@ struct kvm_vcpu_arch { __size_ret; \ }) =20 -#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_= max_vl) +#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_= vl[ARM64_VEC_SVE]) =20 /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_= hyp.h index 76ce2b94bd97..0317790dd3b7 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -146,6 +146,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val); =20 extern unsigned long kvm_nvhe_sym(__icache_flags); extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits); -extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl); +extern unsigned int kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_MAX]); =20 #endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm= _pkvm.h index 0aecd4ac5f45..0697c88f2210 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -167,7 +167,7 @@ static inline size_t pkvm_host_sve_state_size(void) return 0; =20 return size_add(sizeof(struct cpu_sve_state), - SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl))); + SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]))); } =20 struct pkvm_mapping { diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 9158353d8be3..1f4fcc8b5554 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -75,7 +75,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) */ fp_state.st =3D &vcpu->arch.ctxt.fp_regs; fp_state.sve_state =3D vcpu->arch.sve_state; - fp_state.sve_vl =3D vcpu->arch.sve_max_vl; + fp_state.sve_vl =3D vcpu->arch.max_vl[ARM64_VEC_SVE]; fp_state.sme_state =3D NULL; fp_state.svcr =3D __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); fp_state.fpmr =3D __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 8c3405b5d7b1..456ef61b6ed5 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -318,7 +318,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT; =20 - if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) return -EINVAL; =20 memset(vqs, 0, sizeof(vqs)); @@ -356,7 +356,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (vq_present(vqs, vq)) max_vq =3D vq; =20 - if (max_vq > sve_vq_from_vl(kvm_sve_max_vl)) + if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE])) return -EINVAL; =20 /* @@ -375,7 +375,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) return -EINVAL; =20 /* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.sve_max_vl =3D sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[ARM64_VEC_SVE] =3D sve_vl_from_vq(max_vq); =20 return 0; } diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/i= nclude/hyp/switch.h index c5d5e5b86eaf..9ce53524d664 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -458,8 +458,8 @@ static inline void __hyp_sve_save_host(void) struct cpu_sve_state *sve_state =3D *host_data_ptr(sve_state); =20 sve_state->zcr_el1 =3D read_sysreg_el1(SYS_ZCR); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_save_state(sve_state->sve_regs + 
sve_ffr_offset(kvm_host_sve_max_vl= ), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZC= R_EL2); + __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[ARM= 64_VEC_SVE]), &sve_state->fpsr, true); } @@ -514,7 +514,7 @@ static inline void fpsimd_lazy_switch_to_host(struct kv= m_vcpu *vcpu) zcr_el2 =3D vcpu_sve_max_vq(vcpu) - 1; write_sysreg_el2(zcr_el2, SYS_ZCR); } else { - zcr_el2 =3D sve_vq_from_vl(kvm_host_sve_max_vl) - 1; + zcr_el2 =3D sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1; write_sysreg_el2(zcr_el2, SYS_ZCR); =20 zcr_el1 =3D vcpu_sve_max_vq(vcpu) - 1; diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index a7c689152f68..208e9042aca4 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -34,7 +34,7 @@ static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true= ); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZC= R_EL2); } =20 static void __hyp_sve_restore_host(void) @@ -50,8 +50,8 @@ static void __hyp_sve_restore_host(void) * that was discovered, if we wish to use larger VLs this will * need to be revisited. */ - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max= _vl), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZC= R_EL2); + __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[= ARM64_VEC_SVE]), &sve_state->fpsr, true); write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR); diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index b402dcb7691e..f4ec6695a6a5 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -20,7 +20,7 @@ unsigned long __icache_flags; /* Used by kvm_get_vttbr(). */ unsigned int kvm_arm_vmid_bits; =20 -unsigned int kvm_host_sve_max_vl; +unsigned int kvm_host_max_vl[ARM64_VEC_MAX]; =20 /* * The currently loaded hyp vCPU for each physical CPU. Used in protected = mode @@ -450,7 +450,8 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp= _vcpu, struct kvm_vcpu *h } =20 /* Limit guest vector length to the maximum supported by the host. 
*/ - sve_max_vl =3D min(READ_ONCE(host_vcpu->arch.sve_max_vl), kvm_host_sve_ma= x_vl); + sve_max_vl =3D min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]), + kvm_host_max_vl[ARM64_VEC_SVE]); sve_state_size =3D sve_state_size_from_vl(sve_max_vl); sve_state =3D kern_hyp_va(READ_ONCE(host_vcpu->arch.sve_state)); =20 @@ -464,7 +465,7 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp= _vcpu, struct kvm_vcpu *h goto err; =20 vcpu->arch.sve_state =3D sve_state; - vcpu->arch.sve_max_vl =3D sve_max_vl; + vcpu->arch.max_vl[ARM64_VEC_SVE] =3D sve_max_vl; =20 return 0; err: diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index f7c63e145d54..a8684a1346ec 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -32,7 +32,7 @@ =20 /* Maximum phys_shift supported for any VM on this host */ static u32 __ro_after_init kvm_ipa_limit; -unsigned int __ro_after_init kvm_host_sve_max_vl; +unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; =20 /* * ARMv8 Reset Values @@ -46,14 +46,14 @@ unsigned int __ro_after_init kvm_host_sve_max_vl; #define VCPU_RESET_PSTATE_SVC (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \ PSR_AA32_I_BIT | PSR_AA32_F_BIT) =20 -unsigned int __ro_after_init kvm_sve_max_vl; +unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX]; =20 int __init kvm_arm_init_sve(void) { if (system_supports_sve()) { - kvm_sve_max_vl =3D sve_max_virtualisable_vl(); - kvm_host_sve_max_vl =3D sve_max_vl(); - kvm_nvhe_sym(kvm_host_sve_max_vl) =3D kvm_host_sve_max_vl; + kvm_max_vl[ARM64_VEC_SVE] =3D sve_max_virtualisable_vl(); + kvm_host_max_vl[ARM64_VEC_SVE] =3D sve_max_vl(); + kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SVE]) =3D kvm_host_max_vl[ARM64_V= EC_SVE]; =20 /* * The get_sve_reg()/set_sve_reg() ioctl interface will need @@ -61,16 +61,16 @@ int __init kvm_arm_init_sve(void) * order to support vector lengths greater than * VL_ARCH_MAX: */ - if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX)) - kvm_sve_max_vl =3D VL_ARCH_MAX; + if (WARN_ON(kvm_max_vl[ARM64_VEC_SVE] > VL_ARCH_MAX)) + kvm_max_vl[ARM64_VEC_SVE] =3D VL_ARCH_MAX; =20 /* * Don't even try to make use of vector lengths that * aren't available on all CPUs, for now: */ - if (kvm_sve_max_vl < sve_max_vl()) + if (kvm_max_vl[ARM64_VEC_SVE] < sve_max_vl()) pr_warn("KVM: SVE vector length for guests limited to %u bytes\n", - kvm_sve_max_vl); + kvm_max_vl[ARM64_VEC_SVE]); } =20 return 0; @@ -78,7 +78,7 @@ int __init kvm_arm_init_sve(void) =20 static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) { - vcpu->arch.sve_max_vl =3D kvm_sve_max_vl; + vcpu->arch.max_vl[ARM64_VEC_SVE] =3D kvm_max_vl[ARM64_VEC_SVE]; =20 /* * Userspace can still customize the vector lengths by writing @@ -99,7 +99,7 @@ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) size_t reg_sz; int ret; =20 - vl =3D vcpu->arch.sve_max_vl; + vl =3D vcpu->arch.max_vl[ARM64_VEC_SVE]; =20 /* * Responsibility for these properties is shared between --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 690522EFDAF; Tue, 23 Dec 2025 01:22:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452956; cv=none; 
b=oqhNpR+nZ1Kx64nHGnlooXOTf2/uOSuFQeMkzO9WtNwHPK5f92mSSmt/V80u5Kt7gIrspc6HqC1z1XORuLePh7ndRJ7uvLe3TxU5NGGQVb0/rx47CZQaoGyfdHLmTh/2JeRyuPFHkiRFfQGK64DFoknKMVfE2Cb7zJ+Cx4K9YvA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452956; c=relaxed/simple; bh=KZvs86ZuSSJ1GlLBBxNenh+ZiHNvwUrTRot0VdfnjeE=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=EmS2mDJgyo4vXKpjqmwvffXAfK48A4Jtc/RjfS5fVtgQ7O5DMiWENgwmO+qHo0R4ByVJFjjLljgT0sG0IHGFFPwvjS5fVLPA6iLRLl5tzgNcf1mx+qD+c5wCm3c3zDq9qFlZpjfeQ2jA95VSvATDSPMbIss15hTAgo1bqtrsNnc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=U0URwFcT; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="U0URwFcT" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 33F82C116C6; Tue, 23 Dec 2025 01:22:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452956; bh=KZvs86ZuSSJ1GlLBBxNenh+ZiHNvwUrTRot0VdfnjeE=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=U0URwFcTLHsWlLf4wiX0xgN6fzxq0ax72VA6bPVx+68lgO9wT42mzh4rwOH8UUnX4 KGAZxOR/FKAiDEBEAQz0LJdIlc4/vacqanxu3J1/NNm0fovkywC2ve0Y3C732SMZ2K 3UXX8j4DioLeOglLpXypYyZuyxzuS6AO3s1Hkn3QyVsxJ7lH3AgkDCsyr2jgg6hVQl X+5ueOWpOY0nNrIBOLdo9siFBDIKTYbkGHVW5rTFB1VBYbn/ZrXJ6Xl6hVzMoOiXgy w/hn75hAg4qWJFyq5H7OarNEE8vF6sqP85roa5Z7obocVNX2W0P8hEQ8V0WaXgwqo7 Av0ubc834yetg== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:08 +0000 Subject: [PATCH v9 14/30] KVM: arm64: Implement SME vector length configuration Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-14-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=10155; i=broonie@kernel.org; h=from:subject:message-id; bh=KZvs86ZuSSJ1GlLBBxNenh+ZiHNvwUrTRot0VdfnjeE=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6N+ULK6eQqqqNF3Sg67wHIYmChBi6eWWRzr GD+iSoG16yJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnujQAKCRAk1otyXVSH 0HYDB/9WUuvfLb11Amoz7wQ6qs/x8jTH2ClrUc0PHwHna21HeatE0nVvu5tMPz6kicai5w/0l8P YT2FtM8exB7O+f0LYG8PyEQ6SBFQGEcz6PI/rq9v7AvanJ/6kEX9fVCmCUUASSn2b0ehmCpS3ZM gIaxCbLQXb5BjQD+TtEMi7mtiA3YmZRo04r6uuCg18NJ4qyxzSM2xEwggbxnrG4sgzG9O+io1jR ZEYKr+jPO8dtZPZSRYzfL2FxRnZpNdlVxPxzPA8v7ODLtmr4LzXd4WuSDzTlDjli7/VGsT2PX0P 03vQKWZZCfpdcw+dJ61iJ5Ue7VZlSwYm+QVr854M3TzZT0ar X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME implements a vector length which architecturally looks very similar to that for SVE, configured in a very similar manner. 
This controls the vector length used for the ZA matrix register, and for the SVE vector and predicate registers when in streaming mode. The only substantial difference is that unlike SVE the architecture does not guarantee that any particular vector length will be implemented. Configuration for SME vector lengths is done using a virtual register as for SVE, hook up the implementation for the virtual register. Since we do not yet have support for any of the new SME registers stub register access functions are provided that only allow VL configuration. These will be extended as the SME specific registers, as for SVE. Since vq_available() is currently only defined for CONFIG_SVE add a stub for builds where that is disabled. Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 1 + arch/arm64/include/asm/kvm_host.h | 24 ++++++++++-- arch/arm64/include/uapi/asm/kvm.h | 9 +++++ arch/arm64/kvm/guest.c | 82 +++++++++++++++++++++++++++++++----= ---- 4 files changed, 96 insertions(+), 20 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsim= d.h index 146c1af55e22..8b0840bd7e14 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -340,6 +340,7 @@ static inline int sve_max_vl(void) return -EINVAL; } =20 +static inline bool vq_available(enum vec_type type, unsigned int vq) { ret= urn false; } static inline bool sve_vq_available(unsigned int vq) { return false; } =20 static inline void sve_user_disable(void) { BUILD_BUG(); } diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 3a3330b2a6a9..b41700df3ce9 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -810,8 +810,15 @@ struct kvm_vcpu_arch { * low 128 bits of the SVE Z registers. When the core * floating point code saves the register state of a task it * records which view it saved in fp_type. + * + * If SME support is also present then it provides an + * alternative view of the SVE registers accessed as for the Z + * registers when PSTATE.SM is 1, plus an additional set of + * SME specific state in the matrix register ZA and LUT + * register ZT0. */ void *sve_state; + void *sme_state; enum fp_type fp_type; unsigned int max_vl[ARM64_VEC_MAX]; =20 @@ -1098,14 +1105,23 @@ struct kvm_vcpu_arch { =20 #define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs) =20 -/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ -#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE])) - #define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[typ= e]) =20 #define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE) +#define vcpu_sme_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SME) + +#define vcpu_sve_max_vl(vcpu) ((vcpu)->arch.max_vl[ARM64_VEC_SVE]) +#define vcpu_sme_max_vl(vcpu) ((vcpu)->arch.max_vl[ARM64_VEC_SME]) =20 +#define vcpu_max_vl(vcpu) max(vcpu_sve_max_vl(vcpu), vcpu_sme_max_vl(vcpu)) +#define vcpu_max_vq(vcpu) sve_vq_from_vl(vcpu_max_vl(vcpu)) + +#define vcpu_cur_sve_vl(vcpu) (vcpu_in_streaming_mode(vcpu) ? \ + vcpu_sme_max_vl(vcpu) : vcpu_sve_max_vl(vcpu)) + +/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ +#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ + sve_ffr_offset(vcpu_cur_sve_vl(vcpu))) =20 #define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? 
ZCR_EL2 : ZCR_EL1) diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/as= m/kvm.h index c67564f02981..498a49a61487 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -354,6 +354,15 @@ struct kvm_arm_counter_offset { #define KVM_ARM64_SVE_VLS_WORDS \ ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1) =20 +/* SME registers */ +#define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT) + +/* Vector lengths pseudo-register: */ +#define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ + KVM_REG_SIZE_U512 | 0xfffe) +#define KVM_ARM64_SME_VLS_WORDS \ + ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1) + /* Bitmap feature firmware registers */ #define KVM_REG_ARM_FW_FEAT_BMAP (0x0016 << KVM_REG_ARM_COPROC_SHIFT) #define KVM_REG_ARM_FW_FEAT_BMAP_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64= | \ diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 456ef61b6ed5..2a1fdcb0ec49 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -310,22 +310,20 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const = struct kvm_one_reg *reg) #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64) #define vq_present(vqs, vq) (!!((vqs)[vq_word(vq)] & vq_mask(vq))) =20 -static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) +static int get_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { unsigned int max_vq, vq; u64 vqs[KVM_ARM64_SVE_VLS_WORDS]; =20 - if (!vcpu_has_sve(vcpu)) - return -ENOENT; - - if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type]))) return -EINVAL; =20 memset(vqs, 0, sizeof(vqs)); =20 - max_vq =3D vcpu_sve_max_vq(vcpu); + max_vq =3D vcpu_vec_max_vq(vcpu, vec_type); for (vq =3D SVE_VQ_MIN; vq <=3D max_vq; ++vq) - if (sve_vq_available(vq)) + if (vq_available(vec_type, vq)) vqs[vq_word(vq)] |=3D vq_mask(vq); =20 if (copy_to_user((void __user *)reg->addr, vqs, sizeof(vqs))) @@ -334,40 +332,41 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const s= truct kvm_one_reg *reg) return 0; } =20 -static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) +static int set_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { unsigned int max_vq, vq; u64 vqs[KVM_ARM64_SVE_VLS_WORDS]; =20 - if (!vcpu_has_sve(vcpu)) - return -ENOENT; - if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; /* too late! */ =20 - if (WARN_ON(vcpu->arch.sve_state)) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type]))) return -EINVAL; =20 if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs))) return -EFAULT; =20 + if (WARN_ON(vcpu->arch.sve_state || vcpu->arch.sme_state)) + return -EINVAL; + max_vq =3D 0; for (vq =3D SVE_VQ_MIN; vq <=3D SVE_VQ_MAX; ++vq) if (vq_present(vqs, vq)) max_vq =3D vq; =20 - if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE])) + if (max_vq > sve_vq_from_vl(kvm_max_vl[vec_type])) return -EINVAL; =20 /* * Vector lengths supported by the host can't currently be * hidden from the guest individually: instead we can only set a - * maximum via ZCR_EL2.LEN. So, make sure the available vector + * maximum via xCR_EL2.LEN. 
So, make sure the available vector * lengths match the set requested exactly up to the requested * maximum: */ for (vq =3D SVE_VQ_MIN; vq <=3D max_vq; ++vq) - if (vq_present(vqs, vq) !=3D sve_vq_available(vq)) + if (vq_present(vqs, vq) !=3D vq_available(vec_type, vq)) return -EINVAL; =20 /* Can't run with no vector lengths at all: */ @@ -375,11 +374,27 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const s= truct kvm_one_reg *reg) return -EINVAL; =20 /* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.max_vl[ARM64_VEC_SVE] =3D sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[vec_type] =3D sve_vl_from_vq(max_vq); =20 return 0; } =20 +static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) +{ + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + return get_vec_vls(ARM64_VEC_SVE, vcpu, reg); +} + +static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) +{ + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + return set_vec_vls(ARM64_VEC_SVE, vcpu, reg); +} + #define SVE_REG_SLICE_SHIFT 0 #define SVE_REG_SLICE_BITS 5 #define SVE_REG_ID_SHIFT (SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS) @@ -533,6 +548,39 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const st= ruct kvm_one_reg *reg) return 0; } =20 +static int get_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) +{ + if (!vcpu_has_sme(vcpu)) + return -ENOENT; + + return get_vec_vls(ARM64_VEC_SME, vcpu, reg); +} + +static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) +{ + if (!vcpu_has_sme(vcpu)) + return -ENOENT; + + return set_vec_vls(ARM64_VEC_SME, vcpu, reg); +} + +static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) +{ + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ + if (reg->id =3D=3D KVM_REG_ARM64_SME_VLS) + return get_sme_vls(vcpu, reg); + + return -EINVAL; +} + +static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) +{ + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ + if (reg->id =3D=3D KVM_REG_ARM64_SME_VLS) + return set_sme_vls(vcpu, reg); + + return -EINVAL; +} int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *r= egs) { return -EINVAL; @@ -711,6 +759,7 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct= kvm_one_reg *reg) case KVM_REG_ARM_FW_FEAT_BMAP: return kvm_arm_get_fw_reg(vcpu, reg); case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg); + case KVM_REG_ARM64_SME: return get_sme_reg(vcpu, reg); } =20 return kvm_arm_sys_reg_get_reg(vcpu, reg); @@ -728,6 +777,7 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct= kvm_one_reg *reg) case KVM_REG_ARM_FW_FEAT_BMAP: return kvm_arm_set_fw_reg(vcpu, reg); case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg); + case KVM_REG_ARM64_SME: return set_sme_reg(vcpu, reg); } =20 return kvm_arm_sys_reg_set_reg(vcpu, reg); --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6D7132F99AD; Tue, 23 Dec 2025 01:22:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452960; cv=none; 
b=nZjWfQcPs+gZU+ePgWGVR0ZJl79xrdGL4oSQCgI60Tj+H0lUjjXa4JUhp7fhlRQM5FvmonT7V5wF59YX73TGyf4PYWl4zxaKBXSxbIfW4QT1IeirHLiILSmJuoJcYjq9KEKdfyjCp6w1tN39F4PfqJ9SJ14zmLcLS98Ebi7BLsU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452960; c=relaxed/simple; bh=hpiP0eA4qmeF4dLRzqbVb4BsWjeVplh22fbkO2rFo8A=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=Y3XxDbbazZm2DnO8EQiVWbeQQEdttu7MKKF6ngdOlcgXjjjlr2Ltn5AC+ex1MwBv/nd54PCv7nbmtk2mWmj8lucxpa5HMrpgd1kAKSbM1gBlYv7gQQH1QzqQkPOoOldpPLSE5O0tyZDq6a4dmJFD/Axhf2yWrvHa/5eREKJ1Qco= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=nWIvFazZ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="nWIvFazZ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 749BEC4CEF1; Tue, 23 Dec 2025 01:22:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452960; bh=hpiP0eA4qmeF4dLRzqbVb4BsWjeVplh22fbkO2rFo8A=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=nWIvFazZ5RgrX1CjjIpYxoRtuNA5PCABKZQ6CfGRyh2YGw9MTBZTecbebv8nJIkAt lWJRFkYIOFS9aRreL2MaK9diLIlPfJK8/WjIPpmBgDL+STqgPFaWhsk4dOCgFaRo7T 2rpiREiYkf9dQ7M4pAkWSP3VFZP+qOR51mzXSH+OYT0+7AZmD5QDvt369cg7FNlCDp bq4zw+Obqdk2yslAlg0NM7NjL6FIqb/nDx8M6F3gnml4nxSi6fkzx5NZ0Z4dMZ1hsa 2KwcGR6DVXCBhyNWrRpvb5r93CqzSHorVHw5GtJi0Pi6UQLWwUS/GVVEmeqkRpvnyJ EPvA6j48XKTkA== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:09 +0000 Subject: [PATCH v9 15/30] KVM: arm64: Support SME control registers Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-15-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=4356; i=broonie@kernel.org; h=from:subject:message-id; bh=hpiP0eA4qmeF4dLRzqbVb4BsWjeVplh22fbkO2rFo8A=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6OnJ5qm9IRTxWCDKhWuw2aB2hSIwMUwtyOH TgMhhQ1atCJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnujgAKCRAk1otyXVSH 0McDB/9x+JQHYW4gMSj2mVEj4a9Woqsapd9hWJyx5t9Oy9c2TSR+913aYJpLW0pw2z5/peJUpDd wIMWclk27zLPVNowolZgW62+9GLvszO3ntr9He8IJbDA/UekMq1TyrBzFkuGh/3sHl07Mtb9qgt 6pNrIqViZp5xV6gBlLvsRmNlo5uHSKvwM9T0y/8HMVE7cCI/XwnSwsN6oQN/mejz9NaigbaKbJD P6QZ4DnqSfWRACybE/xujy+PIjuamypojsEoss84fCCxD4dW8oYkQh1iZOT6O0t2pdjP5agWY6c NWFgp8G3Cjn4H5hoiqwlKl4B8ASFufPqqEdjKTr429vvfKAN X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME is configured by the system registers SMCR_EL1 and SMCR_EL2, add definitions and userspace access for them. These control the SME vector length in a manner similar to that for SVE and also have feature enable bits for SME2 and FA64. 
A subsequent patch will add management of them for guests as part of the general floating point context switch, as is done for the equivalent SVE registers. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/include/asm/vncr_mapping.h | 1 + arch/arm64/kvm/sys_regs.c | 36 +++++++++++++++++++++++++++++++= +++- 3 files changed, 38 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index b41700df3ce9..f24441244a68 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -504,6 +504,7 @@ enum vcpu_sysreg { CPTR_EL2, /* Architectural Feature Trap Register (EL2) */ HACR_EL2, /* Hypervisor Auxiliary Control Register */ ZCR_EL2, /* SVE Control Register (EL2) */ + SMCR_EL2, /* SME Control Register (EL2) */ TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */ TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */ TCR_EL2, /* Translation Control Register (EL2) */ @@ -542,6 +543,7 @@ enum vcpu_sysreg { VNCR(ACTLR_EL1),/* Auxiliary Control Register */ VNCR(CPACR_EL1),/* Coprocessor Access Control */ VNCR(ZCR_EL1), /* SVE Control */ + VNCR(SMCR_EL1), /* SME Control */ VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */ VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */ VNCR(TCR_EL1), /* Translation Control Register */ diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm= /vncr_mapping.h index c2485a862e69..44b12565321b 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -44,6 +44,7 @@ #define VNCR_HDFGWTR_EL2 0x1D8 #define VNCR_ZCR_EL1 0x1E0 #define VNCR_HAFGRTR_EL2 0x1E8 +#define VNCR_SMCR_EL1 0x1F0 #define VNCR_TTBR0_EL1 0x200 #define VNCR_TTBR1_EL1 0x210 #define VNCR_FAR_EL1 0x220 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 3576e69468db..5c912139d264 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2827,6 +2827,37 @@ static bool access_gic_elrsr(struct kvm_vcpu *vcpu, return true; } =20 +static unsigned int sme_el2_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + return __el2_visibility(vcpu, rd, sme_visibility); +} + +static bool access_smcr_el2(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + unsigned int vq; + u64 smcr; + + if (guest_hyp_sve_traps_enabled(vcpu)) { + kvm_inject_nested_sve_trap(vcpu); + return false; + } + + if (!p->is_write) { + p->regval =3D __vcpu_sys_reg(vcpu, SMCR_EL2); + return true; + } + + smcr =3D p->regval; + vq =3D SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1; + vq =3D min(vq, vcpu_sme_max_vq(vcpu)); + __vcpu_assign_sys_reg(vcpu, SMCR_EL2, SYS_FIELD_PREP(SMCR_ELx, LEN, + vq - 1)); + return true; +} + static unsigned int s1poe_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { @@ -3291,7 +3322,7 @@ static const struct sys_reg_desc sys_reg_descs[] =3D { { SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility =3D sve= _visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, { SYS_DESC(SYS_SMPRI_EL1), undef_access }, - { SYS_DESC(SYS_SMCR_EL1), undef_access }, + { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility =3D s= me_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, @@ -3655,6 +3686,9 @@ static const struct sys_reg_desc sys_reg_descs[] 
=3D { =20 EL2_REG_VNCR(HCRX_EL2, reset_val, 0), =20 + EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0, + sme_el2_visibility), + EL2_REG(TTBR0_EL2, access_rw, reset_val, 0), EL2_REG(TTBR1_EL2, access_rw, reset_val, 0), EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1), --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B60EE23BD1B; Tue, 23 Dec 2025 01:22:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452964; cv=none; b=rlee8KKjhETD0vXHw2II0n2UCBF5IEX2OmfnzZjP+H2fCi47Rp3fdG0t0pjVJOQ3BtW+vHho/+v68SIZNNPqCmy48frxXQhLhNf/6siasV7wCmjq/kOqJ4z1SibtGpLQhlhLcZO0UOT2nDCdv889Ye6GTHbpLtMYCyI4F8t7zrI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452964; c=relaxed/simple; bh=nKbwSwJUawLfLfO+eAl8l5ECEVYhA/SyxKS3n4Kg7RI=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=ftC8hetc8EveMNp/yoCME8VmrN5DrzYwFyoxF1Rtfugtb4Pije1g/EaQZ5PjaMBFwRI6RIV3PPbRmlz3PPAsh1UHBAjYowqlpQyOToekzEsgc1rccy/fbgdU3oSOsliW4ssd1tfhlpEujwVv0Gv6GLRwVA+gqvW56MLUPX+6VeI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=U/waK+ep; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="U/waK+ep" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B75E5C4CEF1; Tue, 23 Dec 2025 01:22:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452964; bh=nKbwSwJUawLfLfO+eAl8l5ECEVYhA/SyxKS3n4Kg7RI=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=U/waK+epxGtWLeXIBQn3Ty2yXnCqVXgAzu2zC+vmW+VPzVwP3Qx2tnCRWizotJEa6 00FUhw0DrwiyIE+Qli2lH+DzOEqYXRyZcY2OGL4m6LPw3Vy8l+yoVnNOq1soPlxhot g2gffacx9SivdiSS8jFuJUFOSDSN1uGe6UPIfYij9wrEuwMIIJObDTw0Sai10+yJN7 Pk/JIcg8E8WyYBeUT7k5tzm6L50RcNXj8PnoiuxRodO52Fxclq5VlGRaBdkhFO1SRb lzRF2CRN7FdAP5m/FBK+uviCsLQ4uf6wAiPHvCygzNIi3uvNgjEr8yBEnqpNrsDYrB m0+VyrgIhP7lw== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:10 +0000 Subject: [PATCH v9 16/30] KVM: arm64: Support TPIDR2_EL0 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-16-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=3465; i=broonie@kernel.org; h=from:subject:message-id; bh=nKbwSwJUawLfLfO+eAl8l5ECEVYhA/SyxKS3n4Kg7RI=; 
b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6OmAHAqwGrTwNHiYW/k9eyKKBOSHAc//E8p irmzvJrZx+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnujgAKCRAk1otyXVSH 0JcpB/9lyY5vhfu0MpB/kSRWge/e+OfzF6Ac3Qb3qg/WMWoXwhEHkRWTVtFajpo0Vc8spi3s6v8 d0gwBsdcdU5ra9Nb8yziHD7WeaeqU2XaupIAEcIc5UjMHHBT/j7YgGAf0FvWUWFhg2HRJ9Sw1VP GEv6hL6koKqkPmS0MqMuMWjHY8lufURynIIozFrkwwPZrlLwSThIdYJCmJ6t/A28RtKLAf2aFNW k8heGHxiisImOluv1kWUrpu22u6J62eh6A1ahp965I7KeiBam+AN8dmPv8RJPwvP1lOFjq5wsJZ /1C6kbZ+whSUbLCNn7kicSnl5dmw6amQnyZOMPF7TIbuFL7/ X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME adds a new thread ID register, TPIDR2_EL0. This is used in userspace for delayed saving of the ZA state but in terms of the architecture is not really connected to SME other than being part of FEAT_SME. It has an independent fine grained trap and the runtime connection with the rest of SME is purely software defined. Expose the register as a system register if the guest supports SME, context switching it along with the other EL0 TPIDRs. Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 15 +++++++++++++++ arch/arm64/kvm/sys_regs.c | 3 ++- 3 files changed, 18 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index f24441244a68..825b74f752d6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -448,6 +448,7 @@ enum vcpu_sysreg { CSSELR_EL1, /* Cache Size Selection Register */ TPIDR_EL0, /* Thread ID, User R/W */ TPIDRRO_EL0, /* Thread ID, User R/O */ + TPIDR2_EL0, /* Thread ID, Register 2 */ TPIDR_EL1, /* Thread ID, Privileged */ CNTKCTL_EL1, /* Timer Control Register (EL1) */ PAR_EL1, /* Physical Address Register */ diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hy= p/include/hyp/sysreg-sr.h index 5624fd705ae3..8c3b3d6df99f 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -88,6 +88,17 @@ static inline bool ctxt_has_sctlr2(struct kvm_cpu_contex= t *ctxt) return kvm_has_sctlr2(kern_hyp_va(vcpu->kvm)); } =20 +static inline bool ctxt_has_sme(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu; + + if (!system_supports_sme()) + return false; + + vcpu =3D ctxt_to_vcpu(ctxt); + return kvm_has_sme(kern_hyp_va(vcpu->kvm)); +} + static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) { return host_data_ptr(host_ctxt) !=3D ctxt; @@ -127,6 +138,8 @@ static inline void __sysreg_save_user_state(struct kvm_= cpu_context *ctxt) { ctxt_sys_reg(ctxt, TPIDR_EL0) =3D read_sysreg(tpidr_el0); ctxt_sys_reg(ctxt, TPIDRRO_EL0) =3D read_sysreg(tpidrro_el0); + if (ctxt_has_sme(ctxt)) + ctxt_sys_reg(ctxt, TPIDR2_EL0) =3D read_sysreg_s(SYS_TPIDR2_EL0); } =20 static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) @@ -204,6 +217,8 @@ static inline void __sysreg_restore_user_state(struct k= vm_cpu_context *ctxt) { write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL0), tpidr_el0); write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0); + if (ctxt_has_sme(ctxt)) + write_sysreg_s(ctxt_sys_reg(ctxt, TPIDR2_EL0), SYS_TPIDR2_EL0); } =20 static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt, diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 5c912139d264..7e550f045f4d 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -3504,7 +3504,8 @@ static const struct sys_reg_desc 
sys_reg_descs[] =3D { .visibility =3D s1poe_visibility }, { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 }, { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 }, - { SYS_DESC(SYS_TPIDR2_EL0), undef_access }, + { SYS_DESC(SYS_TPIDR2_EL0), NULL, reset_unknown, TPIDR2_EL0, + .visibility =3D sme_visibility}, =20 { SYS_DESC(SYS_SCXTNUM_EL0), undef_access }, =20 --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1503130F815; Tue, 23 Dec 2025 01:22:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452969; cv=none; b=XRHFs9/JePas7Nd+1UGstxsBebvGsINw49i6Kuu+8MeYrefpMqYwVbO6ETDR1+Q45ewlI+xYuEc205LG1oLL99IoR4fncYWLLqC4cfJy6xDExY8fRTK0xgZwDToTpDcvBRJ5miThJz5JE9HR6c3B9D8NmdpVuDrZZ3DNzJBZdeI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452969; c=relaxed/simple; bh=xPz2JBOU6yzQUQySYvv/gJMVaHFNpKKxwmHWpS2GSPs=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=gOj1JdusaomfWFOp/k9jeioItgcuAbYw3bU+mL2u0YLck6ncd9fNeTI2TtKsUl3t+P+w9D8pL1YQD6+tR4hrviggGIK4PbnC+uCVsGa4KIGCAyzUx4t/dWm4pvy395I1iPET8rEG0XTrT9FCTHhNrPhH8gpf6c0m4VHqZeOSB+E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=I4QDH35U; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="I4QDH35U" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0978AC4CEF1; Tue, 23 Dec 2025 01:22:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452968; bh=xPz2JBOU6yzQUQySYvv/gJMVaHFNpKKxwmHWpS2GSPs=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=I4QDH35UxzxFWcYkO/SyCFncBWmsDdg2kWpjj+z2QXkpQtkYfKbP0Jp5DBaVBxjPh LGRiJGlSunSbqxJ1s9RjZf/My47xwWRyth1Q7GshzzpkKt1ymf0dRiSEwx2UzH/xxO snyKE5UWbK7SD7m7kh2J3SUCKfOTkq0Gve5TCpMGYBWGYDPn/HNIsn5Z71EgJBDEQ9 BC5WAi1SPeMTsXEM+mPh0c4Ny2NMoikBGUk5ZFcNa3/DgJgem9gisI3sTUcCQcm8tJ rrMWolXCmJ68XNBO+2W0F+kAI3NXBcED7Khy2PEZj+Eq9wLFI+R97rTXigYZIRAYcM CqJjuPaiglIGQ== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:11 +0000 Subject: [PATCH v9 17/30] KVM: arm64: Support SME identification registers for guests Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-17-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=8946; i=broonie@kernel.org; 
h=from:subject:message-id; bh=xPz2JBOU6yzQUQySYvv/gJMVaHFNpKKxwmHWpS2GSPs=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6PJZWt/H4ZqXYIun4Iwu8NJu5KTVBjsq3aM YHSwVsy5QeJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnujwAKCRAk1otyXVSH 0JMwB/wL/44tB/FSTv0rwC29xK4ldASpR5lHX6P8ylOPusmDczcYD/k1K/37OeGbiTYuuIBu0hP yNhxGQHqjonjOYahWJbt7CcHjVKdDZFDzUKanKDwXcBHTMjKGCS/OLg4wFxSq+mVlacwibDMPX1 +RE3ElVU2QiqLo1Fp7yPxIN76gd7q7cU7BQ4/0V7MjWeBS5B5+WtCMEsxe6xaskfFga2theXFwG ieOiqgQh6sjWkGDHfQnmhrpQklWYM9wQX3+hC0gx58X/LIiLY43un6Rhau9mXIA5EthysnS/9P7 XRk6wSsvlMag7LlyKK4EES1w2xB/7ifcgSCeFcdY7VEuHDCW X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB The primary register for identifying SME is ID_AA64PFR1_EL1.SME. This is hidden from guests unless SME is enabled by the VMM. When it is visible it is writable and can be used to control the availability of SME2. There is also a new register ID_AA64SMFR0_EL1 which we make writable, forcing it to all bits 0 if SME is disabled. This includes the field SMEver giving the SME version, userspace is responsible for ensuring the value is consistent with ID_AA64PFR1_EL1.SME. It also includes FA64, a separately enableable extension which provides the full FPSIMD and SVE instruction set including FFR in streaming mode. Userspace can control the availability of FA64 by writing to this field. The other features enumerated there only add new instructions, there are no architectural controls for these. There is a further identification register SMIDR_EL1 which provides a basic description of the SME microarchitecture, in a manner similar to MIDR_EL1 for the PE. It also describes support for priority management and a basic affinity description for shared SME units, plus some RES0 space. We do not support priority management for guests so this is hidden from guests, along with any new fields. As for MIDR_EL1 and REVIDR_EL1 we expose the implementer and revision information to guests with the raw value from the CPU we are running on, this may present issues for asymmetric systems or for migration as it does for the existing registers. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 3 +++ arch/arm64/kvm/config.c | 8 +----- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 11 ++++++++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 4 ++- arch/arm64/kvm/sys_regs.c | 40 ++++++++++++++++++++++++++= +--- 5 files changed, 54 insertions(+), 12 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 825b74f752d6..fead6988f47c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -400,6 +400,7 @@ struct kvm_arch { u64 revidr_el1; u64 aidr_el1; u64 ctr_el0; + u64 smidr_el1; =20 /* Masks for VNCR-backed and general EL2 sysregs */ struct kvm_sysreg_masks *sysreg_masks; @@ -1543,6 +1544,8 @@ static inline u64 *__vm_id_reg(struct kvm_arch *ka, u= 32 reg) return &ka->revidr_el1; case SYS_AIDR_EL1: return &ka->aidr_el1; + case SYS_SMIDR_EL1: + return &ka->smidr_el1; default: WARN_ON_ONCE(1); return NULL; diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c index 24bb3f36e9d5..7e26991b2df1 100644 --- a/arch/arm64/kvm/config.c +++ b/arch/arm64/kvm/config.c @@ -274,14 +274,8 @@ static bool feat_anerr(struct kvm *kvm) =20 static bool feat_sme_smps(struct kvm *kvm) { - /* - * Revists this if KVM ever supports SME -- this really should - * look at the guest's view of SMIDR_EL1. Funnily enough, this - * is not captured in the JSON file, but only as a note in the - * ARM ARM. 
- */ return (kvm_has_feat(kvm, FEAT_SME) && - (read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS)); + (kvm_read_vm_id_reg(kvm, SYS_SMIDR_EL1) & SMIDR_EL1_SMPS)); } =20 static bool feat_spe_fds(struct kvm *kvm) diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hy= p/include/hyp/sysreg-sr.h index 8c3b3d6df99f..d921db152119 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -125,6 +125,17 @@ static inline u64 ctxt_midr_el1(struct kvm_cpu_context= *ctxt) return kvm_read_vm_id_reg(kvm, SYS_MIDR_EL1); } =20 +static inline u64 ctxt_smidr_el1(struct kvm_cpu_context *ctxt) +{ + struct kvm *kvm =3D kern_hyp_va(ctxt_to_vcpu(ctxt)->kvm); + + if (!(ctxt_is_guest(ctxt) && + test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &kvm->arch.flags))) + return read_sysreg_s(SYS_SMIDR_EL1); + + return kvm_read_vm_id_reg(kvm, SYS_SMIDR_EL1); +} + static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) { *ctxt_mdscr_el1(ctxt) =3D read_sysreg(mdscr_el1); diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index f4ec6695a6a5..b656449dff69 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -351,8 +351,10 @@ static void pkvm_init_features_from_host(struct pkvm_h= yp_vm *hyp_vm, const struc host_kvm->arch.vcpu_features, KVM_VCPU_MAX_FEATURES); =20 - if (test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &host_arch_flags)) + if (test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &host_arch_flags)) { hyp_vm->kvm.arch.midr_el1 =3D host_kvm->arch.midr_el1; + hyp_vm->kvm.arch.smidr_el1 =3D host_kvm->arch.smidr_el1; + } =20 return; } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 7e550f045f4d..a7ab02822023 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1893,6 +1893,10 @@ static unsigned int id_visibility(const struct kvm_v= cpu *vcpu, if (!vcpu_has_sve(vcpu)) return REG_RAZ; break; + case SYS_ID_AA64SMFR0_EL1: + if (!vcpu_has_sme(vcpu)) + return REG_RAZ; + break; } =20 return 0; @@ -1920,10 +1924,25 @@ static unsigned int raz_visibility(const struct kvm= _vcpu *vcpu, =20 /* cpufeature ID register access trap handlers */ =20 +static bool hidden_id_reg(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + switch (reg_to_encoding(r)) { + case SYS_SMIDR_EL1: + return !vcpu_has_sme(vcpu); + default: + return false; + } +} + static bool access_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { + if (hidden_id_reg(vcpu, p, r)) + return bad_trap(vcpu, p, r, "write to hidden ID register"); + if (p->is_write) return write_to_read_only(vcpu, p, r); =20 @@ -2012,7 +2031,9 @@ static u64 sanitise_id_aa64pfr1_el1(const struct kvm_= vcpu *vcpu, u64 val) SYS_FIELD_GET(ID_AA64PFR0_EL1, RAS, pfr0) =3D=3D ID_AA64PFR0_EL1_RA= S_IMP)) val &=3D ~ID_AA64PFR1_EL1_RAS_frac; =20 - val &=3D ~ID_AA64PFR1_EL1_SME; + if (!kvm_has_sme(vcpu->kvm)) + val &=3D ~ID_AA64PFR1_EL1_SME; + val &=3D ~ID_AA64PFR1_EL1_RNDR_trap; val &=3D ~ID_AA64PFR1_EL1_NMI; val &=3D ~ID_AA64PFR1_EL1_GCS; @@ -3038,6 +3059,9 @@ static bool access_imp_id_reg(struct kvm_vcpu *vcpu, case SYS_AIDR_EL1: p->regval =3D read_sysreg(aidr_el1); break; + case SYS_SMIDR_EL1: + p->regval =3D read_sysreg_s(SYS_SMIDR_EL1); + break; default: WARN_ON_ONCE(1); } @@ -3048,12 +3072,15 @@ static bool access_imp_id_reg(struct kvm_vcpu *vcpu, static u64 __ro_after_init boot_cpu_midr_val; static u64 __ro_after_init boot_cpu_revidr_val; static u64 
__ro_after_init boot_cpu_aidr_val; +static u64 __ro_after_init boot_cpu_smidr_val; =20 static void init_imp_id_regs(void) { boot_cpu_midr_val =3D read_sysreg(midr_el1); boot_cpu_revidr_val =3D read_sysreg(revidr_el1); boot_cpu_aidr_val =3D read_sysreg(aidr_el1); + if (system_supports_sme()) + boot_cpu_smidr_val =3D read_sysreg_s(SYS_SMIDR_EL1); } =20 static u64 reset_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_de= sc *r) @@ -3065,6 +3092,8 @@ static u64 reset_imp_id_reg(struct kvm_vcpu *vcpu, co= nst struct sys_reg_desc *r) return boot_cpu_revidr_val; case SYS_AIDR_EL1: return boot_cpu_aidr_val; + case SYS_SMIDR_EL1: + return boot_cpu_smidr_val; default: KVM_BUG_ON(1, vcpu->kvm); return 0; @@ -3229,7 +3258,6 @@ static const struct sys_reg_desc sys_reg_descs[] =3D { ID_AA64PFR1_EL1_MTE_frac | ID_AA64PFR1_EL1_NMI | ID_AA64PFR1_EL1_RNDR_trap | - ID_AA64PFR1_EL1_SME | ID_AA64PFR1_EL1_RES0 | ID_AA64PFR1_EL1_MPAM_frac | ID_AA64PFR1_EL1_MTE)), @@ -3239,7 +3267,7 @@ static const struct sys_reg_desc sys_reg_descs[] =3D { ID_AA64PFR2_EL1_MTESTOREONLY), ID_UNALLOCATED(4,3), ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0), - ID_HIDDEN(ID_AA64SMFR0_EL1), + ID_WRITABLE(ID_AA64SMFR0_EL1, ~ID_AA64SMFR0_EL1_RES0), ID_UNALLOCATED(4,6), ID_WRITABLE(ID_AA64FPFR0_EL1, ~ID_AA64FPFR0_EL1_RES0), =20 @@ -3446,7 +3474,11 @@ static const struct sys_reg_desc sys_reg_descs[] =3D= { { SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1, .set_user =3D set_clidr, .val =3D ~CLIDR_EL1_RES0 }, { SYS_DESC(SYS_CCSIDR2_EL1), undef_access }, - { SYS_DESC(SYS_SMIDR_EL1), undef_access }, + IMPLEMENTATION_ID(SMIDR_EL1, (SMIDR_EL1_NSMC | SMIDR_EL1_HIP | + SMIDR_EL1_AFFINITY2 | + SMIDR_EL1_IMPLEMENTER | + SMIDR_EL1_REVISION | SMIDR_EL1_SH | + SMIDR_EL1_AFFINITY)), IMPLEMENTATION_ID(AIDR_EL1, GENMASK_ULL(63, 0)), { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 }, ID_FILTERED(CTR_EL0, ctr_el0, --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8F88D314D18; Tue, 23 Dec 2025 01:22:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452973; cv=none; b=E2P2tyQ7FP5D5HGdjPxjXuB8Ob2YNALLgcehYzHVjf31gG3rzazwp/XhQuzmc6PYHram99tJXC1OmlWrOk+ce1xMQAwHqFSeOoQ7smyvRdUUt12gdHfM2OuczxrgTI4K7FXJMfq7lRCZWEaYqTRLQS4wU88lKVMXZHOL+I4V7h0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452973; c=relaxed/simple; bh=4tkVTtIcV+e0K2I4W+1hXNEToJhLsGAEEoG4YE4nVFY=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=aVZs36XM3QfjhahHLRp0p4+eKlgObbFeMoj9wyeOyhrBInHMFUcOX1TxqL+29G40LQtmKSTO3SumkT1tBEXfTfO1bCQQ60l+V3S+bIL6LXocsFYcj76gyxSIwMZWT+x+iyPesVbxnyoOvv09DkUi0xAM+ftJVoTZ6v+Ln8sw9rA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=QKCXZGXJ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="QKCXZGXJ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 52C5BC4CEF1; Tue, 23 Dec 2025 01:22:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; 
s=k20201202; t=1766452973; bh=4tkVTtIcV+e0K2I4W+1hXNEToJhLsGAEEoG4YE4nVFY=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=QKCXZGXJHQwD4bhRzNLQNAPRbvz9UGGP5t2joUhKoPVr0i2REx+KJmI+eGT9BDa5F OOjEoD//cp5peaaP6p2EHMDrtosfYbcprJVcomcwCRvMObfAChObdviGyiuRAcKEvS bLmcMylMgXbzseWNlrLN+z2hQyv+agEzq3dH4QZnroICVhiGmKcuvW72HbwMB6kDPs 1Ko6Z+TYtKkYmci+QUhqOzLaDOBOGswEer5oF/zZrA2NsXNPqP3XM86HE9Jmwu8Mfi L0B9WJXY3Wkrt9r4XVL3CYOJ5G+ECI6pEZ4A2xAfzpbOzx3wNXtUCO70K4o4nF9Hsi EFJs0xNvLoNhA== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:12 +0000 Subject: [PATCH v9 18/30] KVM: arm64: Support SME priority registers Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-18-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=6158; i=broonie@kernel.org; h=from:subject:message-id; bh=4tkVTtIcV+e0K2I4W+1hXNEToJhLsGAEEoG4YE4nVFY=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6Q5ExWg5wVwOPBRRvKAq0znb2Hu7WznP00r umsHMxP8e2JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnukAAKCRAk1otyXVSH 0EwJB/98PQP6BIwFdnJL41doY1V6EXpRtB421waSmISOsufUulVltMbKI/RzvMDapY35h27Sm08 o35srME+QFcH3Ha98Xwli0tjM+0kQYQdeIClelcVCA0Do7AXG5EJEXJOGU9AkgLY0L3Ps91aVs8 ZpBaNGsa3oB/CcFxs0VQbblEJEoxAjB257sYfOenTZNesxHcC4/MYR+X+Q2uP5waeFy+w8Uv85J ntKTYSZ5zH1z6Yjl9LfY2VO92a1yzQd+13mgppVwjUSwERttucyclt4Ut+0aKWoCH5fx4IFVT+O LQW4F4t0yfvkbuDU376YFKed3YsoLiUmKmWjEMpEj1aBNO53 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME has optional support for configuring the relative priorities of PEs in systems where they share a single SME hardware block, known as an SMCU. Currently we do not have any support for this in Linux and will also hide it from KVM guests, pending experience with practical implementations. The interface for configuring priority support is via two new system registers, which are always defined when SME is available. The register SMPRI_EL1 allows control of SME execution priorities. Since we disable SME priority support for guests this register is RES0; define it as such and enable fine grained traps for SMPRI_EL1 to ensure that guests can't write to it even if the hardware supports priorities. Since the register should be readable with fixed contents we only trap writes, not reads. Since there is no host support for using priorities the register is currently left with a value of 0 by the host, so we do not need to update the value for guests. There is also an EL2 register SMPRIMAP_EL2 for virtualisation of priorities; this is RES0 when priority configuration is not supported but has no specific traps available. When saving state from a nested guest we overwrite any value the guest stored. 
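For illustration only (not something added by this patch): since SMPRIMAP_EL2 can only ever hold zero, a VMM that tries to restore a non-zero value into it via KVM_SET_ONE_REG on a vcpu with both SME and EL2 enabled should see the write rejected. The register encoding used below (op0=3, op1=4, CRn=1, CRm=2, op2=5) is the architectural SMPRIMAP_EL2 encoding and is an assumption of this sketch rather than something defined here:

	/* Hypothetical VMM-side sketch, not part of this patch */
	__u64 val = 1;		/* any non-zero value */
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 4, 1, 2, 5),	/* SMPRIMAP_EL2, assumed encoding */
		.addr = (__u64)&val,
	};

	/* Expected to fail with EINVAL because the register is RES0 */
	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) == 0)
		fprintf(stderr, "unexpected success writing SMPRIMAP_EL2\n");
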
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/include/asm/vncr_mapping.h | 1 + arch/arm64/kvm/config.c | 3 +++ arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 7 +++++++ arch/arm64/kvm/sys_regs.c | 30 +++++++++++++++++++++++++++++- 5 files changed, 41 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index fead6988f47c..44595a789a97 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -546,6 +546,7 @@ enum vcpu_sysreg { VNCR(CPACR_EL1),/* Coprocessor Access Control */ VNCR(ZCR_EL1), /* SVE Control */ VNCR(SMCR_EL1), /* SME Control */ + VNCR(SMPRIMAP_EL2), /* Streaming Mode Priority Mapping Register */ VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */ VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */ VNCR(TCR_EL1), /* Translation Control Register */ diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm= /vncr_mapping.h index 44b12565321b..a2a84af6585b 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -45,6 +45,7 @@ #define VNCR_ZCR_EL1 0x1E0 #define VNCR_HAFGRTR_EL2 0x1E8 #define VNCR_SMCR_EL1 0x1F0 +#define VNCR_SMPRIMAP_EL2 0x1F0 #define VNCR_TTBR0_EL1 0x200 #define VNCR_TTBR1_EL1 0x210 #define VNCR_FAR_EL1 0x220 diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c index 7e26991b2df1..0088635a95bd 100644 --- a/arch/arm64/kvm/config.c +++ b/arch/arm64/kvm/config.c @@ -1481,6 +1481,9 @@ static void __compute_hfgwtr(struct kvm_vcpu *vcpu) =20 if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38)) *vcpu_fgt(vcpu, HFGWTR_EL2) |=3D HFGWTR_EL2_TCR_EL1; + + if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, IMP)) + *vcpu_fgt(vcpu, HFGWTR_EL2) |=3D HFGWTR_EL2_nSMPRI_EL1; } =20 static void __compute_hdfgwtr(struct kvm_vcpu *vcpu) diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sy= sreg-sr.c index f28c6cf4fe1b..07aa4378c58a 100644 --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c @@ -80,6 +80,13 @@ static void __sysreg_save_vel2_state(struct kvm_vcpu *vc= pu) =20 if (ctxt_has_sctlr2(&vcpu->arch.ctxt)) __vcpu_assign_sys_reg(vcpu, SCTLR2_EL2, read_sysreg_el1(SYS_SCTLR2)); + + /* + * We block SME priorities so SMPRIMAP_EL2 is RES0, however we + * do not have traps to block access so the guest might have + * updated the state, overwrite anything there. + */ + __vcpu_assign_sys_reg(vcpu, SMPRIMAP_EL2, 0); } =20 static void __sysreg_restore_vel2_state(struct kvm_vcpu *vcpu) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index a7ab02822023..51f175bbe8d1 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -691,6 +691,15 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu, return read_zero(vcpu, p); } =20 +static int set_res0(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 val) +{ + if (val) + return -EINVAL; + + return 0; +} + /* * ARMv8.1 mandates at least a trivial LORegion implementation, where all = the * RW registers are RES0 (which we can implement as RAZ/WI). 
On an ARMv8.0 @@ -1979,6 +1988,15 @@ static unsigned int fp8_visibility(const struct kvm_= vcpu *vcpu, return REG_HIDDEN; } =20 +static unsigned int sme_raz_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + if (vcpu_has_sme(vcpu)) + return REG_RAZ; + + return REG_HIDDEN; +} + static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val) { if (!vcpu_has_sve(vcpu)) @@ -3349,7 +3367,14 @@ static const struct sys_reg_desc sys_reg_descs[] =3D= { =20 { SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility =3D sve= _visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, - { SYS_DESC(SYS_SMPRI_EL1), undef_access }, + + /* + * SMPRI_EL1 is UNDEF when SME is disabled, the UNDEF is + * handled via FGU which is handled without consulting this + * table. + */ + { SYS_DESC(SYS_SMPRI_EL1), trap_raz_wi, .visibility =3D sme_raz_visibilit= y }, + { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility =3D s= me_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, @@ -3719,6 +3744,9 @@ static const struct sys_reg_desc sys_reg_descs[] =3D { =20 EL2_REG_VNCR(HCRX_EL2, reset_val, 0), =20 + { SYS_DESC(SYS_SMPRIMAP_EL2), .reg =3D SMPRIMAP_EL2, + .access =3D trap_raz_wi, .set_user =3D set_res0, .reset =3D reset_val, + .val =3D 0, .visibility =3D sme_el2_visibility }, EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0, sme_el2_visibility), =20 --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C0D2C314D2C; Tue, 23 Dec 2025 01:22:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452977; cv=none; b=GQw6f86fNSSntVSzwiITA3gLTRjF8A1fmTFWt4BDpVgH1v1sRg4UudqhGHA8h316JzB0nPhsYWN8m/pXAr+GzaRGDy8rZtXc6wPnkhe+DIM8k6cE4DlG0Y7Qlsz6H8I/Nvz48kB4l71B2nfyb2U2i++DVU/mmXlc1LyW7fsDj7s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452977; c=relaxed/simple; bh=QalGDOMHjOiQjvBbbedaaYeF4HQ2DEwgjsr/uOC9S0E=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=H5qaiRi7PBIplg6otOZvpUYn02uKyNaCvnhbBH6yknDcSF5rsMIx4PJHTRiOntfSVusIYB1ZAPmggKcPD0kn7nlokHmBZ1XlvRBej0pQwEp5FM5YzhgN5ttfZ3qvHLVhpvgXWpv9GPwNTvGtjaDYea8cGZ7RB8xnl5bxdmtHplc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=DZraqkfX; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="DZraqkfX" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8F451C19422; Tue, 23 Dec 2025 01:22:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452977; bh=QalGDOMHjOiQjvBbbedaaYeF4HQ2DEwgjsr/uOC9S0E=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=DZraqkfX93bL3Xevfi+qsnXNb9OcIWXRZQqAiD/HWbASFR2KJ7RdKD1rEv7n7MbM1 VQaiOHb+FuPb2T/w8k0YpWl1k0JSjS+jIv7aPB0q0msMJ3/uEwTG1fCHrKViOkXZ3w PwgG97CyAeYu9nxQjl6JuX6xjRH4n4qLAVvp+duMIaUe/3QQ0nW4ICDp7iBtJocvFF lhQxOM96DR/WzMJ49v1HYaqDuuZFE8hLF+NuEyxowNv1tG8D6zppzXmyZsUMKL2ysz 
MCkgjGZuzUMIvqVx0ikJ1gUxPFcGAPu9JhKsl/8mKULjtNs8dpS1wXUvsIZpBanlwK Vt/tO9EKzucgg== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:13 +0000 Subject: [PATCH v9 19/30] KVM: arm64: Provide assembly for SME register access Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-19-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=1851; i=broonie@kernel.org; h=from:subject:message-id; bh=QalGDOMHjOiQjvBbbedaaYeF4HQ2DEwgjsr/uOC9S0E=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6Rm6UAk60DHDfEFq1PtHTHsczXqVLJt7lIW mXsBgcMvg2JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnukQAKCRAk1otyXVSH 0IcHB/42RHGmQOW7bcb0Ca/kXpI+DpixoJ3o82LwO+zziSgSoIRMBGxX2S3DE8strN3QI33ylPX HvYQ0PrSI7IyZa9DMk2bIQzKIOKyg2tDb9maQm6iPGySwTUIKxmMM4Io3f8cY7+mz/KAL+5NnfL o+XpSaXuu9W1cqk10ObSgPh+ohdHbOkP2Sl10iVeEITpcgUEDtmeW9aBkeUSAccNqbyrBRSHSGk MjvN4/GST/Ef9J/mFebLrus44Zo6fFqTIgURWE8DwoMiogYmpMjGAOlaMLEYI/E3PUTjXxl1XC0 jZphq6h63+BpNT4S0/vSe2f4sYCVmZimpIx8Q709gUoNRcWs X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Provide versions of the SME state save and restore functions for the hypervisor to allow it to restore ZA and ZT for guests. 
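To illustrate the intended calling convention (a sketch only, the real hyp-side users are added by later patches in this series): the state pointer is the vcpu's ZA/ZT0 buffer and the flag selects whether ZT0 is also transferred, so a caller is expected to look roughly like:

	/* Hypothetical caller, using vcpu_sme_state()/vcpu_has_sme2() from elsewhere in this series */
	__sme_save_state(vcpu_sme_state(vcpu), vcpu_has_sme2(vcpu));
	/* ... */
	__sme_restore_state(vcpu_sme_state(vcpu), vcpu_has_sme2(vcpu));
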
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_hyp.h | 3 +++ arch/arm64/kvm/hyp/fpsimd.S | 26 ++++++++++++++++++++++++++ 2 files changed, 29 insertions(+) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_= hyp.h index 0317790dd3b7..1cef9991d238 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -116,6 +116,9 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_r= egs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr); void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr); +int __sve_get_vl(void); +void __sme_save_state(void const *state, bool restore_zt); +void __sme_restore_state(void const *state, bool restore_zt); =20 u64 __guest_enter(struct kvm_vcpu *vcpu); =20 diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index 6e16cbfc5df2..44a1b0a483da 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -29,3 +29,29 @@ SYM_FUNC_START(__sve_save_state) sve_save 0, x1, x2, 3 ret SYM_FUNC_END(__sve_save_state) + +SYM_FUNC_START(__sve_get_vl) + _sve_rdvl 0, 1 + ret +SYM_FUNC_END(__sve_get_vl) + +SYM_FUNC_START(__sme_save_state) + _sme_rdsvl 2, 1 // x2 =3D VL/8 + sme_save_za 0, x2, 12 // Leaves x0 pointing to the end of ZA + + cbz x1, 1f + _str_zt 0 +1: + ret +SYM_FUNC_END(__sme_save_state) + +SYM_FUNC_START(__sme_restore_state) + _sme_rdsvl 2, 1 // x2 =3D VL/8 + sme_load_za 0, x2, 12 // Leaves x0 pointing to end of ZA + + cbz x1, 1f + _ldr_zt 0 + +1: + ret +SYM_FUNC_END(__sme_restore_state) --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 54DA7226861; Tue, 23 Dec 2025 01:23:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452983; cv=none; b=UiBByHCoEgDtMtgUPUUFFJhiqko24fkbH5RaEc6weaF9mypRED6FGNwDaRMtoNOoUXvbBj4h7iJOowoV9UNw1QCBQSXn7ZSm6UYmfy0MOemt51i8f11mxc4s0fwfpop/8djw0ajfP/yhZhuZLCuMtMond8jW7B6HGJXHf3AIweQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452983; c=relaxed/simple; bh=5CD2kezQ6SpAVTOAH8KYe/eAD1gblCRSlCxfRkHq8xI=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=IUTgAYLGa1m6XOkFr6uiE5xqAMkrs5S4oWYB9EA1/SxufD3rjZYcK7MQLBaCYkGvv8wWP0CWmL08Ahw4DaXpOvyWvAxTOX+avuCAWJes95tp4BOAZCqP0nR0O4giQuwavzqw+PidsdUqm7scp0txSWZgreegnJ5TL7m+W2qqO/E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=uCTZmnx9; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="uCTZmnx9" Received: by smtp.kernel.org (Postfix) with ESMTPSA id CCCA5C4CEF1; Tue, 23 Dec 2025 01:22:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452981; bh=5CD2kezQ6SpAVTOAH8KYe/eAD1gblCRSlCxfRkHq8xI=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=uCTZmnx9aCTmqL8Rwj9CihWkcibN1XaEzHxveOeofpumWXUBdidWG2uwUAJ5dz9P6 UDlo0vNBWNeqAonw2mfGncBMQrQ/T0QLCqaVRcnT8/j8ncZTH6IAZDopniZg0Ev5QB 
YTU4E3tDXpXLh5g/8hu3qek+Fcb3/eNZ6UkQGfPRxbzhrnHTDEdnHoqvV/kvzNrj68 ieivFe713XCS7hCd1zVY7vgdsWSwelreQORnoi+MNjaW5KP1TisLG8E5YAEtgK7s5z gm6OSaMQWmJb5zwr1MhdPWwW4f2g3LWGRAQBJuv8K7Rgf9iNwLo8kj27NpctW8ZDD8 5WS2gjvn1Y53Q== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:14 +0000 Subject: [PATCH v9 20/30] KVM: arm64: Support userspace access to streaming mode Z and P registers Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-20-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=4047; i=broonie@kernel.org; h=from:subject:message-id; bh=5CD2kezQ6SpAVTOAH8KYe/eAD1gblCRSlCxfRkHq8xI=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6SUpQflFwr9SQLhBCXc+4i5w7JdURtxBUT9 pJtomD9iceJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnukgAKCRAk1otyXVSH 0AnOB/4gTVoWXX9f4wpyus3+8f8KTtQoifdjwFi7nartT7KoGuA4iVlcOAyPjugPqsa5jcxDLod 6KHj2dXVHOI7FDMa8R7z+Y2+5ny0+cJytVlx22mdnqbpqcWK1Z7uHw4f80cgPY4YcNvCRyAoi2Z 5ctl8JRBWb6if2POD+oGmzOHNxH3KCEK4qftlJfK9/pyjV8+5xs7d/VAQTXL+B5bmPQKYpzXz+4 5dQe6xlb0etp7B4VYLIwKBbeEDPuQBDY52BgSqJrEkKa0EYoKM5X8WT7XkqowrVqhfrf70odYVj /MWkSwt8dXYH5h6wEFbA2GDAtQFbW+t9UKyJwIrze6uqKL8L X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME introduces a mode called streaming mode where the Z, P and optionally FFR registers can be accessed using the SVE instructions but with the SME vector length. Reflect this in the ABI for accessing the guest registers by making the vector length for the vcpu reflect the vector length that would be seen by the guest were it running, using the SME vector length when the guest is configured for streaming mode. Since SME may be present without SVE we also update the existing checks for access to the Z, P and V registers to check for either SVE or streaming mode. When not in streaming mode the guest floating point state may be accessed via the V registers. Any VMM that supports SME must be aware of the need to configure streaming mode prior to writing the floating point registers that this creates. 
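As an illustration of the resulting ordering requirement, a hypothetical VMM restoring a guest that was saved in streaming mode would set SVCR before transferring the vector registers, since the Z and P register layout follows the SME vector length once SVCR.SM is set. The SVCR encoding (op0=3, op1=3, CRn=4, CRm=2, op2=2) and the SM bit position (bit 0) below are architectural values assumed by this sketch, not something defined by this patch:

	/* Hypothetical VMM-side sketch, not part of this patch */
	__u64 svcr = 1;		/* SVCR.SM, assumed bit 0 */
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 3, 4, 2, 2),	/* SVCR, assumed encoding */
		.addr = (__u64)&svcr,
	};

	/* 1: enter streaming mode first */
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);

	/* 2: only then write Z0..Z31 and P0..P15, now sized by the SME VL */
	reg.id   = KVM_REG_ARM64_SVE_ZREG(0, 0);
	reg.addr = (__u64)z0_buf;
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
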
Signed-off-by: Mark Brown --- arch/arm64/kvm/guest.c | 38 ++++++++++++++++++++++++++++++++++---- 1 file changed, 34 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 2a1fdcb0ec49..90dcacb35f01 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -73,6 +73,11 @@ static u64 core_reg_offset_from_id(u64 id) return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE); } =20 +static bool vcpu_has_sve_regs(const struct kvm_vcpu *vcpu) +{ + return vcpu_has_sve(vcpu) || vcpu_in_streaming_mode(vcpu); +} + static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off) { int size; @@ -110,9 +115,10 @@ static int core_reg_size_from_offset(const struct kvm_= vcpu *vcpu, u64 off) /* * The KVM_REG_ARM64_SVE regs must be used instead of * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on - * SVE-enabled vcpus: + * SVE-enabled vcpus or when a SME enabled vcpu is in + * streaming mode: */ - if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off)) + if (vcpu_has_sve_regs(vcpu) && core_reg_offset_is_vreg(off)) return -EINVAL; =20 return size; @@ -426,6 +432,24 @@ struct vec_state_reg_region { unsigned int upad; /* extra trailing padding in user memory */ }; =20 +/* + * We represent the Z and P registers to userspace using either the + * SVE or SME vector length, depending on which features the guest has + * and if the guest is in streaming mode. + */ +static unsigned int vcpu_sve_cur_vq(struct kvm_vcpu *vcpu) +{ + unsigned int vq =3D 0; + + if (vcpu_has_sve(vcpu)) + vq =3D vcpu_sve_max_vq(vcpu); + + if (vcpu_in_streaming_mode(vcpu)) + vq =3D vcpu_sme_max_vq(vcpu); + + return vq; +} + /* * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy @@ -466,7 +490,7 @@ static int sve_reg_to_region(struct vec_state_reg_regio= n *region, if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT; =20 - vq =3D vcpu_sve_max_vq(vcpu); + vq =3D vcpu_sve_cur_vq(vcpu); =20 reqoffset =3D SVE_SIG_ZREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; @@ -476,7 +500,7 @@ static int sve_reg_to_region(struct vec_state_reg_regio= n *region, if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT; =20 - vq =3D vcpu_sve_max_vq(vcpu); + vq =3D vcpu_sve_cur_vq(vcpu); =20 reqoffset =3D SVE_SIG_PREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; @@ -515,6 +539,9 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 + if (!vcpu_has_sve_regs(vcpu)) + return -EBUSY; + if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, region.klen) || clear_user(uptr + region.klen, region.upad)) @@ -541,6 +568,9 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 + if (!vcpu_has_sve_regs(vcpu)) + return -EBUSY; + if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, region.klen)) return -EFAULT; --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 37FF9318125; Tue, 23 Dec 2025 01:23:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452986; cv=none; 
b=FA9hk+zifiS1CJrLVYRF8CxwzYfE+m/er+vWZNRTN0w344jo6wYo5EFhre0eMByYc2++VSovw6TEViBXwAskmqE4D3qB5ctcyAY/77DNZNrlj/EbleIy3SWUDkSSMj37UANAwW2e5HMTPY0PaufaRY4Rq7I8EluY3bQKxkfqlfg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452986; c=relaxed/simple; bh=JJCv5/aWHYtYD01yHMX/Fpj/I6fYeKS4R7fEQWO6boU=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=e60Iusweq+pVHQtlThe4GCgPpKtc2vb3a+0RzFIQCcKMp96CleJo5u8IwAnJVcqhhPN+alhUm0PZ8QjJaDCw1ALkIGRsKW6kvz/gb/dkMPe235l98zmkS5bY4COcDNAX7qIgPklwThTQUUY39l5Tnt9ZIOJQI18BmMVTBb/y3ec= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=clMp99AQ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="clMp99AQ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 19B62C19424; Tue, 23 Dec 2025 01:23:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452985; bh=JJCv5/aWHYtYD01yHMX/Fpj/I6fYeKS4R7fEQWO6boU=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=clMp99AQyoLYHSM78oI17kDX2socHgqwo8NmYdrmM4f80z/WWY7k5et5kBFDwWUci NggCdMxBieK3cAH6HS/6hTX8b3hSah1OWLkH+gRaCEjixJnFDiFwwNPkOCLRmvCior XyH7HCLHy7PNW60R18o/d2ehLYbx5ZyEHKHNAiPfouhxz/LTZTAx+dkVPhzabEFe4e mbNS9a2JbTVEynIw7//58Sx1JAU6JNl/rvZYL4yAQarxNuLO7SePSdVxy7FcaC+Vdi BrzzW325ajpnVCRu7VHgmH/rUAyvaVqGAZ7hkIwExMegQFSIuIPpQ6NaRB2mqgQPYq sHuzcp8Mg7Ppw== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:15 +0000 Subject: [PATCH v9 21/30] KVM: arm64: Flush register state on writes to SVCR.SM and SVCR.ZA Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-21-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=3621; i=broonie@kernel.org; h=from:subject:message-id; bh=JJCv5/aWHYtYD01yHMX/Fpj/I6fYeKS4R7fEQWO6boU=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6SNLuv1rExcs0qccarlk6HYOfwO+99KdPxT GgJYVCU2KeJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnukgAKCRAk1otyXVSH 0EXLB/0UmpiAcLgtSGMFGBtBAEJNxmo7r6TtIB50Vi7F1WNiWbTvpXf2MPqzyU6a4daDzJ4nEDk 1lEYUZRcMG22AM+Bs9UxhKnZVw155gLWqWvFAw3oxmWkDwEr2bLQYkJFBE3p24tyAbVh5Ble5BI l++ArwH2g6gBlfmpdSmO8V/rixVdTTuL1FQ+miPSsYdGDjUC8S2Yc4FRsaJffEfCss4BRNUxRYm ndaH3VpxDBihGNB5AP3tZbvbfP+nWJ+FOvqyEdARwVNogDmPL5eNR9AaRvy8MQIEQ4bBjJ6DZqV M6rMzSI/e+aEWKgKQeLWySapzOArXvQ5bbmctrPyAh6bLbFm X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Writes to the physical SVCR.SM and SVCR.ZA change the state of PSTATE.SM and PSTATE.ZA, causing other floating point state to reset. Emulate this behaviour for writes done via the KVM userspace ABI. 
Setting PSTATE.ZA to 1 causes ZA and ZT0 to be reset to 0, these are stored in sme_state. Setting PSTATE.ZA to 0 causes ZA and ZT0 to become inaccesible so no reset is needed. Any change in PSTATE.SM causes the V, Z, P, FFR and FPMR registers to be reset to 0 and FPSR to be reset to 0x800009f. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 24 ++++++++++++++++++++++++ arch/arm64/kvm/sys_regs.c | 29 ++++++++++++++++++++++++++++- 2 files changed, 52 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 44595a789a97..bd7a9a4efbc3 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1147,6 +1147,30 @@ struct kvm_vcpu_arch { =20 #define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_= vl[ARM64_VEC_SVE]) =20 +#define vcpu_sme_state(vcpu) (kern_hyp_va((vcpu)->arch.sme_state)) + +#define sme_state_size_from_vl(vl, sme2) ({ \ + size_t __size_ret; \ + unsigned int __vq; \ + \ + if (WARN_ON(!sve_vl_valid(vl))) { \ + __size_ret =3D 0; \ + } else { \ + __vq =3D sve_vq_from_vl(vl); \ + __size_ret =3D ZA_SIG_REGS_SIZE(__vq); \ + if (sme2) \ + __size_ret +=3D ZT_SIG_REG_SIZE; \ + } \ + \ + __size_ret; \ +}) + +#define vcpu_sme_state_size(vcpu) ({ \ + unsigned long __vl; \ + __vl =3D (vcpu)->arch.max_vl[ARM64_VEC_SME]; \ + sme_state_size_from_vl(__vl, vcpu_has_sme2(vcpu)); \ +}) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 51f175bbe8d1..4ecfcb0af24c 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -927,6 +927,33 @@ static unsigned int hidden_visibility(const struct kvm= _vcpu *vcpu, return REG_HIDDEN; } =20 +static int set_svcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 val) +{ + u64 old =3D __vcpu_sys_reg(vcpu, rd->reg); + + if (val & SVCR_RES0) + return -EINVAL; + + if ((val & SVCR_ZA) && !(old & SVCR_ZA) && vcpu->arch.sme_state) + memset(vcpu->arch.sme_state, 0, vcpu_sme_state_size(vcpu)); + + if ((val & SVCR_SM) !=3D (old & SVCR_SM)) { + memset(vcpu->arch.ctxt.fp_regs.vregs, 0, + sizeof(vcpu->arch.ctxt.fp_regs.vregs)); + + if (vcpu->arch.sve_state) + memset(vcpu->arch.sve_state, 0, + vcpu_sve_state_size(vcpu)); + + __vcpu_assign_sys_reg(vcpu, FPMR, 0); + vcpu->arch.ctxt.fp_regs.fpsr =3D 0x800009f; + } + + __vcpu_assign_sys_reg(vcpu, rd->reg, val); + return 0; +} + static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { @@ -3512,7 +3539,7 @@ static const struct sys_reg_desc sys_reg_descs[] =3D { CTR_EL0_DminLine_MASK | CTR_EL0_L1Ip_MASK | CTR_EL0_IminLine_MASK), - { SYS_DESC(SYS_SVCR), undef_access, reset_val, SVCR, 0, .visibility =3D s= me_visibility }, + { SYS_DESC(SYS_SVCR), undef_access, reset_val, SVCR, 0, .visibility =3D s= me_visibility, .set_user =3D set_svcr }, { SYS_DESC(SYS_FPMR), undef_access, reset_val, FPMR, 0, .visibility =3D f= p8_visibility }, =20 { PMU_SYS_REG(PMCR_EL0), .access =3D access_pmcr, .reset =3D reset_pmcr, --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 53F7831A04D; Tue, 23 Dec 2025 01:23:10 +0000 (UTC) Authentication-Results: 
From: Mark Brown
Date: Tue, 23 Dec 2025 01:21:16 +0000
Subject: [PATCH v9 22/30] KVM: arm64: Expose SME specific state to userspace
Message-Id: <20251223-kvm-arm64-sme-v9-22-8be3867cb883@kernel.org>
References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org>
In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org>

SME introduces two new registers, the ZA matrix register and the ZT0 LUT register.
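For orientation before the details below: with a streaming vector length of SVL bits, ZA is an (SVL/8) x (SVL/8) byte matrix and ZT0 is a fixed 512-bit register. A small sketch of the architectural sizes involved (not of any kernel interface):

  /* Architectural sizes only; not a kernel API. */
  static inline unsigned int za_bytes(unsigned int svl_bits)
  {
  	unsigned int svl_bytes = svl_bits / 8;

  	/* ZA holds SVL/8 horizontal vectors of SVL bits each. */
  	return svl_bytes * svl_bytes;
  }

  #define ZT0_BYTES	(512 / 8)	/* ZT0 is always 512 bits */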
Both of these registers are only accessible when PSTATE.ZA is set and ZT0 is only present if SME2 is enabled for the guest. Provide support for configuring these from VMMs. The ZA matrix is a single SVL*SVL register which is available when PSTATE.ZA is set. We follow the pattern established by the architecture itself and expose this to userspace as a series of horizontal SVE vectors with the streaming mode vector length, using the format already established for the SVE vectors themselves. ZT0 is a single register with a refreshingly fixed size 512 bit register which is like ZA accessible only when PSTATE.ZA is set. Add support for it to the userspace API, as with ZA we allow the register to be read or written regardless of the state of PSTATE.ZA in order to simplify userspace usage. The value will be reset to 0 whenever PSTATE.ZA changes from 0 to 1, userspace can read stale values but these are not observable by the guest without manipulation of PSTATE.ZA by userspace. While there is currently only one ZT register the naming as ZT0 and the instruction encoding clearly leave room for future extensions adding more ZT registers. This encoding can readily support such an extension if one is introduced. Signed-off-by: Mark Brown --- arch/arm64/include/uapi/asm/kvm.h | 17 ++++++ arch/arm64/kvm/guest.c | 114 ++++++++++++++++++++++++++++++++++= +++- 2 files changed, 129 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/as= m/kvm.h index 498a49a61487..9a19cc58d227 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -357,6 +357,23 @@ struct kvm_arm_counter_offset { /* SME registers */ #define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT) =20 +#define KVM_ARM64_SME_VQ_MIN __SVE_VQ_MIN +#define KVM_ARM64_SME_VQ_MAX __SVE_VQ_MAX + +/* ZA and ZTn occupy blocks at the following offsets within this range: */ +#define KVM_REG_ARM64_SME_ZA_BASE 0 +#define KVM_REG_ARM64_SME_ZT_BASE 0x600 + +#define KVM_ARM64_SME_MAX_ZAHREG (__SVE_VQ_BYTES * KVM_ARM64_SME_VQ_MAX) + +#define KVM_REG_ARM64_SME_ZAHREG(n, i) \ + (KVM_REG_ARM64 | KVM_REG_ARM64_SME | KVM_REG_ARM64_SME_ZA_BASE | \ + KVM_REG_SIZE_U2048 | \ + (((n) & (KVM_ARM64_SME_MAX_ZAHREG - 1)) << 5) | \ + ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1))) + +#define KVM_REG_ARM64_SME_ZTREG_SIZE (512 / 8) + /* Vector lengths pseudo-register: */ #define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ KVM_REG_SIZE_U512 | 0xfffe) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 90dcacb35f01..d4e30eb57a9c 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -594,23 +594,133 @@ static int set_sme_vls(struct kvm_vcpu *vcpu, const = struct kvm_one_reg *reg) return set_vec_vls(ARM64_VEC_SME, vcpu, reg); } =20 +/* + * Validate SVE register ID and get sanitised bounds for user/kernel SVE + * register copy + */ +static int sme_reg_to_region(struct vec_state_reg_region *region, + struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + /* reg ID ranges for ZA.H[n] registers */ + unsigned int vq =3D vcpu_sme_max_vq(vcpu) - 1; + const u64 za_h_max =3D vq * __SVE_VQ_BYTES; + const u64 zah_id_min =3D KVM_REG_ARM64_SME_ZAHREG(0, 0); + const u64 zah_id_max =3D KVM_REG_ARM64_SME_ZAHREG(za_h_max - 1, + SVE_NUM_SLICES - 1); + unsigned int reg_num; + + unsigned int reqoffset, reqlen; /* User-requested offset and length */ + unsigned int maxlen; /* Maximum permitted length */ + + size_t sme_state_size; + + reg_num =3D (reg->id & SVE_REG_ID_MASK) >> 
SVE_REG_ID_SHIFT; + + if (reg->id >=3D zah_id_min && reg->id <=3D zah_id_max) { + if (!vcpu_has_sme(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + return -ENOENT; + + /* ZA is exposed as SVE vectors ZA.H[n] */ + reqoffset =3D ZA_SIG_ZAV_OFFSET(vq, reg_num) - + ZA_SIG_REGS_OFFSET; + reqlen =3D KVM_SVE_ZREG_SIZE; + maxlen =3D SVE_SIG_ZREG_SIZE(vq); + } else if (reg->id =3D=3D KVM_REG_ARM64_SME_ZT_BASE) { + /* ZA is exposed as SVE vectors ZA.H[n] */ + if (!kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, SME2) || + (reg->id & SVE_REG_SLICE_MASK) > 0 || + reg_num > 0) + return -ENOENT; + + /* ZT0 is stored after ZA */ + reqlen =3D KVM_REG_ARM64_SME_ZTREG_SIZE; + maxlen =3D KVM_REG_ARM64_SME_ZTREG_SIZE; + } else { + return -EINVAL; + } + + sme_state_size =3D vcpu_sme_state_size(vcpu); + if (WARN_ON(!sme_state_size)) + return -EINVAL; + + region->koffset =3D array_index_nospec(reqoffset, sme_state_size); + region->klen =3D min(maxlen, reqlen); + region->upad =3D reqlen - region->klen; + + return 0; +} + +/* + * ZA is exposed as an array of horizontal vectors with the same + * format as SVE, mirroring the architecture's LDR ZA[Wv, offs], [Xn] + * instruction. + */ + static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) { + int ret; + struct vec_state_reg_region region; + char __user *uptr =3D (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id =3D=3D KVM_REG_ARM64_SME_VLS) return get_sme_vls(vcpu, reg); =20 - return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret =3D sme_reg_to_region(®ion, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + /* + * None of the SME specific registers are accessible unless + * PSTATE.ZA is set. + */ + if (!vcpu_za_enabled(vcpu)) + return -EINVAL; + + if (copy_from_user(vcpu->arch.sme_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; } =20 static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) { + int ret; + struct vec_state_reg_region region; + char __user *uptr =3D (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id =3D=3D KVM_REG_ARM64_SME_VLS) return set_sme_vls(vcpu, reg); =20 - return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret =3D sme_reg_to_region(®ion, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + /* + * None of the SME specific registers are accessible unless + * PSTATE.ZA is set. 
+ */ + if (!vcpu_za_enabled(vcpu)) + return -EINVAL; + + if (copy_from_user(vcpu->arch.sme_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; } + int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *r= egs) { return -EINVAL; --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 15D22326946; Tue, 23 Dec 2025 01:23:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452995; cv=none; b=FjMSCU4ihGuSkA+P+yCed62Tn8qGj7Wbn3PYVGLgKlIUtBx+w+YdRZCb3vqqdbRR3PkeFpjgvdbH5304T9hMjyG7VPLbd1c8eccaabDCapA32XMOlROdfITEC9A6Zvd/b98E4v/KUdYs7UZvEuV0yO+5dsb31cXVphfaSPFDKtQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452995; c=relaxed/simple; bh=7meBQOOk+cHZORi2pAyi6ETNFpYUQ8ejWOvoNqyjX4Q=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=LMaf8QBBNjxSg9eTTJ2+hORhe32uBRrfclZemwJNo/ky49zj4xvBNG8XPBPxIErghqRvR/Afvx0QE7QIUdEJIat79MHSoGEqI+eGiRCDaM6hOT+0sMf8Pgw+/gzKA0poSP+o3hGiI30qXTU/wqXu8g0A4NbnylFPXRub+uKr48o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=eVDienFp; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="eVDienFp" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9F032C19422; Tue, 23 Dec 2025 01:23:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452994; bh=7meBQOOk+cHZORi2pAyi6ETNFpYUQ8ejWOvoNqyjX4Q=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=eVDienFpjOatYQBZy1ZrglivhpcSztYlmPQGCQAnJfAswUVHvFJ2qI10trdhI97OQ QAV6QqBw40uAYN6ruKEWmAUwNgjlBnOXtJUnaihQLAt4M1hc+9+F7XquVFE9T4Upy8 DK/sWYPV09rJgV2VHWQv0xc9yab5jGrnMTWtUDxBjU0Yr/POdXxit5RqhD24s6QUd1 xUFFvfFG4IoKJHDLKP+mWbJFaxzCGy91dRGrDzjOIl2TpPB0wFW4ArWSHlpCVw/Q1a nXnEt74gVYkJeYTCnOBRiQL7Kt6EGqFDqK2HKX6sOFQVGXN+8OpGHVksS0HscFeBRD ltlC3Vy2yeD6A== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:17 +0000 Subject: [PATCH v9 23/30] KVM: arm64: Context switch SME state for guests Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-23-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=18756; i=broonie@kernel.org; h=from:subject:message-id; bh=7meBQOOk+cHZORi2pAyi6ETNFpYUQ8ejWOvoNqyjX4Q=; 
If the guest has SME state we need to context switch that state; provide support for that for normal guests.

SME has three sets of registers: ZA, ZT (only present for SME2) and streaming SVE, which replaces the standard floating point registers when active. The first two are fairly straightforward: they are accessible only when PSTATE.ZA is set and we can reuse the assembly from the host to save and load them from a single contiguous buffer. When PSTATE.ZA is not set these registers are inaccessible; when the guest enables PSTATE.ZA all bits are set to 0 by that, so nothing is required on restore.

Streaming mode is slightly more complicated. When enabled via PSTATE.SM it provides a version of the SVE registers using the SME vector length and may optionally omit the FFR register. SME may also be present without SVE. The register state is stored in sve_state as for non-streaming SVE mode; we make an initial selection of registers to update based on the guest SVE support and then override this when loading SVCR if streaming mode is enabled.

A further complication is that when the hardware is in streaming mode, guest operations that are invalid in streaming mode will generate SME exceptions. There are also subfeature exceptions for SME2, controlled via SMCR, which generate distinct exception codes. In many situations these exceptions are routed directly to the lower ELs with no opportunity for the hypervisor to intercept. So that guests do not see unexpected exception types due to the actual hardware configuration not being what the guest configured, we update the SMCRs and SVCR even if the guest does not own the registers.

Since, in order to avoid duplication with SME, we now restore the register state outside of the SVE specific restore function, we need to move the restore of the effective VL for nested guests to a separate restore function run after loading the floating point register state, along with the similar handling required for SME. The selection of which vector length to use is handled by vcpu_sve_pffr().
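As a summary of the selection logic described above, a simplified sketch with illustrative names (not the hypervisor code itself):

  static void pick_guest_fp_restore(bool guest_has_sve, bool streaming,
  				  bool has_fa64, bool *restore_sve,
  				  bool *restore_ffr)
  {
  	/* Default to the non-streaming choice: use the SVE view only if
  	 * the guest has SVE at all. */
  	*restore_sve = guest_has_sve;
  	*restore_ffr = guest_has_sve;

  	/* Streaming mode always uses the SVE register view at the SME
  	 * vector length, and FFR is only present there with FA64. */
  	if (streaming) {
  		*restore_sve = true;
  		*restore_ffr = has_fa64;
  	}
  }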
Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 10 +++ arch/arm64/include/asm/kvm_emulate.h | 6 ++ arch/arm64/include/asm/kvm_host.h | 4 + arch/arm64/kvm/fpsimd.c | 25 ++++-- arch/arm64/kvm/hyp/include/hyp/switch.h | 151 ++++++++++++++++++++++++++++= ++-- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 80 +++++++++++++++-- 6 files changed, 255 insertions(+), 21 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsim= d.h index 8b0840bd7e14..8642efbdcb2b 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -442,6 +442,15 @@ static inline size_t sme_state_size(struct task_struct= const *task) write_sysreg_s(__new, (reg)); \ } while (0) =20 +#define sme_cond_update_smcr_vq(val, reg) \ + do { \ + u64 __smcr =3D read_sysreg_s((reg)); \ + u64 __new =3D __smcr & ~SMCR_ELx_LEN_MASK; \ + __new |=3D (val) & SMCR_ELx_LEN_MASK; \ + if (__smcr !=3D __new) \ + write_sysreg_s(__new, (reg)); \ + } while (0) + #else =20 static inline void sme_user_disable(void) { BUILD_BUG(); } @@ -471,6 +480,7 @@ static inline size_t sme_state_size(struct task_struct = const *task) } =20 #define sme_cond_update_smcr(val, fa64, zt0, reg) do { } while (0) +#define sme_cond_update_smcr_vq(val, reg) do { } while (0) =20 #endif /* ! CONFIG_ARM64_SME */ =20 diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/= kvm_emulate.h index c9eab316398e..1b0ebe480e19 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -696,4 +696,10 @@ static inline void vcpu_set_hcrx(struct kvm_vcpu *vcpu) vcpu->arch.hcrx_el2 |=3D HCRX_EL2_SCTLR2En; } } + +static inline bool guest_hyp_sme_traps_enabled(const struct kvm_vcpu *vcpu) +{ + return __guest_hyp_cptr_xen_trap_enabled(vcpu, SMEN); +} + #endif /* __ARM64_KVM_EMULATE_H__ */ diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index bd7a9a4efbc3..bceaf0608d75 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -736,6 +736,7 @@ struct kvm_host_data { =20 /* Used by pKVM only. */ u64 fpmr; + u64 smcr_el1; =20 /* Ownership of the FP regs */ enum { @@ -1131,6 +1132,9 @@ struct kvm_vcpu_arch { #define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) =20 +#define vcpu_sme_smcr_elx(vcpu) \ + (unlikely(is_hyp_ctxt(vcpu)) ? SMCR_EL2 : SMCR_EL1) + #define sve_state_size_from_vl(sve_max_vl) ({ \ size_t __size_ret; \ unsigned int __vq; \ diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 1f4fcc8b5554..8fb8c55e50b3 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -69,19 +69,25 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) WARN_ON_ONCE(!irqs_disabled()); =20 if (guest_owns_fp_regs()) { - /* - * Currently we do not support SME guests so SVCR is - * always 0 and we just need a variable to point to. 
- */ fp_state.st =3D &vcpu->arch.ctxt.fp_regs; fp_state.sve_state =3D vcpu->arch.sve_state; fp_state.sve_vl =3D vcpu->arch.max_vl[ARM64_VEC_SVE]; - fp_state.sme_state =3D NULL; + fp_state.sme_state =3D vcpu->arch.sme_state; + fp_state.sme_vl =3D vcpu->arch.max_vl[ARM64_VEC_SME]; fp_state.svcr =3D __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); fp_state.fpmr =3D __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); fp_state.fp_type =3D &vcpu->arch.fp_type; + fp_state.sme_features =3D 0; + if (kvm_has_fa64(vcpu->kvm)) + fp_state.sme_features |=3D SMCR_ELx_FA64; + if (kvm_has_sme2(vcpu->kvm)) + fp_state.sme_features |=3D SMCR_ELx_EZT0; =20 + /* + * For SME only hosts fpsimd_save() will override the + * state selection if we are in streaming mode. + */ if (vcpu_has_sve(vcpu)) fp_state.to_save =3D FP_STATE_SVE; else @@ -90,6 +96,15 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fpsimd_bind_state_to_cpu(&fp_state); =20 clear_thread_flag(TIF_FOREIGN_FPSTATE); + } else { + /* + * We might have enabled SME to configure traps but + * insist the host doesn't run the hypervisor with SME + * enabled, ensure it's disabled again. + */ + if (system_supports_sme()) { + sme_smstop(); + } } } =20 diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/i= nclude/hyp/switch.h index 9ce53524d664..5bcc72ae48ff 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -431,6 +431,22 @@ static inline bool kvm_hyp_handle_mops(struct kvm_vcpu= *vcpu, u64 *exit_code) return true; } =20 +static inline void __hyp_sme_restore_guest(struct kvm_vcpu *vcpu, + bool *restore_sve, + bool *restore_ffr) +{ + bool has_fa64 =3D vcpu_has_fa64(vcpu); + bool has_sme2 =3D vcpu_has_sme2(vcpu); + + if (vcpu_in_streaming_mode(vcpu)) { + *restore_sve =3D true; + *restore_ffr =3D has_fa64; + } + + if (vcpu_za_enabled(vcpu)) + __sme_restore_state(vcpu_sme_state(vcpu), has_sme2); +} + static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) { /* @@ -438,19 +454,25 @@ static inline void __hyp_sve_restore_guest(struct kvm= _vcpu *vcpu) * vCPU. Start off with the max VL so we can load the SVE state. */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); - __sve_restore_state(vcpu_sve_pffr(vcpu), - &vcpu->arch.ctxt.fp_regs.fpsr, - true); =20 + write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR); +} + +static inline void __hyp_nv_restore_guest_vls(struct kvm_vcpu *vcpu) +{ /* * The effective VL for a VM could differ from the max VL when running a * nested guest, as the guest hypervisor could select a smaller VL. Slap * that into hardware before wrapping up. */ - if (is_nested_ctxt(vcpu)) + if (!is_nested_ctxt(vcpu)) + return; + + if (vcpu_has_sve(vcpu)) sve_cond_update_zcr_vq(__vcpu_sys_reg(vcpu, ZCR_EL2), SYS_ZCR_EL2); =20 - write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR); + if (vcpu_has_sme(vcpu)) + sme_cond_update_smcr_vq(__vcpu_sys_reg(vcpu, SMCR_EL2), SYS_SMCR_EL2); } =20 static inline void __hyp_sve_save_host(void) @@ -464,10 +486,46 @@ static inline void __hyp_sve_save_host(void) true); } =20 +static inline void kvm_sme_configure_traps(struct kvm_vcpu *vcpu) +{ + u64 smcr_el1, smcr_el2; + u64 svcr; + + if (!vcpu_has_sme(vcpu)) + return; + + /* A guest hypervisor may restrict the effective max VL. 
*/ + if (is_nested_ctxt(vcpu)) + smcr_el2 =3D __vcpu_sys_reg(vcpu, SMCR_EL2); + else + smcr_el2 =3D vcpu_sme_max_vq(vcpu) - 1; + + if (vcpu_has_fa64(vcpu)) + smcr_el2 |=3D SMCR_ELx_FA64; + if (vcpu_has_sme2(vcpu)) + smcr_el2 |=3D SMCR_ELx_EZT0; + + write_sysreg_el2(smcr_el2, SYS_SMCR); + + smcr_el1 =3D __vcpu_sys_reg(vcpu, vcpu_sme_smcr_elx(vcpu)); + write_sysreg_el1(smcr_el1, SYS_SMCR); + + svcr =3D __vcpu_sys_reg(vcpu, SVCR); + write_sysreg_s(svcr, SYS_SVCR); +} + static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu) { u64 zcr_el1, zcr_el2; =20 + /* + * We always load the SME control registers that affect traps + * since if they are not configured as expected by the guest + * then it may have exceptions that it does not expect + * directly delivered. + */ + kvm_sme_configure_traps(vcpu); + if (!guest_owns_fp_regs()) return; =20 @@ -487,8 +545,51 @@ static inline void fpsimd_lazy_switch_to_guest(struct = kvm_vcpu *vcpu) =20 static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) { + u64 smcr_el1, smcr_el2; u64 zcr_el1, zcr_el2; =20 + /* + * We always load the control registers for the guest so we + * always restore state for the host. + */ + if (vcpu_has_sme(vcpu)) { + /* + * __deactivate_cptr_traps() disabled traps, but there + * hasn't necessarily been a context synchronization + * event yet. + */ + isb(); + + smcr_el1 =3D read_sysreg_el1(SYS_SMCR); + __vcpu_assign_sys_reg(vcpu, vcpu_sme_smcr_elx(vcpu), smcr_el1); + + smcr_el2 =3D 0; + if (system_supports_fa64()) + smcr_el2 |=3D SMCR_ELx_FA64; + if (system_supports_sme2()) + smcr_el2 |=3D SMCR_ELx_EZT0; + + /* + * The guest's state is always saved using the guest's max VL. + * Ensure that the host has the guest's max VL active such that + * the host can save the guest's state lazily, but don't + * artificially restrict the host to the guest's max VL. + */ + if (has_vhe()) { + smcr_el2 |=3D vcpu_sme_max_vq(vcpu) - 1; + write_sysreg_el2(smcr_el2, SYS_SMCR); + } else { + smcr_el1 =3D smcr_el2; + smcr_el2 |=3D sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SME]) - 1; + write_sysreg_el2(smcr_el2, SYS_SMCR); + + smcr_el1 |=3D vcpu_sve_max_vq(vcpu) - 1; + write_sysreg_el1(smcr_el1, SYS_SMCR); + } + + __vcpu_assign_sys_reg(vcpu, SVCR, read_sysreg_s(SYS_SVCR)); + } + if (!guest_owns_fp_regs()) return; =20 @@ -525,6 +626,16 @@ static inline void fpsimd_lazy_switch_to_host(struct k= vm_vcpu *vcpu) =20 static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu) { + /* + * The hypervisor refuses to run if streaming mode or ZA is + * enabled, we only need to save SMCR_EL1 for SME. For pKVM + * we will restore this, reset SMCR_EL2 to a fixed value and + * disable streaming mode and ZA to avoid any state being + * leaked. + */ + if (system_supports_sme()) + *host_data_ptr(smcr_el1) =3D read_sysreg_el1(SYS_SMCR); + /* * Non-protected kvm relies on the host restoring its sve state. 
* Protected kvm restores the host's sve state as not to reveal that @@ -549,14 +660,17 @@ static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu = *vcpu) */ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_= code) { - bool sve_guest; - u8 esr_ec; + bool restore_sve, restore_ffr; + bool sve_guest, sme_guest; + u8 esr_ec, esr_iss_smtc; =20 if (!system_supports_fpsimd()) return false; =20 sve_guest =3D vcpu_has_sve(vcpu); + sme_guest =3D vcpu_has_sme(vcpu); esr_ec =3D kvm_vcpu_trap_get_class(vcpu); + esr_iss_smtc =3D ESR_ELx_SME_ISS_SMTC((kvm_vcpu_get_esr(vcpu))); =20 /* Only handle traps the vCPU can support here: */ switch (esr_ec) { @@ -575,6 +689,15 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vc= pu *vcpu, u64 *exit_code) if (guest_hyp_sve_traps_enabled(vcpu)) return false; break; + case ESR_ELx_EC_SME: + if (!sme_guest) + return false; + if (guest_hyp_sme_traps_enabled(vcpu)) + return false; + if (!kvm_has_sme2(vcpu->kvm) && + (esr_iss_smtc =3D=3D ESR_ELx_SME_ISS_SMTC_ZT_DISABLED)) + return false; + break; default: return false; } @@ -590,8 +713,20 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vc= pu *vcpu, u64 *exit_code) kvm_hyp_save_fpsimd_host(vcpu); =20 /* Restore the guest state */ + + /* These may be overridden for a SME guest */ + restore_sve =3D sve_guest; + restore_ffr =3D sve_guest; + if (sve_guest) __hyp_sve_restore_guest(vcpu); + if (sme_guest) + __hyp_sme_restore_guest(vcpu, &restore_sve, &restore_ffr); + + if (restore_sve) + __sve_restore_state(vcpu_sve_pffr(vcpu), + &vcpu->arch.ctxt.fp_regs.fpsr, + restore_ffr); else __fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs); =20 @@ -602,6 +737,8 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcp= u *vcpu, u64 *exit_code) if (!(read_sysreg(hcr_el2) & HCR_RW)) write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2); =20 + __hyp_nv_restore_guest_vls(vcpu); + *host_data_ptr(fp_owner) =3D FP_STATE_GUEST_OWNED; =20 /* diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 208e9042aca4..bd48e149764c 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -26,14 +26,17 @@ void __kvm_hyp_host_forward_smc(struct kvm_cpu_context = *host_ctxt); =20 static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) { + bool save_ffr =3D !vcpu_in_streaming_mode(vcpu) || vcpu_has_fa64(vcpu); + __vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR)); + /* * On saving/restoring guest sve state, always use the maximum VL for * the guest. The layout of the data when saving the sve state depends * on the VL, so use a consistent (i.e., the maximum) guest VL. */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); - __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true= ); + __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, save= _ffr); write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZC= R_EL2); } =20 @@ -57,9 +60,63 @@ static void __hyp_sve_restore_host(void) write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR); } =20 -static void fpsimd_sve_flush(void) +static void __hyp_sme_save_guest(struct kvm_vcpu *vcpu) { - *host_data_ptr(fp_owner) =3D FP_STATE_HOST_OWNED; + __vcpu_assign_sys_reg(vcpu, SMCR_EL1, read_sysreg_el1(SYS_SMCR)); + __vcpu_assign_sys_reg(vcpu, SVCR, read_sysreg_s(SYS_SVCR)); + + /* + * On saving/restoring guest sve state, always use the maximum VL for + * the guest. 
The layout of the data when saving the sve state depends + * on the VL, so use a consistent (i.e., the maximum) guest VL. + * + * We restore the FA64 and SME2 enables for the host since we + * will always restore the host configuration so if host and + * guest VLs are the same we might suppress an update. + */ + sme_cond_update_smcr(vcpu_sme_max_vq(vcpu) - 1, system_supports_fa64(), + system_supports_sme2(), SYS_SMCR_EL2); + + if (vcpu_za_enabled(vcpu)) + __sme_save_state(vcpu_sme_state(vcpu), vcpu_has_sme2(vcpu)); +} + +static void __hyp_sme_restore_host(void) +{ + /* + * The hypervisor refuses to run if we are in streaming mode + * or have ZA enabled so there is no SME specific state to + * restore other than the system registers. + * + * Note that this constrains the PE to the maximum shared VL + * that was discovered, if we wish to use larger VLs this will + * need to be revisited. + */ + sme_cond_update_smcr(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SME]) - 1, + cpus_have_final_cap(ARM64_SME_FA64), + cpus_have_final_cap(ARM64_SME2), SYS_SMCR_EL2); + + write_sysreg_el1(*host_data_ptr(smcr_el1), SYS_SMCR); + + sme_smstop(); +} + +static void fpsimd_sve_flush(struct kvm_vcpu *vcpu) +{ + /* + * If the guest has SME then we need to restore the trap + * controls in SMCR and mode in SVCR in order to ensure that + * traps generated directly to EL1 have the correct types, + * otherwise we can defer until we load the guest state. + */ + if (vcpu_has_sme(vcpu)) { + kvm_hyp_save_fpsimd_host(vcpu); + kvm_sme_configure_traps(vcpu); + + *host_data_ptr(fp_owner) =3D FP_STATE_FREE; + } else { + *host_data_ptr(fp_owner) =3D FP_STATE_HOST_OWNED; + } } =20 static void fpsimd_sve_sync(struct kvm_vcpu *vcpu) @@ -75,7 +132,10 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu) */ isb(); =20 - if (vcpu_has_sve(vcpu)) + if (vcpu_has_sme(vcpu)) + __hyp_sme_save_guest(vcpu); + + if (vcpu_has_sve(vcpu) || vcpu_in_streaming_mode(vcpu)) __hyp_sve_save_guest(vcpu); else __fpsimd_save_state(&vcpu->arch.ctxt.fp_regs); @@ -84,6 +144,9 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu) if (has_fpmr) __vcpu_assign_sys_reg(vcpu, FPMR, read_sysreg_s(SYS_FPMR)); =20 + if (system_supports_sme()) + __hyp_sme_restore_host(); + if (system_supports_sve()) __hyp_sve_restore_host(); else @@ -121,7 +184,7 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vc= pu) { struct kvm_vcpu *host_vcpu =3D hyp_vcpu->host_vcpu; =20 - fpsimd_sve_flush(); + fpsimd_sve_flush(host_vcpu); flush_debug_state(hyp_vcpu); =20 hyp_vcpu->vcpu.arch.ctxt =3D host_vcpu->arch.ctxt; @@ -204,10 +267,9 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_conte= xt *host_ctxt) struct pkvm_hyp_vcpu *hyp_vcpu =3D pkvm_get_loaded_hyp_vcpu(); =20 /* - * KVM (and pKVM) doesn't support SME guests for now, and - * ensures that SME features aren't enabled in pstate when - * loading a vcpu. Therefore, if SME features enabled the host - * is misbehaving. + * KVM (and pKVM) refuses to run if PSTATE.{SM,ZA} are + * enabled. Therefore, if SME features enabled the + * host is misbehaving. 
*/ if (unlikely(system_supports_sme() && read_sysreg_s(SYS_SVCR))) { ret =3D -EINVAL; --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4073533BBBD; Tue, 23 Dec 2025 01:23:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452999; cv=none; b=b12z+sxi0qCkWBhJTNq81wbbW/Hjz31oY2WVUK/PlS0cvOYVgClsiWW9+AqWehG48Grx4q6X4Z/M/674hJSZ3IgQcljxwambezudlvBYNdIlmVwjiqr/cVNvAISd55LR2l36NtvvKBYx9t9O4nwiALWrelHfEPdcFPiP6PxuN5w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766452999; c=relaxed/simple; bh=OMYDhkHK0uSmrb5MCtocsFGAxnEKImTpr9foZYZOt40=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=obG3HgxSjwHJCUAO32I8bsvWwmZeo9SdPlotbkBePUNb2vA2DeiAlKQtn2BCgkM4O7c0Wk6B5q94BrWZRT4Phz3MSR/nm4O8f5Mu/VFjRDWK0IFCqJft9Cv4RpevP0dYb00Jhb0A0dBvnVuhp+hVO3j5QUvE5qunPUKa3kAI1S4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=paagAQx7; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="paagAQx7" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E33F9C16AAE; Tue, 23 Dec 2025 01:23:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766452998; bh=OMYDhkHK0uSmrb5MCtocsFGAxnEKImTpr9foZYZOt40=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=paagAQx7DR6OQLYiKBAyUmfJRqlLlWo0B/WqnSJp4UaOnSVTpvnrpUs9rcihKjl4p ogMcwM5pPsv/N9M33qjhe6cKUTe342MSgPkdFwJpdTym2v4trsvEbHhy95LD360o3R qLBV2NftYFVYZx+uE0s4Ssjtb8ibMpKk4S5S/8wlfAjQ/xO49jLR7ibxWsahklWyeQ No0fZBXZCr2JE8NV+rqTfIEy0M8VaCR2TzaqJ2ZLsTAEjFPwp7UVrzTdwBEwKB3OBz 2Caq0zVLjO5ggXaRtoVNOUcy3BrUbxnGbtNFsIZGfYwCdkudTZyWdqOYV+qx3y/1g8 KyVynz/qGHrnA== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:18 +0000 Subject: [PATCH v9 24/30] KVM: arm64: Handle SME exceptions Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-24-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=6175; i=broonie@kernel.org; h=from:subject:message-id; bh=OMYDhkHK0uSmrb5MCtocsFGAxnEKImTpr9foZYZOt40=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6VqS1KxXhksu/eI5v73oOXVDUthEn9H58Vo YU6O35PgUmJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnulQAKCRAk1otyXVSH 0L1tB/wLo9tdtbFcIkmIi1HKbhc864xEQx6SfbiiClSarEelE69s8aYGZLnVomleYwO9ihwJ/db 
VG0WTa4bvoXtF6vvC9ofVeHdrQJZtvEoByhdb7keq06H9phc40haHz6hM2srQPizn567WOlYnuE +FyxthfbZuW+gRWHGhpKqX/AK1EiSZuLY56NHNtDnUhOd/0RUvzc+RoVt2nKIqJfQ1vUfY16/85 YfOmduvBlsRgsmKKUikWxth3cxX671UNztDNAqr9QyanllLBVD+4IiLeNzvIrlhQXB26F6/cftB xvjMa3/76Jp6tLql+8q+RNXbXvn8fGwBz+NufDvtf3cEz9PE X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB The access control for SME follows the same structure as for the base FP and SVE extensions, with control being via CPACR_ELx.SMEN and CPTR_EL2.TSM mirroring the equivalent FPSIMD and SVE controls in those registers. Add handling for these controls and exceptions mirroring the existing handling for FPSIMD and SVE. Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/kvm/handle_exit.c | 14 ++++++++++++++ arch/arm64/kvm/hyp/include/hyp/switch.h | 11 ++++++----- arch/arm64/kvm/hyp/nvhe/switch.c | 4 +++- arch/arm64/kvm/hyp/vhe/switch.c | 17 ++++++++++++----- 4 files changed, 35 insertions(+), 11 deletions(-) diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index cc7d5d1709cb..1e54d5d722e4 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -237,6 +237,19 @@ static int handle_sve(struct kvm_vcpu *vcpu) return 1; } =20 +/* + * Guest access to SME registers should be routed to this handler only + * when the system doesn't support SME. + */ +static int handle_sme(struct kvm_vcpu *vcpu) +{ + if (guest_hyp_sme_traps_enabled(vcpu)) + return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu)); + + kvm_inject_undefined(vcpu); + return 1; +} + /* * Two possibilities to handle a trapping ptrauth instruction: * @@ -390,6 +403,7 @@ static exit_handle_fn arm_exit_handlers[] =3D { [ESR_ELx_EC_SVC64] =3D handle_svc, [ESR_ELx_EC_SYS64] =3D kvm_handle_sys_reg, [ESR_ELx_EC_SVE] =3D handle_sve, + [ESR_ELx_EC_SME] =3D handle_sme, [ESR_ELx_EC_ERET] =3D kvm_handle_eret, [ESR_ELx_EC_IABT_LOW] =3D kvm_handle_guest_abort, [ESR_ELx_EC_DABT_LOW] =3D kvm_handle_guest_abort, diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/i= nclude/hyp/switch.h index 5bcc72ae48ff..ad88cc7bd5d3 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -69,11 +69,8 @@ static inline void __activate_cptr_traps_nvhe(struct kvm= _vcpu *vcpu) { u64 val =3D CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA; =20 - /* - * Always trap SME since it's not supported in KVM. - * TSM is RES1 if SME isn't implemented. 
- */ - val |=3D CPTR_EL2_TSM; + if (!vcpu_has_sme(vcpu) || !guest_owns_fp_regs()) + val |=3D CPTR_EL2_TSM; =20 if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) val |=3D CPTR_EL2_TZ; @@ -101,6 +98,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_= vcpu *vcpu) val |=3D CPACR_EL1_FPEN; if (vcpu_has_sve(vcpu)) val |=3D CPACR_EL1_ZEN; + if (vcpu_has_sme(vcpu)) + val |=3D CPACR_EL1_SMEN; } =20 if (!vcpu_has_nv(vcpu)) @@ -142,6 +141,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm= _vcpu *vcpu) val &=3D ~CPACR_EL1_FPEN; if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0))) val &=3D ~CPACR_EL1_ZEN; + if (!(SYS_FIELD_GET(CPACR_EL1, SMEN, cptr) & BIT(0))) + val &=3D ~CPACR_EL1_SMEN; =20 if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP)) val |=3D cptr & CPACR_EL1_E0POE; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/swi= tch.c index d3b9ec8a7c28..b2cba7c92b0f 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -181,6 +181,7 @@ static const exit_handler_fn hyp_exit_handlers[] =3D { [ESR_ELx_EC_CP15_32] =3D kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] =3D kvm_hyp_handle_sysreg, [ESR_ELx_EC_SVE] =3D kvm_hyp_handle_fpsimd, + [ESR_ELx_EC_SME] =3D kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] =3D kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] =3D kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] =3D kvm_hyp_handle_dabt_low, @@ -192,7 +193,8 @@ static const exit_handler_fn pvm_exit_handlers[] =3D { [0 ... ESR_ELx_EC_MAX] =3D NULL, [ESR_ELx_EC_SYS64] =3D kvm_handle_pvm_sys64, [ESR_ELx_EC_SVE] =3D kvm_handle_pvm_restricted, - [ESR_ELx_EC_FP_ASIMD] =3D kvm_hyp_handle_fpsimd, + [ESR_ELx_EC_SME] =3D kvm_handle_pvm_restricted, + [ESR_ELx_EC_FP_ASIMD] =3D kvm_handle_pvm_restricted, [ESR_ELx_EC_IABT_LOW] =3D kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] =3D kvm_hyp_handle_dabt_low, [ESR_ELx_EC_WATCHPT_LOW] =3D kvm_hyp_handle_watchpt_low, diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switc= h.c index 9984c492305a..8449004bc24e 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -458,22 +458,28 @@ static bool kvm_hyp_handle_cpacr_el1(struct kvm_vcpu = *vcpu, u64 *exit_code) return true; } =20 -static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code) +static bool kvm_hyp_handle_vec_cr_el2(struct kvm_vcpu *vcpu, u64 *exit_cod= e) { u32 sysreg =3D esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu)); =20 if (!vcpu_has_nv(vcpu)) return false; =20 - if (sysreg !=3D SYS_ZCR_EL2) + switch (sysreg) { + case SYS_ZCR_EL2: + case SYS_SMCR_EL2: + break; + default: return false; + } =20 if (guest_owns_fp_regs()) return false; =20 /* - * ZCR_EL2 traps are handled in the slow path, with the expectation - * that the guest's FP context has already been loaded onto the CPU. + * ZCR_EL2 and SMCR_EL2 traps are handled in the slow path, + * with the expectation that the guest's FP context has + * already been loaded onto the CPU. * * Load the guest's FP context and unconditionally forward to the * slow path for handling (i.e. return false). @@ -493,7 +499,7 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *= vcpu, u64 *exit_code) if (kvm_hyp_handle_cpacr_el1(vcpu, exit_code)) return true; =20 - if (kvm_hyp_handle_zcr_el2(vcpu, exit_code)) + if (kvm_hyp_handle_vec_cr_el2(vcpu, exit_code)) return true; =20 return kvm_hyp_handle_sysreg(vcpu, exit_code); @@ -522,6 +528,7 @@ static const exit_handler_fn hyp_exit_handlers[] =3D { [0 ... 
ESR_ELx_EC_MAX] =3D NULL, [ESR_ELx_EC_CP15_32] =3D kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] =3D kvm_hyp_handle_sysreg_vhe, + [ESR_ELx_EC_SME] =3D kvm_hyp_handle_fpsimd, [ESR_ELx_EC_SVE] =3D kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] =3D kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] =3D kvm_hyp_handle_iabt_low, --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 264C533F397; Tue, 23 Dec 2025 01:23:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453003; cv=none; b=XZQlPuZTpnF/nl+WGSL+byHy2/o0x5Aj1HOyTUvqmnT6+CZ3fmJY6b3NxUtedJXcCRK4WsyFqnX/ok8pEaVEWvekOAQ67kRgsa6+jsQxLqOUJ1DNJFmPIhBSq9n2BozA2BMWRV5nODn05VEBIvhMfJx4PdbLGWT3nM5f2zcrzCA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453003; c=relaxed/simple; bh=FzRi1bctv9rrqNEWwOJp3F+K+iFD0wwTwXRP36rhp+Y=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=CM1gz3Xxm4U7G6rWmVWS0YDseQWqUotvnxgIaTM7OeQPP0oxwnEysru5V8qZOmzZ93UU4/9eEiWxtEtHuoKXRmtzdnxFyMTUHdUrrqg9n5EgaDjcEn+HoI5YqBR27VWMefEASDE17iRsk6OuEgtxJ82SbXWoiTO2vQMppSp9GR8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=OdctPKNJ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="OdctPKNJ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2E997C4CEF1; Tue, 23 Dec 2025 01:23:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766453003; bh=FzRi1bctv9rrqNEWwOJp3F+K+iFD0wwTwXRP36rhp+Y=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=OdctPKNJwOjWJ5tecGrza74VtWHJA9rrjCwOhN2FX4CdIo9tRikRy644wetxVooIN IxOkRdl83h6vyyS401NfoF9BJGXc4LHcF2cIdR7OwPJOKDb9vjzS+WsXCW7zC+yp/b hemm1WLXEpOfp2qa/Ww+RBfYFNVM70aWmqZ3E056kOqKWXvtAd6u9RbgjtC7wV3aR1 SLV6QBFuJ5owkYy8Mqr6cOv5j5OP1hsKqei9pm95vSiB908pvlkD48lgeEOgV3KJdQ emF/QSPj6QoZAB4EkayZIKNkYwuIHcf0Z/83lALqupB9Lrew7oh2NnBKgsw/h5DCUQ OnkhLeKZi411A== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:19 +0000 Subject: [PATCH v9 25/30] KVM: arm64: Expose SME to nested guests Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-25-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=1408; i=broonie@kernel.org; h=from:subject:message-id; bh=FzRi1bctv9rrqNEWwOJp3F+K+iFD0wwTwXRP36rhp+Y=; 
b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6Vf+MTqYjq4kXe4kbr9Zy9gweDKTkN/+6I+ 2AscH+hGPOJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnulQAKCRAk1otyXVSH 0K47B/46AKQNCiAhn5hTkguKHo5oVF1pbyCaBey9vSjyQIbzDbvhCfKZqmL6Sdu/DMwtf98exB1 RD1+t4wd3yU5XYMsyE8z5/zhRYyKKnOpz97jMwjJwkZ1YS7TYLQ4WmUhvaSV+WkLPEJkbCsSXIi y59GMQwrVxyom7Wui2mpYgm2JkqomZOW10XLIPBi8G60NgFjH6q7LGsUDBe7lucid92uDTvCkGe bh5+0qgM2Bf0ecCA5VCfMKhgCY6yAj7SPuzOWWApXTABtJHLH7MPC/b6EWv6h9PpwbdWQy8YHTf ezKax5T3S18GNdYUmZZdkUApkD2VmxucgyzJ6bwLrmqesJhe X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB With support for context switching SME state in place allow access to SME in nested guests. The SME floating point state is handled along with all the other floating point state, SME specific floating point exceptions are directed into the same handlers as other floating point exceptions with NV specific handling for the vector lengths already in place. TPIDR2_EL0 is context switched along with the other TPIDRs as part of the main guest register context switch. SME priority support is currently masked from all guests including nested ones. Signed-off-by: Mark Brown Reviewed-by: Fuad Tabba --- arch/arm64/kvm/nested.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c index cdeeb8f09e72..a0967ca8c61e 100644 --- a/arch/arm64/kvm/nested.c +++ b/arch/arm64/kvm/nested.c @@ -1534,14 +1534,13 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 v= al) break; =20 case SYS_ID_AA64PFR1_EL1: - /* Only support BTI, SSBS, CSV2_frac */ + /* Only support BTI, SME, SSBS, CSV2_frac */ val &=3D ~(ID_AA64PFR1_EL1_PFAR | ID_AA64PFR1_EL1_MTEX | ID_AA64PFR1_EL1_THE | ID_AA64PFR1_EL1_GCS | ID_AA64PFR1_EL1_MTE_frac | ID_AA64PFR1_EL1_NMI | - ID_AA64PFR1_EL1_SME | ID_AA64PFR1_EL1_RES0 | ID_AA64PFR1_EL1_MPAM_frac | ID_AA64PFR1_EL1_MTE); --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6BABA342510; Tue, 23 Dec 2025 01:23:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453007; cv=none; b=QeDARiUUr4N3NKPAI6VQsEfHmqFEWKDJD5n8IlaFw7eNO8A/S0NabdestUQxAwm5WB7khcA2a/ptSEBT050sEAqIj9MxQ+jNfUHanhhapBOUdl4b+9WgUWL9AYBuRuPs394CqMPfUhi7c2i28RIs0WZXhwCEtpUr7jW+5gjtdYs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453007; c=relaxed/simple; bh=Lam9qgvn3MRkeKtBMb6ydbJh/S2lrCOZ/DjzdjxRue8=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=BMTRuVcxVOuBdPm6QtwaFFJkPVXUeKV0TK/f0FGBvahUcK70bNdQDGY7y4HKDJlmp5j8rEMSICHrFkGpkfyB4xcyORH0y9Yun0OmqS0B7Sf4+rnUWFnHtSjNhxqqlO27EHZDPA+HIZGYIlBqdBxdCCYcR8862oKXgSWiViA20xE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=GisFL2VS; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="GisFL2VS" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6E1E8C4CEF1; Tue, 23 Dec 2025 01:23:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766453007; 
From: Mark Brown
Date: Tue, 23 Dec 2025 01:21:20 +0000
Subject: [PATCH v9 26/30] KVM: arm64: Provide interface for configuring and enabling SME for guests
Message-Id: <20251223-kvm-arm64-sme-v9-26-8be3867cb883@kernel.org>
References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org>
In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org>

Since SME requires configuration of a vector length in order to know the size of both the streaming mode SVE state and the ZA array, we implement a capability for it and require that it be enabled and finalized before the SME specific state can be accessed, similarly to SVE.

Due to the overlap with sizing the SVE state we finalise both SVE and SME with a single finalization, preventing any further changes to the SVE and SME configuration once KVM_ARM_VCPU_VEC (an alias for _VCPU_SVE) has been finalised. This is not a thing of great elegance but it ensures that we never have a state where one of SVE or SME is finalised and the other not, avoiding complexity.

SME is supported for normal and protected guests.
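The expected userspace flow mirrors the existing SVE sequence; the sketch below assumes that ordering (enable the feature at vcpu init, optionally constrain vector lengths via the KVM_REG_ARM64_SME_VLS pseudo-register, then finalize) and is illustrative rather than definitive:

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int vmm_enable_sme(int vcpu_fd, struct kvm_vcpu_init *init)
  {
  	int what = KVM_ARM_VCPU_VEC;	/* alias of KVM_ARM_VCPU_SVE */

  	/* Request SME alongside whatever else the VMM already enables. */
  	init->features[KVM_ARM_VCPU_SME / 32] |= 1U << (KVM_ARM_VCPU_SME % 32);
  	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init))
  		return -1;

  	/* A KVM_SET_ONE_REG of KVM_REG_ARM64_SME_VLS could go here to
  	 * restrict the streaming vector lengths before finalization. */

  	return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &what);
  }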
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 12 +++- arch/arm64/include/uapi/asm/kvm.h | 1 + arch/arm64/kvm/arm.c | 10 ++++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 76 +++++++++++++++++++----- arch/arm64/kvm/hyp/nvhe/sys_regs.c | 6 ++ arch/arm64/kvm/reset.c | 116 +++++++++++++++++++++++++++++++--= ---- include/uapi/linux/kvm.h | 1 + 7 files changed, 189 insertions(+), 33 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index bceaf0608d75..011debfc1afd 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -39,7 +39,7 @@ =20 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS =20 -#define KVM_VCPU_MAX_FEATURES 9 +#define KVM_VCPU_MAX_FEATURES 10 #define KVM_VCPU_VALID_FEATURES (BIT(KVM_VCPU_MAX_FEATURES) - 1) =20 #define KVM_REQ_SLEEP \ @@ -82,6 +82,7 @@ extern unsigned int __ro_after_init kvm_host_max_vl[ARM64= _VEC_MAX]; DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); =20 int __init kvm_arm_init_sve(void); +int __init kvm_arm_init_sme(void); =20 u32 __attribute_const__ kvm_target_cpu(void); void kvm_reset_vcpu(struct kvm_vcpu *vcpu); @@ -1149,7 +1150,14 @@ struct kvm_vcpu_arch { __size_ret; \ }) =20 -#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_= vl[ARM64_VEC_SVE]) +#define vcpu_sve_state_size(vcpu) ({ \ + unsigned int __max_vl; \ + \ + __max_vl =3D max((vcpu)->arch.max_vl[ARM64_VEC_SVE], \ + (vcpu)->arch.max_vl[ARM64_VEC_SME]); \ + \ + sve_state_size_from_vl(__max_vl); \ +}) =20 #define vcpu_sme_state(vcpu) (kern_hyp_va((vcpu)->arch.sme_state)) =20 diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/as= m/kvm.h index 9a19cc58d227..b4be424e4230 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -106,6 +106,7 @@ struct kvm_regs { #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication= */ #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ #define KVM_ARM_VCPU_HAS_EL2_E2H0 8 /* Limit NV support to E2H RES0 */ +#define KVM_ARM_VCPU_SME 9 /* enable SME for this CPU */ =20 /* * An alias for _SVE since we finalize VL configuration for both SVE and S= ME diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 4f80da0c0d1d..7de7b497f74f 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -402,6 +402,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long = ext) case KVM_CAP_ARM_SVE: r =3D system_supports_sve(); break; + case KVM_CAP_ARM_SME: + r =3D system_supports_sme(); + break; case KVM_CAP_ARM_PTRAUTH_ADDRESS: case KVM_CAP_ARM_PTRAUTH_GENERIC: r =3D kvm_has_full_ptr_auth(); @@ -1456,6 +1459,9 @@ static unsigned long system_supported_vcpu_features(v= oid) if (!system_supports_sve()) clear_bit(KVM_ARM_VCPU_SVE, &features); =20 + if (!system_supports_sme()) + clear_bit(KVM_ARM_VCPU_SME, &features); + if (!kvm_has_full_ptr_auth()) { clear_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features); clear_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features); @@ -2878,6 +2884,10 @@ static __init int kvm_arm_init(void) if (err) return err; =20 + err =3D kvm_arm_init_sme(); + if (err) + return err; + err =3D kvm_arm_vmid_alloc_init(); if (err) { kvm_err("Failed to initialize VMID allocator.\n"); diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index b656449dff69..30ee9f371b0d 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -148,10 +148,6 @@ static int pkvm_check_pvm_cpu_features(struct kvm_vcpu= *vcpu) !kvm_has_feat(kvm, 
ID_AA64PFR0_EL1, AdvSIMD, IMP)) return -EINVAL; =20 - /* No SME support in KVM right now. Check to catch if it changes. */ - if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP)) - return -EINVAL; - return 0; } =20 @@ -377,6 +373,11 @@ static void pkvm_init_features_from_host(struct pkvm_h= yp_vm *hyp_vm, const struc kvm->arch.flags |=3D host_arch_flags & BIT(KVM_ARCH_FLAG_GUEST_HAS_SVE); } =20 + if (kvm_pvm_ext_allowed(KVM_CAP_ARM_SME)) { + set_bit(KVM_ARM_VCPU_SME, allowed_features); + kvm->arch.flags |=3D host_arch_flags & BIT(KVM_ARCH_FLAG_GUEST_HAS_SME); + } + bitmap_and(kvm->arch.vcpu_features, host_kvm->arch.vcpu_features, allowed_features, KVM_VCPU_MAX_FEATURES); } @@ -399,6 +400,18 @@ static void unpin_host_sve_state(struct pkvm_hyp_vcpu = *hyp_vcpu) sve_state + vcpu_sve_state_size(&hyp_vcpu->vcpu)); } =20 +static void unpin_host_sme_state(struct pkvm_hyp_vcpu *hyp_vcpu) +{ + void *sme_state; + + if (!vcpu_has_feature(&hyp_vcpu->vcpu, KVM_ARM_VCPU_SME)) + return; + + sme_state =3D kern_hyp_va(hyp_vcpu->vcpu.arch.sme_state); + hyp_unpin_shared_mem(sme_state, + sme_state + vcpu_sme_state_size(&hyp_vcpu->vcpu)); +} + static void unpin_host_vcpus(struct pkvm_hyp_vcpu *hyp_vcpus[], unsigned int nr_vcpus) { @@ -412,6 +425,7 @@ static void unpin_host_vcpus(struct pkvm_hyp_vcpu *hyp_= vcpus[], =20 unpin_host_vcpu(hyp_vcpu->host_vcpu); unpin_host_sve_state(hyp_vcpu); + unpin_host_sme_state(hyp_vcpu); } } =20 @@ -438,23 +452,35 @@ static void init_pkvm_hyp_vm(struct kvm *host_kvm, st= ruct pkvm_hyp_vm *hyp_vm, mmu->pgt =3D &hyp_vm->pgt; } =20 -static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_v= cpu *host_vcpu) +static int pkvm_vcpu_init_vec(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_v= cpu *host_vcpu) { struct kvm_vcpu *vcpu =3D &hyp_vcpu->vcpu; - unsigned int sve_max_vl; - size_t sve_state_size; - void *sve_state; + unsigned int sve_max_vl, sme_max_vl; + size_t sve_state_size, sme_state_size; + void *sve_state, *sme_state; int ret =3D 0; =20 - if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) { + if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE) && + !vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) { vcpu_clear_flag(vcpu, VCPU_VEC_FINALIZED); return 0; } =20 /* Limit guest vector length to the maximum supported by the host. 
*/ - sve_max_vl =3D min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]), - kvm_host_max_vl[ARM64_VEC_SVE]); - sve_state_size =3D sve_state_size_from_vl(sve_max_vl); + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) + sve_max_vl =3D min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]), + kvm_host_max_vl[ARM64_VEC_SVE]); + else + sve_max_vl =3D 0; + + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) + sme_max_vl =3D min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SME]), + kvm_host_max_vl[ARM64_VEC_SME]); + else + sme_max_vl =3D 0; + + /* We need SVE storage for the larger of normal or streaming mode */ + sve_state_size =3D sve_state_size_from_vl(max(sve_max_vl, sme_max_vl)); sve_state =3D kern_hyp_va(READ_ONCE(host_vcpu->arch.sve_state)); =20 if (!sve_state || !sve_state_size) { @@ -466,12 +492,36 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *h= yp_vcpu, struct kvm_vcpu *h if (ret) goto err; =20 + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) { + sme_state_size =3D sme_state_size_from_vl(sme_max_vl, + vcpu_has_sme2(vcpu)); + sme_state =3D kern_hyp_va(READ_ONCE(host_vcpu->arch.sme_state)); + + if (!sme_state || !sme_state_size) { + ret =3D -EINVAL; + goto err_sve_mapped; + } + + ret =3D hyp_pin_shared_mem(sme_state, sme_state + sme_state_size); + if (ret) + goto err_sve_mapped; + } else { + sme_state =3D 0; + } + vcpu->arch.sve_state =3D sve_state; vcpu->arch.max_vl[ARM64_VEC_SVE] =3D sve_max_vl; =20 + vcpu->arch.sme_state =3D sme_state; + vcpu->arch.max_vl[ARM64_VEC_SME] =3D sme_max_vl; + return 0; + +err_sve_mapped: + hyp_unpin_shared_mem(sve_state, sve_state + sve_state_size); err: clear_bit(KVM_ARM_VCPU_SVE, vcpu->kvm->arch.vcpu_features); + clear_bit(KVM_ARM_VCPU_SME, vcpu->kvm->arch.vcpu_features); return ret; } =20 @@ -501,7 +551,7 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp= _vcpu, if (ret) goto done; =20 - ret =3D pkvm_vcpu_init_sve(hyp_vcpu, host_vcpu); + ret =3D pkvm_vcpu_init_vec(hyp_vcpu, host_vcpu); done: if (ret) unpin_host_vcpu(host_vcpu); diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/s= ys_regs.c index 3108b5185c20..40127ba86335 100644 --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -66,6 +66,11 @@ static bool vm_has_ptrauth(const struct kvm *kvm) kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC); } =20 +static bool vm_has_sme(const struct kvm *kvm) +{ + return system_supports_sme() && kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_SM= E); +} + static bool vm_has_sve(const struct kvm *kvm) { return system_supports_sve() && kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_SV= E); @@ -102,6 +107,7 @@ static const struct pvm_ftr_bits pvmid_aa64pfr0[] =3D { }; =20 static const struct pvm_ftr_bits pvmid_aa64pfr1[] =3D { + MAX_FEAT_FUNC(ID_AA64PFR1_EL1, SME, SME2, vm_has_sme), MAX_FEAT(ID_AA64PFR1_EL1, BT, IMP), MAX_FEAT(ID_AA64PFR1_EL1, SSBS, SSBS2), MAX_FEAT_ENUM(ID_AA64PFR1_EL1, MTE_frac, NI), diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index a8684a1346ec..e6dc04267cbb 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -76,6 +76,34 @@ int __init kvm_arm_init_sve(void) return 0; } =20 +int __init kvm_arm_init_sme(void) +{ + if (system_supports_sme()) { + kvm_max_vl[ARM64_VEC_SME] =3D sme_max_virtualisable_vl(); + kvm_host_max_vl[ARM64_VEC_SME] =3D sme_max_vl(); + kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SME]) =3D kvm_host_max_vl[ARM64_V= EC_SME]; + + /* + * The get_sve_reg()/set_sve_reg() ioctl interface will need + * to be extended with multiple register slice support in + * 
order to support vector lengths greater than + * VL_ARCH_MAX: + */ + if (WARN_ON(kvm_max_vl[ARM64_VEC_SME] > VL_ARCH_MAX)) + kvm_max_vl[ARM64_VEC_SME] =3D VL_ARCH_MAX; + + /* + * Don't even try to make use of vector lengths that + * aren't available on all CPUs, for now: + */ + if (kvm_max_vl[ARM64_VEC_SME] < sme_max_vl()) + pr_warn("KVM: SME vector length for guests limited to %u bytes\n", + kvm_max_vl[ARM64_VEC_SME]); + } + + return 0; +} + static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) { vcpu->arch.max_vl[ARM64_VEC_SVE] =3D kvm_max_vl[ARM64_VEC_SVE]; @@ -88,42 +116,86 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) set_bit(KVM_ARCH_FLAG_GUEST_HAS_SVE, &vcpu->kvm->arch.flags); } =20 +static void kvm_vcpu_enable_sme(struct kvm_vcpu *vcpu) +{ + vcpu->arch.max_vl[ARM64_VEC_SME] =3D kvm_max_vl[ARM64_VEC_SME]; + + /* + * Userspace can still customize the vector lengths by writing + * KVM_REG_ARM64_SME_VLS. Allocation is deferred until + * kvm_arm_vcpu_finalize(), which freezes the configuration. + */ + set_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &vcpu->kvm->arch.flags); +} + /* - * Finalize vcpu's maximum SVE vector length, allocating - * vcpu->arch.sve_state as necessary. + * Finalize vcpu's maximum vector lengths, allocating + * vcpu->arch.sve_state and vcpu->arch.sme_state as necessary. */ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { - void *buf; + void *sve_state, *sme_state; unsigned int vl; - size_t reg_sz; int ret; =20 - vl =3D vcpu->arch.max_vl[ARM64_VEC_SVE]; - /* * Responsibility for these properties is shared between * kvm_arm_init_sve(), kvm_vcpu_enable_sve() and * set_sve_vls(). Double-check here just to be sure: */ - if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() || - vl > VL_ARCH_MAX)) - return -EIO; + if (vcpu_has_sve(vcpu)) { + vl =3D vcpu->arch.max_vl[ARM64_VEC_SVE]; + if (WARN_ON(!sve_vl_valid(vl) || + vl > sve_max_virtualisable_vl() || + vl > VL_ARCH_MAX)) + return -EIO; + } =20 - reg_sz =3D vcpu_sve_state_size(vcpu); - buf =3D kzalloc(reg_sz, GFP_KERNEL_ACCOUNT); - if (!buf) + /* Similarly for SME */ + if (vcpu_has_sme(vcpu)) { + vl =3D vcpu->arch.max_vl[ARM64_VEC_SME]; + if (WARN_ON(!sve_vl_valid(vl) || + vl > sme_max_virtualisable_vl() || + vl > VL_ARCH_MAX)) + return -EIO; + } + + sve_state =3D kzalloc(vcpu_sve_state_size(vcpu), GFP_KERNEL_ACCOUNT); + if (!sve_state) return -ENOMEM; =20 - ret =3D kvm_share_hyp(buf, buf + reg_sz); - if (ret) { - kfree(buf); - return ret; + ret =3D kvm_share_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); + if (ret) + goto err_sve_alloc; + + if (vcpu_has_sme(vcpu)) { + sme_state =3D kzalloc(vcpu_sme_state_size(vcpu), + GFP_KERNEL_ACCOUNT); + if (!sme_state) { + ret =3D -ENOMEM; + goto err_sve_map; + } + + ret =3D kvm_share_hyp(sme_state, + sme_state + vcpu_sme_state_size(vcpu)); + if (ret) + goto err_sme_alloc; + } else { + sme_state =3D NULL; } -=09 - vcpu->arch.sve_state =3D buf; + + vcpu->arch.sve_state =3D sve_state; + vcpu->arch.sme_state =3D sme_state; vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; + +err_sme_alloc: + kfree(sme_state); +err_sve_map: + kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); +err_sve_alloc: + kfree(sve_state); + return ret; } =20 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) @@ -153,12 +225,16 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) { void *sve_state =3D vcpu->arch.sve_state; + void *sme_state =3D vcpu->arch.sme_state; =20 kvm_unshare_hyp(vcpu, vcpu + 1); 
if (sve_state) kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); kfree(sve_state); free_page((unsigned long)vcpu->arch.ctxt.vncr_array); + if (sme_state) + kvm_unshare_hyp(sme_state, sme_state + vcpu_sme_state_size(vcpu)); + kfree(sme_state); kfree(vcpu->arch.vncr_tlb); kfree(vcpu->arch.ccsidr); } @@ -167,6 +243,8 @@ static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); + if (vcpu_has_sme(vcpu)) + memset(vcpu->arch.sme_state, 0, vcpu_sme_state_size(vcpu)); } =20 /** @@ -206,6 +284,8 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) + kvm_vcpu_enable_sme(vcpu); } else { kvm_vcpu_reset_vec(vcpu); } diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index dddb781b0507..d9e068db3b73 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -974,6 +974,7 @@ struct kvm_enable_cap { #define KVM_CAP_GUEST_MEMFD_FLAGS 244 #define KVM_CAP_ARM_SEA_TO_USER 245 #define KVM_CAP_S390_USER_OPEREXEC 246 +#define KVM_CAP_ARM_SME 247 =20 struct kvm_irq_routing_irqchip { __u32 irqchip; --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DF3DB3446C5; Tue, 23 Dec 2025 01:23:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453012; cv=none; b=ojwXY3urdtc9DlQ0AfhyYucvZU01kPEA6wmZk5WixyxI2JfB0QkuXCco5+0oCbjiqy66N7nNp6SQMuBoE+XEtlXoW4RbH8cAm6ynlHUnin/pc8R3yyVmmX0LFfd56z+TaF7WQp+MIEgryhP1CgUvQs07xKu49jsXaORter136Os= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453012; c=relaxed/simple; bh=Z5j3l5hq06OVglsz6W9Xs4iwgZIZ/lq9hxI6ABS0IYM=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=c2aaX01ACAS15bsgQHVOEtU2jT34AiqreQi1ISZtoG/bxSYogxl3xgMjdQi6cfnok6bUinAdwfY69MOP1F7zs179R7hUCPG2fLJCMilxd4vyZAcv4oIJtbZ77c/j9+Soj5iksgeDcVfKW1x8cieTUUzVUA85XzuBZyw/BgrQabE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=q+qxLBd1; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="q+qxLBd1" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B13B3C19423; Tue, 23 Dec 2025 01:23:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766453011; bh=Z5j3l5hq06OVglsz6W9Xs4iwgZIZ/lq9hxI6ABS0IYM=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=q+qxLBd1+fBvecWLmFic4UBW2LgCAzlPUr9zb0EF80h8QkPP4HPOhbO1rYiVYjGw/ Jw/yDNNQ9ERhcqDPInp5hXfwMU9jFrWbRvCjExBqhyYrmwgs7bQe1ph9AGup0JCjap osv3tq9df9wVDVMyI0Fd8zzQxG2Bk0SKNgPTsznaTCF0131x2EfWFuCfSNMx6iLjkz Df/sZea2lE3kMBvsMpupIkq0UGmU8kZ0jKCUkfHTTdbMLq0mFI3sJ8IH8HJ2ss9h/H oSKwuE8qVSTBLzmXbeIB2XWAQNHzi7jsoBZSkw3QToRd8h+fUJL0lQOAXzNLRroh2X OEUuStJ92mzHA== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:21 +0000 Subject: [PATCH v9 27/30] KVM: arm64: selftests: Remove spurious check for single bit safe values Precedence: bulk 
X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-27-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=1169; i=broonie@kernel.org; h=from:subject:message-id; bh=Z5j3l5hq06OVglsz6W9Xs4iwgZIZ/lq9hxI6ABS0IYM=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6XuZqDZHAD1EbYrK8wn0phiB+1DdVn6PblO QAM9RR2+ICJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnulwAKCRAk1otyXVSH 0JJCB/4y34p+VTk0tkbCqrPh1qJK7fJsyrMK/+p5SaTDCsoRuuE4RyMJ5RqomOK0FgfnkdBHk2P va9Xud+cf24Dezk4eYeg6HQaJtZRQnK1AnSqKOMOfxx5fqmK8wljWyR/QII8LA4UrQWX7vGTpYp t9Wmb+EZUJoVc14h9Mfmqcum/aUjTjiEm4RnN1HO+PdnxU5p3HRXlQfB0g1RRPSvkzwr3w+Bish 0erE71BlCyTFzonFUO3SH+5UhD1w/bCILJybHOzBsnDEcw/jBIyIVUI7eKq9CJ7lEMfhxExTe2M bTitMkcm7AoeoGDCM1CjdBli5WAddd0qICzdghkryQSjmMmN X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB get_safe_value() currently asserts that bitfields it is generating a safe value for must be more than one bit wide but in actual fact it should always be possible to generate a safe value to write to a bitfield even if it is just the current value and the function correctly handles that. Remove the assert. 
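To illustrate the argument with a sketch (an assumption about unsigned
FTR_LOWER_SAFE semantics, not a copy of get_safe_value()): any value no
greater than the current one is accepted, so even a single bit field always
has a safe value, at worst the value it already holds.

/* Hypothetical helper for an unsigned FTR_LOWER_SAFE field. */
static uint64_t lower_safe_example(uint64_t cur)
{
	/* 1-bit field: 1 -> 0 is safe, 0 -> 0 (the current value) is safe. */
	return cur ? cur - 1 : cur;
}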
Fixes: bf09ee918053e ("KVM: arm64: selftests: Remove ARM64_FEATURE_FIELD_BI= TS and its last user") Signed-off-by: Mark Brown Reviewed-by: Ben Horgan --- tools/testing/selftests/kvm/arm64/set_id_regs.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testin= g/selftests/kvm/arm64/set_id_regs.c index c4815d365816..322cd13b9352 100644 --- a/tools/testing/selftests/kvm/arm64/set_id_regs.c +++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c @@ -270,8 +270,6 @@ uint64_t get_safe_value(const struct reg_ftr_bits *ftr_= bits, uint64_t ftr) { uint64_t ftr_max =3D ftr_bits->mask >> ftr_bits->shift; =20 - TEST_ASSERT(ftr_max > 1, "This test doesn't support single bit features"); - if (ftr_bits->sign =3D=3D FTR_UNSIGNED) { switch (ftr_bits->type) { case FTR_EXACT: --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3AFB034679F; Tue, 23 Dec 2025 01:23:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453016; cv=none; b=SxkrUbs7WRLul2Vi0QRCC1YkjKdn3F/1c4b4f9oAeeLU/7tyCibWQc1uQXbGcKfAtd9oOBZj9BDtFig7T/7KgnWXdIBabrXe1iZfLgl9u+D7FLQPpSx7d4TngJJteUUSGGuef2Zg3CtK9FHSpRO+EnejN031hXGQaVewe2SyFCY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453016; c=relaxed/simple; bh=jbLNcvQ8qxr+FL0J4Bab/tdTijCeYtU2EXCu11Jdphc=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=RjWoOBGCtTzrKNsweNvOlaQ63RM82pfSGX9pHAlZEJKAkfGb4JEouDUQlaHOZY3ZGNUhh+fA9bvpniP2s9m5ZKoxR7x5+GeRVUrYLxtBaEg3pL+oHaYPj+ff3bQgSNRCTJOqLC/FrNKhpwmyZv6JhBWrNSowNckgpjPAqfzzZwI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=EJ+BbwVJ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="EJ+BbwVJ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id ED853C4CEF1; Tue, 23 Dec 2025 01:23:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766453015; bh=jbLNcvQ8qxr+FL0J4Bab/tdTijCeYtU2EXCu11Jdphc=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=EJ+BbwVJ5m3VApHWOcACnEn/Jtbrhz3GXdEHzeekIga3vryGLjkOdmsZwWXvDIOvA BBxgwyPU7COryEy6mVlRZlH1M1GySUOo74ZpZV4vzwnepOJpu3u40EyJeDx5meyyjO t+BakPXpQ/lA/tygjFk+NPGWxCqZXs46AeIkBmL4wL5FQb4EFwmrsu3b07okc/L3Hf QH+5Z2LzInok83O39wAR0otC7kQm9VC9M3uW7HuLvCts3rYVg5ID7FmoN+EXUUCyca MTjafTAzlcXP2WyAS9IdOPmbaOtX/H7zzjRsiT8gDGaEJfou0pOeMmagkWcuVg4qRS qfylhUpbLObJw== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:22 +0000 Subject: [PATCH v9 28/30] KVM: arm64: selftests: Skip impossible invalid value tests Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-28-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will 
Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=3644; i=broonie@kernel.org; h=from:subject:message-id; bh=jbLNcvQ8qxr+FL0J4Bab/tdTijCeYtU2EXCu11Jdphc=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6YjlDBX+enbPVACfxp+aByWhyMVRBBrMoHp yKUFjTo7kmJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnumAAKCRAk1otyXVSH 0PAdB/wKnIGQ0YWaL2F/ONZhaNf7iTxCd4x4MoDNmNeqoCkxjB7QvAl3CDHLRjbPZXLqxliOVWW n9wi9014gJZ/JD0umqy8ZetSmzKsOQw1mmMhW3nd22zLuLKufd/KC6upoXXMBq56T6yiP5FWnoU J4qeY8twwIzjd/zVmXaiK9ZE4RIbMhyhLi4USL89FG08YCzCFBK0ginrTgJ3h255gW6KCzuVm5E fOMzXi83SHaSprSXVV2EFXxUsBZXwsyesJAMt/3GSB0xyLcYIlFfPiOI0bwUXJP/tKwj/sSSFbw OL2lN0rqRDQ57wtw06kB9PzKMie3b7CaLj2SW44lWrTJRErt X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB The set_id_regs test currently assumes that there will always be invalid values available in bitfields for it to generate but this may not be the case if the architecture has defined meanings for every possible value for the bitfield. An assert added in commit bf09ee918053e ("KVM: arm64: selftests: Remove ARM64_FEATURE_FIELD_BITS and its last user") refuses to run for single bit fields which will show the issue most readily but there is no reason wider ones can't show the same issue. Rework the tests for invalid value to check if an invalid value can be generated and skip the test if not, removing the assert. Signed-off-by: Mark Brown --- tools/testing/selftests/kvm/arm64/set_id_regs.c | 58 ++++++++++++++++++++-= ---- 1 file changed, 46 insertions(+), 12 deletions(-) diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testin= g/selftests/kvm/arm64/set_id_regs.c index 322cd13b9352..641194c5005a 100644 --- a/tools/testing/selftests/kvm/arm64/set_id_regs.c +++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c @@ -318,11 +318,12 @@ uint64_t get_safe_value(const struct reg_ftr_bits *ft= r_bits, uint64_t ftr) } =20 /* Return an invalid value to a given ftr_bits an ftr value */ -uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t f= tr) +uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t f= tr, + bool *skip) { uint64_t ftr_max =3D ftr_bits->mask >> ftr_bits->shift; =20 - TEST_ASSERT(ftr_max > 1, "This test doesn't support single bit features"); + *skip =3D false; =20 if (ftr_bits->sign =3D=3D FTR_UNSIGNED) { switch (ftr_bits->type) { @@ -330,42 +331,72 @@ uint64_t get_invalid_value(const struct reg_ftr_bits = *ftr_bits, uint64_t ftr) ftr =3D max((uint64_t)ftr_bits->safe_val + 1, ftr + 1); break; case FTR_LOWER_SAFE: + if (ftr =3D=3D ftr_max) + *skip =3D true; ftr++; break; case FTR_HIGHER_SAFE: + if (ftr =3D=3D 0) + *skip =3D true; ftr--; break; case FTR_HIGHER_OR_ZERO_SAFE: - if (ftr =3D=3D 0) + switch (ftr) { + case 0: ftr =3D ftr_max; - else + break; + case 1: + *skip =3D true; + break; + default: ftr--; - break; + break; + } default: + *skip =3D true; break; } } else if (ftr !=3D ftr_max) { switch (ftr_bits->type) { case FTR_EXACT: ftr =3D max((uint64_t)ftr_bits->safe_val + 1, ftr + 1); + if (ftr > ftr_max) + *skip =3D true; break; case FTR_LOWER_SAFE: - ftr++; + if (ftr =3D=3D ftr_max) + *skip =3D true; + else + ftr++; break; case 
FTR_HIGHER_SAFE: - ftr--; - break; - case FTR_HIGHER_OR_ZERO_SAFE: if (ftr =3D=3D 0) - ftr =3D ftr_max - 1; + *skip =3D true; else ftr--; break; + case FTR_HIGHER_OR_ZERO_SAFE: + switch (ftr) { + case 0: + if (ftr_max > 1) + ftr =3D ftr_max - 1; + else + *skip =3D true; + break; + case 1: + *skip =3D true; + break; + default: + ftr--; + break; + break; + } default: + *skip =3D true; break; } } else { - ftr =3D 0; + *skip =3D true; } =20 return ftr; @@ -400,12 +431,15 @@ static void test_reg_set_fail(struct kvm_vcpu *vcpu, = uint64_t reg, uint8_t shift =3D ftr_bits->shift; uint64_t mask =3D ftr_bits->mask; uint64_t val, old_val, ftr; + bool skip; int r; =20 val =3D vcpu_get_reg(vcpu, reg); ftr =3D (val & mask) >> shift; =20 - ftr =3D get_invalid_value(ftr_bits, ftr); + ftr =3D get_invalid_value(ftr_bits, ftr, &skip); + if (skip) + return; =20 old_val =3D val; ftr <<=3D shift; --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2D182346E73; Tue, 23 Dec 2025 01:23:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453020; cv=none; b=fZYAuMRVbOhj2x/iQQHbYZBF4Mn3GtkdcsIb9RibPqQGiD2hcbLYfAvPI/ffQnhFHd+jE6gpgKFk8Efwe8/mu8pXCwJQ5FpGa4JGu1WwUswNI2woB9pQlAsKTLI5N0rEePcW2jZdG6V+wC8bhU3xDaNaIWUVwd+YBji0TpqxHkA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453020; c=relaxed/simple; bh=I21JH58CR/AbSnYTHYyjog9l03x/hS5CLetpI7AAePU=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=nN3TsnBvyPBskYN9mgZwG2dzcwtyU//PDdCf9R+RwFDuDzGTa1bJNcJX/jJI4JbrzaJQzPCOgBp62uGZVDyaOMe3HHnSW5K/nRai2RJvovddO0HzhWJ6+2eqZdejx/Ud37rGFmiLSsh9ZMZ/TG0Yu2/C1ECWZAvcfKDUqZqLAz4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=TDI63LLK; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="TDI63LLK" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 35919C116C6; Tue, 23 Dec 2025 01:23:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766453020; bh=I21JH58CR/AbSnYTHYyjog9l03x/hS5CLetpI7AAePU=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=TDI63LLK1Dd1BykFPpVc3P1jymgU0gE7V7Jfq79W8nXBVnryoXik9MiSeHISYJWso iqEKTBpNSpTY8vUe1lMC7M3DqFprUaOCoghpDgRcaQ/apQHkwkEhtSSCKPwjYSPAjV Q6oa3MzIbx34D1ijUlfgrZ3+pt6SaxE/svxtaVLffBOGZ9J/sEK1h1bVE5lH+SXT6Y ItLdWd3XzoCmoyA8NA96k8dppgPXtTVypxXkHlWVhG3sbDZ8/5gUUB3x4OuUmsz8lF RLUs/9RWms61xzLBp2sYKMs1nTdyT6CWO/IATP6khzWzRaAZf2RQzvBKjxgW73VJXc oUJotCBGyqs3w== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:23 +0000 Subject: [PATCH v9 29/30] KVM: arm64: selftests: Add SME system registers to get-reg-list Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-29-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: 
Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=3018; i=broonie@kernel.org; h=from:subject:message-id; bh=I21JH58CR/AbSnYTHYyjog9l03x/hS5CLetpI7AAePU=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6ZT3WxDFPfEe4BoCZZTNsMTuw5J7xSE+tTa Kmv0qg+FySJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnumQAKCRAk1otyXVSH 0MA/CACCkH772PTG/2cJD7n0eyuJAn8ayWPMc6UNVuPIbnw1VvkAHXSJua/tE7V2+I1eNywUt2e pA231uFEmuh9U2Y6d0sb5W9L9K+Oy8kjQefNK/pbaZF2e0R5oEWc+jHgWcPzNj5z9r6qaz5Ne+c 6+UEzIJyvS6TIop3jGeSrcXeqptOr8td52Z2M0WWURsA03BIDpd1xNd+yVw4aHi6qTphgsODZ1s 6OZeAxZhqrHu4y5N3ssTpQjiGFesV+2QhchBxbA+8aiumfc4ci18jbU9OXKN2qZWMEl9Vwk/+BK DhKMtrPx1XA/2ePHIWanby31qExFiyZ1phJO8hFZGjQr5Ylh X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME adds a number of new system registers, update get-reg-list to check for them based on the visibility of SME. Signed-off-by: Mark Brown --- tools/testing/selftests/kvm/arm64/get-reg-list.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/arm64/get-reg-list.c b/tools/testi= ng/selftests/kvm/arm64/get-reg-list.c index 0a3a94c4cca1..876c4719e2e2 100644 --- a/tools/testing/selftests/kvm/arm64/get-reg-list.c +++ b/tools/testing/selftests/kvm/arm64/get-reg-list.c @@ -61,7 +61,13 @@ static struct feature_id_reg feat_id_regs[] =3D { REG_FEAT(HFGITR2_EL2, ID_AA64MMFR0_EL1, FGT, FGT2), REG_FEAT(HDFGRTR2_EL2, ID_AA64MMFR0_EL1, FGT, FGT2), REG_FEAT(HDFGWTR2_EL2, ID_AA64MMFR0_EL1, FGT, FGT2), - REG_FEAT(ZCR_EL2, ID_AA64PFR0_EL1, SVE, IMP), + REG_FEAT(SMCR_EL1, ID_AA64PFR1_EL1, SME, IMP), + REG_FEAT(SMCR_EL2, ID_AA64PFR1_EL1, SME, IMP), + REG_FEAT(SMIDR_EL1, ID_AA64PFR1_EL1, SME, IMP), + REG_FEAT(SMPRI_EL1, ID_AA64PFR1_EL1, SME, IMP), + REG_FEAT(SMPRIMAP_EL2, ID_AA64PFR1_EL1, SME, IMP), + REG_FEAT(TPIDR2_EL0, ID_AA64PFR1_EL1, SME, IMP), + REG_FEAT(SVCR, ID_AA64PFR1_EL1, SME, IMP), REG_FEAT(SCTLR2_EL1, ID_AA64MMFR3_EL1, SCTLRX, IMP), REG_FEAT(SCTLR2_EL2, ID_AA64MMFR3_EL1, SCTLRX, IMP), REG_FEAT(VDISR_EL2, ID_AA64PFR0_EL1, RAS, IMP), @@ -367,6 +373,7 @@ static __u64 base_regs[] =3D { ARM64_SYS_REG(3, 0, 0, 0, 0), /* MIDR_EL1 */ ARM64_SYS_REG(3, 0, 0, 0, 6), /* REVIDR_EL1 */ ARM64_SYS_REG(3, 1, 0, 0, 1), /* CLIDR_EL1 */ + ARM64_SYS_REG(3, 1, 0, 0, 6), /* SMIDR_EL1 */ ARM64_SYS_REG(3, 1, 0, 0, 7), /* AIDR_EL1 */ ARM64_SYS_REG(3, 3, 0, 0, 1), /* CTR_EL0 */ ARM64_SYS_REG(2, 0, 0, 0, 4), @@ -498,6 +505,8 @@ static __u64 base_regs[] =3D { ARM64_SYS_REG(3, 0, 1, 0, 1), /* ACTLR_EL1 */ ARM64_SYS_REG(3, 0, 1, 0, 2), /* CPACR_EL1 */ KVM_ARM64_SYS_REG(SYS_SCTLR2_EL1), + ARM64_SYS_REG(3, 0, 1, 2, 4), /* SMPRI_EL1 */ + ARM64_SYS_REG(3, 0, 1, 2, 6), /* SMCR_EL1 */ ARM64_SYS_REG(3, 0, 2, 0, 0), /* TTBR0_EL1 */ ARM64_SYS_REG(3, 0, 2, 0, 1), /* TTBR1_EL1 */ ARM64_SYS_REG(3, 0, 2, 0, 2), /* TCR_EL1 */ @@ -518,9 +527,11 @@ static __u64 base_regs[] =3D { ARM64_SYS_REG(3, 0, 13, 0, 4), /* TPIDR_EL1 */ ARM64_SYS_REG(3, 0, 14, 1, 0), /* CNTKCTL_EL1 */ ARM64_SYS_REG(3, 2, 0, 0, 0), /* CSSELR_EL1 */ + ARM64_SYS_REG(3, 3, 4, 2, 2), /* SVCR */ ARM64_SYS_REG(3, 3, 10, 2, 4), /* POR_EL0 */ 
ARM64_SYS_REG(3, 3, 13, 0, 2), /* TPIDR_EL0 */ ARM64_SYS_REG(3, 3, 13, 0, 3), /* TPIDRRO_EL0 */ + ARM64_SYS_REG(3, 3, 13, 0, 5), /* TPIDR2_EL0 */ ARM64_SYS_REG(3, 3, 14, 0, 1), /* CNTPCT_EL0 */ ARM64_SYS_REG(3, 3, 14, 2, 1), /* CNTP_CTL_EL0 */ ARM64_SYS_REG(3, 3, 14, 2, 2), /* CNTP_CVAL_EL0 */ @@ -730,6 +741,8 @@ static __u64 el2_regs[] =3D { SYS_REG(HFGITR_EL2), SYS_REG(HACR_EL2), SYS_REG(ZCR_EL2), + SYS_REG(SMPRIMAP_EL2), + SYS_REG(SMCR_EL2), SYS_REG(HCRX_EL2), SYS_REG(TTBR0_EL2), SYS_REG(TTBR1_EL2), --=20 2.47.3 From nobody Sat Feb 7 08:45:04 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E5B13491C8; Tue, 23 Dec 2025 01:23:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453024; cv=none; b=bN2ckrg1Uz9XoBreX1dItVEN0O9KSMIe4L1TR9BG7AoGchRfvKlJMHdSYoeBvyDFiD6CSDUy+02TSWC2zACkZYUK35VneBSRzJlZUG+8mc5P4PuS10jv6SIrh74jRCxWPoxIde6eM4gAY2GeVHqsQPZUPrxKNZ99DQkb766++wY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766453024; c=relaxed/simple; bh=biBZze5wJQdB2Qc9fXo86i+NjUBgBr6X/9wDncv9D5Q=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=l6qbMsOeuKFzjq68qLan6TBNAuh3upLAboDV4ReRF8dY7yyQfE31NKs/N7w/DcdxeDyhNiyKVHTWnhlWhy5XAgRxFai35xn3eVn5O7W3bX0dBZ8wx+mh0/yXhDkAss2w3ZqhUf0NdnaB9ne3gifHdc1WusVyTc0/15RVcopSjSg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=lecEL8eO; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="lecEL8eO" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 75150C4CEF1; Tue, 23 Dec 2025 01:23:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1766453024; bh=biBZze5wJQdB2Qc9fXo86i+NjUBgBr6X/9wDncv9D5Q=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=lecEL8eOZXFmjzpni/xmLZL/Z86cerSieXn+9NDNhqxg61iPWXtGinqGPWE32xxKn Sdf2+JykhJIGkxKvmrAflNj49YoeSXa5WtfMMl3G15ebOBnVVJDzuR81ZltmkRUbBg SLs7dnpOb4gALrpTQjXQeBHroDbB1KYr9/CdPS6RQP/V6fXnqUEZcs8HCtwwhAVKD9 nKNncO41OCNTDQduqNyCWKATmH/JpX0WdBR3I+6FPTkPMClm5uhqRi3yibvv4/U8Ue H6Yh4hUDeFuXsxf/TTHMRz4qsSveBW74iA1N7ynstRjxciYsnqKK+LrUrZoll9NpAw zxoZ031Plt4CQ== From: Mark Brown Date: Tue, 23 Dec 2025 01:21:24 +0000 Subject: [PATCH v9 30/30] KVM: arm64: selftests: Add SME to set_id_regs test Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251223-kvm-arm64-sme-v9-30-8be3867cb883@kernel.org> References: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> In-Reply-To: <20251223-kvm-arm64-sme-v9-0-8be3867cb883@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric 
Auger , Mark Brown X-Mailer: b4 0.15-dev-47773 X-Developer-Signature: v=1; a=openpgp-sha256; l=2754; i=broonie@kernel.org; h=from:subject:message-id; bh=biBZze5wJQdB2Qc9fXo86i+NjUBgBr6X/9wDncv9D5Q=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpSe6ZkgocYr2mGN9fKdCMYgkg1el8uznuZmkoj G98q/feEOOJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaUnumQAKCRAk1otyXVSH 0K85B/wNsRCOfe6FnZaGCX+B0kHmxdSjtYhwgjJdEAvSvTKLAtZmj7jDPO+5qc7+VVh4pX/MmhI aefgBuf4s+uCqd0d0o0zQoZpmu3n3KLeBJeg+6A+tfistXav4qG8kBQ8OwtKkReRVNus6ppmGWx OIBxWlmQP5DghzNK4P5WqRro7rC/P8XpI3mS5lfGLLlupC6kvePjQ+QamKlY0zOK8pVzCnj61xp /PvPFqdEnX4+cRCHIGqBpEk8LLPN1K9R14DoyviycsggQd5GC991rANwaV4/g+Hl5VStaetztPI XY2x1ikz93+SlwLZI3b3kXGGEvSabfKcm5V8AeMmF42b+wF4 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Add coverage of the SME ID registers to set_id_regs, ID_AA64PFR1_EL1.SME becomes writable and we add ID_AA64SMFR_EL1 and it's subfields. Signed-off-by: Mark Brown --- tools/testing/selftests/kvm/arm64/set_id_regs.c | 24 +++++++++++++++++++++= +++ 1 file changed, 24 insertions(+) diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testin= g/selftests/kvm/arm64/set_id_regs.c index 641194c5005a..73489c48d550 100644 --- a/tools/testing/selftests/kvm/arm64/set_id_regs.c +++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c @@ -203,6 +203,28 @@ static const struct reg_ftr_bits ftr_id_aa64mmfr3_el1[= ] =3D { REG_FTR_END, }; =20 +static const struct reg_ftr_bits ftr_id_aa64smfr0_el1[] =3D { + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, FA64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, LUTv2, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SMEver, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I16I64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F64F64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I16I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, B16B16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F16F16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F8F16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F8F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I8I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F16F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, B16F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, BI32I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F32F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8FMA, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8DP4, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8DP2, 0), + REG_FTR_END, +}; + static const struct reg_ftr_bits ftr_id_aa64zfr0_el1[] =3D { REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F64MM, 0), REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F32MM, 0), @@ -235,6 +257,7 @@ static struct test_feature_reg test_regs[] =3D { TEST_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1_el1), TEST_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2_el1), TEST_REG(SYS_ID_AA64MMFR3_EL1, ftr_id_aa64mmfr3_el1), + TEST_REG(SYS_ID_AA64SMFR0_EL1, ftr_id_aa64smfr0_el1), TEST_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0_el1), }; =20 @@ -254,6 +277,7 @@ static void guest_code(void) GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1); GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1); GUEST_REG_SYNC(SYS_ID_AA64MMFR3_EL1); + GUEST_REG_SYNC(SYS_ID_AA64SMFR0_EL1); GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1); GUEST_REG_SYNC(SYS_MPIDR_EL1); GUEST_REG_SYNC(SYS_CLIDR_EL1); --=20 2.47.3
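For context, a sketch of the kind of write the new coverage exercises,
assuming the selftest helpers used above; the register encoding and the
field mask (ID_AA64PFR1_EL1.SME is bits [27:24]) are illustrative and not
taken from the patch:

/* Hide SME from a guest now that ID_AA64PFR1_EL1.SME is writable. */
#define ID_AA64PFR1_EL1_ENC	ARM64_SYS_REG(3, 0, 0, 4, 1)

static void hide_sme(struct kvm_vcpu *vcpu)
{
	uint64_t val = vcpu_get_reg(vcpu, ID_AA64PFR1_EL1_ENC);

	/* ID register writes must happen before the vCPU first runs. */
	val &= ~(0xfULL << 24);		/* clear ID_AA64PFR1_EL1.SME */
	vcpu_set_reg(vcpu, ID_AA64PFR1_EL1_ENC, val);
}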