From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:00:53 +0000
Subject: [PATCH v10 01/30] arm64/sysreg: Update SMIDR_EL1 to DDI0601 2025-06
Message-Id: <20260306-kvm-arm64-sme-v10-1-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell, Eric Auger, Mark Brown
X-Mailer: b4 0.15-dev-6ac23

Update the definition of SMIDR_EL1 in the sysreg file to reflect the
information in DDI0601 2025-06.
This includes more generic ways of describing the sharing of SMCUs,
more information on supported priorities, and additional resolution
for describing affinity groups.

Reviewed-by: Fuad Tabba
Signed-off-by: Mark Brown
Acked-by: Catalin Marinas
---
 arch/arm64/tools/sysreg | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 9d1c21108057..b6586accf344 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -3655,11 +3655,15 @@ Field 3:0 BS
 EndSysreg
 
 Sysreg SMIDR_EL1 3 1 0 0 6
-Res0 63:32
+Res0 63:60
+Field 59:56 NSMC
+Field 55:52 HIP
+Field 51:32 AFFINITY2
 Field 31:24 IMPLEMENTER
 Field 23:16 REVISION
 Field 15 SMPS
-Res0 14:12
+Field 14:13 SH
+Res0 12
 Field 11:0 AFFINITY
 EndSysreg
 
-- 
2.47.3
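For readers decoding the new layout by hand, the fields can be extracted with plain shifts and masks. This is an illustrative sketch only — the macro names below are invented for this example, not kernel API (the kernel generates its accessors from arch/arm64/tools/sysreg); the bit positions come from the sysreg update above:

```c
#include <stdint.h>

/*
 * Hypothetical accessors for the SMIDR_EL1 layout per DDI0601
 * 2025-06. Names are made up for illustration.
 */
#define SMIDR_NSMC(r)         (((r) >> 56) & 0xfULL)      /* 59:56 */
#define SMIDR_HIP(r)          (((r) >> 52) & 0xfULL)      /* 55:52 */
#define SMIDR_AFFINITY2(r)    (((r) >> 32) & 0xfffffULL)  /* 51:32 */
#define SMIDR_IMPLEMENTER(r)  (((r) >> 24) & 0xffULL)     /* 31:24 */
#define SMIDR_REVISION(r)     (((r) >> 16) & 0xffULL)     /* 23:16 */
#define SMIDR_SMPS(r)         (((r) >> 15) & 0x1ULL)      /* 15 */
#define SMIDR_SH(r)           (((r) >> 13) & 0x3ULL)      /* 14:13 */
#define SMIDR_AFFINITY(r)     ((r) & 0xfffULL)            /* 11:0 */
```

The AFFINITY2 field widens the affinity description from 12 to 32 usable bits across the two fields, which is the "additional resolution for describing affinity groups" the commit message mentions.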
From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:00:54 +0000
Subject: [PATCH v10 02/30] arm64/fpsimd: Update FA64 and ZT0 enables when loading SME state
Message-Id: <20260306-kvm-arm64-sme-v10-2-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell, Eric Auger, Mark Brown
X-Mailer: b4 0.15-dev-6ac23
Currently we enable EL0 and EL1 access to FA64 and ZT0 at boot and
leave them enabled throughout the runtime of the system. When we add
KVM support we will need to make this configuration dynamic, since
these features may be disabled for some KVM guests.

The host kernel saves the floating point state for non-protected
guests, and we wish to avoid KVM having to reload that state
needlessly on guest reentry, so move the configuration of these
enables to the floating point state reload. Provide a helper which
does the configuration as part of a read/modify/write operation,
together with the configuration of the task vector length, then
update the floating point state load and the SME access trap to use
it. Also remove the setting of the enable bits from the CPU feature
identification and resume paths.

There will be a small overhead from setting the enables one at a time
but this should be negligible in the context of the state load or
access trap. In order to avoid compiler warnings due to unused
variables in the !CONFIG_ARM64_SME case we avoid storing the vector
length in temporary variables.
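The read/modify/write pattern the helper relies on can be sketched outside the kernel. This stand-in returns whether a write was issued so the skip can be observed; the real macro writes SMCR_ELx via write_sysreg_s() and returns nothing, and the bit positions here are illustrative rather than authoritative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the SMCR_ELx bits used by the patch. */
#define SMCR_LEN_MASK	0x1ffULL
#define SMCR_FA64	(1ULL << 31)
#define SMCR_EZT0	(1ULL << 30)

/*
 * Sketch of the sme_cond_update_smcr() idea: compute the desired
 * register value, then only write the (simulated) register when it
 * actually differs from the current contents.
 */
static bool cond_update_smcr(uint64_t *reg, uint64_t vl, bool fa64, bool zt0)
{
	uint64_t want = vl & SMCR_LEN_MASK;

	if (fa64)
		want |= SMCR_FA64;
	if (zt0)
		want |= SMCR_EZT0;

	if (*reg == want)
		return false;	/* redundant write avoided */

	*reg = want;		/* write_sysreg_s() in the kernel */
	return true;
}
```

The point of the comparison is that system register writes (and the subsequent synchronisation) are expensive, so the common case where nothing changed does no write at all.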
Signed-off-by: Mark Brown
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/fpsimd.h | 18 ++++++++++++++++
 arch/arm64/kernel/cpufeature.c  |  2 --
 arch/arm64/kernel/fpsimd.c      | 47 +++++++++++-------------------
 3 files changed, 30 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 1d2e33559bd5..7361b3b4a5f5 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -428,6 +428,22 @@ static inline size_t sme_state_size(struct task_struct const *task)
 	return __sme_state_size(task_get_sme_vl(task));
 }
 
+/*
+ * Note that unlike SVE we have additional feature bits for FA64 and
+ * ZT0 as well as the VL.
+ */
+#define sme_cond_update_smcr(vl, fa64, zt0, reg)		\
+	do {							\
+		u64 __old = read_sysreg_s((reg));		\
+		u64 __new = vl & SMCR_ELx_LEN_MASK;		\
+		if (fa64)					\
+			__new |= SMCR_ELx_FA64;			\
+		if (zt0)					\
+			__new |= SMCR_ELx_EZT0;			\
+		if (__old != __new)				\
+			write_sysreg_s(__new, (reg));		\
+	} while (0)
+
 #else
 
 static inline void sme_user_disable(void) { BUILD_BUG(); }
@@ -456,6 +472,8 @@ static inline size_t sme_state_size(struct task_struct const *task)
 	return 0;
 }
 
+#define sme_cond_update_smcr(val, fa64, zt0, reg) do { } while (0)
+
 #endif /* ! CONFIG_ARM64_SME */
 
 /* For use by EFI runtime services calls only */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c31f8e17732a..a1fcfab3024f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2970,7 +2970,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.capability = ARM64_SME_FA64,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_fa64,
 		ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP)
 	},
 	{
@@ -2978,7 +2977,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.capability = ARM64_SME2,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_sme2,
 		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2)
 	},
 #endif /* CONFIG_ARM64_SME */
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 9de1d8a604cb..cf419319f077 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -398,11 +398,15 @@ static void task_fpsimd_load(void)
 
 	/* Restore SME, override SVE register configuration if needed */
 	if (system_supports_sme()) {
-		unsigned long sme_vl = task_get_sme_vl(current);
-
-		/* Ensure VL is set up for restoring data */
+		/*
+		 * Ensure VL is set up for restoring data. KVM might
+		 * disable subfeatures so we reset them each time.
+		 */
 		if (test_thread_flag(TIF_SME))
-			sme_set_vq(sve_vq_from_vl(sme_vl) - 1);
+			sme_cond_update_smcr(sve_vq_from_vl(task_get_sme_vl(current)) - 1,
+					     system_supports_fa64(),
+					     system_supports_sme2(),
+					     SYS_SMCR_EL1);
 
 		write_sysreg_s(current->thread.svcr, SYS_SVCR);
 
@@ -1211,26 +1215,6 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p)
 	isb();
 }
 
-void cpu_enable_sme2(const struct arm64_cpu_capabilities *__always_unused p)
-{
-	/* This must be enabled after SME */
-	BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME);
-
-	/* Allow use of ZT0 */
-	write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_EZT0_MASK,
-		       SYS_SMCR_EL1);
-}
-
-void cpu_enable_fa64(const struct arm64_cpu_capabilities *__always_unused p)
-{
-	/* This must be enabled after SME */
-	BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME);
-
-	/* Allow use of FA64 */
-	write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK,
-		       SYS_SMCR_EL1);
-}
-
 void __init sme_setup(void)
 {
 	struct vl_info *info = &vl_info[ARM64_VEC_SME];
@@ -1275,17 +1259,9 @@ void __init sme_setup(void)
 
 void sme_suspend_exit(void)
 {
-	u64 smcr = 0;
-
 	if (!system_supports_sme())
 		return;
 
-	if (system_supports_fa64())
-		smcr |= SMCR_ELx_FA64;
-	if (system_supports_sme2())
-		smcr |= SMCR_ELx_EZT0;
-
-	write_sysreg_s(smcr, SYS_SMCR_EL1);
 	write_sysreg_s(0, SYS_SMPRI_EL1);
 }
 
@@ -1400,9 +1376,10 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
 		WARN_ON(1);
 
 	if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
-		unsigned long vq_minus_one =
-			sve_vq_from_vl(task_get_sme_vl(current)) - 1;
-		sme_set_vq(vq_minus_one);
+		sme_cond_update_smcr(sve_vq_from_vl(task_get_sme_vl(current)) - 1,
+				     system_supports_fa64(),
+				     system_supports_sme2(),
+				     SYS_SMCR_EL1);
 
 		fpsimd_bind_task_to_cpu();
 	} else {
-- 
2.47.3
From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:00:55 +0000
Subject: [PATCH v10 03/30] arm64/fpsimd: Decide to save ZT0 and streaming mode FFR at bind time
Message-Id: <20260306-kvm-arm64-sme-v10-3-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell, Eric Auger, Mark Brown
X-Mailer: b4 0.15-dev-6ac23

Some parts of the SME state are optional, enabled by additional
features on top of the base FEAT_SME and controlled with enable bits
in SMCR_ELx. We unconditionally enable these for the host, but for KVM
we will allow the feature set exposed to guests to be restricted by
the VMM.
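The bind-time bookkeeping this patch introduces can be sketched as a snapshot of the SMCR_ELx enable bits taken when state is bound to the CPU and consulted later on the save path. All names below are invented for illustration (in the kernel the snapshot lives in the sme_features member added to struct cpu_fp_state), and the bit positions are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative SMCR_ELx enable bits. */
#define SMCR_FA64	(1ULL << 31)
#define SMCR_EZT0	(1ULL << 30)

/* Toy stand-in for the state shared between bind and save. */
struct fp_state_sketch {
	uint64_t sme_features;	/* enable bits live at bind time */
};

/* Bind time: snapshot which optional features are currently enabled. */
static void bind_features(struct fp_state_sketch *s, bool fa64, bool zt0)
{
	s->sme_features = 0;
	if (fa64)
		s->sme_features |= SMCR_FA64;
	if (zt0)
		s->sme_features |= SMCR_EZT0;
}

/*
 * Save time: consult the snapshot rather than global feature tests,
 * so state that was disabled (and hence trapped) for a guest is
 * skipped instead of faulting.
 */
static bool save_ffr(const struct fp_state_sketch *s)
{
	return s->sme_features & SMCR_FA64;
}

static bool save_zt0(const struct fp_state_sketch *s)
{
	return s->sme_features & SMCR_EZT0;
}
```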
These are the FFR register (FEAT_SME_FA64) and ZT0 (FEAT_SME2). We
defer saving of guest floating point state for non-protected guests
to the host kernel, and we also want to avoid having to reconfigure
the guest floating point state if nothing used the floating point
state while running the host. If the guest was running with the
optional features disabled then traps will be enabled for them, so
the host kernel will need to skip accessing that state when saving
state for the guest.

Support this by moving the decision about saving this state to the
point where we bind floating point state to the CPU, adding a new
variable to cpu_fp_state which uses the enable bits in SMCR_ELx to
flag which features are enabled.

Reviewed-by: Fuad Tabba
Signed-off-by: Mark Brown
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/fpsimd.h |  1 +
 arch/arm64/kernel/fpsimd.c      | 10 ++++++++--
 arch/arm64/kvm/fpsimd.c         |  1 +
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 7361b3b4a5f5..e97729aa3b2f 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -87,6 +87,7 @@ struct cpu_fp_state {
 	void *sme_state;
 	u64 *svcr;
 	u64 *fpmr;
+	u64 sme_features;
 	unsigned int sve_vl;
 	unsigned int sme_vl;
 	enum fp_type *fp_type;
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index cf419319f077..2af0e0c5b9f4 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -483,12 +483,12 @@ static void fpsimd_save_user_state(void)
 
 		if (*svcr & SVCR_ZA_MASK)
 			sme_save_state(last->sme_state,
-				       system_supports_sme2());
+				       last->sme_features & SMCR_ELx_EZT0);
 
 		/* If we are in streaming mode override regular SVE. */
 		if (*svcr & SVCR_SM_MASK) {
 			save_sve_regs = true;
-			save_ffr = system_supports_fa64();
+			save_ffr = last->sme_features & SMCR_ELx_FA64;
 			vl = last->sme_vl;
 		}
 	}
@@ -1632,6 +1632,12 @@ static void fpsimd_bind_task_to_cpu(void)
 	last->to_save = FP_STATE_CURRENT;
 	current->thread.fpsimd_cpu = smp_processor_id();
 
+	last->sme_features = 0;
+	if (system_supports_fa64())
+		last->sme_features |= SMCR_ELx_FA64;
+	if (system_supports_sme2())
+		last->sme_features |= SMCR_ELx_EZT0;
+
 	/*
 	 * Toggle SVE and SME trapping for userspace if needed, these
 	 * are serialsied by ret_to_user().
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 15e17aca1dec..9158353d8be3 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -80,6 +80,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 	fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR);
 	fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR);
 	fp_state.fp_type = &vcpu->arch.fp_type;
+	fp_state.sme_features = 0;
 
 	if (vcpu_has_sve(vcpu))
 		fp_state.to_save = FP_STATE_SVE;
-- 
2.47.3
From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:00:56 +0000
Subject: [PATCH v10 04/30] arm64/fpsimd: Determine maximum virtualisable SME vector length
Message-Id: <20260306-kvm-arm64-sme-v10-4-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell, Eric Auger, Mark Brown
X-Mailer: b4 0.15-dev-6ac23

As with SVE we can only virtualise SME vector lengths that are
supported by all CPUs in the system, so implement checks similar to
those for SVE. Unlike SVE there are no specific vector lengths that
are architecturally required, so the handling is subtly different: we
report a system where no common vector length exists with a maximum
virtualisable vector length of SME_VQ_INVALID.
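The kernel makes this decision with reversed VQ bitmaps, bitmap_andnot() and find_last_bit(); the same logic can be sketched with plain arrays. This is a hypothetical helper for illustration, where -1 stands in for the SME_VQ_INVALID case:

```c
#include <stdbool.h>

#define VQ_MAX 4	/* illustrative; the kernel uses SVE_VQ_MAX */

/*
 * all_cpus[i]: VQ i+1 is usable on every CPU; some_cpus[i]: usable on
 * at least one CPU. A VQ present in some_cpus but not all_cpus is
 * mismatched, so the maximum virtualisable VQ sits just below the
 * smallest mismatch.
 */
static int max_virtualisable_vq(const bool *all_cpus, const bool *some_cpus)
{
	for (int vq = 1; vq <= VQ_MAX; vq++) {
		if (some_cpus[vq - 1] && !all_cpus[vq - 1]) {
			if (vq == 1)
				return -1;	/* no virtualisable VLs */
			return vq - 1;	/* cap just below the mismatch */
		}
	}
	return VQ_MAX;	/* every supported VL is virtualisable */
}
```

The interesting case is the second branch: if even the smallest vector length is only partially supported, there is nothing KVM could offer uniformly across CPUs, which is what reporting SME_VQ_INVALID expresses.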
Signed-off-by: Mark Brown
Reviewed-by: Catalin Marinas
Reviewed-by: Jean-Philippe Brucker
---
 arch/arm64/include/asm/fpsimd.h |  2 ++
 arch/arm64/kernel/fpsimd.c      | 21 ++++++++++++++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index e97729aa3b2f..0cd8a866e844 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -69,6 +69,8 @@ static inline void cpacr_restore(unsigned long cpacr)
 #define ARCH_SVE_VQ_MAX ((ZCR_ELx_LEN_MASK >> ZCR_ELx_LEN_SHIFT) + 1)
 #define SME_VQ_MAX	((SMCR_ELx_LEN_MASK >> SMCR_ELx_LEN_SHIFT) + 1)
 
+#define SME_VQ_INVALID	(SME_VQ_MAX + 1)
+
 struct task_struct;
 
 extern void fpsimd_save_state(struct user_fpsimd_state *state);
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 2af0e0c5b9f4..49c050ef6db9 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1218,7 +1218,8 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p)
 void __init sme_setup(void)
 {
 	struct vl_info *info = &vl_info[ARM64_VEC_SME];
-	int min_bit, max_bit;
+	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
+	int min_bit, max_bit, b;
 
 	if (!system_supports_sme())
 		return;
@@ -1249,12 +1250,30 @@ void __init sme_setup(void)
 	 */
 	set_sme_default_vl(find_supported_vector_length(ARM64_VEC_SME, 32));
 
+	bitmap_andnot(tmp_map, info->vq_partial_map, info->vq_map,
+		      SVE_VQ_MAX);
+
+	b = find_last_bit(tmp_map, SVE_VQ_MAX);
+	if (b >= SVE_VQ_MAX)
+		/* All VLs virtualisable */
+		info->max_virtualisable_vl = sve_vl_from_vq(ARCH_SVE_VQ_MAX);
+	else if (b == SVE_VQ_MAX - 1)
+		/* No virtualisable VLs */
+		info->max_virtualisable_vl = sve_vl_from_vq(SME_VQ_INVALID);
+	else
+		info->max_virtualisable_vl = sve_vl_from_vq(__bit_to_vq(b + 1));
+
 	pr_info("SME: minimum available vector length %u bytes per vector\n",
 		info->min_vl);
 	pr_info("SME: maximum available vector length %u bytes per vector\n",
 		info->max_vl);
 	pr_info("SME: default vector length %u bytes per vector\n",
 		get_sme_default_vl());
+
+	/* KVM decides whether to support mismatched systems. Just warn here: */
+	if (info->max_virtualisable_vl < info->max_vl ||
+	    info->max_virtualisable_vl == sve_vl_from_vq(SME_VQ_INVALID))
+		pr_warn("SME: unvirtualisable vector lengths present\n");
 }
 
 void sme_suspend_exit(void)
-- 
2.47.3
From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:00:57 +0000
Subject: [PATCH v10 05/30] KVM: arm64: Pay attention to FFR parameter in SVE save and load
Message-Id: <20260306-kvm-arm64-sme-v10-5-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell, Eric Auger, Mark Brown
X-Mailer: b4 0.15-dev-6ac23
The hypervisor copies of the SVE save and load functions are
prototyped with third arguments specifying whether FFR should be
accessed, but the assembly functions overwrite whatever is supplied
and unconditionally access FFR. Remove the overwrite and use the
supplied parameter. This has no effect currently, since FFR is always
present for SVE, but the parameter will be important for SME.

Reviewed-by: Fuad Tabba
Signed-off-by: Mark Brown
Reviewed-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/fpsimd.S | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S
index e950875e31ce..6e16cbfc5df2 100644
--- a/arch/arm64/kvm/hyp/fpsimd.S
+++ b/arch/arm64/kvm/hyp/fpsimd.S
@@ -21,13 +21,11 @@ SYM_FUNC_START(__fpsimd_restore_state)
 SYM_FUNC_END(__fpsimd_restore_state)
 
 SYM_FUNC_START(__sve_restore_state)
-	mov	x2, #1
 	sve_load 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_restore_state)
 
 SYM_FUNC_START(__sve_save_state)
-	mov	x2, #1
 	sve_save 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_save_state)
-- 
2.47.3
b=kpOOuyoK/zG8iZl9rLFkzb1Y/NPSW77kHWwz/kMmllY/BmdlFZdu3lIavi/uQVL5pqc5acVtQsbeXTUYVjEYmR8OvLcKFhslTfpEZ2HJorfgk/UpgzMv+ja1GwNUesz4AU18iiObQxepLs6hLWd2FCOSYzvkj43aNjT0qkhS0UM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772816991; c=relaxed/simple; bh=FbDjUlkeVQonqZmxtMpNg1hWpmAleRhNyM/CAB2skO0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=tFf+uU2ZyN3PpVHu1tcaT5ZI2RZJAZOtunNTAl/P9z1+D0qmxL7Og/hlXJ6FS5WOMrhEsIOBNbG16STUecSPGPOdrQL5+HiZYUfsYPkshP9YyHGEra1FICz9+MuBYRVtnVCN8gmhAgxB4TPioX3qRWpjbbyCuddhAMFUsuXTUmQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Ieyo51tA; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Ieyo51tA" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B6475C19425; Fri, 6 Mar 2026 17:09:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772816991; bh=FbDjUlkeVQonqZmxtMpNg1hWpmAleRhNyM/CAB2skO0=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=Ieyo51tATWDI+S8I3Gz+De+jhyw22b9XQ+1Zu92Ss6S0H7ATsoIoULiqSJH5Bipue 9Uygsp66B3MTsLoaNqhqvruNLONw7r6dAhIPQp7nZLcPcpyoVtDsmtjKzDdSwPMfyx heH43tYpv88SExAWXqbAJ5ujzakRZV+GQIVbE05kLh8xFoTXf7zu1npmBZLan6WmBz 3ur8KQx0W6l3KW8BP+z/7+5ukrEM+2wAkq0Bd7ilTtO4Cn9fvtwPmcEtICiYEWEOa2 7Ik5fy6b9TX2q2VHgJL6gijq39TXZCjbudV1W9vzoKWLHgH/qPwvNNJJyHREPqy3wA nDnNFd9K+MUEA== From: Mark Brown Date: Fri, 06 Mar 2026 17:00:58 +0000 Subject: [PATCH v10 06/30] KVM: arm64: Pull ctxt_has_ helpers to start of sysreg-sr.h Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-6-43f7683a0fb7@kernel.org> 
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=3951; i=broonie@kernel.org; h=from:subject:message-id; bh=FbDjUlkeVQonqZmxtMpNg1hWpmAleRhNyM/CAB2skO0=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwotBwe6WGrNGhWACEQm6DwKxwb/T5p3br7f0 yYxgcLYIv2JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKLQAKCRAk1otyXVSH 0PjdB/9hLoYW+EFNZxW/Glm2n30jgDHfGudVVssSx/ZbgkT0RciL7KB/ZZFsRtcZbsyMLSEs8hS PwBgKQ1/wnuRA0JJvbrmdRzEfYR4+MbsXYkgZo9rvtMNUU48hqJEDGM7haxw3pwQ8cRbugiq+AN QSfoAJMYd0WuvnDtevUTiZM3T4karmPm21X6SAQ6zOez0cu7bW/+26xq7Voi6+c3g5QvA48tuvM JT3yvDYUnH5e2QD8m0O0qxaPzEqWRIjYJJ61YQJNtcKPp+70SDwaRe3Quy9ZnU6LENOSu+sqCFM bUQfofBtkZxztAHMvvj233KCR6pfUxIQiw6oGeHvuTzp0P9g X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Rather than add earlier prototypes of specific ctxt_has_ helpers let's just pull all their definitions to the top of sysreg-sr.h so they're all available to all the individual save/restore functions. 
Reviewed-by: Fuad Tabba Signed-off-by: Mark Brown Reviewed-by: Jean-Philippe Brucker --- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 84 +++++++++++++++-----------= ---- 1 file changed, 41 insertions(+), 43 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hy= p/include/hyp/sysreg-sr.h index a17cbe7582de..5624fd705ae3 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -16,8 +16,6 @@ #include #include =20 -static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt); - static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) { struct kvm_vcpu *vcpu =3D ctxt->__hyp_running_vcpu; @@ -28,47 +26,6 @@ static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_c= pu_context *ctxt) return vcpu; } =20 -static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) -{ - return host_data_ptr(host_ctxt) !=3D ctxt; -} - -static inline u64 *ctxt_mdscr_el1(struct kvm_cpu_context *ctxt) -{ - struct kvm_vcpu *vcpu =3D ctxt_to_vcpu(ctxt); - - if (ctxt_is_guest(ctxt) && kvm_host_owns_debug_regs(vcpu)) - return &vcpu->arch.external_mdscr_el1; - - return &ctxt_sys_reg(ctxt, MDSCR_EL1); -} - -static inline u64 ctxt_midr_el1(struct kvm_cpu_context *ctxt) -{ - struct kvm *kvm =3D kern_hyp_va(ctxt_to_vcpu(ctxt)->kvm); - - if (!(ctxt_is_guest(ctxt) && - test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &kvm->arch.flags))) - return read_cpuid_id(); - - return kvm_read_vm_id_reg(kvm, SYS_MIDR_EL1); -} - -static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) -{ - *ctxt_mdscr_el1(ctxt) =3D read_sysreg(mdscr_el1); - - // POR_EL0 can affect uaccess, so must be saved/restored early. 
- if (ctxt_has_s1poe(ctxt)) - ctxt_sys_reg(ctxt, POR_EL0) =3D read_sysreg_s(SYS_POR_EL0); -} - -static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) -{ - ctxt_sys_reg(ctxt, TPIDR_EL0) =3D read_sysreg(tpidr_el0); - ctxt_sys_reg(ctxt, TPIDRRO_EL0) =3D read_sysreg(tpidrro_el0); -} - static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt) { struct kvm_vcpu *vcpu =3D ctxt_to_vcpu(ctxt); @@ -131,6 +88,47 @@ static inline bool ctxt_has_sctlr2(struct kvm_cpu_conte= xt *ctxt) return kvm_has_sctlr2(kern_hyp_va(vcpu->kvm)); } =20 +static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) +{ + return host_data_ptr(host_ctxt) !=3D ctxt; +} + +static inline u64 *ctxt_mdscr_el1(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu =3D ctxt_to_vcpu(ctxt); + + if (ctxt_is_guest(ctxt) && kvm_host_owns_debug_regs(vcpu)) + return &vcpu->arch.external_mdscr_el1; + + return &ctxt_sys_reg(ctxt, MDSCR_EL1); +} + +static inline u64 ctxt_midr_el1(struct kvm_cpu_context *ctxt) +{ + struct kvm *kvm =3D kern_hyp_va(ctxt_to_vcpu(ctxt)->kvm); + + if (!(ctxt_is_guest(ctxt) && + test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &kvm->arch.flags))) + return read_cpuid_id(); + + return kvm_read_vm_id_reg(kvm, SYS_MIDR_EL1); +} + +static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) +{ + *ctxt_mdscr_el1(ctxt) =3D read_sysreg(mdscr_el1); + + // POR_EL0 can affect uaccess, so must be saved/restored early. 
+ if (ctxt_has_s1poe(ctxt)) + ctxt_sys_reg(ctxt, POR_EL0) =3D read_sysreg_s(SYS_POR_EL0); +} + +static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) +{ + ctxt_sys_reg(ctxt, TPIDR_EL0) =3D read_sysreg(tpidr_el0); + ctxt_sys_reg(ctxt, TPIDRRO_EL0) =3D read_sysreg(tpidrro_el0); +} + static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, SCTLR_EL1) =3D read_sysreg_el1(SYS_SCTLR); --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7D525401491; Fri, 6 Mar 2026 17:09:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772816996; cv=none; b=AFcCMF0OiLYU81ia5IepJiMxxbxBgnVaRzTjUtpRNITWLUZbqK8r3DlHHQQTMTe1ae0Sulotb56bHXma46ccxN4OrWBfYnr+dkbKepeU46QDmi/ClIb2BQRXMZbVP6A1ulLZ+NbKmi8ujmBbV4wjSjulUn7Ha65nUYdItx6M6NU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772816996; c=relaxed/simple; bh=WVinD+PZg5gNJJmXAaqYdYZjNnA9vJnAHHaWmmyoiA0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=ufvVnjQqchVMyyNy0RPoQZjB33sDqhe4Oc3rCRYKUHijlzYnnU6dMv6WtKChsg2Nx+6MvgXrP9IwRnxg+/nhTMlu/5J6SnRGO5zOx06Amfdsjf3z2W+ENJdYpbXtLT898MhOuvRyTkMBKBeOuHfMufzTADTFUwj1bUjmUrkR4bs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=l+SNf2LJ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="l+SNf2LJ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 245DCC2BC86; Fri, 6 Mar 2026 17:09:51 +0000 (UTC) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772816996; bh=WVinD+PZg5gNJJmXAaqYdYZjNnA9vJnAHHaWmmyoiA0=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=l+SNf2LJKvzXJvTAXxdWC2sMdkaz+ozbrbV8vW5MZT3te7IMAzl5YxKu/5MeWgrZh P+aMITHI6G37NIkM4Pwgf5eH2MWibKf25naqVNY0m7fI4VQ78C87GBWdrwM7UY5buR ItLKDJ6skZIhcI8lJ2Y+hvPOPAKoPmy0IhHarctHEQp+7XfVS7IpryXVq2GUnRw4nq EFvaL8YczoFxOYmi/bn204pfh5Lx/4/r4t/ZsAxgQS64dRaAb5KXcpIx9Vy8zMsH21 ffrAfYm7yfqdlAOegtqJB7pYFJaRKc3zLAq+yUNjNEsz/H9BAN6vjd2fsTE432v96w PyAfbrTDiE4fw== From: Mark Brown Date: Fri, 06 Mar 2026 17:00:59 +0000 Subject: [PATCH v10 07/30] KVM: arm64: Move SVE state access macros after feature test macros Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-7-43f7683a0fb7@kernel.org> References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=2766; i=broonie@kernel.org; h=from:subject:message-id; bh=WVinD+PZg5gNJJmXAaqYdYZjNnA9vJnAHHaWmmyoiA0=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwouL/LMzLhb7e8WoxtZwIEbzOl4rfVnCNAcj 9NpObJy/I6JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKLgAKCRAk1otyXVSH 0J7KB/9cDJ3yEZzWS61XCor71OYqImPMtHbcwcGrAwSt9+PIEB0J4xMggs8anSvNbOjxmJ6vcm6 
uDYL08igkJwXWVPCWsdyxDSaVPLAN5E1NogM95HuOCD4ZXncP817BevI6naP6Vmu6/hqM7z4PUe cNlqLYM2JTpPgRch0cQbp/pZNmutdQDMEMoLqSbZfdM2qGRLdrMZ4MI3MloXaaUB0vGTUzgFjAp kcz/QCklxqYhptf4b0um1WVDdCdRL91EypKUhor+zsxqtdccd9JOydp2mxN50M/aOCe03cYM6Ws gdm17lBZmTgbHJKpLgKxHdK+yasjHF/3XVmSYLZ2HSvL6+Ys X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB In preparation for SME support move the macros used to access SVE state after the feature test macros, we will need to test for SME subfeatures to determine the size of the SME state. Reviewed-by: Fuad Tabba Signed-off-by: Mark Brown Reviewed-by: Jean-Philippe Brucker --- arch/arm64/include/asm/kvm_host.h | 50 +++++++++++++++++++----------------= ---- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 2ca264b3db5f..3e7247b3890c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1072,31 +1072,6 @@ struct kvm_vcpu_arch { #define NESTED_SERROR_PENDING __vcpu_single_flag(sflags, BIT(8)) =20 =20 -/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ -#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) - -#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) - -#define vcpu_sve_zcr_elx(vcpu) \ - (unlikely(is_hyp_ctxt(vcpu)) ? 
ZCR_EL2 : ZCR_EL1) - -#define sve_state_size_from_vl(sve_max_vl) ({ \ - size_t __size_ret; \ - unsigned int __vq; \ - \ - if (WARN_ON(!sve_vl_valid(sve_max_vl))) { \ - __size_ret =3D 0; \ - } else { \ - __vq =3D sve_vq_from_vl(sve_max_vl); \ - __size_ret =3D SVE_SIG_REGS_SIZE(__vq); \ - } \ - \ - __size_ret; \ -}) - -#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_= max_vl) - #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \ KVM_GUESTDBG_USE_SW_BP | \ KVM_GUESTDBG_USE_HW | \ @@ -1132,6 +1107,31 @@ struct kvm_vcpu_arch { =20 #define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs) =20 +/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ +#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ + sve_ffr_offset((vcpu)->arch.sve_max_vl)) + +#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) + +#define vcpu_sve_zcr_elx(vcpu) \ + (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) + +#define sve_state_size_from_vl(sve_max_vl) ({ \ + size_t __size_ret; \ + unsigned int __vq; \ + \ + if (WARN_ON(!sve_vl_valid(sve_max_vl))) { \ + __size_ret =3D 0; \ + } else { \ + __vq =3D sve_vq_from_vl(sve_max_vl); \ + __size_ret =3D SVE_SIG_REGS_SIZE(__vq); \ + } \ + \ + __size_ret; \ +}) + +#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_= max_vl) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CA9E53FD13C; Fri, 6 Mar 2026 17:10:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817000; 
cv=none; b=PIr8VlP80Jg04cuPz0mwfNcORXCY/w85QVJVj+XrLcA4DFY4HM+qr06SVGm45/+X+cAzl4JAsB9qsZDIjNllPQ6tBhF25qMj/cw/X/hZqP5iur+pWwogjFcayrEmKCHDMTXcCMoWf00lxM/hmC6QhJA2uak28zp9sGcQQVSm2+o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817000; c=relaxed/simple; bh=f0y3ZrqdfWVPI/Jsnnhbn6AtnKCJ+CtCh3h35iuBQCM=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=L9OFcKCiAKbGEs2nvUmZYYN/5Xb0i3/MDw7wMyA2mJPI7mE/Poa5Z4ObHewQsKufS/APYwzDkG5xXlDHAimgoewhAnGr/onrvTTCOHWeG4H3OQyfdmAbYxa+TKOSwVs84/jwNW1+Jd9KWNQDSmO0Kp+Dt98EQE/pl913oXBRfg0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=YkRgzD6b; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="YkRgzD6b" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 891A8C4CEF7; Fri, 6 Mar 2026 17:09:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817000; bh=f0y3ZrqdfWVPI/Jsnnhbn6AtnKCJ+CtCh3h35iuBQCM=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=YkRgzD6b7pjsL/uTiNW3Fh+qFqxXf59o0kHrEx8XiLvkAUbf3t5oCySw/hK47y2O0 Ta/UqJP3exDa2aGUB78euP4xSKwHhbQgolgaI1A/4C0SJwc38L3JqAx7Du1DEd959p F5VoDpkgb7D/3UDrdb29/B1jMK0fG4bErfUNihED5FAtlLghFbBvuH9ETSrBHM6GYL ssR5cWJLG+rsLl6jzFQ5Hxv2hr0L0cXvX384Hxe0UvAckU8N8OLy5Ja8qrLLAlQZLu KawzRsw668pkwegd+MiZS8ZltcNe4Ps8QN8Ej+KZm6HSsqoF2Rc4v/y23AQuToyvQ5 mzUNZmwqGYKFQ== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:00 +0000 Subject: [PATCH v10 08/30] KVM: arm64: Rename SVE finalization constants to be more general Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: 
<20260306-kvm-arm64-sme-v10-8-43f7683a0fb7@kernel.org> References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=7266; i=broonie@kernel.org; h=from:subject:message-id; bh=f0y3ZrqdfWVPI/Jsnnhbn6AtnKCJ+CtCh3h35iuBQCM=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwou1FPKpniXenBABFoKrKVWGqIIrJo4jB4G6 54/77FT4xaJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKLgAKCRAk1otyXVSH 0B3LB/wLbXV1VxRQ/1il2H8ETlbxoETQ0GEdfSBbiK8fjb6jP4TUqxpH0mGi1MDLrbHGJCJ2fXe MaHMZB098RuCCB9z0u8G4pffojlkGDkZZLhUkySfIwyE73GEagcdDhMHXC9E3IHbJen6pZo9er5 +Lw6qsLn0+0YXdIZBt5Kl0t91ZT0kOj3v7zN4svOIp8vuDjYwntIjJduw77vvzrAZoxjZ49pX4k /e3AI/3pU3KcD2INMD5pl7LI1CZ8SF9Ieex3G6o0tta/qXZsf81jWBDkq7uKF3m+20kX381ed1j rlGc/nqDi0g0nFAWcahENLRydBhjy7utulis9o9OrNgBO9vB X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Due to the overlap between SVE and SME vector length configuration created by streaming mode SVE we will finalize both at once. Rename the existing finalization to use _VEC (vector) for the naming to avoid confusion. Since this includes the userspace API we create an alias KVM_ARM_VCPU_VEC for the existing KVM_ARM_VCPU_SVE capability, existing code which does not enable SME will be unaffected and any SME only code will not need to use SVE constants. No functional change. 
Reviewed-by: Fuad Tabba Signed-off-by: Mark Brown Reviewed-by: Jean-Philippe Brucker --- arch/arm64/include/asm/kvm_host.h | 8 +++++--- arch/arm64/include/uapi/asm/kvm.h | 6 ++++++ arch/arm64/kvm/guest.c | 10 +++++----- arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +- arch/arm64/kvm/reset.c | 20 ++++++++++---------- 5 files changed, 27 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 3e7247b3890c..656464179ba8 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1012,8 +1012,8 @@ struct kvm_vcpu_arch { =20 /* KVM_ARM_VCPU_INIT completed */ #define VCPU_INITIALIZED __vcpu_single_flag(cflags, BIT(0)) -/* SVE config completed */ -#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1)) +/* Vector config completed */ +#define VCPU_VEC_FINALIZED __vcpu_single_flag(cflags, BIT(1)) /* pKVM VCPU setup completed */ #define VCPU_PKVM_FINALIZED __vcpu_single_flag(cflags, BIT(2)) =20 @@ -1086,6 +1086,8 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif =20 +#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) + #ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || \ @@ -1482,7 +1484,7 @@ struct kvm *kvm_arch_alloc_vm(void); int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); =20 -#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINA= LIZED) +#define kvm_arm_vcpu_vec_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_VEC_FINA= LIZED) =20 #define kvm_has_mte(kvm) \ (system_supports_mte() && \ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/as= m/kvm.h index a792a599b9d6..c67564f02981 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -107,6 +107,12 @@ struct kvm_regs { #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ #define 
KVM_ARM_VCPU_HAS_EL2_E2H0 8 /* Limit NV support to E2H RES0 */ =20 +/* + * An alias for _SVE since we finalize VL configuration for both SVE and S= ME + * simultaneously. + */ +#define KVM_ARM_VCPU_VEC KVM_ARM_VCPU_SVE + struct kvm_vcpu_init { __u32 target; __u32 features[7]; diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 1c87699fd886..d15aa2da1891 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -342,7 +342,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT; =20 - if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; /* too late! */ =20 if (WARN_ON(vcpu->arch.sve_state)) @@ -497,7 +497,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (ret) return ret; =20 - if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, @@ -523,7 +523,7 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (ret) return ret; =20 - if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, @@ -599,7 +599,7 @@ static unsigned long num_sve_regs(const struct kvm_vcpu= *vcpu) return 0; =20 /* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); =20 return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */) + 1; /* KVM_REG_ARM64_SVE_VLS */ @@ -617,7 +617,7 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *= vcpu, return 0; =20 /* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); =20 /* * Enumerate this first, so that userspace can save/restore in diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c 
index 2f029bfe4755..24acbe5594e2 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -445,7 +445,7 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp= _vcpu, struct kvm_vcpu *h int ret =3D 0; =20 if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) { - vcpu_clear_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_clear_flag(vcpu, VCPU_VEC_FINALIZED); return 0; } =20 diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 959532422d3a..f7c63e145d54 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -92,7 +92,7 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) * Finalize vcpu's maximum SVE vector length, allocating * vcpu->arch.sve_state as necessary. */ -static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) +static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { void *buf; unsigned int vl; @@ -122,21 +122,21 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcp= u) } =09 vcpu->arch.sve_state =3D buf; - vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; } =20 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) { switch (feature) { - case KVM_ARM_VCPU_SVE: - if (!vcpu_has_sve(vcpu)) + case KVM_ARM_VCPU_VEC: + if (!vcpu_has_vec(vcpu)) return -EINVAL; =20 - if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 - return kvm_vcpu_finalize_sve(vcpu); + return kvm_vcpu_finalize_vec(vcpu); } =20 return -EINVAL; @@ -144,7 +144,7 @@ int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int fe= ature) =20 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) { - if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu)) + if (vcpu_has_vec(vcpu) && !kvm_arm_vcpu_vec_finalized(vcpu)) return false; =20 return true; @@ -163,7 +163,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) kfree(vcpu->arch.ccsidr); } =20 -static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) +static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { 
if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); @@ -203,11 +203,11 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (loaded) kvm_arch_vcpu_put(vcpu); =20 - if (!kvm_arm_vcpu_sve_finalized(vcpu)) { + if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); } else { - kvm_vcpu_reset_sve(vcpu); + kvm_vcpu_reset_vec(vcpu); } =20 if (vcpu_el1_is_32bit(vcpu)) --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7888C41324E; Fri, 6 Mar 2026 17:10:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817005; cv=none; b=NdMm+ESz2T6fQ3jtDKM+fFyfPRPW+uzZ02I57FTXlRXwXjCDf3ducchh2UTvDOD+LVYeSa0BjYudduwLSGB8F9g7juQp8qkIUED6OK6ehgpG3m007Rv7CCHKUkYRcMlwsWjj7+1dH1nb/Hm0q72hC7gsXQuir38fEGfoY2jYXFc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817005; c=relaxed/simple; bh=BKO7iJk2IhyUeOcAxAzJmMDV35mroMAKYnKUOQP13Fg=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=d49C//mIqtc1NJ6DggQ4Pa7I9Wo1Zz1f2OI2u7ul1NDJ465DYt61PQBly/F5e6C845xv42TRyw9L5dCoxf8FvaEAA9UWWp12SRZFHrG6WHHvv08iJW9R+oTU7q0vjLne+pP0ueCh2bon+WTYtLuLPcLzzuda/NGRC9lmUCe2qTc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=MG50ustp; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="MG50ustp" Received: by smtp.kernel.org (Postfix) with ESMTPSA id EB07EC4CEF7; Fri, 6 Mar 2026 17:10:00 +0000 
(UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817005; bh=BKO7iJk2IhyUeOcAxAzJmMDV35mroMAKYnKUOQP13Fg=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=MG50ustpwFvAzXQGd6PIyWdUQ1JjYf+8iyfc39XPB/5PeuukLDErjQPbx9CXUc++f QwmpTbfk3jTfCW9UouegSufW5mbxbfdvM/VO/QpTN29279upIjNrQdBGdTA3K/tmoF FwTSLZllP04HGT0DEU2dVpA21bbbcwrsBV7RwR5IwUNhWG1zHgA/Ce4saQJiGUC/mv bWDoJ8sTrf51Ij/w351vm/2E9kKDr7OMJzL6K6MYbRPenKQTR5mQFCjpc5JkhZhKlW p9JUII/LPB+9t3cA0GmrgPkPuPLabb059Ccdg6BG00WMLCFVavDkDO37iSVHuUilxT tQ/Hv/LH//7wg== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:01 +0000 Subject: [PATCH v10 09/30] KVM: arm64: Define internal features for SME Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-9-43f7683a0fb7@kernel.org> References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=3307; i=broonie@kernel.org; h=from:subject:message-id; bh=BKO7iJk2IhyUeOcAxAzJmMDV35mroMAKYnKUOQP13Fg=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwovGjatnkSdoMG1R7IqeeLHYVyk4vOq/u0bv 10T1mcq7eaJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKLwAKCRAk1otyXVSH 0PrrB/9Hg8AGVXjAruDGgoXORBmhD7QMKXYrUDcD4+9YqTX1wkK6LYvmJgAZfl/sZ0mHV8RsAkw 3ebwtOUYs0sbSxewQPIqq2jCubvEfdnPKeqNRavQFEjKDqit4frkvcEeFCoZYuVCBUcK9Ybk6wb 
QzHyZ9vwfORoX7sIKQsjk1B9rLnyZakOQC+Z8RmldljooFyMh2r2i4OdRAhBEIK11JR24Chj4ie 9uipE0mCRmBVr6uglVJjXmT3uCxOilPoTNR7Z+sAfFwsBVUXa5uTjp12vQPc/c76BB6P3D7TUuv Kb8wyVDEooOzlOV4F2Tc3NYQB03qfbmtbiLYqIa5Hgn/IxQy X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB In order to simplify interdependencies in the rest of the series define the feature detection for SME and its subfeatures. Due to the need for vector length configuration we define a flag for SME like for SVE. We also have two subfeatures which add architectural state, FA64 and SME2, which are configured via the normal ID register scheme. Also provide helpers which check if the vCPU is in streaming mode or has ZA enabled. Reviewed-by: Fuad Tabba Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 35 ++++++++++++++++++++++++++++++++++- arch/arm64/kvm/sys_regs.c | 2 +- 2 files changed, 35 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 656464179ba8..906dbefc5b33 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -353,6 +353,8 @@ struct kvm_arch { #define KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS 10 /* Unhandled SEAs are taken to userspace */ #define KVM_ARCH_FLAG_EXIT_SEA 11 + /* SME exposed to guest */ +#define KVM_ARCH_FLAG_GUEST_HAS_SME 12 unsigned long flags; =20 /* VM-wide vCPU feature set */ @@ -1086,7 +1088,16 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif =20 -#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) +#define kvm_has_sme(kvm) (system_supports_sme() && \ + test_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &(kvm)->arch.flags)) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme(vcpu) kvm_has_sme(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme(vcpu) kvm_has_sme((vcpu)->kvm) +#endif + +#define vcpu_has_vec(vcpu) (vcpu_has_sve(vcpu) || vcpu_has_sme(vcpu)) =20 #ifdef CONFIG_ARM64_PTR_AUTH #define 
vcpu_has_ptrauth(vcpu) \ @@ -1627,6 +1638,28 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64= val); #define kvm_has_sctlr2(k) \ (kvm_has_feat((k), ID_AA64MMFR3_EL1, SCTLRX, IMP)) =20 +#define kvm_has_fa64(k) \ + (system_supports_fa64() && \ + kvm_has_feat((k), ID_AA64SMFR0_EL1, FA64, IMP)) + +#define kvm_has_sme2(k) \ + (system_supports_sme2() && \ + kvm_has_feat((k), ID_AA64PFR1_EL1, SME, SME2)) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme2(vcpu) kvm_has_sme2(kern_hyp_va((vcpu)->kvm)) +#define vcpu_has_fa64(vcpu) kvm_has_fa64(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme2(vcpu) kvm_has_sme2((vcpu)->kvm) +#define vcpu_has_fa64(vcpu) kvm_has_fa64((vcpu)->kvm) +#endif + +#define vcpu_in_streaming_mode(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_SM_MASK) + +#define vcpu_za_enabled(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_ZA_MASK) + static inline bool kvm_arch_has_irq_bypass(void) { return true; diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1b4cacb6e918..f94fe57adcad 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1948,7 +1948,7 @@ static unsigned int sve_visibility(const struct kvm_v= cpu *vcpu, static unsigned int sme_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { - if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, IMP)) + if (vcpu_has_sme(vcpu)) return 0; =20 return REG_HIDDEN; --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 801CD3A1E96; Fri, 6 Mar 2026 17:10:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817009; cv=none; 
b=rBVeO3b54FC3047RGM+oaXAF4dmZWDiUM8G9PkOlEKg+SghrqXJrzBUy0/7QT0m1lkj6BA+E/k7aR6XmdGkOhQtPqzxkQheSX637eSr8sqLnR0pxGI7UKwtDX+J6miNJczVnlspOlQQclq2YP3VFjjPsO91PG8Gg84KcCQGR4Zc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817009; c=relaxed/simple; bh=AeF1hwoxwxrE+FMHOeHAJ/eu1PU3teMN7UxNIXZFRu0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=ZlQA6M2onYqUbCukMswTg6lNkDBmi4CfDWIqU5aiRuTSKSuZvi20yGMnzvfyQCfq5tGIb1J9Iq7gyUuNvR9pjWCkmLWbOIZVqO5Xpda2U109RXgDHrOG/ptxl4hM56MXw9R1z3Znp7iBu/HxBjcKvfPldGZmeNHPRzczbA6eYMc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=r75vkUAu; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="r75vkUAu" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5FEE9C19425; Fri, 6 Mar 2026 17:10:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817009; bh=AeF1hwoxwxrE+FMHOeHAJ/eu1PU3teMN7UxNIXZFRu0=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=r75vkUAu/Ivoim+qAOgFwBMiq6Degl3W4n8qhfLU1kdj4bUTnAO8wGqGwdYmIA1QI eTAB3nQ7tg8d34Z6XrC4Vox4AfPxC4zhYXFKXbY8Kg/AY0GGn/nVr53+EULqcHxC18 X8excN0n0yJTo8FXbnl8z0et54+0vX+A3SBWbzDkuc0CqFMkDdm+zuZNisTosFq5sI sQaoSsrtOmZ/poOHXoyVx2tIF62gIDwm+D4Uf3pauXkKhHQZeSv9V1nC21v9YW3HIl 0XmcnK46NoyyODn1+jGxr8q036LCzVg9kwc7rYv2nJ1i/LwHlupPAa2c438mym/GYj zemUZETwL8/rQ== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:02 +0000 Subject: [PATCH v10 10/30] KVM: arm64: Rename sve_state_reg_region Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-10-43f7683a0fb7@kernel.org> References: 
<20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=2396; i=broonie@kernel.org; h=from:subject:message-id; bh=AeF1hwoxwxrE+FMHOeHAJ/eu1PU3teMN7UxNIXZFRu0=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwowm0AizuLxb3k3mbTwFO0Nw0hKiV2IvV+IQ ZGhfsw3qniJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKMAAKCRAk1otyXVSH 0NRjB/9CvSUYqf+SB8M1EfZF9tK8iXowB2O5hAp5YQBqreL7Xpw4K+/+OuvVB8C24881Dy+mtlV secGqXbax1KsnZuQl/itCOrZdaaxp0n6ONiBptopP6OnpixecCDv3DEY2xtLZRj058MPZF5HUIS bMtIrerT/FkHNDM8oKgNxHufFUwhm4o9xWCyhTNX8cCs59HWUcUh/Exx0SxkljV3WOa795FZUGc MPThGNES8vTi0zf4cHfhVCs3tQ6H2cCPAhoUxDflvaLzph8WT9hKOXIiIWxnvf0wcdoZPt8p63i QCVKTkEBfc77YrmfIUp0Z+XbFH86IkuKKu19Yc+XLkXfgaaf X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB As for SVE we will need to pull parts of dynamically sized registers out of a block of memory for SME so we will use a similar code pattern for this. Rename the current struct sve_state_reg_region in preparation for this. No functional change. 
Reviewed-by: Fuad Tabba Signed-off-by: Mark Brown Reviewed-by: Jean-Philippe Brucker --- arch/arm64/kvm/guest.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index d15aa2da1891..8c3405b5d7b1 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -404,9 +404,9 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) */ #define vcpu_sve_slices(vcpu) 1 =20 -/* Bounds of a single SVE register slice within vcpu->arch.sve_state */ -struct sve_state_reg_region { - unsigned int koffset; /* offset into sve_state in kernel memory */ +/* Bounds of a single register slice within vcpu->arch.s[mv]e_state */ +struct vec_state_reg_region { + unsigned int koffset; /* offset into s[mv]e_state in kernel memory */ unsigned int klen; /* length in kernel memory */ unsigned int upad; /* extra trailing padding in user memory */ }; @@ -415,7 +415,7 @@ struct sve_state_reg_region { * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy */ -static int sve_reg_to_region(struct sve_state_reg_region *region, +static int sve_reg_to_region(struct vec_state_reg_region *region, struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { @@ -485,7 +485,7 @@ static int sve_reg_to_region(struct sve_state_reg_regio= n *region, static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; char __user *uptr =3D (char __user *)reg->addr; =20 /* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ @@ -511,7 +511,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; const char __user *uptr =3D (const char __user *)reg->addr; =20 /* Handle the 
KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 45E553A1E96; Fri, 6 Mar 2026 17:10:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817014; cv=none; b=qyLI3WJFi8qD2WYa4sQo5FZoeUXAWKh7dFO7cKBryfLJvESeFYwQTJhy+2wE/bEVneYvwLT+KEEZ2o2b1xG/qcmLsSHKGwrqC3bkb8OMwgLbEODuF7PvOdugb+PpFAZB536qbz4zmq71O+Qp6ItuHCZ9p0GkLfBu+ounOKLGkUk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817014; c=relaxed/simple; bh=mV429UbyX0eT0JR6MxU1c+Vka6FjmJ2aBYbT/XXqIw0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=oLYjt17jyaE2qrYZ5kcVY3I+Qpvl7eZpy7mYUWdrVdcp2kaHYeqY6H24XlS+qQwkB9AoivfmK9kjnNhg73bwY50EoN6aEu+5/UrWSNH9y28cl8nNLVRn9TgN4+zIimXNmw+5wnyyHhuPaIla5DTPgH7ED9syrxZkyLDieKTdx8Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=FK4h9xMT; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="FK4h9xMT" Received: by smtp.kernel.org (Postfix) with ESMTPSA id CC512C2BCB1; Fri, 6 Mar 2026 17:10:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817013; bh=mV429UbyX0eT0JR6MxU1c+Vka6FjmJ2aBYbT/XXqIw0=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=FK4h9xMTLsC15AbXBetMYo8mrYtXlzPOQL5rde2/9dFZ8VLnDtZ+w5j+39y7Jykik hNytATfEzttxgJW0d2j7Nwl26WQyufozmZWQyjSAzUxAL0WT1L1NFzRRd+Vf/OKV89 
3SV6uCTvlLwrwIr1P+G1Vx/OEhWk0ZE1/kIfH6LMA0a8HaTTwaK3A15683o1/Nic3u RErz8setMgpPXs1HgaWyAXqkm2Cog0e7+4378gE5TisDS3GP0UkrThTxupqCNz+Rvg ZRwzHDblvzuul6LyMSsQWJvmD3WhZ4zcnOKXxBeyx1+n+4zw/DyXK5J5ZXCipTG4qv S8QFeyWPtws2w== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:03 +0000 Subject: [PATCH v10 11/30] KVM: arm64: Store vector lengths in an array Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-11-43f7683a0fb7@kernel.org> References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=11836; i=broonie@kernel.org; h=from:subject:message-id; bh=mV429UbyX0eT0JR6MxU1c+Vka6FjmJ2aBYbT/XXqIw0=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwox9xfKnvf84ZWUmQo00+3QSYDjKTFI+lweZ I7TyarhJcCJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKMQAKCRAk1otyXVSH 0Kn+B/4iKod0ul0a7U8WDefbB4HxtL5LFWhsAomxJsaPhSZGKNYZCr1hqDy2hMLhNcnyo44RPML WXozoS8sC1Q1huFdFOFB9qINolGfF3GT7cIWF55W0oRkZfpbvxUvPXuyIZwgw5h4tyU6xNRUkWp +XTcR4oefkI3J6sN68BThKK4wJAlLGpBx//jvyCCOhJRXyUwlsckfw0tXEQUEl6cwZj5aU0M8ZR DTCSuvRSM5U+EurnLA25KqUB/2LUM6QsMX+6zk9a0WfQ8s2FLquKXnXuHlMzPzhX5xA9BX7ZQ27 5JpBsVVx2AOLMvsPA75IHh4kx/oGvrr6L0dVSH4Qp3arcbqD X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME adds a second vector length configured in a 
very similar way to the SVE vector length. In order to facilitate future code sharing for SME, refactor our storage of vector lengths to use an array like the host does. We do not yet take much advantage of this, so the intermediate code is not as clean as it might be. No functional change. Reviewed-by: Fuad Tabba Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 17 +++++++++++------ arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/include/asm/kvm_pkvm.h | 2 +- arch/arm64/kvm/fpsimd.c | 2 +- arch/arm64/kvm/guest.c | 6 +++--- arch/arm64/kvm/hyp/include/hyp/switch.h | 6 +++--- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 6 +++--- arch/arm64/kvm/hyp/nvhe/pkvm.c | 7 ++++--- arch/arm64/kvm/reset.c | 22 +++++++++++----------- 9 files changed, 38 insertions(+), 32 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 906dbefc5b33..3c30c1a70429 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -77,8 +77,10 @@ enum kvm_mode kvm_get_mode(void); static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; }; #endif =20 -extern unsigned int __ro_after_init kvm_sve_max_vl; -extern unsigned int __ro_after_init kvm_host_sve_max_vl; +extern unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX]; +extern unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; +DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); + int __init kvm_arm_init_sve(void); =20 u32 __attribute_const__ kvm_target_cpu(void); @@ -835,7 +837,7 @@ struct kvm_vcpu_arch { */ void *sve_state; enum fp_type fp_type; - unsigned int sve_max_vl; + unsigned int max_vl[ARM64_VEC_MAX]; =20 /* Stage 2 paging state used by the hardware on next switch */ struct kvm_s2_mmu *hw_mmu; @@ -1122,9 +1124,12 @@ struct kvm_vcpu_arch { =20 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - 
sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE])) + +#define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[typ= e]) + +#define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE) =20 -#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) =20 #define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) @@ -1143,7 +1148,7 @@ struct kvm_vcpu_arch { __size_ret; \ }) =20 -#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_= max_vl) +#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_= vl[ARM64_VEC_SVE]) =20 /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_= hyp.h index 76ce2b94bd97..0317790dd3b7 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -146,6 +146,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val); =20 extern unsigned long kvm_nvhe_sym(__icache_flags); extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits); -extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl); +extern unsigned int kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_MAX]); =20 #endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm= _pkvm.h index 757076ad4ec9..0805498e20c4 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -191,7 +191,7 @@ static inline size_t pkvm_host_sve_state_size(void) return 0; =20 return size_add(sizeof(struct cpu_sve_state), - SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl))); + SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]))); } =20 struct pkvm_mapping { diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 9158353d8be3..1f4fcc8b5554 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -75,7 +75,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) */ fp_state.st =3D &vcpu->arch.ctxt.fp_regs; 
fp_state.sve_state =3D vcpu->arch.sve_state; - fp_state.sve_vl =3D vcpu->arch.sve_max_vl; + fp_state.sve_vl =3D vcpu->arch.max_vl[ARM64_VEC_SVE]; fp_state.sme_state =3D NULL; fp_state.svcr =3D __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); fp_state.fpmr =3D __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 8c3405b5d7b1..456ef61b6ed5 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -318,7 +318,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT; =20 - if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) return -EINVAL; =20 memset(vqs, 0, sizeof(vqs)); @@ -356,7 +356,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (vq_present(vqs, vq)) max_vq =3D vq; =20 - if (max_vq > sve_vq_from_vl(kvm_sve_max_vl)) + if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE])) return -EINVAL; =20 /* @@ -375,7 +375,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) return -EINVAL; =20 /* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.sve_max_vl =3D sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[ARM64_VEC_SVE] =3D sve_vl_from_vq(max_vq); =20 return 0; } diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/i= nclude/hyp/switch.h index 2597e8bda867..4e38610be19a 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -456,8 +456,8 @@ static inline void __hyp_sve_save_host(void) struct cpu_sve_state *sve_state =3D *host_data_ptr(sve_state); =20 sve_state->zcr_el1 =3D read_sysreg_el1(SYS_ZCR); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl= ), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZC= R_EL2); + 
__sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[ARM= 64_VEC_SVE]), &sve_state->fpsr, true); } @@ -512,7 +512,7 @@ static inline void fpsimd_lazy_switch_to_host(struct kv= m_vcpu *vcpu) zcr_el2 =3D vcpu_sve_max_vq(vcpu) - 1; write_sysreg_el2(zcr_el2, SYS_ZCR); } else { - zcr_el2 =3D sve_vq_from_vl(kvm_host_sve_max_vl) - 1; + zcr_el2 =3D sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1; write_sysreg_el2(zcr_el2, SYS_ZCR); =20 zcr_el1 =3D vcpu_sve_max_vq(vcpu) - 1; diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index e7790097db93..f4da7a452964 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -34,7 +34,7 @@ static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true= ); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZC= R_EL2); } =20 static void __hyp_sve_restore_host(void) @@ -50,8 +50,8 @@ static void __hyp_sve_restore_host(void) * that was discovered, if we wish to use larger VLs this will * need to be revisited. */ - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max= _vl), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZC= R_EL2); + __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[= ARM64_VEC_SVE]), &sve_state->fpsr, true); write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR); diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 24acbe5594e2..399968cf570e 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -20,7 +20,7 @@ unsigned long __icache_flags; /* Used by kvm_get_vttbr(). 
*/ unsigned int kvm_arm_vmid_bits; =20 -unsigned int kvm_host_sve_max_vl; +unsigned int kvm_host_max_vl[ARM64_VEC_MAX]; =20 /* * The currently loaded hyp vCPU for each physical CPU. Used in protected = mode @@ -450,7 +450,8 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp= _vcpu, struct kvm_vcpu *h } =20 /* Limit guest vector length to the maximum supported by the host. */ - sve_max_vl =3D min(READ_ONCE(host_vcpu->arch.sve_max_vl), kvm_host_sve_ma= x_vl); + sve_max_vl =3D min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]), + kvm_host_max_vl[ARM64_VEC_SVE]); sve_state_size =3D sve_state_size_from_vl(sve_max_vl); sve_state =3D kern_hyp_va(READ_ONCE(host_vcpu->arch.sve_state)); =20 @@ -464,7 +465,7 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp= _vcpu, struct kvm_vcpu *h goto err; =20 vcpu->arch.sve_state =3D sve_state; - vcpu->arch.sve_max_vl =3D sve_max_vl; + vcpu->arch.max_vl[ARM64_VEC_SVE] =3D sve_max_vl; =20 return 0; err: diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index f7c63e145d54..a8684a1346ec 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -32,7 +32,7 @@ =20 /* Maximum phys_shift supported for any VM on this host */ static u32 __ro_after_init kvm_ipa_limit; -unsigned int __ro_after_init kvm_host_sve_max_vl; +unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; =20 /* * ARMv8 Reset Values @@ -46,14 +46,14 @@ unsigned int __ro_after_init kvm_host_sve_max_vl; #define VCPU_RESET_PSTATE_SVC (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \ PSR_AA32_I_BIT | PSR_AA32_F_BIT) =20 -unsigned int __ro_after_init kvm_sve_max_vl; +unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX]; =20 int __init kvm_arm_init_sve(void) { if (system_supports_sve()) { - kvm_sve_max_vl =3D sve_max_virtualisable_vl(); - kvm_host_sve_max_vl =3D sve_max_vl(); - kvm_nvhe_sym(kvm_host_sve_max_vl) =3D kvm_host_sve_max_vl; + kvm_max_vl[ARM64_VEC_SVE] =3D sve_max_virtualisable_vl(); + kvm_host_max_vl[ARM64_VEC_SVE] =3D sve_max_vl(); + 
kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SVE]) =3D kvm_host_max_vl[ARM64_V= EC_SVE]; =20 /* * The get_sve_reg()/set_sve_reg() ioctl interface will need @@ -61,16 +61,16 @@ int __init kvm_arm_init_sve(void) * order to support vector lengths greater than * VL_ARCH_MAX: */ - if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX)) - kvm_sve_max_vl =3D VL_ARCH_MAX; + if (WARN_ON(kvm_max_vl[ARM64_VEC_SVE] > VL_ARCH_MAX)) + kvm_max_vl[ARM64_VEC_SVE] =3D VL_ARCH_MAX; =20 /* * Don't even try to make use of vector lengths that * aren't available on all CPUs, for now: */ - if (kvm_sve_max_vl < sve_max_vl()) + if (kvm_max_vl[ARM64_VEC_SVE] < sve_max_vl()) pr_warn("KVM: SVE vector length for guests limited to %u bytes\n", - kvm_sve_max_vl); + kvm_max_vl[ARM64_VEC_SVE]); } =20 return 0; @@ -78,7 +78,7 @@ int __init kvm_arm_init_sve(void) =20 static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) { - vcpu->arch.sve_max_vl =3D kvm_sve_max_vl; + vcpu->arch.max_vl[ARM64_VEC_SVE] =3D kvm_max_vl[ARM64_VEC_SVE]; =20 /* * Userspace can still customize the vector lengths by writing @@ -99,7 +99,7 @@ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) size_t reg_sz; int ret; =20 - vl =3D vcpu->arch.sve_max_vl; + vl =3D vcpu->arch.max_vl[ARM64_VEC_SVE]; =20 /* * Responsibility for these properties is shared between --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6355C423175; Fri, 6 Mar 2026 17:10:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817018; cv=none; 
b=dRX3GVemHX46QfqZdiqeVOMOjhylxvkarwR7csUaoc8xYzu5qUWpkavXHc6aullWYuU0ShaeP7D0oWYMNeb7tLE1uU9hDcLogS4VZSUsoyqGu6ycSmhP7xTsQ2Gyjgtiv02om150QOf5zUpV6Ovm6dMj8zUbLPJhlXaRNl7ikWM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817018; c=relaxed/simple; bh=SitDXA7jb0eoqcSylSxaCT4VSP66th+kAnfyQu7Nc1I=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=Cefi0OxiaKbdUa5i/xdlHJPIVTgv+jMFLtXCzrVulf1mbP4cz6YVNrC2GVS6qig2Uv7EwqP5xydjbtbWS10f1c4/ya1DxiDws8+AIgN3Tnd3MJU6dXFgu4Y1ikpZyyYtAlwQ2uRS3RvlgHLMBD6t/FZGTAtxQNmBTiNx+oT3zZo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=fSrKxZ1m; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="fSrKxZ1m" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4396EC2BCB4; Fri, 6 Mar 2026 17:10:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817018; bh=SitDXA7jb0eoqcSylSxaCT4VSP66th+kAnfyQu7Nc1I=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=fSrKxZ1mn4TB92F4Z8i2TjQDAatmhXKaGzMsk10XHve+BT9Gc7pWwH6zjGaZjBJww 43yDwO0yAVKXa3jNyuKCXl/VFWAkVrIo6NvnRIxWwgyXsyfT+HhJB2DrYt8XGGNq1N 1lpw2HrdsBYxaUMSOSXEWPgd2mGuj6K/4chXuE/R4usQRpNeDXAF3fi9ZkQ5LtRtci CfLH7iL6jwEROuKO6qt4rprIgYKOe2+vSEy6oU+wzz8m9qMDOwmiO4PfjITY/KjwqM jJ/12ylm0toV/nHPoYj/Lhnq+eXUZROTKeEny8gTR5rGgfQXr9ONxHrSmXrQhPMOR5 6EgTziWKj5pLQ== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:04 +0000 Subject: [PATCH v10 12/30] KVM: arm64: Factor SVE code out of fpsimd_lazy_switch_to_host() Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-12-43f7683a0fb7@kernel.org> 
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=2820; i=broonie@kernel.org; h=from:subject:message-id; bh=SitDXA7jb0eoqcSylSxaCT4VSP66th+kAnfyQu7Nc1I=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwoxdwx4Vs/wp9I109pAepIICp+QdG104WOgX gOWCAZyCpKJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKMQAKCRAk1otyXVSH 0FTMB/9uU5zLItDNiBtK37m3TP/Yc+l7e1bbazjTwvbRJ6dAx2Z2GlrdJBP5i7GzM38ITDX6AxZ 8134rzDXSHwdwOX8vJhzS4bIPix2W6p/r8UUaGuPSemYfxfV2ldLLXY0VF+J2lT0c8fPQb6fPu7 Rht8Qdn1wN0vYofqvlIxC0yBZb7xtpOPTsENy+1ihB+xGhRglmF22fjIbfXkmCZl0/qa6Jj0sIB kLj58WvCeJX4kgBs9huxif4FY9sJpP/zQs16gUMuZnwfMhB4YN+d3KivkQAXkwYw9Qll9mycSXF YfeQIA9FmEsIJcliJxS/PjvyiFvSdjss6sUi/IY+fziQ/3JD X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Since the function will grow as a result of adding SME support move the SVE code out of fpsimd_lazy_switch_to_host(). No functional change, just code motion. 
Signed-off-by: Mark Brown Reviewed-by: Jean-Philippe Brucker --- arch/arm64/kvm/hyp/include/hyp/switch.h | 46 +++++++++++++++++++----------= ---- 1 file changed, 26 insertions(+), 20 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/i= nclude/hyp/switch.h index 4e38610be19a..5b99aa479c59 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -483,11 +483,11 @@ static inline void fpsimd_lazy_switch_to_guest(struct= kvm_vcpu *vcpu) } } =20 -static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) +static inline void sve_lazy_switch_to_host(struct kvm_vcpu *vcpu) { u64 zcr_el1, zcr_el2; =20 - if (!guest_owns_fp_regs()) + if (!vcpu_has_sve(vcpu)) return; =20 /* @@ -498,29 +498,35 @@ static inline void fpsimd_lazy_switch_to_host(struct = kvm_vcpu *vcpu) * synchronization event, we don't need an ISB here to avoid taking * traps for anything that was exposed to the guest. */ - if (vcpu_has_sve(vcpu)) { - zcr_el1 =3D read_sysreg_el1(SYS_ZCR); - __vcpu_assign_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu), zcr_el1); + zcr_el1 =3D read_sysreg_el1(SYS_ZCR); + __vcpu_assign_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu), zcr_el1); =20 - /* - * The guest's state is always saved using the guest's max VL. - * Ensure that the host has the guest's max VL active such that - * the host can save the guest's state lazily, but don't - * artificially restrict the host to the guest's max VL. - */ - if (has_vhe()) { - zcr_el2 =3D vcpu_sve_max_vq(vcpu) - 1; - write_sysreg_el2(zcr_el2, SYS_ZCR); - } else { - zcr_el2 =3D sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1; - write_sysreg_el2(zcr_el2, SYS_ZCR); + /* + * The guest's state is always saved using the guest's max VL. + * Ensure that the host has the guest's max VL active such + * that the host can save the guest's state lazily, but don't + * artificially restrict the host to the guest's max VL. 
+ */ + if (has_vhe()) { + zcr_el2 =3D vcpu_sve_max_vq(vcpu) - 1; + write_sysreg_el2(zcr_el2, SYS_ZCR); + } else { + zcr_el2 =3D sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1; + write_sysreg_el2(zcr_el2, SYS_ZCR); =20 - zcr_el1 =3D vcpu_sve_max_vq(vcpu) - 1; - write_sysreg_el1(zcr_el1, SYS_ZCR); - } + zcr_el1 =3D vcpu_sve_max_vq(vcpu) - 1; + write_sysreg_el1(zcr_el1, SYS_ZCR); } } =20 +static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) +{ + if (!guest_owns_fp_regs()) + return; + + sve_lazy_switch_to_host(vcpu); +} + static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu) { /* --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0EE9D40FDB7; Fri, 6 Mar 2026 17:10:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817023; cv=none; b=Zw+Yq4wdKUkQC3o1QpZtbcNyYZ5M622IgoaRvF/3f+jJbbc/Az08AUc0w0BkX5QXOdHrE8oO1mOX5bppYzW3GL07KXWT5LVcfoccx1hk2qjcAXMxrB7K6MWIISaua5W+xgWReedhd0YKsd0xbOppDI7DnAEzdMB4wjM4U8miugM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817023; c=relaxed/simple; bh=QfA3C9hX9svrq1a6kngrY8lUw+xs7XQfPovswqDFzt0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=V04BShNJG4Us4G8I4cmn+j7NQvMvHK54F/T702ewCCgS1MUtYe/6uGFVTz6J3WBJNVctYRNreb6oN2plWnOasRb/Pi0BIatY2aTlJuUySfVQL+d2Xvrsa8SmTagHDzfHgXgSe7FOKCJlOqIvcKS1ce9t3Dbhy5MuCHwLRHdptng= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=hc0EW6sr; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) 
header.d=kernel.org header.i=@kernel.org header.b="hc0EW6sr" Received: by smtp.kernel.org (Postfix) with ESMTPSA id AE19AC19425; Fri, 6 Mar 2026 17:10:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817022; bh=QfA3C9hX9svrq1a6kngrY8lUw+xs7XQfPovswqDFzt0=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=hc0EW6srRYmwa5hoe2O4Tc6N3rp6EOyon0oNfK5hpLVVlshOaZRzHDL4JYdcsPr+o qSIU72ISt866rFpvT9fGvru4tBr01LdWQUXFsr2lymdwzOPEhRpC3Zi2Y8QEldaJYE 1CEe859wquK6vXoGykatprPfOX+oN8Y50448LnwJPsTWz8zvyixZs2F0wQTXoI0R0y Aok/G/4RkrtRJ6ggTeGvtJxJOHLcSsUATR63pED7Psz/sfzIf2E+qmatuh8E6j+zJc gzRsGxQRNqnyszbJ+JEbp9z5osnioURP0LkjZfbquaB3BHeVgPKFiodGaZfoFOn611 CxCEe5PwzunOw== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:05 +0000 Subject: [PATCH v10 13/30] KVM: arm64: Document the KVM ABI for SME Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-13-43f7683a0fb7@kernel.org> References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=12325; i=broonie@kernel.org; h=from:subject:message-id; bh=QfA3C9hX9svrq1a6kngrY8lUw+xs7XQfPovswqDFzt0=; b=owGbwMvMwMWocq27KDak/QLjabUkhszVXEbfri5TTJrIN++/vmtL9YwPCx9UxRR1zlxx//LlP xv/e2u3dDIaszAwcjHIiimyrH2WsSo9XGLr/EfzX8EMYmUCmcLAxSkAE+m5zf6btbBET1Ti/ZH8 
iwru7/UideYnKYcIxTzPE1iX63vrnJO2/bzXespSaeZ1Vg31NsfcdvnNCVkWfmM5W/vB3Mhcj7e n09wyXdc4/P55c2O5rmpR+dzvzBbzpzjoMl0RWGttUeu2LDT+XYuC68XAmgqJxEZ/gb/G1RNbO6 wP5GTcN+UNUEpw0gs22374Vdbcf35cd4U6z2c/Zd1XlsjlrHDkp1Mqs32znb+G2efnF51XJhzLP x+n+XNG1P2dS81NtzKlrDwffop1Ce/XgwGh/cX5gbsc1DQ+xNwOPKr4oL/BmznW6HXz7HuF2yzs du5ymliXO9k2k3n9DBbOTqcbrz/emGP2NUl3f+HLrNs2AA== X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME, the Scalable Matrix Extension, is an arm64 extension which adds support for matrix operations, with core concepts patterned after SVE. SVE introduced some complication in the ABI since it adds new vector floating point registers with runtime configurable size, the size being controlled by a parameter called the vector length (VL). To provide control of this to VMMs we offer two phase configuration of SVE, SVE must first be enabled for the vCPU with KVM_ARM_VCPU_INIT(KVM_ARM_VCPU_SVE), after which vector length may then be configured but the configurably sized floating point registers are inaccessible until finalized with a call to KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE) after which the configurably sized registers can be accessed. SME introduces an additional independent configurable vector length which as well as controlling the size of the new ZA register also provides an alternative view of the configurably sized SVE registers (known as streaming mode) with the guest able to switch between the two modes as it pleases. There is also a fixed sized register ZT0 introduced in SME2. As well as streaming mode the guest may enable and disable ZA and (where SME2 is available) ZT0 dynamically independently of streaming mode. These modes are controlled via the system register SVCR. We handle the configuration of the vector length for SME in a similar manner to SVE, requiring initialization and finalization of the feature with a pseudo register controlling the available SME vector lengths as for SVE. 
Further, if the guest has both SVE and SME then finalizing one prevents further configuration of the vector length for the other. Where both SVE and SME are configured for the guest we present the SVE registers to userspace as having the maximum vector length of the currently active vector type as configured via SVCR.SM, imposing an ordering requirement on userspace. Userspace access to ZA and (if configured) ZT0 is only available when SVCR.ZA is 1. Reviewed-by: Fuad Tabba Signed-off-by: Mark Brown --- Documentation/virt/kvm/api.rst | 124 +++++++++++++++++++++++++++++--------= ---- 1 file changed, 89 insertions(+), 35 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 6f85e1b321dd..2ed08bd03a34 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -406,7 +406,7 @@ Errors: instructions from device memory (arm64) ENOSYS data abort outside memslots with no syndrome info and KVM_CAP_ARM_NISV_TO_USER not enabled (arm64) - EPERM SVE feature set but not finalized (arm64) + EPERM SVE or SME feature set but not finalized (arm64) =3D=3D=3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =20 This ioctl is used to run a guest virtual cpu. While there are no @@ -2605,11 +2605,11 @@ Specifically: =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =3D= =3D=3D=3D=3D=3D=3D=3D=3D =3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D =20 .. [1] These encodings are not accepted for SVE-enabled vcpus. See - :ref:`KVM_ARM_VCPU_INIT`. + :ref:`KVM_ARM_VCPU_INIT`. They are also not accepted when SME is + enabled without SVE and the vcpu is in streaming mode. 
 
        The equivalent register content can be accessed via bits [127:0] of
-       the corresponding SVE Zn registers instead for vcpus that have SVE
-       enabled (see below).
+       the corresponding SVE Zn registers in these cases (see below).
 
 arm64 CCSIDR registers are demultiplexed by CSSELR value::
 
@@ -2640,24 +2640,40 @@ arm64 SVE registers have the following bit patterns::
   0x6050 0000 0015 060<n> FFR bits[256*slice + 255 : 256*slice]
   0x6060 0000 0015 ffff   KVM_REG_ARM64_SVE_VLS pseudo-register
 
-Access to register IDs where 2048 * slice >= 128 * max_vq will fail with
-ENOENT. max_vq is the vcpu's maximum supported vector length in 128-bit
-quadwords: see [2]_ below.
+arm64 SME registers have the following bit patterns::
 
-These registers are only accessible on vcpus for which SVE is enabled.
+  0x6080 0000 0017 00<n> ZA.H[n] bits[2048*slice + 2047 : 2048*slice]
+  0x6060 0000 0017 0100  ZT0
+  0x6060 0000 0017 fffe  KVM_REG_ARM64_SME_VLS pseudo-register
+
+Access to Z, P, FFR or ZA register IDs where 2048 * slice >= 128 *
+max_vq will fail with ENOENT. max_vq is the vcpu's current maximum
+supported vector length in 128-bit quadwords: see [2]_ below.
+
+Changing the value of SVCR.SM will result in the contents of
+the Z, P and FFR registers being reset to 0. When restoring the
+values of these registers for a VM with SME support it is
+important that SVCR.SM be configured first.
+
+Access to the ZA and ZT0 registers is only available if SVCR.ZA is set
+to 1.
+
+These registers are only accessible on vcpus for which SME is enabled.
 See KVM_ARM_VCPU_INIT for details.
 
-In addition, except for KVM_REG_ARM64_SVE_VLS, these registers are not
-accessible until the vcpu's SVE configuration has been finalized
-using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). See KVM_ARM_VCPU_INIT
-and KVM_ARM_VCPU_FINALIZE for more information about this procedure.
+In addition, except for KVM_REG_ARM64_SVE_VLS and
+KVM_REG_ARM64_SME_VLS, these registers are not accessible until the
+vcpu's SVE and SME configuration has been finalized using
+KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC). See KVM_ARM_VCPU_INIT and
+KVM_ARM_VCPU_FINALIZE for more information about this procedure.
 
-KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector
-lengths supported by the vcpu to be discovered and configured by
-userspace. When transferred to or from user memory via KVM_GET_ONE_REG
-or KVM_SET_ONE_REG, the value of this register is of type
-__u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the set of vector lengths as
-follows::
+KVM_REG_ARM64_SVE_VLS and KVM_REG_ARM64_SME_VLS are
+pseudo-registers that allow the set of vector lengths supported by
+the vcpu to be discovered and configured by userspace. When
+transferred to or from user memory via KVM_GET_ONE_REG or
+KVM_SET_ONE_REG, the value of each register is of type
+__u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the set of vector lengths
+as follows::
 
 	__u64 vector_lengths[KVM_ARM64_SVE_VLS_WORDS];
 
@@ -2669,19 +2685,25 @@ follows::
 	/* Vector length vq * 16 bytes not supported */
 
 .. [2] The maximum value vq for which the above condition is true is
-       max_vq. This is the maximum vector length available to the guest on
-       this vcpu, and determines which register slices are visible through
-       this ioctl interface.
+       max_vq. This is the maximum vector length currently available to
+       the guest on this vcpu, and determines which register slices are
+       visible through this ioctl interface.
+
+       If SME is supported then while SVCR.SM is 1 the max_vq used for
+       the Z and P registers will be the maximum SME vector length
+       max_vq_sme available to the guest, otherwise it will be the
+       maximum SVE vector length max_vq_sve available.
 
 (See Documentation/arch/arm64/sve.rst for an explanation of the "vq"
 nomenclature.)
 
-KVM_REG_ARM64_SVE_VLS is only accessible after KVM_ARM_VCPU_INIT.
-KVM_ARM_VCPU_INIT initialises it to the best set of vector lengths that
-the host supports.
+KVM_REG_ARM64_SVE_VLS and KVM_REG_ARM64_SME_VLS are only accessible
+after KVM_ARM_VCPU_INIT.  KVM_ARM_VCPU_INIT initialises them to the
+best set of vector lengths that the host supports.
 
-Userspace may subsequently modify it if desired until the vcpu's SVE
-configuration is finalized using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).
+Userspace may subsequently modify these registers if desired until the
+vcpu's SVE and SME configuration is finalized using
+KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC).
 
 Apart from simply removing all vector lengths from the host set that
 exceed some value, support for arbitrarily chosen sets of vector lengths
@@ -2689,8 +2711,8 @@ is hardware-dependent and may not be available.  Attempting to configure
 an invalid set of vector lengths via KVM_SET_ONE_REG will fail with
 EINVAL.
 
-After the vcpu's SVE configuration is finalized, further attempts to
-write this register will fail with EPERM.
+After the vcpu's SVE or SME configuration is finalized, further
+attempts to write these registers will fail with EPERM.
 
 arm64 bitmap feature firmware pseudo-registers have the following bit pattern::
 
@@ -3489,6 +3511,7 @@ The initial values are defined as:
 	- General Purpose registers, including PC and SP: set to 0
 	- FPSIMD/NEON registers: set to 0
 	- SVE registers: set to 0
+	- SME registers: set to 0
 	- System registers: Reset to their architecturally defined
 	  values as for a warm reset to EL1 (resp. SVC) or EL2 (in the
 	  case of EL2 being enabled).
@@ -3532,7 +3555,7 @@ Possible features:
 
 	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
 	  Depends on KVM_CAP_ARM_SVE.
-	  Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+	  Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
 
 	  * After KVM_ARM_VCPU_INIT:
 
@@ -3540,7 +3563,7 @@ Possible features:
 	       initial value of this pseudo-register indicates the best set of
 	       vector lengths possible for a vcpu on this host.
 
-	  * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+	  * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
 
 	    - KVM_RUN and KVM_GET_REG_LIST are not available;
 
@@ -3553,11 +3576,41 @@ Possible features:
 	       KVM_SET_ONE_REG, to modify the set of vector lengths available
 	       for the vcpu.
 
-	  * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+	  * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
 
 	    - the KVM_REG_ARM64_SVE_VLS pseudo-register is immutable, and can
 	      no longer be written using KVM_SET_ONE_REG.
 
+	- KVM_ARM_VCPU_SME: Enables SME for the CPU (arm64 only).
+	  Depends on KVM_CAP_ARM_SME.
+	  Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
+
+	  * After KVM_ARM_VCPU_INIT:
+
+	    - KVM_REG_ARM64_SME_VLS may be read using KVM_GET_ONE_REG: the
+	      initial value of this pseudo-register indicates the best set of
+	      vector lengths possible for a vcpu on this host.
+
+	  * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
+
+	    - KVM_RUN and KVM_GET_REG_LIST are not available;
+
+	    - KVM_GET_ONE_REG and KVM_SET_ONE_REG cannot be used to access
+	      the scalable architectural SVE registers
+	      KVM_REG_ARM64_SVE_ZREG(), KVM_REG_ARM64_SVE_PREG() or
+	      KVM_REG_ARM64_SVE_FFR, the matrix register
+	      KVM_REG_ARM64_SME_ZAHREG() or the LUT register
+	      KVM_REG_ARM64_SME_ZTREG();
+
+	    - KVM_REG_ARM64_SME_VLS may optionally be written using
+	      KVM_SET_ONE_REG, to modify the set of vector lengths available
+	      for the vcpu.
+
+	  * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
+
+	    - the KVM_REG_ARM64_SME_VLS pseudo-register is immutable, and can
+	      no longer be written using KVM_SET_ONE_REG.
+
 	- KVM_ARM_VCPU_HAS_EL2: Enable Nested Virtualisation support,
 	  booting the guest from EL2 instead of EL1.
 	  Depends on KVM_CAP_ARM_EL2.
@@ -5142,11 +5195,12 @@ Errors:
 
 Recognised values for feature:
 
- ===== ===========================================
- arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE)
- ===== ===========================================
+ ===== ===============================================================
+ arm64 KVM_ARM_VCPU_VEC (requires KVM_CAP_ARM_SVE or KVM_CAP_ARM_SME)
+ arm64 KVM_ARM_VCPU_SVE (alias for KVM_ARM_VCPU_VEC)
+ ===== ===============================================================
 
-Finalizes the configuration of the specified vcpu feature.
+Finalizes the configuration of the specified vcpu features.
 
 The vcpu must already have been initialised, enabling the affected feature,
 by means of a successful :ref:`KVM_ARM_VCPU_INIT` call with the

--
2.47.3
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:06 +0000
Subject: [PATCH v10 14/30] KVM: arm64: Implement SME vector length configuration
Message-Id: <20260306-kvm-arm64-sme-v10-14-43f7683a0fb7@kernel.org>
SME implements a vector length which architecturally looks very
similar to that for SVE, configured in a very similar manner. This
controls the vector length used for the ZA matrix register, and for
the SVE vector and predicate registers when in streaming mode. The
only substantial difference is that, unlike SVE, the architecture does
not guarantee that any particular vector length will be implemented.

Configuration of SME vector lengths is done using a virtual register,
as for SVE; hook up the implementation for that virtual register.
Since we do not yet have support for any of the new SME registers,
stub register access functions are provided that only allow VL
configuration. These will be extended as the SME specific registers
are added, as was done for SVE.

Since vq_available() is currently only defined for CONFIG_SVE, add a
stub for builds where that is disabled.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/fpsimd.h   |  1 +
 arch/arm64/include/asm/kvm_host.h | 25 +++++++++++--
 arch/arm64/include/uapi/asm/kvm.h |  9 +++++
 arch/arm64/kvm/guest.c            | 79 +++++++++++++++++++++++++++++--------
 4 files changed, 94 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 0cd8a866e844..05566bbfa4d4 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -342,6 +342,7 @@ static inline int sve_max_vl(void)
 	return -EINVAL;
 }
 
+static inline bool vq_available(enum vec_type type, unsigned int vq) { return false; }
 static inline bool sve_vq_available(unsigned int vq) { return false; }
 
 static inline void sve_user_disable(void) { BUILD_BUG(); }

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3c30c1a70429..fe663d0772dc 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -834,8 +834,15 @@ struct kvm_vcpu_arch {
 	 * low 128 bits of the SVE Z registers.
 	 * When the core floating point code saves the register state
 	 * of a task it records which view it saved in fp_type.
+	 *
+	 * If SME support is also present then it provides an
+	 * alternative view of the SVE registers accessed as for the Z
+	 * registers when PSTATE.SM is 1, plus an additional set of
+	 * SME specific state in the matrix register ZA and LUT
+	 * register ZT0.
 	 */
 	void *sve_state;
+	void *sme_state;
 	enum fp_type fp_type;
 	unsigned int max_vl[ARM64_VEC_MAX];

@@ -1122,14 +1129,24 @@ struct kvm_vcpu_arch {
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
 
-/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
-#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
-			     sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE]))
-
 #define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[type])
 
 #define vcpu_sve_max_vq(vcpu)	vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE)
+#define vcpu_sme_max_vq(vcpu)	vcpu_vec_max_vq(vcpu, ARM64_VEC_SME)
+
+#define vcpu_sve_max_vl(vcpu)	((vcpu)->arch.max_vl[ARM64_VEC_SVE])
+#define vcpu_sme_max_vl(vcpu)	((vcpu)->arch.max_vl[ARM64_VEC_SME])
 
+#define vcpu_max_vl(vcpu)	max(vcpu_sve_max_vl(vcpu), vcpu_sme_max_vl(vcpu))
+#define vcpu_max_vq(vcpu)	sve_vq_from_vl(vcpu_max_vl(vcpu))
+
+/* Current for the hypervisor */
+#define vcpu_cur_sve_vl(vcpu)	(vcpu_in_streaming_mode(vcpu) ?	\
+				 vcpu_sme_max_vl(vcpu) : vcpu_sve_max_vl(vcpu))
+
+/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
+#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
+			     sve_ffr_offset(vcpu_cur_sve_vl(vcpu)))
 
 #define vcpu_sve_zcr_elx(vcpu)						\
 	(unlikely(is_hyp_ctxt(vcpu)) ?
 ZCR_EL2 : ZCR_EL1)

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index c67564f02981..498a49a61487 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -354,6 +354,15 @@ struct kvm_arm_counter_offset {
 #define KVM_ARM64_SVE_VLS_WORDS	\
 	((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
 
+/* SME registers */
+#define KVM_REG_ARM64_SME		(0x17 << KVM_REG_ARM_COPROC_SHIFT)
+
+/* Vector lengths pseudo-register: */
+#define KVM_REG_ARM64_SME_VLS	(KVM_REG_ARM64 | KVM_REG_ARM64_SME | \
+					 KVM_REG_SIZE_U512 | 0xfffe)
+#define KVM_ARM64_SME_VLS_WORDS	\
+	((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
+
 /* Bitmap feature firmware registers */
 #define KVM_REG_ARM_FW_FEAT_BMAP		(0x0016 << KVM_REG_ARM_COPROC_SHIFT)
 #define KVM_REG_ARM_FW_FEAT_BMAP_REG(r)	(KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 456ef61b6ed5..9276054b5bdd 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -310,22 +310,20 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
 #define vq_present(vqs, vq) (!!((vqs)[vq_word(vq)] & vq_mask(vq)))
 
-static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+static int get_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu,
+		       const struct kvm_one_reg *reg)
 {
 	unsigned int max_vq, vq;
 	u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
 
-	if (!vcpu_has_sve(vcpu))
-		return -ENOENT;
-
-	if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE])))
+	if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type])))
 		return -EINVAL;
 
 	memset(vqs, 0, sizeof(vqs));
 
-	max_vq = vcpu_sve_max_vq(vcpu);
+	max_vq = vcpu_vec_max_vq(vcpu, vec_type);
 	for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
-		if (sve_vq_available(vq))
+		if (vq_available(vec_type, vq))
 			vqs[vq_word(vq)] |= vq_mask(vq);
 
 	if (copy_to_user((void
 __user *)reg->addr, vqs, sizeof(vqs)))

@@ -334,18 +332,16 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	return 0;
 }
 
-static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+static int set_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu,
+		       const struct kvm_one_reg *reg)
 {
 	unsigned int max_vq, vq;
 	u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
 
-	if (!vcpu_has_sve(vcpu))
-		return -ENOENT;
-
 	if (kvm_arm_vcpu_vec_finalized(vcpu))
 		return -EPERM; /* too late! */
 
-	if (WARN_ON(vcpu->arch.sve_state))
+	if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type])))
 		return -EINVAL;
 
 	if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs)))

@@ -356,18 +352,18 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (vq_present(vqs, vq))
 		max_vq = vq;
 
-	if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE]))
+	if (max_vq > sve_vq_from_vl(kvm_max_vl[vec_type]))
 		return -EINVAL;
 
 	/*
 	 * Vector lengths supported by the host can't currently be
 	 * hidden from the guest individually: instead we can only set a
-	 * maximum via ZCR_EL2.LEN. So, make sure the available vector
+	 * maximum via xCR_EL2.LEN.
 So, make sure the available vector
 	 * lengths match the set requested exactly up to the requested
 	 * maximum:
 	 */
 	for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
-		if (vq_present(vqs, vq) != sve_vq_available(vq))
+		if (vq_present(vqs, vq) != vq_available(vec_type, vq))
 			return -EINVAL;
 
 	/* Can't run with no vector lengths at all: */
@@ -375,11 +371,27 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 		return -EINVAL;
 
 	/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */
-	vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_vl_from_vq(max_vq);
+	vcpu->arch.max_vl[vec_type] = sve_vl_from_vq(max_vq);
 
 	return 0;
 }
 
+static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	if (!vcpu_has_sve(vcpu))
+		return -ENOENT;
+
+	return get_vec_vls(ARM64_VEC_SVE, vcpu, reg);
+}
+
+static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	if (!vcpu_has_sve(vcpu))
+		return -ENOENT;
+
+	return set_vec_vls(ARM64_VEC_SVE, vcpu, reg);
+}
+
 #define SVE_REG_SLICE_SHIFT	0
 #define SVE_REG_SLICE_BITS	5
 #define SVE_REG_ID_SHIFT	(SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS)

@@ -533,6 +545,39 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	return 0;
 }
 
+static int get_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	if (!vcpu_has_sme(vcpu))
+		return -ENOENT;
+
+	return get_vec_vls(ARM64_VEC_SME, vcpu, reg);
+}
+
+static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	if (!vcpu_has_sme(vcpu))
+		return -ENOENT;
+
+	return set_vec_vls(ARM64_VEC_SME, vcpu, reg);
+}
+
+static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	/* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */
+	if (reg->id == KVM_REG_ARM64_SME_VLS)
+		return get_sme_vls(vcpu, reg);
+
+	return -EINVAL;
+}
+
+static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	/* Handle the
 KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */
+	if (reg->id == KVM_REG_ARM64_SME_VLS)
+		return set_sme_vls(vcpu, reg);
+
+	return -EINVAL;
+}
+
 int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
 	return -EINVAL;

@@ -711,6 +756,7 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	case KVM_REG_ARM_FW_FEAT_BMAP:
 		return kvm_arm_get_fw_reg(vcpu, reg);
 	case KVM_REG_ARM64_SVE:	return get_sve_reg(vcpu, reg);
+	case KVM_REG_ARM64_SME:	return get_sme_reg(vcpu, reg);
 	}
 
 	return kvm_arm_sys_reg_get_reg(vcpu, reg);
@@ -728,6 +774,7 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	case KVM_REG_ARM_FW_FEAT_BMAP:
 		return kvm_arm_set_fw_reg(vcpu, reg);
 	case KVM_REG_ARM64_SVE:	return set_sve_reg(vcpu, reg);
+	case KVM_REG_ARM64_SME:	return set_sme_reg(vcpu, reg);
 	}
 
 	return kvm_arm_sys_reg_set_reg(vcpu, reg);

--
2.47.3
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:07 +0000
Subject: [PATCH v10 15/30] KVM: arm64: Support SME control registers
Message-Id: <20260306-kvm-arm64-sme-v10-15-43f7683a0fb7@kernel.org>
SME is configured by the system registers SMCR_EL1 and SMCR_EL2; add
definitions and userspace access for them. These control the SME
vector length in a manner similar to that for SVE, and also have
feature enable bits for SME2 and FA64. A subsequent patch will add
management of them for guests as part of the general floating point
context switch, as is done for the equivalent SVE registers.
Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_emulate.h  | 14 ++++++++++++
 arch/arm64/include/asm/kvm_host.h     |  2 ++
 arch/arm64/include/asm/vncr_mapping.h |  1 +
 arch/arm64/kvm/sys_regs.c             | 42 ++++++++++++++++++++++++++++++++-
 4 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 5bf3d7e1d92c..7a11dd7d554c 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -89,6 +89,14 @@ static inline void kvm_inject_nested_sve_trap(struct kvm_vcpu *vcpu)
 	kvm_inject_nested_sync(vcpu, esr);
 }
 
+static inline void kvm_inject_nested_sme_trap(struct kvm_vcpu *vcpu)
+{
+	u64 esr = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_SME) |
+		  ESR_ELx_IL;
+
+	kvm_inject_nested_sync(vcpu, esr);
+}
+
 #if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__)
 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
@@ -688,4 +696,10 @@ static inline void vcpu_set_hcrx(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcrx_el2 |= HCRX_EL2_EnASR;
 	}
 }
+
+static inline bool guest_hyp_sme_traps_enabled(const struct kvm_vcpu *vcpu)
+{
+	return __guest_hyp_cptr_xen_trap_enabled(vcpu, SMEN);
+}
+
 #endif /* __ARM64_KVM_EMULATE_H__ */

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fe663d0772dc..e5194ffc40a7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -500,6 +500,7 @@ enum vcpu_sysreg {
 	CPTR_EL2,	/* Architectural Feature Trap Register (EL2) */
 	HACR_EL2,	/* Hypervisor Auxiliary Control Register */
 	ZCR_EL2,	/* SVE Control Register (EL2) */
+	SMCR_EL2,	/* SME Control Register (EL2) */
 	TTBR0_EL2,	/* Translation Table Base Register 0 (EL2) */
 	TTBR1_EL2,	/* Translation Table Base Register 1 (EL2) */
 	TCR_EL2,	/* Translation Control Register (EL2) */
@@ -539,6 +540,7 @@ enum vcpu_sysreg {
 	VNCR(ACTLR_EL1),/* Auxiliary Control Register */
 	VNCR(CPACR_EL1),/* Coprocessor
 Access Control */
 	VNCR(ZCR_EL1),	/* SVE Control */
+	VNCR(SMCR_EL1),	/* SME Control */
 	VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */
 	VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */
 	VNCR(TCR_EL1),	/* Translation Control Register */

diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h
index c2485a862e69..44b12565321b 100644
--- a/arch/arm64/include/asm/vncr_mapping.h
+++ b/arch/arm64/include/asm/vncr_mapping.h
@@ -44,6 +44,7 @@
 #define VNCR_HDFGWTR_EL2	0x1D8
 #define VNCR_ZCR_EL1		0x1E0
 #define VNCR_HAFGRTR_EL2	0x1E8
+#define VNCR_SMCR_EL1		0x1F0
 #define VNCR_TTBR0_EL1		0x200
 #define VNCR_TTBR1_EL1		0x210
 #define VNCR_FAR_EL1		0x220

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f94fe57adcad..f13ff8e630f2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2830,6 +2830,43 @@ static bool access_gic_elrsr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static unsigned int sme_el2_visibility(const struct kvm_vcpu *vcpu,
+				       const struct sys_reg_desc *rd)
+{
+	return __el2_visibility(vcpu, rd, sme_visibility);
+}
+
+static bool access_smcr_el2(struct kvm_vcpu *vcpu,
+			    struct sys_reg_params *p,
+			    const struct sys_reg_desc *r)
+{
+	unsigned int vq;
+	u64 smcr;
+
+	if (guest_hyp_sme_traps_enabled(vcpu)) {
+		kvm_inject_nested_sme_trap(vcpu);
+		return false;
+	}
+
+	if (!p->is_write) {
+		p->regval = __vcpu_sys_reg(vcpu, SMCR_EL2);
+		return true;
+	}
+
+	smcr = p->regval & ~SMCR_ELx_RES0;
+	if (!vcpu_has_fa64(vcpu))
+		smcr &= ~SMCR_ELx_FA64;
+	if (!vcpu_has_sme2(vcpu))
+		smcr &= ~SMCR_ELx_EZT0;
+
+	vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1;
+	vq = min(vq, vcpu_sme_max_vq(vcpu));
+	smcr &= ~SMCR_ELx_LEN_MASK;
+	smcr |= SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1);
+	__vcpu_assign_sys_reg(vcpu, SMCR_EL2, smcr);
+	return true;
+}
+
 static unsigned int s1poe_visibility(const struct kvm_vcpu *vcpu,
 				     const struct sys_reg_desc *rd)
 {
@@ -3294,7 +3331,7 @@ static const struct
 sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility },
 	{ SYS_DESC(SYS_TRFCR_EL1), undef_access },
 	{ SYS_DESC(SYS_SMPRI_EL1), undef_access },
-	{ SYS_DESC(SYS_SMCR_EL1), undef_access },
+	{ SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility },
 	{ SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 },
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
@@ -3656,6 +3693,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
 
+	EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0,
+			 sme_el2_visibility),
+
 	EL2_REG(TTBR0_EL2, access_rw, reset_val, 0),
 	EL2_REG(TTBR1_EL2, access_rw, reset_val, 0),
 	EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1),

--
2.47.3
b=WEagRIMFtsvqCaePIMFWbQF1BAg9NhfXO0HB95FlT5M76vtZRBxvzBKfPZburQKlJJYrD9fspR3cFQiRqn/Bck5ky6xvoP/hb21dQBQv39MJlozMF41Bt4AM2aRkkWTVXaXbmBnU2LsBA2E+Um2UB7g2/TW29PcFu5qlqxdrhRc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=YGQF1lpW; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="YGQF1lpW" Received: by smtp.kernel.org (Postfix) with ESMTPSA id ED45FC19425; Fri, 6 Mar 2026 17:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817035; bh=WfGxhqhjq10VSZV2h4qBa1D5MAiiOjvTsGIutBZvgzc=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=YGQF1lpWRSNCgIY7ZSXaI+wlaPYuTpKe7GWuHTZlkTV4EmygG5gPWR4TpekpDfek7 9SKQuzptPALC/VjW1Rb7bnWEEAq+QMxfkERwsp7pDfrNfOoKelAItMonhu4zmZvM6x E2ubrJ74qHgeuOF+XCJKxAe9lmZC59ZrEGGz2J1a4114Z3Coxte7YW4Y1mWeS/amlS YmXBKezMUx43vs01pf3wnQbqzLDcr3+qfNVWKnrSFuI7OS43Wx1iNsq2EYcvGSrHFi EKj2vMB/u2L7g6UNtCmx1YRAwMbzbnIX3mXVK1tRwmqwed1ezShH/SRObdkHYOMj6m ulFxdD2xug7pg== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:08 +0000 Subject: [PATCH v10 16/30] KVM: arm64: Support TPIDR2_EL0 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-16-43f7683a0fb7@kernel.org> References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, 
linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=3509; i=broonie@kernel.org; h=from:subject:message-id; bh=WfGxhqhjq10VSZV2h4qBa1D5MAiiOjvTsGIutBZvgzc=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwo1v7dD/KcG9s1sYihJjHjqzTXtoGqAjolYy d2DqFiDNKWJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKNQAKCRAk1otyXVSH 0OTvB/97FvvhkZuzELh6GOdRs9rjuutO8EvVc+/Z/UoedyQy3bhhUUa81iUPK4TUX6dRAlSWyRj yngohaEvhpTlOOS2oNqJZoYk1AU4/kcjQQ3oKm/f6LU566xg2v7SCzM+qZB0or89nDT4J29AliJ zF3EGfPafDtb6ah3hxN9dAfSbLdLea0nLhxha1mzdSmoYL61Hnwj9tIHDpaJu6a/6jkiJrVu5P5 xYRnFkoewo2fmAgJOM7JunfhXZTBroKTKiCtC2GOL6nZnSg32qd0+OT0p29barLlUOYOwtyVLP6 wx3H0YqJwLDTakRDeK1To9odEizdPUz2lnUKrmcaoMK08ONB X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME adds a new thread ID register, TPIDR2_EL0. This is used in userspace for delayed saving of the ZA state but in terms of the architecture is not really connected to SME other than being part of FEAT_SME. It has an independent fine grained trap and the runtime connection with the rest of SME is purely software defined. Expose the register as a system register if the guest supports SME, context switching it along with the other EL0 TPIDRs. 
Reviewed-by: Fuad Tabba
Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_host.h          |  1 +
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 15 +++++++++++++++
 arch/arm64/kvm/sys_regs.c                  |  3 ++-
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e5194ffc40a7..ec1ede0c3c12 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -445,6 +445,7 @@ enum vcpu_sysreg {
 	CSSELR_EL1,	/* Cache Size Selection Register */
 	TPIDR_EL0,	/* Thread ID, User R/W */
 	TPIDRRO_EL0,	/* Thread ID, User R/O */
+	TPIDR2_EL0,	/* Thread ID, Register 2 */
 	TPIDR_EL1,	/* Thread ID, Privileged */
 	CNTKCTL_EL1,	/* Timer Control Register (EL1) */
 	PAR_EL1,	/* Physical Address Register */

diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 5624fd705ae3..8c3b3d6df99f 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -88,6 +88,17 @@ static inline bool ctxt_has_sctlr2(struct kvm_cpu_context *ctxt)
 	return kvm_has_sctlr2(kern_hyp_va(vcpu->kvm));
 }
 
+static inline bool ctxt_has_sme(struct kvm_cpu_context *ctxt)
+{
+	struct kvm_vcpu *vcpu;
+
+	if (!system_supports_sme())
+		return false;
+
+	vcpu = ctxt_to_vcpu(ctxt);
+	return kvm_has_sme(kern_hyp_va(vcpu->kvm));
+}
+
 static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt)
 {
 	return host_data_ptr(host_ctxt) != ctxt;
@@ -127,6 +138,8 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt_sys_reg(ctxt, TPIDR_EL0)	= read_sysreg(tpidr_el0);
 	ctxt_sys_reg(ctxt, TPIDRRO_EL0)	= read_sysreg(tpidrro_el0);
+	if (ctxt_has_sme(ctxt))
+		ctxt_sys_reg(ctxt, TPIDR2_EL0) = read_sysreg_s(SYS_TPIDR2_EL0);
 }
 
 static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
@@ -204,6 +217,8 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)
 {
 	write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL0), tpidr_el0);
 	write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0);
+	if (ctxt_has_sme(ctxt))
+		write_sysreg_s(ctxt_sys_reg(ctxt, TPIDR2_EL0), SYS_TPIDR2_EL0);
 }
 
 static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt,

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f13ff8e630f2..66248fd48a7d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3511,7 +3511,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  .visibility = s1poe_visibility },
 	{ SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 },
 	{ SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 },
-	{ SYS_DESC(SYS_TPIDR2_EL0), undef_access },
+	{ SYS_DESC(SYS_TPIDR2_EL0), NULL, reset_unknown, TPIDR2_EL0,
+	  .visibility = sme_visibility},
 
 	{ SYS_DESC(SYS_SCXTNUM_EL0), undef_access },
 
-- 
2.47.3

From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:09 +0000
Subject: [PATCH v10 17/30] KVM: arm64: Support SME identification registers for guests
Message-Id: <20260306-kvm-arm64-sme-v10-17-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>

The primary register for identifying SME is ID_AA64PFR1_EL1.SME. This
is hidden from guests unless SME is enabled by the VMM. When it is
visible it is writable and can be used to control the availability of
SME2.

There is also a new register ID_AA64SMFR0_EL1 which we make writable,
forcing it to all bits 0 if SME is disabled. This includes the field
SMEver giving the SME version; userspace is responsible for ensuring
the value is consistent with ID_AA64PFR1_EL1.SME. It also includes
FA64, a separately enableable extension which provides the full FPSIMD
and SVE instruction set, including FFR, in streaming mode. Userspace
can control the availability of FA64 by writing to this field. The
other features enumerated there only add new instructions; there are
no architectural controls for these.

There is a further identification register SMIDR_EL1 which provides a
basic description of the SME microarchitecture, in a manner similar to
MIDR_EL1 for the PE. It also describes support for priority management
and a basic affinity description for shared SME units, plus some RES0
space.
We do not support priority management for guests so this is hidden
from guests, along with any new fields. As for MIDR_EL1 and REVIDR_EL1
we expose the implementer and revision information to guests with the
raw value from the CPU we are running on; this may present issues for
asymmetric systems or for migration, as it does for the existing
registers.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_host.h |  3 ++
 arch/arm64/kvm/config.c           |  8 +-----
 arch/arm64/kvm/hyp/nvhe/pkvm.c    |  4 ++-
 arch/arm64/kvm/sys_regs.c         | 60 +++++++++++++++++++++++++++++++++++----
 4 files changed, 61 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ec1ede0c3c12..b8f9ab8fadd4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -397,6 +397,7 @@ struct kvm_arch {
 	u64 revidr_el1;
 	u64 aidr_el1;
 	u64 ctr_el0;
+	u64 smidr_el1;
 
 	/* Masks for VNCR-backed and general EL2 sysregs */
 	struct kvm_sysreg_masks	*sysreg_masks;
@@ -1568,6 +1569,8 @@ static inline u64 *__vm_id_reg(struct kvm_arch *ka, u32 reg)
 		return &ka->revidr_el1;
 	case SYS_AIDR_EL1:
 		return &ka->aidr_el1;
+	case SYS_SMIDR_EL1:
+		return &ka->smidr_el1;
 	default:
 		WARN_ON_ONCE(1);
 		return NULL;

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index d9f553cbf9df..57df8d0c38c4 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -281,14 +281,8 @@ static bool feat_anerr(struct kvm *kvm)
 
 static bool feat_sme_smps(struct kvm *kvm)
 {
-	/*
-	 * Revists this if KVM ever supports SME -- this really should
-	 * look at the guest's view of SMIDR_EL1. Funnily enough, this
-	 * is not captured in the JSON file, but only as a note in the
-	 * ARM ARM.
-	 */
 	return (kvm_has_feat(kvm, FEAT_SME) &&
-		(read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS));
+		(kvm_read_vm_id_reg(kvm, SYS_SMIDR_EL1) & SMIDR_EL1_SMPS));
 }
 
 static bool feat_spe_fds(struct kvm *kvm)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 399968cf570e..2757833c4396 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -348,8 +348,10 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
 	       host_kvm->arch.vcpu_features, KVM_VCPU_MAX_FEATURES);
 
-	if (test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &host_arch_flags))
+	if (test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &host_arch_flags)) {
 		hyp_vm->kvm.arch.midr_el1 = host_kvm->arch.midr_el1;
+		hyp_vm->kvm.arch.smidr_el1 = host_kvm->arch.smidr_el1;
+	}
 
 	return;
 }

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 66248fd48a7d..15854947de61 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1893,7 +1893,11 @@ static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
 
 	switch (id) {
 	case SYS_ID_AA64ZFR0_EL1:
-		if (!vcpu_has_sve(vcpu))
+		if (!vcpu_has_sve(vcpu) && !vcpu_has_sme(vcpu))
+			return REG_RAZ;
+		break;
+	case SYS_ID_AA64SMFR0_EL1:
+		if (!vcpu_has_sme(vcpu))
 			return REG_RAZ;
 		break;
 	}
@@ -1923,6 +1927,18 @@ static unsigned int raz_visibility(const struct kvm_vcpu *vcpu,
 
 /* cpufeature ID register access trap handlers */
 
+static bool hidden_id_reg(struct kvm_vcpu *vcpu,
+			  struct sys_reg_params *p,
+			  const struct sys_reg_desc *r)
+{
+	switch (reg_to_encoding(r)) {
+	case SYS_SMIDR_EL1:
+		return !vcpu_has_sme(vcpu);
+	default:
+		return false;
+	}
+}
+
 static bool access_id_reg(struct kvm_vcpu *vcpu,
 			  struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
@@ -2015,7 +2031,9 @@ static u64 sanitise_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, u64 val)
 	    SYS_FIELD_GET(ID_AA64PFR0_EL1, RAS, pfr0) == ID_AA64PFR0_EL1_RAS_IMP))
 		val &= ~ID_AA64PFR1_EL1_RAS_frac;
 
-	val &= ~ID_AA64PFR1_EL1_SME;
+	if (!kvm_has_sme(vcpu->kvm))
+		val &= ~ID_AA64PFR1_EL1_SME;
+
 	val &= ~ID_AA64PFR1_EL1_RNDR_trap;
 	val &= ~ID_AA64PFR1_EL1_NMI;
 	val &= ~ID_AA64PFR1_EL1_GCS;
@@ -3026,6 +3044,9 @@ static bool access_imp_id_reg(struct kvm_vcpu *vcpu,
 			      struct sys_reg_params *p,
 			      const struct sys_reg_desc *r)
 {
+	if (hidden_id_reg(vcpu, p, r))
+		return bad_trap(vcpu, p, r, "write to hidden ID register");
+
 	if (p->is_write)
 		return write_to_read_only(vcpu, p, r);
 
@@ -3037,8 +3058,11 @@ static bool access_imp_id_reg(struct kvm_vcpu *vcpu,
 		return access_id_reg(vcpu, p, r);
 
 	/*
-	 * Otherwise, fall back to the old behavior of returning the value of
-	 * the current CPU.
+	 * Otherwise, fall back to the old behavior of returning the
+	 * value of the current CPU for REVIDR_EL1 and AIDR_EL1, or
+	 * use whatever the sanitised reset value we have is for other
+	 * registers not exposed prior to writability support for
+	 * these registers.
 	 */
 	switch (reg_to_encoding(r)) {
 	case SYS_REVIDR_EL1:
@@ -3047,6 +3071,9 @@ static bool access_imp_id_reg(struct kvm_vcpu *vcpu,
 	case SYS_AIDR_EL1:
 		p->regval = read_sysreg(aidr_el1);
 		break;
+	case SYS_SMIDR_EL1:
+		p->regval = r->val;
+		break;
 	default:
 		WARN_ON_ONCE(1);
 	}
@@ -3057,12 +3084,15 @@ static bool access_imp_id_reg(struct kvm_vcpu *vcpu,
 static u64 __ro_after_init boot_cpu_midr_val;
 static u64 __ro_after_init boot_cpu_revidr_val;
 static u64 __ro_after_init boot_cpu_aidr_val;
+static u64 __ro_after_init boot_cpu_smidr_val;
 
 static void init_imp_id_regs(void)
 {
 	boot_cpu_midr_val = read_sysreg(midr_el1);
 	boot_cpu_revidr_val = read_sysreg(revidr_el1);
 	boot_cpu_aidr_val = read_sysreg(aidr_el1);
+	if (system_supports_sme())
+		boot_cpu_smidr_val = read_sysreg_s(SYS_SMIDR_EL1);
 }
 
 static u64 reset_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
@@ -3074,6 +3104,8 @@ static u64 reset_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 		return boot_cpu_revidr_val;
 	case SYS_AIDR_EL1:
 		return boot_cpu_aidr_val;
+	case SYS_SMIDR_EL1:
+		return boot_cpu_smidr_val;
 	default:
 		KVM_BUG_ON(1, vcpu->kvm);
 		return 0;
@@ -3122,6 +3154,16 @@ static int set_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
 		.val = mask,					\
 	}
 
+#define IMPLEMENTATION_ID_FILTERED(reg, mask, reg_visibility) {	\
+	SYS_DESC(SYS_##reg),					\
+	.access = access_imp_id_reg,				\
+	.get_user = get_id_reg,					\
+	.set_user = set_imp_id_reg,				\
+	.reset = reset_imp_id_reg,				\
+	.visibility = reg_visibility,				\
+	.val = mask,						\
+	}
+
 static u64 reset_mdcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	__vcpu_assign_sys_reg(vcpu, r->reg, vcpu->kvm->arch.nr_pmu_counters);
@@ -3238,7 +3280,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 				   ID_AA64PFR1_EL1_MTE_frac |
 				   ID_AA64PFR1_EL1_NMI |
 				   ID_AA64PFR1_EL1_RNDR_trap |
-				   ID_AA64PFR1_EL1_SME |
 				   ID_AA64PFR1_EL1_RES0 |
 				   ID_AA64PFR1_EL1_MPAM_frac |
 				   ID_AA64PFR1_EL1_MTE)),
@@ -3248,7 +3289,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 			ID_AA64PFR2_EL1_MTESTOREONLY),
 	ID_UNALLOCATED(4,3),
 	ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
-	ID_HIDDEN(ID_AA64SMFR0_EL1),
+	ID_WRITABLE(ID_AA64SMFR0_EL1, ~ID_AA64SMFR0_EL1_RES0),
 	ID_UNALLOCATED(4,6),
 	ID_WRITABLE(ID_AA64FPFR0_EL1, ~ID_AA64FPFR0_EL1_RES0),
 
@@ -3454,6 +3495,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_CCSIDR_EL1), access_ccsidr },
 	{ SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1,
 	  .set_user = set_clidr, .val = ~CLIDR_EL1_RES0 },
+	IMPLEMENTATION_ID_FILTERED(SMIDR_EL1,
+				   (SMIDR_EL1_NSMC | SMIDR_EL1_HIP |
+				    SMIDR_EL1_AFFINITY2 |
+				    SMIDR_EL1_IMPLEMENTER |
+				    SMIDR_EL1_REVISION | SMIDR_EL1_SH |
+				    SMIDR_EL1_AFFINITY),
+				   sme_visibility),
 	IMPLEMENTATION_ID(AIDR_EL1, GENMASK_ULL(63, 0)),
 	{ SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 },
 	ID_FILTERED(CTR_EL0, ctr_el0,
-- 
2.47.3

From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:10 +0000
Subject: [PATCH v10 18/30] KVM: arm64: Support SME priority registers
Message-Id: <20260306-kvm-arm64-sme-v10-18-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>

SME has optional support for configuring the relative priorities of
PEs in systems where they share a single SME hardware block, known as
a SMCU.
Currently we do not have any support for this in Linux and we also
hide it from KVM guests, pending experience with practical
implementations. The interface for configuring priority support is via
two new system registers; these registers are always defined when SME
is available.

The register SMPRI_EL1 allows control of SME execution priorities.
Since we disable SME priority support for guests this register is
RES0; define it as such and enable fine grained traps for SMPRI_EL1 to
ensure that guests can't write to it even if the hardware supports
priorities. Since the register should be readable with fixed contents
we only trap writes, not reads. As there is no host support for using
priorities the register is currently left with a value of 0 by the
host, so we do not need to update the value for guests.

There is also an EL2 register SMPRIMAP_EL2 for virtualisation of
priorities; this is RES0 when priority configuration is not supported
but has no specific traps available. When saving state from a nested
guest we overwrite any value the guest stored.
Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_host.h     |  1 +
 arch/arm64/include/asm/vncr_mapping.h |  1 +
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c    |  7 +++++++
 arch/arm64/kvm/sys_regs.c             | 30 +++++++++++++++++++++++++++++-
 4 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b8f9ab8fadd4..094cbf8e7022 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -543,6 +543,7 @@ enum vcpu_sysreg {
 	VNCR(CPACR_EL1),/* Coprocessor Access Control */
 	VNCR(ZCR_EL1),	/* SVE Control */
 	VNCR(SMCR_EL1),	/* SME Control */
+	VNCR(SMPRIMAP_EL2),	/* Streaming Mode Priority Mapping Register */
 	VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */
 	VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */
 	VNCR(TCR_EL1),	/* Translation Control Register */

diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h
index 44b12565321b..ac2f5db0ee9c 100644
--- a/arch/arm64/include/asm/vncr_mapping.h
+++ b/arch/arm64/include/asm/vncr_mapping.h
@@ -45,6 +45,7 @@
 #define VNCR_ZCR_EL1            0x1E0
 #define VNCR_HAFGRTR_EL2        0x1E8
 #define VNCR_SMCR_EL1           0x1F0
+#define VNCR_SMPRIMAP_EL2       0x1F8
 #define VNCR_TTBR0_EL1          0x200
 #define VNCR_TTBR1_EL1          0x210
 #define VNCR_FAR_EL1            0x220

diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index b254d442e54e..d814e7fb12ba 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -80,6 +80,13 @@ static void __sysreg_save_vel2_state(struct kvm_vcpu *vcpu)
 
 	if (ctxt_has_sctlr2(&vcpu->arch.ctxt))
 		__vcpu_assign_sys_reg(vcpu, SCTLR2_EL2, read_sysreg_el1(SYS_SCTLR2));
+
+	/*
+	 * We block SME priorities so SMPRIMAP_EL2 is RES0, however we
+	 * do not have traps to block access so the guest might have
+	 * updated the state, overwrite anything there.
+	 */
+	__vcpu_assign_sys_reg(vcpu, SMPRIMAP_EL2, 0);
 }
 
 static void __sysreg_restore_vel2_state(struct kvm_vcpu *vcpu)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 15854947de61..0ddb89723819 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -691,6 +691,15 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 	return read_zero(vcpu, p);
 }
 
+static int set_res0(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+		    u64 val)
+{
+	if (val)
+		return -EINVAL;
+
+	return 0;
+}
+
 /*
  * ARMv8.1 mandates at least a trivial LORegion implementation, where all the
  * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -1979,6 +1988,15 @@ static unsigned int fp8_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
+static unsigned int sme_raz_visibility(const struct kvm_vcpu *vcpu,
+				       const struct sys_reg_desc *rd)
+{
+	if (vcpu_has_sme(vcpu))
+		return REG_RAZ;
+
+	return REG_HIDDEN;
+}
+
 static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
 {
 	if (!vcpu_has_sve(vcpu))
@@ -3371,7 +3389,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	{ SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility },
 	{ SYS_DESC(SYS_TRFCR_EL1), undef_access },
-	{ SYS_DESC(SYS_SMPRI_EL1), undef_access },
+
+	/*
+	 * SMPRI_EL1 is UNDEF when SME is disabled, the UNDEF is
+	 * handled via FGU which is handled without consulting this
+	 * table.
+	 */
+	{ SYS_DESC(SYS_SMPRI_EL1), trap_raz_wi, .visibility = sme_raz_visibility },
+
 	{ SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility },
 	{ SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 },
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
@@ -3742,6 +3767,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
 
+	{ SYS_DESC(SYS_SMPRIMAP_EL2), .reg = SMPRIMAP_EL2,
+	  .access = trap_raz_wi, .set_user = set_res0, .reset = reset_val,
+	  .val = 0, .visibility = sme_el2_visibility },
 	EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0,
 			 sme_el2_visibility),
 
-- 
2.47.3

From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:11 +0000
Subject: [PATCH v10 19/30] KVM: arm64: Provide assembly for SME register access
Message-Id: <20260306-kvm-arm64-sme-v10-19-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>

Provide versions of the SME state save and restore functions for the
hypervisor to allow it to restore ZA and ZT for guests.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_hyp.h |  2 ++
 arch/arm64/kvm/hyp/fpsimd.S      | 23 +++++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 0317790dd3b7..9b1354d1122c 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -116,6 +116,8 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
 void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr);
 void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr);
+void __sme_save_state(void const *state, bool save_zt);
+void __sme_restore_state(void const *state, bool restore_zt);
 
 u64 __guest_enter(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S
index 6e16cbfc5df2..18b7a666016c 100644
--- a/arch/arm64/kvm/hyp/fpsimd.S
+++ b/arch/arm64/kvm/hyp/fpsimd.S
@@ -29,3 +29,26 @@ SYM_FUNC_START(__sve_save_state)
 	sve_save 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_save_state)
+
+SYM_FUNC_START(__sme_save_state)
+	// Caller needs to ensure SMCR updates are visible
+	_sme_rdsvl	2, 1		// x2 = VL/8
+	sme_save_za	0, x2, 12	// Leaves x0 pointing to the end of ZA
+
+	cbz	x1, 1f
+	_str_zt	0
+1:
+	ret
+SYM_FUNC_END(__sme_save_state)
+
+SYM_FUNC_START(__sme_restore_state)
+	// Caller needs to ensure SMCR updates are visible
+	_sme_rdsvl	2, 1		// x2 = VL/8
+	sme_load_za	0, x2, 12	// Leaves x0 pointing to end of ZA
+
+	cbz	x1, 1f
+	_ldr_zt	0
+
+1:
+	ret
+SYM_FUNC_END(__sme_restore_state)
-- 
2.47.3

From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:12 +0000
Subject: [PATCH v10 20/30] KVM: arm64: Support userspace access to streaming mode Z and P registers
Message-Id: <20260306-kvm-arm64-sme-v10-20-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
7PB7sdZgZ9npHPaQDddBJdKJfz75vU3Wpq1Gvl0rE+kg/gZBqnibf6vj9lJD+2BUWRNCrxEaCwq 5dNy3LhUHtlmaInaBRQmubYJPPDv/Vgpbc5oTydIBWeg7ItY X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME introduces a mode called streaming mode where the Z, P and optionally FFR registers can be accessed using the SVE instructions but with the SME vector length. Reflect this in the ABI for accessing the guest registers by making the vector length for the vcpu match the vector length that would be seen by the guest were it running, using the SME vector length when the guest is configured for streaming mode. Since SME may be present without SVE we also update the existing checks for access to the Z, P and V registers to check for either SVE or streaming mode. When not in streaming mode the guest floating point state may be accessed via the V registers. Any VMM that supports SME must be aware of the resulting need to configure streaming mode prior to writing the floating point registers. 
Signed-off-by: Mark Brown --- arch/arm64/kvm/guest.c | 67 +++++++++++++++++++++++++++++++++++++++++++---= ---- 1 file changed, 58 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 9276054b5bdd..20e06047d4bf 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -73,6 +73,19 @@ static u64 core_reg_offset_from_id(u64 id) return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE); } =20 +static bool vcpu_has_sve_regs(const struct kvm_vcpu *vcpu) +{ + return vcpu_has_sve(vcpu) || vcpu_in_streaming_mode(vcpu); +} + +static bool vcpu_has_ffr(const struct kvm_vcpu *vcpu) +{ + if (vcpu_in_streaming_mode(vcpu)) + return vcpu_has_fa64(vcpu); + else + return vcpu_has_sve(vcpu); +} + static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off) { int size; @@ -110,9 +123,10 @@ static int core_reg_size_from_offset(const struct kvm_= vcpu *vcpu, u64 off) /* * The KVM_REG_ARM64_SVE regs must be used instead of * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on - * SVE-enabled vcpus: + * SVE-enabled vcpus or when a SME enabled vcpu is in + * streaming mode: */ - if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off)) + if (vcpu_has_sve_regs(vcpu) && core_reg_offset_is_vreg(off)) return -EINVAL; =20 return size; @@ -423,6 +437,24 @@ struct vec_state_reg_region { unsigned int upad; /* extra trailing padding in user memory */ }; =20 +/* + * We represent the Z and P registers to userspace using either the + * SVE or SME vector length, depending on which features the guest has + * and if the guest is in streaming mode. 
+ */ +static unsigned int vcpu_sve_cur_vq(struct kvm_vcpu *vcpu) +{ + unsigned int vq =3D 0; + + if (vcpu_has_sve(vcpu)) + vq =3D vcpu_sve_max_vq(vcpu); + + if (vcpu_in_streaming_mode(vcpu)) + vq =3D vcpu_sme_max_vq(vcpu); + + return vq; +} + /* * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy @@ -460,20 +492,25 @@ static int sve_reg_to_region(struct vec_state_reg_reg= ion *region, reg_num =3D (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; =20 if (reg->id >=3D zreg_id_min && reg->id <=3D zreg_id_max) { - if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + if (!vcpu_has_sve_regs(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT; =20 - vq =3D vcpu_sve_max_vq(vcpu); + vq =3D vcpu_sve_cur_vq(vcpu); =20 reqoffset =3D SVE_SIG_ZREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; reqlen =3D KVM_SVE_ZREG_SIZE; maxlen =3D SVE_SIG_ZREG_SIZE(vq); } else if (reg->id >=3D preg_id_min && reg->id <=3D preg_id_max) { - if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + if (!vcpu_has_sve_regs(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT; =20 - vq =3D vcpu_sve_max_vq(vcpu); + if (!vcpu_has_ffr(vcpu) && + (reg->id >=3D KVM_REG_ARM64_SVE_FFR(0)) && + (reg->id <=3D KVM_REG_ARM64_SVE_FFR(SVE_NUM_SLICES - 1))) + return -ENOENT; + + vq =3D vcpu_sve_cur_vq(vcpu); =20 reqoffset =3D SVE_SIG_PREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; @@ -512,6 +549,9 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 + if (!vcpu_has_sve_regs(vcpu)) + return -EBUSY; + if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, region.klen) || clear_user(uptr + region.klen, region.upad)) @@ -538,6 +578,9 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const str= uct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; =20 + if (!vcpu_has_sve_regs(vcpu)) + return -EBUSY; + if (copy_from_user(vcpu->arch.sve_state 
+ region.koffset, uptr, region.klen)) return -EFAULT; @@ -639,15 +682,21 @@ static unsigned long num_core_regs(const struct kvm_v= cpu *vcpu) static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) { const unsigned int slices =3D vcpu_sve_slices(vcpu); + int regs, ret; =20 - if (!vcpu_has_sve(vcpu)) + if (!vcpu_has_sve(vcpu) && !vcpu_in_streaming_mode(vcpu)) return 0; =20 /* Policed by KVM_GET_REG_LIST: */ WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); =20 - return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */) - + 1; /* KVM_REG_ARM64_SVE_VLS */ + regs =3D SVE_NUM_PREGS + SVE_NUM_ZREGS; + if (vcpu_has_sve(vcpu) || vcpu_has_fa64(vcpu)) + regs++; /* FFR */ + ret =3D regs * slices; + if (vcpu_has_sve(vcpu)) + ret++; /* KVM_REG_ARM64_SVE_VLS */ + return ret; } =20 static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6F37441163E; Fri, 6 Mar 2026 17:10:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817058; cv=none; b=VjgtF23cazc+hg4QmUn8hVODFUWP4HnS4w+VDN3eDG7vlgUl441nKyBwbOvF+jaWqPuG2M2VCfRkbG1kPD+XIV0mlkDtoYybBfnVl9omLH8Nl5tOHQWzW51zH3Gr/qHZcjK+G1zvdnHSfwic2nC4TPbfSl8n8uM7MeTofVAYjek= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817058; c=relaxed/simple; bh=XMWKLA20ZJQ2D0jPmK3pGEr/rR365kxTEGOE8CBcYKo=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=snsshCUEvPoRmsm/j8HLgn93o6tsMUidW82KWX3Hh9UpvLpe2SKAzDzZag2j/PnPlgd9G3lyS0JOjD/bzturjuOM+A17N7WpCBAKsAck77aXCLJqlWYrMmvvTtWKiD+LMxn1t46hXApxheisyvCk8tVnGHPuHt0VReFSFo5lCeM= 
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=jy0PsE97; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="jy0PsE97" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2888BC2BC86; Fri, 6 Mar 2026 17:10:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817058; bh=XMWKLA20ZJQ2D0jPmK3pGEr/rR365kxTEGOE8CBcYKo=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=jy0PsE97fdBos7uFwCnhSEbAKskXM3/fhbc94U6zjtBirw/7zesavv4H3JUyxDjgd VjP3km3gjW4kbshvFPiBwPcgcylOsjOPUZBPz7QZfESvzku8RAGmxbO4E0lZsNuQGC lS+URgG2cQ7rx5qXEtFbraOCAGKJQAYxBWZekZqOAywsUg6Wjbm7rgYhCyfKiKvfwh ude9dLjYYOXY8otXWL/n/AqjD/rTYZBhCHf4lSq07yCaV6TxnxlrgcI0yRXiRsjXA9 Ubj1XpZeaO10NZpxYgYpB+ZAvlvQBQVPzXwA+rqT+522WdDBbp14OXVnXHOvFkcQUo iL12G1WtBl/jw== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:13 +0000 Subject: [PATCH v10 21/30] KVM: arm64: Flush register state on writes to SVCR.SM and SVCR.ZA Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-21-43f7683a0fb7@kernel.org> References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: 
v=1; a=openpgp-sha256; l=4405; i=broonie@kernel.org; h=from:subject:message-id; bh=XMWKLA20ZJQ2D0jPmK3pGEr/rR365kxTEGOE8CBcYKo=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwo4QB0GzEXtABB8d+AGe7mHIeS5KOZQ4FwqN ejgafKoNX2JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKOAAKCRAk1otyXVSH 0NycB/97X0uK9f1azHuOQ9RQ6KvzB3F7lsfXwATUwgISCKtgmoBQo3Yp3RnWqW/dP7qv49ubFVB 2uQhPj8uIT4fw5JT9uWAZWzcMsLnROShZjCgiyuiwYxrl76n/yBdkTnRgSbSfoii6+1aOLOEe/t 0E2DbQnv0pI/DamIUJSWsuoY0Uh34MJ8ULaz+j6YEPYZX0j0Kh1C5ejLVkdVKxBrSZLrNfy/siO x+CMlqhO2/d/N42MYLdqL0y0vSosxDw+HRDAQUaiIsS2oj4o/4kVmA3q/THepzr8xnvfMMZX9U0 fwFXvt4nyPydtfHvdhDqiUy8HmmNwzzdM+jC/OrMQxyCSFVq X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Writes to the physical SVCR.SM and SVCR.ZA change the state of PSTATE.SM and PSTATE.ZA, causing other floating point state to reset. Emulate this behaviour for writes done via the KVM userspace ABI. Setting PSTATE.ZA to 1 causes ZA and ZT0 to be reset to 0, these are stored in sme_state. Setting PSTATE.ZA to 0 causes ZA and ZT0 to become inaccessib= le so no reset is needed. Any change in PSTATE.SM causes the V, Z, P, FFR and FPMR registers to be reset to 0 and FPSR to be reset to 0x800009f. Rather than introduce a requirement that the vector configuration be finalised before writing to SVCR we check for this before updating the SVE and SME specific state, when finalisation happens they will be allocated with an initial state of 0. 
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 24 ++++++++++++++++++++++++ arch/arm64/include/asm/sysreg.h | 2 ++ arch/arm64/kvm/sys_regs.c | 30 +++++++++++++++++++++++++++++- 3 files changed, 55 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 094cbf8e7022..aa0817eb1b48 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1172,6 +1172,30 @@ struct kvm_vcpu_arch { =20 #define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_= vl[ARM64_VEC_SVE]) =20 +#define vcpu_sme_state(vcpu) (kern_hyp_va((vcpu)->arch.sme_state)) + +#define sme_state_size_from_vl(vl, sme2) ({ \ + size_t __size_ret; \ + unsigned int __vq; \ + \ + if (WARN_ON(!sve_vl_valid(vl))) { \ + __size_ret =3D 0; \ + } else { \ + __vq =3D sve_vq_from_vl(vl); \ + __size_ret =3D ZA_SIG_REGS_SIZE(__vq); \ + if (sme2) \ + __size_ret +=3D ZT_SIG_REG_SIZE; \ + } \ + \ + __size_ret; \ +}) + +#define vcpu_sme_state_size(vcpu) ({ \ + unsigned long __vl; \ + __vl =3D (vcpu)->arch.max_vl[ARM64_VEC_SME]; \ + sme_state_size_from_vl(__vl, vcpu_has_sme2(vcpu)); \ +}) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysre= g.h index f4436ecc630c..90d398429d80 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -1101,6 +1101,8 @@ #define gicr_insn(insn) read_sysreg_s(GICV5_OP_GICR_##insn) #define gic_insn(v, insn) write_sysreg_s(v, GICV5_OP_GIC_##insn) =20 +#define FPSR_RESET_VALUE 0x800009f + #ifdef __ASSEMBLER__ =20 .macro mrs_s, rt, sreg diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 0ddb89723819..8a9fd8d69d6e 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -927,6 +927,34 @@ static unsigned int hidden_visibility(const struct kvm= _vcpu 
*vcpu, return REG_HIDDEN; } =20 +static int set_svcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 val) +{ + u64 old =3D __vcpu_sys_reg(vcpu, rd->reg); + + if (val & SVCR_RES0) + return -EINVAL; + + if ((val & SVCR_ZA) && !(old & SVCR_ZA) && + kvm_arm_vcpu_vec_finalized(vcpu)) + memset(vcpu->arch.sme_state, 0, vcpu_sme_state_size(vcpu)); + + if ((val & SVCR_SM) !=3D (old & SVCR_SM)) { + memset(vcpu->arch.ctxt.fp_regs.vregs, 0, + sizeof(vcpu->arch.ctxt.fp_regs.vregs)); + + if (kvm_arm_vcpu_vec_finalized(vcpu)) + memset(vcpu->arch.sve_state, 0, + vcpu_sve_state_size(vcpu)); + + __vcpu_assign_sys_reg(vcpu, FPMR, 0); + vcpu->arch.ctxt.fp_regs.fpsr =3D FPSR_RESET_VALUE; + } + + __vcpu_assign_sys_reg(vcpu, rd->reg, val); + return 0; +} + static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { @@ -3535,7 +3563,7 @@ static const struct sys_reg_desc sys_reg_descs[] =3D { CTR_EL0_DminLine_MASK | CTR_EL0_L1Ip_MASK | CTR_EL0_IminLine_MASK), - { SYS_DESC(SYS_SVCR), undef_access, reset_val, SVCR, 0, .visibility =3D s= me_visibility }, + { SYS_DESC(SYS_SVCR), undef_access, reset_val, SVCR, 0, .visibility =3D s= me_visibility, .set_user =3D set_svcr }, { SYS_DESC(SYS_FPMR), undef_access, reset_val, FPMR, 0, .visibility =3D f= p8_visibility }, =20 { PMU_SYS_REG(PMCR_EL0), .access =3D access_pmcr, .reset =3D reset_pmcr, --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A7D37401484; Fri, 6 Mar 2026 17:11:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817062; cv=none; 
b=mFjSbAVt9KbhOlgxeMnLmqEYKc7wAjZWsrYPdR8uiRh/Uu0POJaFUDA2hdTjCfyQeWq7OeBe1pnxBJGHwveY4d/WhXCr2SmyA/aglaoX94H4Z0WUh9fn1MdiFqK6DKkOWKL6oed5s2AVs9YMClVXZWSTqvP8uEtoc8d6e3iKxfw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817062; c=relaxed/simple; bh=8cslAwJ6i6zMbxfjdRzBzF+cW4InDSsHIAphFArm1rY=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=jgaXAUoFWrhtdnwfF5Bt9VyCt8r3wU9NtHUm1PgJY/JO1QoHxUPLpQ4/DTJNSTVfT/3gppqwX5JwcMuqowDM1XHBMeLBCXDBViKBWKXAY13uBTJRlUEPd9qFce21YEZy+oOmhrdSKYoG6x759aUDXXTCXMBtKr0Ikuo6ZAN9j5s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=fZgVuzwf; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="fZgVuzwf" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8B617C4CEF7; Fri, 6 Mar 2026 17:10:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817062; bh=8cslAwJ6i6zMbxfjdRzBzF+cW4InDSsHIAphFArm1rY=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=fZgVuzwfkN1dJnujTK6ts3keru/YW/PGPiIgHZMRmonr29KcySQsSBpsdxWENBblF YyBb9IP1M7af20C8NNFzQ7mf4UKvetIYfYHU7IFpmaV2/fbLICVQiPWihxFwYB94O4 i6kKnpnbm8MShQHY9YR/qsCFOiEmPiSKCZV9BjPzTw2g/M2Ov3pHbIRwf7ZUYe3Ufk F3+Uy2nuGkF1WJQXZ5lOt6wbqvkoEl4H0wK6CUBOTiQxLcym9fsYQ+se15H/CVDs42 OCUvpJsA2PxNE5/JzNEYEsUqWQqqIRqME6AmTTQWCnq4/shTpM4RwV/mpOglD3iCGY LURnYDYKC+kSA== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:14 +0000 Subject: [PATCH v10 22/30] KVM: arm64: Expose SME specific state to userspace Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260306-kvm-arm64-sme-v10-22-43f7683a0fb7@kernel.org> References: 
<20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org> To: Marc Zyngier , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan , Oliver Upton Cc: Dave Martin , Fuad Tabba , Mark Rutland , Ben Horgan , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=8956; i=broonie@kernel.org; h=from:subject:message-id; bh=8cslAwJ6i6zMbxfjdRzBzF+cW4InDSsHIAphFArm1rY=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBpqwo5RbOzKDT/89tBGwP/aLROnesc5kxEi84tX j1cZXuay2WJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCaasKOQAKCRAk1otyXVSH 0KNtB/9fOJsPMBHYGZ02D8X6IM1CVd0zufqgwE1w2hx+XzAZQ7KeXU0EuFGfBV3z5JMzMDhzY5m Mro8SO4kRHlx5rWomvODODXFstZTQVgp8sJqKwRpJ1xABNC0q6QFQmC9yjZqtBDdPfWfXMwisWC qnik4n3YxefCZdjgoIX5pEijv97jjLyukxQMUByce2K2sxl3fRRRkmSqKxyPEUJwtr8z6MSgOcB KHbplxFOBoahX+j1eNWrIwp0Y3guonKJOVXTlSiifBsrA5PUzflto3FBbPd4Q3wiMWATkmXmoD0 +jFtKTdronitedM9kC3J8OlAXRZG7KkSD/orsyL8pY1s36PT X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME introduces two new registers, the ZA matrix register and the ZT0 LUT register. Both of these registers are only accessible when PSTATE.ZA is set and ZT0 is only present if SME2 is enabled for the guest. Provide support for configuring these from VMMs. The ZA matrix is a single SVL*SVL register which is available when PSTATE.ZA is set. We follow the pattern established by the architecture itself and expose this to userspace as a series of horizontal SVE vectors with the streaming mode vector length, using the format already established for the SVE vectors themselves. 
ZT0 is a single register with a refreshingly fixed size of 512 bits which, like ZA, is accessible only when PSTATE.ZA is set. Add support for it to the userspace API. As is done in the architecture for both ZA and ZT0 the value will be reset to 0 whenever PSTATE.ZA changes from 0 to 1 and the registers are inaccessible when PSTATE.ZA is 0. While there is currently only one ZT register the naming as ZT0 and the instruction encoding clearly leave room for future extensions adding more ZT registers. This encoding can readily support such an extension if one is introduced. Signed-off-by: Mark Brown --- arch/arm64/include/uapi/asm/kvm.h | 20 +++++ arch/arm64/kvm/guest.c | 168 ++++++++++++++++++++++++++++++++++= +++- 2 files changed, 186 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/as= m/kvm.h index 498a49a61487..f68061680f9a 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -357,6 +357,26 @@ struct kvm_arm_counter_offset { /* SME registers */ #define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT) =20 +#define KVM_ARM64_SME_VQ_MIN __SVE_VQ_MIN +#define KVM_ARM64_SME_VQ_MAX __SVE_VQ_MAX + +/* ZA and ZTn occupy blocks at the following offsets within this range: */ +#define KVM_REG_ARM64_SME_ZA_BASE 0 +#define KVM_REG_ARM64_SME_ZT_BASE 0x600 + +#define KVM_ARM64_SME_MAX_ZAHREG (__SVE_VQ_BYTES * KVM_ARM64_SME_VQ_MAX) + +#define KVM_REG_ARM64_SME_ZAHREG(n, i) \ + (KVM_REG_ARM64 | KVM_REG_ARM64_SME | KVM_REG_ARM64_SME_ZA_BASE | \ + KVM_REG_SIZE_U2048 | \ + (((n) & (KVM_ARM64_SME_MAX_ZAHREG - 1)) << 5) | \ + ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1))) + +#define KVM_REG_ARM64_SME_ZTREG_SIZE (512 / 8) +#define KVM_REG_ARM64_SME_ZTREG(n) \ + (KVM_REG_ARM64 | KVM_REG_ARM64_SME | KVM_REG_ARM64_SME_ZT_BASE | \ + KVM_REG_SIZE_U512) + /* Vector lengths pseudo-register: */ #define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ KVM_REG_SIZE_U512 | 0xfffe) diff --git 
a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 20e06047d4bf..b78944a76da8 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -604,23 +604,124 @@ static int set_sme_vls(struct kvm_vcpu *vcpu, const = struct kvm_one_reg *reg) return set_vec_vls(ARM64_VEC_SME, vcpu, reg); } =20 +/* + * Validate SME register ID and get sanitised bounds for user/kernel SME + * register copy + */ +static int sme_reg_to_region(struct vec_state_reg_region *region, + struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + /* reg ID ranges for ZA.H[n] registers */ + unsigned int vq =3D vcpu_sme_max_vq(vcpu); + const u64 za_h_max =3D vq * __SVE_VQ_BYTES; + const u64 zah_id_min =3D KVM_REG_ARM64_SME_ZAHREG(0, 0); + const u64 zah_id_max =3D KVM_REG_ARM64_SME_ZAHREG(za_h_max - 1, + SVE_NUM_SLICES - 1); + unsigned int reg_num; + + unsigned int reqoffset, reqlen; /* User-requested offset and length */ + unsigned int maxlen; /* Maximum permitted length */ + + size_t sme_state_size; + + reg_num =3D (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; + + if (reg->id >=3D zah_id_min && reg->id <=3D zah_id_max) { + if (!vcpu_has_sme(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + return -ENOENT; + + if (!vcpu_za_enabled(vcpu)) + return -EBUSY; + + /* ZA is exposed as SVE vectors ZA.H[n] */ + reqoffset =3D ZA_SIG_ZAV_OFFSET(vq, reg_num) - + ZA_SIG_REGS_OFFSET; + reqlen =3D KVM_SVE_ZREG_SIZE; + maxlen =3D SVE_SIG_ZREG_SIZE(vq); + } else if (reg->id =3D=3D KVM_REG_ARM64_SME_ZTREG(0)) { + if (!kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, SME2)) + return -ENOENT; + + if (!vcpu_za_enabled(vcpu)) + return -EBUSY; + + /* ZT0 is stored after ZA */ + reqoffset =3D ZA_SIG_REGS_SIZE(vq); + reqlen =3D KVM_REG_ARM64_SME_ZTREG_SIZE; + maxlen =3D KVM_REG_ARM64_SME_ZTREG_SIZE; + } else { + return -EINVAL; + } + + sme_state_size =3D vcpu_sme_state_size(vcpu); + if (WARN_ON(!sme_state_size)) + return -EINVAL; + + region->koffset =3D array_index_nospec(reqoffset, sme_state_size); + region->klen 
=3D min(maxlen, reqlen); + region->upad =3D reqlen - region->klen; + + return 0; +} + +/* + * ZA is exposed as an array of horizontal vectors with the same + * format as SVE, mirroring the architecture's LDR ZA[Wv, offs], [Xn] + * instruction. + */ + static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) { + int ret; + struct vec_state_reg_region region; + char __user *uptr =3D (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id =3D=3D KVM_REG_ARM64_SME_VLS) return get_sme_vls(vcpu, reg); =20 - return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret =3D sme_reg_to_region(®ion, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + if (copy_to_user(uptr, vcpu->arch.sme_state + region.koffset, + region.klen) || + clear_user(uptr + region.klen, region.upad)) + return -EFAULT; + + return 0; } =20 static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *re= g) { + int ret; + struct vec_state_reg_region region; + char __user *uptr =3D (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id =3D=3D KVM_REG_ARM64_SME_VLS) return set_sme_vls(vcpu, reg); =20 - return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... 
*/ + ret =3D sme_reg_to_region(®ion, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + if (copy_from_user(vcpu->arch.sme_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; } + int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *r= egs) { return -EINVAL; @@ -699,6 +800,20 @@ static unsigned long num_sve_regs(const struct kvm_vcp= u *vcpu) return ret; } =20 +static unsigned long num_sme_regs(const struct kvm_vcpu *vcpu) +{ + const unsigned int slices =3D vcpu_sve_slices(vcpu); + + if (!vcpu_has_sme(vcpu)) + return 0; + + /* Policed by KVM_GET_REG_LIST: */ + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); + + /* KVM_REG_ARM64_SME_VLS, ZA, and ZT0 if SME2 */ + return 1 + (slices * vcpu_sme_max_vl(vcpu)) + vcpu_has_sme2(vcpu); +} + static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, u64 __user *uindices) { @@ -746,6 +861,49 @@ static int copy_sve_reg_indices(const struct kvm_vcpu = *vcpu, return num_regs; } =20 +static int copy_sme_reg_indices(const struct kvm_vcpu *vcpu, + u64 __user *uindices) +{ + const unsigned int slices =3D vcpu_sve_slices(vcpu); + u64 reg; + unsigned int i, n; + int num_regs =3D 0; + + if (!vcpu_has_sme(vcpu)) + return 0; + + /* Policed by KVM_GET_REG_LIST: */ + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); + + /* + * Enumerate this first, so that userspace can save/restore in + * the order reported by KVM_GET_REG_LIST: + */ + reg =3D KVM_REG_ARM64_SME_VLS; + if (put_user(reg, uindices++)) + return -EFAULT; + ++num_regs; + + for (i =3D 0; i < slices; i++) { + for (n =3D 0; n < vcpu_sme_max_vl(vcpu); n++) { + reg =3D KVM_REG_ARM64_SME_ZAHREG(n, i); + if (put_user(reg, uindices++)) + return -EFAULT; + num_regs++; + } + } + + if (vcpu_has_sme2(vcpu)) { + reg =3D KVM_REG_ARM64_SME_ZTREG(0); + if (put_user(reg, uindices++)) + return -EFAULT; + num_regs++; + } + + return num_regs; +} + + /** * kvm_arm_num_regs - how many registers do we present via 
KVM_GET_ONE_REG
 * @vcpu: the vCPU pointer
@@ -758,6 +916,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)

     res += num_core_regs(vcpu);
     res += num_sve_regs(vcpu);
+    res += num_sme_regs(vcpu);
     res += kvm_arm_num_sys_reg_descs(vcpu);
     res += kvm_arm_get_fw_num_regs(vcpu);

@@ -785,6 +944,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
         return ret;
     uindices += ret;

+    ret = copy_sme_reg_indices(vcpu, uindices);
+    if (ret < 0)
+        return ret;
+    uindices += ret;
+
     ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices);
     if (ret < 0)
         return ret;
--
2.47.3

From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:15 +0000
Subject: [PATCH v10 23/30] KVM: arm64: Context switch SME state for guests
Message-Id: <20260306-kvm-arm64-sme-v10-23-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>

If the guest has SME state we need to context switch that state; provide
support for doing so for normal guests.

SME has three sets of registers: ZA, ZT (only present for SME2) and
streaming SVE, which replaces the standard floating point registers when
active. The first two are fairly straightforward: they are accessible only
when PSTATE.ZA is set, and we can reuse the assembly from the host to save
and load them from a single contiguous buffer. When PSTATE.ZA is not set
these registers are inaccessible; when the guest enables PSTATE.ZA all
their bits are set to 0 by that operation, so nothing is required on
restore.

Streaming mode is slightly more complicated. When enabled via PSTATE.SM it
provides a version of the SVE registers using the SME vector length and may
optionally omit the FFR register. SME may also be present without SVE. The
register state is stored in sve_state as for non-streaming SVE mode; we
make an initial selection of registers to update based on the guest SVE
support and then override this when loading SVCR if streaming mode is
enabled.

A further complication is that when the hardware is in streaming mode,
guest operations that are invalid in streaming mode will generate SME
exceptions. There are also subfeature exceptions for SME2, controlled via
SMCR, which generate distinct exception codes. In many situations these
exceptions are routed directly to the lower ELs with no opportunity for
the hypervisor to intercept.
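The state-selection rules described above can be modelled in isolation. The sketch below is a hypothetical user-space illustration, not the kernel code: the names `fp_restore_plan`, `plan_restore` and the `SVCR_SM`/`SVCR_ZA` flag values are invented for this example (the architectural SVCR does place SM in bit 0 and ZA in bit 1). It shows the decisions the series describes: streaming mode forces use of the SVE register image even on SME-only systems, FFR only exists in streaming mode when FA64 is implemented, and ZA (plus ZT0 on SME2) only needs handling when PSTATE.ZA is set.

```c
#include <stdbool.h>

/* PSTATE.{SM,ZA} as reflected in SVCR (architectural bit positions). */
#define SVCR_SM (1u << 0)
#define SVCR_ZA (1u << 1)

struct fp_restore_plan {
    bool restore_sve;   /* restore the SVE register image? */
    bool restore_ffr;   /* does FFR exist in the current mode? */
    bool restore_za;    /* restore ZA (and ZT0 for SME2)? */
};

/*
 * Decide which register sets need restoring: start from the guest's
 * SVE support, then override for streaming mode, where the SVE image
 * is always in use and FFR is only present with FA64.
 */
static struct fp_restore_plan plan_restore(unsigned int svcr,
                                           bool guest_has_sve,
                                           bool guest_has_fa64)
{
    struct fp_restore_plan p = {
        .restore_sve = guest_has_sve,
        .restore_ffr = guest_has_sve,
        .restore_za = (svcr & SVCR_ZA) != 0,
    };

    if (svcr & SVCR_SM) {
        p.restore_sve = true;               /* streaming SVE replaces FPSIMD */
        p.restore_ffr = guest_has_fa64;     /* FFR optional in streaming mode */
    }

    return p;
}
```

For example, a guest in streaming mode on an SME-only system without FA64 gets the SVE image restored but no FFR, while a non-streaming SVE guest gets both and no ZA handling.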
So that guests do not see unexpected exception types when the actual
hardware configuration is not what the guest configured, we update the
SMCRs and SVCR even if the guest does not own the registers.

Since, in order to avoid duplication with SME, we now restore the register
state outside of the SVE specific restore function, we need to move the
restore of the effective VL for nested guests to a separate restore
function run after loading the floating point register state, along with
the similar handling required for SME. The selection of which vector
length to use is handled by vcpu_sve_pffr().

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/fpsimd.h         |  10 ++
 arch/arm64/include/asm/kvm_host.h       |   4 +
 arch/arm64/kvm/fpsimd.c                 |  25 ++++-
 arch/arm64/kvm/hyp/include/hyp/switch.h | 157 ++++++++++++++++++++++++++++++--
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 107 ++++++++++++++++++----
 5 files changed, 274 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 05566bbfa4d4..f891261a5c91 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -448,6 +448,15 @@ static inline size_t sme_state_size(struct task_struct const *task)
         write_sysreg_s(__new, (reg));        \
     } while (0)

+#define sme_cond_update_smcr_vq(val, reg)            \
+    do {                            \
+        u64 __smcr = read_sysreg_s((reg));        \
+        u64 __new = __smcr & ~SMCR_ELx_LEN_MASK;    \
+        __new |= (val) & SMCR_ELx_LEN_MASK;        \
+        if (__smcr != __new)                \
+            write_sysreg_s(__new, (reg));        \
+    } while (0)
+
 #else

 static inline void sme_user_disable(void) { BUILD_BUG(); }
@@ -477,6 +486,7 @@ static inline size_t sme_state_size(struct task_struct const *task)
 }

 #define sme_cond_update_smcr(val, fa64, zt0, reg) do { } while (0)
+#define sme_cond_update_smcr_vq(val, reg) do { } while (0)

 #endif /* !CONFIG_ARM64_SME */

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index aa0817eb1b48..f804cf160b1e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -760,6 +760,7 @@ struct kvm_host_data {

     /* Used by pKVM only. */
     u64 fpmr;
+    u64 smcr_el1;

     /* Ownership of the FP regs */
     enum {
@@ -1156,6 +1157,9 @@ struct kvm_vcpu_arch {
 #define vcpu_sve_zcr_elx(vcpu)                        \
     (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1)

+#define vcpu_sme_smcr_elx(vcpu)                        \
+    (unlikely(is_hyp_ctxt(vcpu)) ? SMCR_EL2 : SMCR_EL1)
+
 #define sve_state_size_from_vl(sve_max_vl) ({    \
     size_t __size_ret;            \
     unsigned int __vq;            \
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 1f4fcc8b5554..8fb8c55e50b3 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -69,19 +69,25 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
     WARN_ON_ONCE(!irqs_disabled());

     if (guest_owns_fp_regs()) {
-        /*
-         * Currently we do not support SME guests so SVCR is
-         * always 0 and we just need a variable to point to.
-         */
         fp_state.st = &vcpu->arch.ctxt.fp_regs;
         fp_state.sve_state = vcpu->arch.sve_state;
         fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE];
-        fp_state.sme_state = NULL;
+        fp_state.sme_state = vcpu->arch.sme_state;
+        fp_state.sme_vl = vcpu->arch.max_vl[ARM64_VEC_SME];
         fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR);
         fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR);
         fp_state.fp_type = &vcpu->arch.fp_type;
+        fp_state.sme_features = 0;
+        if (kvm_has_fa64(vcpu->kvm))
+            fp_state.sme_features |= SMCR_ELx_FA64;
+        if (kvm_has_sme2(vcpu->kvm))
+            fp_state.sme_features |= SMCR_ELx_EZT0;

+        /*
+         * For SME only hosts fpsimd_save() will override the
+         * state selection if we are in streaming mode.
+         */
         if (vcpu_has_sve(vcpu))
             fp_state.to_save = FP_STATE_SVE;
         else
@@ -90,6 +96,15 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
         fpsimd_bind_state_to_cpu(&fp_state);

         clear_thread_flag(TIF_FOREIGN_FPSTATE);
+    } else {
+        /*
+         * We might have enabled SME to configure traps but
+         * insist the host doesn't run the hypervisor with SME
+         * enabled, ensure it's disabled again.
+         */
+        if (system_supports_sme()) {
+            sme_smstop();
+        }
     }
 }

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5b99aa479c59..7312b8f34c7a 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -429,6 +429,22 @@ static inline bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
     return true;
 }

+static inline void __hyp_sme_restore_guest(struct kvm_vcpu *vcpu,
+                       bool *restore_sve,
+                       bool *restore_ffr)
+{
+    bool has_fa64 = vcpu_has_fa64(vcpu);
+    bool has_sme2 = vcpu_has_sme2(vcpu);
+
+    if (vcpu_in_streaming_mode(vcpu)) {
+        *restore_sve = true;
+        *restore_ffr = has_fa64;
+    }
+
+    if (vcpu_za_enabled(vcpu))
+        __sme_restore_state(vcpu_sme_state(vcpu), has_sme2);
+}
+
 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
     /*
@@ -436,19 +452,25 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
      * vCPU. Start off with the max VL so we can load the SVE state.
      */
     sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
-    __sve_restore_state(vcpu_sve_pffr(vcpu),
-                &vcpu->arch.ctxt.fp_regs.fpsr,
-                true);

+    write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR);
+}
+
+static inline void __hyp_nv_restore_guest_vls(struct kvm_vcpu *vcpu)
+{
     /*
      * The effective VL for a VM could differ from the max VL when running a
      * nested guest, as the guest hypervisor could select a smaller VL. Slap
      * that into hardware before wrapping up.
      */
-    if (is_nested_ctxt(vcpu))
+    if (!is_nested_ctxt(vcpu))
+        return;
+
+    if (vcpu_has_sve(vcpu))
         sve_cond_update_zcr_vq(__vcpu_sys_reg(vcpu, ZCR_EL2), SYS_ZCR_EL2);

-    write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR);
+    if (vcpu_has_sme(vcpu))
+        sme_cond_update_smcr_vq(__vcpu_sys_reg(vcpu, SMCR_EL2), SYS_SMCR_EL2);
 }

 static inline void __hyp_sve_save_host(void)
@@ -462,10 +484,46 @@ static inline void __hyp_sve_save_host(void)
             true);
 }

+static inline void kvm_sme_configure_traps(struct kvm_vcpu *vcpu)
+{
+    u64 smcr_el1, smcr_el2;
+    u64 svcr;
+
+    if (!vcpu_has_sme(vcpu))
+        return;
+
+    /* A guest hypervisor may restrict the effective max VL. */
+    if (is_nested_ctxt(vcpu))
+        smcr_el2 = __vcpu_sys_reg(vcpu, SMCR_EL2);
+    else
+        smcr_el2 = vcpu_sme_max_vq(vcpu) - 1;
+
+    if (vcpu_has_fa64(vcpu))
+        smcr_el2 |= SMCR_ELx_FA64;
+    if (vcpu_has_sme2(vcpu))
+        smcr_el2 |= SMCR_ELx_EZT0;
+
+    write_sysreg_el2(smcr_el2, SYS_SMCR);
+
+    smcr_el1 = __vcpu_sys_reg(vcpu, vcpu_sme_smcr_elx(vcpu));
+    write_sysreg_el1(smcr_el1, SYS_SMCR);
+
+    svcr = __vcpu_sys_reg(vcpu, SVCR);
+    write_sysreg_s(svcr, SYS_SVCR);
+}
+
 static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu)
 {
     u64 zcr_el1, zcr_el2;

+    /*
+     * We always load the SME control registers that affect traps
+     * since if they are not configured as expected by the guest
+     * then it may have exceptions that it does not expect
+     * directly delivered.
+     */
+    kvm_sme_configure_traps(vcpu);
+
     if (!guest_owns_fp_regs())
         return;

@@ -519,8 +577,57 @@ static inline void sve_lazy_switch_to_host(struct kvm_vcpu *vcpu)
     }
 }

+static inline void sme_lazy_switch_to_host(struct kvm_vcpu *vcpu)
+{
+    u64 smcr_el1, smcr_el2;
+
+    if (!vcpu_has_sme(vcpu))
+        return;
+
+    /*
+     * __deactivate_cptr_traps() disabled traps, but there hasn't
+     * necessarily been a context synchronization event yet.
+     */
+    isb();
+
+    smcr_el1 = read_sysreg_el1(SYS_SMCR);
+    __vcpu_assign_sys_reg(vcpu, vcpu_sme_smcr_elx(vcpu), smcr_el1);
+
+    smcr_el2 = 0;
+    if (system_supports_fa64())
+        smcr_el2 |= SMCR_ELx_FA64;
+    if (system_supports_sme2())
+        smcr_el2 |= SMCR_ELx_EZT0;
+
+    /*
+     * The guest's state is always saved using the guest's max VL.
+     * Ensure that the host has the guest's max VL active such
+     * that the host can save the guest's state lazily, but don't
+     * artificially restrict the host to the guest's max VL.
+     */
+    if (has_vhe()) {
+        smcr_el2 |= vcpu_sme_max_vq(vcpu) - 1;
+        write_sysreg_el2(smcr_el2, SYS_SMCR);
+    } else {
+        smcr_el1 = smcr_el2;
+        smcr_el2 |= sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SME]) - 1;
+        write_sysreg_el2(smcr_el2, SYS_SMCR);
+
+        smcr_el1 |= vcpu_sme_max_vq(vcpu) - 1;
+        write_sysreg_el1(smcr_el1, SYS_SMCR);
+    }
+
+    __vcpu_assign_sys_reg(vcpu, SVCR, read_sysreg_s(SYS_SVCR));
+}
+
 static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu)
 {
+    /*
+     * We always load the control registers for the guest so we
+     * always restore state for the host.
+     */
+    sme_lazy_switch_to_host(vcpu);
+
     if (!guest_owns_fp_regs())
         return;

@@ -529,6 +636,16 @@ static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu)

 static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
 {
+    /*
+     * The hypervisor refuses to run if streaming mode or ZA is
+     * enabled, so we only need to save SMCR_EL1 for SME. For pKVM
+     * we will restore this, reset SMCR_EL2 to a fixed value and
+     * disable streaming mode and ZA to avoid any state being
+     * leaked.
+     */
+    if (system_supports_sme())
+        *host_data_ptr(smcr_el1) = read_sysreg_el1(SYS_SMCR);
+
     /*
      * Non-protected kvm relies on the host restoring its sve state.
      * Protected kvm restores the host's sve state as not to reveal that
@@ -553,14 +670,17 @@ static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
  */
 static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-    bool sve_guest;
-    u8 esr_ec;
+    bool restore_sve, restore_ffr;
+    bool sve_guest, sme_guest;
+    u8 esr_ec, esr_iss_smtc;

     if (!system_supports_fpsimd())
         return false;

     sve_guest = vcpu_has_sve(vcpu);
+    sme_guest = vcpu_has_sme(vcpu);
     esr_ec = kvm_vcpu_trap_get_class(vcpu);
+    esr_iss_smtc = ESR_ELx_SME_ISS_SMTC((kvm_vcpu_get_esr(vcpu)));

     /* Only handle traps the vCPU can support here: */
     switch (esr_ec) {
@@ -579,6 +699,15 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
         if (guest_hyp_sve_traps_enabled(vcpu))
             return false;
         break;
+    case ESR_ELx_EC_SME:
+        if (!sme_guest)
+            return false;
+        if (guest_hyp_sme_traps_enabled(vcpu))
+            return false;
+        if (!kvm_has_sme2(vcpu->kvm) &&
+            (esr_iss_smtc == ESR_ELx_SME_ISS_SMTC_ZT_DISABLED))
+            return false;
+        break;
     default:
         return false;
     }
@@ -594,8 +723,20 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
         kvm_hyp_save_fpsimd_host(vcpu);

     /* Restore the guest state */
+
+    /* These may be overridden for a SME guest */
+    restore_sve = sve_guest;
+    restore_ffr = sve_guest;
+
     if (sve_guest)
         __hyp_sve_restore_guest(vcpu);
+    if (sme_guest)
+        __hyp_sme_restore_guest(vcpu, &restore_sve, &restore_ffr);
+
+    if (restore_sve)
+        __sve_restore_state(vcpu_sve_pffr(vcpu),
+                    &vcpu->arch.ctxt.fp_regs.fpsr,
+                    restore_ffr);
     else
         __fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs);

@@ -606,6 +747,8 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
     if (!(read_sysreg(hcr_el2) & HCR_RW))
         write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);

+    __hyp_nv_restore_guest_vls(vcpu);
+
     *host_data_ptr(fp_owner) = FP_STATE_GUEST_OWNED;

     /*
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f4da7a452964..c00fbade1feb 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -26,15 +26,27 @@ void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);

 static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
 {
-    __vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR));
-    /*
-     * On saving/restoring guest sve state, always use the maximum VL for
-     * the guest. The layout of the data when saving the sve state depends
-     * on the VL, so use a consistent (i.e., the maximum) guest VL.
-     */
-    sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
-    __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true);
-    write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2);
+    bool save_ffr = !vcpu_in_streaming_mode(vcpu) || vcpu_has_fa64(vcpu);
+
+    if (vcpu_has_sve(vcpu)) {
+        __vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR));
+
+        /*
+         * On saving/restoring guest sve state, always use the
+         * maximum VL for the guest. The layout of the data
+         * when saving the sve state depends on the VL, so use
+         * a consistent (i.e., the maximum) guest VL.
+         */
+        sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+    }
+
+    /* Ensure ZCR/SMCR updates for VL are seen */
+    isb();
+    __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, save_ffr);
+
+    if (system_supports_sve())
+        write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1,
+                   SYS_ZCR_EL2);
 }

 static void __hyp_sve_restore_host(void)
@@ -57,9 +69,65 @@ static void __hyp_sve_restore_host(void)
     write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR);
 }

-static void fpsimd_sve_flush(void)
+static void __hyp_sme_save_guest(struct kvm_vcpu *vcpu)
 {
-    *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+    __vcpu_assign_sys_reg(vcpu, SMCR_EL1, read_sysreg_el1(SYS_SMCR));
+    __vcpu_assign_sys_reg(vcpu, SVCR, read_sysreg_s(SYS_SVCR));
+
+    /*
+     * On saving/restoring guest sve state, always use the maximum VL for
+     * the guest. The layout of the data when saving the sve state depends
+     * on the VL, so use a consistent (i.e., the maximum) guest VL.
+     *
+     * We restore the FA64 and SME2 enables for the host since we
+     * will always restore the host configuration so if host and
+     * guest VLs are the same we might suppress an update.
+     */
+    sme_cond_update_smcr(vcpu_sme_max_vq(vcpu) - 1, system_supports_fa64(),
+                 system_supports_sme2(), SYS_SMCR_EL2);
+
+    if (vcpu_za_enabled(vcpu)) {
+        isb();
+        __sme_save_state(vcpu_sme_state(vcpu), vcpu_has_sme2(vcpu));
+    }
+}
+
+static void __hyp_sme_restore_host(void)
+{
+    /*
+     * The hypervisor refuses to run if we are in streaming mode
+     * or have ZA enabled so there is no SME specific state to
+     * restore other than the system registers.
+     *
+     * Note that this constrains the PE to the maximum shared VL
+     * that was discovered, if we wish to use larger VLs this will
+     * need to be revisited.
+     */
+    sme_cond_update_smcr(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SME]) - 1,
+                 cpus_have_final_cap(ARM64_SME_FA64),
+                 cpus_have_final_cap(ARM64_SME2), SYS_SMCR_EL2);
+
+    write_sysreg_el1(*host_data_ptr(smcr_el1), SYS_SMCR);
+
+    sme_smstop();
+}
+
+static void fpsimd_sve_flush(struct kvm_vcpu *vcpu)
+{
+    /*
+     * If the guest has SME then we need to restore the trap
+     * controls in SMCR and mode in SVCR in order to ensure that
+     * traps generated directly to EL1 have the correct types,
+     * otherwise we can defer until we load the guest state.
+     */
+    if (vcpu_has_sme(vcpu)) {
+        kvm_hyp_save_fpsimd_host(vcpu);
+        kvm_sme_configure_traps(vcpu);
+
+        *host_data_ptr(fp_owner) = FP_STATE_FREE;
+    } else {
+        *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+    }
 }

 static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
@@ -75,7 +143,10 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
      */
     isb();

-    if (vcpu_has_sve(vcpu))
+    if (vcpu_has_sme(vcpu))
+        __hyp_sme_save_guest(vcpu);
+
+    if (vcpu_has_sve(vcpu) || vcpu_in_streaming_mode(vcpu))
         __hyp_sve_save_guest(vcpu);
     else
         __fpsimd_save_state(&vcpu->arch.ctxt.fp_regs);
@@ -84,6 +155,9 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
     if (has_fpmr)
         __vcpu_assign_sys_reg(vcpu, FPMR, read_sysreg_s(SYS_FPMR));

+    if (system_supports_sme())
+        __hyp_sme_restore_host();
+
     if (system_supports_sve())
         __hyp_sve_restore_host();
     else
@@ -121,7 +195,7 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
     struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;

-    fpsimd_sve_flush();
+    fpsimd_sve_flush(host_vcpu);
     flush_debug_state(hyp_vcpu);

     hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
@@ -207,10 +281,9 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
     struct pkvm_hyp_vcpu *hyp_vcpu = pkvm_get_loaded_hyp_vcpu();

     /*
-     * KVM (and pKVM) doesn't support SME guests for now, and
-     * ensures that SME features aren't enabled in pstate when
-     * loading a vcpu. Therefore, if SME features enabled the host
-     * is misbehaving.
+     * KVM (and pKVM) refuses to run if PSTATE.{SM,ZA} are
+     * enabled. Therefore, if SME features are enabled the
+     * host is misbehaving.
      */
     if (unlikely(system_supports_sme() && read_sysreg_s(SYS_SVCR))) {
         ret = -EINVAL;
--
2.47.3

From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:16 +0000
Subject: [PATCH v10 24/30] KVM: arm64: Handle SME exceptions
Message-Id: <20260306-kvm-arm64-sme-v10-24-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
The access control for SME follows the same structure as for the base FP
and SVE extensions, with control being via CPACR_ELx.SMEN and
CPTR_EL2.TSM, mirroring the equivalent FPSIMD and SVE controls in those
registers. Add handling for these controls and exceptions, mirroring the
existing handling for FPSIMD and SVE.

Reviewed-by: Fuad Tabba
Signed-off-by: Mark Brown
---
 arch/arm64/kvm/handle_exit.c            | 14 ++++++++++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 11 ++++++-----
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 ++
 arch/arm64/kvm/hyp/vhe/switch.c         | 17 ++++++++++++-----
 4 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index cc7d5d1709cb..1e54d5d722e4 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -237,6 +237,19 @@ static int handle_sve(struct kvm_vcpu *vcpu)
     return 1;
 }

+/*
+ * Guest access to SME registers should be routed to this handler only
+ * when the system doesn't support SME.
+ */
+static int handle_sme(struct kvm_vcpu *vcpu)
+{
+    if (guest_hyp_sme_traps_enabled(vcpu))
+        return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+    kvm_inject_undefined(vcpu);
+    return 1;
+}
+
 /*
  * Two possibilities to handle a trapping ptrauth instruction:
  *
@@ -390,6 +403,7 @@ static exit_handle_fn arm_exit_handlers[] = {
     [ESR_ELx_EC_SVC64]    = handle_svc,
     [ESR_ELx_EC_SYS64]    = kvm_handle_sys_reg,
     [ESR_ELx_EC_SVE]    = handle_sve,
+    [ESR_ELx_EC_SME]    = handle_sme,
     [ESR_ELx_EC_ERET]    = kvm_handle_eret,
     [ESR_ELx_EC_IABT_LOW]    = kvm_handle_guest_abort,
     [ESR_ELx_EC_DABT_LOW]    = kvm_handle_guest_abort,
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 7312b8f34c7a..29f7ea519e8a 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -67,11 +67,8 @@ static inline void __activate_cptr_traps_nvhe(struct kvm_vcpu *vcpu)
 {
     u64 val = CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA;

-    /*
-     * Always trap SME since it's not supported in KVM.
-     * TSM is RES1 if SME isn't implemented.
-     */
-    val |= CPTR_EL2_TSM;
+    if (!vcpu_has_sme(vcpu) || !guest_owns_fp_regs())
+        val |= CPTR_EL2_TSM;

     if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
         val |= CPTR_EL2_TZ;
@@ -99,6 +96,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
             val |= CPACR_EL1_FPEN;
         if (vcpu_has_sve(vcpu))
             val |= CPACR_EL1_ZEN;
+        if (vcpu_has_sme(vcpu))
+            val |= CPACR_EL1_SMEN;
     }

     if (!vcpu_has_nv(vcpu))
@@ -140,6 +139,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
         val &= ~CPACR_EL1_FPEN;
     if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0)))
         val &= ~CPACR_EL1_ZEN;
+    if (!(SYS_FIELD_GET(CPACR_EL1, SMEN, cptr) & BIT(0)))
+        val &= ~CPACR_EL1_SMEN;

     if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
         val |= cptr & CPACR_EL1_E0POE;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 779089e42681..5e5e3c2d4ea8 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -181,6 +181,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
     [ESR_ELx_EC_CP15_32]        = kvm_hyp_handle_cp15_32,
     [ESR_ELx_EC_SYS64]        = kvm_hyp_handle_sysreg,
     [ESR_ELx_EC_SVE]        = kvm_hyp_handle_fpsimd,
+    [ESR_ELx_EC_SME]        = kvm_hyp_handle_fpsimd,
     [ESR_ELx_EC_FP_ASIMD]        = kvm_hyp_handle_fpsimd,
     [ESR_ELx_EC_IABT_LOW]        = kvm_hyp_handle_iabt_low,
     [ESR_ELx_EC_DABT_LOW]        = kvm_hyp_handle_dabt_low,
@@ -192,6 +193,7 @@ static const exit_handler_fn pvm_exit_handlers[] = {
     [0 ... ESR_ELx_EC_MAX]        = NULL,
     [ESR_ELx_EC_SYS64]        = kvm_handle_pvm_sys64,
     [ESR_ELx_EC_SVE]        = kvm_handle_pvm_restricted,
+    [ESR_ELx_EC_SME]        = kvm_handle_pvm_restricted,
     [ESR_ELx_EC_FP_ASIMD]        = kvm_hyp_handle_fpsimd,
     [ESR_ELx_EC_IABT_LOW]        = kvm_hyp_handle_iabt_low,
     [ESR_ELx_EC_DABT_LOW]        = kvm_hyp_handle_dabt_low,
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 9db3f11a4754..563ac85f0146 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -458,22 +458,28 @@ static bool kvm_hyp_handle_cpacr_el1(struct kvm_vcpu *vcpu, u64 *exit_code)
     return true;
 }

-static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
+static bool kvm_hyp_handle_vec_cr_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
     u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));

     if (!vcpu_has_nv(vcpu))
         return false;

-    if (sysreg != SYS_ZCR_EL2)
+    switch (sysreg) {
+    case SYS_ZCR_EL2:
+    case SYS_SMCR_EL2:
+        break;
+    default:
         return false;
+    }

     if (guest_owns_fp_regs())
         return false;

     /*
-     * ZCR_EL2 traps are handled in the slow path, with the expectation
-     * that the guest's FP context has already been loaded onto the CPU.
+     * ZCR_EL2 and SMCR_EL2 traps are handled in the slow path,
+     * with the expectation that the guest's FP context has
+     * already been loaded onto the CPU.
      *
      * Load the guest's FP context and unconditionally forward to the
      * slow path for handling (i.e. return false).
@@ -493,7 +499,7 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
     if (kvm_hyp_handle_cpacr_el1(vcpu, exit_code))
         return true;

-    if (kvm_hyp_handle_zcr_el2(vcpu, exit_code))
+    if (kvm_hyp_handle_vec_cr_el2(vcpu, exit_code))
         return true;

     return kvm_hyp_handle_sysreg(vcpu, exit_code);
@@ -522,6 +528,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
     [0 ... ESR_ELx_EC_MAX]        = NULL,
     [ESR_ELx_EC_CP15_32]        = kvm_hyp_handle_cp15_32,
     [ESR_ELx_EC_SYS64]        = kvm_hyp_handle_sysreg_vhe,
+    [ESR_ELx_EC_SME]        = kvm_hyp_handle_fpsimd,
     [ESR_ELx_EC_SVE]        = kvm_hyp_handle_fpsimd,
     [ESR_ELx_EC_FP_ASIMD]        = kvm_hyp_handle_fpsimd,
     [ESR_ELx_EC_IABT_LOW]        = kvm_hyp_handle_iabt_low,
--
2.47.3

From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:17 +0000
Subject: [PATCH v10 25/30] KVM: arm64: Expose SME to nested guests
Message-Id: <20260306-kvm-arm64-sme-v10-25-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
nMN10SjIlTd292CD3GLuAEsPdyiGNjzQ4+fc8Rqct/JQL/bFF6mPRPhW/zXZcLwYSNH/EIfd6hL /8cBWedrr9rAa7eRWv12ECW5wdSfZ+67R0HqU5MKPCvO77HS X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB With support for context switching SME state in place allow access to SME in nested guests. The SME floating point state is handled along with all the other floating point state, SME specific floating point exceptions are directed into the same handlers as other floating point exceptions with NV specific handling for the vector lengths already in place. TPIDR2_EL0 is context switched along with the other TPIDRs as part of the main guest register context switch. SME priority support is currently masked from all guests including nested ones. Reviewed-by: Fuad Tabba Signed-off-by: Mark Brown --- arch/arm64/kvm/nested.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c index 12c9f6e8dfda..a46002004988 100644 --- a/arch/arm64/kvm/nested.c +++ b/arch/arm64/kvm/nested.c @@ -1540,14 +1540,13 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 v= al) break; =20 case SYS_ID_AA64PFR1_EL1: - /* Only support BTI, SSBS, CSV2_frac */ + /* Only support BTI, SME, SSBS, CSV2_frac */ val &=3D ~(ID_AA64PFR1_EL1_PFAR | ID_AA64PFR1_EL1_MTEX | ID_AA64PFR1_EL1_THE | ID_AA64PFR1_EL1_GCS | ID_AA64PFR1_EL1_MTE_frac | ID_AA64PFR1_EL1_NMI | - ID_AA64PFR1_EL1_SME | ID_AA64PFR1_EL1_RES0 | ID_AA64PFR1_EL1_MPAM_frac | ID_AA64PFR1_EL1_MTE); --=20 2.47.3 From nobody Sun Apr 5 16:30:28 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 843D5407598; Fri, 6 Mar 2026 17:11:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; 
d=subspace.kernel.org; s=arc-20240116; t=1772817080; cv=none; b=e+9VxGo/bIM4dtacO0i9AYYD52itJNmwFG8Pm321z/o5edK6PgKsXKHe1OHigtiZ5GIQ6p3HdJJ8v0AxWA/GgMzTRXcH0pJVp1O90wW9IF5G+bDnUJedWcuL09J9dLVYrtP4uTH6vYKqmMopruDo0cjcdXgLERpdRQGhfXUTWQE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772817080; c=relaxed/simple; bh=6m0sKJbA/KvxO4vsamJcMuexvM6p6QuYEsrhuWbCs1o=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=XxUsrnip/hfKLfI9UQ+Mxud3vuJDCWPYHoQ/FrYkB1kuIdM+NbIkNBf346OSQ9AnK4kRAdgH5i1vP6WU3Be/R96LQFYorlCjfdepZFVnJKSs4dYWAXzq88l5hrG78juKOIW5nKZ3ggP/skElmVvNkYMPGam9ecLO6GixLiqknU8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=kkx9QrV/; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="kkx9QrV/" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 412B6C4CEF7; Fri, 6 Mar 2026 17:11:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1772817080; bh=6m0sKJbA/KvxO4vsamJcMuexvM6p6QuYEsrhuWbCs1o=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=kkx9QrV/GY4V34fTo0HxLDz/g6XLe91Md43t8uw/oWXXV8ny8qFUwCxsLfdvxW9Xv CkC9IkFPzVnQu4g1De1QdRj78ZPme3Me2+9NLNABo6NzWF1qoYGoE+mVQkqHqanCpr hXfvslEaFqyFON+GXuzkVNtf9EDzG1Ss5i68aMmQ2xBGWHqSfTkkOI4Ry5QQlIfhZj 4yq7mFbyBHKRMysG6yTmuYZWpH9sC+gGeuxBSSPAk3gBp36hT/LHKrs63oPRqyRJY6 56i8+SjsHhev6fo2vCFBE90NhmYS4l1bMD/QwqtNdAq4u4VWj+XtrC9iJUqfwemUzO CCkOxefOitI5Q== From: Mark Brown Date: Fri, 06 Mar 2026 17:01:18 +0000 Subject: [PATCH v10 26/30] KVM: arm64: Provide interface for configuring and enabling SME for guests Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable 
Message-Id: <20260306-kvm-arm64-sme-v10-26-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell, Eric Auger, Mark Brown

Since SME requires configuration of a vector length in order to know the
size of both the streaming mode SVE state and the ZA array, we implement
a capability for it and require that it be enabled and finalized before
the SME specific state can be accessed, similarly to SVE.

Due to the overlap with sizing the SVE state we finalise both SVE and
SME with a single finalization, preventing any further changes to the
SVE and SME configuration once KVM_ARM_VCPU_VEC (an alias for _VCPU_SVE)
has been finalised.
This is not a thing of great elegance, but it ensures that we never have
a state where one of SVE or SME is finalised and the other not, avoiding
complexity. Since, unlike SVE, there is no architecturally mandated
vector length which must be supported by all PEs, we detect the case
where the feature is supported but there is no shared VL and hide the
feature.

SME is supported for normal and protected guests.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/fpsimd.h    |   2 +-
 arch/arm64/include/asm/kvm_host.h  |  18 +++++-
 arch/arm64/include/uapi/asm/kvm.h  |   1 +
 arch/arm64/kvm/arm.c               |  10 ++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c     |  79 ++++++++++++++++++++-----
 arch/arm64/kvm/hyp/nvhe/sys_regs.c |   6 ++
 arch/arm64/kvm/reset.c             | 116 +++++++++++++++++++++++++++++++-----
 include/uapi/linux/kvm.h           |   1 +
 8 files changed, 197 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index f891261a5c91..409f621685ee 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -470,7 +470,7 @@ static inline void sme_alloc(struct task_struct *task, bool flush) { }
 static inline void sme_setup(void) { }
 static inline unsigned int sme_get_vl(void) { return 0; }
 static inline int sme_max_vl(void) { return 0; }
-static inline int sme_max_virtualisable_vl(void) { return 0; }
+static inline int sme_max_virtualisable_vl(void) { return SME_VQ_INVALID; }
 static inline int sme_set_current_vl(unsigned long arg) { return -EINVAL; }
 static inline int sme_get_current_vl(void) { return -EINVAL; }
 static inline void sme_suspend_exit(void) { }
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f804cf160b1e..28de788ba4d9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -39,7 +39,7 @@

 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS

-#define KVM_VCPU_MAX_FEATURES 9
+#define KVM_VCPU_MAX_FEATURES 10
 #define KVM_VCPU_VALID_FEATURES
(BIT(KVM_VCPU_MAX_FEATURES) - 1)

 #define KVM_REQ_SLEEP \
@@ -82,6 +82,7 @@ extern unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX];
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);

 int __init kvm_arm_init_sve(void);
+int __init kvm_arm_init_sme(void);

 u32 __attribute_const__ kvm_target_cpu(void);
 void kvm_reset_vcpu(struct kvm_vcpu *vcpu);
@@ -1174,7 +1175,14 @@ struct kvm_vcpu_arch {
 	__size_ret;							\
 })

-#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_vl[ARM64_VEC_SVE])
+#define vcpu_sve_state_size(vcpu) ({					\
+	unsigned int __max_vl;						\
+									\
+	__max_vl = max((vcpu)->arch.max_vl[ARM64_VEC_SVE],		\
+		       (vcpu)->arch.max_vl[ARM64_VEC_SME]);		\
+									\
+	sve_state_size_from_vl(__max_vl);				\
+})

 #define vcpu_sme_state(vcpu) (kern_hyp_va((vcpu)->arch.sme_state))

@@ -1774,4 +1782,10 @@ static __always_inline enum fgt_group_id __fgt_reg_to_group_id(enum vcpu_sysreg

 long kvm_get_cap_for_kvm_ioctl(unsigned int ioctl, long *ext);

+static inline bool system_supports_sme_virt(void)
+{
+	return system_supports_sme() &&
+	       sme_max_virtualisable_vl() != sve_vl_from_vq(SME_VQ_INVALID);
+}
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index f68061680f9a..af89a5cc860f 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -106,6 +106,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
 #define KVM_ARM_VCPU_HAS_EL2		7 /* Support nested virtualization */
 #define KVM_ARM_VCPU_HAS_EL2_E2H0	8 /* Limit NV support to E2H RES0 */
+#define KVM_ARM_VCPU_SME		9 /* enable SME for this CPU */

 /*
  * An alias for _SVE since we finalize VL configuration for both SVE and SME
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 410ffd41fd73..aa9f334ae10e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -447,6 +447,9 @@ int
kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_SVE:
 		r = system_supports_sve();
 		break;
+	case KVM_CAP_ARM_SME:
+		r = system_supports_sme_virt();
+		break;
 	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
 	case KVM_CAP_ARM_PTRAUTH_GENERIC:
 		r = kvm_has_full_ptr_auth();
@@ -1502,6 +1505,9 @@ static unsigned long system_supported_vcpu_features(void)
 	if (!system_supports_sve())
 		clear_bit(KVM_ARM_VCPU_SVE, &features);

+	if (!system_supports_sme_virt())
+		clear_bit(KVM_ARM_VCPU_SME, &features);
+
 	if (!kvm_has_full_ptr_auth()) {
 		clear_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features);
 		clear_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features);
@@ -2933,6 +2939,10 @@ static __init int kvm_arm_init(void)
 	if (err)
 		return err;

+	err = kvm_arm_init_sme();
+	if (err)
+		return err;
+
 	err = kvm_arm_vmid_alloc_init();
 	if (err) {
 		kvm_err("Failed to initialize VMID allocator.\n");
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 2757833c4396..70f271aa48da 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -148,10 +148,6 @@ static int pkvm_check_pvm_cpu_features(struct kvm_vcpu *vcpu)
 	    !kvm_has_feat(kvm, ID_AA64PFR0_EL1, AdvSIMD, IMP))
 		return -EINVAL;

-	/* No SME support in KVM right now. Check to catch if it changes.
-	 */
-	if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP))
-		return -EINVAL;
-
 	return 0;
 }

@@ -377,6 +373,11 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
 		kvm->arch.flags |= host_arch_flags & BIT(KVM_ARCH_FLAG_GUEST_HAS_SVE);
 	}

+	if (kvm_pkvm_ext_allowed(kvm, KVM_CAP_ARM_SME)) {
+		set_bit(KVM_ARM_VCPU_SME, allowed_features);
+		kvm->arch.flags |= host_arch_flags & BIT(KVM_ARCH_FLAG_GUEST_HAS_SME);
+	}
+
 	bitmap_and(kvm->arch.vcpu_features, host_kvm->arch.vcpu_features,
 		   allowed_features, KVM_VCPU_MAX_FEATURES);
 }
@@ -391,7 +392,8 @@ static void unpin_host_sve_state(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
 	void *sve_state;

-	if (!vcpu_has_feature(&hyp_vcpu->vcpu, KVM_ARM_VCPU_SVE))
+	if (!vcpu_has_feature(&hyp_vcpu->vcpu, KVM_ARM_VCPU_SVE) &&
+	    !vcpu_has_feature(&hyp_vcpu->vcpu, KVM_ARM_VCPU_SME))
 		return;

 	sve_state = hyp_vcpu->vcpu.arch.sve_state;
@@ -399,6 +401,18 @@ static void unpin_host_sve_state(struct pkvm_hyp_vcpu *hyp_vcpu)
 		sve_state + vcpu_sve_state_size(&hyp_vcpu->vcpu));
 }

+static void unpin_host_sme_state(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	void *sme_state;
+
+	if (!vcpu_has_feature(&hyp_vcpu->vcpu, KVM_ARM_VCPU_SME))
+		return;
+
+	sme_state = kern_hyp_va(hyp_vcpu->vcpu.arch.sme_state);
+	hyp_unpin_shared_mem(sme_state,
+			     sme_state + vcpu_sme_state_size(&hyp_vcpu->vcpu));
+}
+
 static void unpin_host_vcpus(struct pkvm_hyp_vcpu *hyp_vcpus[],
 			     unsigned int nr_vcpus)
 {
@@ -412,6 +426,7 @@ static void unpin_host_vcpus(struct pkvm_hyp_vcpu *hyp_vcpus[],

 		unpin_host_vcpu(hyp_vcpu->host_vcpu);
 		unpin_host_sve_state(hyp_vcpu);
+		unpin_host_sme_state(hyp_vcpu);
 	}
 }

@@ -438,23 +453,35 @@ static void init_pkvm_hyp_vm(struct kvm *host_kvm, struct pkvm_hyp_vm *hyp_vm,
 	mmu->pgt = &hyp_vm->pgt;
 }

-static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *host_vcpu)
+static int pkvm_vcpu_init_vec(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *host_vcpu)
 {
 	struct kvm_vcpu
*vcpu = &hyp_vcpu->vcpu;
-	unsigned int sve_max_vl;
-	size_t sve_state_size;
-	void *sve_state;
+	unsigned int sve_max_vl, sme_max_vl;
+	size_t sve_state_size, sme_state_size;
+	void *sve_state, *sme_state;
 	int ret = 0;

-	if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) {
+	if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE) &&
+	    !vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) {
 		vcpu_clear_flag(vcpu, VCPU_VEC_FINALIZED);
 		return 0;
 	}

 	/* Limit guest vector length to the maximum supported by the host. */
-	sve_max_vl = min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]),
-			 kvm_host_max_vl[ARM64_VEC_SVE]);
-	sve_state_size = sve_state_size_from_vl(sve_max_vl);
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE))
+		sve_max_vl = min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]),
+				 kvm_host_max_vl[ARM64_VEC_SVE]);
+	else
+		sve_max_vl = 0;
+
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME))
+		sme_max_vl = min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SME]),
+				 kvm_host_max_vl[ARM64_VEC_SME]);
+	else
+		sme_max_vl = 0;
+
+	/* We need SVE storage for the larger of normal or streaming mode */
+	sve_state_size = sve_state_size_from_vl(max(sve_max_vl, sme_max_vl));
 	sve_state = kern_hyp_va(READ_ONCE(host_vcpu->arch.sve_state));

 	if (!sve_state || !sve_state_size) {
@@ -466,12 +493,36 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *h
 	if (ret)
 		goto err;

+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) {
+		sme_state_size = sme_state_size_from_vl(sme_max_vl,
+							vcpu_has_sme2(vcpu));
+		sme_state = kern_hyp_va(READ_ONCE(host_vcpu->arch.sme_state));
+
+		if (!sme_state || !sme_state_size) {
+			ret = -EINVAL;
+			goto err_sve_mapped;
+		}
+
+		ret = hyp_pin_shared_mem(sme_state, sme_state + sme_state_size);
+		if (ret)
+			goto err_sve_mapped;
+	} else {
+		sme_state = NULL;
+	}
+
 	vcpu->arch.sve_state = sve_state;
 	vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_max_vl;

+	vcpu->arch.sme_state = sme_state;
+	vcpu->arch.max_vl[ARM64_VEC_SME] = sme_max_vl;
+
 	return 0;
+
+err_sve_mapped:
+	hyp_unpin_shared_mem(sve_state, sve_state + sve_state_size);
 err:
 	clear_bit(KVM_ARM_VCPU_SVE, vcpu->kvm->arch.vcpu_features);
+	clear_bit(KVM_ARM_VCPU_SME, vcpu->kvm->arch.vcpu_features);
 	return ret;
 }

@@ -531,7 +582,7 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 	if (ret)
 		goto done;

-	ret = pkvm_vcpu_init_sve(hyp_vcpu, host_vcpu);
+	ret = pkvm_vcpu_init_vec(hyp_vcpu, host_vcpu);
 done:
 	if (ret)
 		unpin_host_vcpu(host_vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 06d28621722e..f21a6be65842 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -66,6 +66,11 @@ static bool vm_has_ptrauth(const struct kvm *kvm)
 		kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC);
 }

+static bool vm_has_sme(const struct kvm *kvm)
+{
+	return system_supports_sme() && kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_SME);
+}
+
 static bool vm_has_sve(const struct kvm *kvm)
 {
 	return system_supports_sve() && kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_SVE);
@@ -102,6 +107,7 @@ static const struct pvm_ftr_bits pvmid_aa64pfr0[] = {
 };

 static const struct pvm_ftr_bits pvmid_aa64pfr1[] = {
+	MAX_FEAT_FUNC(ID_AA64PFR1_EL1, SME, SME2, vm_has_sme),
 	MAX_FEAT(ID_AA64PFR1_EL1, BT, IMP),
 	MAX_FEAT(ID_AA64PFR1_EL1, SSBS, SSBS2),
 	MAX_FEAT_ENUM(ID_AA64PFR1_EL1, MTE_frac, NI),
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index a8684a1346ec..59a6cb71ffef 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -76,6 +76,28 @@ int __init kvm_arm_init_sve(void)
 	return 0;
 }

+int __init kvm_arm_init_sme(void)
+{
+	if (system_supports_sme()) {
+		kvm_host_max_vl[ARM64_VEC_SME] = sme_max_vl();
+		kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SME]) = kvm_host_max_vl[ARM64_VEC_SME];
+	}
+
+	if (system_supports_sme_virt()) {
+		kvm_max_vl[ARM64_VEC_SME] = sme_max_virtualisable_vl();
+
+		/*
+		 * Don't even try to make use of vector lengths that
+		 * aren't available on all CPUs, for now:
+		 */
+		if (kvm_max_vl[ARM64_VEC_SME] < sme_max_vl())
+			pr_warn("KVM: SME vector length for guests limited to %u bytes\n",
+				kvm_max_vl[ARM64_VEC_SME]);
+	}
+
+	return 0;
+}
+
 static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.max_vl[ARM64_VEC_SVE] = kvm_max_vl[ARM64_VEC_SVE];
@@ -88,42 +110,90 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 	set_bit(KVM_ARCH_FLAG_GUEST_HAS_SVE, &vcpu->kvm->arch.flags);
 }

+static void kvm_vcpu_enable_sme(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.max_vl[ARM64_VEC_SME] = kvm_max_vl[ARM64_VEC_SME];
+
+	/*
+	 * Userspace can still customize the vector lengths by writing
+	 * KVM_REG_ARM64_SME_VLS. Allocation is deferred until
+	 * kvm_arm_vcpu_finalize(), which freezes the configuration.
+	 */
+	set_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &vcpu->kvm->arch.flags);
+}
+
 /*
- * Finalize vcpu's maximum SVE vector length, allocating
- * vcpu->arch.sve_state as necessary.
+ * Finalize vcpu's maximum vector lengths, allocating
+ * vcpu->arch.sve_state and vcpu->arch.sme_state as necessary.
 */
 static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu)
 {
-	void *buf;
+	void *sve_state, *sme_state;
 	unsigned int vl;
-	size_t reg_sz;
 	int ret;

-	vl = vcpu->arch.max_vl[ARM64_VEC_SVE];
-
 	/*
 	 * Responsibility for these properties is shared between
 	 * kvm_arm_init_sve(), kvm_vcpu_enable_sve() and
 	 * set_sve_vls().
Double-check here just to be sure:
 	 */
-	if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() ||
-		    vl > VL_ARCH_MAX))
-		return -EIO;
+	if (vcpu_has_sve(vcpu)) {
+		vl = vcpu->arch.max_vl[ARM64_VEC_SVE];
+		if (WARN_ON(!sve_vl_valid(vl) ||
+			    vl > sve_max_virtualisable_vl() ||
+			    vl > VL_ARCH_MAX))
+			return -EIO;
+	} else {
+		vcpu->arch.max_vl[ARM64_VEC_SVE] = 0;
+	}

-	reg_sz = vcpu_sve_state_size(vcpu);
-	buf = kzalloc(reg_sz, GFP_KERNEL_ACCOUNT);
-	if (!buf)
+	/* Similarly for SME */
+	if (vcpu_has_sme(vcpu)) {
+		vl = vcpu->arch.max_vl[ARM64_VEC_SME];
+		if (WARN_ON(!sve_vl_valid(vl) ||
+			    vl > sme_max_virtualisable_vl() ||
+			    vl > VL_ARCH_MAX))
+			return -EIO;
+	} else {
+		vcpu->arch.max_vl[ARM64_VEC_SME] = 0;
+	}
+
+	sve_state = kzalloc(vcpu_sve_state_size(vcpu), GFP_KERNEL_ACCOUNT);
+	if (!sve_state)
 		return -ENOMEM;

-	ret = kvm_share_hyp(buf, buf + reg_sz);
-	if (ret) {
-		kfree(buf);
-		return ret;
+	ret = kvm_share_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
+	if (ret)
+		goto err_sve_alloc;
+
+	if (vcpu_has_sme(vcpu)) {
+		sme_state = kzalloc(vcpu_sme_state_size(vcpu),
+				    GFP_KERNEL_ACCOUNT);
+		if (!sme_state) {
+			ret = -ENOMEM;
+			goto err_sve_map;
+		}
+
+		ret = kvm_share_hyp(sme_state,
+				    sme_state + vcpu_sme_state_size(vcpu));
+		if (ret)
+			goto err_sme_alloc;
+	} else {
+		sme_state = NULL;
 	}
-
-	vcpu->arch.sve_state = buf;
+
+	vcpu->arch.sve_state = sve_state;
+	vcpu->arch.sme_state = sme_state;
 	vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED);
 	return 0;
+
+err_sme_alloc:
+	kfree(sme_state);
+err_sve_map:
+	kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
+err_sve_alloc:
+	kfree(sve_state);
+	return ret;
 }

 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature)
@@ -153,20 +223,26 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
 	void *sve_state = vcpu->arch.sve_state;
+	void *sme_state = vcpu->arch.sme_state;
 	kvm_unshare_hyp(vcpu, vcpu + 1);
 	if (sve_state)
 		kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
 	kfree(sve_state);
 	free_page((unsigned long)vcpu->arch.ctxt.vncr_array);
+	if (sme_state)
+		kvm_unshare_hyp(sme_state, sme_state + vcpu_sme_state_size(vcpu));
+	kfree(sme_state);
 	kfree(vcpu->arch.vncr_tlb);
 	kfree(vcpu->arch.ccsidr);
 }

 static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu)
 {
-	if (vcpu_has_sve(vcpu))
+	if (vcpu_has_sve(vcpu) || vcpu_has_sme(vcpu))
 		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
+	if (vcpu_has_sme(vcpu))
+		memset(vcpu->arch.sme_state, 0, vcpu_sme_state_size(vcpu));
 }

 /**
@@ -206,6 +282,8 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	if (!kvm_arm_vcpu_vec_finalized(vcpu)) {
 		if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE))
 			kvm_vcpu_enable_sve(vcpu);
+		if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME))
+			kvm_vcpu_enable_sme(vcpu);
 	} else {
 		kvm_vcpu_reset_vec(vcpu);
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 65500f5db379..5b502fd2bfec 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -985,6 +985,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_ARM_SEA_TO_USER 245
 #define KVM_CAP_S390_USER_OPEREXEC 246
 #define KVM_CAP_S390_KEYOP 247
+#define KVM_CAP_ARM_SME 248

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;

-- 
2.47.3

From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:19 +0000
Subject: [PATCH v10 27/30] KVM: arm64: selftests: Remove spurious check for single bit safe values
Message-Id:
<20260306-kvm-arm64-sme-v10-27-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell, Eric Auger, Mark Brown

get_safe_value() currently asserts that the bitfields it generates safe
values for must be more than one bit wide. In actual fact it is always
possible to generate a safe value to write to a bitfield, even if that
value is just the current one, and the function handles that case
correctly. Remove the assert.
Fixes: bf09ee918053e ("KVM: arm64: selftests: Remove ARM64_FEATURE_FIELD_BITS and its last user")
Reviewed-by: Ben Horgan
Signed-off-by: Mark Brown
---
 tools/testing/selftests/kvm/arm64/set_id_regs.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index 73de5be58bab..bfca7be3e766 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -269,8 +269,6 @@ uint64_t get_safe_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
 {
 	uint64_t ftr_max = ftr_bits->mask >> ftr_bits->shift;

-	TEST_ASSERT(ftr_max > 1, "This test doesn't support single bit features");
-
 	if (ftr_bits->sign == FTR_UNSIGNED) {
 		switch (ftr_bits->type) {
 		case FTR_EXACT:

-- 
2.47.3

From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:20 +0000
Subject: [PATCH v10 28/30] KVM: arm64: selftests: Skip impossible invalid value tests
Message-Id: <20260306-kvm-arm64-sme-v10-28-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell, Eric Auger, Mark Brown

The set_id_regs test currently assumes that there will always be invalid
values available for it to generate for a bitfield, but this may not be
the case if the architecture has defined meanings for every possible
value of the bitfield. An assert added in commit bf09ee918053e ("KVM:
arm64: selftests: Remove ARM64_FEATURE_FIELD_BITS and its last user")
refuses to run for single bit fields, which show the issue most readily,
but there is no reason wider fields can't show the same issue.

Rework the tests for invalid values to check whether an invalid value
can be generated and skip the test if not, removing the assert.
Signed-off-by: Mark Brown
---
 tools/testing/selftests/kvm/arm64/set_id_regs.c | 63 +++++++++++++++++++++----
 1 file changed, 53 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index bfca7be3e766..928e7d9e5ab7 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -317,11 +317,12 @@ uint64_t get_safe_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
 }

 /* Return an invalid value to a given ftr_bits an ftr value */
-uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
+uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr,
+			   bool *skip)
 {
 	uint64_t ftr_max = ftr_bits->mask >> ftr_bits->shift;

-	TEST_ASSERT(ftr_max > 1, "This test doesn't support single bit features");
+	*skip = false;

 	if (ftr_bits->sign == FTR_UNSIGNED) {
 		switch (ftr_bits->type) {
@@ -329,42 +330,81 @@ uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
 			ftr = max((uint64_t)ftr_bits->safe_val + 1, ftr + 1);
 			break;
 		case FTR_LOWER_SAFE:
+			if (ftr == ftr_max)
+				*skip = true;
 			ftr++;
 			break;
 		case FTR_HIGHER_SAFE:
+			if (ftr == 0)
+				*skip = true;
 			ftr--;
 			break;
 		case FTR_HIGHER_OR_ZERO_SAFE:
-			if (ftr == 0)
+			switch (ftr) {
+			case 0:
 				ftr = ftr_max;
-			else
+				break;
+			case 1:
+				*skip = true;
+				break;
+			default:
 				ftr--;
+				break;
+			}
 			break;
 		default:
+			*skip = true;
 			break;
 		}
 	} else if (ftr != ftr_max) {
 		switch (ftr_bits->type) {
 		case FTR_EXACT:
 			ftr = max((uint64_t)ftr_bits->safe_val + 1, ftr + 1);
+			if (ftr >= ftr_max)
+				*skip = true;
 			break;
 		case FTR_LOWER_SAFE:
 			ftr++;
 			break;
 		case FTR_HIGHER_SAFE:
-			ftr--;
+			/* FIXME: "need to check for the actual highest."
+			 */
+			if (ftr == ftr_max)
+				*skip = true;
+			else
+				ftr--;
 			break;
 		case FTR_HIGHER_OR_ZERO_SAFE:
-			if (ftr == 0)
-				ftr = ftr_max - 1;
-			else
+			switch (ftr) {
+			case 0:
+				if (ftr_max > 1)
+					ftr = ftr_max - 1;
+				else
+					*skip = true;
+				break;
+			case 1:
+				*skip = true;
+				break;
+			default:
 				ftr--;
+				break;
+			}
 			break;
 		default:
+			*skip = true;
 			break;
 		}
 	} else {
-		ftr = 0;
+		switch (ftr_bits->type) {
+		case FTR_LOWER_SAFE:
+			if (ftr == 0)
+				*skip = true;
+			else
+				ftr = 0;
+			break;
+		default:
+			*skip = true;
+			break;
+		}
 	}

 	return ftr;
@@ -399,12 +439,15 @@ static void test_reg_set_fail(struct kvm_vcpu *vcpu, uint64_t reg,
 	uint8_t shift = ftr_bits->shift;
 	uint64_t mask = ftr_bits->mask;
 	uint64_t val, old_val, ftr;
+	bool skip;
 	int r;

 	val = vcpu_get_reg(vcpu, reg);
 	ftr = (val & mask) >> shift;

-	ftr = get_invalid_value(ftr_bits, ftr);
+	ftr = get_invalid_value(ftr_bits, ftr, &skip);
+	if (skip)
+		return;

 	old_val = val;
 	ftr <<= shift;

-- 
2.47.3

From nobody Sun Apr 5 16:30:28 2026
From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:21 +0000
Subject: [PATCH v10 29/30] KVM: arm64: selftests: Add SME system registers to get-reg-list
Message-Id: <20260306-kvm-arm64-sme-v10-29-43f7683a0fb7@kernel.org>
References: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
In-Reply-To: <20260306-kvm-arm64-sme-v10-0-43f7683a0fb7@kernel.org>
To: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Oliver Upton
Cc: Dave Martin, Fuad Tabba, Mark Rutland, Ben Horgan, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Peter Maydell , Eric Auger , Mark Brown X-Mailer: b4 0.15-dev-6ac23 X-Developer-Signature: v=1; a=openpgp-sha256; l=3018; i=broonie@kernel.org; h=from:subject:message-id; bh=I21JH58CR/AbSnYTHYyjog9l03x/hS5CLetpI7AAePU=; b=owGbwMvMwMWocq27KDak/QLjabUkhszVXPYxP0vPWJerJQp1NrPbO5dFf7Jn4H+kHO3etesJQ +GzhYadjMYsDIxcDLJiiixrn2WsSg+X2Dr/0fxXMINYmUCmMHBxCsBEVtxh/++W7jfFv8hL8ceL /DDmgOrMo7eZXzwwmPN54/H1mflcaiHWZmfS6i7xLtg2m1/WoWvLqiiZ+sXHpN/Fs7rnsm2P32n vHBz7IXb/r2PhsXn6h+8rygSqbKzYJe9q8fdzckrmplDDuUseGfwSN92ZZH69+kWDri7f6tajT9 3cX9jHKoikvuP52VbRFLHM+8+MHWrME55qOsWJOKn/UDlqWvOzv+RJub1e6uOVNceL7vY9Tpv5+ UOBn7iFkKX+pZtnPF6z2Hkd3C3r+E/tc91cNt01rJa8kdX5wdXq7qqclt+WHG7I/bPC8dDmnX7C sTdk+HWtJ150DZ6iz/506rab2/NVk27o5TcEtrb9PMMHAA== X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME adds a number of new system registers, update get-reg-list to check for them based on the visibility of SME. 
Signed-off-by: Mark Brown
---
 tools/testing/selftests/kvm/arm64/get-reg-list.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/arm64/get-reg-list.c b/tools/testing/selftests/kvm/arm64/get-reg-list.c
index 0a3a94c4cca1..876c4719e2e2 100644
--- a/tools/testing/selftests/kvm/arm64/get-reg-list.c
+++ b/tools/testing/selftests/kvm/arm64/get-reg-list.c
@@ -61,7 +61,13 @@ static struct feature_id_reg feat_id_regs[] = {
 	REG_FEAT(HFGITR2_EL2, ID_AA64MMFR0_EL1, FGT, FGT2),
 	REG_FEAT(HDFGRTR2_EL2, ID_AA64MMFR0_EL1, FGT, FGT2),
 	REG_FEAT(HDFGWTR2_EL2, ID_AA64MMFR0_EL1, FGT, FGT2),
-	REG_FEAT(ZCR_EL2, ID_AA64PFR0_EL1, SVE, IMP),
+	REG_FEAT(SMCR_EL1, ID_AA64PFR1_EL1, SME, IMP),
+	REG_FEAT(SMCR_EL2, ID_AA64PFR1_EL1, SME, IMP),
+	REG_FEAT(SMIDR_EL1, ID_AA64PFR1_EL1, SME, IMP),
+	REG_FEAT(SMPRI_EL1, ID_AA64PFR1_EL1, SME, IMP),
+	REG_FEAT(SMPRIMAP_EL2, ID_AA64PFR1_EL1, SME, IMP),
+	REG_FEAT(TPIDR2_EL0, ID_AA64PFR1_EL1, SME, IMP),
+	REG_FEAT(SVCR, ID_AA64PFR1_EL1, SME, IMP),
 	REG_FEAT(SCTLR2_EL1, ID_AA64MMFR3_EL1, SCTLRX, IMP),
 	REG_FEAT(SCTLR2_EL2, ID_AA64MMFR3_EL1, SCTLRX, IMP),
 	REG_FEAT(VDISR_EL2, ID_AA64PFR0_EL1, RAS, IMP),
@@ -367,6 +373,7 @@ static __u64 base_regs[] = {
 	ARM64_SYS_REG(3, 0, 0, 0, 0),	/* MIDR_EL1 */
 	ARM64_SYS_REG(3, 0, 0, 0, 6),	/* REVIDR_EL1 */
 	ARM64_SYS_REG(3, 1, 0, 0, 1),	/* CLIDR_EL1 */
+	ARM64_SYS_REG(3, 1, 0, 0, 6),	/* SMIDR_EL1 */
 	ARM64_SYS_REG(3, 1, 0, 0, 7),	/* AIDR_EL1 */
 	ARM64_SYS_REG(3, 3, 0, 0, 1),	/* CTR_EL0 */
 	ARM64_SYS_REG(2, 0, 0, 0, 4),
@@ -498,6 +505,8 @@ static __u64 base_regs[] = {
 	ARM64_SYS_REG(3, 0, 1, 0, 1),	/* ACTLR_EL1 */
 	ARM64_SYS_REG(3, 0, 1, 0, 2),	/* CPACR_EL1 */
 	KVM_ARM64_SYS_REG(SYS_SCTLR2_EL1),
+	ARM64_SYS_REG(3, 0, 1, 2, 4),	/* SMPRI_EL1 */
+	ARM64_SYS_REG(3, 0, 1, 2, 6),	/* SMCR_EL1 */
 	ARM64_SYS_REG(3, 0, 2, 0, 0),	/* TTBR0_EL1 */
 	ARM64_SYS_REG(3, 0, 2, 0, 1),	/* TTBR1_EL1 */
 	ARM64_SYS_REG(3, 0, 2, 0, 2),	/* TCR_EL1 */
@@ -518,9 +527,11 @@ static __u64 base_regs[] = {
 	ARM64_SYS_REG(3, 0, 13, 0, 4),	/* TPIDR_EL1 */
 	ARM64_SYS_REG(3, 0, 14, 1, 0),	/* CNTKCTL_EL1 */
 	ARM64_SYS_REG(3, 2, 0, 0, 0),	/* CSSELR_EL1 */
+	ARM64_SYS_REG(3, 3, 4, 2, 2),	/* SVCR */
 	ARM64_SYS_REG(3, 3, 10, 2, 4),	/* POR_EL0 */
 	ARM64_SYS_REG(3, 3, 13, 0, 2),	/* TPIDR_EL0 */
 	ARM64_SYS_REG(3, 3, 13, 0, 3),	/* TPIDRRO_EL0 */
+	ARM64_SYS_REG(3, 3, 13, 0, 5),	/* TPIDR2_EL0 */
 	ARM64_SYS_REG(3, 3, 14, 0, 1),	/* CNTPCT_EL0 */
 	ARM64_SYS_REG(3, 3, 14, 2, 1),	/* CNTP_CTL_EL0 */
 	ARM64_SYS_REG(3, 3, 14, 2, 2),	/* CNTP_CVAL_EL0 */
@@ -730,6 +741,8 @@ static __u64 el2_regs[] = {
 	SYS_REG(HFGITR_EL2),
 	SYS_REG(HACR_EL2),
 	SYS_REG(ZCR_EL2),
+	SYS_REG(SMPRIMAP_EL2),
+	SYS_REG(SMCR_EL2),
 	SYS_REG(HCRX_EL2),
 	SYS_REG(TTBR0_EL2),
 	SYS_REG(TTBR1_EL2),
-- 
2.47.3

From: Mark Brown
Date: Fri, 06 Mar 2026 17:01:22 +0000
Subject: [PATCH v10 30/30] KVM: arm64: selftests: Add SME to set_id_regs test
Message-Id: <20260306-kvm-arm64-sme-v10-30-43f7683a0fb7@kernel.org>
Add coverage of the SME ID registers to set_id_regs. ID_AA64PFR1_EL1.SME
becomes writable and we add ID_AA64SMFR0_EL1 and its subfields.

Signed-off-by: Mark Brown
---
 tools/testing/selftests/kvm/arm64/set_id_regs.c | 30 ++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index 928e7d9e5ab7..042a7496ec83 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -145,6 +145,7 @@ static const struct reg_ftr_bits ftr_id_aa64pfr0_el1[] = {
 static const struct reg_ftr_bits ftr_id_aa64pfr1_el1[] = {
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, DF2, 0),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, CSV2_frac, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, SME, 0),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, SSBS, ID_AA64PFR1_EL1_SSBS_NI),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, BT, 0),
 	REG_FTR_END,
@@ -202,6 +203,33 @@ static const struct reg_ftr_bits ftr_id_aa64mmfr3_el1[] = {
 	REG_FTR_END,
 };
 
+static const struct reg_ftr_bits ftr_id_aa64smfr0_el1[] = {
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, FA64, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, LUTv2, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SMEver, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I16I64, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F64F64, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I16I32, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, B16B16, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F16F16, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F8F16, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F8F32, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I8I32, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F16F32, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, B16F32, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, BI32I32, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F32F32, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8FMA, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8DP4, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8DP2, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SBitPerm, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, AES, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SFEXPA, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, STMOP, 0),
+	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SMOP4, 0),
+	REG_FTR_END,
+};
+
 static const struct reg_ftr_bits ftr_id_aa64zfr0_el1[] = {
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F64MM, 0),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F32MM, 0),
@@ -234,6 +262,7 @@ static struct test_feature_reg test_regs[] = {
 	TEST_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1_el1),
 	TEST_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2_el1),
 	TEST_REG(SYS_ID_AA64MMFR3_EL1, ftr_id_aa64mmfr3_el1),
+	TEST_REG(SYS_ID_AA64SMFR0_EL1, ftr_id_aa64smfr0_el1),
 	TEST_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0_el1),
 };
 
@@ -253,6 +282,7 @@ static void guest_code(void)
 	GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1);
 	GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1);
 	GUEST_REG_SYNC(SYS_ID_AA64MMFR3_EL1);
+	GUEST_REG_SYNC(SYS_ID_AA64SMFR0_EL1);
 	GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1);
 	GUEST_REG_SYNC(SYS_MPIDR_EL1);
 	GUEST_REG_SYNC(SYS_CLIDR_EL1);
-- 
2.47.3