From: Mark Brown
Date: Thu, 29 Feb 2024 21:47:34 +0000
Subject: [PATCH v2 1/2] KVM: arm64: Rename variable for tracking ownership of FP state
Message-Id: <20240229-kvm-arm64-group-fp-data-v2-1-276de0d550e8@kernel.org>
References: <20240229-kvm-arm64-group-fp-data-v2-0-276de0d550e8@kernel.org>
In-Reply-To: <20240229-kvm-arm64-group-fp-data-v2-0-276de0d550e8@kernel.org>
To: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, Mark Brown

In preparation for refactoring how we store the actual FP state into a
single struct, free up the name 'fp_state': rename the variable we
currently use to track ownership of the FP registers to 'fp_owner',
which is more specific to its usage. While we're at it, also move the
enum definition next to the rest of the FP state.

No functional changes.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_emulate.h    |  4 ++--
 arch/arm64/include/asm/kvm_host.h       | 14 +++++++-------
 arch/arm64/kvm/arm.c                    |  2 +-
 arch/arm64/kvm/fpsimd.c                 | 10 +++++-----
 arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      |  4 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 8 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index b804fe832184..1211d93aa712 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -593,7 +593,7 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
 		val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
 
 		if (!vcpu_has_sve(vcpu) ||
-		    (vcpu->arch.fp_state != FP_STATE_GUEST_OWNED))
+		    (vcpu->arch.fp_owner != FP_STATE_GUEST_OWNED))
 			val |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN;
 		if (cpus_have_final_cap(ARM64_SME))
 			val |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN;
@@ -601,7 +601,7 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
 		val = CPTR_NVHE_EL2_RES1;
 
 		if (vcpu_has_sve(vcpu) &&
-		    (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED))
+		    (vcpu->arch.fp_owner == FP_STATE_GUEST_OWNED))
 			val |= CPTR_EL2_TZ;
 		if (cpus_have_final_cap(ARM64_SME))
 			val &= ~CPTR_EL2_TSM;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 21c57b812569..e0fbba52f1d3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -544,6 +544,13 @@ struct kvm_vcpu_arch {
 	unsigned int sve_max_vl;
 	u64 svcr;
 
+	/* Ownership of the FP regs */
+	enum {
+		FP_STATE_FREE,
+		FP_STATE_HOST_OWNED,
+		FP_STATE_GUEST_OWNED,
+	} fp_owner;
+
 	/* Stage 2 paging state used by the hardware on next switch */
 	struct kvm_s2_mmu *hw_mmu;
 
@@ -558,13 +565,6 @@ struct kvm_vcpu_arch {
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
-	/* Ownership of the FP regs */
-	enum {
-		FP_STATE_FREE,
-		FP_STATE_HOST_OWNED,
-		FP_STATE_GUEST_OWNED,
-	} fp_state;
-
 	/* Configuration flags, set once and for all before the vcpu can run */
 	u8 cflags;
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a25265aca432..a2cba18effb2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -377,7 +377,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	 * Default value for the FP state, will be overloaded at load
 	 * time if we support FP (pretty likely)
 	 */
-	vcpu->arch.fp_state = FP_STATE_FREE;
+	vcpu->arch.fp_owner = FP_STATE_FREE;
 
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 8c1d0d4853df..8dbd62d1e677 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -86,7 +86,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	 * guest in kvm_arch_vcpu_ctxflush_fp() and override this to
 	 * FP_STATE_FREE if the flag set.
 	 */
-	vcpu->arch.fp_state = FP_STATE_HOST_OWNED;
+	vcpu->arch.fp_owner = FP_STATE_HOST_OWNED;
 
 	vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
@@ -110,7 +110,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	 * been saved, this is very unlikely to happen.
 	 */
 	if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
-		vcpu->arch.fp_state = FP_STATE_FREE;
+		vcpu->arch.fp_owner = FP_STATE_FREE;
 		fpsimd_save_and_flush_cpu_state();
 	}
 }
@@ -126,7 +126,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
 {
 	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
-		vcpu->arch.fp_state = FP_STATE_FREE;
+		vcpu->arch.fp_owner = FP_STATE_FREE;
 }
 
 /*
@@ -142,7 +142,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 
 	WARN_ON_ONCE(!irqs_disabled());
 
-	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) {
+	if (vcpu->arch.fp_owner == FP_STATE_GUEST_OWNED) {
 
 		/*
 		 * Currently we do not support SME guests so SVCR is
@@ -195,7 +195,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 		isb();
 	}
 
-	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) {
+	if (vcpu->arch.fp_owner == FP_STATE_GUEST_OWNED) {
 		if (vcpu_has_sve(vcpu)) {
 			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a038320cdb08..575c39847d40 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -42,7 +42,7 @@ extern struct kvm_exception_table_entry __stop___kvm_ex_table;
 /* Check whether the FP regs are owned by the guest */
 static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.fp_state == FP_STATE_GUEST_OWNED;
+	return vcpu->arch.fp_owner == FP_STATE_GUEST_OWNED;
 }
 
 /* Save the 32-bit only FPSIMD system register state */
@@ -370,7 +370,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	isb();
 
 	/* Write out the host state if it's in the registers */
-	if (vcpu->arch.fp_state == FP_STATE_HOST_OWNED)
+	if (vcpu->arch.fp_owner == FP_STATE_HOST_OWNED)
 		__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
 
 	/* Restore the guest state */
@@ -383,7 +383,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
 		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
 
-	vcpu->arch.fp_state = FP_STATE_GUEST_OWNED;
+	vcpu->arch.fp_owner = FP_STATE_GUEST_OWNED;
 
 	return true;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2385fd03ed87..85ea18227d33 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -39,7 +39,7 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	hyp_vcpu->vcpu.arch.cptr_el2 = host_vcpu->arch.cptr_el2;
 
 	hyp_vcpu->vcpu.arch.iflags = host_vcpu->arch.iflags;
-	hyp_vcpu->vcpu.arch.fp_state = host_vcpu->arch.fp_state;
+	hyp_vcpu->vcpu.arch.fp_owner = host_vcpu->arch.fp_owner;
 
 	hyp_vcpu->vcpu.arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr);
 	hyp_vcpu->vcpu.arch.host_fpsimd_state = host_vcpu->arch.host_fpsimd_state;
@@ -64,7 +64,7 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	host_vcpu->arch.fault = hyp_vcpu->vcpu.arch.fault;
 
 	host_vcpu->arch.iflags = hyp_vcpu->vcpu.arch.iflags;
-	host_vcpu->arch.fp_state = hyp_vcpu->vcpu.arch.fp_state;
+	host_vcpu->arch.fp_owner = hyp_vcpu->vcpu.arch.fp_owner;
 
 	host_cpu_if->vgic_hcr = hyp_cpu_if->vgic_hcr;
 	for (i = 0; i < hyp_cpu_if->used_lrs; ++i)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index c50f8459e4fc..9f9404c9bbae 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -337,7 +337,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)
+	if (vcpu->arch.fp_owner == FP_STATE_GUEST_OWNED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 1581df6aec87..17596586806c 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -258,7 +258,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	sysreg_restore_host_state_vhe(host_ctxt);
 
-	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)
+	if (vcpu->arch.fp_owner == FP_STATE_GUEST_OWNED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
-- 
2.30.2

From: Mark Brown
Date: Thu, 29 Feb 2024 21:47:35 +0000
Subject: [PATCH v2 2/2] KVM: arm64: Reuse struct cpu_fp_state to track the guest FP state
Message-Id: <20240229-kvm-arm64-group-fp-data-v2-2-276de0d550e8@kernel.org>
References: <20240229-kvm-arm64-group-fp-data-v2-0-276de0d550e8@kernel.org>
In-Reply-To: <20240229-kvm-arm64-group-fp-data-v2-0-276de0d550e8@kernel.org>
To: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, Mark Brown

At present we store the various bits of floating point state
individually in struct kvm_vcpu_arch and construct a struct
cpu_fp_state to share with the host each time we exit the guest.
Simplify this a little by embedding a struct cpu_fp_state in struct
kvm_vcpu_arch and initialising it while initialising the guest. As
part of this, remove the separate variables used for the SVE register
storage and vector length information and use the fields in the
struct cpu_fp_state directly.

Since cpu_fp_state stores pointers to the variables to be updated as
part of saving, we do still need some variables stored directly in
struct kvm_vcpu_arch for the FPSIMD registers, SVCR and the type of
FP state saved. Unpicking those indirections would be more involved,
both because the FPSIMD registers are embedded in the ucontext stored
directly in the host's data and because of the future need to support
KVM's system register view of SVCR and FPMR.

We initialise the structure when the vCPU is created and then update
it if SVE is enabled. This split initialisation mirrors the existing
code and helps avoid future modifications creating a situation where
partially initialised floating point state is exposed to userspace;
the need to offer configurability of the SVE vector length means some
reinitialisation is unavoidable.

No functional changes.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_host.h  | 11 +++++------
 arch/arm64/kvm/arm.c               | 12 ++++++++++++
 arch/arm64/kvm/fpsimd.c            | 21 +--------------------
 arch/arm64/kvm/guest.c             | 21 ++++++++++++++-------
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  5 +++--
 arch/arm64/kvm/reset.c             | 14 ++++++++------
 6 files changed, 43 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e0fbba52f1d3..47bd769a26ff 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -539,9 +539,8 @@ struct kvm_vcpu_arch {
 	 * floating point code saves the register state of a task it
 	 * records which view it saved in fp_type.
 	 */
-	void *sve_state;
+	struct cpu_fp_state fp_state;
 	enum fp_type fp_type;
-	unsigned int sve_max_vl;
 	u64 svcr;
 
 	/* Ownership of the FP regs */
@@ -799,16 +798,16 @@ struct kvm_vcpu_arch {
 
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
-#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \
-			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
+#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.fp_state.sve_state) + \
+			     sve_ffr_offset((vcpu)->arch.fp_state.sve_vl))
 
-#define vcpu_sve_max_vq(vcpu)	sve_vq_from_vl((vcpu)->arch.sve_max_vl)
+#define vcpu_sve_max_vq(vcpu)	sve_vq_from_vl((vcpu)->arch.fp_state.sve_vl)
 
 #define vcpu_sve_state_size(vcpu) ({					\
 	size_t __size_ret;						\
 	unsigned int __vcpu_vq;						\
 									\
-	if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) {		\
+	if (WARN_ON(!sve_vl_valid((vcpu)->arch.fp_state.sve_vl))) {	\
 		__size_ret = 0;						\
 	} else {							\
 		__vcpu_vq = vcpu_sve_max_vq(vcpu);			\
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a2cba18effb2..84cc0dbd9b14 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -379,6 +379,18 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	 */
 	vcpu->arch.fp_owner = FP_STATE_FREE;
 
+	/*
+	 * Initial setup for FP state for sharing with host, if SVE is
+	 * enabled additional configuration will be done.
+	 *
+	 * Currently we do not support SME guests so SVCR is always 0
+	 * and we just need a variable to point to.
+	 */
+	vcpu->arch.fp_state.st = &vcpu->arch.ctxt.fp_regs;
+	vcpu->arch.fp_state.fp_type = &vcpu->arch.fp_type;
+	vcpu->arch.fp_state.svcr = &vcpu->arch.svcr;
+	vcpu->arch.fp_state.to_save = FP_STATE_FPSIMD;
+
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
 
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 8dbd62d1e677..fc270a2257d5 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -138,29 +138,10 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
  */
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 {
-	struct cpu_fp_state fp_state;
-
 	WARN_ON_ONCE(!irqs_disabled());
 
 	if (vcpu->arch.fp_owner == FP_STATE_GUEST_OWNED) {
-
-		/*
-		 * Currently we do not support SME guests so SVCR is
-		 * always 0 and we just need a variable to point to.
-		 */
-		fp_state.st = &vcpu->arch.ctxt.fp_regs;
-		fp_state.sve_state = vcpu->arch.sve_state;
-		fp_state.sve_vl = vcpu->arch.sve_max_vl;
-		fp_state.sme_state = NULL;
-		fp_state.svcr = &vcpu->arch.svcr;
-		fp_state.fp_type = &vcpu->arch.fp_type;
-
-		if (vcpu_has_sve(vcpu))
-			fp_state.to_save = FP_STATE_SVE;
-		else
-			fp_state.to_save = FP_STATE_FPSIMD;
-
-		fpsimd_bind_state_to_cpu(&fp_state);
+		fpsimd_bind_state_to_cpu(&vcpu->arch.fp_state);
 
 		clear_thread_flag(TIF_FOREIGN_FPSTATE);
 	}
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index aaf1d4939739..54e9d3b648f0 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -317,7 +317,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (!vcpu_has_sve(vcpu))
 		return -ENOENT;
 
-	if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl)))
+	if (WARN_ON(!sve_vl_valid(vcpu->arch.fp_state.sve_vl)))
 		return -EINVAL;
 
 	memset(vqs, 0, sizeof(vqs));
@@ -344,7 +344,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (kvm_arm_vcpu_sve_finalized(vcpu))
 		return -EPERM; /* too late! */
 
-	if (WARN_ON(vcpu->arch.sve_state))
+	if (WARN_ON(vcpu->arch.fp_state.sve_state))
 		return -EINVAL;
 
 	if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs)))
@@ -373,8 +373,11 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (max_vq < SVE_VQ_MIN)
 		return -EINVAL;
 
-	/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */
-	vcpu->arch.sve_max_vl = sve_vl_from_vq(max_vq);
+	/*
+	 * vcpu->arch.fp_state.sve_state will be alloc'd by
+	 * kvm_vcpu_finalize_sve().
+	 */
+	vcpu->arch.fp_state.sve_vl = sve_vl_from_vq(max_vq);
 
 	return 0;
 }
@@ -403,7 +406,10 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
  */
 #define vcpu_sve_slices(vcpu) 1
 
-/* Bounds of a single SVE register slice within vcpu->arch.sve_state */
+/*
+ * Bounds of a single SVE register slice within
+ * vcpu->arch.fp_state.sve_state
+ */
 struct sve_state_reg_region {
 	unsigned int koffset;	/* offset into sve_state in kernel memory */
 	unsigned int klen;	/* length in kernel memory */
@@ -499,7 +505,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (!kvm_arm_vcpu_sve_finalized(vcpu))
 		return -EPERM;
 
-	if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset,
+	if (copy_to_user(uptr, vcpu->arch.fp_state.sve_state + region.koffset,
 			 region.klen) ||
 	    clear_user(uptr + region.klen, region.upad))
 		return -EFAULT;
@@ -525,7 +531,8 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (!kvm_arm_vcpu_sve_finalized(vcpu))
 		return -EPERM;
 
-	if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr,
+	if (copy_from_user(vcpu->arch.fp_state.sve_state + region.koffset,
+			   uptr,
 			   region.klen))
 		return -EFAULT;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 85ea18227d33..63971b801cf3 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -29,8 +29,9 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 
 	hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
 
-	hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
-	hyp_vcpu->vcpu.arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+	hyp_vcpu->vcpu.arch.fp_state.sve_state
+		= kern_hyp_va(host_vcpu->arch.fp_state.sve_state);
+	hyp_vcpu->vcpu.arch.fp_state.sve_vl = host_vcpu->arch.fp_state.sve_vl;
 
 	hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu;
 
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 68d1d05672bd..675b8925242f 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -75,7 +75,7 @@ int __init kvm_arm_init_sve(void)
 
 static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.sve_max_vl = kvm_sve_max_vl;
+	vcpu->arch.fp_state.sve_vl = kvm_sve_max_vl;
 
 	/*
 	 * Userspace can still customize the vector lengths by writing
@@ -87,7 +87,7 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 
 /*
  * Finalize vcpu's maximum SVE vector length, allocating
- * vcpu->arch.sve_state as necessary.
+ * vcpu->arch.fp_state.sve_state as necessary.
  */
 static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 {
@@ -96,7 +96,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 	size_t reg_sz;
 	int ret;
 
-	vl = vcpu->arch.sve_max_vl;
+	vl = vcpu->arch.fp_state.sve_vl;
 
 	/*
 	 * Responsibility for these properties is shared between
@@ -118,7 +118,8 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 		return ret;
 	}
 
-	vcpu->arch.sve_state = buf;
+	vcpu->arch.fp_state.sve_state = buf;
+	vcpu->arch.fp_state.to_save = FP_STATE_SVE;
 	vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED);
 	return 0;
 }
@@ -149,7 +150,7 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
 
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	void *sve_state = vcpu->arch.sve_state;
+	void *sve_state = vcpu->arch.fp_state.sve_state;
 
 	kvm_vcpu_unshare_task_fp(vcpu);
 	kvm_unshare_hyp(vcpu, vcpu + 1);
@@ -162,7 +163,8 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
 static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
 {
 	if (vcpu_has_sve(vcpu))
-		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
+		memset(vcpu->arch.fp_state.sve_state, 0,
+		       vcpu_sve_state_size(vcpu));
 }
 
 static void kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
-- 
2.30.2
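For readers following the series, below is a minimal, self-contained sketch (illustration only, not part of the patches and not kernel code) of the pattern the two patches converge on: a per-vCPU cpu_fp_state that points at the data the FP save path updates, set up once at vCPU creation and adjusted when SVE is finalised, alongside a separate fp_owner field recording who currently owns the FP registers. The types and the placeholder struct user_fpsimd_state here are simplified stand-ins for the kernel's real definitions.

/* Standalone sketch of the grouped FP state + ownership pattern. */
#include <stddef.h>
#include <stdio.h>

enum fp_owner { FP_STATE_FREE, FP_STATE_HOST_OWNED, FP_STATE_GUEST_OWNED };
enum fp_type { FP_STATE_FPSIMD, FP_STATE_SVE };

/* Placeholder for the real FPSIMD register file. */
struct user_fpsimd_state { unsigned long vregs[64]; };

struct cpu_fp_state {
	struct user_fpsimd_state *st;	/* FPSIMD registers to save into */
	void *sve_state;		/* SVE register storage, if any */
	unsigned int sve_vl;		/* SVE vector length */
	unsigned long *svcr;		/* streaming control; always 0 here */
	enum fp_type *fp_type;		/* which view was last saved */
	enum fp_type to_save;		/* which view to save next */
};

struct vcpu_arch {
	struct user_fpsimd_state fp_regs;
	unsigned long svcr;
	enum fp_type fp_type;
	struct cpu_fp_state fp_state;	/* grouped view, set up at create time */
	enum fp_owner fp_owner;		/* who owns the FP regs right now */
};

/* Mirrors the split initialisation described in patch 2: create time... */
static void vcpu_create_fp(struct vcpu_arch *arch)
{
	arch->fp_owner = FP_STATE_FREE;
	arch->fp_state.st = &arch->fp_regs;
	arch->fp_state.fp_type = &arch->fp_type;
	arch->fp_state.svcr = &arch->svcr;
	arch->fp_state.to_save = FP_STATE_FPSIMD;
}

/* ...and the later SVE finalisation, which fills in the SVE fields. */
static void vcpu_finalize_sve(struct vcpu_arch *arch, void *buf, unsigned int vl)
{
	arch->fp_state.sve_state = buf;
	arch->fp_state.sve_vl = vl;
	arch->fp_state.to_save = FP_STATE_SVE;
}

/* Equivalent of guest_owns_fp_regs() after the rename in patch 1. */
static int guest_owns_fp_regs(const struct vcpu_arch *arch)
{
	return arch->fp_owner == FP_STATE_GUEST_OWNED;
}

int main(void)
{
	struct vcpu_arch vcpu = { 0 };

	vcpu_create_fp(&vcpu);
	printf("guest owns FP regs: %d\n", guest_owns_fp_regs(&vcpu));
	return 0;
}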