From nobody Sat Apr 4 04:54:44 2026
From: Mark Brown
Date: Fri, 20 Mar 2026 15:44:14 +0000
Subject: [PATCH v8 1/2] arm64/fpsimd: Suppress SVE access traps when loading FPSIMD state
Message-Id: <20260320-arm64-sve-trap-mitigation-v8-1-8bf116c8e360@kernel.org>
References: <20260320-arm64-sve-trap-mitigation-v8-0-8bf116c8e360@kernel.org>
In-Reply-To: <20260320-arm64-sve-trap-mitigation-v8-0-8bf116c8e360@kernel.org>
To: Catalin Marinas, Will Deacon
Cc: Mark Rutland, Ryan Roberts, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Mark Brown

When we are in a syscall we take the opportunity to discard the SVE
state, saving only the FPSIMD subset of the register state. If we have
to reload the floating point state from memory then we reenable SVE
access traps, stopping tracking SVE until the task uses SVE again, at
which point it will take another SVE access trap.
This means that a task which is actively using SVE and also doing many
blocking system calls will have the additional overhead of SVE access
traps. The use of SVE in library functions like memcpy() means that
frequent SVE usage is common with modern distributions, even with
tasks that do not obviously use floating point.

I did some instrumentation which counted the number of SVE access
traps and the number of times we loaded FPSIMD only register state for
each task. Testing with Debian Bookworm this showed that during boot
the overwhelming majority of tasks triggered another SVE access trap
more than 50% of the time after loading FPSIMD only state, with a
substantial number near 100%, though some programs had a very small
number of SVE accesses, most likely from startup. There were few tasks
in the range 5-45%; most tasks either used SVE frequently or used it
only a tiny proportion of the time. As expected, older distributions
which do not have the SVE performance work available showed no SVE
usage in general applications.

This indicates that there should be some benefit from reducing the
number of SVE access traps for blocking system calls, as we did for
non blocking system calls in commit 8c845e273104 ("arm64/sve: Leave
SVE enabled on syscall if we don't context switch").

Let's do this with a timeout: when we take a SVE access trap record a
jiffies value after which we'll reenable SVE traps, then check this
whenever we load FPSIMD only floating point state from memory. If the
time has passed then we reenable traps, otherwise we leave traps
disabled and flush the non-shared register state like we would on
trap. The timeout is currently set to a second; I pulled this number
out of thin air so there is doubtless some room for tuning.
This means that for a task which is actively using SVE the number of
SVE access traps will be equivalent or reduced, but applications which
use SVE only very infrequently will avoid the overheads associated
with tracking SVE state after a second. The extra cost from additional
tracking of SVE state only occurs when a task is preempted, so short
running tasks should be minimally affected.

As would be expected, fp-pidbench shows minimal change from this
patch; it does not block, and on a quiet system is unlikely to see its
state reloaded from memory.

There should be no functional change resulting from this; it is purely
a performance optimisation.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/fpsimd.h    |  1 +
 arch/arm64/include/asm/processor.h |  1 +
 arch/arm64/kernel/entry-fpsimd.S   | 15 +++++++++++++
 arch/arm64/kernel/fpsimd.c         | 46 +++++++++++++++++++++++++++++++++-----
 4 files changed, 58 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 1d2e33559bd5..2d9ab5bbcb22 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -144,6 +144,7 @@ static inline void *thread_zt_state(struct thread_struct *thread)
 extern void sve_save_state(void *state, u32 *pfpsr, int save_ffr);
 extern void sve_load_state(void const *state, u32 const *pfpsr,
			    int restore_ffr);
+extern void sve_flush_p(bool flush_ffr);
 extern void sve_flush_live(bool flush_ffr, unsigned long vq_minus_1);
 extern unsigned int sve_get_vl(void);
 extern void sve_set_vq(unsigned long vq_minus_1);
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index e30c4c8e3a7a..a174864eca5f 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -166,6 +166,7 @@ struct thread_struct {
 	unsigned int		fpsimd_cpu;
 	void			*sve_state;	/* SVE registers, if any */
 	void			*sme_state;	/* ZA and ZT state, if any */
+	unsigned long		sve_timeout;	/* jiffies to drop TIF_SVE */
 	unsigned int		vl[ARM64_VEC_MAX];	/* vector length */
 	unsigned int		vl_onexec[ARM64_VEC_MAX]; /* vl after next exec */
 	unsigned long		fault_address;	/* fault info */
diff --git a/arch/arm64/kernel/entry-fpsimd.S b/arch/arm64/kernel/entry-fpsimd.S
index 6325db1a2179..617dd70cafd7 100644
--- a/arch/arm64/kernel/entry-fpsimd.S
+++ b/arch/arm64/kernel/entry-fpsimd.S
@@ -85,6 +85,21 @@ SYM_FUNC_START(sve_flush_live)
 2:	ret
 SYM_FUNC_END(sve_flush_live)
 
+/*
+ * Zero the predicate registers
+ *
+ * VQ must already be configured by caller, any further updates of VQ
+ * will need to ensure that the register state remains valid.
+ *
+ * x0 = include FFR?
+ */
+SYM_FUNC_START(sve_flush_p)
+	sve_flush_p
+	tbz	x0, #0, 1f
+	sve_flush_ffr
+1:	ret
+SYM_FUNC_END(sve_flush_p)
+
 #endif /* CONFIG_ARM64_SVE */
 
 #ifdef CONFIG_ARM64_SME
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 9de1d8a604cb..d46bb370f9a9 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -360,6 +360,7 @@ static void task_fpsimd_load(void)
 {
 	bool restore_sve_regs = false;
 	bool restore_ffr;
+	unsigned long sve_vq_minus_one;
 
 	WARN_ON(!system_supports_fpsimd());
 	WARN_ON(preemptible());
@@ -368,16 +369,11 @@ static void task_fpsimd_load(void)
 	if (system_supports_sve() || system_supports_sme()) {
 		switch (current->thread.fp_type) {
 		case FP_STATE_FPSIMD:
-			/* Stop tracking SVE for this task until next use. */
-			clear_thread_flag(TIF_SVE);
 			break;
 		case FP_STATE_SVE:
 			if (!thread_sm_enabled(&current->thread))
 				WARN_ON_ONCE(!test_and_set_thread_flag(TIF_SVE));
 
-			if (test_thread_flag(TIF_SVE))
-				sve_set_vq(sve_vq_from_vl(task_get_sve_vl(current)) - 1);
-
 			restore_sve_regs = true;
 			restore_ffr = true;
 			break;
@@ -396,6 +392,15 @@ static void task_fpsimd_load(void)
 		}
 	}
 
+	/*
+	 * If SVE has been enabled we may keep it enabled even if
+	 * loading only FPSIMD state, so always set the VL.
+	 */
+	if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
+		sve_vq_minus_one = sve_vq_from_vl(task_get_sve_vl(current)) - 1;
+		sve_set_vq(sve_vq_minus_one);
+	}
+
 	/* Restore SME, override SVE register configuration if needed */
 	if (system_supports_sme()) {
 		unsigned long sme_vl = task_get_sme_vl(current);
@@ -425,6 +430,30 @@ static void task_fpsimd_load(void)
 	} else {
 		WARN_ON_ONCE(current->thread.fp_type != FP_STATE_FPSIMD);
 		fpsimd_load_state(&current->thread.uw.fpsimd_state);
+
+		/*
+		 * If the task had been using SVE we keep it enabled
+		 * when loading FPSIMD only state for a period to
+		 * minimise overhead for tasks actively using SVE,
+		 * disabling it periodically to ensure that tasks that
+		 * use SVE intermittently do eventually avoid the
+		 * overhead of carrying SVE state. The timeout is
+		 * initialised when we take a SVE trap in do_sve_acc().
+		 */
+		if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
+			if (time_after(jiffies, current->thread.sve_timeout)) {
+				clear_thread_flag(TIF_SVE);
+				sve_user_disable();
+			} else {
+				/*
+				 * Loading V will have flushed the
+				 * rest of the Z register, SVE is
+				 * enabled at EL1 and VL was set
+				 * above.
+				 */
+				sve_flush_p(true);
+			}
+		}
 	}
 }
 
@@ -1343,6 +1372,13 @@ void do_sve_acc(unsigned long esr, struct pt_regs *regs)
 
 	get_cpu_fpsimd_context();
 
+	/*
+	 * We will keep SVE enabled when loading FPSIMD only state for
+	 * the next second to minimise traps when userspace is
+	 * actively using SVE.
+	 */
+	current->thread.sve_timeout = jiffies + HZ;
+
 	if (test_and_set_thread_flag(TIF_SVE))
 		WARN_ON(1); /* SVE access shouldn't have trapped */
 
-- 
2.47.3

From nobody Sat Apr 4 04:54:44 2026
From: Mark Brown
Date: Fri, 20 Mar 2026 15:44:15 +0000
Subject: [PATCH v8 2/2] arm64/sve: Disable TIF_SVE on syscall once per second
Message-Id: <20260320-arm64-sve-trap-mitigation-v8-2-8bf116c8e360@kernel.org>
References: <20260320-arm64-sve-trap-mitigation-v8-0-8bf116c8e360@kernel.org>
In-Reply-To: <20260320-arm64-sve-trap-mitigation-v8-0-8bf116c8e360@kernel.org>
To: Catalin Marinas, Will Deacon
Cc: Mark Rutland, Ryan Roberts, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Mark Brown

Our syscall ABI requires that when performing a syscall the portions
of the Z registers not shared with the V registers, along with the P
and FFR registers, are reset to 0.
Since we have no way of monitoring EL0 SVE usage this is implemented
by changing the in-register values on every syscall for tasks which
have SVE enabled; on systems with 128 bit SVE vector lengths this has
been benchmarked as a 6% overhead.

We currently support disabling SVE for userspace tasks when loading
the floating point state from memory during a syscall, allowing tasks
that use SVE infrequently to avoid this overhead, but this may not
help CPU bound tasks if they are not fortunate enough to block or be
scheduled during a syscall. This is done whenever the state is loaded
from memory a second or more after the last time the task generated a
SVE access trap.

Extend this mechanism to also apply during syscall entry, disabling
SVE instead of flushing the live registers when we perform a syscall a
second after the last time a SVE access trap was taken. This adds an
additional memory access and branch for tasks using SVE and means that
CPU bound tasks actively using SVE will take extra SVE access traps
(at most one per second) but will allow CPU bound tasks that
infrequently use SVE to avoid the overhead of flushing the registers
on syscall.

On a system with 128 bit SVE vectors fp-pidbench shows a roughly 4.5%
improvement compared to baseline after having used SVE, for a roughly
0.4% overhead when SVE is used between each syscall. Obviously this is
very much a microbenchmark.

This is purely a performance optimisation; there should be no
functional change.
Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry-common.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 3625797e9ee8..a7b7ec66f084 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -234,8 +234,18 @@ static inline void fpsimd_syscall_enter(void)
 	if (test_thread_flag(TIF_SVE)) {
 		unsigned int sve_vq_minus_one;
 
-		sve_vq_minus_one = sve_vq_from_vl(task_get_sve_vl(current)) - 1;
-		sve_flush_live(true, sve_vq_minus_one);
+		/*
+		 * Ensure that tasks that don't block in a syscall
+		 * also get a chance to drop TIF_SVE.
+		 */
+		if (unlikely(time_after(jiffies,
+					current->thread.sve_timeout))) {
+			clear_thread_flag(TIF_SVE);
+			sve_user_disable();
+		} else {
+			sve_vq_minus_one = sve_vq_from_vl(task_get_sve_vl(current)) - 1;
+			sve_flush_live(true, sve_vq_minus_one);
+		}
 	}
 
 	/*
-- 
2.47.3