From: Yafang Shao <laoar.shao@gmail.com>
To: peterz@infradead.org, mingo@redhat.com, will@kernel.org,
    boqun@kernel.org, longman@redhat.com, rostedt@goodmis.org,
    mhiramat@kernel.org, mark.rutland@arm.com,
    mathieu.desnoyers@efficios.com, david.laight.linux@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    Yafang Shao <laoar.shao@gmail.com>
Subject: [RFC PATCH v2 1/3] locking/mutex: Add slow path variants for lock/unlock
Date: Wed, 11 Mar 2026 19:52:48 +0800
Message-ID: <20260311115250.78488-2-laoar.shao@gmail.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20260311115250.78488-1-laoar.shao@gmail.com>
References: <20260311115250.78488-1-laoar.shao@gmail.com>

Background
==========

One of our latency-sensitive services reported random CPU pressure
spikes. After a thorough investigation, we identified the root cause.
The key kernel stacks are as follows:

- Task A

2026-02-14-16:53:40.938243: [CPU198] 2156302(bpftrace) cgrp:4019437 pod:4019253
        find_kallsyms_symbol+142
        module_address_lookup+104
        kallsyms_lookup_buildid+203
        kallsyms_lookup+20
        print_rec+64
        t_show+67
        seq_read_iter+709
        seq_read+165
        vfs_read+165
        ksys_read+103
        __x64_sys_read+25
        do_syscall_64+56
        entry_SYSCALL_64_after_hwframe+100

This task (2156302, bpftrace) is reading
/sys/kernel/tracing/available_filter_functions to check whether a
function is traceable:

  https://github.com/bpftrace/bpftrace/blob/master/src/tracefs/tracefs.h#L21

Reading the available_filter_functions file is time-consuming, as it
contains tens of thousands of functions:

  $ cat /sys/kernel/tracing/available_filter_functions | wc -l
  59221

  $ time cat /sys/kernel/tracing/available_filter_functions > /dev/null
  real    0m0.458s
  user    0m0.001s
  sys     0m0.457s

Consequently, ftrace_lock is held by this task for an extended period.

- Other Tasks

2026-02-14-16:53:41.437094: [CPU79] 2156308(bpftrace) cgrp:4019437 pod:4019253
        mutex_spin_on_owner+108
        __mutex_lock.constprop.0+1132
        __mutex_lock_slowpath+19
        mutex_lock+56
        t_start+51
        seq_read_iter+250
        seq_read+165
        vfs_read+165
        ksys_read+103
        __x64_sys_read+25
        do_syscall_64+56
        entry_SYSCALL_64_after_hwframe+100

Since ftrace_lock is held by Task A, and Task A is actively running on
a CPU, all other tasks waiting on the same lock will spin on their
respective CPUs. This leads to increased CPU pressure.

Reproduction
============

This issue can be reproduced simply by running
`cat available_filter_functions` concurrently.
- Single process reading available_filter_functions:

  $ time cat /sys/kernel/tracing/available_filter_functions > /dev/null
  real    0m0.458s
  user    0m0.001s
  sys     0m0.457s

- Six processes reading available_filter_functions simultaneously:

  for i in `seq 0 5`; do
          time cat /sys/kernel/tracing/available_filter_functions > /dev/null &
  done

  The results are as follows:

  real    0m2.666s
  user    0m0.001s
  sys     0m2.557s

  real    0m2.718s
  user    0m0.000s
  sys     0m2.655s

  real    0m2.718s
  user    0m0.001s
  sys     0m2.600s

  real    0m2.733s
  user    0m0.001s
  sys     0m2.554s

  real    0m2.735s
  user    0m0.000s
  sys     0m2.573s

  real    0m2.738s
  user    0m0.000s
  sys     0m2.664s

As more processes are added, the system time increases correspondingly.

Solution
========

One approach is to optimize the reading of available_filter_functions
itself to make it as fast as possible. However, even then the contention
caused by optimistic spinning would remain. We therefore need an
alternative that avoids optimistic spinning for heavy mutexes that may
be held for long durations. Note that we do not want to disable
CONFIG_MUTEX_SPIN_ON_OWNER entirely, as that could lead to unexpected
performance regressions.

In this patch, two new APIs are introduced to allow heavy locks to
selectively disable optimistic spinning:

  slow_mutex_lock()   - lock a mutex without optimistic spinning
  slow_mutex_unlock() - unlock the slow mutex

- The result of this optimization

After applying the slow mutex to ftrace_lock and concurrently running
six processes, the results are as follows:

  real    0m2.691s
  user    0m0.001s
  sys     0m0.458s

  real    0m2.785s
  user    0m0.001s
  sys     0m0.467s

  real    0m2.787s
  user    0m0.000s
  sys     0m0.469s

  real    0m2.787s
  user    0m0.000s
  sys     0m0.466s

  real    0m2.788s
  user    0m0.001s
  sys     0m0.468s

  real    0m2.789s
  user    0m0.000s
  sys     0m0.471s

The system time remains similar to that of running a single process.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/mutex.h  |  4 ++++
 kernel/locking/mutex.c | 41 ++++++++++++++++++++++++++++++++++-------
 2 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index ecaa0440f6ec..eed0e87c084c 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -189,11 +189,13 @@ extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
 extern int __must_check _mutex_lock_killable(struct mutex *lock,
 		unsigned int subclass, struct lockdep_map *nest_lock) __cond_acquires(0, lock);
 extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);
+extern void slow_mutex_lock_nested(struct mutex *lock, unsigned int subclass);
 
 #define mutex_lock(lock) mutex_lock_nested(lock, 0)
 #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0)
 #define mutex_lock_killable(lock) _mutex_lock_killable(lock, 0, NULL)
 #define mutex_lock_io(lock) mutex_lock_io_nested(lock, 0)
+#define slow_mutex_lock(lock) slow_mutex_lock_nested(lock, 0)
 
 #define mutex_lock_nest_lock(lock, nest_lock)				\
 do {									\
@@ -215,6 +217,7 @@ extern void mutex_lock(struct mutex *lock) __acquires(lock);
 extern int __must_check mutex_lock_interruptible(struct mutex *lock) __cond_acquires(0, lock);
 extern int __must_check mutex_lock_killable(struct mutex *lock) __cond_acquires(0, lock);
 extern void mutex_lock_io(struct mutex *lock) __acquires(lock);
+extern void slow_mutex_lock(struct mutex *lock) __acquires(lock);
 
 # define mutex_lock_nested(lock, subclass) mutex_lock(lock)
 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock)
@@ -247,6 +250,7 @@ extern int mutex_trylock(struct mutex *lock) __cond_acquires(true, lock);
 #endif
 
 extern void mutex_unlock(struct mutex *lock) __releases(lock);
+#define slow_mutex_unlock(lock) mutex_unlock(lock)
 
 extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_acquires(true, lock);
 
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 2a1d165b3167..5766d824b3fe 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -443,8 +443,11 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
  */
 static __always_inline bool
 mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
-		      struct mutex_waiter *waiter)
+		      struct mutex_waiter *waiter, const bool slow)
 {
+	if (slow)
+		return false;
+
 	if (!waiter) {
 		/*
 		 * The purpose of the mutex_can_spin_on_owner() function is
@@ -577,7 +580,8 @@ EXPORT_SYMBOL(ww_mutex_unlock);
 static __always_inline int __sched
 __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclass,
 		    struct lockdep_map *nest_lock, unsigned long ip,
-		    struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+		    struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx,
+		    const bool slow)
 {
 	DEFINE_WAKE_Q(wake_q);
 	struct mutex_waiter waiter;
@@ -615,7 +619,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 
 	trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
 	if (__mutex_trylock(lock) ||
-	    mutex_optimistic_spin(lock, ww_ctx, NULL)) {
+	    mutex_optimistic_spin(lock, ww_ctx, NULL, slow)) {
 		/* got the lock, yay! */
 		lock_acquired(&lock->dep_map, ip);
 		if (ww_ctx)
@@ -716,7 +720,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		 * to run.
 		 */
 		clear_task_blocked_on(current, lock);
-		if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
+		if (mutex_optimistic_spin(lock, ww_ctx, &waiter, slow))
 			break;
 		set_task_blocked_on(current, lock);
 		trace_contention_begin(lock, LCB_F_MUTEX);
@@ -773,14 +777,21 @@ static int __sched
 __mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
 	     struct lockdep_map *nest_lock, unsigned long ip)
 {
-	return __mutex_lock_common(lock, state, subclass, nest_lock, ip, NULL, false);
+	return __mutex_lock_common(lock, state, subclass, nest_lock, ip, NULL, false, false);
+}
+
+static int __sched
+__slow_mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
+		  struct lockdep_map *nest_lock, unsigned long ip)
+{
+	return __mutex_lock_common(lock, state, subclass, nest_lock, ip, NULL, false, true);
 }
 
 static int __sched
 __ww_mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
 		unsigned long ip, struct ww_acquire_ctx *ww_ctx)
 {
-	return __mutex_lock_common(lock, state, subclass, NULL, ip, ww_ctx, true);
+	return __mutex_lock_common(lock, state, subclass, NULL, ip, ww_ctx, true, false);
 }
 
 /**
@@ -861,11 +872,17 @@ mutex_lock_io_nested(struct mutex *lock, unsigned int subclass)
 
 	token = io_schedule_prepare();
 	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
-			    subclass, NULL, _RET_IP_, NULL, 0);
+			    subclass, NULL, _RET_IP_, NULL, 0, false);
 	io_schedule_finish(token);
 }
 EXPORT_SYMBOL_GPL(mutex_lock_io_nested);
 
+void __sched
+slow_mutex_lock_nested(struct mutex *lock, unsigned int subclass)
+{
+	__slow_mutex_lock(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_);
+}
+
 static inline int
 ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
@@ -923,6 +940,16 @@ ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 }
 EXPORT_SYMBOL_GPL(ww_mutex_lock_interruptible);
 
+#else
+
+void __sched slow_mutex_lock(struct mutex *lock)
+{
+	might_sleep();
+
+	if (!__mutex_trylock_fast(lock))
+		__slow_mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_);
+}
+
 #endif
 
 /*
-- 
2.47.3