From nobody Fri Dec 19 03:04:55 2025
Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55])
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 711FC15539A
	for ; Wed, 12 Mar 2025 15:16:42 +0000 (UTC)
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
	Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
	Sebastian Andrzej Siewior
Subject: [PATCH v10 01/21] rcuref: Provide rcuref_is_dead().
Date: Wed, 12 Mar 2025 16:16:14 +0100
Message-ID: <20250312151634.2183278-2-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

rcuref_read() returns the number of references that are currently held.
If 0 is returned then it is not safe to assume that the object can be
scheduled for deconstruction because it is marked DEAD.
This happens if the return value of rcuref_put() is ignored and
assumptions are made. If 0 is returned then the counter transitioned
from 0 to RCUREF_NOREF. If rcuref_put() did not return to the caller
then the counter did not yet transition from RCUREF_NOREF to
RCUREF_DEAD. This means that there is still a chance that the counter
will transition from RCUREF_NOREF to 0, meaning it is still valid and
must not be deconstructed. In this brief window rcuref_read() will
return 0.

Provide rcuref_is_dead() to determine if the counter is marked as
RCUREF_DEAD.

Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Joel Fernandes
---
 include/linux/rcuref.h | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/include/linux/rcuref.h b/include/linux/rcuref.h
index 6322d8c1c6b42..2fb2af6d98249 100644
--- a/include/linux/rcuref.h
+++ b/include/linux/rcuref.h
@@ -30,7 +30,11 @@ static inline void rcuref_init(rcuref_t *ref, unsigned int cnt)
  * rcuref_read - Read the number of held reference counts of a rcuref
  * @ref: Pointer to the reference count
  *
- * Return: The number of held references (0 ... N)
+ * Return: The number of held references (0 ... N). The value 0 does not
+ * indicate that it is safe to schedule the object, protected by this reference
+ * counter, for deconstruction.
+ * If you want to know if the reference counter has been marked DEAD (as
+ * signaled by rcuref_put()) please use rcuref_is_dead().
  */
 static inline unsigned int rcuref_read(rcuref_t *ref)
 {
@@ -40,6 +44,22 @@ static inline unsigned int rcuref_read(rcuref_t *ref)
 	return c >= RCUREF_RELEASED ? 0 : c + 1;
 }

+/**
+ * rcuref_is_dead - Check if the rcuref has been already marked dead
+ * @ref: Pointer to the reference count
+ *
+ * Return: True if the object has been marked DEAD.
+ * This signals that a previous invocation of rcuref_put() returned true on
+ * this reference counter, meaning the protected object can safely be
+ * scheduled for deconstruction.
+ * Otherwise, returns false.
+ */
+static inline bool rcuref_is_dead(rcuref_t *ref)
+{
+	unsigned int c = atomic_read(&ref->refcnt);
+
+	return (c >= RCUREF_RELEASED) && (c < RCUREF_NOREF);
+}
+
 extern __must_check bool rcuref_get_slowpath(rcuref_t *ref);

 /**
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
	Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
	Sebastian Andrzej Siewior
Subject: [PATCH v10 02/21] futex: Move futex_queue() into futex_wait_setup()
Date: Wed, 12 Mar 2025 16:16:15 +0100
Message-ID: <20250312151634.2183278-3-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

From: Peter Zijlstra

futex_wait_setup() has a weird calling convention in order to return hb
to use as an argument to futex_queue(). Mostly such that requeue can
have an extra test in between.

Reorder the code a little to get rid of this and keep the hb usage
inside futex_wait_setup().

[bigeasy: fixes]
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 io_uring/futex.c        |  4 +---
 kernel/futex/futex.h    |  6 +++---
 kernel/futex/requeue.c  | 28 ++++++++++--------------
 kernel/futex/waitwake.c | 47 +++++++++++++++++++++++------------------
 4 files changed, 42 insertions(+), 43 deletions(-)

diff --git a/io_uring/futex.c b/io_uring/futex.c
index 43e2143255f57..e7c264db0818e 100644
--- a/io_uring/futex.c
+++ b/io_uring/futex.c
@@ -311,7 +311,6 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_futex *iof = io_kiocb_to_cmd(req, struct io_futex);
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_futex_data *ifd = NULL;
-	struct futex_hash_bucket *hb;
 	int ret;

 	if (!iof->futex_mask) {
@@ -333,12 +332,11 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags)
 	ifd->req = req;

 	ret = futex_wait_setup(iof->uaddr, iof->futex_val, iof->futex_flags,
-			       &ifd->q, &hb);
+			       &ifd->q, NULL, NULL);
 	if (!ret) {
 		hlist_add_head(&req->hash_node, &ctx->futex_list);
 		io_ring_submit_unlock(ctx, issue_flags);

-		futex_queue(&ifd->q, hb, NULL);
 		return IOU_ISSUE_SKIP_COMPLETE;
 	}

diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 6b2f4c7eb720f..16aafd0113442 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -219,9 +219,9 @@ static inline int futex_match(union futex_key *key1, union futex_key *key2)
 }

 extern int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
-			    struct futex_q *q, struct futex_hash_bucket **hb);
-extern void futex_wait_queue(struct futex_hash_bucket *hb, struct futex_q *q,
-			     struct hrtimer_sleeper *timeout);
+			    struct futex_q *q, union futex_key *key2,
+			    struct task_struct *task);
+extern void futex_do_wait(struct futex_q *q, struct hrtimer_sleeper *timeout);
 extern bool __futex_wake_mark(struct futex_q *q);
 extern void futex_wake_mark(struct wake_q_head *wake_q, struct futex_q *q);

diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index b47bb764b3520..0e55975af515c 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -769,7 +769,6 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
 {
 	struct hrtimer_sleeper timeout, *to;
 	struct rt_mutex_waiter rt_waiter;
-	struct futex_hash_bucket *hb;
 	union futex_key key2 = FUTEX_KEY_INIT;
 	struct futex_q q = futex_q_init;
 	struct rt_mutex_base *pi_mutex;
@@ -805,29 +804,24 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
 	 * Prepare to wait on uaddr. On success, it holds hb->lock and q
 	 * is initialized.
 	 */
-	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
+	ret = futex_wait_setup(uaddr, val, flags, &q, &key2, current);
 	if (ret)
 		goto out;

-	/*
-	 * The check above which compares uaddrs is not sufficient for
-	 * shared futexes. We need to compare the keys:
-	 */
-	if (futex_match(&q.key, &key2)) {
-		futex_q_unlock(hb);
-		ret = -EINVAL;
-		goto out;
-	}
-
 	/* Queue the futex_q, drop the hb lock, wait for wakeup. */
-	futex_wait_queue(hb, &q, to);
+	futex_do_wait(&q, to);

 	switch (futex_requeue_pi_wakeup_sync(&q)) {
 	case Q_REQUEUE_PI_IGNORE:
-		/* The waiter is still on uaddr1 */
-		spin_lock(&hb->lock);
-		ret = handle_early_requeue_pi_wakeup(hb, &q, to);
-		spin_unlock(&hb->lock);
+	{
+		struct futex_hash_bucket *hb;
+
+		hb = futex_hash(&q.key);
+		/* The waiter is still on uaddr1 */
+		spin_lock(&hb->lock);
+		ret = handle_early_requeue_pi_wakeup(hb, &q, to);
+		spin_unlock(&hb->lock);
+	}
 		break;

 	case Q_REQUEUE_PI_LOCKED:
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 25877d4f2f8f3..6cf10701294b4 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -339,18 +339,8 @@ static long futex_wait_restart(struct restart_block *restart);
  * @q:		the futex_q to queue up on
  * @timeout:	the prepared hrtimer_sleeper, or null for no timeout
  */
-void futex_wait_queue(struct futex_hash_bucket *hb, struct futex_q *q,
-		      struct hrtimer_sleeper *timeout)
+void futex_do_wait(struct futex_q *q, struct hrtimer_sleeper *timeout)
 {
-	/*
-	 * The task state is guaranteed to be set before another task can
-	 * wake it. set_current_state() is implemented using smp_store_mb() and
-	 * futex_queue() calls spin_unlock() upon completion, both serializing
-	 * access to the hash list and forcing another memory barrier.
-	 */
-	set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
-	futex_queue(q, hb, current);
-
 	/* Arm the timer */
 	if (timeout)
 		hrtimer_sleeper_start_expires(timeout, HRTIMER_MODE_ABS);
@@ -578,7 +568,8 @@ int futex_wait_multiple(struct futex_vector *vs, unsigned int count,
  * @val:	the expected value
  * @flags:	futex flags (FLAGS_SHARED, etc.)
  * @q:		the associated futex_q
- * @hb:		storage for hash_bucket pointer to be returned to caller
+ * @key2:	the second futex_key if used for requeue PI
+ * @task:	Task queueing this futex
  *
  * Setup the futex_q and locate the hash_bucket. Get the futex value and
 * compare it with the expected value. Handle atomic faults internally.
@@ -589,8 +580,10 @@ int futex_wait_multiple(struct futex_vector *vs, unsigned int count,
  * - <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked
  */
 int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
-		     struct futex_q *q, struct futex_hash_bucket **hb)
+		     struct futex_q *q, union futex_key *key2,
+		     struct task_struct *task)
 {
+	struct futex_hash_bucket *hb;
 	u32 uval;
 	int ret;

@@ -618,12 +611,12 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
 		return ret;

 retry_private:
-	*hb = futex_q_lock(q);
+	hb = futex_q_lock(q);

 	ret = futex_get_value_locked(&uval, uaddr);

 	if (ret) {
-		futex_q_unlock(*hb);
+		futex_q_unlock(hb);

 		ret = get_user(uval, uaddr);
 		if (ret)
@@ -636,10 +629,25 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
 	}

 	if (uval != val) {
-		futex_q_unlock(*hb);
-		ret = -EWOULDBLOCK;
+		futex_q_unlock(hb);
+		return -EWOULDBLOCK;
 	}

+	if (key2 && futex_match(&q->key, key2)) {
+		futex_q_unlock(hb);
+		return -EINVAL;
+	}
+
+	/*
+	 * The task state is guaranteed to be set before another task can
+	 * wake it. set_current_state() is implemented using smp_store_mb() and
+	 * futex_queue() calls spin_unlock() upon completion, both serializing
+	 * access to the hash list and forcing another memory barrier.
+	 */
+	if (task == current)
+		set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
+	futex_queue(q, hb, task);
+
 	return ret;
 }

@@ -647,7 +655,6 @@ int __futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
 		 struct hrtimer_sleeper *to, u32 bitset)
 {
 	struct futex_q q = futex_q_init;
-	struct futex_hash_bucket *hb;
 	int ret;

 	if (!bitset)
@@ -660,12 +667,12 @@ int __futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
 	 * Prepare to wait on uaddr. On success, it holds hb->lock and q
 	 * is initialized.
 	 */
-	ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
+	ret = futex_wait_setup(uaddr, val, flags, &q, NULL, current);
 	if (ret)
 		return ret;

 	/* futex_queue and wait for wakeup, timeout, or a signal. */
-	futex_wait_queue(hb, &q, to);
+	futex_do_wait(&q, to);

 	/* If we were woken (and unqueued), we succeeded, whatever. */
 	if (!futex_unqueue(&q))
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
	Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
	Sebastian Andrzej Siewior
Subject: [PATCH v10 03/21] futex: Pull futex_hash() out of futex_q_lock()
Date: Wed, 12 Mar 2025 16:16:16 +0100
Message-ID: <20250312151634.2183278-4-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

From: Peter Zijlstra

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/futex/core.c     | 7 +------
 kernel/futex/futex.h    | 2 +-
 kernel/futex/pi.c       | 3 ++-
 kernel/futex/waitwake.c | 6 ++++--
 4 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index cca15859a50be..7adc914878933 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -502,13 +502,9 @@ void __futex_unqueue(struct futex_q *q)
 }

 /* The key must be already stored in q->key. */
-struct futex_hash_bucket *futex_q_lock(struct futex_q *q)
+void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
 	__acquires(&hb->lock)
 {
-	struct futex_hash_bucket *hb;
-
-	hb = futex_hash(&q->key);
-
 	/*
 	 * Increment the counter before taking the lock so that
 	 * a potential waker won't miss a to-be-slept task that is
@@ -522,7 +518,6 @@ struct futex_hash_bucket *futex_q_lock(struct futex_q *q)
 	q->lock_ptr = &hb->lock;

 	spin_lock(&hb->lock);
-	return hb;
 }

 void futex_q_unlock(struct futex_hash_bucket *hb)
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 16aafd0113442..a219903e52084 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -354,7 +354,7 @@ static inline int futex_hb_waiters_pending(struct futex_hash_bucket *hb)
 #endif
 }

-extern struct futex_hash_bucket *futex_q_lock(struct futex_q *q);
+extern void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb);
 extern void futex_q_unlock(struct futex_hash_bucket *hb);


diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index 7a941845f7eee..3bf942e9400ac 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -939,7 +939,8 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int trylock)
 		goto out;

 retry_private:
-	hb = futex_q_lock(&q);
+	hb = futex_hash(&q.key);
+	futex_q_lock(&q, hb);

 	ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
 				   &exiting, 0);
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 6cf10701294b4..1108f373fd315 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -441,7 +441,8 @@ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
 		struct futex_q *q = &vs[i].q;
 		u32 val = vs[i].w.val;

-		hb = futex_q_lock(q);
+		hb = futex_hash(&q->key);
+		futex_q_lock(q, hb);
 		ret = futex_get_value_locked(&uval, uaddr);

 		if (!ret && uval == val) {
@@ -611,7 +612,8 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
 		return ret;

 retry_private:
-	hb = futex_q_lock(q);
+	hb = futex_hash(&q->key);
+	futex_q_lock(q, hb);

 	ret = futex_get_value_locked(&uval, uaddr);

-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
	Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
	Sebastian Andrzej Siewior
Subject: [PATCH v10 04/21] futex: Create hb scopes
Date: Wed, 12 Mar 2025 16:16:17 +0100
Message-ID: <20250312151634.2183278-5-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

From: Peter Zijlstra

Create explicit scopes for hb variables; almost pure re-indent.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/futex/core.c     |  81 ++++----
 kernel/futex/pi.c       | 282 +++++++++++++-------------
 kernel/futex/requeue.c  | 433 ++++++++++++++++++++--------------------
 kernel/futex/waitwake.c | 193 +++++++++---------
 4 files changed, 504 insertions(+), 485 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 7adc914878933..e4cb5ce9785b1 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -944,7 +944,6 @@ static void exit_pi_state_list(struct task_struct *curr)
 {
 	struct list_head *next, *head = &curr->pi_state_list;
 	struct futex_pi_state *pi_state;
-	struct futex_hash_bucket *hb;
 	union futex_key key = FUTEX_KEY_INIT;

 	/*
@@ -957,50 +956,54 @@ static void exit_pi_state_list(struct task_struct *curr)
 		next = head->next;
 		pi_state = list_entry(next, struct futex_pi_state, list);
 		key = pi_state->key;
-		hb = futex_hash(&key);
+		if (1) {
+			struct futex_hash_bucket *hb;

-		/*
-		 * We can race against put_pi_state() removing itself from the
-		 * list (a waiter going away). put_pi_state() will first
-		 * decrement the reference count and then modify the list, so
-		 * its possible to see the list entry but fail this reference
-		 * acquire.
-		 *
-		 * In that case; drop the locks to let put_pi_state() make
-		 * progress and retry the loop.
-		 */
-		if (!refcount_inc_not_zero(&pi_state->refcount)) {
+			hb = futex_hash(&key);
+
+			/*
+			 * We can race against put_pi_state() removing itself from the
+			 * list (a waiter going away). put_pi_state() will first
+			 * decrement the reference count and then modify the list, so
+			 * its possible to see the list entry but fail this reference
+			 * acquire.
+			 *
+			 * In that case; drop the locks to let put_pi_state() make
+			 * progress and retry the loop.
+			 */
+			if (!refcount_inc_not_zero(&pi_state->refcount)) {
+				raw_spin_unlock_irq(&curr->pi_lock);
+				cpu_relax();
+				raw_spin_lock_irq(&curr->pi_lock);
+				continue;
+			}
 			raw_spin_unlock_irq(&curr->pi_lock);
-			cpu_relax();
-			raw_spin_lock_irq(&curr->pi_lock);
-			continue;
-		}
-		raw_spin_unlock_irq(&curr->pi_lock);

-		spin_lock(&hb->lock);
-		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
-		raw_spin_lock(&curr->pi_lock);
-		/*
-		 * We dropped the pi-lock, so re-check whether this
-		 * task still owns the PI-state:
-		 */
-		if (head->next != next) {
-			/* retain curr->pi_lock for the loop invariant */
-			raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+			spin_lock(&hb->lock);
+			raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+			raw_spin_lock(&curr->pi_lock);
+			/*
+			 * We dropped the pi-lock, so re-check whether this
+			 * task still owns the PI-state:
+			 */
+			if (head->next != next) {
+				/* retain curr->pi_lock for the loop invariant */
+				raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+				spin_unlock(&hb->lock);
+				put_pi_state(pi_state);
+				continue;
+			}
+
+			WARN_ON(pi_state->owner != curr);
+			WARN_ON(list_empty(&pi_state->list));
+			list_del_init(&pi_state->list);
+			pi_state->owner = NULL;
+
+			raw_spin_unlock(&curr->pi_lock);
+			raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
 			spin_unlock(&hb->lock);
-			put_pi_state(pi_state);
-			continue;
 		}

-		WARN_ON(pi_state->owner != curr);
-		WARN_ON(list_empty(&pi_state->list));
-		list_del_init(&pi_state->list);
-		pi_state->owner = NULL;
-
-		raw_spin_unlock(&curr->pi_lock);
-		raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
-		spin_unlock(&hb->lock);
-
 		rt_mutex_futex_unlock(&pi_state->pi_mutex);
 		put_pi_state(pi_state);

diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index 3bf942e9400ac..62ce5ecaeddd6 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -920,7 +920,6 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int trylock)
 	struct hrtimer_sleeper timeout, *to;
 	struct task_struct *exiting = NULL;
 	struct rt_mutex_waiter rt_waiter;
-	struct futex_hash_bucket *hb;
 	struct futex_q q = futex_q_init;
 	DEFINE_WAKE_Q(wake_q);
 	int res, ret;
@@ -939,152 +939,169 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int trylock)
 		goto out;

 retry_private:
-	hb = futex_hash(&q.key);
-	futex_q_lock(&q, hb);
+	if (1) {
+		struct futex_hash_bucket *hb;

-	ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
-				   &exiting, 0);
-	if (unlikely(ret)) {
-		/*
-		 * Atomic work succeeded and we got the lock,
-		 * or failed. Either way, we do _not_ block.
-		 */
-		switch (ret) {
-		case 1:
-			/* We got the lock. */
-			ret = 0;
-			goto out_unlock_put_key;
-		case -EFAULT:
-			goto uaddr_faulted;
-		case -EBUSY:
-		case -EAGAIN:
+		hb = futex_hash(&q.key);
+		futex_q_lock(&q, hb);
+
+		ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
+					   &exiting, 0);
+		if (unlikely(ret)) {
 			/*
-			 * Two reasons for this:
-			 * - EBUSY: Task is exiting and we just wait for the
-			 *   exit to complete.
-			 * - EAGAIN: The user space value changed.
+			 * Atomic work succeeded and we got the lock,
+			 * or failed. Either way, we do _not_ block.
 			 */
-			futex_q_unlock(hb);
-			/*
-			 * Handle the case where the owner is in the middle of
-			 * exiting. Wait for the exit to complete otherwise
-			 * this task might loop forever, aka. live lock.
-			 */
-			wait_for_owner_exiting(ret, exiting);
-			cond_resched();
-			goto retry;
-		default:
-			goto out_unlock_put_key;
+			switch (ret) {
+			case 1:
+				/* We got the lock. */
+				ret = 0;
+				goto out_unlock_put_key;
+			case -EFAULT:
+				goto uaddr_faulted;
+			case -EBUSY:
+			case -EAGAIN:
+				/*
+				 * Two reasons for this:
+				 * - EBUSY: Task is exiting and we just wait for the
+				 *   exit to complete.
+				 * - EAGAIN: The user space value changed.
+				 */
+				futex_q_unlock(hb);
+				/*
+				 * Handle the case where the owner is in the middle of
+				 * exiting. Wait for the exit to complete otherwise
+				 * this task might loop forever, aka. live lock.
+				 */
+				wait_for_owner_exiting(ret, exiting);
+				cond_resched();
+				goto retry;
+			default:
+				goto out_unlock_put_key;
+			}
 		}
-	}

-	WARN_ON(!q.pi_state);
+		WARN_ON(!q.pi_state);

-	/*
-	 * Only actually queue now that the atomic ops are done:
-	 */
-	__futex_queue(&q, hb, current);
+		/*
+		 * Only actually queue now that the atomic ops are done:
+		 */
+		__futex_queue(&q, hb, current);

-	if (trylock) {
-		ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
-		/* Fixup the trylock return value: */
-		ret = ret ? 0 : -EWOULDBLOCK;
-		goto no_block;
-	}
+		if (trylock) {
+			ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
+			/* Fixup the trylock return value: */
+			ret = ret ? 0 : -EWOULDBLOCK;
+			goto no_block;
+		}

-	/*
-	 * Must be done before we enqueue the waiter, here is unfortunately
-	 * under the hb lock, but that *should* work because it does nothing.
-	 */
-	rt_mutex_pre_schedule();
+		/*
+		 * Must be done before we enqueue the waiter, here is unfortunately
+		 * under the hb lock, but that *should* work because it does nothing.
+		 */
+		rt_mutex_pre_schedule();

-	rt_mutex_init_waiter(&rt_waiter);
+		rt_mutex_init_waiter(&rt_waiter);

-	/*
-	 * On PREEMPT_RT, when hb->lock becomes an rt_mutex, we must not
-	 * hold it while doing rt_mutex_start_proxy(), because then it will
-	 * include hb->lock in the blocking chain, even through we'll not in
-	 * fact hold it while blocking. This will lead it to report -EDEADLK
-	 * and BUG when futex_unlock_pi() interleaves with this.
-	 *
-	 * Therefore acquire wait_lock while holding hb->lock, but drop the
-	 * latter before calling __rt_mutex_start_proxy_lock(). This
-	 * interleaves with futex_unlock_pi() -- which does a similar lock
-	 * handoff -- such that the latter can observe the futex_q::pi_state
-	 * before __rt_mutex_start_proxy_lock() is done.
-	 */
-	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
-	spin_unlock(q.lock_ptr);
-	/*
-	 * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
-	 * such that futex_unlock_pi() is guaranteed to observe the waiter when
-	 * it sees the futex_q::pi_state.
-	 */
-	ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current, &wake_q);
-	raw_spin_unlock_irq_wake(&q.pi_state->pi_mutex.wait_lock, &wake_q);
+		/*
+		 * On PREEMPT_RT, when hb->lock becomes an rt_mutex, we must not
+		 * hold it while doing rt_mutex_start_proxy(), because then it will
+		 * include hb->lock in the blocking chain, even through we'll not in
+		 * fact hold it while blocking. This will lead it to report -EDEADLK
+		 * and BUG when futex_unlock_pi() interleaves with this.
+		 *
+		 * Therefore acquire wait_lock while holding hb->lock, but drop the
+		 * latter before calling __rt_mutex_start_proxy_lock(). This
+		 * interleaves with futex_unlock_pi() -- which does a similar lock
+		 * handoff -- such that the latter can observe the futex_q::pi_state
+		 * before __rt_mutex_start_proxy_lock() is done.
+ */ + raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock); + spin_unlock(q.lock_ptr); + /* + * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter + * such that futex_unlock_pi() is guaranteed to observe the waiter when + * it sees the futex_q::pi_state. + */ + ret =3D __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, c= urrent, &wake_q); + raw_spin_unlock_irq_wake(&q.pi_state->pi_mutex.wait_lock, &wake_q); =20 - if (ret) { - if (ret =3D=3D 1) - ret =3D 0; - goto cleanup; - } + if (ret) { + if (ret =3D=3D 1) + ret =3D 0; + goto cleanup; + } =20 - if (unlikely(to)) - hrtimer_sleeper_start_expires(to, HRTIMER_MODE_ABS); + if (unlikely(to)) + hrtimer_sleeper_start_expires(to, HRTIMER_MODE_ABS); =20 - ret =3D rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter); + ret =3D rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter); =20 cleanup: - /* - * If we failed to acquire the lock (deadlock/signal/timeout), we must - * must unwind the above, however we canont lock hb->lock because - * rt_mutex already has a waiter enqueued and hb->lock can itself try - * and enqueue an rt_waiter through rtlock. - * - * Doing the cleanup without holding hb->lock can cause inconsistent - * state between hb and pi_state, but only in the direction of not - * seeing a waiter that is leaving. - * - * See futex_unlock_pi(), it deals with this inconsistency. - * - * There be dragons here, since we must deal with the inconsistency on - * the way out (here), it is impossible to detect/warn about the race - * the other way around (missing an incoming waiter). - * - * What could possibly go wrong... 
- */ - if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter= )) - ret =3D 0; + /* + * If we failed to acquire the lock (deadlock/signal/timeout), we must + * unwind the above, however we cannot lock hb->lock because + * rt_mutex already has a waiter enqueued and hb->lock can itself try + * and enqueue an rt_waiter through rtlock. + * + * Doing the cleanup without holding hb->lock can cause inconsistent + * state between hb and pi_state, but only in the direction of not + * seeing a waiter that is leaving. + * + * See futex_unlock_pi(), it deals with this inconsistency. + * + * There be dragons here, since we must deal with the inconsistency on + * the way out (here), it is impossible to detect/warn about the race + * the other way around (missing an incoming waiter). + * + * What could possibly go wrong... + */ + if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waite= r)) + ret =3D 0; =20 - /* - * Now that the rt_waiter has been dequeued, it is safe to use - * spinlock/rtlock (which might enqueue its own rt_waiter) and fix up - * the - */ - spin_lock(q.lock_ptr); - /* - * Waiter is unqueued. - */ - rt_mutex_post_schedule(); + /* + * Now that the rt_waiter has been dequeued, it is safe to use + * spinlock/rtlock (which might enqueue its own rt_waiter) and fix up + * the + */ + spin_lock(q.lock_ptr); + /* + * Waiter is unqueued. + */ + rt_mutex_post_schedule(); no_block: - /* - * Fixup the pi_state owner and possibly acquire the lock if we - * haven't already. - */ - res =3D fixup_pi_owner(uaddr, &q, !ret); - /* - * If fixup_pi_owner() returned an error, propagate that. If it acquired - * the lock, clear our -ETIMEDOUT or -EINTR. - */ - if (res) - ret =3D (res < 0) ? res : 0; + /* + * Fixup the pi_state owner and possibly acquire the lock if we + * haven't already. + */ + res =3D fixup_pi_owner(uaddr, &q, !ret); + /* + * If fixup_pi_owner() returned an error, propagate that.
If it acquired + * the lock, clear our -ETIMEDOUT or -EINTR. + */ + if (res) + ret =3D (res < 0) ? res : 0; =20 - futex_unqueue_pi(&q); - spin_unlock(q.lock_ptr); - goto out; + futex_unqueue_pi(&q); + spin_unlock(q.lock_ptr); + goto out; =20 out_unlock_put_key: - futex_q_unlock(hb); + futex_q_unlock(hb); + goto out; + +uaddr_faulted: + futex_q_unlock(hb); + + ret =3D fault_in_user_writeable(uaddr); + if (ret) + goto out; + + if (!(flags & FLAGS_SHARED)) + goto retry_private; + + goto retry; + } =20 out: if (to) { @@ -1092,18 +1108,6 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int fl= ags, ktime_t *time, int tryl destroy_hrtimer_on_stack(&to->timer); } return ret !=3D -EINTR ? ret : -ERESTARTNOINTR; - -uaddr_faulted: - futex_q_unlock(hb); - - ret =3D fault_in_user_writeable(uaddr); - if (ret) - goto out; - - if (!(flags & FLAGS_SHARED)) - goto retry_private; - - goto retry; } =20 /* diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c index 0e55975af515c..209794cad6f2f 100644 --- a/kernel/futex/requeue.c +++ b/kernel/futex/requeue.c @@ -371,7 +371,6 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flag= s1, union futex_key key1 =3D FUTEX_KEY_INIT, key2 =3D FUTEX_KEY_INIT; int task_count =3D 0, ret; struct futex_pi_state *pi_state =3D NULL; - struct futex_hash_bucket *hb1, *hb2; struct futex_q *this, *next; DEFINE_WAKE_Q(wake_q); =20 @@ -443,240 +442,244 @@ int futex_requeue(u32 __user *uaddr1, unsigned int = flags1, if (requeue_pi && futex_match(&key1, &key2)) return -EINVAL; =20 - hb1 =3D futex_hash(&key1); - hb2 =3D futex_hash(&key2); - retry_private: - futex_hb_waiters_inc(hb2); - double_lock_hb(hb1, hb2); + if (1) { + struct futex_hash_bucket *hb1, *hb2; =20 - if (likely(cmpval !=3D NULL)) { - u32 curval; + hb1 =3D futex_hash(&key1); + hb2 =3D futex_hash(&key2); =20 - ret =3D futex_get_value_locked(&curval, uaddr1); + futex_hb_waiters_inc(hb2); + double_lock_hb(hb1, hb2); =20 - if (unlikely(ret)) { - double_unlock_hb(hb1, hb2); - 
futex_hb_waiters_dec(hb2); + if (likely(cmpval !=3D NULL)) { + u32 curval; =20 - ret =3D get_user(curval, uaddr1); - if (ret) - return ret; + ret =3D futex_get_value_locked(&curval, uaddr1); =20 - if (!(flags1 & FLAGS_SHARED)) - goto retry_private; + if (unlikely(ret)) { + double_unlock_hb(hb1, hb2); + futex_hb_waiters_dec(hb2); =20 - goto retry; - } - if (curval !=3D *cmpval) { - ret =3D -EAGAIN; - goto out_unlock; - } - } + ret =3D get_user(curval, uaddr1); + if (ret) + return ret; =20 - if (requeue_pi) { - struct task_struct *exiting =3D NULL; + if (!(flags1 & FLAGS_SHARED)) + goto retry_private; =20 - /* - * Attempt to acquire uaddr2 and wake the top waiter. If we - * intend to requeue waiters, force setting the FUTEX_WAITERS - * bit. We force this here where we are able to easily handle - * faults rather in the requeue loop below. - * - * Updates topwaiter::requeue_state if a top waiter exists. - */ - ret =3D futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1, - &key2, &pi_state, - &exiting, nr_requeue); - - /* - * At this point the top_waiter has either taken uaddr2 or - * is waiting on it. In both cases pi_state has been - * established and an initial refcount on it. In case of an - * error there's nothing. - * - * The top waiter's requeue_state is up to date: - * - * - If the lock was acquired atomically (ret =3D=3D 1), then - * the state is Q_REQUEUE_PI_LOCKED. - * - * The top waiter has been dequeued and woken up and can - * return to user space immediately. The kernel/user - * space state is consistent. In case that there must be - * more waiters requeued the WAITERS bit in the user - * space futex is set so the top waiter task has to go - * into the syscall slowpath to unlock the futex. This - * will block until this requeue operation has been - * completed and the hash bucket locks have been - * dropped. - * - * - If the trylock failed with an error (ret < 0) then - * the state is either Q_REQUEUE_PI_NONE, i.e. 
"nothing - * happened", or Q_REQUEUE_PI_IGNORE when there was an - * interleaved early wakeup. - * - * - If the trylock did not succeed (ret =3D=3D 0) then the - * state is either Q_REQUEUE_PI_IN_PROGRESS or - * Q_REQUEUE_PI_WAIT if an early wakeup interleaved. - * This will be cleaned up in the loop below, which - * cannot fail because futex_proxy_trylock_atomic() did - * the same sanity checks for requeue_pi as the loop - * below does. - */ - switch (ret) { - case 0: - /* We hold a reference on the pi state. */ - break; - - case 1: - /* - * futex_proxy_trylock_atomic() acquired the user space - * futex. Adjust task_count. - */ - task_count++; - ret =3D 0; - break; - - /* - * If the above failed, then pi_state is NULL and - * waiter::requeue_state is correct. - */ - case -EFAULT: - double_unlock_hb(hb1, hb2); - futex_hb_waiters_dec(hb2); - ret =3D fault_in_user_writeable(uaddr2); - if (!ret) goto retry; - return ret; - case -EBUSY: - case -EAGAIN: - /* - * Two reasons for this: - * - EBUSY: Owner is exiting and we just wait for the - * exit to complete. - * - EAGAIN: The user space value changed. - */ - double_unlock_hb(hb1, hb2); - futex_hb_waiters_dec(hb2); - /* - * Handle the case where the owner is in the middle of - * exiting. Wait for the exit to complete otherwise - * this task might loop forever, aka. live lock. - */ - wait_for_owner_exiting(ret, exiting); - cond_resched(); - goto retry; - default: - goto out_unlock; - } - } - - plist_for_each_entry_safe(this, next, &hb1->chain, list) { - if (task_count - nr_wake >=3D nr_requeue) - break; - - if (!futex_match(&this->key, &key1)) - continue; - - /* - * FUTEX_WAIT_REQUEUE_PI and FUTEX_CMP_REQUEUE_PI should always - * be paired with each other and no other futex ops. - * - * We should never be requeueing a futex_q with a pi_state, - * which is awaiting a futex_unlock_pi(). 
- */ - if ((requeue_pi && !this->rt_waiter) || - (!requeue_pi && this->rt_waiter) || - this->pi_state) { - ret =3D -EINVAL; - break; + } + if (curval !=3D *cmpval) { + ret =3D -EAGAIN; + goto out_unlock; + } } =20 - /* Plain futexes just wake or requeue and are done */ - if (!requeue_pi) { - if (++task_count <=3D nr_wake) - this->wake(&wake_q, this); - else + if (requeue_pi) { + struct task_struct *exiting =3D NULL; + + /* + * Attempt to acquire uaddr2 and wake the top waiter. If we + * intend to requeue waiters, force setting the FUTEX_WAITERS + * bit. We force this here where we are able to easily handle + * faults rather in the requeue loop below. + * + * Updates topwaiter::requeue_state if a top waiter exists. + */ + ret =3D futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1, + &key2, &pi_state, + &exiting, nr_requeue); + + /* + * At this point the top_waiter has either taken uaddr2 or + * is waiting on it. In both cases pi_state has been + * established and an initial refcount on it. In case of an + * error there's nothing. + * + * The top waiter's requeue_state is up to date: + * + * - If the lock was acquired atomically (ret =3D=3D 1), then + * the state is Q_REQUEUE_PI_LOCKED. + * + * The top waiter has been dequeued and woken up and can + * return to user space immediately. The kernel/user + * space state is consistent. In case that there must be + * more waiters requeued the WAITERS bit in the user + * space futex is set so the top waiter task has to go + * into the syscall slowpath to unlock the futex. This + * will block until this requeue operation has been + * completed and the hash bucket locks have been + * dropped. + * + * - If the trylock failed with an error (ret < 0) then + * the state is either Q_REQUEUE_PI_NONE, i.e. "nothing + * happened", or Q_REQUEUE_PI_IGNORE when there was an + * interleaved early wakeup. 
+ * + * - If the trylock did not succeed (ret =3D=3D 0) then the + * state is either Q_REQUEUE_PI_IN_PROGRESS or + * Q_REQUEUE_PI_WAIT if an early wakeup interleaved. + * This will be cleaned up in the loop below, which + * cannot fail because futex_proxy_trylock_atomic() did + * the same sanity checks for requeue_pi as the loop + * below does. + */ + switch (ret) { + case 0: + /* We hold a reference on the pi state. */ + break; + + case 1: + /* + * futex_proxy_trylock_atomic() acquired the user space + * futex. Adjust task_count. + */ + task_count++; + ret =3D 0; + break; + + /* + * If the above failed, then pi_state is NULL and + * waiter::requeue_state is correct. + */ + case -EFAULT: + double_unlock_hb(hb1, hb2); + futex_hb_waiters_dec(hb2); + ret =3D fault_in_user_writeable(uaddr2); + if (!ret) + goto retry; + return ret; + case -EBUSY: + case -EAGAIN: + /* + * Two reasons for this: + * - EBUSY: Owner is exiting and we just wait for the + * exit to complete. + * - EAGAIN: The user space value changed. + */ + double_unlock_hb(hb1, hb2); + futex_hb_waiters_dec(hb2); + /* + * Handle the case where the owner is in the middle of + * exiting. Wait for the exit to complete otherwise + * this task might loop forever, aka. live lock. + */ + wait_for_owner_exiting(ret, exiting); + cond_resched(); + goto retry; + default: + goto out_unlock; + } + } + + plist_for_each_entry_safe(this, next, &hb1->chain, list) { + if (task_count - nr_wake >=3D nr_requeue) + break; + + if (!futex_match(&this->key, &key1)) + continue; + + /* + * FUTEX_WAIT_REQUEUE_PI and FUTEX_CMP_REQUEUE_PI should always + * be paired with each other and no other futex ops. + * + * We should never be requeueing a futex_q with a pi_state, + * which is awaiting a futex_unlock_pi(). 
+ */ + if ((requeue_pi && !this->rt_waiter) || + (!requeue_pi && this->rt_waiter) || + this->pi_state) { + ret =3D -EINVAL; + break; + } + + /* Plain futexes just wake or requeue and are done */ + if (!requeue_pi) { + if (++task_count <=3D nr_wake) + this->wake(&wake_q, this); + else + requeue_futex(this, hb1, hb2, &key2); + continue; + } + + /* Ensure we requeue to the expected futex for requeue_pi. */ + if (!futex_match(this->requeue_pi_key, &key2)) { + ret =3D -EINVAL; + break; + } + + /* + * Requeue nr_requeue waiters and possibly one more in the case + * of requeue_pi if we couldn't acquire the lock atomically. + * + * Prepare the waiter to take the rt_mutex. Take a refcount + * on the pi_state and store the pointer in the futex_q + * object of the waiter. + */ + get_pi_state(pi_state); + + /* Don't requeue when the waiter is already on the way out. */ + if (!futex_requeue_pi_prepare(this, pi_state)) { + /* + * Early woken waiter signaled that it is on the + * way out. Drop the pi_state reference and try the + * next waiter. @this->pi_state is still NULL. + */ + put_pi_state(pi_state); + continue; + } + + ret =3D rt_mutex_start_proxy_lock(&pi_state->pi_mutex, + this->rt_waiter, + this->task); + + if (ret =3D=3D 1) { + /* + * We got the lock. We do neither drop the refcount + * on pi_state nor clear this->pi_state because the + * waiter needs the pi_state for cleaning up the + * user space value. It will drop the refcount + * after doing so. this::requeue_state is updated + * in the wakeup as well. + */ + requeue_pi_wake_futex(this, &key2, hb2); + task_count++; + } else if (!ret) { + /* Waiter is queued, move it to hb2 */ requeue_futex(this, hb1, hb2, &key2); - continue; - } - - /* Ensure we requeue to the expected futex for requeue_pi. 
*/ - if (!futex_match(this->requeue_pi_key, &key2)) { - ret =3D -EINVAL; - break; + futex_requeue_pi_complete(this, 0); + task_count++; + } else { + /* + * rt_mutex_start_proxy_lock() detected a potential + * deadlock when we tried to queue that waiter. + * Drop the pi_state reference which we took above + * and remove the pointer to the state from the + * waiters futex_q object. + */ + this->pi_state =3D NULL; + put_pi_state(pi_state); + futex_requeue_pi_complete(this, ret); + /* + * We stop queueing more waiters and let user space + * deal with the mess. + */ + break; + } } =20 /* - * Requeue nr_requeue waiters and possibly one more in the case - * of requeue_pi if we couldn't acquire the lock atomically. - * - * Prepare the waiter to take the rt_mutex. Take a refcount - * on the pi_state and store the pointer in the futex_q - * object of the waiter. + * We took an extra initial reference to the pi_state in + * futex_proxy_trylock_atomic(). We need to drop it here again. */ - get_pi_state(pi_state); - - /* Don't requeue when the waiter is already on the way out. */ - if (!futex_requeue_pi_prepare(this, pi_state)) { - /* - * Early woken waiter signaled that it is on the - * way out. Drop the pi_state reference and try the - * next waiter. @this->pi_state is still NULL. - */ - put_pi_state(pi_state); - continue; - } - - ret =3D rt_mutex_start_proxy_lock(&pi_state->pi_mutex, - this->rt_waiter, - this->task); - - if (ret =3D=3D 1) { - /* - * We got the lock. We do neither drop the refcount - * on pi_state nor clear this->pi_state because the - * waiter needs the pi_state for cleaning up the - * user space value. It will drop the refcount - * after doing so. this::requeue_state is updated - * in the wakeup as well. 
- */ - requeue_pi_wake_futex(this, &key2, hb2); - task_count++; - } else if (!ret) { - /* Waiter is queued, move it to hb2 */ - requeue_futex(this, hb1, hb2, &key2); - futex_requeue_pi_complete(this, 0); - task_count++; - } else { - /* - * rt_mutex_start_proxy_lock() detected a potential - * deadlock when we tried to queue that waiter. - * Drop the pi_state reference which we took above - * and remove the pointer to the state from the - * waiters futex_q object. - */ - this->pi_state =3D NULL; - put_pi_state(pi_state); - futex_requeue_pi_complete(this, ret); - /* - * We stop queueing more waiters and let user space - * deal with the mess. - */ - break; - } - } - - /* - * We took an extra initial reference to the pi_state in - * futex_proxy_trylock_atomic(). We need to drop it here again. - */ - put_pi_state(pi_state); + put_pi_state(pi_state); =20 out_unlock: - double_unlock_hb(hb1, hb2); + double_unlock_hb(hb1, hb2); + futex_hb_waiters_dec(hb2); + } wake_up_q(&wake_q); - futex_hb_waiters_dec(hb2); return ret ? 
ret : task_count; } =20 diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c index 1108f373fd315..4bf839c85b66c 100644 --- a/kernel/futex/waitwake.c +++ b/kernel/futex/waitwake.c @@ -253,7 +253,6 @@ int futex_wake_op(u32 __user *uaddr1, unsigned int flag= s, u32 __user *uaddr2, int nr_wake, int nr_wake2, int op) { union futex_key key1 =3D FUTEX_KEY_INIT, key2 =3D FUTEX_KEY_INIT; - struct futex_hash_bucket *hb1, *hb2; struct futex_q *this, *next; int ret, op_ret; DEFINE_WAKE_Q(wake_q); @@ -266,67 +265,71 @@ int futex_wake_op(u32 __user *uaddr1, unsigned int fl= ags, u32 __user *uaddr2, if (unlikely(ret !=3D 0)) return ret; =20 - hb1 =3D futex_hash(&key1); - hb2 =3D futex_hash(&key2); - retry_private: - double_lock_hb(hb1, hb2); - op_ret =3D futex_atomic_op_inuser(op, uaddr2); - if (unlikely(op_ret < 0)) { - double_unlock_hb(hb1, hb2); + if (1) { + struct futex_hash_bucket *hb1, *hb2; =20 - if (!IS_ENABLED(CONFIG_MMU) || - unlikely(op_ret !=3D -EFAULT && op_ret !=3D -EAGAIN)) { - /* - * we don't get EFAULT from MMU faults if we don't have - * an MMU, but we might get them from range checking - */ - ret =3D op_ret; - return ret; - } + hb1 =3D futex_hash(&key1); + hb2 =3D futex_hash(&key2); =20 - if (op_ret =3D=3D -EFAULT) { - ret =3D fault_in_user_writeable(uaddr2); - if (ret) + double_lock_hb(hb1, hb2); + op_ret =3D futex_atomic_op_inuser(op, uaddr2); + if (unlikely(op_ret < 0)) { + double_unlock_hb(hb1, hb2); + + if (!IS_ENABLED(CONFIG_MMU) || + unlikely(op_ret !=3D -EFAULT && op_ret !=3D -EAGAIN)) { + /* + * we don't get EFAULT from MMU faults if we don't have + * an MMU, but we might get them from range checking + */ + ret =3D op_ret; return ret; - } - - cond_resched(); - if (!(flags & FLAGS_SHARED)) - goto retry_private; - goto retry; - } - - plist_for_each_entry_safe(this, next, &hb1->chain, list) { - if (futex_match (&this->key, &key1)) { - if (this->pi_state || this->rt_waiter) { - ret =3D -EINVAL; - goto out_unlock; } - this->wake(&wake_q, this); - 
if (++ret >=3D nr_wake) - break; - } - } =20 - if (op_ret > 0) { - op_ret =3D 0; - plist_for_each_entry_safe(this, next, &hb2->chain, list) { - if (futex_match (&this->key, &key2)) { + if (op_ret =3D=3D -EFAULT) { + ret =3D fault_in_user_writeable(uaddr2); + if (ret) + return ret; + } + + cond_resched(); + if (!(flags & FLAGS_SHARED)) + goto retry_private; + goto retry; + } + + plist_for_each_entry_safe(this, next, &hb1->chain, list) { + if (futex_match (&this->key, &key1)) { if (this->pi_state || this->rt_waiter) { ret =3D -EINVAL; goto out_unlock; } this->wake(&wake_q, this); - if (++op_ret >=3D nr_wake2) + if (++ret >=3D nr_wake) break; } } - ret +=3D op_ret; - } + + if (op_ret > 0) { + op_ret =3D 0; + plist_for_each_entry_safe(this, next, &hb2->chain, list) { + if (futex_match (&this->key, &key2)) { + if (this->pi_state || this->rt_waiter) { + ret =3D -EINVAL; + goto out_unlock; + } + this->wake(&wake_q, this); + if (++op_ret >=3D nr_wake2) + break; + } + } + ret +=3D op_ret; + } =20 out_unlock: - double_unlock_hb(hb1, hb2); + double_unlock_hb(hb1, hb2); + } wake_up_q(&wake_q); return ret; } @@ -402,7 +405,6 @@ int futex_unqueue_multiple(struct futex_vector *v, int = count) */ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *wok= en) { - struct futex_hash_bucket *hb; bool retry =3D false; int ret, i; u32 uval; @@ -441,21 +443,25 @@ int futex_wait_multiple_setup(struct futex_vector *vs= , int count, int *woken) struct futex_q *q =3D &vs[i].q; u32 val =3D vs[i].w.val; =20 - hb =3D futex_hash(&q->key); - futex_q_lock(q, hb); - ret =3D futex_get_value_locked(&uval, uaddr); + if (1) { + struct futex_hash_bucket *hb; =20 - if (!ret && uval =3D=3D val) { - /* - * The bucket lock can't be held while dealing with the - * next futex. Queue each futex at this moment so hb can - * be unlocked. 
- */ - futex_queue(q, hb, current); - continue; + hb =3D futex_hash(&q->key); + futex_q_lock(q, hb); + ret =3D futex_get_value_locked(&uval, uaddr); + + if (!ret && uval =3D=3D val) { + /* + * The bucket lock can't be held while dealing with the + * next futex. Queue each futex at this moment so hb can + * be unlocked. + */ + futex_queue(q, hb, current); + continue; + } + + futex_q_unlock(hb); } - - futex_q_unlock(hb); __set_current_state(TASK_RUNNING); =20 /* @@ -584,7 +590,6 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsign= ed int flags, struct futex_q *q, union futex_key *key2, struct task_struct *task) { - struct futex_hash_bucket *hb; u32 uval; int ret; =20 @@ -612,44 +617,48 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsi= gned int flags, return ret; =20 retry_private: - hb =3D futex_hash(&q->key); - futex_q_lock(q, hb); + if (1) { + struct futex_hash_bucket *hb; =20 - ret =3D futex_get_value_locked(&uval, uaddr); + hb =3D futex_hash(&q->key); + futex_q_lock(q, hb); =20 - if (ret) { - futex_q_unlock(hb); + ret =3D futex_get_value_locked(&uval, uaddr); =20 - ret =3D get_user(uval, uaddr); - if (ret) - return ret; + if (ret) { + futex_q_unlock(hb); =20 - if (!(flags & FLAGS_SHARED)) - goto retry_private; + ret =3D get_user(uval, uaddr); + if (ret) + return ret; =20 - goto retry; + if (!(flags & FLAGS_SHARED)) + goto retry_private; + + goto retry; + } + + if (uval !=3D val) { + futex_q_unlock(hb); + return -EWOULDBLOCK; + } + + if (key2 && futex_match(&q->key, key2)) { + futex_q_unlock(hb); + return -EINVAL; + } + + /* + * The task state is guaranteed to be set before another task can + * wake it. set_current_state() is implemented using smp_store_mb() and + * futex_queue() calls spin_unlock() upon completion, both serializing + * access to the hash list and forcing another memory barrier. 
+ */ + if (task =3D=3D current) + set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE); + futex_queue(q, hb, task); } =20 - if (uval !=3D val) { - futex_q_unlock(hb); - return -EWOULDBLOCK; - } - - if (key2 && futex_match(&q->key, key2)) { - futex_q_unlock(hb); - return -EINVAL; - } - - /* - * The task state is guaranteed to be set before another task can - * wake it. set_current_state() is implemented using smp_store_mb() and - * futex_queue() calls spin_unlock() upon completion, both serializing - * access to the hash list and forcing another memory barrier. - */ - if (task =3D=3D current) - set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE); - futex_queue(q, hb, task); - return ret; } =20 --=20 2.47.2 From nobody Fri Dec 19 03:04:55 2025 From: Sebastian Andrzej Siewior To: linux-kernel@vger.kernel.org Cc: André Almeida , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli , Peter Zijlstra , Thomas Gleixner , Valentin Schneider ,
Waiman Long , Sebastian Andrzej Siewior Subject: [PATCH v10 05/21] futex: Create futex_hash() get/put class Date: Wed, 12 Mar 2025 16:16:18 +0100 Message-ID: <20250312151634.2183278-6-bigeasy@linutronix.de> In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de> References: <20250312151634.2183278-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Sebastian Andrzej Siewior --- kernel/futex/core.c | 7 +++---- kernel/futex/futex.h | 8 +++++++- kernel/futex/pi.c | 10 ++++++---- kernel/futex/requeue.c | 10 +++------- kernel/futex/waitwake.c | 15 +++++---------- 5 files changed, 24 insertions(+), 26 deletions(-) diff --git a/kernel/futex/core.c b/kernel/futex/core.c index e4cb5ce9785b1..08cf54567aeb6 100644 --- a/kernel/futex/core.c +++ b/kernel/futex/core.c @@ -114,7 +114,7 @@ late_initcall(fail_futex_debugfs); * We hash on the keys returned from get_futex_key (see below) and return = the * corresponding hash bucket in the global hash. */ -struct futex_hash_bucket *futex_hash(union futex_key *key) +struct futex_hash_bucket *__futex_hash(union futex_key *key) { u32 hash =3D jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4, key->both.offset); @@ -122,6 +122,7 @@ struct futex_hash_bucket *futex_hash(union futex_key *k= ey) return &futex_queues[hash & futex_hashmask]; } =20 +void futex_hash_put(struct futex_hash_bucket *hb) { } =20 /** * futex_setup_timer - set up the sleeping hrtimer. 
@@ -957,9 +958,7 @@ static void exit_pi_state_list(struct task_struct *curr)
 		pi_state = list_entry(next, struct futex_pi_state, list);
 		key = pi_state->key;
 		if (1) {
-			struct futex_hash_bucket *hb;
-
-			hb = futex_hash(&key);
+			CLASS(hb, hb)(&key);
 
 			/*
 			 * We can race against put_pi_state() removing itself from the
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index a219903e52084..eac6de6ed563a 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_PREEMPT_RT
 #include
@@ -201,7 +202,12 @@ extern struct hrtimer_sleeper *
 futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
 		  int flags, u64 range_ns);
 
-extern struct futex_hash_bucket *futex_hash(union futex_key *key);
+extern struct futex_hash_bucket *__futex_hash(union futex_key *key);
+extern void futex_hash_put(struct futex_hash_bucket *hb);
+
+DEFINE_CLASS(hb, struct futex_hash_bucket *,
+	     if (_T) futex_hash_put(_T),
+	     __futex_hash(key), union futex_key *key);
 
 /**
  * futex_match - Check whether two futex keys are equal
diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index 62ce5ecaeddd6..4cee9ec5d97d6 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -939,9 +939,8 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
 
 retry_private:
 	if (1) {
-		struct futex_hash_bucket *hb;
+		CLASS(hb, hb)(&q.key);
 
-		hb = futex_hash(&q.key);
 		futex_q_lock(&q, hb);
 
 		ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
@@ -1017,6 +1016,10 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
 	 */
 	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
 	spin_unlock(q.lock_ptr);
+	/*
+	 * Caution; releasing @hb in-scope.
+	 */
+	futex_hash_put(no_free_ptr(hb));
 	/*
 	 * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
 	 * such that futex_unlock_pi() is guaranteed to observe the waiter when
@@ -1119,7 +1122,6 @@ int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
 {
 	u32 curval, uval, vpid = task_pid_vnr(current);
 	union futex_key key = FUTEX_KEY_INIT;
-	struct futex_hash_bucket *hb;
 	struct futex_q *top_waiter;
 	int ret;
 
@@ -1139,7 +1141,7 @@ int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
 	if (ret)
 		return ret;
 
-	hb = futex_hash(&key);
+	CLASS(hb, hb)(&key);
 	spin_lock(&hb->lock);
 retry_hb:
 
diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index 209794cad6f2f..992e3ce005c6f 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -444,10 +444,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flags1,
 
 retry_private:
 	if (1) {
-		struct futex_hash_bucket *hb1, *hb2;
-
-		hb1 = futex_hash(&key1);
-		hb2 = futex_hash(&key2);
+		CLASS(hb, hb1)(&key1);
+		CLASS(hb, hb2)(&key2);
 
 		futex_hb_waiters_inc(hb2);
 		double_lock_hb(hb1, hb2);
@@ -817,9 +815,7 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
 	switch (futex_requeue_pi_wakeup_sync(&q)) {
 	case Q_REQUEUE_PI_IGNORE:
 	{
-		struct futex_hash_bucket *hb;
-
-		hb = futex_hash(&q.key);
+		CLASS(hb, hb)(&q.key);
 		/* The waiter is still on uaddr1 */
 		spin_lock(&hb->lock);
 		ret = handle_early_requeue_pi_wakeup(hb, &q, to);
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 4bf839c85b66c..44034dee7a48c 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -154,7 +154,6 @@ void futex_wake_mark(struct wake_q_head *wake_q, struct futex_q *q)
  */
 int futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
 {
-	struct futex_hash_bucket *hb;
 	struct futex_q *this, *next;
 	union futex_key key = FUTEX_KEY_INIT;
 	DEFINE_WAKE_Q(wake_q);
@@ -170,7 +169,7 @@ int futex_wake(u32 __user *uaddr, unsigned int flags,
int nr_wake, u32 bitset)
 	if ((flags & FLAGS_STRICT) && !nr_wake)
 		return 0;
 
-	hb = futex_hash(&key);
+	CLASS(hb, hb)(&key);
 
 	/* Make sure we really have tasks to wakeup */
 	if (!futex_hb_waiters_pending(hb))
@@ -267,10 +266,8 @@ int futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
 
 retry_private:
 	if (1) {
-		struct futex_hash_bucket *hb1, *hb2;
-
-		hb1 = futex_hash(&key1);
-		hb2 = futex_hash(&key2);
+		CLASS(hb, hb1)(&key1);
+		CLASS(hb, hb2)(&key2);
 
 		double_lock_hb(hb1, hb2);
 		op_ret = futex_atomic_op_inuser(op, uaddr2);
@@ -444,9 +441,8 @@ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
 		u32 val = vs[i].w.val;
 
 		if (1) {
-			struct futex_hash_bucket *hb;
+			CLASS(hb, hb)(&q->key);
 
-			hb = futex_hash(&q->key);
 			futex_q_lock(q, hb);
 			ret = futex_get_value_locked(&uval, uaddr);
 
@@ -618,9 +614,8 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
 
 retry_private:
 	if (1) {
-		struct futex_hash_bucket *hb;
+		CLASS(hb, hb)(&q->key);
 
-		hb = futex_hash(&q->key);
 		futex_q_lock(q, hb);
 
 		ret = futex_get_value_locked(&uval, uaddr);
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli ,
 Peter Zijlstra , Thomas Gleixner , Valentin Schneider , Waiman Long ,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 06/21] futex: Create helper function to initialize a hash slot.
Date: Wed, 12 Mar 2025 16:16:19 +0100
Message-ID: <20250312151634.2183278-7-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

Factor out the futex_hash_bucket initialisation into a helper function.
The helper function will be used in a follow-up patch implementing
process-private hash buckets.
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/futex/core.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 08cf54567aeb6..c6c5cde78e0cb 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -1122,6 +1122,13 @@ void futex_exit_release(struct task_struct *tsk)
 	futex_cleanup_end(tsk, FUTEX_STATE_DEAD);
 }
 
+static void futex_hash_bucket_init(struct futex_hash_bucket *fhb)
+{
+	atomic_set(&fhb->waiters, 0);
+	plist_head_init(&fhb->chain);
+	spin_lock_init(&fhb->lock);
+}
+
 static int __init futex_init(void)
 {
 	unsigned long hashsize, i;
@@ -1139,11 +1146,8 @@ static int __init futex_init(void)
 		hashsize, hashsize);
 	hashsize = 1UL << futex_shift;
 
-	for (i = 0; i < hashsize; i++) {
-		atomic_set(&futex_queues[i].waiters, 0);
-		plist_head_init(&futex_queues[i].chain);
-		spin_lock_init(&futex_queues[i].lock);
-	}
+	for (i = 0; i < hashsize; i++)
+		futex_hash_bucket_init(&futex_queues[i]);
 
 	futex_hashmask = hashsize - 1;
 	return 0;
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli ,
 Peter Zijlstra , Thomas Gleixner , Valentin Schneider , Waiman Long ,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 07/21] futex: Add basic infrastructure for local task local hash.
Date: Wed, 12 Mar 2025 16:16:20 +0100
Message-ID: <20250312151634.2183278-8-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

The futex hashmap is system wide and shared by random tasks. Each slot
is hashed based on its address and VMA. Due to randomized VMAs (and
memory allocations) the same logical lock (pointer) can end up in a
different hash bucket on each invocation of the application. This in
turn means that different applications may share a hash bucket on the
first invocation but not on the second, and it is not always clear which
applications will be involved. This can result in high latencies when
acquiring the futex_hash_bucket::lock, especially if the lock owner is
pinned to a CPU and cannot be effectively PI boosted.

Introduce a task-local hash map. The hashmap can be allocated via

	prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, 0)

The `0' argument allocates a default number of 16 slots; a higher number
can be specified if desired. The current upper limit is 131072. The
allocated hashmap is used by all threads within a process.
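[Not part of the patch: a minimal user-space sketch of the allocation call described above. The PR_FUTEX_HASH constants are defined locally with the values from this patch's uapi header hunk, since pre-series <sys/prctl.h> headers do not carry them; on a kernel without this series the call simply fails with errno set to EINVAL.]

```c
#include <assert.h>
#include <errno.h>
#include <sys/prctl.h>

/* Values taken from this patch's include/uapi/linux/prctl.h hunk. */
#ifndef PR_FUTEX_HASH
#define PR_FUTEX_HASH            77
#define PR_FUTEX_HASH_SET_SLOTS  1
#define PR_FUTEX_HASH_GET_SLOTS  2
#endif

/*
 * Ask the kernel for a process-private futex hash; 0 slots requests the
 * default (16).  Returns 0 on success, -1 with errno set on failure,
 * e.g. EINVAL on kernels without PR_FUTEX_HASH support.
 */
static int request_private_futex_hash(unsigned long slots)
{
	return prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, slots, 0, 0);
}

/* Query how many slots the private hash currently has (0 if none). */
static long private_futex_hash_slots(void)
{
	return prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_SLOTS, 0, 0, 0);
}
```

Per the allocation rules in this patch, the SET_SLOTS call must come from the thread-group leader and succeeds at most once per process (a second attempt returns EALREADY).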
A thread can check whether the private map has been allocated via

	prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_SLOTS);

which returns the current number of slots.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/futex.h      | 20 ++++++++
 include/linux/mm_types.h   |  6 ++-
 include/uapi/linux/prctl.h |  5 ++
 kernel/fork.c              |  2 +
 kernel/futex/core.c        | 99 ++++++++++++++++++++++++++++++++++++--
 kernel/sys.c               |  4 ++
 6 files changed, 132 insertions(+), 4 deletions(-)

diff --git a/include/linux/futex.h b/include/linux/futex.h
index b70df27d7e85c..943828db52234 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -77,6 +77,15 @@ void futex_exec_release(struct task_struct *tsk);
 
 long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
 	      u32 __user *uaddr2, u32 val2, u32 val3);
+int futex_hash_prctl(unsigned long arg2, unsigned long arg3);
+int futex_hash_allocate_default(void);
+void futex_hash_free(struct mm_struct *mm);
+
+static inline void futex_mm_init(struct mm_struct *mm)
+{
+	mm->futex_hash_bucket = NULL;
+}
+
 #else
 static inline void futex_init_task(struct task_struct *tsk) { }
 static inline void futex_exit_recursive(struct task_struct *tsk) { }
@@ -88,6 +97,17 @@ static inline long do_futex(u32 __user *uaddr, int op, u32 val,
 {
 	return -EINVAL;
 }
+static inline int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
+{
+	return -EINVAL;
+}
+static inline int futex_hash_allocate_default(void)
+{
+	return 0;
+}
+static inline void futex_hash_free(struct mm_struct *mm) { }
+static inline void futex_mm_init(struct mm_struct *mm) { }
+
 #endif
 
 #endif
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0234f14f2aa6b..769cd77364e2d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -30,6 +30,7 @@
 #define INIT_PASID	0
 
 struct address_space;
+struct futex_hash_bucket;
 struct mem_cgroup;
 
 /*
@@ -937,7 +938,10 @@ struct mm_struct {
 	 */
 	seqcount_t mm_lock_seq;
 #endif
-
+#ifdef CONFIG_FUTEX
+	unsigned int futex_hash_mask;
+	struct futex_hash_bucket *futex_hash_bucket;
+#endif
 
 	unsigned long hiwater_rss; /* High-watermark of RSS usage */
 	unsigned long hiwater_vm;  /* High-water virtual memory usage */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 5c6080680cb27..55b843644c51a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -353,4 +353,9 @@ struct prctl_mm_map {
  */
 #define PR_LOCK_SHADOW_STACK_STATUS	76
 
+/* FUTEX hash management */
+#define PR_FUTEX_HASH			77
+# define PR_FUTEX_HASH_SET_SLOTS	1
+# define PR_FUTEX_HASH_GET_SLOTS	2
+
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index e27fe5d5a15c9..95d48dbc90934 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1287,6 +1287,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	RCU_INIT_POINTER(mm->exe_file, NULL);
 	mmu_notifier_subscriptions_init(mm);
 	init_tlb_flush_pending(mm);
+	futex_mm_init(mm);
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !defined(CONFIG_SPLIT_PMD_PTLOCKS)
 	mm->pmd_huge_pte = NULL;
 #endif
@@ -1364,6 +1365,7 @@ static inline void __mmput(struct mm_struct *mm)
 	if (mm->binfmt)
 		module_put(mm->binfmt->module);
 	lru_gen_del_mm(mm);
+	futex_hash_free(mm);
 	mmdrop(mm);
 }
 
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index c6c5cde78e0cb..1feb7092635d0 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -39,6 +39,7 @@
 #include
 #include
 #include
+#include
 
 #include "futex.h"
 #include "../locking/rtmutex_common.h"
@@ -107,18 +108,40 @@ late_initcall(fail_futex_debugfs);
 
 #endif /* CONFIG_FAIL_FUTEX */
 
+static inline bool futex_key_is_private(union futex_key *key)
+{
+	/*
+	 * Relies on get_futex_key() to set either bit for shared
+	 * futexes -- see comment with union futex_key.
+	 */
+	return !(key->both.offset & (FUT_OFF_INODE | FUT_OFF_MMSHARED));
+}
+
 /**
  * futex_hash - Return the hash bucket in the global hash
  * @key: Pointer to the futex key for which the hash is calculated
  *
  * We hash on the keys returned from get_futex_key (see below) and return the
- * corresponding hash bucket in the global hash.
+ * corresponding hash bucket in the global hash. If the FUTEX is private and
+ * a local hash table is provided then this one is used.
  */
 struct futex_hash_bucket *__futex_hash(union futex_key *key)
 {
-	u32 hash = jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4,
-			  key->both.offset);
+	struct futex_hash_bucket *fhb;
+	u32 hash;
 
+	fhb = current->mm->futex_hash_bucket;
+	if (fhb && futex_key_is_private(key)) {
+		u32 hash_mask = current->mm->futex_hash_mask;
+
+		hash = jhash2((u32 *)key,
+			      offsetof(typeof(*key), both.offset) / 4,
+			      key->both.offset);
+		return &fhb[hash & hash_mask];
+	}
+	hash = jhash2((u32 *)key,
+		      offsetof(typeof(*key), both.offset) / 4,
+		      key->both.offset);
 	return &futex_queues[hash & futex_hashmask];
 }
 
@@ -1129,6 +1152,76 @@ static void futex_hash_bucket_init(struct futex_hash_bucket *fhb)
 	spin_lock_init(&fhb->lock);
 }
 
+void futex_hash_free(struct mm_struct *mm)
+{
+	kvfree(mm->futex_hash_bucket);
+}
+
+static int futex_hash_allocate(unsigned int hash_slots)
+{
+	struct futex_hash_bucket *fhb;
+	int i;
+
+	if (current->mm->futex_hash_bucket)
+		return -EALREADY;
+
+	if (!thread_group_leader(current))
+		return -EINVAL;
+
+	if (hash_slots == 0)
+		hash_slots = 16;
+	if (hash_slots < 2)
+		hash_slots = 2;
+	if (hash_slots > 131072)
+		hash_slots = 131072;
+	if (!is_power_of_2(hash_slots))
+		hash_slots = rounddown_pow_of_two(hash_slots);
+
+	fhb = kvmalloc_array(hash_slots, sizeof(struct futex_hash_bucket), GFP_KERNEL_ACCOUNT);
+	if (!fhb)
+		return -ENOMEM;
+
+	current->mm->futex_hash_mask = hash_slots - 1;
+
+	for (i = 0; i < hash_slots; i++)
+		futex_hash_bucket_init(&fhb[i]);
+
+	current->mm->futex_hash_bucket = fhb;
+	return 0;
+}
+
+int futex_hash_allocate_default(void)
+{
+	return futex_hash_allocate(0);
+}
+
+static int futex_hash_get_slots(void)
+{
+	if (current->mm->futex_hash_bucket)
+		return current->mm->futex_hash_mask + 1;
+	return 0;
+}
+
+int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
+{
+	int ret;
+
+	switch (arg2) {
+	case PR_FUTEX_HASH_SET_SLOTS:
+		ret = futex_hash_allocate(arg3);
+		break;
+
+	case PR_FUTEX_HASH_GET_SLOTS:
+		ret = futex_hash_get_slots();
+		break;
+
+	default:
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
 static int __init futex_init(void)
 {
 	unsigned long hashsize, i;
diff --git a/kernel/sys.c b/kernel/sys.c
index cb366ff8703af..e509ad9795103 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -52,6 +52,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -2811,6 +2812,9 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 			return -EINVAL;
 		error = arch_lock_shadow_stack_status(me, arg2);
 		break;
+	case PR_FUTEX_HASH:
+		error = futex_hash_prctl(arg2, arg3);
+		break;
 	default:
 		trace_task_prctl_unknown(option, arg2, arg3, arg4, arg5);
 		error = -EINVAL;
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli ,
 Peter Zijlstra , Thomas Gleixner , Valentin Schneider , Waiman Long ,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 08/21] futex: Hash only the address for private futexes.
Date: Wed, 12 Mar 2025 16:16:21 +0100
Message-ID: <20250312151634.2183278-9-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

futex_hash() passes the whole futex_key to jhash2. The first two members
are passed as the first argument and the offset as the "initial value".

For private futexes the mm part is always the same and it is used only
within the process. By excluding the mm part from the hash, the length
passed to jhash2 is reduced from 4 words (16 / 4) to 2 (8 / 4). This
avoids the __jhash_mix() part of jhash. The resulting code is smaller
and, based on testing, this variant performs as well as the original or
slightly better.
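[Not part of the patch: a stand-alone toy model of the idea above. The key layout is simplified and an FNV-1a word mixer stands in for jhash2; it only illustrates that the hashed length drops from 4 u32 words to 2 when the constant mm part is excluded, and that the address-only hash is independent of mm -- which is safe precisely because each process indexes its own private table.]

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for the kernel's futex_key (illustrative layout). */
struct toy_key {
	uint64_t address;	/* key->private.address */
	uint64_t mm;		/* constant within one process */
	uint32_t offset;	/* key->both.offset, used as hash seed */
};

/* Stand-in word mixer (FNV-1a), NOT jhash2. */
static uint32_t toy_hash(const uint32_t *words, size_t n, uint32_t seed)
{
	uint32_t h = 2166136261u ^ seed;

	for (size_t i = 0; i < n; i++) {
		h ^= words[i];
		h *= 16777619u;
	}
	return h;
}

/* Global-hash style: mix the whole 16-byte key, i.e. 4 u32 words. */
static uint32_t bucket_global(const struct toy_key *k, uint32_t mask)
{
	uint32_t w[4];

	memcpy(w, k, sizeof(w));	/* address + mm */
	return toy_hash(w, 4, k->offset) & mask;
}

/* Private-hash style: mix only the address, i.e. 2 u32 words. */
static uint32_t bucket_private(const struct toy_key *k, uint32_t mask)
{
	uint32_t w[2];

	memcpy(w, &k->address, sizeof(w));
	return toy_hash(w, 2, k->offset) & mask;
}
```

Two keys that differ only in the mm part land in the same private bucket, while the global variant may separate them; both results always stay within the mask.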
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/futex/core.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 1feb7092635d0..8561c41df7dc5 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -117,6 +117,18 @@ static inline bool futex_key_is_private(union futex_key *key)
 	return !(key->both.offset & (FUT_OFF_INODE | FUT_OFF_MMSHARED));
 }
 
+static struct futex_hash_bucket *futex_hash_private(union futex_key *key,
+						    struct futex_hash_bucket *fhb,
+						    u32 hash_mask)
+{
+	u32 hash;
+
+	hash = jhash2((void *)&key->private.address,
+		      sizeof(key->private.address) / 4,
+		      key->both.offset);
+	return &fhb[hash & hash_mask];
+}
+
 /**
  * futex_hash - Return the hash bucket in the global hash
  * @key: Pointer to the futex key for which the hash is calculated
@@ -131,14 +143,9 @@ struct futex_hash_bucket *__futex_hash(union futex_key *key)
 	u32 hash;
 
 	fhb = current->mm->futex_hash_bucket;
-	if (fhb && futex_key_is_private(key)) {
-		u32 hash_mask = current->mm->futex_hash_mask;
+	if (fhb && futex_key_is_private(key))
+		return futex_hash_private(key, fhb, current->mm->futex_hash_mask);
 
-		hash = jhash2((u32 *)key,
-			      offsetof(typeof(*key), both.offset) / 4,
-			      key->both.offset);
-		return &fhb[hash & hash_mask];
-	}
 	hash = jhash2((u32 *)key,
 		      offsetof(typeof(*key), both.offset) / 4,
 		      key->both.offset);
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli ,
 Peter Zijlstra , Thomas Gleixner , Valentin Schneider , Waiman Long ,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 09/21] futex: Allow automatic allocation of process wide futex hash.
Date: Wed, 12 Mar 2025 16:16:22 +0100
Message-ID: <20250312151634.2183278-10-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

Allocate a default futex hash if a task forks its first thread.
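[Not part of the patch: the gate that decides when the default hash gets allocated can be sketched as a pure function. The TOY_CLONE_* values mirror the uapi CLONE_VM/CLONE_THREAD bits, and the two boolean parameters stand in for thread_group_empty(current) and an already-present mm->futex_hash_bucket.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Values matching the uapi clone flags. */
#define TOY_CLONE_VM		0x00000100u
#define TOY_CLONE_THREAD	0x00010000u

/*
 * Mirrors the check added to copy_process(): the default hash is
 * allocated only when a real thread is created (CLONE_THREAD and
 * CLONE_VM both set), the caller is still single-threaded, and no
 * private hash has been allocated yet (e.g. via prctl()).
 */
static bool toy_need_default_hash(uint64_t clone_flags,
				  bool group_empty, bool hash_present)
{
	if ((clone_flags & (TOY_CLONE_THREAD | TOY_CLONE_VM)) !=
	    (TOY_CLONE_THREAD | TOY_CLONE_VM))
		return false;
	if (!group_empty)
		return false;
	return !hash_present;
}
```

A plain fork() (no CLONE_THREAD) never triggers the allocation, nor does creation of a second or later thread.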
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/futex.h | 12 ++++++++++++
 kernel/fork.c         | 24 ++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/include/linux/futex.h b/include/linux/futex.h
index 943828db52234..bad377c30de5e 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -86,6 +86,13 @@ static inline void futex_mm_init(struct mm_struct *mm)
 	mm->futex_hash_bucket = NULL;
 }
 
+static inline bool futex_hash_requires_allocation(void)
+{
+	if (current->mm->futex_hash_bucket)
+		return false;
+	return true;
+}
+
 #else
 static inline void futex_init_task(struct task_struct *tsk) { }
 static inline void futex_exit_recursive(struct task_struct *tsk) { }
@@ -108,6 +115,11 @@ static inline int futex_hash_allocate_default(void)
 static inline void futex_hash_free(struct mm_struct *mm) { }
 static inline void futex_mm_init(struct mm_struct *mm) { }
 
+static inline bool futex_hash_requires_allocation(void)
+{
+	return false;
+}
+
 #endif
 
 #endif
diff --git a/kernel/fork.c b/kernel/fork.c
index 95d48dbc90934..440c5808f70a2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2137,6 +2137,15 @@ static void rv_task_fork(struct task_struct *p)
 #define rv_task_fork(p) do {} while (0)
 #endif
 
+static bool need_futex_hash_allocate_default(u64 clone_flags)
+{
+	if ((clone_flags & (CLONE_THREAD | CLONE_VM)) != (CLONE_THREAD | CLONE_VM))
+		return false;
+	if (!thread_group_empty(current))
+		return false;
+	return futex_hash_requires_allocation();
+}
+
 /*
  * This creates a new process as a copy of the old one,
  * but does not actually start it yet.
@@ -2514,6 +2523,21 @@ __latent_entropy struct task_struct *copy_process(
 	if (retval)
 		goto bad_fork_cancel_cgroup;
 
+	/*
+	 * Allocate a default futex hash for the user process once the first
+	 * thread spawns.
+ */ + if (need_futex_hash_allocate_default(clone_flags)) { + retval =3D futex_hash_allocate_default(); + if (retval) + goto bad_fork_core_free; + /* + * If we fail beyond this point we don't free the allocated + * futex hash map. We assume that another thread will be created + * and makes use of it. The hash map will be freed once the main + * thread terminates. + */ + } /* * From this point on we must avoid any synchronous user-space * communication until we take the tasklist-lock. In particular, we do --=20 2.47.2 From nobody Fri Dec 19 03:04:55 2025 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4438E253F32 for ; Wed, 12 Mar 2025 15:16:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792608; cv=none; b=bo7IvVwL4z4VezNj4zruyCrYugHL4XaCAjHx1zNz7BliweEca0cZi39wwWB/ohkwRjYdTKcQ5G+mX/g0ICepySTUIuAz+p4kNt0s1HyvjV58sKI2Bobb65HpSI2GlneK6dSu5g7iT5mWzljBYSjP+rpUN0lQmSBictssXVqC3zo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792608; c=relaxed/simple; bh=6Saf8+6zOJvJXTerUyM6/ulxWs8fDqg+gHh8MDFo338=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=pmPrDTJ6ESfKEVr7j3VX+7CRvc7Vt7SnZ9au+Dg2sb3RbX8a17XYcFhQYynoRFP9KDap/xjDDN8p7xSIBccbvNS0PlaDFhjxG1KAwrltIiNrYv+deujXj2AhWvb9VtLOgiNz6QfGUNRy/+Cl9nISwu37nzI3y7wKnCvrkI/dklg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=fqPgtx/S; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=RabAcvFY; arc=none smtp.client-ip=193.142.43.55 
Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="fqPgtx/S"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="RabAcvFY" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1741792603; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=wlPRd71QgRWcOngjZIbTVAwCSWBwEnHmUW8a2a4k+sI=; b=fqPgtx/S/NwCd93TX+o2UBYMhVNBzEbDYwWBtcQIy5bLfmsYY75h4kjoesGo1GgFCPZSZ7 W/WFriEx7wk2rj1IIawjEbkOPa+gTohoDFntMdSVZnxJlPw2YQn5JUOl9rFLaVP+UMAEGk dMDLJNqDW5vBUHI5DIDFxoQzQBuyyr5WbstuU8sxPYvigy2QHw/1tmkiQIAxFskX2tYp8e dxuzMf2wb1M77tKYvAEL/U4TR14gmkEzFePniJ/nemYbfRnCJmR9sE7S2HUdaBF22jjTdt /QDEfYAqN44nN8q9DKARcay+mIpVTDqRlft4A89kC+jL7lWMHtcDSqWy2/vcHw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1741792603; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=wlPRd71QgRWcOngjZIbTVAwCSWBwEnHmUW8a2a4k+sI=; b=RabAcvFYiYSPFaeTOG15AoJvXw13G4q2+FFMR8r0RNnINEflyuHIIJhLH7gpmrYe+0cO/j zXr2p+fUfbVVXbCg== To: linux-kernel@vger.kernel.org Cc: =?UTF-8?q?Andr=C3=A9=20Almeida?= , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli , Peter Zijlstra , Thomas Gleixner , Valentin Schneider , Waiman Long , Sebastian Andrzej Siewior Subject: [PATCH v10 10/21] futex: Decrease the waiter count before the unlock operation. 
Date: Wed, 12 Mar 2025 16:16:23 +0100 Message-ID: <20250312151634.2183278-11-bigeasy@linutronix.de> In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de> References: <20250312151634.2183278-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" To support runtime resizing of the process private hash, it's required to not use the obtained hash bucket once the reference count has been dropped. The reference will be dropped after the unlock of the hash bucket. The number of waiters is decremented after the unlock operation. There is no requirement that this needs to happen after the unlock. The increment happens before acquiring the lock to signal early that there will be a waiter. A waker can avoid acquiring the lock if it is known that there is no waiter. There is no difference in terms of ordering if the decrement happens before or after the unlock. Decrease the waiter count before the unlock operation. 
Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Sebastian Andrzej Siewior --- kernel/futex/core.c | 2 +- kernel/futex/requeue.c | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/kernel/futex/core.c b/kernel/futex/core.c index 8561c41df7dc5..063d733181783 100644 --- a/kernel/futex/core.c +++ b/kernel/futex/core.c @@ -554,8 +554,8 @@ void futex_q_lock(struct futex_q *q, struct futex_hash_= bucket *hb) void futex_q_unlock(struct futex_hash_bucket *hb) __releases(&hb->lock) { - spin_unlock(&hb->lock); futex_hb_waiters_dec(hb); + spin_unlock(&hb->lock); } =20 void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb, diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c index 992e3ce005c6f..023c028d2fce3 100644 --- a/kernel/futex/requeue.c +++ b/kernel/futex/requeue.c @@ -456,8 +456,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flag= s1, ret =3D futex_get_value_locked(&curval, uaddr1); =20 if (unlikely(ret)) { - double_unlock_hb(hb1, hb2); futex_hb_waiters_dec(hb2); + double_unlock_hb(hb1, hb2); =20 ret =3D get_user(curval, uaddr1); if (ret) @@ -542,8 +542,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flag= s1, * waiter::requeue_state is correct. */ case -EFAULT: - double_unlock_hb(hb1, hb2); futex_hb_waiters_dec(hb2); + double_unlock_hb(hb1, hb2); ret =3D fault_in_user_writeable(uaddr2); if (!ret) goto retry; @@ -556,8 +556,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flag= s1, * exit to complete. * - EAGAIN: The user space value changed. */ - double_unlock_hb(hb1, hb2); futex_hb_waiters_dec(hb2); + double_unlock_hb(hb1, hb2); /* * Handle the case where the owner is in the middle of * exiting. 
Wait for the exit to complete otherwise @@ -674,8 +674,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flag= s1, put_pi_state(pi_state); =20 out_unlock: - double_unlock_hb(hb1, hb2); futex_hb_waiters_dec(hb2); + double_unlock_hb(hb1, hb2); } wake_up_q(&wake_q); return ret ? ret : task_count; --=20 2.47.2 From nobody Fri Dec 19 03:04:55 2025 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4A0D325484F for ; Wed, 12 Mar 2025 15:16:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792608; cv=none; b=cCu+oRtlUOyCJ9rMoOxskNf8eowrqPUc85b8NKXgSNDOHmxzikl2ZCtlFOgsc08t2397CD8j3P8wwde1TjXBVNL00+6AjVX9BsleqyMS4K501hRpv+8EifyvtQHas96GO10n2hdQjL3ijWFUIXpPsesI0nIhu0KimI2uXF2f2I4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792608; c=relaxed/simple; bh=62Ux0FrdZPTM+myksVyK/E2tMXzfJQgClEOPiAMmdjY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=NAfG82ORRt+Xj0lGXOprsYbUPmoVJPucxsb2ZocAt2DiCLUEK/Dwl3Q7cA5ilCOFKFQaeTd1hqqICqglw8kp3Um4X2RFB8XYdnN6Izb4bS7G3h2tAV7Cec1Vjtr5CZypbEniYM8Yo0cRh/YxI+BtwPKWCKS/cess6J1zmxKUWhA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=fpTqcfIZ; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=nOhaD7gO; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="fpTqcfIZ"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="nOhaD7gO" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1741792603; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=B5KJiJoZv46lB38KebKBB9Mnm6EAiiE+MpXjOgFgK9g=; b=fpTqcfIZRv0e00ABCzNlX/+AK5pzItFEt1/lJwWe5Sz00Kxmy3emE+pBPuaGjUg5W1ucbn tkTM6GZK80AKyed0o8zcsm1ORQV3ckCInwtu10AS2I2RBCRUlQaj7AAPmPmp2UzecM8IYC vGPx9djhpTsbj/VCUay0c01VdyNxCIWwYvCxnGU470cfOrMPhrIF80+K70FbAIxBA/woRw +ogSqb8ko+r22nPTAcmQ1lRlrxpD6yI9pUSr4iGiYuYAjOWmEzhK/5pxdAYtyoeOQFS3sK 2Ya9/7dhWb0pK2s66diKoEIoNA5jQJ54SekoJ/wN/pp7Og2s/MeN0MTJqJlAOw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1741792603; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=B5KJiJoZv46lB38KebKBB9Mnm6EAiiE+MpXjOgFgK9g=; b=nOhaD7gOpz1pQvrq3kL6HIspwK9qpAksCy2XdhYyipG4NeaPhepB1skUTm76gYK8F1ylel lvHR0v3ME+yJapCw== To: linux-kernel@vger.kernel.org Cc: =?UTF-8?q?Andr=C3=A9=20Almeida?= , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli , Peter Zijlstra , Thomas Gleixner , Valentin Schneider , Waiman Long , Sebastian Andrzej Siewior Subject: [PATCH v10 11/21] futex: Introduce futex_q_lockptr_lock(). 
Date: Wed, 12 Mar 2025 16:16:24 +0100 Message-ID: <20250312151634.2183278-12-bigeasy@linutronix.de> In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de> References: <20250312151634.2183278-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" futex_lock_pi() and __fixup_pi_state_owner() acquire the futex_q::lock_ptr without holding a reference, assuming the previously obtained hash bucket and the assigned lock_ptr are still valid. This is no longer the case once the private hash can be resized and becomes invalid after the reference drop. Introduce futex_q_lockptr_lock() to lock the hash bucket recorded in futex_q::lock_ptr. The lock pointer is read in an RCU section to ensure that it does not go away if the hash bucket has been replaced and the old pointer has been observed. After locking, the pointer needs to be compared to check if it changed. If so, the hash bucket has been replaced, the user has been moved to the new one, and lock_ptr has been updated. The lock operation needs to be redone in this case. The locked hash bucket is not returned. A special case is an early return in futex_lock_pi() (due to signal or timeout) and a successful futex_wait_requeue_pi(). In both cases a valid futex_q::lock_ptr is expected (and its matching hash bucket) but since the waiter has been removed from the hash this can no longer be guaranteed. Therefore, before the waiter is removed, a reference is acquired which is later dropped by the waiter to avoid a resize. Add futex_q_lockptr_lock() and use it. Acquire an additional reference in requeue_pi_wake_futex() and futex_unlock_pi() while the futex_q is removed, denote this extra reference in futex_q::drop_hb_ref and let the waiter drop the reference in this case. 
Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Sebastian Andrzej Siewior --- kernel/futex/core.c | 29 +++++++++++++++++++++++++++++ kernel/futex/futex.h | 4 +++- kernel/futex/pi.c | 15 +++++++++++++-- kernel/futex/requeue.c | 16 +++++++++++++--- 4 files changed, 58 insertions(+), 6 deletions(-) diff --git a/kernel/futex/core.c b/kernel/futex/core.c index 063d733181783..4d8912daffe83 100644 --- a/kernel/futex/core.c +++ b/kernel/futex/core.c @@ -152,6 +152,17 @@ struct futex_hash_bucket *__futex_hash(union futex_key= *key) return &futex_queues[hash & futex_hashmask]; } =20 +/** + * futex_hash_get - Get an additional reference for the local hash. + * @hb: ptr to the private local hash. + * + * Obtain an additional reference for the already obtained hash bucket. The + * caller must already own an reference. + */ +void futex_hash_get(struct futex_hash_bucket *hb) +{ +} + void futex_hash_put(struct futex_hash_bucket *hb) { } =20 /** @@ -632,6 +643,24 @@ int futex_unqueue(struct futex_q *q) return ret; } =20 +void futex_q_lockptr_lock(struct futex_q *q) +{ + spinlock_t *lock_ptr; + + /* + * See futex_unqueue() why lock_ptr can change. + */ + guard(rcu)(); +retry: + lock_ptr =3D READ_ONCE(q->lock_ptr); + spin_lock(lock_ptr); + + if (unlikely(lock_ptr !=3D q->lock_ptr)) { + spin_unlock(lock_ptr); + goto retry; + } +} + /* * PI futexes can not be requeued and must remove themselves from the hash * bucket. The hash bucket lock (i.e. lock_ptr) is held. 
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h index eac6de6ed563a..e6f8f2f9281aa 100644 --- a/kernel/futex/futex.h +++ b/kernel/futex/futex.h @@ -183,6 +183,7 @@ struct futex_q { union futex_key *requeue_pi_key; u32 bitset; atomic_t requeue_state; + bool drop_hb_ref; #ifdef CONFIG_PREEMPT_RT struct rcuwait requeue_wait; #endif @@ -197,12 +198,13 @@ enum futex_access { =20 extern int get_futex_key(u32 __user *uaddr, unsigned int flags, union fute= x_key *key, enum futex_access rw); - +extern void futex_q_lockptr_lock(struct futex_q *q); extern struct hrtimer_sleeper * futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout, int flags, u64 range_ns); =20 extern struct futex_hash_bucket *__futex_hash(union futex_key *key); +extern void futex_hash_get(struct futex_hash_bucket *hb); extern void futex_hash_put(struct futex_hash_bucket *hb); =20 DEFINE_CLASS(hb, struct futex_hash_bucket *, diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c index 4cee9ec5d97d6..51c69e8808152 100644 --- a/kernel/futex/pi.c +++ b/kernel/futex/pi.c @@ -806,7 +806,7 @@ static int __fixup_pi_state_owner(u32 __user *uaddr, st= ruct futex_q *q, break; } =20 - spin_lock(q->lock_ptr); + futex_q_lockptr_lock(q); raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); =20 /* @@ -1066,7 +1066,7 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int fla= gs, ktime_t *time, int tryl * spinlock/rtlock (which might enqueue its own rt_waiter) and fix up * the */ - spin_lock(q.lock_ptr); + futex_q_lockptr_lock(&q); /* * Waiter is unqueued. 
*/ @@ -1086,6 +1086,11 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int fl= ags, ktime_t *time, int tryl =20 futex_unqueue_pi(&q); spin_unlock(q.lock_ptr); + if (q.drop_hb_ref) { + CLASS(hb, hb)(&q.key); + /* Additional reference from futex_unlock_pi() */ + futex_hash_put(hb); + } goto out; =20 out_unlock_put_key: @@ -1194,6 +1199,12 @@ int futex_unlock_pi(u32 __user *uaddr, unsigned int = flags) */ rt_waiter =3D rt_mutex_top_waiter(&pi_state->pi_mutex); if (!rt_waiter) { + /* + * Acquire a reference for the leaving waiter to ensure + * valid futex_q::lock_ptr. + */ + futex_hash_get(hb); + top_waiter->drop_hb_ref =3D true; __futex_unqueue(top_waiter); raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); goto retry_hb; diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c index 023c028d2fce3..b0e64fd454d96 100644 --- a/kernel/futex/requeue.c +++ b/kernel/futex/requeue.c @@ -231,7 +231,12 @@ void requeue_pi_wake_futex(struct futex_q *q, union fu= tex_key *key, =20 WARN_ON(!q->rt_waiter); q->rt_waiter =3D NULL; - + /* + * Acquire a reference for the waiter to ensure valid + * futex_q::lock_ptr. 
+ */ + futex_hash_get(hb); + q->drop_hb_ref =3D true; q->lock_ptr =3D &hb->lock; =20 /* Signal locked state to the waiter */ @@ -826,7 +831,7 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned i= nt flags, case Q_REQUEUE_PI_LOCKED: /* The requeue acquired the lock */ if (q.pi_state && (q.pi_state->owner !=3D current)) { - spin_lock(q.lock_ptr); + futex_q_lockptr_lock(&q); ret =3D fixup_pi_owner(uaddr2, &q, true); /* * Drop the reference to the pi state which the @@ -853,7 +858,7 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned i= nt flags, if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter)) ret =3D 0; =20 - spin_lock(q.lock_ptr); + futex_q_lockptr_lock(&q); debug_rt_mutex_free_waiter(&rt_waiter); /* * Fixup the pi_state owner and possibly acquire the lock if we @@ -885,6 +890,11 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned = int flags, default: BUG(); } + if (q.drop_hb_ref) { + CLASS(hb, hb)(&q.key); + /* Additional reference from requeue_pi_wake_futex() */ + futex_hash_put(hb); + } =20 out: if (to) { --=20 2.47.2 From nobody Fri Dec 19 03:04:55 2025 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4D514254854 for ; Wed, 12 Mar 2025 15:16:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792608; cv=none; b=hYAorFf5FkVVbAUZaJY3wyE79hSwPp0NP4i20C9Y4CVFDW/zVgx/z+czRjb44opCqPmf6CaQh8JP4ij/s9WpKvoeP6gEOSKvFAg6v52FWg1yg7xqWTu9JinOSJg4LVxom6XcOtXXb6ijlN1V5qSpMqBe41PtFAzt7Sf9jvgL6Zg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792608; c=relaxed/simple; bh=sJE7kkM+ee6XzUfxK02b7HGBX7JN4uRjiQhi6eJ2Gdg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
b=kKZN8KDzxTXv3nxgS4UpElKVwRPS9FUKG5lSG4mZ8VyY3hgoy1F9dRC1F27NAjyH4dcZLcpNMfZusHEJ6+aPXqG4A/juEYNx0jBpOMqsiy2McQldVNTnATwIk0/GAo6sqQauygcpEGG4F38toxwgD6rbm2xnXgQR0546yK/9wQ0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=vfAZNY3I; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=bdxg+nTr; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="vfAZNY3I"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="bdxg+nTr" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1741792603; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=8viNeFodPA9cABm3ztkM3eISgNmBGw4dLmTsZtJtTBk=; b=vfAZNY3IaTfXKIq4+fEdXkA4fwM3H3nZhKSCQRBVuCvUqwKEWu1K3WstEyNhg6yY7gXZfp VQ6DiqBNXHbZy0p5T0zkYjLxH2kouP4HzAQnQ9ECLxqrJQTB+m8kInH6+bXebBTjGZ7WYw NNzB/FPxDxXCmAqFJb3oYcbs4KsC2ftlXEJ+06i7UMPZayMI5j4M3kNdptpcSVYaBdC0mH 4LzuiNWV/HxdE8hQ1S3QvhrRPgnEjpGjNL1sjBJ9FBvxsGO0XTIbb+SNYNcBcAMpDdtRP0 /nVLiBg3DMkA/vZTkAuWj8n2eGfxWJCn7HdznZFkJq7k+RPsn/DdvTQVDEgw/w== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1741792603; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: 
in-reply-to:in-reply-to:references:references; bh=8viNeFodPA9cABm3ztkM3eISgNmBGw4dLmTsZtJtTBk=; b=bdxg+nTrQ2TNGz7nr30HtqBkQbulHFevRBLzF3SywZ6EdCqItO1pWHK8nK06vIwrEW9FAl SubqvBbzR+ATHVDw== To: linux-kernel@vger.kernel.org Cc: =?UTF-8?q?Andr=C3=A9=20Almeida?= , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli , Peter Zijlstra , Thomas Gleixner , Valentin Schneider , Waiman Long , Sebastian Andrzej Siewior Subject: [PATCH v10 12/21] futex: Acquire a hash reference in futex_wait_multiple_setup(). Date: Wed, 12 Mar 2025 16:16:25 +0100 Message-ID: <20250312151634.2183278-13-bigeasy@linutronix.de> In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de> References: <20250312151634.2183278-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" futex_wait_multiple_setup() changes task_struct::__state to !TASK_RUNNING and then enqueues on multiple futexes. Every futex_q_lock() acquires a reference on the global hash which is dropped later. If a rehash is in progress then the loop will block on mm_struct::futex_hash_bucket until the rehash completes, and this will lose the previously set task_struct::__state. Acquire a reference on the local hash to avoid blocking on mm_struct::futex_hash_bucket. 
Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Sebastian Andrzej Siewior --- kernel/futex/core.c | 10 ++++++++++ kernel/futex/futex.h | 2 ++ kernel/futex/waitwake.c | 21 +++++++++++++++++++-- 3 files changed, 31 insertions(+), 2 deletions(-) diff --git a/kernel/futex/core.c b/kernel/futex/core.c index 4d8912daffe83..700a24d796acb 100644 --- a/kernel/futex/core.c +++ b/kernel/futex/core.c @@ -129,6 +129,11 @@ static struct futex_hash_bucket *futex_hash_private(un= ion futex_key *key, return &fhb[hash & hash_mask]; } =20 +struct futex_private_hash *futex_get_private_hash(void) +{ + return NULL; +} + /** * futex_hash - Return the hash bucket in the global hash * @key: Pointer to the futex key for which the hash is calculated @@ -152,6 +157,11 @@ struct futex_hash_bucket *__futex_hash(union futex_key= *key) return &futex_queues[hash & futex_hashmask]; } =20 +bool futex_put_private_hash(struct futex_private_hash *hb_p) +{ + return false; +} + /** * futex_hash_get - Get an additional reference for the local hash. * @hb: ptr to the private local hash. 
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h index e6f8f2f9281aa..0a76ee6e7dc10 100644 --- a/kernel/futex/futex.h +++ b/kernel/futex/futex.h @@ -206,6 +206,8 @@ futex_setup_timer(ktime_t *time, struct hrtimer_sleeper= *timeout, extern struct futex_hash_bucket *__futex_hash(union futex_key *key); extern void futex_hash_get(struct futex_hash_bucket *hb); extern void futex_hash_put(struct futex_hash_bucket *hb); +extern struct futex_private_hash *futex_get_private_hash(void); +extern bool futex_put_private_hash(struct futex_private_hash *hb_p); =20 DEFINE_CLASS(hb, struct futex_hash_bucket *, if (_T) futex_hash_put(_T), diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c index 44034dee7a48c..67eebb5b4b212 100644 --- a/kernel/futex/waitwake.c +++ b/kernel/futex/waitwake.c @@ -385,7 +385,7 @@ int futex_unqueue_multiple(struct futex_vector *v, int = count) } =20 /** - * futex_wait_multiple_setup - Prepare to wait and enqueue multiple futexes + * __futex_wait_multiple_setup - Prepare to wait and enqueue multiple fute= xes * @vs: The futex list to wait on * @count: The size of the list * @woken: Index of the last woken futex, if any. Used to notify the @@ -400,7 +400,7 @@ int futex_unqueue_multiple(struct futex_vector *v, int = count) * - 0 - Success * - <0 - -EFAULT, -EWOULDBLOCK or -EINVAL */ -int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *wok= en) +static int __futex_wait_multiple_setup(struct futex_vector *vs, int count,= int *woken) { bool retry =3D false; int ret, i; @@ -491,6 +491,23 @@ int futex_wait_multiple_setup(struct futex_vector *vs,= int count, int *woken) return 0; } =20 +int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *wok= en) +{ + struct futex_private_hash *hb_p; + int ret; + + /* + * Assume to have a private futex and acquire a reference on the private + * hash to avoid blocking on mm_struct::futex_hash_bucket during rehash + * after changing the task state. 
+ */ + hb_p =3D futex_get_private_hash(); + ret =3D __futex_wait_multiple_setup(vs, count, woken); + if (hb_p) + futex_put_private_hash(hb_p); + return ret; +} + /** * futex_sleep_multiple - Check sleeping conditions and sleep * @vs: List of futexes to wait for --=20 2.47.2 From nobody Fri Dec 19 03:04:55 2025 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E2B0A256C76 for ; Wed, 12 Mar 2025 15:16:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792610; cv=none; b=DvbsXZyjF/4SivPd95nHbs6z/IDZm8TQ7bMW/9vmbPZJ9PX94+9JQTG7Y27ldS/8rJgs8BdzNs+RTS3spHHn+vJRmAH52MCBcHWalQ231bdxsBKSZl9r6uqypn8lRMPmQOYOsk6EyelVO91wyi99gcsa4B7pJrUqJXPvG9ZjUQs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792610; c=relaxed/simple; bh=bUcP7FTDqQ6lkB59tEAHNpq9YvwxPStWGa5i72VjIVU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=T4ocXng2zTPqpHY9U7M9b56FvBpJ5BCXy/lv+F9UkNKdSXS8rESnbYGsFKUHWBBFWvo9TFabhlYgeDxmbSomdR73iecbD5MPtYvkpY0yH+poKjc0kYR5l5uLc7JiHoeSQrFPuLYjd8vljLOozY5Vo+kAHaHI2gDEOGU+DaCVgxA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=PMiQkN0y; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=iS8Tk5Oz; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass 
(2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="PMiQkN0y"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="iS8Tk5Oz" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1741792604; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=E6AzMWNT021r3h0LcM/MAbkmjoTc0mBl6hIpCxnUv30=; b=PMiQkN0yzPmtD9vzvn/q8sOXfXDR8cShiXkjcFb8/PGzDh5+BsxP5251KxpD3JstwmTM8C XdcXwagI5zlOt/9spDg59f85Hn8ffmOxddDwZwMLIsLG7+oM7IzDPc9HgkEMfG0XUVp/Tx TIh9NryPXIE8iirpGxmQ8AzqLtXYbBKtQ1QDh8kYBLY9XOOMdtTkCN/BCJHl+WxYQQ2gsZ FdR3StCexlhw8aGIp1Wc8PCt9tA7Mhc53hvnXFkqgYcM0GJeMMNFY5tbz+mKO7fU485AzD H9lgj6sYQJT6/NTE3G8v3dACRGR4Q6Y6uBBQd7wq6GsEo5izBLw/ti0HtrXjqA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1741792604; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=E6AzMWNT021r3h0LcM/MAbkmjoTc0mBl6hIpCxnUv30=; b=iS8Tk5OzRjt3PU5WO3Tr1ZCusF1RXyYPvsLQDGDjdYiwfUxR4yOwP28I5TyI+1XxohRPmC QP4/Bii7BjwkbVBA== To: linux-kernel@vger.kernel.org Cc: =?UTF-8?q?Andr=C3=A9=20Almeida?= , Darren Hart , Davidlohr Bueso , Ingo Molnar , Juri Lelli , Peter Zijlstra , Thomas Gleixner , Valentin Schneider , Waiman Long , Sebastian Andrzej Siewior Subject: [PATCH v10 13/21] futex: Allow to re-allocate the private local hash. 
Date: Wed, 12 Mar 2025 16:16:26 +0100 Message-ID: <20250312151634.2183278-14-bigeasy@linutronix.de> In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de> References: <20250312151634.2183278-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The mm_struct::futex_hash_lock guards the futex_hash_bucket assignment/ replacement. The futex_hash_allocate()/ PR_FUTEX_HASH_SET_SLOTS operation can now be invoked at runtime and resize an already existing internal private futex_hash_bucket to another size. The reallocation is based on an idea by Thomas Gleixner: The initial allocation of struct futex_private_hash sets the reference count to one. Every user acquires a reference on the local hash before using it and drops it after it enqueued itself on the hash bucket. There is no reference held while the task is scheduled out while waiting for the wake up. The resize allocates a new struct futex_private_hash and drops the initial reference under the mm_struct::futex_hash_lock. If the reference drop results in destruction of the object then users currently queued on the local hash will be requeued on the new local hash. At the end mm_struct::futex_phash is updated, the old pointer is RCU freed and the mutex is dropped. If the reference drop does not result in destruction of the object then the new pointer is saved as mm_struct::futex_phash_new. In this case replacement is delayed. The user dropping the last reference is not always the best choice to perform the replacement. For instance futex_wait_queue() drops the reference after changing its task state which will also be modified while the futex_hash_lock is acquired. Therefore the replacement is delayed to the task acquiring a reference on the current local hash. 
This scheme keeps the requirement that all waiters/wakers of the same
address always block on the same futex_hash_bucket::lock.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/futex.h    |   5 +-
 include/linux/mm_types.h |   7 +-
 kernel/futex/core.c      | 248 +++++++++++++++++++++++++++++++++++----
 kernel/futex/futex.h     |   1 +
 kernel/futex/requeue.c   |   5 +
 5 files changed, 237 insertions(+), 29 deletions(-)

diff --git a/include/linux/futex.h b/include/linux/futex.h
index bad377c30de5e..bfb38764bac7a 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -83,12 +83,13 @@ void futex_hash_free(struct mm_struct *mm);
 
 static inline void futex_mm_init(struct mm_struct *mm)
 {
-	mm->futex_hash_bucket = NULL;
+	rcu_assign_pointer(mm->futex_phash, NULL);
+	mutex_init(&mm->futex_hash_lock);
 }
 
 static inline bool futex_hash_requires_allocation(void)
 {
-	if (current->mm->futex_hash_bucket)
+	if (current->mm->futex_phash)
 		return false;
 	return true;
 }
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 769cd77364e2d..46abaf1ce1c0a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -30,7 +30,7 @@
 #define INIT_PASID	0
 
 struct address_space;
-struct futex_hash_bucket;
+struct futex_private_hash;
 struct mem_cgroup;
 
 /*
@@ -939,8 +939,9 @@ struct mm_struct {
 		seqcount_t mm_lock_seq;
 #endif
 #ifdef CONFIG_FUTEX
-		unsigned int futex_hash_mask;
-		struct futex_hash_bucket *futex_hash_bucket;
+		struct mutex futex_hash_lock;
+		struct futex_private_hash __rcu *futex_phash;
+		struct futex_private_hash *futex_phash_new;
 #endif
 
 		unsigned long hiwater_rss; /* High-watermark of RSS usage */
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 700a24d796acb..c5a9db946b421 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 
 #include "futex.h"
 #include "../locking/rtmutex_common.h"
@@ -56,6 +57,14 @@ static struct {
 #define futex_queues	(__futex_data.queues)
 #define futex_hashmask	(__futex_data.hashmask)
 
+struct futex_private_hash {
+	rcuref_t	users;
+	unsigned int	hash_mask;
+	struct rcu_head	rcu;
+	bool		initial_ref_dropped;
+	bool		released;
+	struct futex_hash_bucket queues[];
+};
 
 /*
  * Fault injections for futexes.
@@ -129,9 +138,122 @@ static struct futex_hash_bucket *futex_hash_private(union futex_key *key,
 	return &fhb[hash & hash_mask];
 }
 
+static void futex_rehash_current_users(struct futex_private_hash *old,
+				       struct futex_private_hash *new)
+{
+	struct futex_hash_bucket *hb_old, *hb_new;
+	unsigned int slots = old->hash_mask + 1;
+	u32 hash_mask = new->hash_mask;
+	unsigned int i;
+
+	for (i = 0; i < slots; i++) {
+		struct futex_q *this, *tmp;
+
+		hb_old = &old->queues[i];
+
+		spin_lock(&hb_old->lock);
+		plist_for_each_entry_safe(this, tmp, &hb_old->chain, list) {
+
+			plist_del(&this->list, &hb_old->chain);
+			futex_hb_waiters_dec(hb_old);
+
+			WARN_ON_ONCE(this->lock_ptr != &hb_old->lock);
+
+			hb_new = futex_hash_private(&this->key, new->queues, hash_mask);
+			futex_hb_waiters_inc(hb_new);
+			/*
+			 * The new pointer isn't published yet but an already
+			 * moved user can be unqueued due to timeout or signal.
+			 */
+			spin_lock_nested(&hb_new->lock, SINGLE_DEPTH_NESTING);
+			plist_add(&this->list, &hb_new->chain);
+			this->lock_ptr = &hb_new->lock;
+			spin_unlock(&hb_new->lock);
+		}
+		spin_unlock(&hb_old->lock);
+	}
+}
+
+static void futex_assign_new_hash(struct futex_private_hash *hb_p_new,
+				  struct mm_struct *mm)
+{
+	bool drop_init_ref = hb_p_new != NULL;
+	struct futex_private_hash *hb_p;
+
+	if (!hb_p_new) {
+		hb_p_new = mm->futex_phash_new;
+		mm->futex_phash_new = NULL;
+	}
+	/* Someone was quicker, the current mask is valid */
+	if (!hb_p_new)
+		return;
+
+	hb_p = rcu_dereference_check(mm->futex_phash,
+				     lockdep_is_held(&mm->futex_hash_lock));
+	if (hb_p) {
+		if (hb_p->hash_mask >= hb_p_new->hash_mask) {
+			/* It was increased again while we were waiting */
+			kvfree(hb_p_new);
+			return;
+		}
+		/*
+		 * If the caller started the resize then the initial reference
+		 * needs to be dropped. If the object can not be deconstructed
+		 * we save hb_p_new for later and ensure the reference counter
+		 * is not dropped again.
+		 */
+		if (drop_init_ref &&
+		    (hb_p->initial_ref_dropped || !futex_put_private_hash(hb_p))) {
+			mm->futex_phash_new = hb_p_new;
+			hb_p->initial_ref_dropped = true;
+			return;
+		}
+		if (!READ_ONCE(hb_p->released)) {
+			mm->futex_phash_new = hb_p_new;
+			return;
+		}
+
+		futex_rehash_current_users(hb_p, hb_p_new);
+	}
+	rcu_assign_pointer(mm->futex_phash, hb_p_new);
+	kvfree_rcu(hb_p, rcu);
+}
+
 struct futex_private_hash *futex_get_private_hash(void)
 {
-	return NULL;
+	struct mm_struct *mm = current->mm;
+	/*
+	 * Ideally we don't loop. If there is a replacement in progress
+	 * then a new private hash is already prepared and a reference can't be
+	 * obtained once the last user dropped its reference.
+	 * In that case we block on mm_struct::futex_hash_lock and either have
+	 * to perform the replacement or wait while someone else is doing the
+	 * job. Either way, on the second iteration we acquire a reference on
+	 * the new private hash or loop again because a new replacement has
+	 * been requested.
+	 */
+again:
+	scoped_guard(rcu) {
+		struct futex_private_hash *hb_p;
+
+		hb_p = rcu_dereference(mm->futex_phash);
+		if (!hb_p)
+			return NULL;
+
+		if (rcuref_get(&hb_p->users))
+			return hb_p;
+	}
+	scoped_guard(mutex, &current->mm->futex_hash_lock)
+		futex_assign_new_hash(NULL, mm);
+	goto again;
+}
+
+static struct futex_private_hash *futex_get_private_hb(union futex_key *key)
+{
+	if (!futex_key_is_private(key))
+		return NULL;
+
+	return futex_get_private_hash();
 }
 
 /**
@@ -144,12 +266,12 @@ struct futex_private_hash *futex_get_private_hash(void)
  */
 struct futex_hash_bucket *__futex_hash(union futex_key *key)
 {
-	struct futex_hash_bucket *fhb;
+	struct futex_private_hash *hb_p;
 	u32 hash;
 
-	fhb = current->mm->futex_hash_bucket;
-	if (fhb && futex_key_is_private(key))
-		return futex_hash_private(key, fhb, current->mm->futex_hash_mask);
+	hb_p = futex_get_private_hb(key);
+	if (hb_p)
+		return futex_hash_private(key, hb_p->queues, hb_p->hash_mask);
 
 	hash = jhash2((u32 *)key,
 		      offsetof(typeof(*key), both.offset) / 4,
@@ -159,7 +281,13 @@ struct futex_hash_bucket *__futex_hash(union futex_key *key)
 
 bool futex_put_private_hash(struct futex_private_hash *hb_p)
 {
-	return false;
+	bool released;
+
+	guard(preempt)();
+	released = rcuref_put_rcusafe(&hb_p->users);
+	if (released)
+		WRITE_ONCE(hb_p->released, true);
+	return released;
 }
 
 /**
@@ -171,9 +299,22 @@ bool futex_put_private_hash(struct futex_private_hash *hb_p)
  */
 void futex_hash_get(struct futex_hash_bucket *hb)
 {
+	struct futex_private_hash *hb_p = hb->hb_p;
+
+	if (!hb_p)
+		return;
+
+	WARN_ON_ONCE(!rcuref_get(&hb_p->users));
 }
 
-void futex_hash_put(struct futex_hash_bucket *hb) { }
+void futex_hash_put(struct futex_hash_bucket *hb)
+{
+	struct futex_private_hash *hb_p = hb->hb_p;
+
+	if (!hb_p)
+		return;
+	futex_put_private_hash(hb_p);
+}
 
 /**
  * futex_setup_timer - set up the sleeping hrtimer.
@@ -615,6 +756,8 @@ int futex_unqueue(struct futex_q *q)
 	spinlock_t *lock_ptr;
 	int ret = 0;
 
+	/* RCU so lock_ptr is not going away during locking. */
+	guard(rcu)();
 	/* In the common case we don't take the spinlock, which is nice. */
 retry:
 	/*
@@ -1013,9 +1156,21 @@ static void compat_exit_robust_list(struct task_struct *curr)
 static void exit_pi_state_list(struct task_struct *curr)
 {
 	struct list_head *next, *head = &curr->pi_state_list;
+	struct futex_private_hash *hb_p;
 	struct futex_pi_state *pi_state;
 	union futex_key key = FUTEX_KEY_INIT;
 
+	/*
+	 * The mutex mm_struct::futex_hash_lock might be acquired.
+	 */
+	might_sleep();
+	/*
+	 * Ensure the hash remains stable (no resize) during the while loop
+	 * below. The hb pointer is acquired under the pi_lock so we can't block
+	 * on the mutex.
+	 */
+	WARN_ON(curr != current);
+	hb_p = futex_get_private_hash();
 	/*
 	 * We are a ZOMBIE and nobody can enqueue itself on
 	 * pi_state_list anymore, but we have to be careful
@@ -1078,6 +1233,8 @@ static void exit_pi_state_list(struct task_struct *curr)
 		raw_spin_lock_irq(&curr->pi_lock);
 	}
 	raw_spin_unlock_irq(&curr->pi_lock);
+	if (hb_p)
+		futex_put_private_hash(hb_p);
 }
 #else
 static inline void exit_pi_state_list(struct task_struct *curr) { }
@@ -1191,8 +1348,10 @@ void futex_exit_release(struct task_struct *tsk)
 	futex_cleanup_end(tsk, FUTEX_STATE_DEAD);
 }
 
-static void futex_hash_bucket_init(struct futex_hash_bucket *fhb)
+static void futex_hash_bucket_init(struct futex_hash_bucket *fhb,
+				   struct futex_private_hash *hb_p)
 {
+	fhb->hb_p = hb_p;
 	atomic_set(&fhb->waiters, 0);
 	plist_head_init(&fhb->chain);
 	spin_lock_init(&fhb->lock);
@@ -1200,20 +1359,34 @@ static void futex_hash_bucket_init(struct futex_hash_bucket *fhb)
 
 void futex_hash_free(struct mm_struct *mm)
 {
-	kvfree(mm->futex_hash_bucket);
+	struct futex_private_hash *hb_p;
+
+	kvfree(mm->futex_phash_new);
+	/*
+	 * The mm_struct belonging to the task is about to be removed so all
+	 * threads that ever accessed the private hash are gone and the
+	 * pointer can be accessed directly (omitting a RCU-read section or
+	 * lock).
+	 * Since there cannot be a thread holding a reference to the private
+	 * hash we free it immediately.
+	 */
+	hb_p = rcu_dereference_raw(mm->futex_phash);
+	if (!hb_p)
+		return;
+
+	if (!hb_p->initial_ref_dropped && WARN_ON(!futex_put_private_hash(hb_p)))
+		return;
+
+	kvfree(hb_p);
 }
 
 static int futex_hash_allocate(unsigned int hash_slots)
 {
-	struct futex_hash_bucket *fhb;
+	struct futex_private_hash *hb_p, *hb_tofree = NULL;
+	struct mm_struct *mm = current->mm;
+	size_t alloc_size;
 	int i;
 
-	if (current->mm->futex_hash_bucket)
-		return -EALREADY;
-
-	if (!thread_group_leader(current))
-		return -EINVAL;
-
 	if (hash_slots == 0)
 		hash_slots = 16;
 	if (hash_slots < 2)
@@ -1223,16 +1396,39 @@ static int futex_hash_allocate(unsigned int hash_slots)
 	if (!is_power_of_2(hash_slots))
 		hash_slots = rounddown_pow_of_two(hash_slots);
 
-	fhb = kvmalloc_array(hash_slots, sizeof(struct futex_hash_bucket), GFP_KERNEL_ACCOUNT);
-	if (!fhb)
+	if (unlikely(check_mul_overflow(hash_slots, sizeof(struct futex_hash_bucket),
+					&alloc_size)))
 		return -ENOMEM;
 
-	current->mm->futex_hash_mask = hash_slots - 1;
+	if (unlikely(check_add_overflow(alloc_size, sizeof(struct futex_private_hash),
+					&alloc_size)))
+		return -ENOMEM;
+
+	hb_p = kvmalloc(alloc_size, GFP_KERNEL_ACCOUNT);
+	if (!hb_p)
+		return -ENOMEM;
+
+	rcuref_init(&hb_p->users, 1);
+	hb_p->initial_ref_dropped = false;
+	hb_p->released = false;
+	hb_p->hash_mask = hash_slots - 1;
 
 	for (i = 0; i < hash_slots; i++)
-		futex_hash_bucket_init(&fhb[i]);
+		futex_hash_bucket_init(&hb_p->queues[i], hb_p);
 
-	current->mm->futex_hash_bucket = fhb;
+	scoped_guard(mutex, &mm->futex_hash_lock) {
+		if (mm->futex_phash_new) {
+			if (mm->futex_phash_new->hash_mask <= hb_p->hash_mask) {
+				hb_tofree = mm->futex_phash_new;
+			} else {
+				hb_tofree = hb_p;
+				hb_p = mm->futex_phash_new;
+			}
+			mm->futex_phash_new = NULL;
+		}
+		futex_assign_new_hash(hb_p, mm);
+	}
+	kvfree(hb_tofree);
 	return 0;
 }
 
@@ -1243,8 +1439,12 @@ int futex_hash_allocate_default(void)
 
 static int futex_hash_get_slots(void)
 {
-	if (current->mm->futex_hash_bucket)
-		return current->mm->futex_hash_mask + 1;
+	struct futex_private_hash *hb_p;
+
+	guard(rcu)();
+	hb_p = rcu_dereference(current->mm->futex_phash);
+	if (hb_p)
+		return hb_p->hash_mask + 1;
 	return 0;
 }
 
@@ -1286,7 +1486,7 @@ static int __init futex_init(void)
 	hashsize = 1UL << futex_shift;
 
 	for (i = 0; i < hashsize; i++)
-		futex_hash_bucket_init(&futex_queues[i]);
+		futex_hash_bucket_init(&futex_queues[i], NULL);
 
 	futex_hashmask = hashsize - 1;
 	return 0;
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 0a76ee6e7dc10..973efcca2e01b 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -118,6 +118,7 @@ struct futex_hash_bucket {
 	atomic_t waiters;
 	spinlock_t lock;
 	struct plist_head chain;
+	struct futex_private_hash *hb_p;
 } ____cacheline_aligned_in_smp;
 
 /*
diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index b0e64fd454d96..c716a66f86929 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -87,6 +87,11 @@ void requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1,
 		futex_hb_waiters_inc(hb2);
 		plist_add(&q->list, &hb2->chain);
 		q->lock_ptr = &hb2->lock;
+		/*
+		 * hb1 and hb2 belong to the same futex_hash_bucket_private
+		 * because if we managed to get a reference on hb1 then it
+		 * can't be replaced. Therefore we avoid put(hb1)+get(hb2) here.
+		 */
 	}
 	q->key = *key2;
 }
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long, Sebastian Andrzej Siewior
Subject: [PATCH v10 14/21] futex: Resize local futex hash table based on number of threads.
Date: Wed, 12 Mar 2025 16:16:27 +0100
Message-ID: <20250312151634.2183278-15-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

Automatically size the local hash based on the number of threads, but do
not exceed the number of online CPUs.
The logic tries to allocate between 16 and futex_hashsize (the default for
the system-wide hash) buckets and uses 4 * number-of-threads.

On CONFIG_BASE_SMALL configs, the additional members for the private hash
resize have been removed in order to save memory in mm_struct and to avoid
any additional memory consumption. To achieve this, CONFIG_FUTEX_PRIVATE_HASH
has been introduced, which depends on !BASE_SMALL and can be extended later.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/futex.h    | 20 ++++++--------
 include/linux/mm_types.h |  2 +-
 init/Kconfig             |  5 ++++
 kernel/fork.c            |  4 +--
 kernel/futex/core.c      | 57 ++++++++++++++++++++++++++++++++++++----
 kernel/futex/futex.h     |  8 ++++++
 6 files changed, 75 insertions(+), 21 deletions(-)

diff --git a/include/linux/futex.h b/include/linux/futex.h
index bfb38764bac7a..7e14d2e9162d2 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -78,6 +78,8 @@ void futex_exec_release(struct task_struct *tsk);
 long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
 	      u32 __user *uaddr2, u32 val2, u32 val3);
 int futex_hash_prctl(unsigned long arg2, unsigned long arg3);
+
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
 int futex_hash_allocate_default(void);
 void futex_hash_free(struct mm_struct *mm);
 
@@ -87,14 +89,13 @@ static inline void futex_mm_init(struct mm_struct *mm)
 	mutex_init(&mm->futex_hash_lock);
 }
 
-static inline bool futex_hash_requires_allocation(void)
-{
-	if (current->mm->futex_phash)
-		return false;
-	return true;
-}
+#else /* !CONFIG_FUTEX_PRIVATE_HASH */
+static inline int futex_hash_allocate_default(void) { return 0; }
+static inline void futex_hash_free(struct mm_struct *mm) { }
+static inline void futex_mm_init(struct mm_struct *mm) { }
+#endif /* CONFIG_FUTEX_PRIVATE_HASH */
 
-#else
+#else /* !CONFIG_FUTEX */
 static inline void futex_init_task(struct task_struct *tsk) { }
 static inline void futex_exit_recursive(struct task_struct *tsk) { }
 static inline void futex_exit_release(struct task_struct *tsk) { }
@@ -116,11 +117,6 @@ static inline int futex_hash_allocate_default(void)
 static inline void futex_hash_free(struct mm_struct *mm) { }
 static inline void futex_mm_init(struct mm_struct *mm) { }
 
-static inline bool futex_hash_requires_allocation(void)
-{
-	return false;
-}
-
 #endif
 
 #endif
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 46abaf1ce1c0a..e0e8adbe66bdd 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -938,7 +938,7 @@ struct mm_struct {
 		 */
 		seqcount_t mm_lock_seq;
 #endif
-#ifdef CONFIG_FUTEX
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
 		struct mutex futex_hash_lock;
 		struct futex_private_hash __rcu *futex_phash;
 		struct futex_private_hash *futex_phash_new;
diff --git a/init/Kconfig b/init/Kconfig
index a0ea04c177842..bb209c12a2bda 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1683,6 +1683,11 @@ config FUTEX_PI
 	depends on FUTEX && RT_MUTEXES
 	default y
 
+config FUTEX_PRIVATE_HASH
+	bool
+	depends on FUTEX && !BASE_SMALL
+	default y
+
 config EPOLL
 	bool "Enable eventpoll support" if EXPERT
 	default y
diff --git a/kernel/fork.c b/kernel/fork.c
index 440c5808f70a2..69f98d7e85054 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2141,9 +2141,7 @@ static bool need_futex_hash_allocate_default(u64 clone_flags)
 {
 	if ((clone_flags & (CLONE_THREAD | CLONE_VM)) != (CLONE_THREAD | CLONE_VM))
 		return false;
-	if (!thread_group_empty(current))
-		return false;
-	return futex_hash_requires_allocation();
+	return true;
 }
 
 /*
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index c5a9db946b421..229009279ee7d 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -138,6 +138,7 @@ static struct futex_hash_bucket *futex_hash_private(union futex_key *key,
 	return &fhb[hash & hash_mask];
 }
 
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
 static void futex_rehash_current_users(struct futex_private_hash *old,
 				       struct futex_private_hash *new)
 {
@@ -256,6 +257,14 @@ static struct futex_private_hash *futex_get_private_hb(union futex_key *key)
 	return futex_get_private_hash();
 }
 
+#else
+
+static struct futex_private_hash *futex_get_private_hb(union futex_key *key)
+{
+	return NULL;
+}
+#endif
+
 /**
  * futex_hash - Return the hash bucket in the global hash
  * @key: Pointer to the futex key for which the hash is calculated
@@ -279,6 +288,7 @@ struct futex_hash_bucket *__futex_hash(union futex_key *key)
 	return &futex_queues[hash & futex_hashmask];
 }
 
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
 bool futex_put_private_hash(struct futex_private_hash *hb_p)
 {
 	bool released;
@@ -315,6 +325,7 @@ void futex_hash_put(struct futex_hash_bucket *hb)
 		return;
 	futex_put_private_hash(hb_p);
 }
+#endif
 
 /**
  * futex_setup_timer - set up the sleeping hrtimer.
@@ -1351,12 +1362,15 @@ void futex_exit_release(struct task_struct *tsk)
 static void futex_hash_bucket_init(struct futex_hash_bucket *fhb,
 				   struct futex_private_hash *hb_p)
 {
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
 	fhb->hb_p = hb_p;
+#endif
 	atomic_set(&fhb->waiters, 0);
 	plist_head_init(&fhb->chain);
 	spin_lock_init(&fhb->lock);
 }
 
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
 void futex_hash_free(struct mm_struct *mm)
 {
 	struct futex_private_hash *hb_p;
@@ -1389,10 +1403,7 @@ static int futex_hash_allocate(unsigned int hash_slots)
 
 	if (hash_slots == 0)
 		hash_slots = 16;
-	if (hash_slots < 2)
-		hash_slots = 2;
-	if (hash_slots > 131072)
-		hash_slots = 131072;
+	hash_slots = clamp(hash_slots, 2, futex_hashmask + 1);
 	if (!is_power_of_2(hash_slots))
 		hash_slots = rounddown_pow_of_two(hash_slots);
 
@@ -1434,7 +1445,30 @@ static int futex_hash_allocate(unsigned int hash_slots)
 
 int futex_hash_allocate_default(void)
 {
-	return futex_hash_allocate(0);
+	unsigned int threads, buckets, current_buckets = 0;
+	struct futex_private_hash *hb_p;
+
+	if (!current->mm)
+		return 0;
+
+	scoped_guard(rcu) {
+		threads = min_t(unsigned int, get_nr_threads(current),
+				num_online_cpus());
+		hb_p = rcu_dereference(current->mm->futex_phash);
+		if (hb_p)
+			current_buckets = hb_p->hash_mask + 1;
+	}
+
+	/*
+	 * The default allocation will remain within
+	 *   16 <= threads * 4 <= global hash size
+	 */
+	buckets = roundup_pow_of_two(4 * threads);
+	buckets = clamp(buckets, 16, futex_hashmask + 1);
+
+	if (current_buckets >= buckets)
+		return 0;
+
+	return futex_hash_allocate(buckets);
 }
 
 static int futex_hash_get_slots(void)
@@ -1448,6 +1482,19 @@ static int futex_hash_get_slots(void)
 	return 0;
 }
 
+#else
+
+static int futex_hash_allocate(unsigned int hash_slots)
+{
+	return -EINVAL;
+}
+
+static int futex_hash_get_slots(void)
+{
+	return 0;
+}
+#endif
+
 int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
 {
 	int ret;
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 973efcca2e01b..782021feffe2e 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -205,11 +205,19 @@ futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
 		  int flags, u64 range_ns);
 
 extern struct futex_hash_bucket *__futex_hash(union futex_key *key);
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
 extern void futex_hash_get(struct futex_hash_bucket *hb);
 extern void futex_hash_put(struct futex_hash_bucket *hb);
 extern struct futex_private_hash *futex_get_private_hash(void);
 extern bool futex_put_private_hash(struct futex_private_hash *hb_p);
 
+#else /* !CONFIG_FUTEX_PRIVATE_HASH */
+static inline void futex_hash_get(struct futex_hash_bucket *hb) { }
+static inline void futex_hash_put(struct futex_hash_bucket *hb) { }
+static inline struct futex_private_hash *futex_get_private_hash(void) { return NULL; }
+static inline bool futex_put_private_hash(struct futex_private_hash *hb_p) { return false; }
+#endif
+
 DEFINE_CLASS(hb, struct futex_hash_bucket *,
 	     if (_T) futex_hash_put(_T),
 	     __futex_hash(key), union futex_key *key);
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long, Sebastian Andrzej Siewior
Subject: [PATCH v10 15/21] futex: s/hb_p/fph/
Date: Wed, 12 Mar 2025 16:16:28 +0100
Message-ID: <20250312151634.2183278-16-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

From: Peter Zijlstra

To me hb_p reads like hash-bucket-private, but these things are pointers
to the private hash table, not to a bucket.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/futex/core.c     | 136 ++++++++++++++++++++--------------------
 kernel/futex/futex.h    |   6 +-
 kernel/futex/waitwake.c |   8 +--
 3 files changed, 75 insertions(+), 75 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 229009279ee7d..9b87c4f128f14 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -175,49 +175,49 @@ static void futex_rehash_current_users(struct futex_private_hash *old,
 	}
 }
 
-static void futex_assign_new_hash(struct futex_private_hash *hb_p_new,
+static void futex_assign_new_hash(struct futex_private_hash *new,
 				  struct mm_struct *mm)
 {
-	bool drop_init_ref = hb_p_new != NULL;
-	struct futex_private_hash *hb_p;
+	bool drop_init_ref = new != NULL;
+	struct futex_private_hash *fph;
 
-	if (!hb_p_new) {
-		hb_p_new = mm->futex_phash_new;
+	if (!new) {
+		new = mm->futex_phash_new;
 		mm->futex_phash_new = NULL;
 	}
 	/* Someone was quicker, the current mask is valid */
-	if (!hb_p_new)
+	if (!new)
 		return;
 
-	hb_p = rcu_dereference_check(mm->futex_phash,
+	fph = rcu_dereference_check(mm->futex_phash,
 				    lockdep_is_held(&mm->futex_hash_lock));
-	if (hb_p) {
-		if (hb_p->hash_mask >= hb_p_new->hash_mask) {
+	if (fph) {
+		if (fph->hash_mask >= new->hash_mask) {
 			/* It was increased again while we were waiting */
-			kvfree(hb_p_new);
+			kvfree(new);
 			return;
 		}
 		/*
 		 * If the caller started the resize then the initial reference
 		 * needs to be dropped. If the object can not be deconstructed
-		 * we save hb_p_new for later and ensure the reference counter
+		 * we save new for later and ensure the reference counter
 		 * is not dropped again.
 		 */
 		if (drop_init_ref &&
-		    (hb_p->initial_ref_dropped || !futex_put_private_hash(hb_p))) {
-			mm->futex_phash_new = hb_p_new;
-			hb_p->initial_ref_dropped = true;
+		    (fph->initial_ref_dropped || !futex_put_private_hash(fph))) {
+			mm->futex_phash_new = new;
+			fph->initial_ref_dropped = true;
 			return;
 		}
-		if (!READ_ONCE(hb_p->released)) {
-			mm->futex_phash_new = hb_p_new;
+		if (!READ_ONCE(fph->released)) {
+			mm->futex_phash_new = new;
 			return;
 		}
 
-		futex_rehash_current_users(hb_p, hb_p_new);
+		futex_rehash_current_users(fph, new);
 	}
-	rcu_assign_pointer(mm->futex_phash, hb_p_new);
-	kvfree_rcu(hb_p, rcu);
+	rcu_assign_pointer(mm->futex_phash, new);
+	kvfree_rcu(fph, rcu);
 }
 
 struct futex_private_hash *futex_get_private_hash(void)
@@ -235,14 +235,14 @@ struct futex_private_hash *futex_get_private_hash(void)
 	 */
 again:
 	scoped_guard(rcu) {
-		struct futex_private_hash *hb_p;
+		struct futex_private_hash *fph;
 
-		hb_p = rcu_dereference(mm->futex_phash);
-		if (!hb_p)
+		fph = rcu_dereference(mm->futex_phash);
+		if (!fph)
 			return NULL;
 
-		if (rcuref_get(&hb_p->users))
-			return hb_p;
+		if (rcuref_get(&fph->users))
+			return fph;
 	}
 	scoped_guard(mutex, &current->mm->futex_hash_lock)
 		futex_assign_new_hash(NULL, mm);
@@ -275,12 +275,12 @@ static struct futex_private_hash *futex_get_private_hb(union futex_key *key)
  */
 struct futex_hash_bucket *__futex_hash(union futex_key *key)
 {
-	struct futex_private_hash *hb_p;
+	struct futex_private_hash *fph;
 	u32 hash;
 
-	hb_p = futex_get_private_hb(key);
-	if (hb_p)
-		return futex_hash_private(key, hb_p->queues, hb_p->hash_mask);
+	fph = futex_get_private_hb(key);
+	if (fph)
+		return futex_hash_private(key, fph->queues, fph->hash_mask);
 
 	hash = jhash2((u32 *)key,
 		      offsetof(typeof(*key), both.offset) / 4,
@@ -289,14 +289,14 @@ struct futex_hash_bucket *__futex_hash(union futex_key *key)
 }
 
 #ifdef CONFIG_FUTEX_PRIVATE_HASH
-bool futex_put_private_hash(struct futex_private_hash *hb_p)
+bool futex_put_private_hash(struct futex_private_hash *fph)
 {
 	bool released;
 
 	guard(preempt)();
-	released = rcuref_put_rcusafe(&hb_p->users);
+	released = rcuref_put_rcusafe(&fph->users);
 	if (released)
-		WRITE_ONCE(hb_p->released, true);
+		WRITE_ONCE(fph->released, true);
 	return released;
 }
 
@@ -309,21 +309,21 @@ bool futex_put_private_hash(struct futex_private_hash *fph)
  */
 void futex_hash_get(struct futex_hash_bucket *hb)
 {
-	struct futex_private_hash *hb_p = hb->hb_p;
+	struct futex_private_hash *fph = hb->priv;
 
-	if (!hb_p)
+	if (!fph)
 		return;
 
-	WARN_ON_ONCE(!rcuref_get(&hb_p->users));
+	WARN_ON_ONCE(!rcuref_get(&fph->users));
 }
 
 void futex_hash_put(struct futex_hash_bucket *hb)
 {
-	struct futex_private_hash *hb_p = hb->hb_p;
+	struct futex_private_hash *fph = hb->priv;
 
-	if (!hb_p)
+	if (!fph)
 		return;
-	futex_put_private_hash(hb_p);
+	futex_put_private_hash(fph);
 }
 #endif
 
@@ -1167,7 +1167,7 @@ static void compat_exit_robust_list(struct task_struct *curr)
 static void exit_pi_state_list(struct task_struct *curr)
 {
 	struct list_head *next, *head = &curr->pi_state_list;
-	struct futex_private_hash *hb_p;
+	struct futex_private_hash *fph;
 	struct futex_pi_state *pi_state;
 	union futex_key key = FUTEX_KEY_INIT;
 
@@ -1181,7 +1181,7 @@ static void exit_pi_state_list(struct task_struct *curr)
 	 * on the mutex.
 	 */
 	WARN_ON(curr != current);
-	hb_p = futex_get_private_hash();
+	fph = futex_get_private_hash();
 	/*
 	 * We are a ZOMBIE and nobody can enqueue itself on
 	 * pi_state_list anymore, but we have to be careful
@@ -1244,8 +1244,8 @@ static void exit_pi_state_list(struct task_struct *curr)
 		raw_spin_lock_irq(&curr->pi_lock);
 	}
 	raw_spin_unlock_irq(&curr->pi_lock);
-	if (hb_p)
-		futex_put_private_hash(hb_p);
+	if (fph)
+		futex_put_private_hash(fph);
 }
 #else
 static inline void exit_pi_state_list(struct task_struct *curr) { }
@@ -1360,10 +1360,10 @@ void futex_exit_release(struct task_struct *tsk)
 }
 
 static void futex_hash_bucket_init(struct futex_hash_bucket *fhb,
-				   struct futex_private_hash *hb_p)
+				   struct futex_private_hash *fph)
 {
 #ifdef CONFIG_FUTEX_PRIVATE_HASH
-	fhb->hb_p = hb_p;
+	fhb->priv = fph;
 #endif
 	atomic_set(&fhb->waiters, 0);
 	plist_head_init(&fhb->chain);
@@ -1373,7 +1373,7 @@ static void futex_hash_bucket_init(struct futex_hash_bucket *fhb,
 #ifdef CONFIG_FUTEX_PRIVATE_HASH
 void futex_hash_free(struct mm_struct *mm)
 {
-	struct futex_private_hash *hb_p;
+	struct futex_private_hash *fph;
 
 	kvfree(mm->futex_phash_new);
 	/*
@@ -1384,19 +1384,19 @@ void futex_hash_free(struct mm_struct *mm)
 	 * Since there can not be a thread holding a reference to the private
 	 * hash we free it immediately.
 	 */
-	hb_p = rcu_dereference_raw(mm->futex_phash);
-	if (!hb_p)
+	fph = rcu_dereference_raw(mm->futex_phash);
+	if (!fph)
 		return;
 
-	if (!hb_p->initial_ref_dropped && WARN_ON(!futex_put_private_hash(hb_p)))
+	if (!fph->initial_ref_dropped && WARN_ON(!futex_put_private_hash(fph)))
 		return;
 
-	kvfree(hb_p);
+	kvfree(fph);
 }
 
 static int futex_hash_allocate(unsigned int hash_slots)
 {
-	struct futex_private_hash *hb_p, *hb_tofree = NULL;
+	struct futex_private_hash *fph, *hb_tofree = NULL;
 	struct mm_struct *mm = current->mm;
 	size_t alloc_size;
 	int i;
@@ -1415,29 +1415,29 @@ static int futex_hash_allocate(unsigned int hash_slots)
 					&alloc_size)))
 		return -ENOMEM;
 
-	hb_p = kvmalloc(alloc_size, GFP_KERNEL_ACCOUNT);
-	if (!hb_p)
+	fph = kvmalloc(alloc_size, GFP_KERNEL_ACCOUNT);
+	if (!fph)
 		return -ENOMEM;
 
-	rcuref_init(&hb_p->users, 1);
-	hb_p->initial_ref_dropped = false;
-	hb_p->released = false;
-	hb_p->hash_mask = hash_slots - 1;
+	rcuref_init(&fph->users, 1);
+	fph->initial_ref_dropped = false;
+	fph->released = false;
+	fph->hash_mask = hash_slots - 1;
 
 	for (i = 0; i < hash_slots; i++)
-		futex_hash_bucket_init(&hb_p->queues[i], hb_p);
+		futex_hash_bucket_init(&fph->queues[i], fph);
 
 	scoped_guard(mutex, &mm->futex_hash_lock) {
 		if (mm->futex_phash_new) {
-			if (mm->futex_phash_new->hash_mask <= hb_p->hash_mask) {
+			if (mm->futex_phash_new->hash_mask <= fph->hash_mask) {
 				hb_tofree = mm->futex_phash_new;
 			} else {
-				hb_tofree = hb_p;
-				hb_p = mm->futex_phash_new;
+				hb_tofree = fph;
+				fph = mm->futex_phash_new;
 			}
 			mm->futex_phash_new = NULL;
 		}
-		futex_assign_new_hash(hb_p, mm);
+		futex_assign_new_hash(fph, mm);
 	}
 	kvfree(hb_tofree);
 	return 0;
@@ -1446,16 +1446,16 @@ static int futex_hash_allocate(unsigned int hash_slots)
 int futex_hash_allocate_default(void)
 {
 	unsigned int threads, buckets, current_buckets = 0;
-	struct futex_private_hash *hb_p;
+	struct futex_private_hash *fph;
 
 	if (!current->mm)
 		return 0;
=20 scoped_guard(rcu) { threads =3D min_t(unsigned int, get_nr_threads(current), num_online_cpus= ()); - hb_p =3D rcu_dereference(current->mm->futex_phash); - if (hb_p) - current_buckets =3D hb_p->hash_mask + 1; + fph =3D rcu_dereference(current->mm->futex_phash); + if (fph) + current_buckets =3D fph->hash_mask + 1; } =20 /* @@ -1473,12 +1473,12 @@ int futex_hash_allocate_default(void) =20 static int futex_hash_get_slots(void) { - struct futex_private_hash *hb_p; + struct futex_private_hash *fph; =20 guard(rcu)(); - hb_p =3D rcu_dereference(current->mm->futex_phash); - if (hb_p) - return hb_p->hash_mask + 1; + fph =3D rcu_dereference(current->mm->futex_phash); + if (fph) + return fph->hash_mask + 1; return 0; } =20 diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h index 782021feffe2e..99218d220e534 100644 --- a/kernel/futex/futex.h +++ b/kernel/futex/futex.h @@ -118,7 +118,7 @@ struct futex_hash_bucket { atomic_t waiters; spinlock_t lock; struct plist_head chain; - struct futex_private_hash *hb_p; + struct futex_private_hash *priv; } ____cacheline_aligned_in_smp; =20 /* @@ -209,13 +209,13 @@ extern struct futex_hash_bucket *__futex_hash(union f= utex_key *key); extern void futex_hash_get(struct futex_hash_bucket *hb); extern void futex_hash_put(struct futex_hash_bucket *hb); extern struct futex_private_hash *futex_get_private_hash(void); -extern bool futex_put_private_hash(struct futex_private_hash *hb_p); +extern bool futex_put_private_hash(struct futex_private_hash *fph); =20 #else /* !CONFIG_FUTEX_PRIVATE_HASH */ static inline void futex_hash_get(struct futex_hash_bucket *hb) { } static inline void futex_hash_put(struct futex_hash_bucket *hb) { } static inline struct futex_private_hash *futex_get_private_hash(void) { re= turn NULL; } -static inline bool futex_put_private_hash(struct futex_private_hash *hb_p)= { return false; } +static inline bool futex_put_private_hash(struct futex_private_hash *fph) = { return false; } #endif =20 DEFINE_CLASS(hb, struct 
futex_hash_bucket *, diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c index 67eebb5b4b212..0d150453a0b41 100644 --- a/kernel/futex/waitwake.c +++ b/kernel/futex/waitwake.c @@ -493,7 +493,7 @@ static int __futex_wait_multiple_setup(struct futex_vec= tor *vs, int count, int * =20 int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *wok= en) { - struct futex_private_hash *hb_p; + struct futex_private_hash *fph; int ret; =20 /* @@ -501,10 +501,10 @@ int futex_wait_multiple_setup(struct futex_vector *vs= , int count, int *woken) * hash to avoid blocking on mm_struct::futex_hash_bucket during rehash * after changing the task state. */ - hb_p =3D futex_get_private_hash(); + fph =3D futex_get_private_hash(); ret =3D __futex_wait_multiple_setup(vs, count, woken); - if (hb_p) - futex_put_private_hash(hb_p); + if (fph) + futex_put_private_hash(fph); return ret; } =20 --=20 2.47.2 From nobody Fri Dec 19 03:04:55 2025 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 96A992571A7 for ; Wed, 12 Mar 2025 15:16:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792611; cv=none; b=VWbfZNYLNnRN3XJhmbkf+HTOfg+zviheTV5SbjUrRim9t+YwcCIQoHv+bRPr1tdpX7MKSSt4mdX5CB06NQvChXU22tpu8BVfLw5sKvpuGZD9ylaXCmDjZjIeLMumaQSA9GgmfYxSJfhAQzb8ogypcW/0h3qYCBkXfwLLS6DcCbo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741792611; c=relaxed/simple; bh=SIbPV6uUijvWYp91xoelpRYdlXcaYkw3NXEvR43WxIM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
 Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 16/21] futex: Remove superfluous state
Date: Wed, 12 Mar 2025 16:16:29 +0100
Message-ID: <20250312151634.2183278-17-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Peter Zijlstra

The whole initial_ref_dropped and released state leads to confusing code.

[bigeasy: use rcuref_is_dead() ]

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/futex/core.c  | 116 +++++++++++++++++++++----------------------
 kernel/futex/futex.h |   4 +-
 2 files changed, 58 insertions(+), 62 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 9b87c4f128f14..37c3e020f2f03 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -61,8 +61,6 @@ struct futex_private_hash {
 	rcuref_t users;
 	unsigned int hash_mask;
 	struct rcu_head rcu;
-	bool initial_ref_dropped;
-	bool released;
 	struct futex_hash_bucket queues[];
 };

@@ -175,49 +173,32 @@ static void futex_rehash_current_users(struct futex_private_hash *old,
 	}
 }

-static void futex_assign_new_hash(struct futex_private_hash *new,
-				  struct mm_struct *mm)
+static bool futex_assign_new_hash(struct mm_struct *mm,
+				  struct futex_private_hash *new)
 {
-	bool drop_init_ref = new != NULL;
 	struct futex_private_hash *fph;

-	if (!new) {
-		new = mm->futex_phash_new;
-		mm->futex_phash_new = NULL;
-	}
-	/* Someone was quicker, the current mask is valid */
-	if (!new)
-		return;
+	WARN_ON_ONCE(mm->futex_phash_new);

-	fph = rcu_dereference_check(mm->futex_phash,
-				    lockdep_is_held(&mm->futex_hash_lock));
+	fph = rcu_dereference_protected(mm->futex_phash,
+					lockdep_is_held(&mm->futex_hash_lock));
 	if (fph) {
 		if (fph->hash_mask >= new->hash_mask) {
 			/* It was increased again while we were waiting */
 			kvfree(new);
-			return;
+			return true;
 		}
-		/*
-		 * If the caller started the resize then the initial reference
-		 * needs to be dropped. If the object can not be deconstructed
-		 * we save new for later and ensure the reference counter
-		 * is not dropped again.
-		 */
-		if (drop_init_ref &&
-		    (fph->initial_ref_dropped || !futex_put_private_hash(fph))) {
+
+		if (!rcuref_is_dead(&fph->users)) {
 			mm->futex_phash_new = new;
-			fph->initial_ref_dropped = true;
-			return;
-		}
-		if (!READ_ONCE(fph->released)) {
-			mm->futex_phash_new = new;
-			return;
+			return false;
 		}

 		futex_rehash_current_users(fph, new);
 	}
 	rcu_assign_pointer(mm->futex_phash, new);
 	kvfree_rcu(fph, rcu);
+	return true;
 }

 struct futex_private_hash *futex_get_private_hash(void)
@@ -244,11 +225,26 @@ struct futex_private_hash *futex_get_private_hash(void)
 		if (rcuref_get(&fph->users))
 			return fph;
 	}
-	scoped_guard(mutex, &current->mm->futex_hash_lock)
-		futex_assign_new_hash(NULL, mm);
+	scoped_guard (mutex, &mm->futex_hash_lock) {
+		struct futex_private_hash *fph;
+
+		fph = mm->futex_phash_new;
+		if (fph) {
+			mm->futex_phash_new = NULL;
+			futex_assign_new_hash(mm, fph);
+		}
+	}
 	goto again;
 }

+void futex_put_private_hash(struct futex_private_hash *fph)
+{
+	/* Ignore return value, last put is verified via rcuref_is_dead() */
+	if (rcuref_put(&fph->users)) {
+		;
+	}
+}
+
 static struct futex_private_hash *futex_get_private_hb(union futex_key *key)
 {
 	if (!futex_key_is_private(key))
@@ -289,17 +285,6 @@ struct futex_hash_bucket *__futex_hash(union futex_key *key)
 }

 #ifdef CONFIG_FUTEX_PRIVATE_HASH
-bool futex_put_private_hash(struct futex_private_hash *fph)
-{
-	bool released;
-
-	guard(preempt)();
-	released = rcuref_put_rcusafe(&fph->users);
-	if (released)
-		WRITE_ONCE(fph->released, true);
-	return released;
-}
-
 /**
  * futex_hash_get - Get an additional reference for the local hash.
  * @hb: ptr to the private local hash.
@@ -1376,22 +1361,11 @@ void futex_hash_free(struct mm_struct *mm)
 	struct futex_private_hash *fph;

 	kvfree(mm->futex_phash_new);
-	/*
-	 * The mm_struct belonging to the task is about to be removed so all
-	 * threads, that ever accessed the private hash, are gone and the
-	 * pointer can be accessed directly (omitting a RCU-read section or
-	 * lock).
-	 * Since there can not be a thread holding a reference to the private
-	 * hash we free it immediately.
-	 */
 	fph = rcu_dereference_raw(mm->futex_phash);
-	if (!fph)
-		return;
-
-	if (!fph->initial_ref_dropped && WARN_ON(!futex_put_private_hash(fph)))
-		return;
-
-	kvfree(fph);
+	if (fph) {
+		WARN_ON_ONCE(rcuref_read(&fph->users) > 1);
+		kvfree(fph);
+	}
 }

 static int futex_hash_allocate(unsigned int hash_slots)
@@ -1420,15 +1394,32 @@ static int futex_hash_allocate(unsigned int hash_slots)
 		return -ENOMEM;

 	rcuref_init(&fph->users, 1);
-	fph->initial_ref_dropped = false;
-	fph->released = false;
 	fph->hash_mask = hash_slots - 1;

 	for (i = 0; i < hash_slots; i++)
 		futex_hash_bucket_init(&fph->queues[i], fph);

 	scoped_guard(mutex, &mm->futex_hash_lock) {
+		if (mm->futex_phash && !mm->futex_phash_new) {
+			/*
+			 * If we have an existing hash, but do not yet have
+			 * allocated a replacement hash, drop the initial
+			 * reference on the existing hash.
+			 *
+			 * Ignore the return value; removal is serialized by
+			 * mm->futex_hash_lock which we currently hold and last
+			 * put is verified via rcuref_is_dead().
+			 */
+			if (rcuref_put(&mm->futex_phash->users)) {
+				;
+			}
+		}
+
+		if (mm->futex_phash_new) {
+			/*
+			 * If we already have a replacement hash pending;
+			 * keep the larger hash.
+			 */
 			if (mm->futex_phash_new->hash_mask <= fph->hash_mask) {
 				hb_tofree = mm->futex_phash_new;
 			} else {
@@ -1437,7 +1428,12 @@ static int futex_hash_allocate(unsigned int hash_slots)
 			}
 			mm->futex_phash_new = NULL;
 		}
-		futex_assign_new_hash(fph, mm);
+
+		/*
+		 * Will set mm->futex_phash_new on failure;
+		 * futex_get_private_hash() will try again.
+		 */
+		futex_assign_new_hash(mm, fph);
 	}
 	kvfree(hb_tofree);
 	return 0;

diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 99218d220e534..5b6b58e8a7008 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -209,13 +209,13 @@ extern struct futex_hash_bucket *__futex_hash(union futex_key *key);
 extern void futex_hash_get(struct futex_hash_bucket *hb);
 extern void futex_hash_put(struct futex_hash_bucket *hb);
 extern struct futex_private_hash *futex_get_private_hash(void);
-extern bool futex_put_private_hash(struct futex_private_hash *fph);
+extern void futex_put_private_hash(struct futex_private_hash *fph);

 #else /* !CONFIG_FUTEX_PRIVATE_HASH */
 static inline void futex_hash_get(struct futex_hash_bucket *hb) { }
 static inline void futex_hash_put(struct futex_hash_bucket *hb) { }
 static inline struct futex_private_hash *futex_get_private_hash(void) { return NULL; }
-static inline bool futex_put_private_hash(struct futex_private_hash *fph) { return false; }
+static inline void futex_put_private_hash(struct futex_private_hash *fph) { }
 #endif

 DEFINE_CLASS(hb, struct futex_hash_bucket *,

-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
 Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 17/21] futex: Untangle and naming
Date: Wed, 12 Mar 2025 16:16:30 +0100
Message-ID: <20250312151634.2183278-18-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Peter Zijlstra

Untangle the futex_private_hash::users increment from finding the hb.

	hb = __futex_hash(key)		/* finds the hb */
	hb = futex_hash(key)		/* finds the hb and inc users */

Use __futex_hash() for re-hashing, notably allowing to rehash into the
global hash.
This gets us:

	hb = futex_hash(key)		/* gets hb and inc users */
	futex_hash_get(hb)		/* inc users */
	futex_hash_put(hb)		/* dec users */

But then we have:

	fph = futex_get_private_hash()	/* get fph and inc */
	futex_put_private_hash()	/* dec */

Which doesn't match naming, so change to:

	fph = futex_private_hash()	/* get and inc */
	futex_private_hash_get(fph)	/* inc */
	futex_private_hash_put(fph)	/* dec */

Add a CLASS for the private_hash, to clean up some trivial wrappers.

Additional random renaming that happened while mucking about with the
code.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/futex/core.c     | 170 +++++++++++++++++++++++-----------------
 kernel/futex/futex.h    |  27 +++++--
 kernel/futex/waitwake.c |  25 ++----
 3 files changed, 124 insertions(+), 98 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 37c3e020f2f03..1c00890cc4fb5 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -124,25 +124,34 @@ static inline bool futex_key_is_private(union futex_key *key)
 	return !(key->both.offset & (FUT_OFF_INODE | FUT_OFF_MMSHARED));
 }

-static struct futex_hash_bucket *futex_hash_private(union futex_key *key,
-						    struct futex_hash_bucket *fhb,
-						    u32 hash_mask)
+static struct futex_hash_bucket *
+__futex_hash(union futex_key *key, struct futex_private_hash *fph);
+
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
+static struct futex_hash_bucket *
+__futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
 {
 	u32 hash;

+	if (!futex_key_is_private(key))
+		return NULL;
+
+	if (!fph)
+		fph = rcu_dereference(key->private.mm->futex_phash);
+	if (!fph || !fph->hash_mask)
+		return NULL;
+
 	hash = jhash2((void *)&key->private.address,
 		      sizeof(key->private.address) / 4,
 		      key->both.offset);
-	return &fhb[hash & hash_mask];
+	return &fph->queues[hash & fph->hash_mask];
 }

-#ifdef CONFIG_FUTEX_PRIVATE_HASH
-static void futex_rehash_current_users(struct futex_private_hash *old,
-				       struct futex_private_hash *new)
+static void futex_rehash_private(struct futex_private_hash *old,
+				 struct futex_private_hash *new)
 {
 	struct futex_hash_bucket *hb_old, *hb_new;
 	unsigned int slots = old->hash_mask + 1;
-	u32 hash_mask = new->hash_mask;
 	unsigned int i;

 	for (i = 0; i < slots; i++) {
@@ -158,7 +167,7 @@ static void futex_rehash_current_users(struct futex_private_hash *old,

 			WARN_ON_ONCE(this->lock_ptr != &hb_old->lock);

-			hb_new = futex_hash_private(&this->key, new->queues, hash_mask);
+			hb_new = __futex_hash(&this->key, new);
 			futex_hb_waiters_inc(hb_new);
 			/*
 			 * The new pointer isn't published yet but an already
@@ -173,8 +182,8 @@ static void futex_rehash_current_users(struct futex_private_hash *old,
 	}
 }

-static bool futex_assign_new_hash(struct mm_struct *mm,
-				  struct futex_private_hash *new)
+static bool __futex_pivot_hash(struct mm_struct *mm,
+			       struct futex_private_hash *new)
 {
 	struct futex_private_hash *fph;

@@ -194,14 +203,27 @@ static bool futex_assign_new_hash(struct mm_struct *mm,
 			return false;
 		}

-		futex_rehash_current_users(fph, new);
+		futex_rehash_private(fph, new);
 	}
 	rcu_assign_pointer(mm->futex_phash, new);
 	kvfree_rcu(fph, rcu);
 	return true;
 }

-struct futex_private_hash *futex_get_private_hash(void)
+static void futex_pivot_hash(struct mm_struct *mm)
+{
+	scoped_guard (mutex, &mm->futex_hash_lock) {
+		struct futex_private_hash *fph;
+
+		fph = mm->futex_phash_new;
+		if (fph) {
+			mm->futex_phash_new = NULL;
+			__futex_pivot_hash(mm, fph);
+		}
+	}
+}
+
+struct futex_private_hash *futex_private_hash(void)
 {
 	struct mm_struct *mm = current->mm;
 	/*
@@ -225,41 +247,73 @@ struct futex_private_hash *futex_get_private_hash(void)
 		if (rcuref_get(&fph->users))
 			return fph;
 	}
-	scoped_guard (mutex, &mm->futex_hash_lock) {
-		struct futex_private_hash *fph;
-
-		fph = mm->futex_phash_new;
-		if (fph) {
-			mm->futex_phash_new = NULL;
-			futex_assign_new_hash(mm, fph);
-		}
-	}
+	futex_pivot_hash(mm);
 	goto again;
 }

-void futex_put_private_hash(struct futex_private_hash *fph)
+bool futex_private_hash_get(struct futex_private_hash *fph)
 {
-	/* Ignore return value, last put is verified via rcuref_is_dead() */
-	if (rcuref_put(&fph->users)) {
-		;
+	return rcuref_get(&fph->users);
+}
+
+void futex_private_hash_put(struct futex_private_hash *fph)
+{
+	/*
+	 * Ignore the result; the DEAD state is picked up
+	 * when rcuref_get() starts failing via rcuref_is_dead().
+	 */
+	bool __maybe_unused ignore = rcuref_put(&fph->users);
+}
+
+struct futex_hash_bucket *futex_hash(union futex_key *key)
+{
+	struct futex_private_hash *fph;
+	struct futex_hash_bucket *hb;
+
+again:
+	scoped_guard (rcu) {
+		hb = __futex_hash(key, NULL);
+		fph = hb->priv;
+
+		if (!fph || futex_private_hash_get(fph))
+			return hb;
 	}
+	futex_pivot_hash(key->private.mm);
+	goto again;
 }

-static struct futex_private_hash *futex_get_private_hb(union futex_key *key)
+void futex_hash_get(struct futex_hash_bucket *hb)
 {
-	if (!futex_key_is_private(key))
-		return NULL;
+	struct futex_private_hash *fph = hb->priv;

-	return futex_get_private_hash();
+	if (!fph)
+		return;
+	WARN_ON_ONCE(!futex_private_hash_get(fph));
 }

-#else
+void futex_hash_put(struct futex_hash_bucket *hb)
+{
+	struct futex_private_hash *fph = hb->priv;

-static struct futex_private_hash *futex_get_private_hb(union futex_key *key)
+	if (!fph)
+		return;
+	futex_private_hash_put(fph);
+}
+
+#else /* !CONFIG_FUTEX_PRIVATE_HASH */
+
+static inline struct futex_hash_bucket *
+__futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
 {
 	return NULL;
 }
-#endif
+
+struct futex_hash_bucket *futex_hash(union futex_key *key)
+{
+	return __futex_hash(key, NULL);
+}
+
+#endif /* CONFIG_FUTEX_PRIVATE_HASH */

 /**
  * futex_hash - Return the hash bucket in the global hash
@@ -269,14 +323,15 @@ static struct futex_private_hash *futex_get_private_hb(union futex_key *key)
  * corresponding hash bucket in the global hash. If the FUTEX is private and
  * a local hash table is privated then this one is used.
  */
-struct futex_hash_bucket *__futex_hash(union futex_key *key)
+static struct futex_hash_bucket *
+__futex_hash(union futex_key *key, struct futex_private_hash *fph)
 {
-	struct futex_private_hash *fph;
+	struct futex_hash_bucket *hb;
 	u32 hash;

-	fph = futex_get_private_hb(key);
-	if (fph)
-		return futex_hash_private(key, fph->queues, fph->hash_mask);
+	hb = __futex_hash_private(key, fph);
+	if (hb)
+		return hb;

 	hash = jhash2((u32 *)key,
 		      offsetof(typeof(*key), both.offset) / 4,
@@ -284,34 +339,6 @@ struct futex_hash_bucket *__futex_hash(union futex_key *key)
 	return &futex_queues[hash & futex_hashmask];
 }

-#ifdef CONFIG_FUTEX_PRIVATE_HASH
-/**
- * futex_hash_get - Get an additional reference for the local hash.
- * @hb: ptr to the private local hash.
- *
- * Obtain an additional reference for the already obtained hash bucket. The
- * caller must already own an reference.
- */
-void futex_hash_get(struct futex_hash_bucket *hb)
-{
-	struct futex_private_hash *fph = hb->priv;
-
-	if (!fph)
-		return;
-
-	WARN_ON_ONCE(!rcuref_get(&fph->users));
-}
-
-void futex_hash_put(struct futex_hash_bucket *hb)
-{
-	struct futex_private_hash *fph = hb->priv;
-
-	if (!fph)
-		return;
-	futex_put_private_hash(fph);
-}
-#endif
-
 /**
  * futex_setup_timer - set up the sleeping hrtimer.
  * @time:	ptr to the given timeout value
@@ -1152,7 +1179,6 @@ static void compat_exit_robust_list(struct task_struct *curr)
 static void exit_pi_state_list(struct task_struct *curr)
 {
 	struct list_head *next, *head = &curr->pi_state_list;
-	struct futex_private_hash *fph;
 	struct futex_pi_state *pi_state;
 	union futex_key key = FUTEX_KEY_INIT;

@@ -1166,7 +1192,7 @@ static void exit_pi_state_list(struct task_struct *curr)
 	 * on the mutex.
 	 */
 	WARN_ON(curr != current);
-	fph = futex_get_private_hash();
+	guard(private_hash)();
 	/*
 	 * We are a ZOMBIE and nobody can enqueue itself on
 	 * pi_state_list anymore, but we have to be careful
@@ -1229,8 +1255,6 @@ static void exit_pi_state_list(struct task_struct *curr)
 		raw_spin_lock_irq(&curr->pi_lock);
 	}
 	raw_spin_unlock_irq(&curr->pi_lock);
-	if (fph)
-		futex_put_private_hash(fph);
 }
 #else
 static inline void exit_pi_state_list(struct task_struct *curr) { }
@@ -1410,9 +1434,7 @@ static int futex_hash_allocate(unsigned int hash_slots)
 			 * mm->futex_hash_lock which we currently hold and last
 			 * put is verified via rcuref_is_dead().
 			 */
-			if (rcuref_put(&mm->futex_phash->users)) {
-				;
-			}
+			futex_private_hash_put(mm->futex_phash);
 		}

 		if (mm->futex_phash_new) {
@@ -1433,7 +1455,7 @@ static int futex_hash_allocate(unsigned int hash_slots)
 		 * Will set mm->futex_phash_new on failure;
 		 * futex_get_private_hash() will try again.
 		 */
-		futex_assign_new_hash(mm, fph);
+		__futex_pivot_hash(mm, fph);
 	}
 	kvfree(hb_tofree);
 	return 0;

diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 5b6b58e8a7008..8eba9982bcae1 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -204,23 +204,38 @@ extern struct hrtimer_sleeper *
 futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
 		  int flags, u64 range_ns);

-extern struct futex_hash_bucket *__futex_hash(union futex_key *key);
+extern struct futex_hash_bucket *futex_hash(union futex_key *key);
+
 #ifdef CONFIG_FUTEX_PRIVATE_HASH
 extern void futex_hash_get(struct futex_hash_bucket *hb);
 extern void futex_hash_put(struct futex_hash_bucket *hb);
-extern struct futex_private_hash *futex_get_private_hash(void);
-extern void futex_put_private_hash(struct futex_private_hash *fph);
+
+extern struct futex_private_hash *futex_private_hash(void);
+extern bool futex_private_hash_get(struct futex_private_hash *fph);
+extern void futex_private_hash_put(struct futex_private_hash *fph);

 #else /* !CONFIG_FUTEX_PRIVATE_HASH */
 static inline void futex_hash_get(struct futex_hash_bucket *hb) { }
 static inline void futex_hash_put(struct futex_hash_bucket *hb) { }
-static inline struct futex_private_hash *futex_get_private_hash(void) { return NULL; }
-static inline void futex_put_private_hash(struct futex_private_hash *fph) { }
+
+static inline struct futex_private_hash *futex_private_hash(void)
+{
+	return NULL;
+}
+static inline bool futex_private_hash_get(struct futex_private_hash *fph)
+{
+	return false;
+}
+static inline void futex_private_hash_put(struct futex_private_hash *fph) { }
 #endif

 DEFINE_CLASS(hb, struct futex_hash_bucket *,
 	     if (_T) futex_hash_put(_T),
-	     __futex_hash(key), union futex_key *key);
+	     futex_hash(key), union futex_key *key);
+
+DEFINE_CLASS(private_hash, struct futex_private_hash *,
+	     if (_T) futex_private_hash_put(_T),
+	     futex_private_hash(), void);

 /**
  * futex_match - Check whether two futex keys are equal
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 0d150453a0b41..74647f6bf75de 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -400,12 +400,18 @@ int futex_unqueue_multiple(struct futex_vector *v, int count)
 *  -  0 - Success
 *  - <0 - -EFAULT, -EWOULDBLOCK or -EINVAL
 */
-static int __futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
+int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
 {
 	bool retry = false;
 	int ret, i;
 	u32 uval;

+	/*
+	 * Make sure to have a reference on the private_hash such that we
+	 * don't block on rehash after changing the task state below.
+	 */
+	guard(private_hash)();
+
 	/*
 	 * Enqueuing multiple futexes is tricky, because we need to enqueue
 	 * each futex on the list before dealing with the next one to avoid
@@ -491,23 +497,6 @@ static int __futex_wait_multiple_setup(struct futex_vector *vs, int count, int *
 	return 0;
 }

-int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
-{
-	struct futex_private_hash *fph;
-	int ret;
-
-	/*
-	 * Assume to have a private futex and acquire a reference on the private
-	 * hash to avoid blocking on mm_struct::futex_hash_bucket during rehash
-	 * after changing the task state.
-	 */
-	fph = futex_get_private_hash();
-	ret = __futex_wait_multiple_setup(vs, count, woken);
-	if (fph)
-		futex_put_private_hash(fph);
-	return ret;
-}
-
 /**
  * futex_sleep_multiple - Check sleeping conditions and sleep
  * @vs:    List of futexes to wait for

-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc:
 André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
 Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 18/21] futex: Rework SET_SLOTS
Date: Wed, 12 Mar 2025 16:16:31 +0100
Message-ID: <20250312151634.2183278-19-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

From: Peter Zijlstra

Let SET_SLOTS take precedence over the default scaling: once the user sets
a size, stick with it.

Notably, SET_SLOTS with a size of 0 causes fph->hash_mask to be 0, which
makes __futex_hash() return the global hash buckets. Once in this state it
is impossible to recover, so further SET_SLOTS requests are refused.

Also, let prctl() users wait for and retry the rehash, such that once
prctl() returns the new size is in effect.
[bigeasy: make the private hash depend on MMU]

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 init/Kconfig        |   2 +-
 kernel/futex/core.c | 183 +++++++++++++++++++++++++++++---------------
 2 files changed, 123 insertions(+), 62 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index bb209c12a2bda..b0a448608446d 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1685,7 +1685,7 @@ config FUTEX_PI
 
 config FUTEX_PRIVATE_HASH
 	bool
-	depends on FUTEX && !BASE_SMALL
+	depends on FUTEX && !BASE_SMALL && MMU
 	default y
 
 config EPOLL
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 1c00890cc4fb5..bc7451287b2ce 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -61,6 +61,8 @@ struct futex_private_hash {
 	rcuref_t users;
 	unsigned int hash_mask;
 	struct rcu_head rcu;
+	void *mm;
+	bool custom;
 	struct futex_hash_bucket queues[];
 };
 
@@ -192,12 +194,6 @@ static bool __futex_pivot_hash(struct mm_struct *mm,
 	fph = rcu_dereference_protected(mm->futex_phash,
 					lockdep_is_held(&mm->futex_hash_lock));
 	if (fph) {
-		if (fph->hash_mask >= new->hash_mask) {
-			/* It was increased again while we were waiting */
-			kvfree(new);
-			return true;
-		}
-
 		if (!rcuref_is_dead(&fph->users)) {
 			mm->futex_phash_new = new;
 			return false;
@@ -207,6 +203,7 @@ static bool __futex_pivot_hash(struct mm_struct *mm,
 	}
 	rcu_assign_pointer(mm->futex_phash, new);
 	kvfree_rcu(fph, rcu);
+	wake_up_var(mm);
 	return true;
 }
 
@@ -262,7 +259,8 @@ void futex_private_hash_put(struct futex_private_hash *fph)
 	 * Ignore the result; the DEAD state is picked up
 	 * when rcuref_get() starts failing via rcuref_is_dead().
 	 */
-	bool __maybe_unused ignore = rcuref_put(&fph->users);
+	if (rcuref_put(&fph->users))
+		wake_up_var(fph->mm);
 }
 
 struct futex_hash_bucket *futex_hash(union futex_key *key)
@@ -1392,72 +1390,128 @@ void futex_hash_free(struct mm_struct *mm)
 	}
 }
 
-static int futex_hash_allocate(unsigned int hash_slots)
+static bool futex_pivot_pending(struct mm_struct *mm)
+{
+	struct futex_private_hash *fph;
+
+	guard(rcu)();
+
+	if (!mm->futex_phash_new)
+		return false;
+
+	fph = rcu_dereference(mm->futex_phash);
+	return !rcuref_read(&fph->users);
+}
+
+static bool futex_hash_less(struct futex_private_hash *a,
+			    struct futex_private_hash *b)
+{
+	/* user provided always wins */
+	if (!a->custom && b->custom)
+		return true;
+	if (a->custom && !b->custom)
+		return false;
+
+	/* zero-sized hash wins */
+	if (!b->hash_mask)
+		return true;
+	if (!a->hash_mask)
+		return false;
+
+	/* keep the biggest */
+	if (a->hash_mask < b->hash_mask)
+		return true;
+	if (a->hash_mask > b->hash_mask)
+		return false;
+
+	return false; /* equal */
+}
+
+static int futex_hash_allocate(unsigned int hash_slots, bool custom)
 {
-	struct futex_private_hash *fph, *hb_tofree = NULL;
 	struct mm_struct *mm = current->mm;
-	size_t alloc_size;
+	struct futex_private_hash *fph;
 	int i;
 
-	if (hash_slots == 0)
-		hash_slots = 16;
-	hash_slots = clamp(hash_slots, 2, futex_hashmask + 1);
-	if (!is_power_of_2(hash_slots))
-		hash_slots = rounddown_pow_of_two(hash_slots);
+	if (hash_slots && (hash_slots == 1 || !is_power_of_2(hash_slots)))
+		return -EINVAL;
 
-	if (unlikely(check_mul_overflow(hash_slots, sizeof(struct futex_hash_bucket),
-					&alloc_size)))
-		return -ENOMEM;
+	/*
+	 * Once we've disabled the global hash there is no way back.
+	 */
+	scoped_guard(rcu) {
+		fph = rcu_dereference(mm->futex_phash);
+		if (fph && !fph->hash_mask) {
+			if (custom)
+				return -EBUSY;
+			return 0;
+		}
+	}
 
-	if (unlikely(check_add_overflow(alloc_size, sizeof(struct futex_private_hash),
-					&alloc_size)))
-		return -ENOMEM;
-
-	fph = kvmalloc(alloc_size, GFP_KERNEL_ACCOUNT);
+	fph = kvzalloc(struct_size(fph, queues, hash_slots), GFP_KERNEL_ACCOUNT);
 	if (!fph)
 		return -ENOMEM;
 
 	rcuref_init(&fph->users, 1);
-	fph->hash_mask = hash_slots - 1;
+	fph->hash_mask = hash_slots ? hash_slots - 1 : 0;
+	fph->custom = custom;
+	fph->mm = mm;
 
 	for (i = 0; i < hash_slots; i++)
 		futex_hash_bucket_init(&fph->queues[i], fph);
 
-	scoped_guard(mutex, &mm->futex_hash_lock) {
-		if (mm->futex_phash && !mm->futex_phash_new) {
-			/*
-			 * If we have an existing hash, but do not yet have
-			 * allocated a replacement hash, drop the initial
-			 * reference on the existing hash.
-			 *
-			 * Ignore the return value; removal is serialized by
-			 * mm->futex_hash_lock which we currently hold and last
-			 * put is verified via rcuref_is_dead().
-			 */
-			futex_private_hash_put(mm->futex_phash);
-		}
-
-		if (mm->futex_phash_new) {
-			/*
-			 * If we already have a replacement hash pending;
-			 * keep the larger hash.
-			 */
-			if (mm->futex_phash_new->hash_mask <= fph->hash_mask) {
-				hb_tofree = mm->futex_phash_new;
-			} else {
-				hb_tofree = fph;
-				fph = mm->futex_phash_new;
-			}
-			mm->futex_phash_new = NULL;
-		}
-
+	if (custom) {
 		/*
-		 * Will set mm->futex_phash_new on failure;
-		 * futex_get_private_hash() will try again.
+		 * Only let prctl() wait / retry; don't unduly delay clone().
 		 */
-		__futex_pivot_hash(mm, fph);
+again:
+		wait_var_event(mm, futex_pivot_pending(mm));
+	}
+
+	scoped_guard(mutex, &mm->futex_hash_lock) {
+		struct futex_private_hash *free __free(kvfree) = NULL;
+		struct futex_private_hash *cur, *new;
+
+		cur = rcu_dereference_protected(mm->futex_phash,
						lockdep_is_held(&mm->futex_hash_lock));
+		new = mm->futex_phash_new;
+		mm->futex_phash_new = NULL;
+
+		if (fph) {
+			if (cur && !new) {
+				/*
+				 * If we have an existing hash, but do not yet have
+				 * allocated a replacement hash, drop the initial
+				 * reference on the existing hash.
+				 */
+				futex_private_hash_put(cur);
+			}
+
+			if (new) {
+				/*
+				 * Two updates raced; throw out the lesser one.
+				 */
+				if (futex_hash_less(new, fph)) {
+					free = new;
+					new = fph;
+				} else {
+					free = fph;
+				}
+			} else {
+				new = fph;
+			}
+			fph = NULL;
+		}
+
+		if (new) {
+			/*
+			 * Will set mm->futex_phash_new on failure;
+			 * futex_get_private_hash() will try again.
+			 */
+			if (!__futex_pivot_hash(mm, new) && custom)
+				goto again;
+		}
 	}
-	kvfree(hb_tofree);
 	return 0;
 }
 
@@ -1470,10 +1524,17 @@ int futex_hash_allocate_default(void)
 		return 0;
 
 	scoped_guard(rcu) {
-		threads = min_t(unsigned int, get_nr_threads(current), num_online_cpus());
+		threads = min_t(unsigned int,
+				get_nr_threads(current),
+				num_online_cpus());
+
 		fph = rcu_dereference(current->mm->futex_phash);
-		if (fph)
+		if (fph) {
+			if (fph->custom)
+				return 0;
+
 			current_buckets = fph->hash_mask + 1;
+		}
 	}
 
 	/*
@@ -1486,7 +1547,7 @@ int futex_hash_allocate_default(void)
 	if (current_buckets >= buckets)
 		return 0;
 
-	return futex_hash_allocate(buckets);
+	return futex_hash_allocate(buckets, false);
 }
 
 static int futex_hash_get_slots(void)
@@ -1502,7 +1563,7 @@ static int futex_hash_get_slots(void)
 
 #else
 
-static int futex_hash_allocate(unsigned int hash_slots)
+static int futex_hash_allocate(unsigned int hash_slots, bool custom)
 {
 	return -EINVAL;
 }
@@ -1519,7 +1580,7 @@ int futex_hash_prctl(unsigned long
arg2, unsigned long arg3)
 
 	switch (arg2) {
 	case PR_FUTEX_HASH_SET_SLOTS:
-		ret = futex_hash_allocate(arg3);
+		ret = futex_hash_allocate(arg3, true);
 		break;
 
 	case PR_FUTEX_HASH_GET_SLOTS:
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
 Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
 Andrew Morton, Uladzislau Rezki, Christoph Hellwig, linux-mm@kvack.org,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 19/21] mm: Add vmalloc_huge_node()
Date: Wed, 12 Mar 2025 16:16:32 +0100
Message-ID: <20250312151634.2183278-20-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>
From: Peter Zijlstra

To enable node-specific hash tables.

[bigeasy: use __vmalloc_node_range_noprof(), add nommu bits]

Cc: Andrew Morton
Cc: Uladzislau Rezki
Cc: Christoph Hellwig
Cc: linux-mm@kvack.org
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Christoph Hellwig
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/vmalloc.h | 3 +++
 mm/nommu.c              | 5 +++++
 mm/vmalloc.c            | 7 +++++++
 3 files changed, 15 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e39..09c3e3e33f1f8 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -171,6 +171,9 @@ void *__vmalloc_node_noprof(unsigned long size, unsigned long align, gfp_t gfp_m
 void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
 #define vmalloc_huge(...)	alloc_hooks(vmalloc_huge_noprof(__VA_ARGS__))
 
+void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node) __alloc_size(1);
+#define vmalloc_huge_node(...)	alloc_hooks(vmalloc_huge_node_noprof(__VA_ARGS__))
+
 extern void *__vmalloc_array_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
 #define __vmalloc_array(...)
	alloc_hooks(__vmalloc_array_noprof(__VA_ARGS__))
 
diff --git a/mm/nommu.c b/mm/nommu.c
index baa79abdaf037..d04e601a8f4d7 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -209,6 +209,11 @@ EXPORT_SYMBOL(vmalloc_noprof);
 
 void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) __weak __alias(__vmalloc_noprof);
 
+void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node)
+{
+	return vmalloc_huge_noprof(size, gfp_mask);
+}
+
 /*
  * vzalloc - allocate virtually contiguous memory with zero fill
  *
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a6e7acebe9adf..69247b46413ca 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3966,6 +3966,13 @@ void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL_GPL(vmalloc_huge_noprof);
 
+void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node)
+{
+	return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
+					   gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
+					   node, __builtin_return_address(0));
+}
+
 /**
  * vzalloc - allocate virtually contiguous memory with zero fill
  * @size: allocation size
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
 Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 20/21] futex: Implement FUTEX2_NUMA
Date: Wed, 12 Mar 2025 16:16:33 +0100
Message-ID: <20250312151634.2183278-21-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>

From: Peter Zijlstra

Extend the futex2 interface to be NUMA-aware.

When FUTEX2_NUMA is specified for a futex, the user value is extended to
two words (of the same size). The first is the user value we all know; the
second one is the node on which to place this futex.

	struct futex_numa_32 {
		u32 val;
		u32 node;
	};

When node is set to ~0, WAIT will set it to the current node_id such that
WAKE knows where to find it. If userspace corrupts the node value between
WAIT and WAKE, the futex will not be found and no wakeup will happen.

When FUTEX2_NUMA is not set, the node is simply an extension of the hash,
such that traditional futexes are still interleaved over the nodes. This
avoids having to maintain a separate !NUMA hash table.

[bigeasy: ensure a hashsize of at least 4 in futex_init(), add pr_info()
with size and allocation information.]
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/futex.h      |   3 ++
 include/uapi/linux/futex.h |   8 +++
 kernel/futex/core.c        | 100 ++++++++++++++++++++++++++++++-------
 kernel/futex/futex.h       |  33 ++++++++++--
 4 files changed, 124 insertions(+), 20 deletions(-)

diff --git a/include/linux/futex.h b/include/linux/futex.h
index 7e14d2e9162d2..19c37afa0432a 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -34,6 +34,7 @@ union futex_key {
 		u64 i_seq;
 		unsigned long pgoff;
 		unsigned int offset;
+		/* unsigned int node; */
 	} shared;
 	struct {
 		union {
@@ -42,11 +43,13 @@ union futex_key {
 		};
 		unsigned long address;
 		unsigned int offset;
+		/* unsigned int node; */
 	} private;
 	struct {
 		u64 ptr;
 		unsigned long word;
 		unsigned int offset;
+		unsigned int node;	/* NOT hashed! */
 	} both;
 };
 
diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h
index d2ee625ea1890..0435025beaae8 100644
--- a/include/uapi/linux/futex.h
+++ b/include/uapi/linux/futex.h
@@ -74,6 +74,14 @@
 /* do not use */
 #define FUTEX_32	FUTEX2_SIZE_U32 /* historical accident :-( */
 
+
+/*
+ * When FUTEX2_NUMA doubles the futex word, the second word is a node value.
+ * The special value -1 indicates no-node. This is the same value as
+ * NUMA_NO_NODE, except that value is not ABI, this is.
+ */
+#define FUTEX_NO_NODE	(-1)
+
 /*
  * Max numbers of elements in a futex_waitv array
  */
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index bc7451287b2ce..b9da7dc6a900a 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -36,6 +36,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -51,11 +53,14 @@
  * reside in the same cacheline.
 */
 static struct {
-	struct futex_hash_bucket *queues;
 	unsigned long hashmask;
+	unsigned int hashshift;
+	struct futex_hash_bucket *queues[MAX_NUMNODES];
 } __futex_data __read_mostly __aligned(2*sizeof(long));
-#define futex_queues	(__futex_data.queues)
-#define futex_hashmask	(__futex_data.hashmask)
+
+#define futex_hashmask	(__futex_data.hashmask)
+#define futex_hashshift	(__futex_data.hashshift)
+#define futex_queues	(__futex_data.queues)
 
 struct futex_private_hash {
 	rcuref_t users;
@@ -326,15 +331,35 @@ __futex_hash(union futex_key *key, struct futex_private_hash *fph)
 {
 	struct futex_hash_bucket *hb;
 	u32 hash;
+	int node;
 
 	hb = __futex_hash_private(key, fph);
 	if (hb)
 		return hb;
 
 	hash = jhash2((u32 *)key,
-		      offsetof(typeof(*key), both.offset) / 4,
+		      offsetof(typeof(*key), both.offset) / sizeof(u32),
 		      key->both.offset);
-	return &futex_queues[hash & futex_hashmask];
+	node = key->both.node;
+
+	if (node == FUTEX_NO_NODE) {
+		/*
+		 * In case of !FLAGS_NUMA, use some unused hash bits to pick a
+		 * node -- this ensures regular futexes are interleaved across
+		 * the nodes and avoids having to allocate multiple
+		 * hash-tables.
+		 *
+		 * NOTE: this isn't perfectly uniform, but it is fast and
+		 * handles sparse node masks.
+		 */
+		node = (hash >> futex_hashshift) % nr_node_ids;
+		if (!node_possible(node)) {
+			node = find_next_bit_wrap(node_possible_map.bits,
+						  nr_node_ids, node);
+		}
+	}
+
+	return &futex_queues[node][hash & futex_hashmask];
 }
 
 /**
@@ -441,25 +466,49 @@ int get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key,
 	struct page *page;
 	struct folio *folio;
 	struct address_space *mapping;
-	int err, ro = 0;
+	int node, err, size, ro = 0;
 	bool fshared;
 
 	fshared = flags & FLAGS_SHARED;
+	size = futex_size(flags);
+	if (flags & FLAGS_NUMA)
+		size *= 2;
 
 	/*
	 * The futex address must be "naturally" aligned.
	 */
 	key->both.offset = address % PAGE_SIZE;
-	if (unlikely((address % sizeof(u32)) != 0))
+	if (unlikely((address % size) != 0))
 		return -EINVAL;
 	address -= key->both.offset;
 
-	if (unlikely(!access_ok(uaddr, sizeof(u32))))
+	if (unlikely(!access_ok(uaddr, size)))
 		return -EFAULT;
 
 	if (unlikely(should_fail_futex(fshared)))
 		return -EFAULT;
 
+	if (flags & FLAGS_NUMA) {
+		u32 __user *naddr = uaddr + size / 2;
+
+		if (futex_get_value(&node, naddr))
+			return -EFAULT;
+
+		if (node == FUTEX_NO_NODE) {
+			node = numa_node_id();
+			if (futex_put_value(node, naddr))
+				return -EFAULT;
+
+		} else if (node >= MAX_NUMNODES || !node_possible(node)) {
+			return -EINVAL;
+		}
+
+		key->both.node = node;
+
+	} else {
+		key->both.node = FUTEX_NO_NODE;
+	}
+
 	/*
 	 * PROCESS_PRIVATE futexes are fast.
 	 * As the mm cannot disappear under us and the 'key' only needs
@@ -1597,24 +1646,41 @@ int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
 static int __init futex_init(void)
 {
 	unsigned long hashsize, i;
-	unsigned int futex_shift;
+	unsigned int order, n;
+	unsigned long size;
 
 #ifdef CONFIG_BASE_SMALL
 	hashsize = 16;
 #else
-	hashsize = roundup_pow_of_two(256 * num_possible_cpus());
+	hashsize = 256 * num_possible_cpus();
+	hashsize /= num_possible_nodes();
+	hashsize = max(4, hashsize);
+	hashsize = roundup_pow_of_two(hashsize);
 #endif
+	futex_hashshift = ilog2(hashsize);
+	size = sizeof(struct futex_hash_bucket) * hashsize;
+	order = get_order(size);
 
-	futex_queues = alloc_large_system_hash("futex", sizeof(*futex_queues),
-					       hashsize, 0, 0,
-					       &futex_shift, NULL,
-					       hashsize, hashsize);
-	hashsize = 1UL << futex_shift;
+	for_each_node(n) {
+		struct futex_hash_bucket *table;
 
-	for (i = 0; i < hashsize; i++)
-		futex_hash_bucket_init(&futex_queues[i], NULL);
+		if (order > MAX_PAGE_ORDER)
+			table = vmalloc_huge_node(size, GFP_KERNEL, n);
+		else
+			table = alloc_pages_exact_nid(n, size, GFP_KERNEL);
+
+		BUG_ON(!table);
+
+		for
 (i = 0; i < hashsize; i++)
+			futex_hash_bucket_init(&table[i], NULL);
+
+		futex_queues[n] = table;
+	}
 
 	futex_hashmask = hashsize - 1;
+	pr_info("futex hash table entries: %lu (%lu bytes on %d NUMA nodes, total %lu KiB, %s).\n",
+		hashsize, size, num_possible_nodes(),
+		size * num_possible_nodes() / 1024,
+		order > MAX_PAGE_ORDER ? "vmalloc" : "linear");
 	return 0;
 }
 core_initcall(futex_init);
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 8eba9982bcae1..11c870a92b5d0 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -54,7 +54,7 @@ static inline unsigned int futex_to_flags(unsigned int op)
 	return flags;
 }
 
-#define FUTEX2_VALID_MASK (FUTEX2_SIZE_MASK | FUTEX2_PRIVATE)
+#define FUTEX2_VALID_MASK (FUTEX2_SIZE_MASK | FUTEX2_NUMA | FUTEX2_PRIVATE)
 
 /* FUTEX2_ to FLAGS_ */
 static inline unsigned int futex2_to_flags(unsigned int flags2)
@@ -87,6 +87,19 @@ static inline bool futex_flags_valid(unsigned int flags)
 	if ((flags & FLAGS_SIZE_MASK) != FLAGS_SIZE_32)
 		return false;
 
+	/*
+	 * Must be able to represent both FUTEX_NO_NODE and every valid nodeid
+	 * in a futex word.
+	 */
+	if (flags & FLAGS_NUMA) {
+		int bits = 8 * futex_size(flags);
+		u64 max = ~0ULL;
+
+		max >>= 64 - bits;
+		if (nr_node_ids >= max)
+			return false;
+	}
+
 	return true;
 }
 
@@ -290,7 +303,7 @@ static inline int futex_cmpxchg_value_locked(u32 *curval, u32 __user *uaddr, u32
 	 * This looks a bit overkill, but generally just results in a couple
 	 * of instructions.
 	 */
-static __always_inline int futex_read_inatomic(u32 *dest, u32 __user *from)
+static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
 {
 	u32 val;
 
@@ -307,12 +320,26 @@ static __always_inline int futex_read_inatomic(u32 *dest, u32 __user *from)
 	return -EFAULT;
 }
 
+static __always_inline int futex_put_value(u32 val, u32 __user *to)
+{
+	if (can_do_masked_user_access())
+		to = masked_user_access_begin(to);
+	else if (!user_read_access_begin(to, sizeof(*to)))
+		return -EFAULT;
+	unsafe_put_user(val, to, Efault);
+	user_read_access_end();
+	return 0;
+Efault:
+	user_read_access_end();
+	return -EFAULT;
+}
+
 static inline int futex_get_value_locked(u32 *dest, u32 __user *from)
 {
 	int ret;
 
 	pagefault_disable();
-	ret = futex_read_inatomic(dest, from);
+	ret = futex_get_value(dest, from);
 	pagefault_enable();
 
 	return ret;
-- 
2.47.2

From nobody Fri Dec 19 03:04:55 2025
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc:
 André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
 Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long,
 Sebastian Andrzej Siewior
Subject: [PATCH v10 21/21] futex: Implement FUTEX2_MPOL
Date: Wed, 12 Mar 2025 16:16:34 +0100
Message-ID: <20250312151634.2183278-22-bigeasy@linutronix.de>
In-Reply-To: <20250312151634.2183278-1-bigeasy@linutronix.de>
References: <20250312151634.2183278-1-bigeasy@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Peter Zijlstra

Extend the futex2 interface to be aware of mempolicy. When FUTEX2_MPOL is
specified and there is an MPOL_PREFERRED or home_node specified covering
the futex address, use that hash-map.

Notably, in this case the futex will go to the global node hashtable, even
if it is a PRIVATE futex.

When FUTEX2_NUMA|FUTEX2_MPOL is specified and the user specified node
value is FUTEX_NO_NODE, the MPOL lookup (as described above) will be tried
first before reverting to setting node to the local node.
[bigeasy: add CONFIG_FUTEX_MPOL]
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/mmap_lock.h  |   4 ++
 include/uapi/linux/futex.h |   2 +-
 init/Kconfig               |   5 ++
 kernel/futex/core.c        | 112 +++++++++++++++++++++++++++++++------
 kernel/futex/futex.h       |   4 ++
 5 files changed, 108 insertions(+), 19 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 45a21faa3ff62..89fb032545e0d 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include

 #define MMAP_LOCK_INITIALIZER(name) \
 	.mmap_lock = __RWSEM_INITIALIZER((name).mmap_lock),
@@ -217,6 +218,9 @@ static inline void mmap_read_unlock(struct mm_struct *mm)
 	up_read(&mm->mmap_lock);
 }

+DEFINE_GUARD(mmap_read_lock, struct mm_struct *,
+	     mmap_read_lock(_T), mmap_read_unlock(_T))
+
 static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
 {
 	__mmap_lock_trace_released(mm, false);
diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h
index 0435025beaae8..247c425e175ef 100644
--- a/include/uapi/linux/futex.h
+++ b/include/uapi/linux/futex.h
@@ -63,7 +63,7 @@
 #define FUTEX2_SIZE_U32		0x02
 #define FUTEX2_SIZE_U64		0x03
 #define FUTEX2_NUMA		0x04
-			/* 0x08 */
+#define FUTEX2_MPOL		0x08
 			/* 0x10 */
 			/* 0x20 */
 			/* 0x40 */
diff --git a/init/Kconfig b/init/Kconfig
index b0a448608446d..a4502a9077e03 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1688,6 +1688,11 @@ config FUTEX_PRIVATE_HASH
 	depends on FUTEX && !BASE_SMALL && MMU
 	default y

+config FUTEX_MPOL
+	bool
+	depends on FUTEX && NUMA
+	default y
+
 config EPOLL
 	bool "Enable eventpoll support" if EXPERT
 	default y
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index b9da7dc6a900a..65523f3cfe32e 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -43,6 +43,8 @@
 #include
 #include
 #include
+#include
+#include

 #include "futex.h"
 #include "../locking/rtmutex_common.h"
@@ -318,6 +320,73 @@ struct futex_hash_bucket *futex_hash(union futex_key *key)
 #endif /* CONFIG_FUTEX_PRIVATE_HASH */

+#ifdef CONFIG_FUTEX_MPOL
+static int __futex_key_to_node(struct mm_struct *mm, unsigned long addr)
+{
+	struct vm_area_struct *vma = vma_lookup(mm, addr);
+	struct mempolicy *mpol;
+	int node = FUTEX_NO_NODE;
+
+	if (!vma)
+		return FUTEX_NO_NODE;
+
+	mpol = vma_policy(vma);
+	if (!mpol)
+		return FUTEX_NO_NODE;
+
+	switch (mpol->mode) {
+	case MPOL_PREFERRED:
+		node = first_node(mpol->nodes);
+		break;
+	case MPOL_PREFERRED_MANY:
+	case MPOL_BIND:
+		if (mpol->home_node != NUMA_NO_NODE)
+			node = mpol->home_node;
+		break;
+	default:
+		break;
+	}
+
+	return node;
+}
+
+static int futex_key_to_node_opt(struct mm_struct *mm, unsigned long addr)
+{
+	int seq, node;
+
+	guard(rcu)();
+
+	if (!mmap_lock_speculate_try_begin(mm, &seq))
+		return -EBUSY;
+
+	node = __futex_key_to_node(mm, addr);
+
+	if (mmap_lock_speculate_retry(mm, seq))
+		return -EAGAIN;
+
+	return node;
+}
+
+static int futex_mpol(struct mm_struct *mm, unsigned long addr)
+{
+	int node;
+
+	node = futex_key_to_node_opt(mm, addr);
+	if (node >= FUTEX_NO_NODE)
+		return node;
+
+	guard(mmap_read_lock)(mm);
+	return __futex_key_to_node(mm, addr);
+}
+#else /* !CONFIG_FUTEX_MPOL */
+
+static int futex_mpol(struct mm_struct *mm, unsigned long addr)
+{
+	return FUTEX_NO_NODE;
+}
+
+#endif /* CONFIG_FUTEX_MPOL */
+
 /**
  * futex_hash - Return the hash bucket in the global hash
  * @key: Pointer to the futex key for which the hash is calculated
@@ -329,18 +398,20 @@ struct futex_hash_bucket *futex_hash(union futex_key *key)
 static struct futex_hash_bucket *
 __futex_hash(union futex_key *key, struct futex_private_hash *fph)
 {
-	struct futex_hash_bucket *hb;
+	int node = key->both.node;
 	u32 hash;
-	int node;

-	hb = __futex_hash_private(key, fph);
-	if (hb)
-		return hb;
+	if (node == FUTEX_NO_NODE) {
+		struct futex_hash_bucket *hb;
+
+		hb = __futex_hash_private(key, fph);
+		if (hb)
+			return hb;
+	}

 	hash = jhash2((u32 *)key,
 		      offsetof(typeof(*key), both.offset) / sizeof(u32),
 		      key->both.offset);
-	node = key->both.node;

 	if (node == FUTEX_NO_NODE) {
 		/*
@@ -488,27 +559,32 @@ int get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key,
 	if (unlikely(should_fail_futex(fshared)))
 		return -EFAULT;

+	node = FUTEX_NO_NODE;
+
 	if (flags & FLAGS_NUMA) {
 		u32 __user *naddr = uaddr + size / 2;

 		if (futex_get_value(&node, naddr))
 			return -EFAULT;

-		if (node == FUTEX_NO_NODE) {
-			node = numa_node_id();
-			if (futex_put_value(node, naddr))
-				return -EFAULT;
-
-		} else if (node >= MAX_NUMNODES || !node_possible(node)) {
+		if (node >= MAX_NUMNODES || !node_possible(node))
 			return -EINVAL;
-		}
-
-		key->both.node = node;
-
-	} else {
-		key->both.node = FUTEX_NO_NODE;
 	}

+	if (node == FUTEX_NO_NODE && (flags & FLAGS_MPOL))
+		node = futex_mpol(mm, address);
+
+	if (flags & FLAGS_NUMA) {
+		u32 __user *naddr = uaddr + size / 2;
+
+		if (node == FUTEX_NO_NODE)
+			node = numa_node_id();
+		if (futex_put_value(node, naddr))
+			return -EFAULT;
+	}
+
+	key->both.node = node;
+
 	/*
 	 * PROCESS_PRIVATE futexes are fast.
 	 * As the mm cannot disappear under us and the 'key' only needs
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 11c870a92b5d0..52e9c0c4b6c87 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -39,6 +39,7 @@
 #define FLAGS_HAS_TIMEOUT	0x0040
 #define FLAGS_NUMA		0x0080
 #define FLAGS_STRICT		0x0100
+#define FLAGS_MPOL		0x0200

 /* FUTEX_ to FLAGS_ */
 static inline unsigned int futex_to_flags(unsigned int op)
@@ -67,6 +68,9 @@ static inline unsigned int futex2_to_flags(unsigned int flags2)
 	if (flags2 & FUTEX2_NUMA)
 		flags |= FLAGS_NUMA;

+	if (flags2 & FUTEX2_MPOL)
+		flags |= FLAGS_MPOL;
+
 	return flags;
 }
-- 
2.47.2