From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: John Stultz, Ingo Molnar, Sasha Levin, peterz@infradead.org, mingo@redhat.com, will@kernel.org
Subject: [PATCH AUTOSEL 4.14] locking/ww_mutex/test: Fix potential workqueue corruption
Date: Mon, 6 Nov 2023 18:07:29 -0500
Message-ID: <20231106230729.3734685-1-sashal@kernel.org>
X-stable: review
X-stable-base: Linux 4.14.328

From: John Stultz

[ Upstream commit bccdd808902f8c677317cec47c306e42b93b849e ]

In some cases when running the test-ww_mutex code, I was seeing odd
behavior where flush_workqueue() sometimes seemed to return before all
of the work threads were finished. This often caused strange crashes,
as the mutexes would be freed while they were still in use.

Looking at the code, there is a lifetime problem: the controlling
thread that spawns the work allocates the "struct stress" structures
that are passed to the workqueue threads, and the workqueue threads
free the stress struct they were passed when they finish.
Unfortunately, the workqueue work_struct node is embedded in the
stress struct, which means the work_struct is freed before the work
thread returns and while flush_workqueue() is still waiting on it.

It seems like a better idea to have the controlling thread both
allocate and free the stress structures, so that we can be sure we
don't corrupt the workqueue by freeing the structure prematurely. This
patch reworks the test to do so, and with this change I no longer see
flush_workqueue() returning early. (An illustrative sketch of the
lifetime hazard follows the patch below.)
Signed-off-by: John Stultz
Signed-off-by: Ingo Molnar
Link: https://lore.kernel.org/r/20230922043616.19282-3-jstultz@google.com
Signed-off-by: Sasha Levin
---
 kernel/locking/test-ww_mutex.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 654977862b06b..8489a01f943e8 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -439,7 +439,6 @@ static void stress_inorder_work(struct work_struct *work)
 	} while (!time_after(jiffies, stress->timeout));
 
 	kfree(order);
-	kfree(stress);
 }
 
 struct reorder_lock {
@@ -504,7 +503,6 @@ static void stress_reorder_work(struct work_struct *work)
 	list_for_each_entry_safe(ll, ln, &locks, link)
 		kfree(ll);
 	kfree(order);
-	kfree(stress);
 }
 
 static void stress_one_work(struct work_struct *work)
@@ -525,8 +523,6 @@ static void stress_one_work(struct work_struct *work)
 			break;
 		}
 	} while (!time_after(jiffies, stress->timeout));
-
-	kfree(stress);
 }
 
 #define STRESS_INORDER BIT(0)
@@ -537,15 +533,24 @@ static void stress_one_work(struct work_struct *work)
 static int stress(int nlocks, int nthreads, unsigned int flags)
 {
 	struct ww_mutex *locks;
-	int n;
+	struct stress *stress_array;
+	int n, count;
 
 	locks = kmalloc_array(nlocks, sizeof(*locks), GFP_KERNEL);
 	if (!locks)
 		return -ENOMEM;
 
+	stress_array = kmalloc_array(nthreads, sizeof(*stress_array),
+				     GFP_KERNEL);
+	if (!stress_array) {
+		kfree(locks);
+		return -ENOMEM;
+	}
+
 	for (n = 0; n < nlocks; n++)
 		ww_mutex_init(&locks[n], &ww_class);
 
+	count = 0;
 	for (n = 0; nthreads; n++) {
 		struct stress *stress;
 		void (*fn)(struct work_struct *work);
@@ -569,9 +574,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
 		if (!fn)
 			continue;
 
-		stress = kmalloc(sizeof(*stress), GFP_KERNEL);
-		if (!stress)
-			break;
+		stress = &stress_array[count++];
 
 		INIT_WORK(&stress->work, fn);
 		stress->locks = locks;
@@ -586,6 +589,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
 
 	for (n = 0; n < nlocks; n++)
 		ww_mutex_destroy(&locks[n]);
+	kfree(stress_array);
 	kfree(locks);
 
 	return 0;
-- 
2.42.0
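
For readers outside the patch context, here is a minimal sketch of the
lifetime hazard the commit message describes. It is not part of the
patch; the names (stress_like, buggy_work_fn, run_buggy_pattern) are
made up for illustration. The work handler frees the structure that
embeds its own work_struct, mirroring what the stress_*_work() handlers
did before this change, which the commit reports can corrupt the
workqueue and let flush_workqueue() return early:

#include <linux/slab.h>
#include <linux/workqueue.h>

/* Illustrative only: a stand-in for the test's "struct stress". */
struct stress_like {
	struct work_struct work;	/* embedded in the heap allocation below */
	unsigned long timeout;
};

static void buggy_work_fn(struct work_struct *work)
{
	struct stress_like *s = container_of(work, struct stress_like, work);

	/* ... do the actual stress-test work ... */

	kfree(s);	/* frees the work item the controller is still flushing */
}

static int run_buggy_pattern(struct workqueue_struct *wq)
{
	struct stress_like *s = kmalloc(sizeof(*s), GFP_KERNEL);

	if (!s)
		return -ENOMEM;

	INIT_WORK(&s->work, buggy_work_fn);
	queue_work(wq, &s->work);

	/*
	 * Per the commit message, this was seen returning before the
	 * workers were done, because the embedded work_struct above can
	 * be freed out from under the workqueue by buggy_work_fn().
	 */
	flush_workqueue(wq);
	return 0;
}

The patch sidesteps this by giving the controlling thread ownership of
the allocations: stress() hands out entries from a single stress_array
and frees that array only after flush_workqueue() has returned, so
every embedded work_struct stays valid for as long as the workqueue may
still reference it.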