From nobody Sat Feb 7 15:21:40 2026
Date: Tue, 8 Aug 2023 06:26:41 +0000
In-Reply-To: <20230808062658.391595-1-jstultz@google.com>
References: <20230808062658.391595-1-jstultz@google.com>
Message-ID: <20230808062658.391595-2-jstultz@google.com>
Subject: [RFC][PATCH 1/3] test-ww_mutex: Use prng instead of rng to avoid hangs at bootup
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, "Paul E. McKenney", Joel Fernandes, Li Zhijian,
    Dietmar Eggemann, kernel-team@android.com

Booting with qemu without kvm, I noticed we would sometimes get stuck
in get_random_u32_below(). This looks like it may be entropy
exhaustion: with the test module linked statically, it runs quite
early in bootup, before much entropy has accumulated. I'm not 100%
sure of the root cause, but this patch switches the test to the prng
instead, since we don't need true randomness here, just mixed-up
orders for testing ww_mutex lock acquisition.

With this patch applied, I no longer see hangs in
get_random_u32_below().

Feedback would be appreciated!

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Joel Fernandes
Cc: Li Zhijian
Cc: Dietmar Eggemann
Cc: kernel-team@android.com
Signed-off-by: John Stultz
---
 kernel/locking/test-ww_mutex.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 93cca6e69860..9bceba65858a 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -9,7 +9,7 @@
 #include <linux/delay.h>
 #include <linux/kthread.h>
 #include <linux/module.h>
-#include <linux/random.h>
+#include <linux/prandom.h>
 #include <linux/slab.h>
 #include <linux/ww_mutex.h>
 
@@ -386,6 +386,19 @@ struct stress {
 	int nlocks;
 };
 
+struct rnd_state rng;
+DEFINE_SPINLOCK(rng_lock);
+
+static inline u32 prandom_u32_below(u32 ceil)
+{
+	u32 ret;
+
+	spin_lock(&rng_lock);
+	ret = prandom_u32_state(&rng) % ceil;
+	spin_unlock(&rng_lock);
+	return ret;
+}
+
 static int *get_random_order(int count)
 {
 	int *order;
@@ -399,7 +412,7 @@ static int *get_random_order(int count)
 		order[n] = n;
 
 	for (n = count - 1; n > 1; n--) {
-		r = get_random_u32_below(n + 1);
+		r = prandom_u32_below(n + 1);
 		if (r != n) {
 			tmp = order[n];
 			order[n] = order[r];
@@ -625,6 +638,8 @@ static int __init test_ww_mutex_init(void)
 
 	printk(KERN_INFO "Beginning ww mutex selftests\n");
 
+	prandom_seed_state(&rng, get_random_u64());
+
 	wq = alloc_workqueue("test-ww_mutex", WQ_UNBOUND, 0);
 	if (!wq)
 		return -ENOMEM;
-- 
2.41.0.640.ga95def55d0-goog
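
A note on why a pseudo-random generator is enough here:
get_random_u32_below() draws from the kernel's entropy-backed RNG,
which the commit message suspects can stall very early in boot, while
prandom_u32_state() is a deterministic generator that needs only a
one-time seed. Below is a minimal userspace C sketch of the same
idea; it is illustrative only, with rand_r() standing in for
prandom_u32_state() and the loop mirroring the shuffle in
get_random_order():

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

/*
 * Shuffle order[0..count-1] with a deterministic PRNG. Neither
 * rand_r() nor the kernel prng is cryptographically secure; that is
 * fine, since the stress test only needs mixed-up lock orders, not
 * unpredictability.
 */
static void shuffle(int *order, int count, unsigned int *seed)
{
	for (int n = count - 1; n > 1; n--) {
		int r = rand_r(seed) % (n + 1);

		if (r != n) {
			int tmp = order[n];

			order[n] = order[r];
			order[r] = tmp;
		}
	}
}

int main(void)
{
	unsigned int seed = 12345;	/* one-time seed, cf. prandom_seed_state() */
	int order[8];

	for (int n = 0; n < 8; n++)
		order[n] = n;
	shuffle(order, 8, &seed);
	for (int n = 0; n < 8; n++)
		printf("%d ", order[n]);
	printf("\n");
	return 0;
}

In the patch the seed still comes from get_random_u64(), but only
once at module init rather than on every shuffle step.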
From nobody Sat Feb 7 15:21:40 2026
Date: Tue, 8 Aug 2023 06:26:42 +0000
In-Reply-To: <20230808062658.391595-1-jstultz@google.com>
References: <20230808062658.391595-1-jstultz@google.com>
Message-ID: <20230808062658.391595-3-jstultz@google.com>
Subject: [RFC][PATCH 2/3] test-ww_mutex: Fix potential workqueue corruption
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, "Paul E. McKenney", Joel Fernandes, Li Zhijian,
    Dietmar Eggemann, kernel-team@android.com

In some cases when running the test-ww_mutex code, I was seeing odd
behavior where flush_workqueue would sometimes return before all the
work threads had finished. This often caused strange crashes, as the
mutexes would be freed while still in use.

Looking at the code, there is a lifetime problem: the controlling
thread that spawns the work allocates the "struct stress" structures
passed to the workqueue threads, but each workqueue thread frees the
stress struct passed to it when it finishes. Since the workqueue
work_struct node is embedded in the stress struct, the work_struct is
freed before the work thread returns, while flush_workqueue is still
waiting on it.

It seems a better approach to have the controlling thread both
allocate and free the stress structures, so we can be sure we don't
corrupt the workqueue by freeing the structure prematurely. This
patch reworks the test to do so, and with this change I no longer see
the early flush_workqueue returns.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Joel Fernandes
Cc: Li Zhijian
Cc: Dietmar Eggemann
Cc: kernel-team@android.com
Signed-off-by: John Stultz
---
 kernel/locking/test-ww_mutex.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 9bceba65858a..358d66150426 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -479,7 +479,6 @@ static void stress_inorder_work(struct work_struct *work)
 	} while (!time_after(jiffies, stress->timeout));
 
 	kfree(order);
-	kfree(stress);
 }
 
 struct reorder_lock {
@@ -544,7 +543,6 @@ static void stress_reorder_work(struct work_struct *work)
 	list_for_each_entry_safe(ll, ln, &locks, link)
 		kfree(ll);
 	kfree(order);
-	kfree(stress);
 }
 
 static void stress_one_work(struct work_struct *work)
@@ -565,8 +563,6 @@ static void stress_one_work(struct work_struct *work)
 			break;
 		}
 	} while (!time_after(jiffies, stress->timeout));
-
-	kfree(stress);
 }
 
 #define STRESS_INORDER BIT(0)
@@ -577,15 +573,24 @@ static void stress_one_work(struct work_struct *work)
 static int stress(int nlocks, int nthreads, unsigned int flags)
 {
 	struct ww_mutex *locks;
-	int n;
+	struct stress *stress_array;
+	int n, count;
 
 	locks = kmalloc_array(nlocks, sizeof(*locks), GFP_KERNEL);
 	if (!locks)
 		return -ENOMEM;
 
+	stress_array = kmalloc_array(nthreads, sizeof(*stress_array),
+				     GFP_KERNEL);
+	if (!stress_array) {
+		kfree(locks);
+		return -ENOMEM;
+	}
+
 	for (n = 0; n < nlocks; n++)
 		ww_mutex_init(&locks[n], &ww_class);
 
+	count = 0;
 	for (n = 0; nthreads; n++) {
 		struct stress *stress;
 		void (*fn)(struct work_struct *work);
@@ -609,9 +614,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
 		if (!fn)
 			continue;
 
-		stress = kmalloc(sizeof(*stress), GFP_KERNEL);
-		if (!stress)
-			break;
+		stress = &stress_array[count++];
 
 		INIT_WORK(&stress->work, fn);
 		stress->locks = locks;
@@ -626,6 +629,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
 
 	for (n = 0; n < nlocks; n++)
 		ww_mutex_destroy(&locks[n]);
+	kfree(stress_array);
 	kfree(locks);
 
 	return 0;
-- 
2.41.0.640.ga95def55d0-goog
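
The ownership rule the patch adopts, allocator frees and the worker
never frees its own context, can be illustrated outside the kernel.
Below is a minimal userspace pthread sketch of the same pattern; it
is illustrative only, with pthread_join() playing the role of
flush_workqueue() and the made-up struct task standing in for struct
stress:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Analogue of "struct stress": the synchronization handle (here a
 * pthread_t, in the kernel a work_struct) is embedded in the
 * per-worker context. If the worker freed its own context, the
 * handle would be destroyed while the controller was still waiting
 * on it.
 */
struct task {
	pthread_t thread;	/* embedded handle, like work_struct */
	int id;
};

static void *worker(void *arg)
{
	struct task *t = arg;

	printf("worker %d running\n", t->id);
	/* Deliberately no free(t): the controller owns the memory. */
	return NULL;
}

int main(void)
{
	int nthreads = 4;
	/* The controller allocates the whole array up front... */
	struct task *tasks = calloc(nthreads, sizeof(*tasks));

	if (!tasks)
		return 1;

	for (int n = 0; n < nthreads; n++) {
		tasks[n].id = n;
		if (pthread_create(&tasks[n].thread, NULL, worker, &tasks[n]))
			return 1;
	}
	for (int n = 0; n < nthreads; n++)
		pthread_join(tasks[n].thread, NULL);

	/* ...and frees it only after every worker has finished. */
	free(tasks);
	return 0;
}

Batch allocation also removes the per-worker kmalloc() failure path
from the middle of the spawn loop, which is why the patch can drop
the "if (!stress) break;" bailout.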
From nobody Sat Feb 7 15:21:40 2026
Date: Tue, 8 Aug 2023 06:26:43 +0000
In-Reply-To: <20230808062658.391595-1-jstultz@google.com>
References: <20230808062658.391595-1-jstultz@google.com>
Message-ID: <20230808062658.391595-4-jstultz@google.com>
Subject: [RFC][PATCH 3/3] test-ww_mutex: Make sure we bail out instead of livelock
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, "Paul E. McKenney", Joel Fernandes, Li Zhijian,
    Dietmar Eggemann, kernel-team@android.com

I've seen what appear to be livelocks in the stress_inorder_work()
function. Looking at the code, it is clear we can hit a case where we
continually retry acquiring the locks and never check whether we have
passed the specified timeout.

This patch reworks that function so we always check the timeout
before iterating through the loop again.

I believe others may have hit this previously:
  https://lore.kernel.org/lkml/895ef450-4fb3-5d29-a6ad-790657106a5a@intel.com/

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Joel Fernandes
Cc: Li Zhijian
Cc: Dietmar Eggemann
Cc: kernel-team@android.com
Reported-by: Li Zhijian
Link: https://lore.kernel.org/lkml/895ef450-4fb3-5d29-a6ad-790657106a5a@intel.com/
Signed-off-by: John Stultz
---
 kernel/locking/test-ww_mutex.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 358d66150426..78719e1ef1b1 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -465,17 +465,18 @@ static void stress_inorder_work(struct work_struct *work)
 		ww_mutex_unlock(&locks[order[n]]);
 
 		if (err == -EDEADLK) {
-			ww_mutex_lock_slow(&locks[order[contended]], &ctx);
-			goto retry;
+			if (!time_after(jiffies, stress->timeout)) {
+				ww_mutex_lock_slow(&locks[order[contended]], &ctx);
+				goto retry;
+			}
 		}
 
+		ww_acquire_fini(&ctx);
 		if (err) {
 			pr_err_once("stress (%s) failed with %d\n",
 				    __func__, err);
 			break;
 		}
-
-		ww_acquire_fini(&ctx);
 	} while (!time_after(jiffies, stress->timeout));
 
 	kfree(order);
-- 
2.41.0.640.ga95def55d0-goog
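
The shape of the fix generalizes to any backoff-and-retry loop: the
deadline must be re-checked on the retry path itself, not only at the
bottom of the outer loop, or steady contention can keep the loop from
ever reaching that check. Below is a minimal userspace C sketch of
the pattern; it is illustrative only, with the made-up try_acquire()
standing in for the ww_mutex acquire/-EDEADLK/retry sequence:

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/*
 * Stand-in for a lock attempt that may ask the caller to back off
 * and retry, the way ww_mutex_lock() reports -EDEADLK.
 */
static bool try_acquire(int attempt)
{
	return attempt > 5;	/* pretend we succeed on the 6th try */
}

int main(void)
{
	time_t deadline = time(NULL) + 2;	/* cf. jiffies + 2*HZ */
	int attempt = 0;

	for (;;) {
		if (try_acquire(++attempt)) {
			printf("acquired after %d attempts\n", attempt);
			return 0;
		}
		/*
		 * The fix: check the deadline before every retry, so
		 * that continual contention cannot livelock the loop.
		 */
		if (time(NULL) > deadline) {
			printf("timed out after %d attempts\n", attempt);
			return 1;
		}
	}
}

In the patch, the same check guards the ww_mutex_lock_slow()/goto
retry path, and moving ww_acquire_fini() ahead of the error check
ensures the acquire context is torn down no matter which way the loop
exits.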