From nobody Sat Feb 7 17:54:59 2026
Date: Fri, 22 Sep 2023 04:35:59 +0000
In-Reply-To: <20230922043616.19282-1-jstultz@google.com>
References: <20230922043616.19282-1-jstultz@google.com>
Message-ID: <20230922043616.19282-2-jstultz@google.com>
Subject: [PATCH 1/3] test-ww_mutex: Use prng instead of rng to avoid hangs at bootup
From: John Stultz
To: LKML
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, "Paul E. McKenney", Joel Fernandes, Dietmar Eggemann,
    kernel-team@android.com
List-ID: linux-kernel@vger.kernel.org

Booting with qemu without kvm, and with 64 cpus, I noticed we would
sometimes see hung task watchdog splats in get_random_u32_below() when
running the test-ww_mutex stress test. While entropy exhaustion is no
longer an issue, the RNG may be slower early in boot.
The test-ww_mutex code will spawn off 128 threads (2x the number of
cpus), and each thread will call get_random_u32_below() a number of
times to generate a random ordering of the 16 locks. This intense use
takes time, and without kvm, qemu can be slow enough that we trip the
hung task watchdogs. For this test we don't need true randomness, just
mixed-up lock orders for testing ww_mutex acquisitions, so this patch
changes the logic to use the prng instead, which takes less time and
avoids the watchdogs.

Feedback would be appreciated!

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Joel Fernandes
Cc: Dietmar Eggemann
Cc: kernel-team@android.com
Signed-off-by: John Stultz
---
 kernel/locking/test-ww_mutex.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 93cca6e69860..9bceba65858a 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -9,7 +9,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include

@@ -386,6 +386,19 @@ struct stress {
 	int nlocks;
 };

+struct rnd_state rng;
+DEFINE_SPINLOCK(rng_lock);
+
+static inline u32 prandom_u32_below(u32 ceil)
+{
+	u32 ret;
+
+	spin_lock(&rng_lock);
+	ret = prandom_u32_state(&rng) % ceil;
+	spin_unlock(&rng_lock);
+	return ret;
+}
+
 static int *get_random_order(int count)
 {
 	int *order;
@@ -399,7 +412,7 @@ static int *get_random_order(int count)
 		order[n] = n;

 	for (n = count - 1; n > 1; n--) {
-		r = get_random_u32_below(n + 1);
+		r = prandom_u32_below(n + 1);
 		if (r != n) {
 			tmp = order[n];
 			order[n] = order[r];
@@ -625,6 +638,8 @@ static int __init test_ww_mutex_init(void)

 	printk(KERN_INFO "Beginning ww mutex selftests\n");

+	prandom_seed_state(&rng, get_random_u64());
+
 	wq = alloc_workqueue("test-ww_mutex", WQ_UNBOUND, 0);
 	if (!wq)
 		return -ENOMEM;
-- 
2.42.0.515.g380fc7ccd1-goog

From nobody Sat Feb 7 17:54:59 2026
Date: Fri, 22 Sep 2023 04:36:00 +0000
In-Reply-To: <20230922043616.19282-1-jstultz@google.com>
References: <20230922043616.19282-1-jstultz@google.com>
Message-ID: <20230922043616.19282-3-jstultz@google.com>
Subject: [PATCH 2/3] test-ww_mutex: Fix potential workqueue corruption
From: John Stultz
To: LKML
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, "Paul E. McKenney", Joel Fernandes, Dietmar Eggemann,
    kernel-team@android.com
List-ID: linux-kernel@vger.kernel.org

In some cases when running the test-ww_mutex code, I was seeing odd
behavior where flush_workqueue would sometimes return before all the
work threads were finished. Often this would cause strange crashes, as
the mutexes would be freed while they were still being used.
Looking at the code, there is a lifetime problem: the controlling
thread that spawns the work allocates the "struct stress" structures
that are passed to the workqueue threads, and when the workqueue
threads finish, they free the stress struct that was passed to them.
Unfortunately the workqueue work_struct node is embedded in the stress
struct, which means the work_struct is freed before the work thread
returns and while flush_workqueue is still waiting.

It seems like a better idea to have the controlling thread both
allocate and free the stress structures, so that we can be sure we
don't corrupt the workqueue by freeing the structure prematurely. This
patch reworks the test to do so, and with this change I no longer see
the early flush_workqueue returns.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Joel Fernandes
Cc: Dietmar Eggemann
Cc: kernel-team@android.com
Signed-off-by: John Stultz
---
 kernel/locking/test-ww_mutex.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 9bceba65858a..358d66150426 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -479,7 +479,6 @@ static void stress_inorder_work(struct work_struct *work)
 	} while (!time_after(jiffies, stress->timeout));

 	kfree(order);
-	kfree(stress);
 }

 struct reorder_lock {
@@ -544,7 +543,6 @@ static void stress_reorder_work(struct work_struct *work)
 	list_for_each_entry_safe(ll, ln, &locks, link)
 		kfree(ll);
 	kfree(order);
-	kfree(stress);
 }

 static void stress_one_work(struct work_struct *work)
@@ -565,8 +563,6 @@ static void stress_one_work(struct work_struct *work)
 			break;
 		}
 	} while (!time_after(jiffies, stress->timeout));
-
-	kfree(stress);
 }

 #define STRESS_INORDER BIT(0)
@@ -577,15 +573,24 @@ static void stress_one_work(struct work_struct *work)
 static int stress(int nlocks, int nthreads, unsigned int flags)
 {
 	struct ww_mutex *locks;
-	int n;
+	struct stress *stress_array;
+	int n, count;

 	locks = kmalloc_array(nlocks, sizeof(*locks), GFP_KERNEL);
 	if (!locks)
 		return -ENOMEM;

+	stress_array = kmalloc_array(nthreads, sizeof(*stress_array),
+				     GFP_KERNEL);
+	if (!stress_array) {
+		kfree(locks);
+		return -ENOMEM;
+	}
+
 	for (n = 0; n < nlocks; n++)
 		ww_mutex_init(&locks[n], &ww_class);

+	count = 0;
 	for (n = 0; n < nthreads; n++) {
 		struct stress *stress;
 		void (*fn)(struct work_struct *work);
@@ -609,9 +614,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
 		if (!fn)
 			continue;

-		stress = kmalloc(sizeof(*stress), GFP_KERNEL);
-		if (!stress)
-			break;
+		stress = &stress_array[count++];

 		INIT_WORK(&stress->work, fn);
 		stress->locks = locks;
@@ -626,6 +629,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)

 	for (n = 0; n < nlocks; n++)
 		ww_mutex_destroy(&locks[n]);
+	kfree(stress_array);
 	kfree(locks);

 	return 0;
-- 
2.42.0.515.g380fc7ccd1-goog

From nobody Sat Feb 7 17:54:59 2026
Date: Fri, 22 Sep 2023 04:36:01 +0000
In-Reply-To: <20230922043616.19282-1-jstultz@google.com>
References: <20230922043616.19282-1-jstultz@google.com>
Message-ID: <20230922043616.19282-4-jstultz@google.com>
Subject: [PATCH 3/3] test-ww_mutex: Make sure we bail out instead of livelock
From: John Stultz
To: LKML
Cc: John Stultz, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, "Paul E. McKenney", Joel Fernandes, Dietmar Eggemann,
    kernel-team@android.com, Li Zhijian
List-ID: linux-kernel@vger.kernel.org

I've seen what appear to be livelocks in the stress_inorder_work()
function, and looking at the code it is clear we can hit a case where
we continually retry acquiring the locks and never check whether we
have passed the specified timeout.

This patch reworks that function so we always check the timeout before
iterating through the loop again. I believe others may have hit this
previously:
https://lore.kernel.org/lkml/895ef450-4fb3-5d29-a6ad-790657106a5a@intel.com/

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Joel Fernandes
Cc: Dietmar Eggemann
Cc: kernel-team@android.com
Reported-by: Li Zhijian
Link: https://lore.kernel.org/lkml/895ef450-4fb3-5d29-a6ad-790657106a5a@intel.com/
Signed-off-by: John Stultz
---
 kernel/locking/test-ww_mutex.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 358d66150426..78719e1ef1b1 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -465,17 +465,18 @@ static void stress_inorder_work(struct work_struct *work)
 			ww_mutex_unlock(&locks[order[n]]);

 		if (err == -EDEADLK) {
-			ww_mutex_lock_slow(&locks[order[contended]], &ctx);
-			goto retry;
+			if (!time_after(jiffies, stress->timeout)) {
+				ww_mutex_lock_slow(&locks[order[contended]], &ctx);
+				goto retry;
+			}
 		}

+		ww_acquire_fini(&ctx);
 		if (err) {
 			pr_err_once("stress (%s) failed with %d\n",
 				    __func__, err);
 			break;
 		}
-
-		ww_acquire_fini(&ctx);
 	} while (!time_after(jiffies, stress->timeout));

 	kfree(order);
-- 
2.42.0.515.g380fc7ccd1-goog