Date: Sun, 28 May 2023 08:46:52 -0400
From: Steven Rostedt
To: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org
Cc: Masami Hiramatsu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Peter Zijlstra
Subject: [PATCH] x86/alternatives: Add cond_resched() to text_poke_bp_batch()
Message-ID: <20230528084652.5f3b48f0@rorschach.local.home>

From: "Steven Rostedt (Google)"

Kernel debugging options have started slowing the kernel down by a
noticeable amount, and the ftrace startup tests now trigger the
softlockup watchdog on some boxes. The startup tests enable function
and function graph tracing several times in a row. Sprinkling
cond_resched() into the startup test code alone was not enough to stop
the softlockup from triggering; it would sometimes fire from within
text_poke_bp_batch().

text_poke_bp_batch() runs in schedulable context, so add a
cond_resched() between each of its phases (adding the int3
breakpoints, updating the instructions, and removing the int3
breakpoints). This keeps the softlockup from triggering during the
startup tests.

Signed-off-by: Steven Rostedt (Google)
---
 arch/x86/kernel/alternative.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)
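For reference, all three phases loop over the whole batch, and function
tracing can queue thousands of entries at once, so each phase can run
long enough to trip the softlockup detector when heavy debug options
are enabled. The sketch below only illustrates the pattern of the fix;
do_phase() and NR_POKE_PHASES are hypothetical stand-ins, not the real
alternative.c code:

	/* Sketch only: do_phase() and NR_POKE_PHASES are hypothetical. */
	static void poke_batch_sketch(struct text_poke_loc *tp,
				      unsigned int nr_entries)
	{
		int phase;

		for (phase = 0; phase < NR_POKE_PHASES; phase++) {
			/* May walk and patch thousands of sites. */
			do_phase(tp, nr_entries, phase);
			/* Serialize all CPUs after changing kernel text. */
			text_poke_sync();
			/*
			 * Schedulable context: let other tasks run so
			 * the softlockup watchdog does not fire.
			 */
			cond_resched();
		}
	}

Since a softlockup just means one CPU sat in kernel mode without
scheduling for too long, yielding at each phase boundary is enough;
the patching logic itself is unchanged.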
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index f615e0cb6d93..e024eddd457f 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -1953,6 +1953,14 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
 	 */
 	atomic_set_release(&bp_desc.refs, 1);
 
+	/*
+	 * Function tracing can enable thousands of places that need to be
+	 * updated. This can take quite some time, and with full kernel debugging
+	 * enabled, this could cause the softlockup watchdog to trigger.
+	 * Add cond_resched() calls to each phase.
+	 */
+	cond_resched();
+
 	/*
 	 * Corresponding read barrier in int3 notifier for making sure the
 	 * nr_entries and handler are correctly ordered wrt. patching.
@@ -2030,6 +2038,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
 		 * better safe than sorry (plus there's not only Intel).
 		 */
 		text_poke_sync();
+		cond_resched();
 	}
 
 	/*
@@ -2049,8 +2058,10 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
 		do_sync++;
 	}
 
-	if (do_sync)
+	if (do_sync) {
 		text_poke_sync();
+		cond_resched();
+	}
 
 	/*
 	 * Remove and wait for refs to be zero.
-- 
2.39.2