From: Paolo Abeni
To: linux-kernel@vger.kernel.org
Cc: "Paul E. McKenney", Thomas Gleixner, peterz@infradead.org, netdev@vger.kernel.org, Jakub Kicinski, Jason Xing, Eric Dumazet
Subject: [PATCH] revert: "softirq: Let ksoftirqd do its job"
Date: Mon, 8 May 2023 08:17:44 +0200
Message-Id: <57e66b364f1b6f09c9bc0316742c3b14f4ce83bd.1683526542.git.pabeni@redhat.com>

Due to the mentioned commit, when the ksoftirqd processes take charge
of softirq processing, the system can experience high latencies.

In the past a few workarounds have been implemented for specific
side-effects of the above:

commit 1ff688209e2e ("watchdog: core: make sure the watchdog_worker is not deferred")
commit 8d5755b3f77b ("watchdog: softdog: fire watchdog even if softirqs do not get to run")
commit 217f69743681 ("net: busy-poll: allow preemption in sk_busy_loop()")
commit 3c53776e29f8 ("Mark HI and TASKLET softirq synchronous")

but the latency problem still exists in real-life workloads, see the
link below.

The reverted commit intended to solve a live-lock scenario that can now
be addressed with the NAPI threaded mode, introduced with commit
29863d41bb6e ("net: implement threaded-able napi poll loop support"),
which is nowadays in a pretty stable state.

While a complete solution to put softirq processing under nice resource
control would be preferable, that has proven to be a very hard task. In
the short term, remove the main pain point, and also simplify a bit the
current softirq implementation.
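[Editor's note, not part of the patch: on kernels that include commit 29863d41bb6e, the threaded NAPI mode mentioned above can be toggled per network device via sysfs. A minimal sketch; `eth0` is a placeholder device name:]

```shell
# Enable threaded NAPI for eth0: NAPI polling moves out of softirq
# context into a dedicated "napi/..." kernel thread, which the
# scheduler (nice values, cgroups) can then manage like any other task.
echo 1 > /sys/class/net/eth0/threaded

# Read the current mode back (0 = softirq polling, 1 = threaded).
cat /sys/class/net/eth0/threaded
```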
Note that this change also reverts commit 3c53776e29f8 ("Mark HI and
TASKLET softirq synchronous") and commit 1342d8080f61 ("softirq: Don't
skip softirq execution when softirq thread is parking"), which are
direct follow-ups of the feature commit. A single change is preferred
to avoid known bad intermediate states introduced by a patch series
reverting them individually.

Link: https://lore.kernel.org/netdev/305d7742212cbe98621b16be782b0562f1012cb6.camel@redhat.com/
Signed-off-by: Paolo Abeni
Tested-by: Jason Xing
Reviewed-by: Eric Dumazet
Reviewed-by: Jakub Kicinski
Reviewed-by: Sebastian Andrzej Siewior
---
 kernel/softirq.c | 22 ++--------------------
 1 file changed, 2 insertions(+), 20 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 1b725510dd0f..807b34ccd797 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -80,21 +80,6 @@ static void wakeup_softirqd(void)
 	wake_up_process(tsk);
 }
 
-/*
- * If ksoftirqd is scheduled, we do not want to process pending softirqs
- * right now. Let ksoftirqd handle this at its own rate, to get fairness,
- * unless we're doing some of the synchronous softirqs.
- */
-#define SOFTIRQ_NOW_MASK ((1 << HI_SOFTIRQ) | (1 << TASKLET_SOFTIRQ))
-static bool ksoftirqd_running(unsigned long pending)
-{
-	struct task_struct *tsk = __this_cpu_read(ksoftirqd);
-
-	if (pending & SOFTIRQ_NOW_MASK)
-		return false;
-	return tsk && task_is_running(tsk) && !__kthread_should_park(tsk);
-}
-
 #ifdef CONFIG_TRACE_IRQFLAGS
 DEFINE_PER_CPU(int, hardirqs_enabled);
 DEFINE_PER_CPU(int, hardirq_context);
@@ -236,7 +221,7 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
 		goto out;
 
 	pending = local_softirq_pending();
-	if (!pending || ksoftirqd_running(pending))
+	if (!pending)
 		goto out;
 
 	/*
@@ -432,9 +417,6 @@ static inline bool should_wake_ksoftirqd(void)
 
 static inline void invoke_softirq(void)
 {
-	if (ksoftirqd_running(local_softirq_pending()))
-		return;
-
 	if (!force_irqthreads() || !__this_cpu_read(ksoftirqd)) {
 #ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
 		/*
@@ -468,7 +450,7 @@ asmlinkage __visible void do_softirq(void)
 
 	pending = local_softirq_pending();
 
-	if (pending && !ksoftirqd_running(pending))
+	if (pending)
 		do_softirq_own_stack();
 
 	local_irq_restore(flags);
-- 
2.40.0