Date: Mon, 22 Aug 2022 19:05:01 +0000
In-Reply-To: <20220822190501.2171100-1-jstultz@google.com>
Message-Id: <20220822190501.2171100-3-jstultz@google.com>
References: <20220822190501.2171100-1-jstultz@google.com>
X-Mailer: git-send-email 2.37.1.595.g718a3a8f04-goog
Subject: [RFC][PATCH v2 2/2] softirq: defer softirq processing to ksoftirqd if CPU is busy with RT
From: John Stultz
To: LKML
Cc: Pavankumar Kondeti, John Dias, Connor O'Brien, Rick Yiu, John Kacur,
 Qais Yousef, Chris Redpath, Abhijeet Dharmapurikar, Peter Zijlstra,
 Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
 Steven Rostedt, Thomas Gleixner, kernel-team@android.com,
 Satya Durga Srinivasu Prabhala, J. Avila, John Stultz
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Pavankumar Kondeti

Defer softirq processing to ksoftirqd if an RT task is running or queued
on the current CPU. This complements the RT task placement algorithm,
which tries to find a CPU that is not currently busy with softirqs.
Currently only the NET_TX, NET_RX, BLOCK and TASKLET softirqs are
deferred, as they can potentially run for a long time.

Additionally, this patch stubs out the ksoftirqd_running() logic in the
CONFIG_RT_SOFTIRQ_OPTIMIZATION case, as deferring potentially
long-running softirqs would otherwise cause that logic to keep
shorter-running softirqs from being processed immediately. With the
stub, the potentially long-running softirqs are deferred, but the
shorter-running ones still run immediately.

This patch includes folded-in fixes by:
Lingutla Chandrasekhar
Satya Durga Srinivasu Prabhala
J. Avila

Cc: John Dias
Cc: Connor O'Brien
Cc: Rick Yiu
Cc: John Kacur
Cc: Qais Yousef
Cc: Chris Redpath
Cc: Abhijeet Dharmapurikar
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Steven Rostedt
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Pavankumar Kondeti
[satyap@codeaurora.org: trivial merge conflict resolution.]
Signed-off-by: Satya Durga Srinivasu Prabhala
[elavila: Port to mainline, squash with bugfix]
Signed-off-by: J. Avila
[jstultz: Rebase to linus/HEAD, minor rearranging of code, included bug fix
 Reported-by: Qais Yousef ]
Signed-off-by: John Stultz
---
 include/linux/sched.h | 10 ++++++++++
 kernel/sched/cpupri.c | 13 +++++++++++++
 kernel/softirq.c      | 25 +++++++++++++++++++++++--
 3 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e7b2f8a5c711..7f76371cbbb0 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1826,6 +1826,16 @@ current_restore_flags(unsigned long orig_flags, unsigned long flags)
 
 extern int cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
 extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effective_cpus);
+
+#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
+extern bool cpupri_check_rt(void);
+#else
+static inline bool cpupri_check_rt(void)
+{
+	return false;
+}
+#endif
+
 #ifdef CONFIG_SMP
 extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
 extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask);
diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
index fa9ce9d83683..18dc75d16951 100644
--- a/kernel/sched/cpupri.c
+++ b/kernel/sched/cpupri.c
@@ -64,6 +64,19 @@ static int convert_prio(int prio)
 	return cpupri;
 }
 
+#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
+/*
+ * cpupri_check_rt - check if CPU has an RT task
+ * should be called from rcu-sched read section.
+ */
+bool cpupri_check_rt(void)
+{
+	int cpu = raw_smp_processor_id();
+
+	return cpu_rq(cpu)->rd->cpupri.cpu_to_pri[cpu] > CPUPRI_NORMAL;
+}
+#endif
+
 static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
 				struct cpumask *lowest_mask, int idx)
 {
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 35ee79dd8786..203a70dc9459 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -87,6 +87,7 @@ static void wakeup_softirqd(void)
 	wake_up_process(tsk);
 }
 
+#ifndef CONFIG_RT_SOFTIRQ_OPTIMIZATION
 /*
  * If ksoftirqd is scheduled, we do not want to process pending softirqs
  * right now. Let ksoftirqd handle this at its own rate, to get fairness,
@@ -101,6 +102,9 @@ static bool ksoftirqd_running(unsigned long pending)
 		return false;
 	return tsk && task_is_running(tsk) && !__kthread_should_park(tsk);
 }
+#else
+#define ksoftirqd_running(pending) (false)
+#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
 
 #ifdef CONFIG_TRACE_IRQFLAGS
 DEFINE_PER_CPU(int, hardirqs_enabled);
@@ -532,6 +536,17 @@ static inline bool lockdep_softirq_start(void) { return false; }
 static inline void lockdep_softirq_end(bool in_hardirq) { }
 #endif
 
+static __u32 softirq_deferred_for_rt(__u32 *pending)
+{
+	__u32 deferred = 0;
+
+	if (cpupri_check_rt()) {
+		deferred = *pending & LONG_SOFTIRQ_MASK;
+		*pending &= ~LONG_SOFTIRQ_MASK;
+	}
+	return deferred;
+}
+
 asmlinkage __visible void __softirq_entry __do_softirq(void)
 {
 	unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
@@ -539,6 +554,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 	int max_restart = MAX_SOFTIRQ_RESTART;
 	struct softirq_action *h;
 	bool in_hardirq;
+	__u32 deferred;
 	__u32 pending;
 	int softirq_bit;
 
@@ -551,13 +567,15 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 
 	pending = local_softirq_pending();
 
+	deferred = softirq_deferred_for_rt(&pending);
 	softirq_handle_begin();
+
 	in_hardirq = lockdep_softirq_start();
 	account_softirq_enter(current);
 
 restart:
 	/* Reset the pending bitmask before enabling irqs */
-	set_softirq_pending(0);
+	set_softirq_pending(deferred);
 	__this_cpu_write(active_softirqs, pending);
 
 	local_irq_enable();
@@ -596,13 +614,16 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 	local_irq_disable();
 
 	pending = local_softirq_pending();
+	deferred = softirq_deferred_for_rt(&pending);
+
 	if (pending) {
 		if (time_before(jiffies, end) && !need_resched() &&
 		    --max_restart)
 			goto restart;
+	}
 
+	if (pending | deferred)
 		wakeup_softirqd();
-	}
 
 	account_softirq_exit(current);
 	lockdep_softirq_end(in_hardirq);
-- 
2.37.1.595.g718a3a8f04-goog