From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Paul McKenney, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 01/14] rcu: Introduce call_rcu_lazy() API implementation
Date: Fri, 19 Aug 2022 20:48:44 +0000
Message-Id: <20220819204857.3066329-2-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

Implement timer-based RCU lazy callback batching. The batch is flushed
whenever a certain amount of time has passed, or the batch on a
particular CPU grows too big. A future patch will also flush the batch
under memory pressure.

To handle several corner cases automagically (such as rcu_barrier() and
hotplug), we reuse the bypass lists to hold lazy CBs. The bypass list
length includes the lazy CB length.
A separate lazy CB length counter is also introduced to keep track of the number of lazy CBs. Suggested-by: Paul McKenney Signed-off-by: Joel Fernandes (Google) --- include/linux/rcu_segcblist.h | 1 + include/linux/rcupdate.h | 6 + kernel/rcu/Kconfig | 8 ++ kernel/rcu/rcu.h | 11 ++ kernel/rcu/rcu_segcblist.c | 15 ++- kernel/rcu/rcu_segcblist.h | 20 +++- kernel/rcu/tree.c | 130 ++++++++++++++-------- kernel/rcu/tree.h | 10 +- kernel/rcu/tree_nocb.h | 199 ++++++++++++++++++++++++++-------- 9 files changed, 301 insertions(+), 99 deletions(-) diff --git a/include/linux/rcu_segcblist.h b/include/linux/rcu_segcblist.h index 659d13a7ddaa..9a992707917b 100644 --- a/include/linux/rcu_segcblist.h +++ b/include/linux/rcu_segcblist.h @@ -22,6 +22,7 @@ struct rcu_cblist { struct rcu_head *head; struct rcu_head **tail; long len; + long lazy_len; }; =20 #define RCU_CBLIST_INITIALIZER(n) { .head =3D NULL, .tail =3D &n.head } diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 1a32036c918c..9191a3d88087 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -82,6 +82,12 @@ static inline int rcu_preempt_depth(void) =20 #endif /* #else #ifdef CONFIG_PREEMPT_RCU */ =20 +#ifdef CONFIG_RCU_LAZY +void call_rcu_lazy(struct rcu_head *head, rcu_callback_t func); +#else +#define call_rcu_lazy(head, func) call_rcu(head, func) +#endif + /* Internal to kernel */ void rcu_init(void); extern int rcu_scheduler_active; diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig index 27aab870ae4c..779b6e84006b 100644 --- a/kernel/rcu/Kconfig +++ b/kernel/rcu/Kconfig @@ -293,4 +293,12 @@ config TASKS_TRACE_RCU_READ_MB Say N here if you hate read-side memory barriers. Take the default if you are unsure. =20 +config RCU_LAZY + bool "RCU callback lazy invocation functionality" + depends on RCU_NOCB_CPU + default n + help + To save power, batch RCU callbacks and flush after delay, memory + pressure or callback list growing too big. 
+ endmenu # "RCU Subsystem" diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h index 4916077119f3..608f6ab76c7f 100644 --- a/kernel/rcu/rcu.h +++ b/kernel/rcu/rcu.h @@ -463,6 +463,14 @@ enum rcutorture_type { INVALID_RCU_FLAVOR }; =20 +#if defined(CONFIG_RCU_LAZY) +unsigned long rcu_lazy_get_jiffies_till_flush(void); +void rcu_lazy_set_jiffies_till_flush(unsigned long j); +#else +static inline unsigned long rcu_lazy_get_jiffies_till_flush(void) { return= 0; } +static inline void rcu_lazy_set_jiffies_till_flush(unsigned long j) { } +#endif + #if defined(CONFIG_TREE_RCU) void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags, unsigned long *gp_seq); @@ -472,6 +480,8 @@ void do_trace_rcu_torture_read(const char *rcutorturena= me, unsigned long c_old, unsigned long c); void rcu_gp_set_torture_wait(int duration); +void rcu_force_call_rcu_to_lazy(bool force); + #else static inline void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags, unsigned long *gp_seq) @@ -490,6 +500,7 @@ void do_trace_rcu_torture_read(const char *rcutorturena= me, do { } while (0) #endif static inline void rcu_gp_set_torture_wait(int duration) { } +static inline void rcu_force_call_rcu_to_lazy(bool force) { } #endif =20 #if IS_ENABLED(CONFIG_RCU_TORTURE_TEST) || IS_MODULE(CONFIG_RCU_TORTURE_TE= ST) diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c index c54ea2b6a36b..776647cd2d6c 100644 --- a/kernel/rcu/rcu_segcblist.c +++ b/kernel/rcu/rcu_segcblist.c @@ -20,16 +20,21 @@ void rcu_cblist_init(struct rcu_cblist *rclp) rclp->head =3D NULL; rclp->tail =3D &rclp->head; rclp->len =3D 0; + rclp->lazy_len =3D 0; } =20 /* * Enqueue an rcu_head structure onto the specified callback list. 
*/ -void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp) +void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp, + bool lazy) { *rclp->tail =3D rhp; rclp->tail =3D &rhp->next; WRITE_ONCE(rclp->len, rclp->len + 1); + + if (IS_ENABLED(CONFIG_RCU_LAZY) && lazy) + WRITE_ONCE(rclp->lazy_len, rclp->lazy_len + 1); } =20 /* @@ -38,11 +43,12 @@ void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct= rcu_head *rhp) * element of the second rcu_cblist structure, but ensuring that the second * rcu_cblist structure, if initially non-empty, always appears non-empty * throughout the process. If rdp is NULL, the second rcu_cblist structure - * is instead initialized to empty. + * is instead initialized to empty. Also account for lazy_len for lazy CBs. */ void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp, struct rcu_cblist *srclp, - struct rcu_head *rhp) + struct rcu_head *rhp, + bool lazy) { drclp->head =3D srclp->head; if (drclp->head) @@ -58,6 +64,9 @@ void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp, srclp->tail =3D &rhp->next; WRITE_ONCE(srclp->len, 1); } + + if (IS_ENABLED(CONFIG_RCU_LAZY) && rhp && lazy) + WRITE_ONCE(srclp->lazy_len, 1); } =20 /* diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h index 431cee212467..8e90b34adb00 100644 --- a/kernel/rcu/rcu_segcblist.h +++ b/kernel/rcu/rcu_segcblist.h @@ -15,14 +15,30 @@ static inline long rcu_cblist_n_cbs(struct rcu_cblist *= rclp) return READ_ONCE(rclp->len); } =20 +/* Return number of callbacks in the specified callback list. */ +static inline long rcu_cblist_n_lazy_cbs(struct rcu_cblist *rclp) +{ + if (IS_ENABLED(CONFIG_RCU_LAZY)) + return READ_ONCE(rclp->lazy_len); + return 0; +} + +static inline void rcu_cblist_reset_lazy_len(struct rcu_cblist *rclp) +{ + if (IS_ENABLED(CONFIG_RCU_LAZY)) + WRITE_ONCE(rclp->lazy_len, 0); +} + /* Return number of callbacks in segmented callback list by summing seglen= . 
*/ long rcu_segcblist_n_segment_cbs(struct rcu_segcblist *rsclp); =20 void rcu_cblist_init(struct rcu_cblist *rclp); -void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp); +void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp, + bool lazy); void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp, struct rcu_cblist *srclp, - struct rcu_head *rhp); + struct rcu_head *rhp, + bool lazy); struct rcu_head *rcu_cblist_dequeue(struct rcu_cblist *rclp); =20 /* diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index c25ba442044a..e76fef8031be 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3058,47 +3058,8 @@ static void check_cb_ovld(struct rcu_data *rdp) raw_spin_unlock_rcu_node(rnp); } =20 -/** - * call_rcu() - Queue an RCU callback for invocation after a grace period. - * @head: structure to be used for queueing the RCU updates. - * @func: actual callback function to be invoked after the grace period - * - * The callback function will be invoked some time after a full grace - * period elapses, in other words after all pre-existing RCU read-side - * critical sections have completed. However, the callback function - * might well execute concurrently with RCU read-side critical sections - * that started after call_rcu() was invoked. - * - * RCU read-side critical sections are delimited by rcu_read_lock() - * and rcu_read_unlock(), and may be nested. In addition, but only in - * v5.0 and later, regions of code across which interrupts, preemption, - * or softirqs have been disabled also serve as RCU read-side critical - * sections. This includes hardware interrupt handlers, softirq handlers, - * and NMI handlers. - * - * Note that all CPUs must agree that the grace period extended beyond - * all pre-existing RCU read-side critical section. 
On systems with more - * than one CPU, this means that when "func()" is invoked, each CPU is - * guaranteed to have executed a full memory barrier since the end of its - * last RCU read-side critical section whose beginning preceded the call - * to call_rcu(). It also means that each CPU executing an RCU read-side - * critical section that continues beyond the start of "func()" must have - * executed a memory barrier after the call_rcu() but before the beginning - * of that RCU read-side critical section. Note that these guarantees - * include CPUs that are offline, idle, or executing in user mode, as - * well as CPUs that are executing in the kernel. - * - * Furthermore, if CPU A invoked call_rcu() and CPU B invoked the - * resulting RCU callback function "func()", then both CPU A and CPU B are - * guaranteed to execute a full memory barrier during the time interval - * between the call to call_rcu() and the invocation of "func()" -- even - * if CPU A and CPU B are the same CPU (but again only if the system has - * more than one CPU). - * - * Implementation of these memory-ordering guarantees is described here: - * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst. - */ -void call_rcu(struct rcu_head *head, rcu_callback_t func) +static void +__call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy) { static atomic_t doublefrees; unsigned long flags; @@ -3139,7 +3100,7 @@ void call_rcu(struct rcu_head *head, rcu_callback_t f= unc) } =20 check_cb_ovld(rdp); - if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags)) + if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy)) return; // Enqueued onto ->nocb_bypass, so just leave. // If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock. 
rcu_segcblist_enqueue(&rdp->cblist, head); @@ -3161,8 +3122,86 @@ void call_rcu(struct rcu_head *head, rcu_callback_t = func) local_irq_restore(flags); } } -EXPORT_SYMBOL_GPL(call_rcu); =20 +#ifdef CONFIG_RCU_LAZY +/** + * call_rcu_lazy() - Lazily queue RCU callback for invocation after grace = period. + * @head: structure to be used for queueing the RCU updates. + * @func: actual callback function to be invoked after the grace period + * + * The callback function will be invoked some time after a full grace + * period elapses, in other words after all pre-existing RCU read-side + * critical sections have completed. + * + * Use this API instead of call_rcu() if you don't mind the callback being + * invoked after very long periods of time on systems without memory press= ure + * and on systems which are lightly loaded or mostly idle. + * + * Other than the extra delay in callbacks being invoked, this function is + * identical to, and reuses call_rcu()'s logic. Refer to call_rcu() for mo= re + * details about memory ordering and other functionality. + */ +void call_rcu_lazy(struct rcu_head *head, rcu_callback_t func) +{ + return __call_rcu_common(head, func, true); +} +EXPORT_SYMBOL_GPL(call_rcu_lazy); +#endif + +static bool force_call_rcu_to_lazy; + +void rcu_force_call_rcu_to_lazy(bool force) +{ + if (IS_ENABLED(CONFIG_RCU_SCALE_TEST)) + WRITE_ONCE(force_call_rcu_to_lazy, force); +} +EXPORT_SYMBOL_GPL(rcu_force_call_rcu_to_lazy); + +/** + * call_rcu() - Queue an RCU callback for invocation after a grace period. + * @head: structure to be used for queueing the RCU updates. + * @func: actual callback function to be invoked after the grace period + * + * The callback function will be invoked some time after a full grace + * period elapses, in other words after all pre-existing RCU read-side + * critical sections have completed. 
However, the callback function + * might well execute concurrently with RCU read-side critical sections + * that started after call_rcu() was invoked. + * + * RCU read-side critical sections are delimited by rcu_read_lock() + * and rcu_read_unlock(), and may be nested. In addition, but only in + * v5.0 and later, regions of code across which interrupts, preemption, + * or softirqs have been disabled also serve as RCU read-side critical + * sections. This includes hardware interrupt handlers, softirq handlers, + * and NMI handlers. + * + * Note that all CPUs must agree that the grace period extended beyond + * all pre-existing RCU read-side critical section. On systems with more + * than one CPU, this means that when "func()" is invoked, each CPU is + * guaranteed to have executed a full memory barrier since the end of its + * last RCU read-side critical section whose beginning preceded the call + * to call_rcu(). It also means that each CPU executing an RCU read-side + * critical section that continues beyond the start of "func()" must have + * executed a memory barrier after the call_rcu() but before the beginning + * of that RCU read-side critical section. Note that these guarantees + * include CPUs that are offline, idle, or executing in user mode, as + * well as CPUs that are executing in the kernel. + * + * Furthermore, if CPU A invoked call_rcu() and CPU B invoked the + * resulting RCU callback function "func()", then both CPU A and CPU B are + * guaranteed to execute a full memory barrier during the time interval + * between the call to call_rcu() and the invocation of "func()" -- even + * if CPU A and CPU B are the same CPU (but again only if the system has + * more than one CPU). + * + * Implementation of these memory-ordering guarantees is described here: + * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst. 
+ */ +void call_rcu(struct rcu_head *head, rcu_callback_t func) +{ + return __call_rcu_common(head, func, force_call_rcu_to_lazy); +} +EXPORT_SYMBOL_GPL(call_rcu); =20 /* Maximum number of jiffies to wait before draining a batch. */ #define KFREE_DRAIN_JIFFIES (HZ / 50) @@ -4056,7 +4095,8 @@ static void rcu_barrier_entrain(struct rcu_data *rdp) rdp->barrier_head.func =3D rcu_barrier_callback; debug_rcu_head_queue(&rdp->barrier_head); rcu_nocb_lock(rdp); - WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies)); + WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false, + /* wake gp thread */ true)); if (rcu_segcblist_entrain(&rdp->cblist, &rdp->barrier_head)) { atomic_inc(&rcu_state.barrier_cpu_count); } else { @@ -4476,7 +4516,7 @@ void rcutree_migrate_callbacks(int cpu) my_rdp =3D this_cpu_ptr(&rcu_data); my_rnp =3D my_rdp->mynode; rcu_nocb_lock(my_rdp); /* irqs already disabled. */ - WARN_ON_ONCE(!rcu_nocb_flush_bypass(my_rdp, NULL, jiffies)); + WARN_ON_ONCE(!rcu_nocb_flush_bypass(my_rdp, NULL, jiffies, false, false)); raw_spin_lock_rcu_node(my_rnp); /* irqs already disabled. */ /* Leverage recent GPs and set GP for new callbacks. */ needwake =3D rcu_advance_cbs(my_rnp, rdp) || diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 2ccf5845957d..7b1ddee6a159 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -267,8 +267,9 @@ struct rcu_data { /* Values for nocb_defer_wakeup field in struct rcu_data. 
*/ #define RCU_NOCB_WAKE_NOT 0 #define RCU_NOCB_WAKE_BYPASS 1 -#define RCU_NOCB_WAKE 2 -#define RCU_NOCB_WAKE_FORCE 3 +#define RCU_NOCB_WAKE_LAZY 2 +#define RCU_NOCB_WAKE 3 +#define RCU_NOCB_WAKE_FORCE 4 =20 #define RCU_JIFFIES_TILL_FORCE_QS (1 + (HZ > 250) + (HZ > 500)) /* For jiffies_till_first_fqs and */ @@ -436,9 +437,10 @@ static struct swait_queue_head *rcu_nocb_gp_get(struct= rcu_node *rnp); static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq); static void rcu_init_one_nocb(struct rcu_node *rnp); static bool rcu_nocb_flush_bypass(struct rcu_data *rdp, struct rcu_head *r= hp, - unsigned long j); + unsigned long j, bool lazy, bool wakegp); static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp, - bool *was_alldone, unsigned long flags); + bool *was_alldone, unsigned long flags, + bool lazy); static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_empty, unsigned long flags); static int rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp, int level); diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h index e369efe94fda..55636da76bc2 100644 --- a/kernel/rcu/tree_nocb.h +++ b/kernel/rcu/tree_nocb.h @@ -256,6 +256,31 @@ static bool wake_nocb_gp(struct rcu_data *rdp, bool fo= rce) return __wake_nocb_gp(rdp_gp, rdp, force, flags); } =20 +/* + * LAZY_FLUSH_JIFFIES decides the maximum amount of time that + * can elapse before lazy callbacks are flushed. Lazy callbacks + * could be flushed much earlier for a number of other reasons + * however, LAZY_FLUSH_JIFFIES will ensure no lazy callbacks are + * left unsubmitted to RCU after those many jiffies. + */ +#define LAZY_FLUSH_JIFFIES (10 * HZ) +unsigned long jiffies_till_flush =3D LAZY_FLUSH_JIFFIES; + +#ifdef CONFIG_RCU_LAZY +// To be called only from test code. 
+void rcu_lazy_set_jiffies_till_flush(unsigned long jif) +{ + jiffies_till_flush =3D jif; +} +EXPORT_SYMBOL(rcu_lazy_set_jiffies_till_flush); + +unsigned long rcu_lazy_get_jiffies_till_flush(void) +{ + return jiffies_till_flush; +} +EXPORT_SYMBOL(rcu_lazy_get_jiffies_till_flush); +#endif + /* * Arrange to wake the GP kthread for this NOCB group at some future * time when it is safe to do so. @@ -265,6 +290,7 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, in= t waketype, { unsigned long flags; struct rcu_data *rdp_gp =3D rdp->nocb_gp_rdp; + unsigned long mod_jif =3D 0; =20 raw_spin_lock_irqsave(&rdp_gp->nocb_gp_lock, flags); =20 @@ -272,16 +298,32 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, = int waketype, * Bypass wakeup overrides previous deferments. In case * of callback storm, no need to wake up too early. */ - if (waketype =3D=3D RCU_NOCB_WAKE_BYPASS) { - mod_timer(&rdp_gp->nocb_timer, jiffies + 2); - WRITE_ONCE(rdp_gp->nocb_defer_wakeup, waketype); - } else { + switch (waketype) { + case RCU_NOCB_WAKE_LAZY: + if (rdp->nocb_defer_wakeup !=3D RCU_NOCB_WAKE_LAZY) + mod_jif =3D jiffies_till_flush; + break; + + case RCU_NOCB_WAKE_BYPASS: + mod_jif =3D 2; + break; + + case RCU_NOCB_WAKE: + case RCU_NOCB_WAKE_FORCE: + // If the type of deferred wake is "stronger" + // than it was before, make it wake up the soonest. if (rdp_gp->nocb_defer_wakeup < RCU_NOCB_WAKE) - mod_timer(&rdp_gp->nocb_timer, jiffies + 1); - if (rdp_gp->nocb_defer_wakeup < waketype) - WRITE_ONCE(rdp_gp->nocb_defer_wakeup, waketype); + mod_jif =3D 1; + break; } =20 + if (mod_jif) + mod_timer(&rdp_gp->nocb_timer, jiffies + mod_jif); + + // If new type of wake up is stronger than before, promote. 
+ if (rdp_gp->nocb_defer_wakeup < waketype) + WRITE_ONCE(rdp_gp->nocb_defer_wakeup, waketype); + raw_spin_unlock_irqrestore(&rdp_gp->nocb_gp_lock, flags); =20 trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, reason); @@ -296,7 +338,7 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, in= t waketype, * Note that this function always returns true if rhp is NULL. */ static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head= *rhp, - unsigned long j) + unsigned long j, bool lazy) { struct rcu_cblist rcl; =20 @@ -310,7 +352,9 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data *r= dp, struct rcu_head *rhp, /* Note: ->cblist.len already accounts for ->nocb_bypass contents. */ if (rhp) rcu_segcblist_inc_len(&rdp->cblist); /* Must precede enqueue. */ - rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp); + + /* The lazy CBs are being flushed, but a new one might be enqueued. */ + rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp, lazy); rcu_segcblist_insert_pend_cbs(&rdp->cblist, &rcl); WRITE_ONCE(rdp->nocb_bypass_first, j); rcu_nocb_bypass_unlock(rdp); @@ -326,13 +370,20 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data = *rdp, struct rcu_head *rhp, * Note that this function always returns true if rhp is NULL. 
*/ static bool rcu_nocb_flush_bypass(struct rcu_data *rdp, struct rcu_head *r= hp, - unsigned long j) + unsigned long j, bool lazy, bool wake_gp) { + bool ret; + if (!rcu_rdp_is_offloaded(rdp)) return true; rcu_lockdep_assert_cblist_protected(rdp); rcu_nocb_bypass_lock(rdp); - return rcu_nocb_do_flush_bypass(rdp, rhp, j); + ret =3D rcu_nocb_do_flush_bypass(rdp, rhp, j, lazy); + + if (wake_gp) + wake_nocb_gp(rdp, true); + + return ret; } =20 /* @@ -345,7 +396,7 @@ static void rcu_nocb_try_flush_bypass(struct rcu_data *= rdp, unsigned long j) if (!rcu_rdp_is_offloaded(rdp) || !rcu_nocb_bypass_trylock(rdp)) return; - WARN_ON_ONCE(!rcu_nocb_do_flush_bypass(rdp, NULL, j)); + WARN_ON_ONCE(!rcu_nocb_do_flush_bypass(rdp, NULL, j, false)); } =20 /* @@ -367,12 +418,14 @@ static void rcu_nocb_try_flush_bypass(struct rcu_data= *rdp, unsigned long j) * there is only one CPU in operation. */ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp, - bool *was_alldone, unsigned long flags) + bool *was_alldone, unsigned long flags, + bool lazy) { unsigned long c; unsigned long cur_gp_seq; unsigned long j =3D jiffies; long ncbs =3D rcu_cblist_n_cbs(&rdp->nocb_bypass); + long n_lazy_cbs =3D rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass); =20 lockdep_assert_irqs_disabled(); =20 @@ -414,30 +467,47 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp,= struct rcu_head *rhp, } WRITE_ONCE(rdp->nocb_nobypass_count, c); =20 - // If there hasn't yet been all that many ->cblist enqueues - // this jiffy, tell the caller to enqueue onto ->cblist. But flush - // ->nocb_bypass first. - if (rdp->nocb_nobypass_count < nocb_nobypass_lim_per_jiffy) { + // If caller passed a non-lazy CB and there hasn't yet been all that + // many ->cblist enqueues this jiffy, tell the caller to enqueue it + // onto ->cblist. But flush ->nocb_bypass first. 
Also do so, if total + // number of CBs (lazy + non-lazy) grows too much, or there were lazy + // CBs previously queued and the current one is non-lazy. + // + // Note that if the bypass list has lazy CBs, and the main list is + // empty, and rhp happens to be non-lazy, then we end up flushing all + // the lazy CBs to the main list as well. That's the right thing to do, + // since we are kick-starting RCU GP processing anyway for the non-lazy + // one, we can just reuse that GP for the already queued-up lazy ones. + if ((rdp->nocb_nobypass_count < nocb_nobypass_lim_per_jiffy && !lazy) || + (!lazy && n_lazy_cbs) || + (lazy && n_lazy_cbs >=3D qhimark)) { rcu_nocb_lock(rdp); - *was_alldone =3D !rcu_segcblist_pend_cbs(&rdp->cblist); + + // This variable helps decide if a wakeup of the rcuog thread + // is needed. It is passed to __call_rcu_nocb_wake() by the + // caller. If only lazy CBs were previously queued and this one + // is non-lazy, make sure the caller does a wake up. + *was_alldone =3D !rcu_segcblist_pend_cbs(&rdp->cblist) || + (!lazy && n_lazy_cbs); + if (*was_alldone) trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, - TPS("FirstQ")); - WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, j)); + lazy ? TPS("FirstLazyQ") : TPS("FirstQ")); + WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, j, lazy, false)); WARN_ON_ONCE(rcu_cblist_n_cbs(&rdp->nocb_bypass)); return false; // Caller must enqueue the callback. } =20 // If ->nocb_bypass has been used too long or is too full, // flush ->nocb_bypass to ->cblist. 
- if ((ncbs && j !=3D READ_ONCE(rdp->nocb_bypass_first)) || - ncbs >=3D qhimark) { + if ((ncbs && j !=3D READ_ONCE(rdp->nocb_bypass_first)) || ncbs >=3D qhima= rk) { rcu_nocb_lock(rdp); - if (!rcu_nocb_flush_bypass(rdp, rhp, j)) { - *was_alldone =3D !rcu_segcblist_pend_cbs(&rdp->cblist); + if (!rcu_nocb_flush_bypass(rdp, rhp, j, lazy, false)) { + *was_alldone =3D !rcu_segcblist_pend_cbs(&rdp->cblist) || + (!lazy && n_lazy_cbs); if (*was_alldone) trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, - TPS("FirstQ")); + lazy ? TPS("FirstLazyQ") : TPS("FirstQ")); WARN_ON_ONCE(rcu_cblist_n_cbs(&rdp->nocb_bypass)); return false; // Caller must enqueue the callback. } @@ -455,12 +525,18 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp,= struct rcu_head *rhp, rcu_nocb_wait_contended(rdp); rcu_nocb_bypass_lock(rdp); ncbs =3D rcu_cblist_n_cbs(&rdp->nocb_bypass); + n_lazy_cbs =3D rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass); rcu_segcblist_inc_len(&rdp->cblist); /* Must precede enqueue. */ - rcu_cblist_enqueue(&rdp->nocb_bypass, rhp); + rcu_cblist_enqueue(&rdp->nocb_bypass, rhp, lazy); + if (!ncbs) { WRITE_ONCE(rdp->nocb_bypass_first, j); - trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("FirstBQ")); + trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, + lazy ? TPS("FirstLazyBQ") : TPS("FirstBQ")); + } else if (!n_lazy_cbs && lazy) { + trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("FirstLazyBQ")); } + rcu_nocb_bypass_unlock(rdp); smp_mb(); /* Order enqueue before wake. */ if (ncbs) { @@ -493,7 +569,7 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, = bool was_alldone, { unsigned long cur_gp_seq; unsigned long j; - long len; + long len, lazy_len, bypass_len; struct task_struct *t; =20 // If we are being polled or there is no kthread, just leave. @@ -506,9 +582,16 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp,= bool was_alldone, } // Need to actually to a wakeup. 
len =3D rcu_segcblist_n_cbs(&rdp->cblist); + bypass_len =3D rcu_cblist_n_cbs(&rdp->nocb_bypass); + lazy_len =3D rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass); if (was_alldone) { rdp->qlen_last_fqs_check =3D len; - if (!irqs_disabled_flags(flags)) { + // Only lazy CBs in bypass list + if (lazy_len && bypass_len =3D=3D lazy_len) { + rcu_nocb_unlock_irqrestore(rdp, flags); + wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_LAZY, + TPS("WakeLazy")); + } else if (!irqs_disabled_flags(flags)) { /* ... if queue was empty ... */ rcu_nocb_unlock_irqrestore(rdp, flags); wake_nocb_gp(rdp, false); @@ -599,8 +682,8 @@ static inline bool nocb_gp_update_state_deoffloading(st= ruct rcu_data *rdp, */ static void nocb_gp_wait(struct rcu_data *my_rdp) { - bool bypass =3D false; - long bypass_ncbs; + bool bypass =3D false, lazy =3D false; + long bypass_ncbs, lazy_ncbs; int __maybe_unused cpu =3D my_rdp->cpu; unsigned long cur_gp_seq; unsigned long flags; @@ -636,6 +719,7 @@ static void nocb_gp_wait(struct rcu_data *my_rdp) */ list_for_each_entry_rcu(rdp, &my_rdp->nocb_head_rdp, nocb_entry_rdp, 1) { bool needwake_state =3D false; + bool flush_bypass =3D false; =20 if (!nocb_gp_enabled_cb(rdp)) continue; @@ -648,22 +732,37 @@ static void nocb_gp_wait(struct rcu_data *my_rdp) continue; } bypass_ncbs =3D rcu_cblist_n_cbs(&rdp->nocb_bypass); - if (bypass_ncbs && + lazy_ncbs =3D rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass); + + if (lazy_ncbs && + (time_after(j, READ_ONCE(rdp->nocb_bypass_first) + jiffies_till_flus= h) || + bypass_ncbs > 2 * qhimark)) { + flush_bypass =3D true; + } else if (bypass_ncbs && (lazy_ncbs !=3D bypass_ncbs) && (time_after(j, READ_ONCE(rdp->nocb_bypass_first) + 1) || bypass_ncbs > 2 * qhimark)) { - // Bypass full or old, so flush it. 
- (void)rcu_nocb_try_flush_bypass(rdp, j); - bypass_ncbs =3D rcu_cblist_n_cbs(&rdp->nocb_bypass); + flush_bypass =3D true; } else if (!bypass_ncbs && rcu_segcblist_empty(&rdp->cblist)) { rcu_nocb_unlock_irqrestore(rdp, flags); if (needwake_state) swake_up_one(&rdp->nocb_state_wq); continue; /* No callbacks here, try next. */ } + + if (flush_bypass) { + // Bypass full or old, so flush it. + (void)rcu_nocb_try_flush_bypass(rdp, j); + bypass_ncbs =3D rcu_cblist_n_cbs(&rdp->nocb_bypass); + lazy_ncbs =3D rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass); + } + if (bypass_ncbs) { trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, - TPS("Bypass")); - bypass =3D true; + bypass_ncbs =3D=3D lazy_ncbs ? TPS("Lazy") : TPS("Bypass")); + if (bypass_ncbs =3D=3D lazy_ncbs) + lazy =3D true; + else + bypass =3D true; } rnp =3D rdp->mynode; =20 @@ -713,12 +812,21 @@ static void nocb_gp_wait(struct rcu_data *my_rdp) my_rdp->nocb_gp_gp =3D needwait_gp; my_rdp->nocb_gp_seq =3D needwait_gp ? wait_gp_seq : 0; =20 - if (bypass && !rcu_nocb_poll) { - // At least one child with non-empty ->nocb_bypass, so set - // timer in order to avoid stranding its callbacks. - wake_nocb_gp_defer(my_rdp, RCU_NOCB_WAKE_BYPASS, - TPS("WakeBypassIsDeferred")); + // At least one child with non-empty ->nocb_bypass, so set + // timer in order to avoid stranding its callbacks. + if (!rcu_nocb_poll) { + // If bypass list only has lazy CBs. Add a deferred + // lazy wake up. + if (lazy && !bypass) { + wake_nocb_gp_defer(my_rdp, RCU_NOCB_WAKE_LAZY, + TPS("WakeLazyIsDeferred")); + // Otherwise add a deferred bypass wake up. + } else if (bypass) { + wake_nocb_gp_defer(my_rdp, RCU_NOCB_WAKE_BYPASS, + TPS("WakeBypassIsDeferred")); + } } + if (rcu_nocb_poll) { /* Polling, so trace if first poll in the series. */ if (gotcbs) @@ -999,7 +1107,7 @@ static long rcu_nocb_rdp_deoffload(void *arg) * return false, which means that future calls to rcu_nocb_try_bypass() * will refuse to put anything into the bypass. 
 	 */
-	WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies));
+	WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false, false));
 	/*
 	 * Start with invoking rcu_core() early. This way if the current thread
 	 * happens to preempt an ongoing call to rcu_core() in the middle,
@@ -1500,13 +1608,14 @@ static void rcu_init_one_nocb(struct rcu_node *rnp)
 }
 
 static bool rcu_nocb_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-				  unsigned long j)
+				  unsigned long j, bool lazy, bool wakegp)
 {
 	return true;
 }
 
 static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-				bool *was_alldone, unsigned long flags)
+				bool *was_alldone, unsigned long flags,
+				bool lazy)
 {
 	return false;
 }
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: Vineeth Pillai, Joel Fernandes, paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker, Steven Rostedt, rcu
Subject: [PATCH v4 02/14] rcu: shrinker for lazy rcu
Date: Fri, 19 Aug 2022 20:48:45 +0000
Message-Id: <20220819204857.3066329-3-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

From: Vineeth Pillai

The shrinker is used to speed up the freeing of memory potentially held
by RCU lazy callbacks. RCU kernel module test cases show this to be
effective. The test is introduced in a later patch.

Signed-off-by: Vineeth Pillai
Signed-off-by: Joel Fernandes (Google)
---
 kernel/rcu/tree_nocb.h | 52 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 55636da76bc2..edb4e59dbf38 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1259,6 +1259,55 @@ int rcu_nocb_cpu_offload(int cpu)
 }
 EXPORT_SYMBOL_GPL(rcu_nocb_cpu_offload);
 
+static unsigned long
+lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
+{
+	int cpu;
+	unsigned long count = 0;
+
+	/* Snapshot count of all CPUs */
+	for_each_possible_cpu(cpu) {
+		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+
+		count += rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);
+	}
+
+	return count ? count : SHRINK_EMPTY;
+}
+
+static unsigned long
+lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+{
+	int cpu;
+	unsigned long flags;
+	unsigned long count = 0;
+
+	/* Snapshot count of all CPUs */
+	for_each_possible_cpu(cpu) {
+		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+		int _count = rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);
+
+		if (_count == 0)
+			continue;
+		rcu_nocb_lock_irqsave(rdp, flags);
+		rcu_cblist_reset_lazy_len(&rdp->nocb_bypass);
+		rcu_nocb_unlock_irqrestore(rdp, flags);
+		wake_nocb_gp(rdp, false);
+		sc->nr_to_scan -= _count;
+		count += _count;
+		if (sc->nr_to_scan <= 0)
+			break;
+	}
+	return count ? count : SHRINK_STOP;
+}
+
+static struct shrinker lazy_rcu_shrinker = {
+	.count_objects = lazy_rcu_shrink_count,
+	.scan_objects = lazy_rcu_shrink_scan,
+	.batch = 0,
+	.seeks = DEFAULT_SEEKS,
+};
+
 void __init rcu_init_nohz(void)
 {
 	int cpu;
@@ -1296,6 +1345,9 @@ void __init rcu_init_nohz(void)
 	if (!rcu_state.nocb_is_setup)
 		return;
 
+	if (register_shrinker(&lazy_rcu_shrinker))
+		pr_err("Failed to register lazy_rcu shrinker!\n");
+
 #if defined(CONFIG_NO_HZ_FULL)
 	if (tick_nohz_full_running)
 		cpumask_or(rcu_nocb_mask, rcu_nocb_mask, tick_nohz_full_mask);
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 03/14] rcuscale: Add laziness and kfree tests
Date: Fri, 19 Aug 2022 20:48:46 +0000
Message-Id: <20220819204857.3066329-4-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

We add two tests to rcuscale. The first is a startup test that checks
whether the callbacks are neither too lazy nor too eager. The second
emulates kfree_rcu() using call_rcu_lazy() and checks the resulting
memory pressure. In my testing, call_rcu_lazy() keeps memory pressure
under control roughly as well as kfree_rcu() does.
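The startup self-test described above reduces to a window check on the callback delay: with the flush timeout forced to 2*HZ, the callback must fire no earlier than 2*HZ and no later than roughly 3*HZ after queueing. A minimal userspace sketch of that check (not kernel code; the helper name and the HZ value are illustrative assumptions):

```c
#include <assert.h>
#include <stdbool.h>

#define HZ 1000UL  /* assumed tick rate for this sketch */

/*
 * Return true if a lazy callback that was queued at jif_start and ran
 * at jif_cb was "lazy enough but not too lazy" for a flush timeout of
 * flush_jif jiffies, mirroring the 2*HZ..3*HZ window the test uses.
 */
bool lazy_delay_in_window(unsigned long jif_start, unsigned long jif_cb,
			  unsigned long flush_jif)
{
	unsigned long delay = jif_cb - jif_start;

	/* Fired too early: the callback was not batched at all. */
	if (delay < flush_jif)
		return false;
	/* Fired too late: allow up to one extra HZ of slack. */
	if (delay > flush_jif + HZ)
		return false;
	return true;
}
```

With flush_jif = 2*HZ this accepts delays in [2*HZ, 3*HZ], matching the two WARN_ON_ONCE() bounds in kfree_scale_init() below.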
Signed-off-by: Joel Fernandes (Google)
---
 kernel/rcu/rcuscale.c | 74 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 73 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
index 277a5bfb37d4..ed5544227f4d 100644
--- a/kernel/rcu/rcuscale.c
+++ b/kernel/rcu/rcuscale.c
@@ -95,6 +95,7 @@ torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
 torture_param(int, writer_holdoff, 0, "Holdoff (us) between GPs, zero to disable");
 torture_param(int, kfree_rcu_test, 0, "Do we run a kfree_rcu() scale test?");
 torture_param(int, kfree_mult, 1, "Multiple of kfree_obj size to allocate.");
+torture_param(int, kfree_rcu_by_lazy, 0, "Use call_rcu_lazy() to emulate kfree_rcu()?");
 
 static char *scale_type = "rcu";
 module_param(scale_type, charp, 0444);
@@ -658,6 +659,14 @@ struct kfree_obj {
 	struct rcu_head rh;
 };
 
+/* Used if doing RCU-kfree'ing via call_rcu_lazy(). */
+static void kfree_rcu_lazy(struct rcu_head *rh)
+{
+	struct kfree_obj *obj = container_of(rh, struct kfree_obj, rh);
+
+	kfree(obj);
+}
+
 static int
 kfree_scale_thread(void *arg)
 {
@@ -695,6 +704,11 @@ kfree_scale_thread(void *arg)
 		if (!alloc_ptr)
 			return -ENOMEM;
 
+		if (kfree_rcu_by_lazy) {
+			call_rcu_lazy(&(alloc_ptr->rh), kfree_rcu_lazy);
+			continue;
+		}
+
 		// By default kfree_rcu_test_single and kfree_rcu_test_double are
 		// initialized to false. If both have the same value (false or true)
 		// both are randomly tested, otherwise only the one with value true
@@ -737,6 +751,9 @@ kfree_scale_cleanup(void)
 {
 	int i;
 
+	if (kfree_rcu_by_lazy)
+		rcu_force_call_rcu_to_lazy(false);
+
 	if (torture_cleanup_begin())
 		return;
 
@@ -766,11 +783,64 @@ kfree_scale_shutdown(void *arg)
 	return -EINVAL;
 }
 
+// Used if doing RCU-kfree'ing via call_rcu_lazy().
+static unsigned long jiffies_at_lazy_cb;
+static struct rcu_head lazy_test1_rh;
+static int rcu_lazy_test1_cb_called;
+static void call_rcu_lazy_test1(struct rcu_head *rh)
+{
+	jiffies_at_lazy_cb = jiffies;
+	WRITE_ONCE(rcu_lazy_test1_cb_called, 1);
+}
+
 static int __init
 kfree_scale_init(void)
 {
 	long i;
 	int firsterr = 0;
+	unsigned long orig_jif, jif_start;
+
+	// If lazy-rcu based kfree'ing is requested, then for kernels that
+	// support it, force all call_rcu() to call_rcu_lazy() so that non-lazy
+	// CBs do not remove laziness of the lazy ones (since the test tries to
+	// stress call_rcu_lazy() for OOM).
+	//
+	// Also, do a quick self-test to ensure laziness is as much as
+	// expected.
+	if (kfree_rcu_by_lazy && !IS_ENABLED(CONFIG_RCU_LAZY)) {
+		pr_alert("CONFIG_RCU_LAZY is disabled, falling back to kfree_rcu() "
+			 "for delayed RCU kfree'ing\n");
+		kfree_rcu_by_lazy = 0;
+	}
+
+	if (kfree_rcu_by_lazy) {
+		/* do a test to check the timeout. */
+		orig_jif = rcu_lazy_get_jiffies_till_flush();
+
+		rcu_force_call_rcu_to_lazy(true);
+		rcu_lazy_set_jiffies_till_flush(2 * HZ);
+		rcu_barrier();
+
+		jif_start = jiffies;
+		jiffies_at_lazy_cb = 0;
+		call_rcu_lazy(&lazy_test1_rh, call_rcu_lazy_test1);
+
+		smp_cond_load_relaxed(&rcu_lazy_test1_cb_called, VAL == 1);
+
+		rcu_lazy_set_jiffies_till_flush(orig_jif);
+
+		if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start < 2 * HZ)) {
+			pr_alert("ERROR: Lazy CBs are not being lazy as expected!\n");
+			WARN_ON_ONCE(1);
+			return -1;
+		}
+
+		if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start > 3 * HZ)) {
+			pr_alert("ERROR: Lazy CBs are being too lazy!\n");
+			WARN_ON_ONCE(1);
+			return -1;
+		}
+	}
 
 	kfree_nrealthreads = compute_real(kfree_nthreads);
 	/* Start up the kthreads. */
@@ -783,7 +853,9 @@ kfree_scale_init(void)
 		schedule_timeout_uninterruptible(1);
 	}
 
-	pr_alert("kfree object size=%zu\n", kfree_mult * sizeof(struct kfree_obj));
+	pr_alert("kfree object size=%zu, kfree_rcu_by_lazy=%d\n",
+		 kfree_mult * sizeof(struct kfree_obj),
+		 kfree_rcu_by_lazy);
 
 	kfree_reader_tasks = kcalloc(kfree_nrealthreads, sizeof(kfree_reader_tasks[0]),
 			       GFP_KERNEL);
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 04/14] fs: Move call_rcu() to call_rcu_lazy() in some paths
Date: Fri, 19 Aug 2022 20:48:47 +0000
Message-Id: <20220819204857.3066329-5-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent callbacks from triggering RCU machinery too
quickly and too often, which increases the system's power consumption.
When testing, we found that these paths were invoked often even when the
system is not doing anything (screen is ON but otherwise idle).

Signed-off-by: Joel Fernandes (Google)
---
 fs/dcache.c     | 4 ++--
 fs/eventpoll.c  | 2 +-
 fs/file_table.c | 2 +-
 fs/inode.c      | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 93f4f5ee07bf..7f51bac390c8 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -366,7 +366,7 @@ static void dentry_free(struct dentry *dentry)
 	if (unlikely(dname_external(dentry))) {
 		struct external_name *p = external_name(dentry);
 		if (likely(atomic_dec_and_test(&p->u.count))) {
-			call_rcu(&dentry->d_u.d_rcu, __d_free_external);
+			call_rcu_lazy(&dentry->d_u.d_rcu, __d_free_external);
 			return;
 		}
 	}
@@ -374,7 +374,7 @@ static void dentry_free(struct dentry *dentry)
 	if (dentry->d_flags & DCACHE_NORCU)
 		__d_free(&dentry->d_u.d_rcu);
 	else
-		call_rcu(&dentry->d_u.d_rcu, __d_free);
+		call_rcu_lazy(&dentry->d_u.d_rcu, __d_free);
 }
 
 /*
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 971f98af48ff..57b3f781760c 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -729,7 +729,7 @@ static int ep_remove(struct eventpoll *ep, struct epitem *epi)
 	 * ep->mtx. The rcu read side, reverse_path_check_proc(), does not make
 	 * use of the rbn field.
 	 */
-	call_rcu(&epi->rcu, epi_rcu_free);
+	call_rcu_lazy(&epi->rcu, epi_rcu_free);
 
 	percpu_counter_dec(&ep->user->epoll_watches);
 
diff --git a/fs/file_table.c b/fs/file_table.c
index 5424e3a8df5f..417f57e9cb30 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -56,7 +56,7 @@ static inline void file_free(struct file *f)
 	security_file_free(f);
 	if (!(f->f_mode & FMODE_NOACCOUNT))
 		percpu_counter_dec(&nr_files);
-	call_rcu(&f->f_u.fu_rcuhead, file_free_rcu);
+	call_rcu_lazy(&f->f_u.fu_rcuhead, file_free_rcu);
 }
 
 /*
diff --git a/fs/inode.c b/fs/inode.c
index bd4da9c5207e..38fe040ddbd6 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -312,7 +312,7 @@ static void destroy_inode(struct inode *inode)
 		return;
 	}
 	inode->free_inode = ops->free_inode;
-	call_rcu(&inode->i_rcu, i_callback);
+	call_rcu_lazy(&inode->i_rcu, i_callback);
 }
 
 /**
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 05/14] rcutorture: Add test code for call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:48 +0000
Message-Id: <20220819204857.3066329-6-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

We add a new RCU type to test call_rcu_lazy(). This allows us to just
override the '.call' callback. To compensate for the laziness, we force
the flush timeout down to a small number of jiffies. The idea of this
test is to stress the new code paths for stability and to ensure that
call_rcu_lazy() behaves at least on par with call_rcu(). The actual
check for the amount of laziness is in another test (rcuscale).
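The override pattern this relies on is an ops table of function pointers in which the lazy flavor reuses every classic hook and swaps only the `.call` member, as `rcu_lazy_ops` does below. A minimal userspace sketch of the same pattern (all names here — `torture_ops_sketch`, `call_classic`, `call_lazy` — are illustrative, not the kernel's):

```c
#include <assert.h>

/* An ops table with a swappable enqueue hook, mirroring rcu_torture_ops. */
struct torture_ops_sketch {
	const char *name;
	int (*call)(void);	/* the only member the lazy flavor overrides */
};

static int classic_calls, lazy_calls;
static int call_classic(void) { return ++classic_calls; }
static int call_lazy(void)    { return ++lazy_calls; }

const struct torture_ops_sketch rcu_ops_sketch = {
	.name = "rcu",
	.call = call_classic,
};

/* Same table shape, one member changed -- the "rcu_lazy" flavor. */
const struct torture_ops_sketch rcu_lazy_ops_sketch = {
	.name = "rcu_lazy",
	.call = call_lazy,
};
```

The test harness then dispatches exclusively through the table, so the same torture scenarios exercise either flavor unchanged.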
Signed-off-by: Joel Fernandes (Google)
---
 kernel/rcu/rcu.h                                   |  1 +
 kernel/rcu/rcutorture.c                            | 60 ++++++++++++++++++-
 kernel/rcu/tree.c                                  |  1 +
 .../selftests/rcutorture/configs/rcu/CFLIST        |  1 +
 .../selftests/rcutorture/configs/rcu/TREE11        | 18 ++++++
 .../rcutorture/configs/rcu/TREE11.boot             |  8 +++
 6 files changed, 88 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/rcutorture/configs/rcu/TREE11
 create mode 100644 tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 608f6ab76c7f..aa3243e49506 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -460,6 +460,7 @@ enum rcutorture_type {
 	RCU_TASKS_TRACING_FLAVOR,
 	RCU_TRIVIAL_FLAVOR,
 	SRCU_FLAVOR,
+	RCU_LAZY_FLAVOR,
 	INVALID_RCU_FLAVOR
 };
 
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 7120165a9342..c52cc4c064f9 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -872,6 +872,64 @@ static struct rcu_torture_ops tasks_rude_ops = {
 
 #endif // #else #ifdef CONFIG_TASKS_RUDE_RCU
 
+#ifdef CONFIG_RCU_LAZY
+
+/*
+ * Definitions for lazy RCU torture testing.
+ */
+static unsigned long orig_jiffies_till_flush;
+
+static void rcu_sync_torture_init_lazy(void)
+{
+	rcu_sync_torture_init();
+
+	orig_jiffies_till_flush = rcu_lazy_get_jiffies_till_flush();
+	rcu_lazy_set_jiffies_till_flush(50);
+}
+
+static void rcu_lazy_cleanup(void)
+{
+	rcu_lazy_set_jiffies_till_flush(orig_jiffies_till_flush);
+}
+
+static struct rcu_torture_ops rcu_lazy_ops = {
+	.ttype			= RCU_LAZY_FLAVOR,
+	.init			= rcu_sync_torture_init_lazy,
+	.cleanup		= rcu_lazy_cleanup,
+	.readlock		= rcu_torture_read_lock,
+	.read_delay		= rcu_read_delay,
+	.readunlock		= rcu_torture_read_unlock,
+	.readlock_held		= torture_readlock_not_held,
+	.get_gp_seq		= rcu_get_gp_seq,
+	.gp_diff		= rcu_seq_diff,
+	.deferred_free		= rcu_torture_deferred_free,
+	.sync			= synchronize_rcu,
+	.exp_sync		= synchronize_rcu_expedited,
+	.get_gp_state		= get_state_synchronize_rcu,
+	.start_gp_poll		= start_poll_synchronize_rcu,
+	.poll_gp_state		= poll_state_synchronize_rcu,
+	.cond_sync		= cond_synchronize_rcu,
+	.call			= call_rcu_lazy,
+	.cb_barrier		= rcu_barrier,
+	.fqs			= rcu_force_quiescent_state,
+	.stats			= NULL,
+	.gp_kthread_dbg		= show_rcu_gp_kthreads,
+	.check_boost_failed	= rcu_check_boost_fail,
+	.stall_dur		= rcu_jiffies_till_stall_check,
+	.irq_capable		= 1,
+	.can_boost		= IS_ENABLED(CONFIG_RCU_BOOST),
+	.extendables		= RCUTORTURE_MAX_EXTEND,
+	.name			= "rcu_lazy"
+};
+
+#define LAZY_OPS &rcu_lazy_ops,
+
+#else // #ifdef CONFIG_RCU_LAZY
+
+#define LAZY_OPS
+
+#endif // #else #ifdef CONFIG_RCU_LAZY
+
 
 #ifdef CONFIG_TASKS_TRACE_RCU
 
@@ -3145,7 +3203,7 @@ rcu_torture_init(void)
 	unsigned long gp_seq = 0;
 	static struct rcu_torture_ops *torture_ops[] = {
 		&rcu_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops, &busted_srcud_ops,
-		TASKS_OPS TASKS_RUDE_OPS TASKS_TRACING_OPS
+		TASKS_OPS TASKS_RUDE_OPS TASKS_TRACING_OPS LAZY_OPS
 		&trivial_ops,
 	};
 
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e76fef8031be..67026382dc21 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -600,6 +600,7 @@ void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
 {
 	switch (test_type) {
 	case RCU_FLAVOR:
+	case RCU_LAZY_FLAVOR:
 		*flags = READ_ONCE(rcu_state.gp_flags);
 		*gp_seq = rcu_seq_current(&rcu_state.gp_seq);
 		break;
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/CFLIST b/tools/testing/selftests/rcutorture/configs/rcu/CFLIST
index 98b6175e5aa0..609c3370616f 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/CFLIST
+++ b/tools/testing/selftests/rcutorture/configs/rcu/CFLIST
@@ -5,6 +5,7 @@ TREE04
 TREE05
 TREE07
 TREE09
+TREE11
 SRCU-N
 SRCU-P
 SRCU-T
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE11 b/tools/testing/selftests/rcutorture/configs/rcu/TREE11
new file mode 100644
index 000000000000..436013f3e015
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE11
@@ -0,0 +1,18 @@
+CONFIG_SMP=y
+CONFIG_PREEMPT_NONE=n
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=y
+#CHECK#CONFIG_PREEMPT_RCU=y
+CONFIG_HZ_PERIODIC=n
+CONFIG_NO_HZ_IDLE=y
+CONFIG_NO_HZ_FULL=n
+CONFIG_RCU_TRACE=y
+CONFIG_HOTPLUG_CPU=y
+CONFIG_MAXSMP=y
+CONFIG_CPUMASK_OFFSTACK=y
+CONFIG_RCU_NOCB_CPU=y
+CONFIG_DEBUG_LOCK_ALLOC=n
+CONFIG_RCU_BOOST=n
+CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
+CONFIG_RCU_LAZY=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot
new file mode 100644
index 000000000000..9b6f720d4ccd
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot
@@ -0,0 +1,8 @@
+maxcpus=8 nr_cpus=43
+rcutree.gp_preinit_delay=3
+rcutree.gp_init_delay=3
+rcutree.gp_cleanup_delay=3
+rcu_nocbs=0-7
+rcutorture.torture_type=rcu_lazy
+rcutorture.nocbs_nthreads=8
+rcutorture.fwd_progress=0
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 06/14] debug: Toggle lazy at runtime and change flush jiffies
Date: Fri, 19 Aug 2022 20:48:49 +0000
Message-Id: <20220819204857.3066329-7-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

Enable or disable this feature by writing 1 or 0 to /proc/sys/vm/rcu_lazy.
Change the value of /proc/sys/vm/rcu_lazy_jiffies to change the maximum
duration before a flush. Do not merge; this is only for debugging by
reviewers.
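The rcu_lazy_jiffies knob ultimately feeds the flush decision made in nocb_gp_wait() (patch 01): flush the bypass list once its oldest entry is older than the timeout, or once the list outgrows 2 * qhimark. A simplified userspace sketch of that decision (hypothetical helper name; ignores jiffies wraparound and the lazy/non-lazy split for brevity):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Decide whether the per-CPU bypass list should be flushed now.
 * now/first are jiffies-like timestamps; bypass_ncbs is the current
 * list length; jiffies_till_flush is the tunable this patch exposes.
 */
bool lazy_should_flush(unsigned long now, unsigned long first,
		       long bypass_ncbs, long qhimark,
		       unsigned long jiffies_till_flush)
{
	if (now > first + jiffies_till_flush)	/* batch too old */
		return true;
	if (bypass_ncbs > 2 * qhimark)		/* batch too big */
		return true;
	return false;
}
```

Writing a smaller value to /proc/sys/vm/rcu_lazy_jiffies therefore shortens the "too old" arm of this condition and trades power savings for latency.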
Signed-off-by: Joel Fernandes (Google)
---
 include/linux/sched/sysctl.h |  3 +++
 kernel/rcu/tree_nocb.h       |  9 +++++++++
 kernel/sysctl.c              | 17 +++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 2cd928f15df6..54610f9cd962 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -14,6 +14,9 @@ extern unsigned long sysctl_hung_task_timeout_secs;
 enum { sysctl_hung_task_timeout_secs = 0 };
 #endif
 
+extern unsigned int sysctl_rcu_lazy;
+extern unsigned int sysctl_rcu_lazy_jiffies;
+
 enum sched_tunable_scaling {
 	SCHED_TUNABLESCALING_NONE,
 	SCHED_TUNABLESCALING_LOG,
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index edb4e59dbf38..16621b32de46 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -266,6 +266,9 @@ static bool wake_nocb_gp(struct rcu_data *rdp, bool force)
 #define LAZY_FLUSH_JIFFIES (10 * HZ)
 unsigned long jiffies_till_flush = LAZY_FLUSH_JIFFIES;
 
+unsigned int sysctl_rcu_lazy_jiffies = LAZY_FLUSH_JIFFIES;
+unsigned int sysctl_rcu_lazy = 1;
+
 #ifdef CONFIG_RCU_LAZY
 // To be called only from test code.
 void rcu_lazy_set_jiffies_till_flush(unsigned long jif)
@@ -292,6 +295,9 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
 	struct rcu_data *rdp_gp = rdp->nocb_gp_rdp;
 	unsigned long mod_jif = 0;
 
+	/* debug: not for merge */
+	rcu_lazy_set_jiffies_till_flush(sysctl_rcu_lazy_jiffies);
+
 	raw_spin_lock_irqsave(&rdp_gp->nocb_gp_lock, flags);
 
 	/*
@@ -697,6 +703,9 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 	unsigned long wait_gp_seq = 0; // Suppress "use uninitialized" warning.
 	bool wasempty = false;
 
+	/* debug: not for merge */
+	rcu_lazy_set_jiffies_till_flush(sysctl_rcu_lazy_jiffies);
+
 	/*
 	 * Each pass through the following loop checks for CBs and for the
 	 * nearest grace period (if any) to wait for next. The CB kthreads
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index b00f92df0af5..bbe25d635dc0 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2450,6 +2450,23 @@ static struct ctl_table vm_table[] = {
 		.extra2		= SYSCTL_ONE,
 	},
 #endif
+#ifdef CONFIG_RCU_LAZY
+	{
+		.procname	= "rcu_lazy",
+		.data		= &sysctl_rcu_lazy,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
+	{
+		.procname	= "rcu_lazy_jiffies",
+		.data		= &sysctl_rcu_lazy_jiffies,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
+#endif
+
 	{ }
 };
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 07/14] cred: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:50 +0000
Message-Id: <20220819204857.3066329-8-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent callbacks from triggering the RCU machinery
too quickly and too often, which increases the power consumption of the
system.
Signed-off-by: Joel Fernandes (Google)
---
 kernel/cred.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/cred.c b/kernel/cred.c
index e10c15f51c1f..c7cb2e3ac73a 100644
--- a/kernel/cred.c
+++ b/kernel/cred.c
@@ -150,7 +150,7 @@ void __put_cred(struct cred *cred)
 	if (cred->non_rcu)
 		put_cred_rcu(&cred->rcu);
 	else
-		call_rcu(&cred->rcu, put_cred_rcu);
+		call_rcu_lazy(&cred->rcu, put_cred_rcu);
 }
 EXPORT_SYMBOL(__put_cred);
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 08/14] security: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:51 +0000
Message-Id: <20220819204857.3066329-9-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent callbacks from triggering the RCU machinery
too quickly and too often, which increases the power consumption of the
system.

Signed-off-by: Joel Fernandes (Google)
---
 security/security.c    | 2 +-
 security/selinux/avc.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/security/security.c b/security/security.c
index ea7163c20751..d76f4951b2bd 100644
--- a/security/security.c
+++ b/security/security.c
@@ -1053,7 +1053,7 @@ void security_inode_free(struct inode *inode)
 	 * The inode will be freed after the RCU grace period too.
 	 */
 	if (inode->i_security)
-		call_rcu((struct rcu_head *)inode->i_security,
+		call_rcu_lazy((struct rcu_head *)inode->i_security,
 				inode_free_by_rcu);
 }
 
diff --git a/security/selinux/avc.c b/security/selinux/avc.c
index 9a43af0ebd7d..381f046d820f 100644
--- a/security/selinux/avc.c
+++ b/security/selinux/avc.c
@@ -442,7 +442,7 @@ static void avc_node_free(struct rcu_head *rhead)
 static void avc_node_delete(struct selinux_avc *avc, struct avc_node *node)
 {
 	hlist_del_rcu(&node->list);
-	call_rcu(&node->rhead, avc_node_free);
+	call_rcu_lazy(&node->rhead, avc_node_free);
 	atomic_dec(&avc->avc_cache.active_nodes);
 }
 
@@ -458,7 +458,7 @@ static void avc_node_replace(struct selinux_avc *avc,
 			    struct avc_node *new, struct avc_node *old)
 {
 	hlist_replace_rcu(&old->list, &new->list);
-	call_rcu(&old->rhead, avc_node_free);
+	call_rcu_lazy(&old->rhead, avc_node_free);
 	atomic_dec(&avc->avc_cache.active_nodes);
 }
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 09/14] net/core: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:52 +0000
Message-Id: <20220819204857.3066329-10-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent callbacks from triggering the RCU machinery
too quickly and too often, which increases the power consumption of the
system.
Signed-off-by: Joel Fernandes (Google)
---
 net/core/dst.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/dst.c b/net/core/dst.c
index d16c2c9bfebd..68c240a4a0d7 100644
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -174,7 +174,7 @@ void dst_release(struct dst_entry *dst)
 			net_warn_ratelimited("%s: dst:%p refcnt:%d\n",
 					     __func__, dst, newrefcnt);
 		if (!newrefcnt)
-			call_rcu(&dst->rcu_head, dst_destroy_rcu);
+			call_rcu_lazy(&dst->rcu_head, dst_destroy_rcu);
 	}
 }
 EXPORT_SYMBOL(dst_release);
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 10/14] kernel: Move various core kernel usages to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:53 +0000
Message-Id: <20220819204857.3066329-11-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>

Signed-off-by: Joel Fernandes (Google)
---
 kernel/exit.c              | 2 +-
 kernel/pid.c               | 2 +-
 kernel/time/posix-timers.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/exit.c b/kernel/exit.c
index 853c6a943fce..14cde19ff4c2 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -180,7 +180,7 @@ static void delayed_put_task_struct(struct rcu_head *rhp)
 void put_task_struct_rcu_user(struct task_struct *task)
 {
 	if (refcount_dec_and_test(&task->rcu_users))
-		call_rcu(&task->rcu, delayed_put_task_struct);
+		call_rcu_lazy(&task->rcu, delayed_put_task_struct);
 }
 
 void release_task(struct task_struct *p)
diff --git a/kernel/pid.c b/kernel/pid.c
index 2fc0a16ec77b..5a5144519d70 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -153,7 +153,7 @@ void free_pid(struct pid *pid)
 	}
 	spin_unlock_irqrestore(&pidmap_lock, flags);
 
-	call_rcu(&pid->rcu, delayed_put_pid);
+	call_rcu_lazy(&pid->rcu, delayed_put_pid);
 }
 
 struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index 06d1236b3804..63489c4070cd 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -485,7 +485,7 @@ static void release_posix_timer(struct k_itimer *tmr, int it_id_set)
 	}
 	put_pid(tmr->it_pid);
 	sigqueue_free(tmr->sigq);
-	call_rcu(&tmr->rcu, k_itimer_rcu_free);
+	call_rcu_lazy(&tmr->rcu, k_itimer_rcu_free);
 }
 
 static int common_timer_create(struct k_itimer *new_timer)
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 11/14] lib: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:54 +0000
Message-Id: <20220819204857.3066329-12-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>

Move the radix-tree and xarray node frees to call_rcu_lazy(). This is
required to prevent callbacks from triggering the RCU machinery too
quickly and too often, which increases the power consumption of the
system.
Signed-off-by: Joel Fernandes (Google)
---
 lib/radix-tree.c | 2 +-
 lib/xarray.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index b3afafe46fff..1526dc9e1d93 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -305,7 +305,7 @@ void radix_tree_node_rcu_free(struct rcu_head *head)
 static inline void
 radix_tree_node_free(struct radix_tree_node *node)
 {
-	call_rcu(&node->rcu_head, radix_tree_node_rcu_free);
+	call_rcu_lazy(&node->rcu_head, radix_tree_node_rcu_free);
 }
 
 /*
diff --git a/lib/xarray.c b/lib/xarray.c
index ea9ce1f0b386..230abc8045fe 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -257,7 +257,7 @@ static void xa_node_free(struct xa_node *node)
 {
 	XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
 	node->array = XA_RCU_FREE;
-	call_rcu(&node->rcu_head, radix_tree_node_rcu_free);
+	call_rcu_lazy(&node->rcu_head, radix_tree_node_rcu_free);
 }
 
 /*
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 12/14] i915: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:55 +0000
Message-Id: <20220819204857.3066329-13-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent callbacks from triggering the RCU machinery
too quickly and too often, which increases the power consumption of the
system.
Signed-off-by: Joel Fernandes (Google)
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 06b1b188ce5a..74f4b6e707c2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -343,7 +343,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 		__i915_gem_free_object(obj);
 
 		/* But keep the pointer alive for RCU-protected lookups */
-		call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
+		call_rcu_lazy(&obj->rcu, __i915_gem_free_object_rcu);
 		cond_resched();
 	}
 }
-- 
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
b=TbqfUYYOBRlQM1AsIY4GgE+2GpQ/GkZpdyKoO5F732/uJp6Il9dKitV7sHVM1cA9rC RRFRRBgNtpaHAmgnmmdaHwkZAVJ47CrSC6J+Z5mx1DKVwYKo7bCgePitrBl0iQ1ONj5z kVNnqB403PU8vOH/tLiG3e5URPaaC2NwHeDgo= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=eOVP7uxcuakWCkD5+BqZt5+xcwwCNJthr/gxNxOlHcg=; b=Gsq0Nxqc/CFbNCnJiORrmoUl6w1bj9kjgF1a5fDNsE1jC2Qzfhn1qCNaXbcIUxyMY8 2d9E4DX+9YaKjAIRRKUbTpfaVW+dveK38wiW1AUv7CpIYSdaAQsN/013eElXYS0ha1+S M5cBxClWtclR69JQr/4yc5KgpGoVIpZIObJF5gVyIcn86dsz8Df4D2wSWvTOymVvDdFx bHIt3cgCaC8Y6U43hIeg9I0ylyFaJ2xLXGCaAkFb5X4zAzdDCiU81tac5T3v1Eb6Y4rF dkKew2pAr4JzXC7870sDwP387DTxNI6JxT4Hex4shaVhUmg27JVn+f31n9FIbDS9qnWf UMZQ== X-Gm-Message-State: ACgBeo1c5Gp6lU07/8lWxNvK1jj+vrk6BzyKyaCIAlCFpCq8Wop/+aVo R9pXNhX77My1KlzWr1d2JTdv+XKwct4OWw== X-Google-Smtp-Source: AA6agR4jfVHEKJ7HKI1Pca8LM6+bM5tC6NBQKXVudmLfwSZfaIKrw8QW130ggamnP9whdSSpTKpeRA== X-Received: by 2002:ac8:5889:0:b0:344:57e5:dc54 with SMTP id t9-20020ac85889000000b0034457e5dc54mr8038436qta.465.1660942153560; Fri, 19 Aug 2022 13:49:13 -0700 (PDT) Received: from joelboxx.c.googlers.com.com (228.221.150.34.bc.googleusercontent.com. 
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 13/14] fork: Move thread_stack_free_rcu to call_rcu_lazy
Date: Fri, 19 Aug 2022 20:48:56 +0000
Message-Id: <20220819204857.3066329-14-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to keep these callbacks from triggering the RCU machinery too quickly and too often, which increases the system's power consumption.
Signed-off-by: Joel Fernandes (Google)
---
 kernel/fork.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index c9a2e19d67e5..a4535cf5446f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -227,7 +227,7 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
 	struct vm_stack *vm_stack = tsk->stack;

 	vm_stack->stack_vm_area = tsk->stack_vm_area;
-	call_rcu(&vm_stack->rcu, thread_stack_free_rcu);
+	call_rcu_lazy(&vm_stack->rcu, thread_stack_free_rcu);
 }

 static int free_vm_stack_cache(unsigned int cpu)
@@ -354,7 +354,7 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
 {
 	struct rcu_head *rh = tsk->stack;

-	call_rcu(rh, thread_stack_free_rcu);
+	call_rcu_lazy(rh, thread_stack_free_rcu);
 }

 static int alloc_thread_stack_node(struct task_struct *tsk, int node)
@@ -389,7 +389,7 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
 {
 	struct rcu_head *rh = tsk->stack;

-	call_rcu(rh, thread_stack_free_rcu);
+	call_rcu_lazy(rh, thread_stack_free_rcu);
 }

 static int alloc_thread_stack_node(struct task_struct *tsk, int node)
--
2.37.2.609.g9ff673ca1a-goog

From nobody Wed Apr 8 12:01:27 2026
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 14/14] rcu/tree: Move trace_rcu_callback() before bypassing
Date: Fri, 19 Aug 2022 20:48:57 +0000
Message-Id: <20220819204857.3066329-15-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

If a callback is queued onto the bypass list, trace_rcu_callback() does not show it, so it is unclear when the callback was actually queued: the only trace you get is a later trace_rcu_invoke_callback(). Fix this by calling trace_rcu_callback() before rcu_nocb_try_bypass().

Signed-off-by: Joel Fernandes (Google)
---
 kernel/rcu/tree.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 67026382dc21..6e14f0257669 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3101,10 +3101,7 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy)
 	}

 	check_cb_ovld(rdp);
-	if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy))
-		return; // Enqueued onto ->nocb_bypass, so just leave.
-	// If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
-	rcu_segcblist_enqueue(&rdp->cblist, head);
+
 	if (__is_kvfree_rcu_offset((unsigned long)func))
 		trace_rcu_kvfree_callback(rcu_state.name, head,
 					  (unsigned long)func,
@@ -3113,6 +3110,11 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy)
 		trace_rcu_callback(rcu_state.name, head,
 				   rcu_segcblist_n_cbs(&rdp->cblist));

+	if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy))
+		return; // Enqueued onto ->nocb_bypass, so just leave.
+	// If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
+	rcu_segcblist_enqueue(&rdp->cblist, head);
+
 	trace_rcu_segcb_stats(&rdp->cblist, TPS("SegCBQueued"));

 	/* Go handle any RCU core processing required. */
--
2.37.2.609.g9ff673ca1a-goog