Historically, UBIFS embedded cond_resched() calls inside its
list_sort() comparison callbacks (data_nodes_cmp, nondata_nodes_cmp,
and replay_entries_cmp) to prevent soft lockups when sorting long
lists.

However, further inspection by Richard Weinberger revealed that these
compare functions are extremely lightweight and do not perform any
blocking MTD I/O. Furthermore, the lists being sorted are strictly
bounded in size:
- In the GC case, the list contains at most the number of nodes that
fit into a single LEB.
- In the replay case, the list spans across a few LEBs from the UBIFS
journal, amounting to at most a few thousand elements.

Since the compare functions are called at most a few thousand times,
the overhead of frequent scheduling points is unjustified. Removing the
cond_resched() calls simplifies the comparison logic and reduces
unnecessary context switch checks during the sort.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
fs/ubifs/gc.c | 2 --
fs/ubifs/replay.c | 1 -
2 files changed, 3 deletions(-)
diff --git a/fs/ubifs/gc.c b/fs/ubifs/gc.c
index 0bf08b7755b8..933c79b5cd6b 100644
--- a/fs/ubifs/gc.c
+++ b/fs/ubifs/gc.c
@@ -109,7 +109,6 @@ static int data_nodes_cmp(void *priv, const struct list_head *a,
 	struct ubifs_info *c = priv;
 	struct ubifs_scan_node *sa, *sb;
 
-	cond_resched();
 	if (a == b)
 		return 0;
 
@@ -153,7 +152,6 @@ static int nondata_nodes_cmp(void *priv, const struct list_head *a,
 	struct ubifs_info *c = priv;
 	struct ubifs_scan_node *sa, *sb;
 
-	cond_resched();
 	if (a == b)
 		return 0;
 
diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c
index a9a568f4a868..263045e05cf1 100644
--- a/fs/ubifs/replay.c
+++ b/fs/ubifs/replay.c
@@ -305,7 +305,6 @@ static int replay_entries_cmp(void *priv, const struct list_head *a,
 	struct ubifs_info *c = priv;
 	struct replay_entry *ra, *rb;
 
-	cond_resched();
 	if (a == b)
 		return 0;
 
--
2.53.0.959.g497ff81fa9-goog

----- Original Message -----
> From: "Kuan-Wei Chiu" <visitorckw@gmail.com>
> To: "richard" <richard@nod.at>, "Andrew Morton" <akpm@linux-foundation.org>
> CC: "chengzhihao1" <chengzhihao1@huawei.com>, "Christoph Hellwig" <hch@infradead.org>, "jserv" <jserv@ccns.ncku.edu.tw>,
> "eleanor15x" <eleanor15x@gmail.com>, "marscheng" <marscheng@google.com>, "linux-mtd" <linux-mtd@lists.infradead.org>,
> "linux-kernel" <linux-kernel@vger.kernel.org>, "Kuan-Wei Chiu" <visitorckw@gmail.com>
> Sent: Friday, 20 March 2026 19:09:37
> Subject: [PATCH v3 1/2] ubifs: Remove unnecessary cond_resched() from list_sort() compare

> [...]
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Acked-by: Richard Weinberger <richard@nod.at>

Thanks,
//richard

On 2026/3/21 2:09, Kuan-Wei Chiu wrote:
> [...]
> fs/ubifs/gc.c | 2 --
> fs/ubifs/replay.c | 1 -
> 2 files changed, 3 deletions(-)

Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>