Currently order is signed in one version of the function and unsigned in
the other. Tidy that up.
In page_alloc.c, order is unsigned in the vast majority of cases. But,
there is a cluster of exceptions in compaction-related code (probably
stemming from the fact that compact_control.order is signed). So, prefer
local consistency and make this one signed too.
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1d037f97c5fc76f8a7739e8515d7593e0ad44f9..8faa0ad9f461fbe151ec347e331d83c2fdc8cad2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4182,7 +4182,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 }

 static inline bool
-should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_flags,
+should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
		     enum compact_result compact_result,
		     enum compact_priority *compact_priority,
		     int *compaction_retries)
---
base-commit: fab1beda7597fac1cecc01707d55eadb6bbe773c
change-id: 20250826-cleanup-should_compact_retry-9c6b5cdf8d27
Best regards,
--
Brendan Jackman <jackmanb@google.com>
On Tue, 26 Aug 2025 14:06:54 +0000 Brendan Jackman <jackmanb@google.com> wrote:

> Currently order is signed in one version of the function and unsigned in
> the other. Tidy that up.
>
> In page_alloc.c, order is unsigned in the vast majority of cases. But,
> there is a cluster of exceptions in compaction-related code (probably
> stemming from the fact that compact_control.order is signed). So, prefer
> local consistency and make this one signed too.
>

grumble, pet peeve. Negative orders make no sense. Can we make
cc->order unsigned in order (heh) to make everything nice?

> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4182,7 +4182,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
>  }
>
>  static inline bool
> -should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_flags,
> +should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
> 		     enum compact_result compact_result,
> 		     enum compact_priority *compact_priority,
> 		     int *compaction_retries)
>
On Wed Aug 27, 2025 at 2:13 AM UTC, Andrew Morton wrote:
> On Tue, 26 Aug 2025 14:06:54 +0000 Brendan Jackman <jackmanb@google.com> wrote:
>
>> Currently order is signed in one version of the function and unsigned in
>> the other. Tidy that up.
>>
>> In page_alloc.c, order is unsigned in the vast majority of cases. But,
>> there is a cluster of exceptions in compaction-related code (probably
>> stemming from the fact that compact_control.order is signed). So, prefer
>> local consistency and make this one signed too.
>>
>
> grumble, pet peeve. Negative orders make no sense. Can we make
> cc->order unsigned in order (heh) to make everything nice?

I think we can't "just" do that:

	/*
	 * order == -1 is expected when compacting proactively via
	 * 1. /proc/sys/vm/compact_memory
	 * 2. /sys/devices/system/node/nodex/compact
	 * 3. /proc/sys/vm/compaction_proactiveness
	 */
	static inline bool is_via_compact_memory(int order)
	{
		return order == -1;
	}
On 8/27/25 17:30, Brendan Jackman wrote:
> On Wed Aug 27, 2025 at 2:13 AM UTC, Andrew Morton wrote:
>> On Tue, 26 Aug 2025 14:06:54 +0000 Brendan Jackman <jackmanb@google.com> wrote:
>>
>>> Currently order is signed in one version of the function and unsigned in
>>> the other. Tidy that up.
>>>
>>> In page_alloc.c, order is unsigned in the vast majority of cases. But,
>>> there is a cluster of exceptions in compaction-related code (probably
>>> stemming from the fact that compact_control.order is signed). So, prefer
>>> local consistency and make this one signed too.
>>>
>>
>> grumble, pet peeve. Negative orders make no sense. Can we make
>> cc->order unsigned in order (heh) to make everything nice?
>
> I think we can't "just" do that:

That part should be easy to convert to a compact_control flag.
Zi's point about going negative seems like more prone to overlook some case.
But worth trying and the cleanups I'd say.

> /*
>  * order == -1 is expected when compacting proactively via
>  * 1. /proc/sys/vm/compact_memory
>  * 2. /sys/devices/system/node/nodex/compact
>  * 3. /proc/sys/vm/compaction_proactiveness
>  */
> static inline bool is_via_compact_memory(int order)
> {
> 	return order == -1;
> }
On Thu Aug 28, 2025 at 6:31 PM UTC, Vlastimil Babka wrote:
> On 8/27/25 17:30, Brendan Jackman wrote:
>> On Wed Aug 27, 2025 at 2:13 AM UTC, Andrew Morton wrote:
>>> On Tue, 26 Aug 2025 14:06:54 +0000 Brendan Jackman <jackmanb@google.com> wrote:
>>>
>>>> Currently order is signed in one version of the function and unsigned in
>>>> the other. Tidy that up.
>>>>
>>>> In page_alloc.c, order is unsigned in the vast majority of cases. But,
>>>> there is a cluster of exceptions in compaction-related code (probably
>>>> stemming from the fact that compact_control.order is signed). So, prefer
>>>> local consistency and make this one signed too.
>>>>
>>>
>>> grumble, pet peeve. Negative orders make no sense. Can we make
>>> cc->order unsigned in order (heh) to make everything nice?
>>
>> I think we can't "just" do that:
>
> That part should be easy to convert to a compact_control flag.
> Zi's point about going negative seems like more prone to overlook some case.
> But worth trying and the cleanups I'd say.

OK, I can take a look next week. From a quick glance it does seem to be
worth having a sniff: there could be bugs in there where code is
expecting non-negative order and doing stuff like using it as a shift
index, in cases where it can actually be negative.
On 26 Aug 2025, at 22:13, Andrew Morton wrote:
> On Tue, 26 Aug 2025 14:06:54 +0000 Brendan Jackman <jackmanb@google.com> wrote:
>
>> Currently order is signed in one version of the function and unsigned in
>> the other. Tidy that up.
>>
>> In page_alloc.c, order is unsigned in the vast majority of cases. But,
>> there is a cluster of exceptions in compaction-related code (probably
>> stemming from the fact that compact_control.order is signed). So, prefer
>> local consistency and make this one signed too.
>>
>
> grumble, pet peeve. Negative orders make no sense. Can we make
> cc->order unsigned in order (heh) to make everything nice?

Not unless we stop doing order-- in places where order can go negative.
See next_search_order() in mm/compaction.c as an example.

>
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4182,7 +4182,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
>>  }
>>
>>  static inline bool
>> -should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_flags,
>> +should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
>> 		     enum compact_result compact_result,
>> 		     enum compact_priority *compact_priority,
>> 		     int *compaction_retries)
>>

--
Best Regards,
Yan, Zi
On 26 Aug 2025, at 10:06, Brendan Jackman wrote:

> Currently order is signed in one version of the function and unsigned in
> the other. Tidy that up.
>
> In page_alloc.c, order is unsigned in the vast majority of cases. But,
> there is a cluster of exceptions in compaction-related code (probably
> stemming from the fact that compact_control.order is signed). So, prefer
> local consistency and make this one signed too.
>
> Signed-off-by: Brendan Jackman <jackmanb@google.com>
> ---
> mm/page_alloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d1d037f97c5fc76f8a7739e8515d7593e0ad44f9..8faa0ad9f461fbe151ec347e331d83c2fdc8cad2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4182,7 +4182,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
>  }
>
>  static inline bool
> -should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_flags,
> +should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
> 		     enum compact_result compact_result,
> 		     enum compact_priority *compact_priority,
> 		     int *compaction_retries)
>
> ---
> base-commit: fab1beda7597fac1cecc01707d55eadb6bbe773c
> change-id: 20250826-cleanup-should_compact_retry-9c6b5cdf8d27
>

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi