[PATCH 39/44] mm: use min() instead of min_t()

david.laight.linux@gmail.com posted 44 patches 1 week, 5 days ago
Posted by david.laight.linux@gmail.com 1 week, 5 days ago
From: David Laight <david.laight.linux@gmail.com>

min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
and so cannot discard significant bits.

In this case the 'unsigned long' values are small enough that the result
is ok.

(Similarly for clamp_t().)

Detected by an extra check added to min_t().

Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
 mm/gup.c      | 4 ++--
 mm/memblock.c | 2 +-
 mm/memory.c   | 2 +-
 mm/percpu.c   | 2 +-
 mm/truncate.c | 3 +--
 mm/vmscan.c   | 2 +-
 6 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a8ba5112e4d0..55435b90dcc3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -237,8 +237,8 @@ static inline struct folio *gup_folio_range_next(struct page *start,
 	unsigned int nr = 1;
 
 	if (folio_test_large(folio))
-		nr = min_t(unsigned int, npages - i,
-			   folio_nr_pages(folio) - folio_page_idx(folio, next));
+		nr = min(npages - i,
+			 folio_nr_pages(folio) - folio_page_idx(folio, next));
 
 	*ntails = nr;
 	return folio;
diff --git a/mm/memblock.c b/mm/memblock.c
index e23e16618e9b..19b491d39002 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2208,7 +2208,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 		 * the case.
 		 */
 		if (start)
-			order = min_t(int, MAX_PAGE_ORDER, __ffs(start));
+			order = min(MAX_PAGE_ORDER, __ffs(start));
 		else
 			order = MAX_PAGE_ORDER;
 
diff --git a/mm/memory.c b/mm/memory.c
index 74b45e258323..72f7bd71d65f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2375,7 +2375,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
 
 	while (pages_to_write_in_pmd) {
 		int pte_idx = 0;
-		const int batch_size = min_t(int, pages_to_write_in_pmd, 8);
+		const int batch_size = min(pages_to_write_in_pmd, 8);
 
 		start_pte = pte_offset_map_lock(mm, pmd, addr, &pte_lock);
 		if (!start_pte) {
diff --git a/mm/percpu.c b/mm/percpu.c
index 81462ce5866e..cad59221d298 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1228,7 +1228,7 @@ static int pcpu_alloc_area(struct pcpu_chunk *chunk, int alloc_bits,
 	/*
 	 * Search to find a fit.
 	 */
-	end = min_t(int, start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
+	end = umin(start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
 		    pcpu_chunk_map_bits(chunk));
 	bit_off = pcpu_find_zero_area(chunk->alloc_map, end, start, alloc_bits,
 				      align_mask, &area_off, &area_bits);
diff --git a/mm/truncate.c b/mm/truncate.c
index 91eb92a5ce4f..7a56372d39a3 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -849,8 +849,7 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to)
 		unsigned int offset, end;
 
 		offset = from - folio_pos(folio);
-		end = min_t(unsigned int, to - folio_pos(folio),
-			    folio_size(folio));
+		end = umin(to - folio_pos(folio), folio_size(folio));
 		folio_zero_segment(folio, offset, end);
 	}
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b2fc8b626d3d..82cd99a5d843 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3489,7 +3489,7 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
 
 static bool suitable_to_scan(int total, int young)
 {
-	int n = clamp_t(int, cache_line_size() / sizeof(pte_t), 2, 8);
+	int n = clamp(cache_line_size() / sizeof(pte_t), 2, 8);
 
 	/* suitable if the average number of young PTEs per cacheline is >=1 */
 	return young * n >= total;
-- 
2.39.5
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by Lorenzo Stoakes 1 week, 4 days ago
I guess you decided to drop all reviewers for the series...?

I do wonder what the aversion to sending to more people is, email for review is
flawed but I don't think it's problematic to ensure that people signed up to
review everything for maintained files are cc'd...

On Wed, Nov 19, 2025 at 10:41:35PM +0000, david.laight.linux@gmail.com wrote:
> From: David Laight <david.laight.linux@gmail.com>
>
> min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
> Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
> and so cannot discard significant bits.

you're changing min_t(int, ...) too? This commit message seems incomplete as a
result.

None of the changes you make here seem to have any bearing on reality, so I
think the commit message should reflect that this is an entirely pedantic change
for the sake of satisfying a check you feel will reveal actual bugs in the
future or something?

Commit messages should include actual motivation rather than a theoretical one.

>
> In this case the 'unsigned long' values are small enough that the result
> is ok.
>
> (Similarly for clamp_t().)
>
> Detected by an extra check added to min_t().

In general I really question the value of the check when basically every use
here is pointless...?

I guess the idea is that in future it'll catch some real cases, right?

Is this check implemented in this series at all? Because presumably with the
cover letter saying you couldn't fix the CFS code etc. you aren't? So it's just
laying the groundwork for this?

>
> Signed-off-by: David Laight <david.laight.linux@gmail.com>
> ---
>  mm/gup.c      | 4 ++--
>  mm/memblock.c | 2 +-
>  mm/memory.c   | 2 +-
>  mm/percpu.c   | 2 +-
>  mm/truncate.c | 3 +--
>  mm/vmscan.c   | 2 +-
>  6 files changed, 7 insertions(+), 8 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index a8ba5112e4d0..55435b90dcc3 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -237,8 +237,8 @@ static inline struct folio *gup_folio_range_next(struct page *start,
>  	unsigned int nr = 1;
>
>  	if (folio_test_large(folio))
> -		nr = min_t(unsigned int, npages - i,
> -			   folio_nr_pages(folio) - folio_page_idx(folio, next));
> +		nr = min(npages - i,
> +			 folio_nr_pages(folio) - folio_page_idx(folio, next));

There are no cases where any of these would discard significant bits. But we
ultimately cast to unsigned int anyway (nr) so not sure this achieves anything.

But at the same time I guess no harm.

>
>  	*ntails = nr;
>  	return folio;
> diff --git a/mm/memblock.c b/mm/memblock.c
> index e23e16618e9b..19b491d39002 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -2208,7 +2208,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
>  		 * the case.
>  		 */
>  		if (start)
> -			order = min_t(int, MAX_PAGE_ORDER, __ffs(start));
> +			order = min(MAX_PAGE_ORDER, __ffs(start));

I guess this would already be defaulting to int anyway.

>  		else
>  			order = MAX_PAGE_ORDER;
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 74b45e258323..72f7bd71d65f 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2375,7 +2375,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
>
>  	while (pages_to_write_in_pmd) {
>  		int pte_idx = 0;
> -		const int batch_size = min_t(int, pages_to_write_in_pmd, 8);
> +		const int batch_size = min(pages_to_write_in_pmd, 8);

Feels like there's just a mistake in pages_to_write_in_pmd being unsigned long?

Again I guess correct because we're not going to even come close to unsigned
long overflow issues with a count of pages to write.

>
>  		start_pte = pte_offset_map_lock(mm, pmd, addr, &pte_lock);
>  		if (!start_pte) {
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 81462ce5866e..cad59221d298 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1228,7 +1228,7 @@ static int pcpu_alloc_area(struct pcpu_chunk *chunk, int alloc_bits,
>  	/*
>  	 * Search to find a fit.
>  	 */
> -	end = min_t(int, start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
> +	end = umin(start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
>  		    pcpu_chunk_map_bits(chunk));

Is it really that useful to use umin() here? I mean in examples above all the
values would be positive too. Seems strange to use umin() when everything involves an int?

>  	bit_off = pcpu_find_zero_area(chunk->alloc_map, end, start, alloc_bits,
>  				      align_mask, &area_off, &area_bits);
> diff --git a/mm/truncate.c b/mm/truncate.c
> index 91eb92a5ce4f..7a56372d39a3 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -849,8 +849,7 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to)
>  		unsigned int offset, end;
>
>  		offset = from - folio_pos(folio);
> -		end = min_t(unsigned int, to - folio_pos(folio),
> -			    folio_size(folio));
> +		end = umin(to - folio_pos(folio), folio_size(folio));

Again confused about why we choose to use umin() here...

min(loff_t - loff_t, size_t)

so min(long long, unsigned long)

And I guess based on fact we don't expect delta between from and folio start to
be larger than a max folio size.

So probably fine.

>  		folio_zero_segment(folio, offset, end);
>  	}
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b2fc8b626d3d..82cd99a5d843 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3489,7 +3489,7 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
>
>  static bool suitable_to_scan(int total, int young)
>  {
> -	int n = clamp_t(int, cache_line_size() / sizeof(pte_t), 2, 8);
> +	int n = clamp(cache_line_size() / sizeof(pte_t), 2, 8);

int, size_t (but a size_t way < INT_MAX), int, int

So seems fine.

>
>  	/* suitable if the average number of young PTEs per cacheline is >=1 */
>  	return young * n >= total;
> --
> 2.39.5
>

Generally the changes look to be correct but pointless.
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by David Laight 1 week, 4 days ago
On Thu, 20 Nov 2025 10:36:16 +0000
Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:

> I guess you decided to drop all reviewers for the series...?
> 
> I do wonder what the aversion to sending to more people is, email for review is
> flawed but I don't think it's problematic to ensure that people signed up to
> review everything for maintained files are cc'd...

Even sending all 44 patches to all the mailing lists was over 5000 emails.
Sending to all 124 maintainers and lists is some 50000 emails.
And that is just the maintainers, not the reviewers etc.
I don't have access to a mail server that will let me send more than
500 messages/day (the gmail limit is 100).
So each patch was sent to the maintainers for the files it contained,
that reduced it to just under 400 emails.

> 
> On Wed, Nov 19, 2025 at 10:41:35PM +0000, david.laight.linux@gmail.com wrote:
> > From: David Laight <david.laight.linux@gmail.com>
> >
> > min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
> > Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
> > and so cannot discard significant bits.  
> 
> you're changing min_t(int, ...) too? This commit message seems incomplete as a
> result.

Ok, I used the same commit message for most of the 44 patches.
The large majority are 'unsigned int' ones.

> 
> None of the changes you make here seem to have any bearing on reality, so I
> think the commit message should reflect that this is an entirely pedantic change
> for the sake of satisfying a check you feel will reveal actual bugs in the
> future or something?
> 
> Commit messages should include actual motivation rather than a theoretical one.
> 
> >
> > In this case the 'unsigned long' values are small enough that the result
> > is ok.
> >
> > (Similarly for clamp_t().)
> >
> > Detected by an extra check added to min_t().  
> 
> In general I really question the value of the check when basically every use
> here is pointless...?
> 
> I guess idea is in future it'll catch some real cases right?
> 
> Is this check implemented in this series at all? Because presumably with the
> cover letter saying you couldn't fix the CFS code etc. you aren't? So it's just
> laying the groundwork for this?

I could fix the CFS code, but not with a trivial patch.
I also wanted to put the 'fixes' in the first few patches; I didn't realise
how bad that code was until I looked again.
(I've also not fixed all the drivers I don't build.)

> 
> >
> > Signed-off-by: David Laight <david.laight.linux@gmail.com>
> > ---
> >  mm/gup.c      | 4 ++--
> >  mm/memblock.c | 2 +-
> >  mm/memory.c   | 2 +-
> >  mm/percpu.c   | 2 +-
> >  mm/truncate.c | 3 +--
> >  mm/vmscan.c   | 2 +-
> >  6 files changed, 7 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index a8ba5112e4d0..55435b90dcc3 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -237,8 +237,8 @@ static inline struct folio *gup_folio_range_next(struct page *start,
> >  	unsigned int nr = 1;
> >
> >  	if (folio_test_large(folio))
> > -		nr = min_t(unsigned int, npages - i,
> > -			   folio_nr_pages(folio) - folio_page_idx(folio, next));
> > +		nr = min(npages - i,
> > +			 folio_nr_pages(folio) - folio_page_idx(folio, next));  
> 
> There's no cases where any of these would discard significant bits. But we
> ultimately cast to unsigned int anyway (nr) so not sure this achieves anything.

The (implicit) cast to unsigned int is irrelevant - that happens after the min().
The issue is that 'npages' is 'unsigned long' so can (in theory) be larger than 4G.
Ok that would be a 16TB buffer, but someone must have decided that npages might
not fit in 32 bits otherwise they wouldn't have used 'unsigned long'.
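The concern can be sketched in userspace with stand-in macros (the real
kernel min_t()/min() add compile-time checks, but the cast behaviour is
the same):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the kernel macros; the real ones add compile-time checks. */
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))
#define min(a, b)         ((a) < (b) ? (a) : (b))

/* Pages to take this iteration, roughly as in gup_folio_range_next(). */
static inline uint64_t nr_via_min_t(uint64_t npages, uint64_t folio_room)
{
	return min_t(uint32_t, npages, folio_room); /* drops npages' high bits */
}

static inline uint64_t nr_via_min(uint64_t npages, uint64_t folio_room)
{
	return min(npages, folio_room); /* full-width comparison */
}
```

With npages = (1ULL << 32) + 1 (the theoretical ~16TB range) and 512 pages
left in the folio, the min_t() version returns 1 while plain min() returns 512.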

> 
> But at the same time I guess no harm.
> 
> >
> >  	*ntails = nr;
> >  	return folio;
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index e23e16618e9b..19b491d39002 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -2208,7 +2208,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
> >  		 * the case.
> >  		 */
> >  		if (start)
> > -			order = min_t(int, MAX_PAGE_ORDER, __ffs(start));
> > +			order = min(MAX_PAGE_ORDER, __ffs(start));  
> 
> I guess this would already be defaulting to int anyway.

Actually that one is also fixed by patch 0001 - which changes the return
type of the x86-64 __ffs() to unsigned int.
Which will be why min_t() was used in the first place.
I probably did this edit first.

> 
> >  		else
> >  			order = MAX_PAGE_ORDER;
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 74b45e258323..72f7bd71d65f 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2375,7 +2375,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
> >
> >  	while (pages_to_write_in_pmd) {
> >  		int pte_idx = 0;
> > -		const int batch_size = min_t(int, pages_to_write_in_pmd, 8);
> > +		const int batch_size = min(pages_to_write_in_pmd, 8);  
> 
> Feels like there's just a mistake in pages_to_write_in_pmd being unsigned long?

Changing that would be a different 'fix'.

> Again I guess correct because we're not going to even come close to unsigned
> long overflow issues with a count of pages to write.

The fact that the count of pages is small is why the existing code isn't wrong.
The patch can't make things worse.

> 
> >
> >  		start_pte = pte_offset_map_lock(mm, pmd, addr, &pte_lock);
> >  		if (!start_pte) {
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 81462ce5866e..cad59221d298 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -1228,7 +1228,7 @@ static int pcpu_alloc_area(struct pcpu_chunk *chunk, int alloc_bits,
> >  	/*
> >  	 * Search to find a fit.
> >  	 */
> > -	end = min_t(int, start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
> > +	end = umin(start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
> >  		    pcpu_chunk_map_bits(chunk));  
> 
> Is it really that useful to use umin() here? I mean in examples above all the
> values would be positive too. Seems strange to use umin() when everything involves an int?
> 
> >  	bit_off = pcpu_find_zero_area(chunk->alloc_map, end, start, alloc_bits,
> >  				      align_mask, &area_off, &area_bits);
> > diff --git a/mm/truncate.c b/mm/truncate.c
> > index 91eb92a5ce4f..7a56372d39a3 100644
> > --- a/mm/truncate.c
> > +++ b/mm/truncate.c
> > @@ -849,8 +849,7 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to)
> >  		unsigned int offset, end;
> >
> >  		offset = from - folio_pos(folio);
> > -		end = min_t(unsigned int, to - folio_pos(folio),
> > -			    folio_size(folio));
> > +		end = umin(to - folio_pos(folio), folio_size(folio));  
> 
> Again confused about why we choose to use umin() here...
> 
> min(loff_t - loff_t, size_t)
> 
> so min(long long, unsigned long)

Which is a signedness error because both are 64bit.
min(s64, u32) also reports a signedness error even though u32 is promoted
to s64; allowing that would bloat min() somewhat (and it isn't common).

> 
> And I guess based on fact we don't expect delta between from and folio start to
> be larger than a max folio size.

The problem arises if 'to - folio_pos(folio)' doesn't fit in 32 bits
(and its low 32bit are small).
I think that might be possible if truncating a large file.
So this might be a real bug.
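A sketch of how that would go wrong (stand-in macros again; umin() is
modelled here as a plain full-width unsigned compare, and the function
names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))
/* Crude model of the kernel's umin(): compare at full unsigned width. */
#define umin(a, b) ((uint64_t)(a) < (uint64_t)(b) ? (uint64_t)(a) : (uint64_t)(b))

/* End offset to zero inside a folio, modelled on pagecache_isize_extended(). */
static inline uint32_t end_via_min_t(int64_t delta, uint64_t folio_size)
{
	return min_t(uint32_t, delta, folio_size); /* truncates delta first */
}

static inline uint32_t end_via_umin(int64_t delta, uint64_t folio_size)
{
	return (uint32_t)umin(delta, folio_size); /* clamps, then narrows */
}
```

With delta = (1LL << 32) + 2048 (i.e. 'to - folio_pos(folio)' just over 4G)
and a 4096-byte folio, min_t() returns 2048 instead of clamping to 4096,
so only part of the folio tail would be zeroed.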

> 
> So probably fine.
> 
> >  		folio_zero_segment(folio, offset, end);
> >  	}
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index b2fc8b626d3d..82cd99a5d843 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -3489,7 +3489,7 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
> >
> >  static bool suitable_to_scan(int total, int young)
> >  {
> > -	int n = clamp_t(int, cache_line_size() / sizeof(pte_t), 2, 8);
> > +	int n = clamp(cache_line_size() / sizeof(pte_t), 2, 8);  
> 
> int, size_t (but a size_t way < INT_MAX), int, int

Unfortunately even if cache_line_size() is u32, the division makes the result
size_t, and gcc doesn't detect the value as being 'smaller than it used to be'.
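The promotion is easy to demonstrate with _Generic; this sketch assumes an
LP64 target (where size_t is unsigned long) and uses stand-ins for pte_t
and cache_line_size():

```c
#include <assert.h>
#include <string.h>

typedef unsigned long pte_t;                              /* stand-in; per-arch */
static unsigned int cache_line_size(void) { return 64; }  /* stand-in */

/* Name the type an expression has after the usual arithmetic conversions. */
#define type_name(x) _Generic((x),          \
	unsigned int:  "unsigned int",      \
	unsigned long: "unsigned long",     \
	int:           "int",               \
	default:       "other")
```

On LP64, type_name(cache_line_size() / sizeof(pte_t)) is "unsigned long"
(i.e. size_t): even though cache_line_size() is u32, dividing by a sizeof
widens the result, which is what clamp() then sees.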

	David

> 
> So seems fine.
> 
> >
> >  	/* suitable if the average number of young PTEs per cacheline is >=1 */
> >  	return young * n >= total;
> > --
> > 2.39.5
> >  
> 
> Generally the changes look to be correct but pointless.
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by David Hildenbrand (Red Hat) 1 week, 4 days ago
>>
>>>
>>> Signed-off-by: David Laight <david.laight.linux@gmail.com>
>>> ---
>>>   mm/gup.c      | 4 ++--
>>>   mm/memblock.c | 2 +-
>>>   mm/memory.c   | 2 +-
>>>   mm/percpu.c   | 2 +-
>>>   mm/truncate.c | 3 +--
>>>   mm/vmscan.c   | 2 +-
>>>   6 files changed, 7 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/mm/gup.c b/mm/gup.c
>>> index a8ba5112e4d0..55435b90dcc3 100644
>>> --- a/mm/gup.c
>>> +++ b/mm/gup.c
>>> @@ -237,8 +237,8 @@ static inline struct folio *gup_folio_range_next(struct page *start,
>>>   	unsigned int nr = 1;
>>>
>>>   	if (folio_test_large(folio))
>>> -		nr = min_t(unsigned int, npages - i,
>>> -			   folio_nr_pages(folio) - folio_page_idx(folio, next));
>>> +		nr = min(npages - i,
>>> +			 folio_nr_pages(folio) - folio_page_idx(folio, next));
>>
>> There's no cases where any of these would discard significant bits. But we
>> ultimately cast to unsigned int anyway (nr) so not sure this achieves anything.
> 
> The (implicit) cast to unsigned int is irrelevant - that happens after the min().
> The issue is that 'npages' is 'unsigned long' so can (in theory) be larger than 4G.
> Ok that would be a 16TB buffer, but someone must have decided that npages might
> not fit in 32 bits otherwise they wouldn't have used 'unsigned long'.

See commit fa17bcd5f65e ("mm: make folio page count functions return 
unsigned") why that function used to return "long" instead of "unsigned 
int" and how we changed it to "unsigned long".

It might take a while until that function actually returns something that
large, so no need to worry about that right now.



-- 
Cheers

David
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by David Laight 1 week, 4 days ago
On Thu, 20 Nov 2025 14:42:24 +0100
"David Hildenbrand (Red Hat)" <david@kernel.org> wrote:

> >>  
> >>>
> >>> Signed-off-by: David Laight <david.laight.linux@gmail.com>
> >>> ---
> >>>   mm/gup.c      | 4 ++--
> >>>   mm/memblock.c | 2 +-
> >>>   mm/memory.c   | 2 +-
> >>>   mm/percpu.c   | 2 +-
> >>>   mm/truncate.c | 3 +--
> >>>   mm/vmscan.c   | 2 +-
> >>>   6 files changed, 7 insertions(+), 8 deletions(-)
> >>>
> >>> diff --git a/mm/gup.c b/mm/gup.c
> >>> index a8ba5112e4d0..55435b90dcc3 100644
> >>> --- a/mm/gup.c
> >>> +++ b/mm/gup.c
> >>> @@ -237,8 +237,8 @@ static inline struct folio *gup_folio_range_next(struct page *start,
> >>>   	unsigned int nr = 1;
> >>>
> >>>   	if (folio_test_large(folio))
> >>> -		nr = min_t(unsigned int, npages - i,
> >>> -			   folio_nr_pages(folio) - folio_page_idx(folio, next));
> >>> +		nr = min(npages - i,
> >>> +			 folio_nr_pages(folio) - folio_page_idx(folio, next));  
> >>
> >> There's no cases where any of these would discard significant bits. But we
> >> ultimately cast to unsigned int anyway (nr) so not sure this achieves anything.
> > 
> > The (implicit) cast to unsigned int is irrelevant - that happens after the min().
> > The issue is that 'npages' is 'unsigned long' so can (in theory) be larger than 4G.
> > Ok that would be a 16TB buffer, but someone must have decided that npages might
> > not fit in 32 bits otherwise they wouldn't have used 'unsigned long'.  
> 
> See commit fa17bcd5f65e ("mm: make folio page count functions return 
> unsigned") why that function used to return "long" instead of "unsigned 
> int" and how we changed it to "unsigned long".
> 
> It might take a while until that function actually returns something that
> large, so no need to worry about that right now.

Except that it gives a false positive on a compile-time test that finds a
few real bugs.

I've been (slowly) fixing 'allmodconfig' and found 'goodies' like:
	min_t(u32, MAX_UINT, expr)
and
	min_t(u8, expr, 255)
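The second 'goody' is worth spelling out: casting to the small type before
the compare turns the bound into a mask. A hedged sketch (stand-in macro,
hypothetical helper name):

```c
#include <assert.h>

typedef unsigned char u8;
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* Hypothetical "clamp a length to 255", of the kind the new check finds. */
static inline u8 clamp_len(unsigned int len)
{
	return min_t(u8, len, 255); /* (u8)len wraps before the compare */
}
```

clamp_len(300) returns 44 (300 mod 256), not the 255 the caller presumably
wanted; min(len, 255u) would have clamped correctly.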

Pretty much all the min_t(unsigned xxx) that compile when changed to min()
are safe changes and might fix an obscure bug.
Probably 99% make no difference.

So I'd like to get rid of the ones that make no difference.

	David
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by David Hildenbrand (Red Hat) 1 week, 3 days ago
On 11/20/25 16:44, David Laight wrote:
> On Thu, 20 Nov 2025 14:42:24 +0100
> "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
> 
>>>>   
>>>>>
>>>>> Signed-off-by: David Laight <david.laight.linux@gmail.com>
>>>>> ---
>>>>>    mm/gup.c      | 4 ++--
>>>>>    mm/memblock.c | 2 +-
>>>>>    mm/memory.c   | 2 +-
>>>>>    mm/percpu.c   | 2 +-
>>>>>    mm/truncate.c | 3 +--
>>>>>    mm/vmscan.c   | 2 +-
>>>>>    6 files changed, 7 insertions(+), 8 deletions(-)
>>>>>
>>>>> diff --git a/mm/gup.c b/mm/gup.c
>>>>> index a8ba5112e4d0..55435b90dcc3 100644
>>>>> --- a/mm/gup.c
>>>>> +++ b/mm/gup.c
>>>>> @@ -237,8 +237,8 @@ static inline struct folio *gup_folio_range_next(struct page *start,
>>>>>    	unsigned int nr = 1;
>>>>>
>>>>>    	if (folio_test_large(folio))
>>>>> -		nr = min_t(unsigned int, npages - i,
>>>>> -			   folio_nr_pages(folio) - folio_page_idx(folio, next));
>>>>> +		nr = min(npages - i,
>>>>> +			 folio_nr_pages(folio) - folio_page_idx(folio, next));
>>>>
>>>> There's no cases where any of these would discard significant bits. But we
>>>> ultimately cast to unsigned int anyway (nr) so not sure this achieves anything.
>>>
>>> The (implicit) cast to unsigned int is irrelevant - that happens after the min().
>>> The issue is that 'npages' is 'unsigned long' so can (in theory) be larger than 4G.
>>> Ok that would be a 16TB buffer, but someone must have decided that npages might
>>> not fit in 32 bits otherwise they wouldn't have used 'unsigned long'.
>>
>> See commit fa17bcd5f65e ("mm: make folio page count functions return
>> unsigned") why that function used to return "long" instead of "unsigned
>> int" and how we changed it to "unsigned long".
>>
>> It might take a while until that function actually returns something that
>> large, so no need to worry about that right now.
> 
> Except that it gives a false positive on a compile-time test that finds a
> few real bugs.
> 
> I've been (slowly) fixing 'allmodconfig' and found 'goodies' like:
> 	min_t(u32, MAX_UINT, expr)
> and
> 	min_t(u8, expr, 255)
> 

:)

> Pretty much all the min_t(unsigned xxx) that compile when changed to min()
> are safe changes and might fix an obscure bug.
> Probably 99% make no difference.
> 
> So I'd like to get rid of the ones that make no difference.

No objection from my side if using min() is the preferred way now and 
introduces no observable changes.

-- 
Cheers

David
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by Lorenzo Stoakes 1 week, 4 days ago
On Thu, Nov 20, 2025 at 10:36:16AM +0000, Lorenzo Stoakes wrote:
> I guess you decided to drop all reviewers for the series...?
>
> I do wonder what the aversion to sending to more people is, email for review is
> flawed but I don't think it's problematic to ensure that people signed up to
> review everything for maintained files are cc'd...
>
> On Wed, Nov 19, 2025 at 10:41:35PM +0000, david.laight.linux@gmail.com wrote:
> > From: David Laight <david.laight.linux@gmail.com>
> >
> > min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
> > Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
> > and so cannot discard significant bits.
>
> you're changing min_t(int, ...) too? This commit message seems incomplete as a
> result.
>
> None of the changes you make here seem to have any bearing on reality, so I
> think the commit message should reflect that this is an entirely pedantic change
> for the sake of satisfying a check you feel will reveal actual bugs in the
> future or something?
>
> Commit messages should include actual motivation rather than a theoretical one.
>
> >
> > In this case the 'unsigned long' values are small enough that the result
> > is ok.
> >
> > (Similarly for clamp_t().)
> >
> > Detected by an extra check added to min_t().
>
> In general I really question the value of the check when basically every use
> here is pointless...?
>
> I guess idea is in future it'll catch some real cases right?
>
> Is this check implemented in this series at all? Because presumably with the
> cover letter saying you couldn't fix the CFS code etc. you aren't? So it's just
> laying the groundwork for this?
>
> >
> > Signed-off-by: David Laight <david.laight.linux@gmail.com>

I mean I don't see anything wrong here, and on the basis that this will be
useful in adding this upcoming check, with the nit about commit msg above, this
LGTM so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> > ---
> >  mm/gup.c      | 4 ++--
> >  mm/memblock.c | 2 +-
> >  mm/memory.c   | 2 +-
> >  mm/percpu.c   | 2 +-
> >  mm/truncate.c | 3 +--
> >  mm/vmscan.c   | 2 +-
> >  6 files changed, 7 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index a8ba5112e4d0..55435b90dcc3 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -237,8 +237,8 @@ static inline struct folio *gup_folio_range_next(struct page *start,
> >  	unsigned int nr = 1;
> >
> >  	if (folio_test_large(folio))
> > -		nr = min_t(unsigned int, npages - i,
> > -			   folio_nr_pages(folio) - folio_page_idx(folio, next));
> > +		nr = min(npages - i,
> > +			 folio_nr_pages(folio) - folio_page_idx(folio, next));
>
> There's no cases where any of these would discard significant bits. But we
> > ultimately cast to unsigned int anyway (nr) so not sure this achieves anything.
>
> But at the same time I guess no harm.
>
> >
> >  	*ntails = nr;
> >  	return folio;
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index e23e16618e9b..19b491d39002 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -2208,7 +2208,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
> >  		 * the case.
> >  		 */
> >  		if (start)
> > -			order = min_t(int, MAX_PAGE_ORDER, __ffs(start));
> > +			order = min(MAX_PAGE_ORDER, __ffs(start));
>
> I guess this would already be defaulting to int anyway.
>
> >  		else
> >  			order = MAX_PAGE_ORDER;
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 74b45e258323..72f7bd71d65f 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2375,7 +2375,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
> >
> >  	while (pages_to_write_in_pmd) {
> >  		int pte_idx = 0;
> > -		const int batch_size = min_t(int, pages_to_write_in_pmd, 8);
> > +		const int batch_size = min(pages_to_write_in_pmd, 8);
>
> Feels like there's just a mistake in pages_to_write_in_pmd being unsigned long?
>
> > Again I guess correct because we're not going to even come close to unsigned
> > long overflow issues with a count of pages to write.
>
> >
> >  		start_pte = pte_offset_map_lock(mm, pmd, addr, &pte_lock);
> >  		if (!start_pte) {
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 81462ce5866e..cad59221d298 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -1228,7 +1228,7 @@ static int pcpu_alloc_area(struct pcpu_chunk *chunk, int alloc_bits,
> >  	/*
> >  	 * Search to find a fit.
> >  	 */
> > -	end = min_t(int, start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
> > +	end = umin(start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
> >  		    pcpu_chunk_map_bits(chunk));
>
> Is it really that useful to use umin() here? I mean in examples above all the
> values would be positive too. Seems strange to use umin() when everything involves an int?
>
> >  	bit_off = pcpu_find_zero_area(chunk->alloc_map, end, start, alloc_bits,
> >  				      align_mask, &area_off, &area_bits);
> > diff --git a/mm/truncate.c b/mm/truncate.c
> > index 91eb92a5ce4f..7a56372d39a3 100644
> > --- a/mm/truncate.c
> > +++ b/mm/truncate.c
> > @@ -849,8 +849,7 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to)
> >  		unsigned int offset, end;
> >
> >  		offset = from - folio_pos(folio);
> > -		end = min_t(unsigned int, to - folio_pos(folio),
> > -			    folio_size(folio));
> > +		end = umin(to - folio_pos(folio), folio_size(folio));
>
> Again confused about why we choose to use umin() here...
>
> min(loff_t - loff_t, size_t)
>
> so min(long long, unsigned long)
>
> And I guess based on fact we don't expect delta between from and folio start to
> be larger than a max folio size.
>
> So probably fine.
>
> >  		folio_zero_segment(folio, offset, end);
> >  	}
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index b2fc8b626d3d..82cd99a5d843 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -3489,7 +3489,7 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
> >
> >  static bool suitable_to_scan(int total, int young)
> >  {
> > -	int n = clamp_t(int, cache_line_size() / sizeof(pte_t), 2, 8);
> > +	int n = clamp(cache_line_size() / sizeof(pte_t), 2, 8);
>
> int, size_t (but a size_t way < INT_MAX), int, int
>
> So seems fine.
>
> >
> >  	/* suitable if the average number of young PTEs per cacheline is >=1 */
> >  	return young * n >= total;
> > --
> > 2.39.5
> >
>
> Generally the changes look to be correct but pointless.
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by David Hildenbrand (Red Hat) 1 week, 4 days ago
On 11/19/25 23:41, david.laight.linux@gmail.com wrote:
> From: David Laight <david.laight.linux@gmail.com>
> 
> min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
> Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
> and so cannot discard significant bits.

I thought using min() was frowned upon and we were supposed to use 
min_t() instead to make it clear which type we want to use.

Do I misremember or have things changed?

Wasn't there a checkpatch warning that states exactly that?

-- 
Cheers

David
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by David Laight 1 week, 4 days ago
On Thu, 20 Nov 2025 10:20:41 +0100
"David Hildenbrand (Red Hat)" <david@kernel.org> wrote:

> On 11/19/25 23:41, david.laight.linux@gmail.com wrote:
> > From: David Laight <david.laight.linux@gmail.com>
> > 
> > min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
> > Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
> > and so cannot discard significant bits.  
> 
> I thought using min() was frowned upon and we were supposed to use 
> min_t() instead to make it clear which type we want to use.

I'm not sure that was ever true.
min_t() is just an accident waiting to happen.
(and I found a few of them, the worst are in sched/fair.c)

Most of the min_t() are there because of the rather overzealous type
check that used to be in min().
But even then it would really be better to explicitly cast one of the
parameters to min(), so min_t(T, a, b) => min(a, (T)b).
Then it becomes rather more obvious that min_t(u8, x->m_u8, expr)
is going to mask off the high bits of 'expr'.

> Do I misremember or have things changed?
> 
> Wasn't there a checkpatch warning that states exactly that?

There is one that suggests min_t() - it ought to be nuked.
The real fix is to backtrack the types so there isn't an error.
min_t() ought to be a 'last resort' and a single cast is better.

With the relaxed checks in min() most of the min_t() can just
be replaced by min(), even this is ok:
	int len = fun();
	if (len < 0)
		return;
	count = min(len, sizeof(T));

I did look at the history of min() and min_t().
IIRC some of the networking code had a real function min() with
'unsigned int' arguments.
This was moved to a common header, changed to a #define and had
a type added - so min(T, a, b).
Pretty much immediately that was renamed min_t() and min() added
that accepted any type - but checked the types of 'a' and 'b'
exactly matched.
Code was then changed (over the years) to use min(), but in many
cases the types didn't quite match - so min_t() was used a lot.

I keep spotting new commits that pass too small a type to min_t().
So this is the start of a '5 year' campaign to nuke min_t() (et al).

	David
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by Eric Biggers 1 week, 4 days ago
On Thu, Nov 20, 2025 at 09:59:46AM +0000, David Laight wrote:
> On Thu, 20 Nov 2025 10:20:41 +0100
> "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
> 
> > On 11/19/25 23:41, david.laight.linux@gmail.com wrote:
> > > From: David Laight <david.laight.linux@gmail.com>
> > > 
> > > min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
> > > Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
> > > and so cannot discard significant bits.  
> > 
> > I thought using min() was frowned upon and we were supposed to use 
> > min_t() instead to make it clear which type we want to use.
> 
> I'm not sure that was ever true.
> min_t() is just an accident waiting to happen.
> (and I found a few of them, the worst are in sched/fair.c)
> 
> Most of the min_t() are there because of the rather overzealous type
> check that used to be in min().
> But even then it would really be better to explicitly cast one of the
> parameters to min(), so min_t(T, a, b) => min(a, (T)b).
> Then it becomes rather more obvious that min_t(u8, x->m_u8, expr)
> > is going to mask off the high bits of 'expr'.
> 
> > Do I misremember or have things changed?
> > 
> > Wasn't there a checkpatch warning that states exactly that?
> 
> There is one that suggests min_t() - it ought to be nuked.
> The real fix is to backtrack the types so there isn't an error.
> min_t() ought to be a 'last resort' and a single cast is better.
> 
> With the relaxed checks in min() most of the min_t() can just
> be replaced by min(), even this is ok:
> 	int len = fun();
> 	if (len < 0)
> 		return;
> 	count = min(len, sizeof(T));
> 
> I did look at the history of min() and min_t().
> IIRC some of the networking code had a real function min() with
> 'unsigned int' arguments.
> This was moved to a common header, changed to a #define and had
> a type added - so min(T, a, b).
> Pretty much immediately that was renamed min_t() and min() added
> that accepted any type - but checked the types of 'a' and 'b'
> exactly matched.
> Code was then changed (over the years) to use min(), but in many
> cases the types didn't quite match - so min_t() was used a lot.
> 
> I keep spotting new commits that pass too small a type to min_t().
> So this is the start of a '5 year' campaign to nuke min_t() (et al).

Yes, checkpatch suggests min_t() or max_t() if you cast an argument to
min() or max().  Grep for "typecasts on min/max could be min_t/max_t" in
scripts/checkpatch.pl.

And historically you could not pass different types to min() and max(),
which is why people use min_t() and max_t().  It looks like you fixed
that a couple years ago in
https://lore.kernel.org/all/b97faef60ad24922b530241c5d7c933c@AcuMS.aculab.com/,
which is great!  It just takes some time for the whole community to get
the message.  Also, it seems that checkpatch is in need of an update.

Doing these conversions looks good to me, but unfortunately this is
probably the type of thing that shouldn't be a single kernel-wide patch
series.  They should be sent out per-subsystem.

I suggest also putting a sentence in the commit message that mentions
that min() and max() have been updated to accept arguments with
different types.  (Seeing as historically that wasn't true.)  I suggest
also being extra clear about when each change is a cleanup vs a fix. 

- Eric
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by David Laight 1 week, 3 days ago
On Thu, 20 Nov 2025 23:45:22 +0000
Eric Biggers <ebiggers@kernel.org> wrote:

> On Thu, Nov 20, 2025 at 09:59:46AM +0000, David Laight wrote:
> > On Thu, 20 Nov 2025 10:20:41 +0100
> > "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
> >   
> > > On 11/19/25 23:41, david.laight.linux@gmail.com wrote:  
> > > > From: David Laight <david.laight.linux@gmail.com>
> > > > 
> > > > min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
> > > > Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
> > > > and so cannot discard significant bits.    
> > > 
> > > I thought using min() was frowned upon and we were supposed to use 
> > > min_t() instead to make it clear which type we want to use.  
> > 
> > I'm not sure that was ever true.
> > min_t() is just an accident waiting to happen.
> > (and I found a few of them, the worst are in sched/fair.c)
> > 
> > Most of the min_t() are there because of the rather overzealous type
> > check that used to be in min().
> > But even then it would really be better to explicitly cast one of the
> > parameters to min(), so min_t(T, a, b) => min(a, (T)b).
> > Then it becomes rather more obvious that min_t(u8, x->m_u8, expr)
> > is going to mask off the high bits of 'expr'.
> >   
> > > Do I misremember or have things changed?
> > > 
> > > Wasn't there a checkpatch warning that states exactly that?  
> > 
> > There is one that suggests min_t() - it ought to be nuked.
> > The real fix is to backtrack the types so there isn't an error.
> > min_t() ought to be a 'last resort' and a single cast is better.
> > 
> > With the relaxed checks in min() most of the min_t() can just
> > be replaced by min(), even this is ok:
> > 	int len = fun();
> > 	if (len < 0)
> > 		return;
> > 	count = min(len, sizeof(T));
> > 
> > I did look at the history of min() and min_t().
> > IIRC some of the networking code had a real function min() with
> > 'unsigned int' arguments.
> > This was moved to a common header, changed to a #define and had
> > a type added - so min(T, a, b).
> > Pretty much immediately that was renamed min_t() and min() added
> > that accepted any type - but checked the types of 'a' and 'b'
> > exactly matched.
> > Code was then changed (over the years) to use min(), but in many
> > cases the types didn't quite match - so min_t() was used a lot.
> > 
> > I keep spotting new commits that pass too small a type to min_t().
> > So this is the start of a '5 year' campaign to nuke min_t() (et al).  
> 
> Yes, checkpatch suggests min_t() or max_t() if you cast an argument to
> min() or max().  Grep for "typecasts on min/max could be min_t/max_t" in
> scripts/checkpatch.pl.

IMHO that is a really bad suggestion (and always has been).
In reality min(a, (T)b) is less likely to be buggy than min_t(T, a, b).
Someone will notice that (u16)long_var is likely to be buggy but min_t()
is expected to 'do something magic'.

There are a lot of examples of 'T_var = min_t(T, T_var, b)' which really
needed (typeof (b))T_var rather than (T)b,
and of 'T_var = min_t(T, a, b)' which just doesn't need a cast at all.


> 
> And historically you could not pass different types to min() and max(),
> which is why people use min_t() and max_t().  It looks like you fixed
> that a couple years ago in
> https://lore.kernel.org/all/b97faef60ad24922b530241c5d7c933c@AcuMS.aculab.com/,
> which is great!

I wrote that, and then Linus redid it to avoid some very long lines
from nested expansion (with some tree-wide patches that only he could do).

>  It just takes some time for the whole community to get
> the message.  Also, it seems that checkpatch is in need of an update.
> 
> Doing these conversions looks good to me, but unfortunately this is
> probably the type of thing that shouldn't be a single kernel-wide patch
> series.  They should be sent out per-subsystem.

In effect it is a list of separate patches, one per subsystem.
They just have a common 0/n wrapper.
I wanted to link them together; I guess I could have put a bit more
text in the common commit message I pasted into all the commits.

I didn't post the change to minmax.h (apart from a summary in 0/44)
because I hadn't even tried to build a 32-bit kernel, never mind
an allmodconfig or allyesconfig one.

I spent all yesterday trying to build allyesconfig...

	David

> 
> I suggest also putting a sentence in the commit message that mentions
> that min() and max() have been updated to accept arguments with
> different types.  (Seeing as historically that wasn't true.)  I suggest
> also being extra clear about when each change is a cleanup vs a fix. 
> 
> - Eric
Re: [PATCH 39/44] mm: use min() instead of min_t()
Posted by David Hildenbrand (Red Hat) 1 week, 3 days ago
On 11/21/25 00:45, Eric Biggers wrote:
> On Thu, Nov 20, 2025 at 09:59:46AM +0000, David Laight wrote:
>> On Thu, 20 Nov 2025 10:20:41 +0100
>> "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
>>
>>> On 11/19/25 23:41, david.laight.linux@gmail.com wrote:
>>>> From: David Laight <david.laight.linux@gmail.com>
>>>>
>>>> min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
>>>> Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
>>>> and so cannot discard significant bits.
>>>
>>> I thought using min() was frowned upon and we were supposed to use
>>> min_t() instead to make it clear which type we want to use.
>>
>> I'm not sure that was ever true.
>> min_t() is just an accident waiting to happen.
>> (and I found a few of them, the worst are in sched/fair.c)
>>
>> Most of the min_t() are there because of the rather overzealous type
>> check that used to be in min().
>> But even then it would really be better to explicitly cast one of the
>> parameters to min(), so min_t(T, a, b) => min(a, (T)b).
>> Then it becomes rather more obvious that min_t(u8, x->m_u8, expr)
>> is going to mask off the high bits of 'expr'.
>>
>>> Do I misremember or have things changed?
>>>
>>> Wasn't there a checkpatch warning that states exactly that?
>>
>> There is one that suggests min_t() - it ought to be nuked.
>> The real fix is to backtrack the types so there isn't an error.
>> min_t() ought to be a 'last resort' and a single cast is better.
>>
>> With the relaxed checks in min() most of the min_t() can just
>> be replaced by min(), even this is ok:
>> 	int len = fun();
>> 	if (len < 0)
>> 		return;
>> 	count = min(len, sizeof(T));
>>
>> I did look at the history of min() and min_t().
>> IIRC some of the networking code had a real function min() with
>> 'unsigned int' arguments.
>> This was moved to a common header, changed to a #define and had
>> a type added - so min(T, a, b).
>> Pretty much immediately that was renamed min_t() and min() added
>> that accepted any type - but checked the types of 'a' and 'b'
>> exactly matched.
>> Code was then changed (over the years) to use min(), but in many
>> cases the types didn't quite match - so min_t() was used a lot.
>>
>> I keep spotting new commits that pass too small a type to min_t().
>> So this is the start of a '5 year' campaign to nuke min_t() (et al).
> 
> Yes, checkpatch suggests min_t() or max_t() if you cast an argument to
> min() or max().  Grep for "typecasts on min/max could be min_t/max_t" in
> scripts/checkpatch.pl.

Right, that's the one I recalled.

> 
> And historically you could not pass different types to min() and max(),
> which is why people use min_t() and max_t().  It looks like you fixed
> that a couple years ago in
> https://lore.kernel.org/all/b97faef60ad24922b530241c5d7c933c@AcuMS.aculab.com/,
> which is great!  It just takes some time for the whole community to get
> the message.  Also, it seems that checkpatch is in need of an update.

Exactly.

And whenever it comes to such things, I wonder if we want to clearly 
spell them out somewhere (coding-style): especially, when to use 
min/max and when to use min_t/max_t.

coding-style currently mentions:

"There are also min() and max() macros that do strict type checking ..." 
is that also outdated or am I just confused at this point?

> 
> Doing these conversions looks good to me, but unfortunately this is
> probably the type of thing that shouldn't be a single kernel-wide patch
> series.  They should be sent out per-subsystem.

Agreed!

In particular as there is no need to rush and individual subsystems can 
just pick it up separately.

> 
> I suggest also putting a sentence in the commit message that mentions
> that min() and max() have been updated to accept arguments with
> different types.  (Seeing as historically that wasn't true.)  I suggest
> also being extra clear about when each change is a cleanup vs a fix.

+1

-- 
Cheers

David