[PATCH 0/4] SLUB: calculate_order() cleanups

Posted by Vlastimil Babka 2 years, 3 months ago
Since reviewing recent patches made me finally dig into these functions
in detail for the first time, I've also noticed some opportunities for
cleanups that should make them simpler and also deliver more consistent
results for some corner case object sizes (probably not seen in
practice). Thus patch 3 can increase slab orders somewhere, but only in
the way that was already intended. Otherwise there are almost no
functional changes.
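
To make the waste limit mentioned for patch 3 more concrete, below is a
simplified, purely illustrative userspace sketch of a waste-fraction
based order search; the constants, limits and the example object size
are made up for illustration and this is not the actual mm/slub.c code:

/*
 * Hypothetical sketch of a waste-fraction based slab order search:
 * pick the smallest order whose leftover space per slab stays below
 * slab_size / fract_leftover, trying progressively more tolerant
 * fractions (down to 1/2) before giving up.  Constants and limits
 * are illustrative only, not the real mm/slub.c values.
 */
#include <stdio.h>

#define PAGE_SHIFT	12	/* 4K pages for the example */
#define MAX_ORDER	3

static unsigned int order_for_fraction(unsigned int size,
				       unsigned int fract_leftover)
{
	unsigned int order;

	for (order = 0; order <= MAX_ORDER; order++) {
		unsigned int slab_size = 1U << (order + PAGE_SHIFT);
		unsigned int rem = slab_size % size;	/* wasted bytes */

		if (rem <= slab_size / fract_leftover)
			return order;
	}
	return MAX_ORDER;
}

int main(void)
{
	unsigned int size = 2600;	/* an awkward object size */
	unsigned int fraction;

	/* try waste limits of 1/16, 1/8, 1/4 and finally 1/2 */
	for (fraction = 16; fraction >= 2; fraction /= 2)
		printf("fraction 1/%-2u -> order %u\n", fraction,
		       order_for_fraction(size, fraction));

	return 0;
}

With these made-up numbers the tighter fractions settle on order 1 and
only the final 1/2 limit accepts order 0, which is the kind of corner
case object size where the exact loop structure matters.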

Vlastimil Babka (4):
  mm/slub: simplify the last resort slab order calculation
  mm/slub: remove min_objects loop from calculate_order()
  mm/slub: attempt to find layouts up to 1/2 waste in calculate_order()
  mm/slub: refactor calculate_order() and calc_slab_order()

 mm/slub.c | 63 ++++++++++++++++++++++++-------------------------------
 1 file changed, 27 insertions(+), 36 deletions(-)

-- 
2.42.0
Re: [PATCH 0/4] SLUB: calculate_order() cleanups
Posted by Jay Patel 2 years, 2 months ago
On Fri, 2023-09-08 at 16:53 +0200, Vlastimil Babka wrote:
> Since reviewing recent patches made me finally dig into these functions
> in detail for the first time, I've also noticed some opportunities for
> cleanups that should make them simpler and also deliver more consistent
> results for some corner case object sizes (probably not seen in
> practice). Thus patch 3 can increase slab orders somewhere, but only in
> the way that was already intended. Otherwise there are almost no
> functional changes.
> 
Hi Vlastimil,

This cleanup patchset looks promising.
I've conducted tests on PowerPC with 16 CPUs and a 64K page size, and
here are the results.

Slub Memory Usage

+-------------------+--------+------------+
|                   | Normal | With Patch |
+-------------------+--------+------------+
| Total Slub Memory | 476992 | 478464     |
| Wastage           | 431    | 451        |
+-------------------+--------+------------+

Also, I have not detected any changes in the page order for slub caches
across all objects with 64K page size.

Hackbench Results

+-------+----+---------+------------+----------+
|       |    | Normal  | With Patch |          |
+-------+----+---------+------------+----------+
| Amean | 1  | 1.1530  | 1.1347     | ( 1.59%) |
| Amean | 4  | 3.9220  | 3.8240     | ( 2.50%) |
| Amean | 7  | 6.7943  | 6.6300     | ( 2.42%) |
| Amean | 12 | 11.7067 | 11.4423    | ( 2.26%) |
| Amean | 21 | 20.6617 | 20.1680    | ( 2.39%) |
| Amean | 30 | 29.4200 | 28.6460    | ( 2.63%) |
| Amean | 48 | 47.2797 | 46.2820    | ( 2.11%) |
| Amean | 64 | 63.4680 | 62.1813    | ( 2.03%) |
+-------+----+---------+------------+----------+


Reviewed-by: Jay Patel <jaypatel@linux.ibm.com>
Tested-by: Jay Patel <jaypatel@linux.ibm.com>

Thank You
Jay Patel
> Vlastimil Babka (4):
>   mm/slub: simplify the last resort slab order calculation
>   mm/slub: remove min_objects loop from calculate_order()
>   mm/slub: attempt to find layouts up to 1/2 waste in calculate_order()
>   mm/slub: refactor calculate_order() and calc_slab_order()
> 
>  mm/slub.c | 63 ++++++++++++++++++++++++-------------------------------
>  1 file changed, 27 insertions(+), 36 deletions(-)
>
Re: [PATCH 0/4] SLUB: calculate_order() cleanups
Posted by Vlastimil Babka 2 years, 2 months ago
On 9/28/23 06:46, Jay Patel wrote:
> On Fri, 2023-09-08 at 16:53 +0200, Vlastimil Babka wrote:
>> Since reviewing recent patches made me finally dig into these functions
>> in detail for the first time, I've also noticed some opportunities for
>> cleanups that should make them simpler and also deliver more consistent
>> results for some corner case object sizes (probably not seen in
>> practice). Thus patch 3 can increase slab orders somewhere, but only in
>> the way that was already intended. Otherwise there are almost no
>> functional changes.
>> 
> Hi Vlastimil,

Hi, Jay!

> This cleanup patchset looks promising.
> I've conducted tests on PowerPC with 16 CPUs and a 64K page size, and
> here are the results.
> 
> Slub Memory Usage
> 
> +-------------------+--------+------------+
> |                   | Normal | With Patch |
> +-------------------+--------+------------+
> | Total Slub Memory | 476992 | 478464     |
> | Wastage           | 431    | 451        |
> +-------------------+--------+------------+
> 
> Also, I have not detected any changes in the page order for slub caches
> across all objects with 64K page size.

As expected. That should mean any benchmark differences are noise and not
caused by the patches.

> Hackbench Results
> 
> +-------+----+---------+------------+----------+
> |       |    | Normal  | With Patch |          |
> +-------+----+---------+------------+----------+
> | Amean | 1  | 1.1530  | 1.1347     | ( 1.59%) |
> | Amean | 4  | 3.9220  | 3.8240     | ( 2.50%) |
> | Amean | 7  | 6.7943  | 6.6300     | ( 2.42%) |
> | Amean | 12 | 11.7067 | 11.4423    | ( 2.26%) |
> | Amean | 21 | 20.6617 | 20.1680    | ( 2.39%) |
> | Amean | 30 | 29.4200 | 28.6460    | ( 2.63%) |
> | Amean | 48 | 47.2797 | 46.2820    | ( 2.11%) |
> | Amean | 64 | 63.4680 | 62.1813    | ( 2.03%) |
> +-------+----+---------+------------+----------+
> 
> 
> Reviewed-by: Jay Patel <jaypatel@linux.ibm.com>
> Tested-by: Jay Patel <jaypatel@linux.ibm.com>

Thanks! Applied your Reviewed-and-tested-by:

> Thank You
> Jay Patel
>> Vlastimil Babka (4):
>>   mm/slub: simplify the last resort slab order calculation
>>   mm/slub: remove min_objects loop from calculate_order()
>>   mm/slub: attempt to find layouts up to 1/2 waste in calculate_order()
>>   mm/slub: refactor calculate_order() and calc_slab_order()
>> 
>>  mm/slub.c | 63 ++++++++++++++++++++++++-------------------------------
>>  1 file changed, 27 insertions(+), 36 deletions(-)
>> 
>