Use folio_nr_pages() helper instead of manual calculation (1 << order)
for better code readability and maintainability.
Signed-off-by: Hu Song <husong@kylinos.cn>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index d257141896c9..eba25461641a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2719,7 +2719,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 {
 	struct folio *folio = slab_folio(slab);
 	int order = folio_order(folio);
-	int pages = 1 << order;
+	int pages = folio_nr_pages(folio);
 
 	__slab_clear_pfmemalloc(slab);
 	folio->mapping = NULL;
--
2.25.1
On Tue, Sep 09, 2025 at 03:48:11PM +0800, Hu Song wrote:
> Use folio_nr_pages() helper instead of manual calculation (1 << order)
> for better code readability and maintainability.

https://lore.kernel.org/linux-mm/20250829154728.3397606-10-willy@infradead.org/
On 09/09/25 1:18 pm, Hu Song wrote:
> Use folio_nr_pages() helper instead of manual calculation (1 << order)
> for better code readability and maintainability.
>
> Signed-off-by: Hu Song <husong@kylinos.cn>
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index d257141896c9..eba25461641a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2719,7 +2719,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
>  {
>  	struct folio *folio = slab_folio(slab);
>  	int order = folio_order(folio);
> -	int pages = 1 << order;
> +	int pages = folio_nr_pages(folio);
>
>  	__slab_clear_pfmemalloc(slab);
>  	folio->mapping = NULL;

I don't know, the current version is more readable to me. We literally
compute the order before, so we do a simple 1 << order. I'll leave it
to the rest.
On 2025/9/9 16:46, Dev Jain wrote:
>
> On 09/09/25 1:18 pm, Hu Song wrote:
>> Use folio_nr_pages() helper instead of manual calculation (1 << order)
>> for better code readability and maintainability.
>>
>> Signed-off-by: Hu Song <husong@kylinos.cn>
>> ---
>>  mm/slub.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index d257141896c9..eba25461641a 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2719,7 +2719,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
>>  {
>>  	struct folio *folio = slab_folio(slab);
>>  	int order = folio_order(folio);
>> -	int pages = 1 << order;
>> +	int pages = folio_nr_pages(folio);
>>
>>  	__slab_clear_pfmemalloc(slab);
>>  	folio->mapping = NULL;
>
> I don't know, the current version is more readable to me. We literally
> compute the order before, so we do a simple 1 << order. I'll leave it
> to the rest.

Is the reason for calculating 'order' first that it's needed later? I
suggest using folio_nr_pages() to replace the calculation of pages,
unifying how the page count of a folio is retrieved and also
highlighting the purpose of folio_nr_pages().

--
Thanks,
Ye Liu
On Tue, Sep 09, 2025 at 04:59:37PM +0800, Ye Liu wrote:
>
> On 2025/9/9 16:46, Dev Jain wrote:
> >
> > On 09/09/25 1:18 pm, Hu Song wrote:
> >> Use folio_nr_pages() helper instead of manual calculation (1 << order)
> >> for better code readability and maintainability.
> >>
> >> Signed-off-by: Hu Song <husong@kylinos.cn>
> >> ---
> >>  mm/slub.c | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index d257141896c9..eba25461641a 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -2719,7 +2719,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
> >>  {
> >>  	struct folio *folio = slab_folio(slab);
> >>  	int order = folio_order(folio);
> >> -	int pages = 1 << order;
> >> +	int pages = folio_nr_pages(folio);
> >>
> >>  	__slab_clear_pfmemalloc(slab);
> >>  	folio->mapping = NULL;
> >
> > I don't know, the current version is more readable to me. We literally
> > compute the order before, so we do a simple 1 << order. I'll leave it
> > to the rest.
>
> Is the reason for calculating 'order' first that it's needed later?

Yes, we use it 6 lines down.

--
Pedro
On Tue, Sep 09, 2025 at 03:48:11PM +0800, Hu Song wrote:
> Use folio_nr_pages() helper instead of manual calculation (1 << order)
> for better code readability and maintainability.
>
> Signed-off-by: Hu Song <husong@kylinos.cn>
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index d257141896c9..eba25461641a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2719,7 +2719,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
>  {
>  	struct folio *folio = slab_folio(slab);
>  	int order = folio_order(folio);
> -	int pages = 1 << order;
> +	int pages = folio_nr_pages(folio);

Sure, nothing bad might happen, but I find it a bit weird that
folio_nr_pages() returns a 'long' and we store it in an 'int'.
And then sure, we hand that to mm_account_reclaimed_pages() which takes
'unsigned long', but that's another story.

--
Oscar Salvador
SUSE Labs
On 2025/9/9 16:00, Oscar Salvador wrote:
> On Tue, Sep 09, 2025 at 03:48:11PM +0800, Hu Song wrote:
>> Use folio_nr_pages() helper instead of manual calculation (1 << order)
>> for better code readability and maintainability.
>>
>> Signed-off-by: Hu Song <husong@kylinos.cn>
>> ---
>>  mm/slub.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index d257141896c9..eba25461641a 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2719,7 +2719,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
>>  {
>>  	struct folio *folio = slab_folio(slab);
>>  	int order = folio_order(folio);
>> -	int pages = 1 << order;
>> +	int pages = folio_nr_pages(folio);
> Sure, nothing bad might happen, but I find it a bit weird that
> folio_nr_pages() returns a 'long' and we store it in an 'int'.
> And then sure, we hand that to mm_account_reclaimed_pages() which takes
> 'unsigned long', but that's another story.

Maybe also correct the int -> unsigned long conversion. :)

--
Thanks,
Ye Liu