Teach folio_dup_swap to handle a batch of consecutive pages. Note that
folio_dup_swap already can handle a subset of this: nr_pages == 1 and
nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.
Currently we have a not-so-nice logic of passing in subpage == NULL if
we mean to exercise the logic on the entire folio, and subpage != NULL if
we want to exercise the logic on only that subpage. Remove this
indirection, and explicitly pass subpage != NULL, and the number of
pages required.
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
mm/rmap.c | 2 +-
mm/shmem.c | 2 +-
mm/swap.h | 5 +++--
mm/swapfile.c | 12 +++++-------
4 files changed, 10 insertions(+), 11 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index dd638429c963e..f6d5b187cf09b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2282,7 +2282,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
goto discard;
}
- if (folio_dup_swap(folio, subpage) < 0) {
+ if (folio_dup_swap(folio, subpage, 1) < 0) {
set_pte_at(mm, address, pvmw.pte, pteval);
goto walk_abort;
}
diff --git a/mm/shmem.c b/mm/shmem.c
index 5e7dcf5bc5d3c..86ee34c9b40b3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1695,7 +1695,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
spin_unlock(&shmem_swaplist_lock);
}
- folio_dup_swap(folio, NULL);
+ folio_dup_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
BUG_ON(folio_mapped(folio));
diff --git a/mm/swap.h b/mm/swap.h
index a77016f2423b9..d9cb58ebbddd1 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -206,7 +206,7 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
* folio_put_swap(): does the opposite thing of folio_dup_swap().
*/
int folio_alloc_swap(struct folio *folio);
-int folio_dup_swap(struct folio *folio, struct page *subpage);
+int folio_dup_swap(struct folio *folio, struct page *subpage, unsigned int nr_pages);
void folio_put_swap(struct folio *folio, struct page *subpage);
/* For internal use */
@@ -390,7 +390,8 @@ static inline int folio_alloc_swap(struct folio *folio)
return -EINVAL;
}
-static inline int folio_dup_swap(struct folio *folio, struct page *page)
+static inline int folio_dup_swap(struct folio *folio, struct page *page,
+ unsigned int nr_pages)
{
return -EINVAL;
}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 915bc93964dbd..eaf61ae6c3817 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1738,7 +1738,8 @@ int folio_alloc_swap(struct folio *folio)
/**
* folio_dup_swap() - Increase swap count of swap entries of a folio.
* @folio: folio with swap entries bounded.
- * @subpage: if not NULL, only increase the swap count of this subpage.
+ * @subpage: Increase the swap count of this subpage till nr number of
+ * pages forward.
*
* Typically called when the folio is unmapped and have its swap entry to
* take its place: Swap entries allocated to a folio has count == 0 and pinned
@@ -1752,18 +1753,15 @@ int folio_alloc_swap(struct folio *folio)
* swap_put_entries_direct on its swap entry before this helper returns, or
* the swap count may underflow.
*/
-int folio_dup_swap(struct folio *folio, struct page *subpage)
+int folio_dup_swap(struct folio *folio, struct page *subpage,
+ unsigned int nr_pages)
{
swp_entry_t entry = folio->swap;
- unsigned long nr_pages = folio_nr_pages(folio);
VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
- if (subpage) {
- entry.val += folio_page_idx(folio, subpage);
- nr_pages = 1;
- }
+ entry.val += folio_page_idx(folio, subpage);
return swap_dup_entries_cluster(swap_entry_to_info(entry),
swp_offset(entry), nr_pages);
--
2.34.1
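The heart of the patch is the swap-entry offset arithmetic: the entry for a run of pages starts at folio->swap plus the index of the first page, and covers nr_pages consecutive entries. A minimal user-space sketch of that arithmetic, with stub types standing in for the kernel's (everything here except the folio_page_idx() semantics is a hypothetical stand-in, not the real implementation):

```c
#include <assert.h>

/* Minimal stand-ins for kernel types -- illustrative only. */
struct page { char unused; };
struct folio {
	unsigned long swap_val;	/* stands in for folio->swap.val */
	unsigned int nr_pages;
	struct page pages[16];
};

/* Mirrors folio_page_idx(): index of a page within its folio. */
static unsigned long folio_page_idx(const struct folio *folio,
				    const struct page *page)
{
	return (unsigned long)(page - folio->pages);
}

/*
 * The batched form of the patch's "entry.val += folio_page_idx()":
 * the first swap entry of a batch starting at @subpage.  The batch
 * then spans nr_pages consecutive entries from this value.
 */
static unsigned long first_entry_of_batch(const struct folio *folio,
					  const struct page *subpage)
{
	return folio->swap_val + folio_page_idx(folio, subpage);
}
```

With subpage == the first page and nr_pages == folio_nr_pages(folio) this degenerates to the old whole-folio case, and with nr_pages == 1 to the old single-subpage case, which is why the patch can drop the NULL check.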
Hi Dev,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Dev-Jain/mm-rmap-make-nr_pages-signed-in-try_to_unmap_one/20260310-153604
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20260310073013.4069309-7-dev.jain%40arm.com
patch subject: [PATCH 6/9] mm/swapfile: Make folio_dup_swap batchable
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20260318/202603180851.hRAaHR6o-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260318/202603180851.hRAaHR6o-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new
version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603180851.hRAaHR6o-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> Warning: mm/swapfile.c:1757 function parameter 'nr_pages' not described in 'folio_dup_swap'
>> Warning: mm/swapfile.c:1757 function parameter 'nr_pages' not described in 'folio_dup_swap'

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
On Tue, Mar 10, 2026 at 01:00:10PM +0530, Dev Jain wrote:
> Teach folio_dup_swap to handle a batch of consecutive pages. Note that
> folio_dup_swap already can handle a subset of this: nr_pages == 1 and
> nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.
>
> Currently we have a not-so-nice logic of passing in subpage == NULL if
> we mean to exercise the logic on the entire folio, and subpage != NULL if
> we want to exercise the logic on only that subpage. Remove this
> indirection, and explicitly pass subpage != NULL, and the number of
> pages required.
You've made the interface more confusing? Now we can update multiple subpages
but specify only one? :)
Let's try to actually refactor this into something sane... see below.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> mm/rmap.c | 2 +-
> mm/shmem.c | 2 +-
> mm/swap.h | 5 +++--
> mm/swapfile.c | 12 +++++-------
> 4 files changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index dd638429c963e..f6d5b187cf09b 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2282,7 +2282,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> goto discard;
> }
>
> - if (folio_dup_swap(folio, subpage) < 0) {
> + if (folio_dup_swap(folio, subpage, 1) < 0) {
> set_pte_at(mm, address, pvmw.pte, pteval);
> goto walk_abort;
> }
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 5e7dcf5bc5d3c..86ee34c9b40b3 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1695,7 +1695,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
> spin_unlock(&shmem_swaplist_lock);
> }
>
> - folio_dup_swap(folio, NULL);
> + folio_dup_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
> shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
>
> BUG_ON(folio_mapped(folio));
> diff --git a/mm/swap.h b/mm/swap.h
> index a77016f2423b9..d9cb58ebbddd1 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -206,7 +206,7 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
> * folio_put_swap(): does the opposite thing of folio_dup_swap().
> */
> int folio_alloc_swap(struct folio *folio);
> -int folio_dup_swap(struct folio *folio, struct page *subpage);
> +int folio_dup_swap(struct folio *folio, struct page *subpage, unsigned int nr_pages);
> void folio_put_swap(struct folio *folio, struct page *subpage);
>
> /* For internal use */
> @@ -390,7 +390,8 @@ static inline int folio_alloc_swap(struct folio *folio)
> return -EINVAL;
> }
>
> -static inline int folio_dup_swap(struct folio *folio, struct page *page)
> +static inline int folio_dup_swap(struct folio *folio, struct page *page,
> + unsigned int nr_pages)
> {
> return -EINVAL;
> }
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 915bc93964dbd..eaf61ae6c3817 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1738,7 +1738,8 @@ int folio_alloc_swap(struct folio *folio)
> /**
> * folio_dup_swap() - Increase swap count of swap entries of a folio.
> * @folio: folio with swap entries bounded.
> - * @subpage: if not NULL, only increase the swap count of this subpage.
> + * @subpage: Increase the swap count of this subpage till nr number of
> + * pages forward.
(Obviously also Kairui's point about missing entry in kdoc)
This is REALLY confusing sorry. And this interface is just a horror show.
Before we had subpage == only increase the swap count of the subpage.
Now subpage = the first subpage at which we do that? Please, no.
You just need to rework this interface in general, this is a hack.
Something like:
int __folio_dup_swap(struct folio *folio, unsigned int subpage_start_index,
unsigned int nr_subpages)
{
...
}
...
int folio_dup_swap_subpage(struct folio *folio, struct page *subpage)
{
return __folio_dup_swap(folio, folio_page_idx(folio, subpage), 1);
}
int folio_dup_swap(struct folio *folio)
{
return __folio_dup_swap(folio, 0, folio_nr_pages(folio));
}
Or something like that.
We're definitely _not_ keeping the subpage parameter like that and hacking on
batching, PLEASE.
> *
> * Typically called when the folio is unmapped and have its swap entry to
> * take its place: Swap entries allocated to a folio has count == 0 and pinned
> @@ -1752,18 +1753,15 @@ int folio_alloc_swap(struct folio *folio)
> * swap_put_entries_direct on its swap entry before this helper returns, or
> * the swap count may underflow.
> */
> -int folio_dup_swap(struct folio *folio, struct page *subpage)
> +int folio_dup_swap(struct folio *folio, struct page *subpage,
> + unsigned int nr_pages)
> {
> swp_entry_t entry = folio->swap;
> - unsigned long nr_pages = folio_nr_pages(folio);
>
> VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
> VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
>
> - if (subpage) {
> - entry.val += folio_page_idx(folio, subpage);
> - nr_pages = 1;
> - }
> + entry.val += folio_page_idx(folio, subpage);
>
> return swap_dup_entries_cluster(swap_entry_to_info(entry),
> swp_offset(entry), nr_pages);
> --
> 2.34.1
>
Thanks, Lorenzo
On 10/03/26 2:19 pm, Lorenzo Stoakes (Oracle) wrote:
> On Tue, Mar 10, 2026 at 01:00:10PM +0530, Dev Jain wrote:
>> Teach folio_dup_swap to handle a batch of consecutive pages. Note that
>> folio_dup_swap already can handle a subset of this: nr_pages == 1 and
>> nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.
>>
>> Currently we have a not-so-nice logic of passing in subpage == NULL if
>> we mean to exercise the logic on the entire folio, and subpage != NULL if
>> we want to exercise the logic on only that subpage. Remove this
>> indirection, and explicitly pass subpage != NULL, and the number of
>> pages required.
>
> You've made the interface more confusing? Now we can update multiple subpages
> but specify only one? :)
>
> Let's try to actually refactor this into something sane... see below.
>
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>> mm/rmap.c | 2 +-
>> mm/shmem.c | 2 +-
>> mm/swap.h | 5 +++--
>> mm/swapfile.c | 12 +++++-------
>> 4 files changed, 10 insertions(+), 11 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index dd638429c963e..f6d5b187cf09b 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -2282,7 +2282,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>> goto discard;
>> }
>>
>> - if (folio_dup_swap(folio, subpage) < 0) {
>> + if (folio_dup_swap(folio, subpage, 1) < 0) {
>> set_pte_at(mm, address, pvmw.pte, pteval);
>> goto walk_abort;
>> }
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 5e7dcf5bc5d3c..86ee34c9b40b3 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1695,7 +1695,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
>> spin_unlock(&shmem_swaplist_lock);
>> }
>>
>> - folio_dup_swap(folio, NULL);
>> + folio_dup_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
>> shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
>>
>> BUG_ON(folio_mapped(folio));
>> diff --git a/mm/swap.h b/mm/swap.h
>> index a77016f2423b9..d9cb58ebbddd1 100644
>> --- a/mm/swap.h
>> +++ b/mm/swap.h
>> @@ -206,7 +206,7 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
>> * folio_put_swap(): does the opposite thing of folio_dup_swap().
>> */
>> int folio_alloc_swap(struct folio *folio);
>> -int folio_dup_swap(struct folio *folio, struct page *subpage);
>> +int folio_dup_swap(struct folio *folio, struct page *subpage, unsigned int nr_pages);
>> void folio_put_swap(struct folio *folio, struct page *subpage);
>>
>> /* For internal use */
>> @@ -390,7 +390,8 @@ static inline int folio_alloc_swap(struct folio *folio)
>> return -EINVAL;
>> }
>>
>> -static inline int folio_dup_swap(struct folio *folio, struct page *page)
>> +static inline int folio_dup_swap(struct folio *folio, struct page *page,
>> + unsigned int nr_pages)
>> {
>> return -EINVAL;
>> }
>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index 915bc93964dbd..eaf61ae6c3817 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -1738,7 +1738,8 @@ int folio_alloc_swap(struct folio *folio)
>> /**
>> * folio_dup_swap() - Increase swap count of swap entries of a folio.
>> * @folio: folio with swap entries bounded.
>> - * @subpage: if not NULL, only increase the swap count of this subpage.
>> + * @subpage: Increase the swap count of this subpage till nr number of
>> + * pages forward.
>
> (Obviously also Kairui's point about missing entry in kdoc)
>
> This is REALLY confusing sorry. And this interface is just a horror show.
>
> Before we had subpage == only increase the swap count of the subpage.
>
> Now subpage = the first subpage at which we do that? Please, no.
>
> You just need to rework this interface in general, this is a hack.
>
> Something like:
>
> int __folio_dup_swap(struct folio *folio, unsigned int subpage_start_index,
> unsigned int nr_subpages)
> {
> ...
> }
>
> ...
>
> int folio_dup_swap_subpage(struct folio *folio, struct page *subpage)
> {
> return __folio_dup_swap(folio, folio_page_idx(folio, subpage), 1);
> }
>
> int folio_dup_swap(struct folio *folio)
> {
> return __folio_dup_swap(folio, 0, folio_nr_pages(folio));
> }
>
> Or something like that.
I get the essence of the point you are making.
Since most callers of folio_put_swap mean it for the entire folio, perhaps
we can have folio_put_swap for these callers, and the ones which are
not sure can call folio_put_swap_subpages? Same for folio_dup_swap.
And since we are calling it folio_put_swap_subpages, we can retain
the subpage parameter?
>
> We're definitely _not_ keeping the subpage parameter like that and hacking on
> batching, PLEASE.
>
>> *
>> * Typically called when the folio is unmapped and have its swap entry to
>> * take its place: Swap entries allocated to a folio has count == 0 and pinned
>> @@ -1752,18 +1753,15 @@ int folio_alloc_swap(struct folio *folio)
>> * swap_put_entries_direct on its swap entry before this helper returns, or
>> * the swap count may underflow.
>> */
>> -int folio_dup_swap(struct folio *folio, struct page *subpage)
>> +int folio_dup_swap(struct folio *folio, struct page *subpage,
>> + unsigned int nr_pages)
>> {
>> swp_entry_t entry = folio->swap;
>> - unsigned long nr_pages = folio_nr_pages(folio);
>>
>> VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
>> VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
>>
>> - if (subpage) {
>> - entry.val += folio_page_idx(folio, subpage);
>> - nr_pages = 1;
>> - }
>> + entry.val += folio_page_idx(folio, subpage);
>>
>> return swap_dup_entries_cluster(swap_entry_to_info(entry),
>> swp_offset(entry), nr_pages);
>> --
>> 2.34.1
>>
>
> Thanks, Lorenzo
On Wed, Mar 11, 2026 at 11:12:22AM +0530, Dev Jain wrote:
> On 10/03/26 2:19 pm, Lorenzo Stoakes (Oracle) wrote:
> > On Tue, Mar 10, 2026 at 01:00:10PM +0530, Dev Jain wrote:
> >> Teach folio_dup_swap to handle a batch of consecutive pages. Note that
> >> folio_dup_swap already can handle a subset of this: nr_pages == 1 and
> >> nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.
> >>
> >> Currently we have a not-so-nice logic of passing in subpage == NULL if
> >> we mean to exercise the logic on the entire folio, and subpage != NULL if
> >> we want to exercise the logic on only that subpage. Remove this
> >> indirection, and explicitly pass subpage != NULL, and the number of
> >> pages required.
> >
> > You've made the interface more confusing? Now we can update multiple subpages
> > but specify only one? :)

For the next version, please lose the "subpage" name. It's just a page.
folios have pages, not subpages.
On Wed, Mar 11, 2026 at 11:12:22AM +0530, Dev Jain wrote:
>
>
> On 10/03/26 2:19 pm, Lorenzo Stoakes (Oracle) wrote:
> > On Tue, Mar 10, 2026 at 01:00:10PM +0530, Dev Jain wrote:
> >> Teach folio_dup_swap to handle a batch of consecutive pages. Note that
> >> folio_dup_swap already can handle a subset of this: nr_pages == 1 and
> >> nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.
> >>
> >> Currently we have a not-so-nice logic of passing in subpage == NULL if
> >> we mean to exercise the logic on the entire folio, and subpage != NULL if
> >> we want to exercise the logic on only that subpage. Remove this
> >> indirection, and explicitly pass subpage != NULL, and the number of
> >> pages required.
> >
> > You've made the interface more confusing? Now we can update multiple subpages
> > but specify only one? :)
> >
> > Let's try to actually refactor this into something sane... see below.
> >
> >>
> >> Signed-off-by: Dev Jain <dev.jain@arm.com>
> >> ---
> >> mm/rmap.c | 2 +-
> >> mm/shmem.c | 2 +-
> >> mm/swap.h | 5 +++--
> >> mm/swapfile.c | 12 +++++-------
> >> 4 files changed, 10 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index dd638429c963e..f6d5b187cf09b 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -2282,7 +2282,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >> goto discard;
> >> }
> >>
> >> - if (folio_dup_swap(folio, subpage) < 0) {
> >> + if (folio_dup_swap(folio, subpage, 1) < 0) {
> >> set_pte_at(mm, address, pvmw.pte, pteval);
> >> goto walk_abort;
> >> }
> >> diff --git a/mm/shmem.c b/mm/shmem.c
> >> index 5e7dcf5bc5d3c..86ee34c9b40b3 100644
> >> --- a/mm/shmem.c
> >> +++ b/mm/shmem.c
> >> @@ -1695,7 +1695,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
> >> spin_unlock(&shmem_swaplist_lock);
> >> }
> >>
> >> - folio_dup_swap(folio, NULL);
> >> + folio_dup_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
> >> shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
> >>
> >> BUG_ON(folio_mapped(folio));
> >> diff --git a/mm/swap.h b/mm/swap.h
> >> index a77016f2423b9..d9cb58ebbddd1 100644
> >> --- a/mm/swap.h
> >> +++ b/mm/swap.h
> >> @@ -206,7 +206,7 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
> >> * folio_put_swap(): does the opposite thing of folio_dup_swap().
> >> */
> >> int folio_alloc_swap(struct folio *folio);
> >> -int folio_dup_swap(struct folio *folio, struct page *subpage);
> >> +int folio_dup_swap(struct folio *folio, struct page *subpage, unsigned int nr_pages);
> >> void folio_put_swap(struct folio *folio, struct page *subpage);
> >>
> >> /* For internal use */
> >> @@ -390,7 +390,8 @@ static inline int folio_alloc_swap(struct folio *folio)
> >> return -EINVAL;
> >> }
> >>
> >> -static inline int folio_dup_swap(struct folio *folio, struct page *page)
> >> +static inline int folio_dup_swap(struct folio *folio, struct page *page,
> >> + unsigned int nr_pages)
> >> {
> >> return -EINVAL;
> >> }
> >> diff --git a/mm/swapfile.c b/mm/swapfile.c
> >> index 915bc93964dbd..eaf61ae6c3817 100644
> >> --- a/mm/swapfile.c
> >> +++ b/mm/swapfile.c
> >> @@ -1738,7 +1738,8 @@ int folio_alloc_swap(struct folio *folio)
> >> /**
> >> * folio_dup_swap() - Increase swap count of swap entries of a folio.
> >> * @folio: folio with swap entries bounded.
> >> - * @subpage: if not NULL, only increase the swap count of this subpage.
> >> + * @subpage: Increase the swap count of this subpage till nr number of
> >> + * pages forward.
> >
> > (Obviously also Kairui's point about missing entry in kdoc)
> >
> > This is REALLY confusing sorry. And this interface is just a horror show.
> >
> > Before we had subpage == only increase the swap count of the subpage.
> >
> > Now subpage = the first subpage at which we do that? Please, no.
> >
> > You just need to rework this interface in general, this is a hack.
> >
> > Something like:
> >
> > int __folio_dup_swap(struct folio *folio, unsigned int subpage_start_index,
> > unsigned int nr_subpages)
> > {
> > ...
> > }
> >
> > ...
> >
> > int folio_dup_swap_subpage(struct folio *folio, struct page *subpage)
> > {
> > return __folio_dup_swap(folio, folio_page_idx(folio, subpage), 1);
> > }
> >
> > int folio_dup_swap(struct folio *folio)
> > {
> > return __folio_dup_swap(folio, 0, folio_nr_pages(folio));
> > }
> >
> > Or something like that.
>
>
> I get the essence of the point you are making.
>
> Since most callers of folio_put_swap mean it for entire folio, perhaps
> we can have folio_put_swap for these callers, and the ones which are
> not sure can call folio_put_swap_subpages? Same for folio_dup_swap.
> And since we are calling it folio_put_swap_subpages, we can retain
> the subpage parameter?
Right but I'm talking about dup variants here but the principle is the same.
>
> >
> > We're definitely _not_ keeping the subpage parameter like that and hacking on
> > batching, PLEASE.
> >
> >> *
> >> * Typically called when the folio is unmapped and have its swap entry to
> >> * take its place: Swap entries allocated to a folio has count == 0 and pinned
> >> @@ -1752,18 +1753,15 @@ int folio_alloc_swap(struct folio *folio)
> >> * swap_put_entries_direct on its swap entry before this helper returns, or
> >> * the swap count may underflow.
> >> */
> >> -int folio_dup_swap(struct folio *folio, struct page *subpage)
> >> +int folio_dup_swap(struct folio *folio, struct page *subpage,
> >> + unsigned int nr_pages)
> >> {
> >> swp_entry_t entry = folio->swap;
> >> - unsigned long nr_pages = folio_nr_pages(folio);
> >>
> >> VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
> >> VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
> >>
> >> - if (subpage) {
> >> - entry.val += folio_page_idx(folio, subpage);
> >> - nr_pages = 1;
> >> - }
> >> + entry.val += folio_page_idx(folio, subpage);
> >>
> >> return swap_dup_entries_cluster(swap_entry_to_info(entry),
> >> swp_offset(entry), nr_pages);
> >> --
> >> 2.34.1
> >>
> >
> > Thanks, Lorenzo
>
Cheers, Lorenzo
On Tue, Mar 10, 2026 at 3:36 PM Dev Jain <dev.jain@arm.com> wrote:
>
> Teach folio_dup_swap to handle a batch of consecutive pages. Note that
> folio_dup_swap already can handle a subset of this: nr_pages == 1 and
> nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.

Thanks a lot for doing this. I was thinking it's about time we respin
the batch unmapping of anon folios idea. Barry tried that before with
an RFC, and now batching from swap side is easier, so some parts can
be done cleaner.

> Currently we have a not-so-nice logic of passing in subpage == NULL if
> we mean to exercise the logic on the entire folio, and subpage != NULL if
> we want to exercise the logic on only that subpage. Remove this
> indirection, and explicitly pass subpage != NULL, and the number of
> pages required.

I was hoping most callers will just use the whole folio, but after
checking your code, yeah, using explicit subpage and nr does fit the
other parts better.

> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 915bc93964dbd..eaf61ae6c3817 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1738,7 +1738,8 @@ int folio_alloc_swap(struct folio *folio)
>  /**
>   * folio_dup_swap() - Increase swap count of swap entries of a folio.
>   * @folio: folio with swap entries bounded.
> - * @subpage: if not NULL, only increase the swap count of this subpage.
> + * @subpage: Increase the swap count of this subpage till nr number of
> + * pages forward.

The new nr_pages parameter isn't documented in the kernel-doc comment?
On 10/03/26 1:57 pm, Kairui Song wrote:
> On Tue, Mar 10, 2026 at 3:36 PM Dev Jain <dev.jain@arm.com> wrote:
>>
>> Teach folio_dup_swap to handle a batch of consecutive pages. Note that
>> folio_dup_swap already can handle a subset of this: nr_pages == 1 and
>> nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.
>
> Thanks a lot for doing this. I was thinking it's about time we respin
> the batch unmapping of anon folios idea. Barry tried that before with
> an RFC, and now batching from swap side is easier, so some parts can
> be done cleaner.
>
>> Currently we have a not-so-nice logic of passing in subpage == NULL if
>> we mean to exercise the logic on the entire folio, and subpage != NULL if
>> we want to exercise the logic on only that subpage. Remove this
>> indirection, and explicitly pass subpage != NULL, and the number of
>> pages required.
>
> I was hoping most callers will just use the whole folio, but after
> checking your code, yeah, using explicit subpage and nr does fit the
> other parts better.
>
>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index 915bc93964dbd..eaf61ae6c3817 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -1738,7 +1738,8 @@ int folio_alloc_swap(struct folio *folio)
>>  /**
>>   * folio_dup_swap() - Increase swap count of swap entries of a folio.
>>   * @folio: folio with swap entries bounded.
>> - * @subpage: if not NULL, only increase the swap count of this subpage.
>> + * @subpage: Increase the swap count of this subpage till nr number of
>> + * pages forward.
>
> The new nr_pages parameter isn't documented in the kernel-doc comment?

Oops, will add this in v2, thanks.
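Lorenzo's proposed split can be modeled in user space to see how the three entry points relate. In this toy sketch, only the function names come from his review; the stub types and the swap_count array are hypothetical stand-ins for the real folio and per-cluster swap counts:

```c
#include <assert.h>

#define FOLIO_MAX 16

/* Illustrative stand-ins for the kernel's folio and swap bookkeeping. */
struct page { char unused; };
struct folio {
	unsigned int nr_pages;
	struct page pages[FOLIO_MAX];
	int swap_count[FOLIO_MAX];	/* stand-in for per-entry swap counts */
};

static unsigned int folio_nr_pages(const struct folio *folio)
{
	return folio->nr_pages;
}

static unsigned int folio_page_idx(const struct folio *folio,
				   const struct page *page)
{
	return (unsigned int)(page - folio->pages);
}

/* Core helper: bump the count of nr_subpages entries from a start index. */
static int __folio_dup_swap(struct folio *folio,
			    unsigned int subpage_start_index,
			    unsigned int nr_subpages)
{
	for (unsigned int i = 0; i < nr_subpages; i++)
		folio->swap_count[subpage_start_index + i]++;
	return 0;
}

/* Single-page wrapper: callers like try_to_unmap_one() would use this. */
static int folio_dup_swap_subpage(struct folio *folio, struct page *subpage)
{
	return __folio_dup_swap(folio, folio_page_idx(folio, subpage), 1);
}

/* Whole-folio wrapper: callers like shmem_writeout() would use this. */
static int folio_dup_swap(struct folio *folio)
{
	return __folio_dup_swap(folio, 0, folio_nr_pages(folio));
}
```

The design point is that the batching capability lives only in the double-underscore core, while the two wrappers keep the common call sites self-describing, avoiding the "one subpage pointer, many pages" ambiguity the review objects to.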