zspage migration can terminate as soon as it moves the last
allocated object from the source zspage. Add a simple helper
zspage_empty() that tests zspage ->inuse on each migration
iteration.
Suggested-by: Alexey Romanov <AVRomanov@sberdevices.ru>
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
mm/zsmalloc.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3f057970504e..5d60eaedc3b7 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1147,6 +1147,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
return get_zspage_inuse(zspage) == class->objs_per_zspage;
}
+static bool zspage_empty(struct zspage *zspage)
+{
+ return get_zspage_inuse(zspage) == 0;
+}
+
/**
* zs_lookup_class_index() - Returns index of the zsmalloc &size_class
* that hold objects of the provided size.
@@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
obj_idx++;
record_obj(handle, free_obj);
obj_free(class->size, used_obj);
+
+ /* Stop if there are no more objects to migrate */
+ if (zspage_empty(get_zspage(s_page)))
+ break;
}
/* Remember last position in this iteration */
--
2.41.0.162.gfafddb0af9-goog
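As a rough illustration of what the early exit buys, here is a small userspace toy model (not the kernel code: the struct, its fields and the slot-scanning loop below are invented for this sketch). Without the zspage_empty() check the loop keeps walking the remaining slots of an almost-empty source zspage even after its last allocated object has been moved; with the check it stops immediately.

/*
 * Userspace toy model of the early exit, not kernel code: the struct,
 * its fields and the slot-scanning loop are invented for illustration.
 */
#include <stdio.h>

struct toy_zspage {
	int inuse;		/* allocated objects still on this zspage */
	int objs_per_zspage;	/* total slots a full scan would visit */
};

static int zspage_empty(const struct toy_zspage *zspage)
{
	return zspage->inuse == 0;
}

static void migrate_zspage(struct toy_zspage *src, struct toy_zspage *dst)
{
	int scanned = 0;

	for (int idx = 0; idx < src->objs_per_zspage; idx++) {
		scanned++;

		if (src->inuse == 0)
			continue;	/* slot holds no allocated object */

		/* "move" one object: free it on src, allocate it on dst */
		src->inuse--;
		dst->inuse++;

		/* Stop if there are no more objects to migrate */
		if (zspage_empty(src))
			break;
	}

	printf("scanned %d of %d slots\n", scanned, src->objs_per_zspage);
}

int main(void)
{
	struct toy_zspage src = { .inuse = 3, .objs_per_zspage = 128 };
	struct toy_zspage dst = { .inuse = 0, .objs_per_zspage = 128 };

	/* With the break: 3 slots scanned; without it: all 128. */
	migrate_zspage(&src, &dst);
	return 0;
}

Built with a plain cc, the sketch prints "scanned 3 of 128 slots"; dropping the break makes it walk all 128 slots.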
Hello!
On Fri, Jun 23, 2023 at 01:40:01PM +0900, Sergey Senozhatsky wrote:
> [...]
> @@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
> obj_idx++;
> record_obj(handle, free_obj);
> obj_free(class->size, used_obj);
> +
> + /* Stop if there are no more objects to migrate */
> + if (zspage_empty(get_zspage(s_page)))
> + break;
> }
>
> /* Remember last position in this iteration */
> --
> 2.41.0.162.gfafddb0af9-goog
>
I think we can add a similar check in the zs_reclaim_page() function.
There we also scan the zspage looking for allocated objects.
--
Thank you,
Alexey
On (23/06/23 10:49), Alexey Romanov wrote:
> I think we can add a similar check in the zs_reclaim_page() function.
> There we also scan the zspage looking for allocated objects.
LRU was moved to zswap, so zs_reclaim_page() doesn't exist any longer
(in linux-next).
On Sat, Jun 24, 2023 at 11:29:17AM +0900, Sergey Senozhatsky wrote:
> On (23/06/23 10:49), Alexey Romanov wrote:
> > I think we can add a similar check in the zs_reclaim_page() function.
> > There we also scan the zspage looking for allocated objects.
>
> LRU was moved to zswap, so zs_reclaim_page() doesn't exist any longer
> (in linux-next).
Yeah, sorry. I was just looking at the current linux master.
--
Thank you,
Alexey