[PATCH] mm/vmscan: consider previously reclaimed pages in shrink_lruvec()
Posted by Hyeongtak Ji 2 years, 1 month ago
shrink_lruvec() currently ignores previously reclaimed pages in
scan_control->nr_reclaimed.  This can lead shrink_lruvec() to reclaim
more pages than expected.

This patch fixes shrink_lruvec() to take into account the previously
reclaimed pages.
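
For illustration (numbers are hypothetical): if sc->nr_to_reclaim is 32 and
sc->nr_reclaimed is already 24 when shrink_lruvec() is entered, the function
still targets a full 32 pages from this lruvec rather than the remaining 8.
The local target only counts pages reclaimed within the current call, roughly
(abridged sketch of mm/vmscan.c; exact code varies by kernel version):

	static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
	{
		unsigned long nr_reclaimed = 0;	/* local count, starts at 0 */
		unsigned long nr_to_reclaim = sc->nr_to_reclaim;
		...
		while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
		       nr[LRU_INACTIVE_FILE]) {
			...
			nr_reclaimed += shrink_list(lru, nr_to_scan, lruvec, sc);
			...
			/* bail out only once the *local* target is met */
			if (nr_reclaimed < nr_to_reclaim || proportional_reclaim)
				continue;
			...
		}
		sc->nr_reclaimed += nr_reclaimed;
	}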

Signed-off-by: Hyeongtak Ji <hyeongtak.ji@sk.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1080209a568b..315da4ae16f1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6261,7 +6261,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	unsigned long nr_to_scan;
 	enum lru_list lru;
 	unsigned long nr_reclaimed = 0;
-	unsigned long nr_to_reclaim = sc->nr_to_reclaim;
+	unsigned long nr_to_reclaim = sc->nr_to_reclaim - sc->nr_reclaimed;
 	bool proportional_reclaim;
 	struct blk_plug plug;
 
-- 
2.7.4
Re: [PATCH] mm/vmscan: consider previously reclaimed pages in shrink_lruvec()
Posted by Johannes Weiner 1 year, 10 months ago
On Mon, Aug 07, 2023 at 07:01:16PM +0900, Hyeongtak Ji wrote:
> shrink_lruvec() currently ignores previously reclaimed pages in
> scan_control->nr_reclaimed.  This can lead shrink_lruvec() to reclaim
> more pages than expected.
> 
> This patch fixes shrink_lruvec() to take into account the previously
> reclaimed pages.

Do you run into real-world issues from this? The code has been like
this for at least a decade.

It's an intentional choice to ensure fairness across all visited
cgroups. sc->nr_to_reclaim is 32 pages or less - it's only to guard
against extreme overreclaim. But we want to make sure we reclaim a bit
from all cgroups, rather than always hit the first one and then bail.
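
For context, the caller walks every cgroup under the reclaim target and
invokes shrink_lruvec() once per lruvec, each time against the same small
cap (rough sketch of shrink_node_memcgs() in mm/vmscan.c; abridged, details
vary by kernel version):

	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
	do {
		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

		/* each cgroup is shrunk against the same small target */
		shrink_lruvec(lruvec, sc);
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));

If shrink_lruvec() subtracted sc->nr_reclaimed from its target, cgroups
visited later in this loop would inherit a budget already consumed by
earlier ones and see little or no pressure.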
Re: [PATCH] mm/vmscan: consider previously reclaimed pages in shrink_lruvec()
Posted by Hyeongtak Ji 1 year, 10 months ago
Hello,

Thank you for your reply.

On Wed, Nov 8, 2023 at 3:33 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Mon, Aug 07, 2023 at 07:01:16PM +0900, Hyeongtak Ji wrote:
> > shrink_lruvec() currently ignores previously reclaimed pages in
> > scan_control->nr_reclaimed.  This can lead shrink_lruvec() to reclaim
> > more pages than expected.
> >
> > This patch fixes shrink_lruvec() to take into account the previously
> > reclaimed pages.
>
> Do you run into real-world issues from this? The code has been like
> this for at least a decade.
>

I believed this was merely a misinitialization that resulted in
shrink_lruvec() reclaiming more pages than intended. However, I do
acknowledge that there have not been any real-world issues arising from
this behavior.

> It's an intentional choice to ensure fairness across all visited
> cgroups. sc->nr_to_reclaim is 32 pages or less - it's only to guard

sc->nr_to_reclaim can be much larger than 32 (e.g., about 5K pages) in the
case I was worried about: kswapd_shrink_node() in mm/vmscan.c sets the
value, which is then passed down to shrink_lruvec().
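
For reference, kswapd derives the target from the zone high watermarks,
which is how it can reach thousands of pages (abridged from
kswapd_shrink_node(); exact code varies by kernel version):

	/* Reclaim a number of pages proportional to the number of zones */
	sc->nr_to_reclaim = 0;
	for (z = 0; z <= sc->reclaim_idx; z++) {
		zone = pgdat->node_zones + z;
		if (!managed_zone(zone))
			continue;

		sc->nr_to_reclaim += max(high_wmark_pages(zone),
					 SWAP_CLUSTER_MAX);
	}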

> against extreme overreclaim. But we want to make sure we reclaim a bit
> from all cgroups, rather than always hit the first one and then bail.