From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko,
    Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao
Subject: [PATCH mm-unstable v1 5/8] mm: multi-gen LRU: shuffle should_run_aging()
Date: Thu, 1 Dec 2022 15:39:21 -0700
Message-Id: <20221201223923.873696-6-yuzhao@google.com>
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>

Move should_run_aging() next to its only remaining caller.
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/vmscan.c | 124 ++++++++++++++++++++++++++--------------------------
 1 file changed, 62 insertions(+), 62 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 67967a4b18a9..0557adce75c5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4452,68 +4452,6 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 	return true;
 }
 
-static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq,
-			     struct scan_control *sc, bool can_swap, unsigned long *nr_to_scan)
-{
-	int gen, type, zone;
-	unsigned long old = 0;
-	unsigned long young = 0;
-	unsigned long total = 0;
-	struct lru_gen_folio *lrugen = &lruvec->lrugen;
-	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
-	DEFINE_MIN_SEQ(lruvec);
-
-	/* whether this lruvec is completely out of cold folios */
-	if (min_seq[!can_swap] + MIN_NR_GENS > max_seq) {
-		*nr_to_scan = 0;
-		return true;
-	}
-
-	for (type = !can_swap; type < ANON_AND_FILE; type++) {
-		unsigned long seq;
-
-		for (seq = min_seq[type]; seq <= max_seq; seq++) {
-			unsigned long size = 0;
-
-			gen = lru_gen_from_seq(seq);
-
-			for (zone = 0; zone < MAX_NR_ZONES; zone++)
-				size += max(READ_ONCE(lrugen->nr_pages[gen][type][zone]), 0L);
-
-			total += size;
-			if (seq == max_seq)
-				young += size;
-			else if (seq + MIN_NR_GENS == max_seq)
-				old += size;
-		}
-	}
-
-	/* try to scrape all its memory if this memcg was deleted */
-	*nr_to_scan = mem_cgroup_online(memcg) ? (total >> sc->priority) : total;
-
-	/*
-	 * The aging tries to be lazy to reduce the overhead, while the eviction
-	 * stalls when the number of generations reaches MIN_NR_GENS. Hence, the
-	 * ideal number of generations is MIN_NR_GENS+1.
-	 */
-	if (min_seq[!can_swap] + MIN_NR_GENS < max_seq)
-		return false;
-
-	/*
-	 * It's also ideal to spread pages out evenly, i.e., 1/(MIN_NR_GENS+1)
-	 * of the total number of pages for each generation. A reasonable range
-	 * for this average portion is [1/MIN_NR_GENS, 1/(MIN_NR_GENS+2)]. The
-	 * aging cares about the upper bound of hot pages, while the eviction
-	 * cares about the lower bound of cold pages.
-	 */
-	if (young * MIN_NR_GENS > total)
-		return true;
-	if (old * (MIN_NR_GENS + 2) < total)
-		return true;
-
-	return false;
-}
-
 static bool lruvec_is_sizable(struct lruvec *lruvec, struct scan_control *sc)
 {
 	int gen, type, zone;
@@ -5097,6 +5035,68 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 	return scanned;
 }
 
+static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq,
+			     struct scan_control *sc, bool can_swap, unsigned long *nr_to_scan)
+{
+	int gen, type, zone;
+	unsigned long old = 0;
+	unsigned long young = 0;
+	unsigned long total = 0;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	DEFINE_MIN_SEQ(lruvec);
+
+	/* whether this lruvec is completely out of cold folios */
+	if (min_seq[!can_swap] + MIN_NR_GENS > max_seq) {
+		*nr_to_scan = 0;
+		return true;
+	}
+
+	for (type = !can_swap; type < ANON_AND_FILE; type++) {
+		unsigned long seq;
+
+		for (seq = min_seq[type]; seq <= max_seq; seq++) {
+			unsigned long size = 0;
+
+			gen = lru_gen_from_seq(seq);
+
+			for (zone = 0; zone < MAX_NR_ZONES; zone++)
+				size += max(READ_ONCE(lrugen->nr_pages[gen][type][zone]), 0L);
+
+			total += size;
+			if (seq == max_seq)
+				young += size;
+			else if (seq + MIN_NR_GENS == max_seq)
+				old += size;
+		}
+	}
+
+	/* try to scrape all its memory if this memcg was deleted */
+	*nr_to_scan = mem_cgroup_online(memcg) ? (total >> sc->priority) : total;
+
+	/*
+	 * The aging tries to be lazy to reduce the overhead, while the eviction
+	 * stalls when the number of generations reaches MIN_NR_GENS. Hence, the
+	 * ideal number of generations is MIN_NR_GENS+1.
+	 */
+	if (min_seq[!can_swap] + MIN_NR_GENS < max_seq)
+		return false;
+
+	/*
+	 * It's also ideal to spread pages out evenly, i.e., 1/(MIN_NR_GENS+1)
+	 * of the total number of pages for each generation. A reasonable range
+	 * for this average portion is [1/MIN_NR_GENS, 1/(MIN_NR_GENS+2)]. The
+	 * aging cares about the upper bound of hot pages, while the eviction
+	 * cares about the lower bound of cold pages.
+	 */
+	if (young * MIN_NR_GENS > total)
+		return true;
+	if (old * (MIN_NR_GENS + 2) < total)
+		return true;
+
+	return false;
+}
+
 /*
  * For future optimizations:
  * 1. Defer try_to_inc_max_seq() to workqueues to reduce latency for memcg
-- 
2.39.0.rc0.267.gcb52ba06e7-goog
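
For reference, the heuristic the relocated comments describe can be restated
outside the kernel. Below is a minimal, compilable userspace sketch of
should_run_aging()'s decision logic with the lruvec/memcg plumbing stripped
out. The name should_run_aging_sketch() and the explicit nr_gens, young, old
and total parameters are illustrative stand-ins for the values the kernel
derives from min_seq/max_seq and lrugen->nr_pages[][][]; the MIN_NR_GENS
value of 2 mirrors the kernel definition at the time of this series.

#include <stdbool.h>
#include <stdio.h>

/* mirrors the kernel's MIN_NR_GENS (2 at the time of this series) */
#define MIN_NR_GENS 2

/*
 * Sketch of the decision logic in should_run_aging(): nr_gens stands in
 * for max_seq - min_seq + 1; young, old and total stand in for the page
 * counts the kernel tallies from lrugen->nr_pages[][][]. Returns true
 * when the aging should produce a new generation.
 */
static bool should_run_aging_sketch(unsigned long nr_gens,
				    unsigned long young, unsigned long old,
				    unsigned long total)
{
	/* fewer than MIN_NR_GENS+1 generations: out of cold folios, must age */
	if (nr_gens < MIN_NR_GENS + 1)
		return true;

	/* more than the ideal MIN_NR_GENS+1 generations: aging stays lazy */
	if (nr_gens > MIN_NR_GENS + 1)
		return false;

	/* hot pages above the 1/MIN_NR_GENS upper bound: age */
	if (young * MIN_NR_GENS > total)
		return true;

	/* cold pages below the 1/(MIN_NR_GENS+2) lower bound: age */
	if (old * (MIN_NR_GENS + 2) < total)
		return true;

	return false;
}

int main(void)
{
	/* 600 of 1000 pages are hot: 600 * 2 > 1000, so age */
	printf("hot-heavy:   %d\n",
	       should_run_aging_sketch(MIN_NR_GENS + 1, 600, 200, 1000));

	/* 300 hot, 400 cold: 600 <= 1000 and 1600 >= 1000, so stay lazy */
	printf("balanced:    %d\n",
	       should_run_aging_sketch(MIN_NR_GENS + 1, 300, 400, 1000));

	/* only MIN_NR_GENS generations left: no cold folios to evict */
	printf("out of cold: %d\n",
	       should_run_aging_sketch(MIN_NR_GENS, 0, 0, 0));

	return 0;
}

The kernel function also computes *nr_to_scan as a side effect
(total >> sc->priority when the memcg is online, the whole total when it
has been deleted); that output is orthogonal to the age-or-not decision
sketched here and is omitted.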