Date: Fri, 2 Dec 2022 03:15:10 +0000
Message-ID: <20221202031512.1365483-2-yosryahmed@google.com>
In-Reply-To: <20221202031512.1365483-1-yosryahmed@google.com>
Subject: [PATCH v3 1/3] mm: memcg: fix stale protection of reclaim target memcg
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton, Shakeel Butt, Roman Gushchin, Johannes Weiner,
    Michal Hocko, Yu Zhao, Muchun Song, Tejun Heo
Cc: Matthew Wilcox (Oracle), Vasily Averin, Vlastimil Babka, Chris Down,
    David Rientjes, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
During reclaim, mem_cgroup_calculate_protection() is used to determine
the effective protection (emin and elow) values of a memcg. The
protection of the reclaim target is ignored, but we cannot set its
effective protection to 0 due to a limitation of the current
implementation (see the comment in mem_cgroup_protection()). Instead,
we leave its effective protection values unchanged, and later ignore
them in mem_cgroup_protection().

However, mem_cgroup_protection() is called later in
shrink_lruvec()->get_scan_count(), which is after the
mem_cgroup_below_{min/low}() checks in shrink_node_memcgs(). As a
result, the stale effective protection values of the target memcg may
lead us to skip reclaiming from the target memcg entirely, before
calling shrink_lruvec(). This can be even worse with recursive
protection, where the stale target memcg protection can be higher than
its standalone protection. See the two examples below (a similar
version of example (a) is added to test_memcontrol in a later patch).

(a) A simple example with proactive reclaim is as follows. Consider
the following hierarchy:

		ROOT
		 |
		 A
		 |
		 B (memory.min = 10M)

Consider the following scenario:
- B has memory.current = 10M.
- The system undergoes global reclaim (or memcg reclaim in A).
- In shrink_node_memcgs():
  - mem_cgroup_calculate_protection() calculates the effective min
    (emin) of B as 10M.
  - mem_cgroup_below_min() returns true for B, so we do not reclaim
    from B.
- Now if we want to reclaim 5M from B using proactive reclaim
  (memory.reclaim), we should be able to, as the protection of the
  target memcg should be ignored.
- In shrink_node_memcgs():
  - mem_cgroup_calculate_protection() immediately returns for B
    without doing anything, as B is the target memcg, relying on
    mem_cgroup_protection() to ignore B's stale effective min (still
    10M).
  - mem_cgroup_below_min() reads the stale effective min for B and we
    skip B instead of ignoring its protection as intended, as we never
    reach mem_cgroup_protection().

(b) A more complex example with recursive protection is as follows.
Consider the following hierarchy with memory_recursiveprot:

		ROOT
		 |
		 A (memory.min = 50M)
		 |
		 B (memory.min = 10M, memory.high = 40M)

Consider the following scenario:
- B has memory.current = 35M.
- The system undergoes global reclaim (target memcg is NULL).
- B will have an effective min of 50M (all of A's unclaimed
  protection).
- B will not be reclaimed from.
- Now allocate 10M more memory in B, pushing it above its high limit.
- The system undergoes memcg reclaim from B (target memcg is B).
- As in example (a), we do nothing in
  mem_cgroup_calculate_protection(), then call mem_cgroup_below_min(),
  which reads the stale effective min for B (50M) and skips it. This
  case is even worse, because we are not just considering B's
  standalone protection (10M) but reading a much higher stale
  protection (50M), which causes us to not reclaim from B at all.

This is an artifact of commit 45c7f7e1ef17 ("mm, memcg: decouple
e{low,min} state mutations from protection checks") which made
mem_cgroup_calculate_protection() only change the state without
returning any value. Before that commit, we used to return
MEMCG_PROT_NONE for the target memcg, which would cause us to skip the
mem_cgroup_below_{min/low}() checks. After that commit we do not
return anything, and we end up checking the min & low effective
protections for the target memcg, which are stale.
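For illustration, example (a) can be expressed as a selftest-style
sketch. This is only a sketch, assuming the helpers from
tools/testing/selftests/cgroup/cgroup_util.h (cg_name(), cg_create(),
cg_write(), cg_read_long(), cg_destroy(), values_close()); the
function name is hypothetical, the step that charges 10M to B is
elided, and a fuller version is what the later test_memcontrol patch
in this series adds:

/*
 * Sketch of example (a): proactive reclaim from B must ignore B's own
 * memory.min. Assumes B already holds ~10M (allocation elided), and
 * that the ancestor chain propagates enough protection for B's min to
 * be effective in the first place. Before this fix, the
 * memory.reclaim write fails with -EAGAIN because B's stale effective
 * min makes reclaim skip it.
 */
static int sketch_example_a(const char *root)
{
	char *A = cg_name(root, "A");
	char *B = cg_name(A, "B");
	int ret = -1;

	if (cg_create(A) || cg_create(B))
		goto cleanup;
	if (cg_write(B, "memory.min", "10M"))
		goto cleanup;

	/* ... charge ~10M of memory to B ... */

	/* Reclaim 5M from B; the target's own protection is ignored. */
	if (cg_write(B, "memory.reclaim", "5M"))
		goto cleanup;

	if (values_close(cg_read_long(B, "memory.current"), MB(5), 10))
		ret = 0;
cleanup:
	cg_destroy(B);
	cg_destroy(A);
	free(B);
	free(A);
	return ret;
}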
Update mem_cgroup_supports_protection() to also check whether the
memcg being checked is the reclaim target, and rename it to
mem_cgroup_unprotected() (it now returns true if we should not protect
the memcg, which makes the logic at the call sites simpler).

Fixes: 45c7f7e1ef17 ("mm, memcg: decouple e{low,min} state mutations from protection checks")
Signed-off-by: Yosry Ahmed
Reviewed-by: Roman Gushchin
---
 include/linux/memcontrol.h | 31 +++++++++++++++++++++----------
 mm/vmscan.c                | 11 ++++++-----
 2 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e1644a24009c..d3c8203cab6c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -615,28 +615,32 @@ static inline void mem_cgroup_protection(struct mem_cgroup *root,
 void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 				     struct mem_cgroup *memcg);
 
-static inline bool mem_cgroup_supports_protection(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_unprotected(struct mem_cgroup *target,
+					  struct mem_cgroup *memcg)
 {
 	/*
 	 * The root memcg doesn't account charges, and doesn't support
-	 * protection.
+	 * protection. The target memcg's protection is ignored, see
+	 * mem_cgroup_calculate_protection() and mem_cgroup_protection()
 	 */
-	return !mem_cgroup_disabled() && !mem_cgroup_is_root(memcg);
-
+	return mem_cgroup_disabled() || mem_cgroup_is_root(memcg) ||
+		memcg == target;
 }
 
-static inline bool mem_cgroup_below_low(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
-	if (!mem_cgroup_supports_protection(memcg))
+	if (mem_cgroup_unprotected(target, memcg))
 		return false;
 
 	return READ_ONCE(memcg->memory.elow) >=
 		page_counter_read(&memcg->memory);
 }
 
-static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_below_min(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
-	if (!mem_cgroup_supports_protection(memcg))
+	if (mem_cgroup_unprotected(target, memcg))
 		return false;
 
 	return READ_ONCE(memcg->memory.emin) >=
@@ -1209,12 +1213,19 @@ static inline void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 {
 }
 
-static inline bool mem_cgroup_below_low(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_unprotected(struct mem_cgroup *target,
+					  struct mem_cgroup *memcg)
+{
+	return true;
+}
+static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
 	return false;
 }
 
-static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_below_min(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
 	return false;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 04d8b88e5216..79ef0fe67518 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4486,7 +4486,7 @@ static bool age_lruvec(struct lruvec *lruvec, struct scan_control *sc, unsigned
 
 	mem_cgroup_calculate_protection(NULL, memcg);
 
-	if (mem_cgroup_below_min(memcg))
+	if (mem_cgroup_below_min(NULL, memcg))
 		return false;
 
 	need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, swappiness, &nr_to_scan);
@@ -5047,8 +5047,9 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 	DEFINE_MAX_SEQ(lruvec);
 	DEFINE_MIN_SEQ(lruvec);
 
-	if (mem_cgroup_below_min(memcg) ||
-	    (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim))
+	if (mem_cgroup_below_min(sc->target_mem_cgroup, memcg) ||
+	    (mem_cgroup_below_low(sc->target_mem_cgroup, memcg) &&
+	     !sc->memcg_low_reclaim))
 		return 0;
 
 	*need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, can_swap, &nr_to_scan);
@@ -6048,13 +6049,13 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 		mem_cgroup_calculate_protection(target_memcg, memcg);
 
-		if (mem_cgroup_below_min(memcg)) {
+		if (mem_cgroup_below_min(target_memcg, memcg)) {
 			/*
 			 * Hard protection.
 			 * If there is no reclaimable memory, OOM.
 			 */
 			continue;
-		} else if (mem_cgroup_below_low(memcg)) {
+		} else if (mem_cgroup_below_low(target_memcg, memcg)) {
 			/*
 			 * Soft protection.
 			 * Respect the protection only as long as
-- 
2.39.0.rc0.267.gcb52ba06e7-goog
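As a quick way to see the new semantics outside the kernel, here is a
standalone C model of the renamed check. This is a sketch only: struct
memcg is a stub, mem_cgroup_disabled() is omitted, and nothing here is
kernel code — it just demonstrates the truth table the patch
establishes (root and the reclaim target are unprotected; everything
else keeps its protection):

#include <stdbool.h>
#include <stdio.h>

/* Stub stand-in for struct mem_cgroup; enough to model the check. */
struct memcg { bool is_root; };

/* Mirrors the new mem_cgroup_unprotected(): the root memcg and the
 * reclaim target itself are treated as unprotected. */
static bool mem_cgroup_unprotected(const struct memcg *target,
				   const struct memcg *memcg)
{
	return memcg->is_root || memcg == target;
}

int main(void)
{
	struct memcg root = { .is_root = true };
	struct memcg B = { .is_root = false };

	/* Global reclaim (target == NULL): B keeps its protection. */
	printf("global reclaim:   B unprotected = %d\n",
	       mem_cgroup_unprotected(NULL, &B));
	/* Reclaim targeting B: B's own protection is ignored. */
	printf("targeted reclaim: B unprotected = %d\n",
	       mem_cgroup_unprotected(&B, &B));
	/* The root memcg is always unprotected. */
	printf("root:             unprotected = %d\n",
	       mem_cgroup_unprotected(NULL, &root));
	return 0;
}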
Date: Fri, 2 Dec 2022 03:15:11 +0000
Message-ID: <20221202031512.1365483-3-yosryahmed@google.com>
In-Reply-To: <20221202031512.1365483-1-yosryahmed@google.com>
Subject: [PATCH v3 2/3] selftests: cgroup: refactor proactive reclaim code to reclaim_until()
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton, Shakeel Butt, Roman Gushchin, Johannes Weiner,
    Michal Hocko, Yu Zhao, Muchun Song, Tejun Heo
Cc: Matthew Wilcox (Oracle), Vasily Averin, Vlastimil Babka, Chris Down,
    David Rientjes, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org

Refactor the code that drives writing to memory.reclaim (retrying,
error handling, etc.) out of test_memcg_reclaim() into a helper called
reclaim_until(), which proactively reclaims from a memcg until its
usage reaches a certain value. While we are at it, refactor and
simplify the reclaim loop. The helper will be used by another test in
a following patch.

Signed-off-by: Yosry Ahmed
Suggested-by: Roman Gushchin
Reviewed-by: Roman Gushchin
---
 .../selftests/cgroup/test_memcontrol.c        | 80 ++++++++++---------
 1 file changed, 44 insertions(+), 36 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index 8833359556f3..a8f4700353a4 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -645,6 +645,48 @@ static int test_memcg_max(const char *root)
 	return ret;
 }
 
+/*
+ * Reclaim from @memcg until usage reaches @goal by writing to
+ * memory.reclaim.
+ *
+ * This function will return false if the usage is already below the
+ * goal.
+ *
+ * This function assumes that writing to memory.reclaim is the only
+ * source of change in memory.current (no concurrent allocations or
+ * reclaim).
+ *
+ * This function makes sure memory.reclaim is sane. It will return
+ * false if memory.reclaim's error codes do not make sense, even if
+ * the usage goal was satisfied.
+ */
+static bool reclaim_until(const char *memcg, long goal)
+{
+	char buf[64];
+	int retries, err;
+	long current, to_reclaim;
+	bool reclaimed = false;
+
+	for (retries = 5; retries > 0; retries--) {
+		current = cg_read_long(memcg, "memory.current");
+
+		if (current < goal || values_close(current, goal, 3))
+			break;
+		/* Did memory.reclaim return 0 incorrectly? */
+		else if (reclaimed)
+			return false;
+
+		to_reclaim = current - goal;
+		snprintf(buf, sizeof(buf), "%ld", to_reclaim);
+		err = cg_write(memcg, "memory.reclaim", buf);
+		if (!err)
+			reclaimed = true;
+		else if (err != -EAGAIN)
+			return false;
+	}
+	return reclaimed;
+}
+
 /*
  * This test checks that memory.reclaim reclaims the given
  * amount of memory (from both anon and file, if possible).
@@ -653,8 +695,7 @@ static int test_memcg_reclaim(const char *root)
 {
 	int ret = KSFT_FAIL, fd, retries;
 	char *memcg;
-	long current, expected_usage, to_reclaim;
-	char buf[64];
+	long current, expected_usage;
 
 	memcg = cg_name(root, "memcg_test");
 	if (!memcg)
@@ -705,41 +746,8 @@ static int test_memcg_reclaim(const char *root)
 	 * Reclaim until current reaches 30M, this makes sure we hit both anon
 	 * and file if swap is enabled.
 	 */
-	retries = 5;
-	while (true) {
-		int err;
-
-		current = cg_read_long(memcg, "memory.current");
-		to_reclaim = current - MB(30);
-
-		/*
-		 * We only keep looping if we get EAGAIN, which means we could
-		 * not reclaim the full amount.
-		 */
-		if (to_reclaim <= 0)
-			goto cleanup;
-
-		snprintf(buf, sizeof(buf), "%ld", to_reclaim);
-		err = cg_write(memcg, "memory.reclaim", buf);
-		if (!err) {
-			/*
-			 * If writing succeeds, then the written amount should have been
-			 * fully reclaimed (and maybe more).
-			 */
-			current = cg_read_long(memcg, "memory.current");
-			if (!values_close(current, MB(30), 3) && current > MB(30))
-				goto cleanup;
-			break;
-		}
-
-		/* The kernel could not reclaim the full amount, try again. */
-		if (err == -EAGAIN && retries--)
-			continue;
-
-		/* We got an unexpected error or ran out of retries. */
+	if (!reclaim_until(memcg, MB(30)))
 		goto cleanup;
-	}
 
 	ret = KSFT_PASS;
 cleanup:
-- 
2.39.0.rc0.267.gcb52ba06e7-goog
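The retry contract is worth spelling out: each iteration of
reclaim_until() re-reads memory.current, asks the kernel for exactly
the remaining delta, and tolerates -EAGAIN (partial reclaim) up to
five times, while a successful write that still leaves usage above the
goal is treated as a misbehaving memory.reclaim. A minimal caller
looks like the hunk above; the sketch below is illustrative (the
function name is hypothetical, the goal value arbitrary, and "memcg"
is the path of a cgroup created with the cgroup_util.h helpers):

/* Minimal caller sketch: drive the memcg's usage down to ~30M. */
static int shrink_to_30m(const char *memcg)
{
	/* Up to five -EAGAIN retries happen inside the helper. */
	if (!reclaim_until(memcg, MB(30)))
		return -1;	/* stuck above the goal, or bogus errno */
	return 0;
}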
Date: Fri, 2 Dec 2022 03:15:12 +0000
Message-ID: <20221202031512.1365483-4-yosryahmed@google.com>
In-Reply-To: <20221202031512.1365483-1-yosryahmed@google.com>
Subject: [PATCH v3 3/3] selftests: cgroup: make sure reclaim target memcg is unprotected
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton, Shakeel Butt, Roman Gushchin, Johannes Weiner,
    Michal Hocko, Yu Zhao, Muchun Song, Tejun Heo
Cc: Matthew Wilcox (Oracle), Vasily Averin, Vlastimil Babka, Chris Down,
    David Rientjes, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org

Make sure that we ignore the protection of a memcg that is the target
of memcg reclaim.

Signed-off-by: Yosry Ahmed
Reviewed-by: Roman Gushchin
---
 tools/testing/selftests/cgroup/test_memcontrol.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index a8f4700353a4..1e616a8c6a9c 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -238,6 +238,8 @@ static int cg_test_proc_killed(const char *cgroup)
 	return -1;
 }
 
+static bool reclaim_until(const char *memcg, long goal);
+
 /*
  * First, this test creates the following hierarchy:
  * A       memory.min = 0,    memory.max = 200M
@@ -266,6 +268,12 @@ static int cg_test_proc_killed(const char *cgroup)
  * unprotected memory in A available, and checks that:
  * a) memory.min protects pagecache even in this case,
  * b) memory.low allows reclaiming page cache with low events.
+ *
+ * Then we try to reclaim from A/B/C using memory.reclaim until its
+ * usage reaches 10M.
+ * This makes sure that:
+ * (a) We ignore the protection of the reclaim target memcg.
+ * (b) The previously calculated emin value (~29M) should be dismissed.
  */
 static int test_memcg_protection(const char *root, bool min)
 {
@@ -385,6 +393,9 @@ static int test_memcg_protection(const char *root, bool min)
 	if (!values_close(cg_read_long(parent[1], "memory.current"), MB(50), 3))
 		goto cleanup;
 
+	if (!reclaim_until(children[0], MB(10)))
+		goto cleanup;
+
 	if (min) {
 		ret = KSFT_PASS;
 		goto cleanup;
-- 
2.39.0.rc0.267.gcb52ba06e7-goog
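The new check exercises example (a) from patch 1; example (b)
(recursive protection) is not covered by the series' selftests. For
completeness, here is a sketch of what such a test could look like —
not part of this series, with a hypothetical function name and
illustrative amounts, assuming the cgroup_util.h helpers and a
hierarchy mounted with the memory_recursiveprot option:

/*
 * Sketch of example (b) from patch 1: with memory_recursiveprot, B
 * inherits A's unclaimed protection (~50M effective min), yet reclaim
 * that targets B (here, memory.high enforcement) must ignore it.
 * The step that charges ~45M to B, exceeding its 40M high limit and
 * triggering targeted reclaim, is elided.
 */
static int sketch_example_b(const char *root)
{
	char *A = cg_name(root, "A");
	char *B = cg_name(A, "B");
	int ret = -1;

	if (cg_create(A) || cg_create(B))
		goto cleanup;
	if (cg_write(A, "memory.min", "50M") ||
	    cg_write(B, "memory.min", "10M") ||
	    cg_write(B, "memory.high", "40M"))
		goto cleanup;

	/* ... charge ~45M to B so it exceeds memory.high ... */

	/*
	 * High-limit reclaim targets B, so B's inherited ~50M effective
	 * min must be ignored and usage should fall back toward 40M.
	 */
	if (values_close(cg_read_long(B, "memory.current"), MB(40), 10))
		ret = 0;
cleanup:
	cg_destroy(B);
	cg_destroy(A);
	free(B);
	free(A);
	return ret;
}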