Date: Wed, 23 Nov 2022 09:21:30 +0000
In-Reply-To: <20221123092132.2521764-1-yosryahmed@google.com>
References: <20221123092132.2521764-1-yosryahmed@google.com>
Message-ID: <20221123092132.2521764-2-yosryahmed@google.com>
Subject: [PATCH v2 1/3] mm: memcg: fix stale protection of reclaim target memcg
From: Yosry Ahmed
To: Shakeel Butt, Roman Gushchin, Johannes Weiner, Michal Hocko, Yu Zhao, Muchun Song
Cc: "Matthew Wilcox (Oracle)", Vasily Averin, Vlastimil Babka, Chris Down, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

During reclaim, mem_cgroup_calculate_protection() is used to determine
the effective protection (emin and elow) values of a memcg. The
protection of the reclaim target is ignored, but we cannot set its
effective protection to 0 due to a limitation of the current
implementation (see the comment in mem_cgroup_protection()). Instead,
we leave its effective protection values unchanged, and later ignore
them in mem_cgroup_protection().
However, mem_cgroup_protection() is called later in
shrink_lruvec()->get_scan_count(), which is after the
mem_cgroup_below_{min/low}() checks in shrink_node_memcgs(). As a
result, the stale effective protection values of the target memcg may
lead us to skip reclaiming from the target memcg entirely, before ever
calling shrink_lruvec(). This can be even worse with recursive
protection, where the stale target memcg protection can be higher than
its standalone protection. See the two examples below (a similar
version of example (a) is added to test_memcontrol in a later patch).

(a) A simple example with proactive reclaim is as follows. Consider the
following hierarchy:

ROOT
 |
 A
 |
 B (memory.min = 10M)

Consider the following scenario:
- B has memory.current = 10M.
- The system undergoes global reclaim (or memcg reclaim in A).
- In shrink_node_memcgs():
  - mem_cgroup_calculate_protection() calculates the effective min
    (emin) of B as 10M.
  - mem_cgroup_below_min() returns true for B, so we do not reclaim
    from B.
- Now if we want to reclaim 5M from B using proactive reclaim
  (memory.reclaim), we should be able to, as the protection of the
  target memcg should be ignored.
- In shrink_node_memcgs():
  - mem_cgroup_calculate_protection() immediately returns for B without
    doing anything, as B is the target memcg, relying on
    mem_cgroup_protection() to ignore B's stale effective min (still
    10M).
  - mem_cgroup_below_min() reads the stale effective min for B, and we
    skip B instead of ignoring its protection as intended, because we
    never reach mem_cgroup_protection().

(b) A more complex example with recursive protection is as follows.
Consider the following hierarchy with memory_recursiveprot:

ROOT
 |
 A (memory.min = 50M)
 |
 B (memory.min = 10M, memory.high = 40M)

Consider the following scenario:
- B has memory.current = 35M.
- The system undergoes global reclaim (target memcg is NULL).
- B will have an effective min of 50M (all of A's unclaimed protection).
- B will not be reclaimed from.
- Now allocate 10M more memory in B, pushing it above its high limit.
- The system undergoes memcg reclaim from B (target memcg is B).
- As in example (a), we do nothing in
  mem_cgroup_calculate_protection(), then call mem_cgroup_below_min(),
  which reads the stale effective min for B (50M) and skips it. In this
  case it is even worse: we are not just considering B's standalone
  protection (10M), but a much higher stale protection (50M), which
  causes us to not reclaim from B at all.

This is an artifact of commit 45c7f7e1ef17 ("mm, memcg: decouple
e{low,min} state mutations from protection checks"), which made
mem_cgroup_calculate_protection() only change the state without
returning any value. Before that commit, we used to return
MEMCG_PROT_NONE for the target memcg, which caused us to skip the
mem_cgroup_below_{min/low}() checks. After that commit we do not return
anything, and we end up checking the min and low effective protections
for the target memcg, which are stale.

Update mem_cgroup_supports_protection() to also check whether we are
reclaiming from the target, and rename it to mem_cgroup_unprotected()
(it now returns true if we should not protect the memcg, which is much
simpler logic).
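For illustration only (not part of the patch), the before/after behavior
of the check can be modeled in userspace C. Here struct memcg is a
two-field stand-in, and below_min_old()/below_min_new() are sketches of
the old and new logic, not the actual kernel functions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for a memcg: only the cached effective min (which
 * may be stale) and the current usage matter for this sketch. */
struct memcg {
	long emin;   /* possibly stale effective protection, in bytes */
	long usage;  /* memory.current, in bytes */
};

/* Old behavior: protection applies whenever emin >= usage, even when
 * @memcg is the very memcg we are reclaiming from. */
static bool below_min_old(const struct memcg *memcg)
{
	return memcg->emin >= memcg->usage;
}

/* New behavior, mirroring the renamed helper: the reclaim target's
 * (possibly stale) protection is ignored outright. */
static bool unprotected_sketch(const struct memcg *target,
			       const struct memcg *memcg)
{
	return memcg == target;
}

static bool below_min_new(const struct memcg *target,
			  const struct memcg *memcg)
{
	if (unprotected_sketch(target, memcg))
		return false;
	return memcg->emin >= memcg->usage;
}
```

With the values from example (a) (B has a stale emin of 10M and usage of
10M), the old check reports B as below min even when B itself is the
reclaim target, while the new check only protects B when it is not the
target.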
Fixes: 45c7f7e1ef17 ("mm, memcg: decouple e{low,min} state mutations from protection checks")
Signed-off-by: Yosry Ahmed
Reviewed-by: Roman Gushchin
---
 include/linux/memcontrol.h | 31 +++++++++++++++++++++----------
 mm/vmscan.c                | 11 ++++++-----
 2 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e1644a24009c..d3c8203cab6c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -615,28 +615,32 @@ static inline void mem_cgroup_protection(struct mem_cgroup *root,
 void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 				     struct mem_cgroup *memcg);
 
-static inline bool mem_cgroup_supports_protection(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_unprotected(struct mem_cgroup *target,
+					  struct mem_cgroup *memcg)
 {
 	/*
 	 * The root memcg doesn't account charges, and doesn't support
-	 * protection.
+	 * protection. The target memcg's protection is ignored, see
+	 * mem_cgroup_calculate_protection() and mem_cgroup_protection()
 	 */
-	return !mem_cgroup_disabled() && !mem_cgroup_is_root(memcg);
-
+	return mem_cgroup_disabled() || mem_cgroup_is_root(memcg) ||
+		memcg == target;
 }
 
-static inline bool mem_cgroup_below_low(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
-	if (!mem_cgroup_supports_protection(memcg))
+	if (mem_cgroup_unprotected(target, memcg))
 		return false;
 
 	return READ_ONCE(memcg->memory.elow) >=
 		page_counter_read(&memcg->memory);
 }
 
-static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_below_min(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
-	if (!mem_cgroup_supports_protection(memcg))
+	if (mem_cgroup_unprotected(target, memcg))
 		return false;
 
 	return READ_ONCE(memcg->memory.emin) >=
@@ -1209,12 +1213,19 @@ static inline void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 {
 }
 
-static inline bool mem_cgroup_below_low(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_unprotected(struct mem_cgroup *target,
+					  struct mem_cgroup *memcg)
+{
+	return true;
+}
+static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
 	return false;
 }
 
-static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_below_min(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
 	return false;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 04d8b88e5216..79ef0fe67518 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4486,7 +4486,7 @@ static bool age_lruvec(struct lruvec *lruvec, struct scan_control *sc, unsigned
 
 	mem_cgroup_calculate_protection(NULL, memcg);
 
-	if (mem_cgroup_below_min(memcg))
+	if (mem_cgroup_below_min(NULL, memcg))
 		return false;
 
 	need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, swappiness, &nr_to_scan);
@@ -5047,8 +5047,9 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 	DEFINE_MAX_SEQ(lruvec);
 	DEFINE_MIN_SEQ(lruvec);
 
-	if (mem_cgroup_below_min(memcg) ||
-	    (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim))
+	if (mem_cgroup_below_min(sc->target_mem_cgroup, memcg) ||
+	    (mem_cgroup_below_low(sc->target_mem_cgroup, memcg) &&
+	     !sc->memcg_low_reclaim))
 		return 0;
 
 	*need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, can_swap, &nr_to_scan);
@@ -6048,13 +6049,13 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 		mem_cgroup_calculate_protection(target_memcg, memcg);
 
-		if (mem_cgroup_below_min(memcg)) {
+		if (mem_cgroup_below_min(target_memcg, memcg)) {
 			/*
 			 * Hard protection.
 			 * If there is no reclaimable memory, OOM.
 			 */
			continue;
-		} else if (mem_cgroup_below_low(memcg)) {
+		} else if (mem_cgroup_below_low(target_memcg, memcg)) {
			/*
			 * Soft protection.
			 * Respect the protection only as long as
-- 
2.38.1.584.g0f3c55d4c2-goog
Date: Wed, 23 Nov 2022 09:21:31 +0000
In-Reply-To: <20221123092132.2521764-1-yosryahmed@google.com>
References: <20221123092132.2521764-1-yosryahmed@google.com>
Message-ID: <20221123092132.2521764-3-yosryahmed@google.com>
Subject: [PATCH v2 2/3] selftests: cgroup: refactor proactive reclaim code to reclaim_until()
From: Yosry Ahmed
To: Shakeel Butt, Roman Gushchin, Johannes Weiner, Michal Hocko, Yu Zhao, Muchun Song
Cc: "Matthew Wilcox (Oracle)", Vasily Averin, Vlastimil Babka, Chris Down, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

Refactor the code that drives writing to memory.reclaim (retrying,
error handling, etc.) from test_memcg_reclaim() into a helper called
reclaim_until(), which proactively reclaims from a memcg until its
usage reaches a certain value. This will be used by another test in a
following patch.
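For illustration only (not part of the patch), the retry contract of
such a helper can be sketched in plain C with the cgroup file I/O
stubbed out. Here fake_read_usage() and fake_reclaim() are fakes that
stand in for the selftest's cg_read_long()/cg_write() helpers, and
reclaim_until_sketch() mirrors the loop's shape, not its exact code:

```c
#include <errno.h>
#include <stdbool.h>

/* Fake "memcg": each reclaim request frees at most CHUNK bytes and
 * returns -EAGAIN when the full request could not be satisfied. */
#define CHUNK (4L << 20)

static long fake_usage = 100L << 20;

static long fake_read_usage(void)
{
	return fake_usage;
}

static int fake_reclaim(long bytes)
{
	long done = bytes < CHUNK ? bytes : CHUNK;

	fake_usage -= done;
	return done == bytes ? 0 : -EAGAIN;
}

/* Keep requesting the remaining delta, retrying a bounded number of
 * times on -EAGAIN, until usage reaches the goal. */
static bool reclaim_until_sketch(long goal_usage)
{
	int retries = 5;

	if (fake_read_usage() <= goal_usage)
		return true;

	while (true) {
		long to_reclaim = fake_read_usage() - goal_usage;

		/* We got -EAGAIN yet still reached the goal: fail. */
		if (to_reclaim <= 0)
			break;

		int err = fake_reclaim(to_reclaim);
		if (!err)
			return fake_read_usage() <= goal_usage;
		if (err == -EAGAIN && retries--)
			continue;

		/* Unexpected error or out of retries. */
		break;
	}
	return false;
}
```

Starting from 100M of fake usage, asking for a goal of 90M takes three
requests (4M, 4M, then the final 2M succeeds), which is exactly the
partial-progress case the -EAGAIN retry loop exists for.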
Signed-off-by: Yosry Ahmed
Reviewed-by: Roman Gushchin
---
 .../selftests/cgroup/test_memcontrol.c | 85 +++++++++++--------
 1 file changed, 49 insertions(+), 36 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index 8833359556f3..d4182e94945e 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -645,6 +645,53 @@ static int test_memcg_max(const char *root)
 	return ret;
 }
 
+/* Reclaim from @memcg until usage reaches @goal_usage */
+static bool reclaim_until(const char *memcg, long goal_usage)
+{
+	char buf[64];
+	int retries = 5;
+	int err;
+	long current, to_reclaim;
+
+	/* Nothing to do here */
+	if (cg_read_long(memcg, "memory.current") <= goal_usage)
+		return true;
+
+	while (true) {
+		current = cg_read_long(memcg, "memory.current");
+		to_reclaim = current - goal_usage;
+
+		/*
+		 * We only keep looping if we get -EAGAIN, which means we could
+		 * not reclaim the full amount. This means we got -EAGAIN when
+		 * we actually reclaimed the requested amount, so fail.
+		 */
+		if (to_reclaim <= 0)
+			break;
+
+		snprintf(buf, sizeof(buf), "%ld", to_reclaim);
+		err = cg_write(memcg, "memory.reclaim", buf);
+		if (!err) {
+			/*
+			 * If writing succeeds, then the written amount should have been
+			 * fully reclaimed (and maybe more).
+			 */
+			current = cg_read_long(memcg, "memory.current");
+			if (!values_close(current, goal_usage, 3) && current > goal_usage)
+				break;
+			return true;
+		}
+
+		/* The kernel could not reclaim the full amount, try again. */
+		if (err == -EAGAIN && retries--)
+			continue;
+
+		/* We got an unexpected error or ran out of retries. */
+		break;
+	}
+	return false;
+}
+
 /*
  * This test checks that memory.reclaim reclaims the given
  * amount of memory (from both anon and file, if possible).
@@ -653,8 +700,7 @@ static int test_memcg_reclaim(const char *root)
 {
 	int ret = KSFT_FAIL, fd, retries;
 	char *memcg;
-	long current, expected_usage, to_reclaim;
-	char buf[64];
+	long current, expected_usage;
 
 	memcg = cg_name(root, "memcg_test");
 	if (!memcg)
@@ -705,41 +751,8 @@ static int test_memcg_reclaim(const char *root)
 	 * Reclaim until current reaches 30M, this makes sure we hit both anon
 	 * and file if swap is enabled.
 	 */
-	retries = 5;
-	while (true) {
-		int err;
-
-		current = cg_read_long(memcg, "memory.current");
-		to_reclaim = current - MB(30);
-
-		/*
-		 * We only keep looping if we get EAGAIN, which means we could
-		 * not reclaim the full amount.
-		 */
-		if (to_reclaim <= 0)
-			goto cleanup;
-
-
-		snprintf(buf, sizeof(buf), "%ld", to_reclaim);
-		err = cg_write(memcg, "memory.reclaim", buf);
-		if (!err) {
-			/*
-			 * If writing succeeds, then the written amount should have been
-			 * fully reclaimed (and maybe more).
-			 */
-			current = cg_read_long(memcg, "memory.current");
-			if (!values_close(current, MB(30), 3) && current > MB(30))
-				goto cleanup;
-			break;
-		}
-
-		/* The kernel could not reclaim the full amount, try again. */
-		if (err == -EAGAIN && retries--)
-			continue;
-
-		/* We got an unexpected error or ran out of retries. */
+	if (!reclaim_until(memcg, MB(30)))
 		goto cleanup;
-	}
 
 	ret = KSFT_PASS;
 cleanup:
-- 
2.38.1.584.g0f3c55d4c2-goog
Date: Wed, 23 Nov 2022 09:21:32 +0000
In-Reply-To: <20221123092132.2521764-1-yosryahmed@google.com>
References: <20221123092132.2521764-1-yosryahmed@google.com>
Message-ID: <20221123092132.2521764-4-yosryahmed@google.com>
Subject: [PATCH v2 3/3] selftests: cgroup: make sure reclaim target memcg is unprotected
From: Yosry Ahmed
To: Shakeel Butt, Roman Gushchin, Johannes Weiner, Michal Hocko, Yu Zhao, Muchun Song
Cc: "Matthew Wilcox (Oracle)", Vasily Averin, Vlastimil Babka, Chris Down, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

Make sure that we ignore the protection of a memcg that is the target
of memcg reclaim.
Signed-off-by: Yosry Ahmed
Reviewed-by: Roman Gushchin
---
 tools/testing/selftests/cgroup/test_memcontrol.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index d4182e94945e..bac3b91f1579 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -238,6 +238,8 @@ static int cg_test_proc_killed(const char *cgroup)
 	return -1;
 }
 
+static bool reclaim_until(const char *memcg, long goal_usage);
+
 /*
  * First, this test creates the following hierarchy:
  * A       memory.min = 0,    memory.max = 200M
@@ -266,6 +268,12 @@ static int cg_test_proc_killed(const char *cgroup)
  * unprotected memory in A available, and checks that:
  * a) memory.min protects pagecache even in this case,
  * b) memory.low allows reclaiming page cache with low events.
+ *
+ * Then we try to reclaim from A/B/C using memory.reclaim until its
+ * usage reaches 10M.
+ * This makes sure that:
+ * (a) We ignore the protection of the reclaim target memcg.
+ * (b) The previously calculated emin value (~29M) should be dismissed.
  */
 static int test_memcg_protection(const char *root, bool min)
 {
@@ -385,6 +393,9 @@ static int test_memcg_protection(const char *root, bool min)
 	if (!values_close(cg_read_long(parent[1], "memory.current"), MB(50), 3))
 		goto cleanup;
 
+	if (!reclaim_until(children[0], MB(10)))
+		goto cleanup;
+
 	if (min) {
 		ret = KSFT_PASS;
 		goto cleanup;
-- 
2.38.1.584.g0f3c55d4c2-goog