From: Gregory Price <gourry@gourry.net>
To: linux-mm@kvack.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@meta.com, longman@redhat.com, hannes@cmpxchg.org,
	mhocko@kernel.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev,
	muchun.song@linux.dev, tj@kernel.org, mkoutny@suse.com,
	akpm@linux-foundation.org
Subject: [PATCH v3 1/2] cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed
Date: Sat, 19 Apr 2025 01:38:23 -0400
Message-ID: <20250419053824.1601470-2-gourry@gourry.net>
In-Reply-To: <20250419053824.1601470-1-gourry@gourry.net>
References: <20250419053824.1601470-1-gourry@gourry.net>

Rename cpuset_node_allowed to reflect that the function checks the
current task's cpuset.mems. This allows us to make a new
cpuset_node_allowed function that checks a target cgroup's cpuset.mems.
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Gregory Price <gourry@gourry.net>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 include/linux/cpuset.h | 4 ++--
 kernel/cgroup/cpuset.c | 4 ++--
 mm/page_alloc.c        | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 835e7b793f6a..893a4c340d48 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -82,11 +82,11 @@ extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
 void cpuset_init_current_mems_allowed(void);
 int cpuset_nodemask_valid_mems_allowed(nodemask_t *nodemask);
 
-extern bool cpuset_node_allowed(int node, gfp_t gfp_mask);
+extern bool cpuset_current_node_allowed(int node, gfp_t gfp_mask);
 
 static inline bool __cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
 {
-	return cpuset_node_allowed(zone_to_nid(z), gfp_mask);
+	return cpuset_current_node_allowed(zone_to_nid(z), gfp_mask);
 }
 
 static inline bool cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 0f910c828973..f8e6a9b642cb 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4090,7 +4090,7 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
 }
 
 /*
- * cpuset_node_allowed - Can we allocate on a memory node?
+ * cpuset_current_node_allowed - Can current task allocate on a memory node?
  * @node: is this an allowed node?
  * @gfp_mask: memory allocation flags
  *
@@ -4129,7 +4129,7 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
  * GFP_KERNEL - any node in enclosing hardwalled cpuset ok
  * GFP_USER - only nodes in current tasks mems allowed ok.
  */
-bool cpuset_node_allowed(int node, gfp_t gfp_mask)
+bool cpuset_current_node_allowed(int node, gfp_t gfp_mask)
 {
 	struct cpuset *cs;	/* current cpuset ancestors */
 	bool allowed;		/* is allocation in zone z allowed? */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5079b1b04d49..233ce25f8f3d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3461,7 +3461,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 retry:
 	/*
 	 * Scan zonelist, looking for a zone with enough free.
-	 * See also cpuset_node_allowed() comment in kernel/cgroup/cpuset.c.
+	 * See also cpuset_current_node_allowed() comment in kernel/cgroup/cpuset.c.
 	 */
 	no_fallback = alloc_flags & ALLOC_NOFRAGMENT;
 	z = ac->preferred_zoneref;
@@ -4148,7 +4148,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	/*
 	 * Ignore cpuset mems for non-blocking __GFP_HIGH (probably
 	 * GFP_ATOMIC) rather than fail, see the comment for
-	 * cpuset_node_allowed().
+	 * cpuset_current_node_allowed().
 	 */
 	if (alloc_flags & ALLOC_MIN_RESERVE)
 		alloc_flags &= ~ALLOC_CPUSET;
-- 
2.49.0
From: Gregory Price <gourry@gourry.net>
To: linux-mm@kvack.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@meta.com, longman@redhat.com, hannes@cmpxchg.org,
	mhocko@kernel.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev,
	muchun.song@linux.dev, tj@kernel.org, mkoutny@suse.com,
	akpm@linux-foundation.org
Subject: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
Date: Sat, 19 Apr 2025 01:38:24 -0400
Message-ID: <20250419053824.1601470-3-gourry@gourry.net>
In-Reply-To: <20250419053824.1601470-1-gourry@gourry.net>
References: <20250419053824.1601470-1-gourry@gourry.net>

It is possible for a reclaimer to cause demotions of an lruvec belonging
to a cgroup with cpuset.mems set to exclude some nodes. Attempt to apply
this limitation based on the lruvec's memcg and prevent demotion.

Notably, this may still allow demotion of shared libraries or any memory
first instantiated in another cgroup. This means cpusets still cannot
guarantee complete isolation when demotion is enabled, and the docs have
been updated to reflect this.

This is useful for isolating workloads on a multi-tenant system from
certain classes of memory more consistently - with the noted exceptions.
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 .../ABI/testing/sysfs-kernel-mm-numa | 14 ++++---
 include/linux/cpuset.h               |  5 +++
 include/linux/memcontrol.h           |  6 +++
 kernel/cgroup/cpuset.c               | 21 ++++++++++
 mm/memcontrol.c                      |  6 +++
 mm/vmscan.c                          | 41 +++++++++++--------
 6 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-numa b/Documentation/ABI/testing/sysfs-kernel-mm-numa
index 77e559d4ed80..27cdcab901f7 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-numa
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-numa
@@ -16,9 +16,13 @@ Description:	Enable/disable demoting pages during reclaim
		Allowing page migration during reclaim enables these
		systems to migrate pages from fast tiers to slow tiers
		when the fast tier is under pressure.  This migration
-		is performed before swap.  It may move data to a NUMA
-		node that does not fall into the cpuset of the
-		allocating process which might be construed to violate
-		the guarantees of cpusets.  This should not be enabled
-		on systems which need strict cpuset location
+		is performed before swap if an eligible numa node is
+		present in cpuset.mems for the cgroup. If cpusets.mems
+		changes at runtime, it may move data to a NUMA node that
+		does not fall into the cpuset of the new cpusets.mems,
+		which might be construed to violate the guarantees of
+		cpusets. Shared memory, such as libraries, owned by
+		another cgroup may still be demoted and result in memory
+		use on a node not present in cpusets.mem. This should not
+		be enabled on systems which need strict cpuset location
		guarantees.
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 893a4c340d48..c64b4a174456 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -171,6 +171,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
 	task_unlock(current);
 }
 
+extern bool cpuset_node_allowed(struct cgroup *cgroup, int nid);
 #else /* !CONFIG_CPUSETS */
 
 static inline bool cpusets_enabled(void) { return false; }
@@ -282,6 +283,10 @@ static inline bool read_mems_allowed_retry(unsigned int seq)
 	return false;
 }
 
+static inline bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
+{
+	return false;
+}
 #endif /* !CONFIG_CPUSETS */
 
 #endif /* _LINUX_CPUSET_H */
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 53364526d877..a6c4e3faf721 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1736,6 +1736,8 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
 	rcu_read_unlock();
 }
 
+bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid);
+
 #else
 static inline bool mem_cgroup_kmem_disabled(void)
 {
@@ -1793,6 +1795,10 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
 {
 }
 
+static inline bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid)
+{
+	return true;
+}
 #endif /* CONFIG_MEMCG */
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_ZSWAP)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index f8e6a9b642cb..8814ca8ec710 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4163,6 +4163,27 @@ bool cpuset_current_node_allowed(int node, gfp_t gfp_mask)
 	return allowed;
 }
 
+bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
+{
+	struct cgroup_subsys_state *css;
+	unsigned long flags;
+	struct cpuset *cs;
+	bool allowed;
+
+	css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
+	if (!css)
+		return true;
+
+	cs = container_of(css, struct cpuset, css);
+	spin_lock_irqsave(&callback_lock, flags);
+	/* On v1 effective_mems may be empty, simply allow */
+	allowed = node_isset(nid, cs->effective_mems) ||
+		  nodes_empty(cs->effective_mems);
+	spin_unlock_irqrestore(&callback_lock, flags);
+	css_put(css);
+	return allowed;
+}
+
 /**
  * cpuset_spread_node() - On which node to begin search for a page
  * @rotor: round robin rotor
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 40c07b8699ae..2f61d0060fd1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include <linux/cpuset.h>
 #include
 #include
 #include
@@ -5437,3 +5438,8 @@ static int __init mem_cgroup_swap_init(void)
 subsys_initcall(mem_cgroup_swap_init);
 
 #endif /* CONFIG_SWAP */
+
+bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid)
+{
+	return memcg ? cpuset_node_allowed(memcg->css.cgroup, nid) : true;
+}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2b2ab386cab5..32a7ce421e42 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -342,16 +342,22 @@ static void flush_reclaim_state(struct scan_control *sc)
 	}
 }
 
-static bool can_demote(int nid, struct scan_control *sc)
+static bool can_demote(int nid, struct scan_control *sc,
+		       struct mem_cgroup *memcg)
 {
+	int demotion_nid;
+
 	if (!numa_demotion_enabled)
 		return false;
 	if (sc && sc->no_demotion)
 		return false;
-	if (next_demotion_node(nid) == NUMA_NO_NODE)
+
+	demotion_nid = next_demotion_node(nid);
+	if (demotion_nid == NUMA_NO_NODE)
 		return false;
 
-	return true;
+	/* If demotion node isn't in the cgroup's mems_allowed, fall back */
+	return mem_cgroup_node_allowed(memcg, demotion_nid);
 }
 
 static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
@@ -376,7 +382,7 @@ static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
 	 *
 	 * Can it be reclaimed from this node via demotion?
 	 */
-	return can_demote(nid, sc);
+	return can_demote(nid, sc, memcg);
 }
 
 /*
@@ -1096,7 +1102,8 @@ static bool may_enter_fs(struct folio *folio, gfp_t gfp_mask)
 */
 static unsigned int shrink_folio_list(struct list_head *folio_list,
		struct pglist_data *pgdat, struct scan_control *sc,
-		struct reclaim_stat *stat, bool ignore_references)
+		struct reclaim_stat *stat, bool ignore_references,
+		struct mem_cgroup *memcg)
 {
 	struct folio_batch free_folios;
 	LIST_HEAD(ret_folios);
@@ -1109,7 +1116,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	folio_batch_init(&free_folios);
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
-	do_demote_pass = can_demote(pgdat->node_id, sc);
+	do_demote_pass = can_demote(pgdat->node_id, sc, memcg);
 
 retry:
 	while (!list_empty(folio_list)) {
@@ -1658,7 +1665,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	 */
 	noreclaim_flag = memalloc_noreclaim_save();
 	nr_reclaimed = shrink_folio_list(&clean_folios, zone->zone_pgdat, &sc,
-					 &stat, true);
+					 &stat, true, NULL);
 	memalloc_noreclaim_restore(noreclaim_flag);
 
 	list_splice(&clean_folios, folio_list);
@@ -2031,7 +2038,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	if (nr_taken == 0)
 		return 0;
 
-	nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
+	nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false,
+					 lruvec_memcg(lruvec));
 
 	spin_lock_irq(&lruvec->lru_lock);
 	move_folios_to_lru(lruvec, &folio_list);
@@ -2214,7 +2222,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 		.no_demotion = 1,
 	};
 
-	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true);
+	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true, NULL);
 	while (!list_empty(folio_list)) {
 		folio = lru_to_folio(folio_list);
 		list_del(&folio->lru);
@@ -2646,7 +2654,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 * Anonymous LRU management is a waste if there is
 * ultimately no way to reclaim the memory.
 */
-static bool can_age_anon_pages(struct pglist_data *pgdat,
+static bool can_age_anon_pages(struct lruvec *lruvec,
			       struct scan_control *sc)
 {
 	/* Aging the anon LRU is valuable if swap is present: */
@@ -2654,7 +2662,8 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
 		return true;
 
 	/* Also valuable if anon pages can be demoted: */
-	return can_demote(pgdat->node_id, sc);
+	return can_demote(lruvec_pgdat(lruvec)->node_id, sc,
+			  lruvec_memcg(lruvec));
 }
 
 #ifdef CONFIG_LRU_GEN
@@ -2732,7 +2741,7 @@ static int get_swappiness(struct lruvec *lruvec, struct scan_control *sc)
 	if (!sc->may_swap)
 		return 0;
 
-	if (!can_demote(pgdat->node_id, sc) &&
+	if (!can_demote(pgdat->node_id, sc, memcg) &&
 	    mem_cgroup_get_nr_swap_pages(memcg) < MIN_LRU_BATCH)
 		return 0;
 
@@ -4695,7 +4704,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
 	if (list_empty(&list))
 		return scanned;
retry:
-	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
+	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false, memcg);
 	sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
 	sc->nr_reclaimed += reclaimed;
 	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
@@ -5850,7 +5859,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (can_age_anon_pages(lruvec_pgdat(lruvec), sc) &&
+	if (can_age_anon_pages(lruvec, sc) &&
 	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec, sc,
				   LRU_ACTIVE_ANON);
@@ -6681,10 +6690,10 @@ static void kswapd_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 		return;
 	}
 
-	if (!can_age_anon_pages(pgdat, sc))
+	lruvec = mem_cgroup_lruvec(NULL, pgdat);
+	if (!can_age_anon_pages(lruvec, sc))
 		return;
 
-	lruvec = mem_cgroup_lruvec(NULL, pgdat);
 	if (!inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		return;
 
-- 
2.49.0