From: Nhat Pham <nphamcs@gmail.com>
To: kasong@tencent.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, apopple@nvidia.com,
    axelrasmussen@google.com, baohua@kernel.org, baolin.wang@linux.alibaba.com,
    bhe@redhat.com, byungchul@sk.com, cgroups@vger.kernel.org,
    chengming.zhou@linux.dev, chrisl@kernel.org, corbet@lwn.net,
    david@kernel.org, dev.jain@arm.com, gourry@gourry.net, hannes@cmpxchg.org,
    hughd@google.com, jannh@google.com, joshua.hahnjy@gmail.com,
    lance.yang@linux.dev, lenb@kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org,
    lorenzo.stoakes@oracle.com, matthew.brost@intel.com, mhocko@suse.com,
    muchun.song@linux.dev, npache@redhat.com, nphamcs@gmail.com,
    pavel@kernel.org, peterx@redhat.com, peterz@infradead.org,
    pfalcato@suse.de, rafael@kernel.org, rakie.kim@sk.com,
    roman.gushchin@linux.dev, rppt@kernel.org, ryan.roberts@arm.com,
    shakeel.butt@linux.dev, shikemeng@huaweicloud.com, surenb@google.com,
    tglx@kernel.org, vbabka@suse.cz, weixugc@google.com,
    ying.huang@linux.alibaba.com, yosry.ahmed@linux.dev, yuanchu@google.com,
    zhengqi.arch@bytedance.com, ziy@nvidia.com, kernel-team@meta.com,
    riel@surriel.com
Subject: [PATCH v4 18/21] memcg: swap: only charge physical swap slots
Date: Wed, 18 Mar 2026 15:29:49 -0700
Message-ID: <20260318222953.441758-19-nphamcs@gmail.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260318222953.441758-1-nphamcs@gmail.com>
References: <20260318222953.441758-1-nphamcs@gmail.com>

Now that zswap and the zero-filled swap page optimization no longer take
up any physical swap space, we should not charge towards the swap usage
and limits of the memcg in these cases. We will only record the memcg id
on virtual swap slot allocation, and defer physical swap charging (i.e.
towards memory.swap.current) until the virtual swap slot is backed by an
actual physical swap slot (on zswap store failure fallback or zswap
writeback).

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/swap.h | 26 ++++++++++++++
 mm/memcontrol-v1.c   |  6 ++++
 mm/memcontrol.c      | 83 ++++++++++++++++++++++++++++++++------------
 mm/vswap.c           | 39 +++++++++------------
 4 files changed, 108 insertions(+), 46 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index cc1ca4ac2946d..21e528d8d3480 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -676,6 +676,22 @@ static inline void folio_throttle_swaprate(struct folio *folio, gfp_t gfp)
 #endif
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_SWAP)
+void __mem_cgroup_record_swap(struct folio *folio, swp_entry_t entry);
+static inline void mem_cgroup_record_swap(struct folio *folio,
+					  swp_entry_t entry)
+{
+	if (!mem_cgroup_disabled())
+		__mem_cgroup_record_swap(folio, entry);
+}
+
+void __mem_cgroup_clear_swap(swp_entry_t entry, unsigned int nr_pages);
+static inline void mem_cgroup_clear_swap(swp_entry_t entry,
+					 unsigned int nr_pages)
+{
+	if (!mem_cgroup_disabled())
+		__mem_cgroup_clear_swap(entry, nr_pages);
+}
+
 int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry);
 static inline int mem_cgroup_try_charge_swap(struct folio *folio,
 					     swp_entry_t entry)
@@ -696,6 +712,16 @@ static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_p
 extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
 extern bool mem_cgroup_swap_full(struct folio *folio);
 #else
+static inline void mem_cgroup_record_swap(struct folio *folio,
+					  swp_entry_t entry)
+{
+}
+
+static inline void mem_cgroup_clear_swap(swp_entry_t entry,
+					 unsigned int nr_pages)
+{
+}
+
 static inline int mem_cgroup_try_charge_swap(struct folio *folio,
 					     swp_entry_t entry)
 {
diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
index 6eed14bff7426..4580a034dcf72 100644
--- a/mm/memcontrol-v1.c
+++ b/mm/memcontrol-v1.c
@@ -680,6 +680,12 @@ void memcg1_swapin(swp_entry_t entry, unsigned int nr_pages)
 		 * memory+swap charge, drop the swap entry duplicate.
 		 */
 		mem_cgroup_uncharge_swap(entry, nr_pages);
+
+		/*
+		 * Clear the cgroup association now to prevent double memsw
+		 * uncharging when the backends are released later.
+		 */
+		mem_cgroup_clear_swap(entry, nr_pages);
 	}
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2ba5811e7edba..4525c21754e7f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5172,6 +5172,49 @@ int __init mem_cgroup_init(void)
 }
 
 #ifdef CONFIG_SWAP
+/**
+ * __mem_cgroup_record_swap - record the folio's cgroup for the swap entries.
+ * @folio: folio being swapped out.
+ * @entry: the first swap entry in the range.
+ */
+void __mem_cgroup_record_swap(struct folio *folio, swp_entry_t entry)
+{
+	unsigned int nr_pages = folio_nr_pages(folio);
+	struct mem_cgroup *memcg;
+
+	/* Recording will be done by memcg1_swapout(). */
+	if (do_memsw_account())
+		return;
+
+	memcg = folio_memcg(folio);
+
+	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
+	if (!memcg)
+		return;
+
+	memcg = mem_cgroup_id_get_online(memcg);
+	if (nr_pages > 1)
+		mem_cgroup_id_get_many(memcg, nr_pages - 1);
+	swap_cgroup_record(folio, mem_cgroup_id(memcg), entry);
+}
+
+/**
+ * __mem_cgroup_clear_swap - clear cgroup information of the swap entries.
+ * @entry: the first swap entry in the range.
+ * @nr_pages: the number of pages in the range.
+ */
+void __mem_cgroup_clear_swap(swp_entry_t entry, unsigned int nr_pages)
+{
+	unsigned short id = swap_cgroup_clear(entry, nr_pages);
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+	memcg = mem_cgroup_from_id(id);
+	if (memcg)
+		mem_cgroup_id_put_many(memcg, nr_pages);
+	rcu_read_unlock();
+}
+
 /**
  * __mem_cgroup_try_charge_swap - try charging swap space for a folio
  * @folio: folio being added to swap
@@ -5190,34 +5233,24 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 	if (do_memsw_account())
 		return 0;
 
-	memcg = folio_memcg(folio);
-
-	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
-	if (!memcg)
-		return 0;
-
-	if (!entry.val) {
-		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		return 0;
-	}
-
-	memcg = mem_cgroup_id_get_online(memcg);
+	/*
+	 * We already record the cgroup on virtual swap allocation.
+	 * Note that the virtual swap slot holds a reference to memcg,
+	 * so this lookup should be safe.
+	 */
+	rcu_read_lock();
+	memcg = mem_cgroup_from_id(lookup_swap_cgroup_id(entry));
+	rcu_read_unlock();
 
 	if (!mem_cgroup_is_root(memcg) &&
 	    !page_counter_try_charge(&memcg->swap, nr_pages, &counter)) {
 		memcg_memory_event(memcg, MEMCG_SWAP_MAX);
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		mem_cgroup_id_put(memcg);
 		return -ENOMEM;
 	}
 
-	/* Get references for the tail pages, too */
-	if (nr_pages > 1)
-		mem_cgroup_id_get_many(memcg, nr_pages - 1);
 	mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
 
-	swap_cgroup_record(folio, mem_cgroup_id(memcg), entry);
-
 	return 0;
 }
 
@@ -5231,7 +5264,8 @@ void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
-	id = swap_cgroup_clear(entry, nr_pages);
+	id = lookup_swap_cgroup_id(entry);
+
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
 	if (memcg) {
@@ -5242,7 +5276,6 @@ void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 			page_counter_uncharge(&memcg->swap, nr_pages);
 		}
 		mod_memcg_state(memcg, MEMCG_SWAP, -nr_pages);
-		mem_cgroup_id_put_many(memcg, nr_pages);
 	}
 	rcu_read_unlock();
 }
@@ -5251,14 +5284,18 @@ static bool mem_cgroup_may_zswap(struct mem_cgroup *original_memcg);
 
 long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 {
-	long nr_swap_pages, nr_zswap_pages = 0;
+	long nr_swap_pages;
 
 	if (zswap_is_enabled() &&
 	    (mem_cgroup_disabled() || do_memsw_account() ||
 	     mem_cgroup_may_zswap(memcg))) {
-		nr_zswap_pages = PAGE_COUNTER_MAX;
+		/*
+		 * No need to check swap cgroup limits, since zswap is not charged
+		 * towards swap consumption.
+		 */
+		return PAGE_COUNTER_MAX;
 	}
 
-	nr_swap_pages = max_t(long, nr_zswap_pages, get_nr_swap_pages());
+	nr_swap_pages = get_nr_swap_pages();
 	if (mem_cgroup_disabled() || do_memsw_account())
 		return nr_swap_pages;
 	for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg))
diff --git a/mm/vswap.c b/mm/vswap.c
index b391511e0f0b9..96f4615f29a95 100644
--- a/mm/vswap.c
+++ b/mm/vswap.c
@@ -544,6 +544,7 @@ void vswap_rmap_set(struct swap_cluster_info *ci, swp_slot_t slot,
 	struct vswap_cluster *cluster = NULL;
 	struct swp_desc *desc;
 	unsigned long flush_nr, phys_swap_start = 0, phys_swap_end = 0;
+	unsigned long phys_swap_released = 0;
 	unsigned int phys_swap_type = 0;
 	bool need_flushing_phys_swap = false;
 	swp_slot_t flush_slot;
@@ -573,6 +574,7 @@
 		if (desc->type == VSWAP_ZSWAP && desc->zswap_entry) {
 			zswap_entry_free(desc->zswap_entry);
 		} else if (desc->type == VSWAP_SWAPFILE) {
+			phys_swap_released++;
 			if (!phys_swap_start) {
 				/* start a new contiguous range of phys swap */
 				phys_swap_start = swp_slot_offset(desc->slot);
@@ -603,6 +605,9 @@
 		flush_nr = phys_swap_end - phys_swap_start;
 		swap_slot_free_nr(flush_slot, flush_nr);
 	}
+
+	if (phys_swap_released)
+		mem_cgroup_uncharge_swap(entry, phys_swap_released);
 }
 
 /*
@@ -630,7 +635,7 @@ static void vswap_free(struct vswap_cluster *cluster, struct swp_desc *desc,
 	spin_unlock(&cluster->lock);
 
 	release_backing(entry, 1);
-	mem_cgroup_uncharge_swap(entry, 1);
+	mem_cgroup_clear_swap(entry, 1);
 
 	/* erase forward mapping and release the virtual slot for reallocation */
 	spin_lock(&cluster->lock);
@@ -645,9 +650,6 @@ static void vswap_free(struct vswap_cluster *cluster, struct swp_desc *desc,
  */
 int folio_alloc_swap(struct folio *folio)
 {
-	struct vswap_cluster *cluster = NULL;
-	int i, nr = folio_nr_pages(folio);
-	struct swp_desc *desc;
 	swp_entry_t entry;
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
@@ -657,25 +659,7 @@ int folio_alloc_swap(struct folio *folio)
 	if (!entry.val)
 		return -ENOMEM;
 
-	/*
-	 * XXX: for now, we charge towards the memory cgroup's swap limit on virtual
-	 * swap slots allocation. This will be changed soon - we will only charge on
-	 * physical swap slots allocation.
-	 */
-	if (mem_cgroup_try_charge_swap(folio, entry)) {
-		rcu_read_lock();
-		for (i = 0; i < nr; i++) {
-			desc = vswap_iter(&cluster, entry.val + i);
-			VM_WARN_ON(!desc);
-			vswap_free(cluster, desc, (swp_entry_t){ entry.val + i });
-		}
-		spin_unlock(&cluster->lock);
-		rcu_read_unlock();
-		atomic_add(nr, &vswap_alloc_reject);
-		entry.val = 0;
-		return -ENOMEM;
-	}
-
+	mem_cgroup_record_swap(folio, entry);
 	swap_cache_add_folio(folio, entry, NULL);
 
 	return 0;
@@ -717,6 +701,15 @@ bool vswap_alloc_swap_slot(struct folio *folio)
 	if (!slot.val)
 		return false;
 
+	if (mem_cgroup_try_charge_swap(folio, entry)) {
+		/*
+		 * We have not updated the backing type of the virtual swap slot.
+		 * Simply free up the physical swap slots here!
+		 */
+		swap_slot_free_nr(slot, nr);
+		return false;
+	}
+
 	/* establish the vrtual <-> physical swap slots linkages. */
 	si = __swap_slot_to_info(slot);
 	ci = swap_cluster_lock(si, swp_slot_offset(slot));
-- 
2.52.0