From: Qi Zheng
To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com,
    roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev,
    david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com,
    harry.yoo@oracle.com, baolin.wang@linux.alibaba.com,
    Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
    dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev,
    akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    Muchun Song, Qi Zheng
Subject: [PATCH v3 2/4] mm: thp: introduce folio_split_queue_lock and its variants
Date: Sun, 28 Sep 2025 19:17:00 +0800
X-Mailer: git-send-email 2.48.1

From: Muchun Song

In the upcoming memcg removal, the binding between a folio and its memcg
may change, making the split queue lock held within the memcg unstable.
A new approach is required to reparent the split queue to the parent
memcg. As preparation, this patch introduces a unified way to acquire
the split queue lock for a folio.

It is a code-only refactoring with no functional changes.
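For illustration, a minimal sketch of the calling convention this
introduces, based on the hunks below (the caller shown is schematic,
not a specific function from this patch):

	/* Before: the queue is looked up first, then locked in a second
	 * step, which becomes fragile once the folio->memcg binding can
	 * change between the two operations.
	 */
	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
	spin_lock(&ds_queue->split_queue_lock);
	/* ... manipulate ds_queue->split_queue ... */
	spin_unlock(&ds_queue->split_queue_lock);

	/* After: the helper resolves the folio's queue and returns it
	 * already locked; the matching helper releases it.
	 */
	struct deferred_split *ds_queue = folio_split_queue_lock(folio);
	/* ... manipulate ds_queue->split_queue ... */
	split_queue_unlock(ds_queue);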
Signed-off-by: Muchun Song
Signed-off-by: Qi Zheng
Acked-by: Johannes Weiner
Reviewed-by: Zi Yan
Acked-by: Shakeel Butt
Acked-by: David Hildenbrand
---
 include/linux/memcontrol.h |  10 ++++
 mm/huge_memory.c           | 104 ++++++++++++++++++++++++++++---------
 2 files changed, 89 insertions(+), 25 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 16fe0306e50ea..99876af13c315 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1662,6 +1662,11 @@ int alloc_shrinker_info(struct mem_cgroup *memcg);
 void free_shrinker_info(struct mem_cgroup *memcg);
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id);
 void reparent_shrinker_deferred(struct mem_cgroup *memcg);
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+	return shrinker->id;
+}
 #else
 #define mem_cgroup_sockets_enabled 0
 
@@ -1693,6 +1698,11 @@ static inline void set_shrinker_bit(struct mem_cgroup *memcg,
 				    int nid, int shrinker_id)
 {
 }
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+	return -1;
+}
 #endif
 
 #ifdef CONFIG_MEMCG
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6db24b3a57005..0ac3b97177b7f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1078,26 +1078,83 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 
 #ifdef CONFIG_MEMCG
 static inline
-struct deferred_split *get_deferred_split_queue(struct folio *folio)
+struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+					   struct deferred_split *queue)
 {
-	struct mem_cgroup *memcg = folio_memcg(folio);
-	struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
+	if (mem_cgroup_disabled())
+		return NULL;
+	if (&NODE_DATA(folio_nid(folio))->deferred_split_queue == queue)
+		return NULL;
+	return container_of(queue, struct mem_cgroup, deferred_split_queue);
+}
 
-	if (memcg)
-		return &memcg->deferred_split_queue;
-	else
-		return &pgdat->deferred_split_queue;
+static struct deferred_split *folio_split_queue_lock(struct folio *folio)
+{
+	struct mem_cgroup *memcg;
+	struct deferred_split *queue;
+
+	memcg = folio_memcg(folio);
+	queue = memcg ? &memcg->deferred_split_queue :
+			&NODE_DATA(folio_nid(folio))->deferred_split_queue;
+	spin_lock(&queue->split_queue_lock);
+
+	return queue;
+}
+
+static struct deferred_split *
+folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
+{
+	struct mem_cgroup *memcg;
+	struct deferred_split *queue;
+
+	memcg = folio_memcg(folio);
+	queue = memcg ? &memcg->deferred_split_queue :
+			&NODE_DATA(folio_nid(folio))->deferred_split_queue;
+	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+	return queue;
 }
 #else
 static inline
-struct deferred_split *get_deferred_split_queue(struct folio *folio)
+struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+					   struct deferred_split *queue)
+{
+	return NULL;
+}
+
+static struct deferred_split *folio_split_queue_lock(struct folio *folio)
 {
 	struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
+	struct deferred_split *queue = &pgdat->deferred_split_queue;
+
+	spin_lock(&queue->split_queue_lock);
 
-	return &pgdat->deferred_split_queue;
+	return queue;
+}
+
+static struct deferred_split *
+folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
+{
+	struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
+	struct deferred_split *queue = &pgdat->deferred_split_queue;
+
+	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+	return queue;
 }
 #endif
 
+static inline void split_queue_unlock(struct deferred_split *queue)
+{
+	spin_unlock(&queue->split_queue_lock);
+}
+
+static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
+						 unsigned long flags)
+{
+	spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+}
+
 static inline bool is_transparent_hugepage(const struct folio *folio)
 {
 	if (!folio_test_large(folio))
@@ -3579,7 +3636,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct page *lock_at,
 		struct list_head *list, bool uniform_split)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
+	struct deferred_split *ds_queue;
 	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
 	struct folio *end_folio = folio_next(folio);
 	bool is_anon = folio_test_anon(folio);
@@ -3718,7 +3775,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
-	spin_lock(&ds_queue->split_queue_lock);
+	ds_queue = folio_split_queue_lock(folio);
 	if (folio_ref_freeze(folio, 1 + extra_pins)) {
 		struct swap_cluster_info *ci = NULL;
 		struct lruvec *lruvec;
@@ -3740,7 +3797,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		 */
 		list_del_init(&folio->_deferred_list);
 	}
-	spin_unlock(&ds_queue->split_queue_lock);
+	split_queue_unlock(ds_queue);
 	if (mapping) {
 		int nr = folio_nr_pages(folio);
 
@@ -3835,7 +3892,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		if (ci)
 			swap_cluster_unlock(ci);
 	} else {
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 		ret = -EAGAIN;
 	}
 fail:
@@ -4016,8 +4073,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
 	WARN_ON_ONCE(folio_ref_count(folio));
 	WARN_ON_ONCE(!mem_cgroup_disabled() && !folio_memcg_charged(folio));
 
-	ds_queue = get_deferred_split_queue(folio);
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
 	if (!list_empty(&folio->_deferred_list)) {
 		ds_queue->split_queue_len--;
 		if (folio_test_partially_mapped(folio)) {
@@ -4028,7 +4084,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
 		list_del_init(&folio->_deferred_list);
 		unqueued = true;
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 
 	return unqueued;	/* useful for debug warnings */
 }
 
@@ -4036,10 +4092,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
 /* partially_mapped=false won't clear PG_partially_mapped folio flag */
 void deferred_split_folio(struct folio *folio, bool partially_mapped)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
-#ifdef CONFIG_MEMCG
-	struct mem_cgroup *memcg = folio_memcg(folio);
-#endif
+	struct deferred_split *ds_queue;
 	unsigned long flags;
 
 	/*
@@ -4062,7 +4115,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
 	if (folio_test_swapcache(folio))
 		return;
 
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
 	if (partially_mapped) {
 		if (!folio_test_partially_mapped(folio)) {
 			folio_set_partially_mapped(folio);
@@ -4077,15 +4130,16 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
 		VM_WARN_ON_FOLIO(folio_test_partially_mapped(folio), folio);
 	}
 	if (list_empty(&folio->_deferred_list)) {
+		struct mem_cgroup *memcg;
+
+		memcg = folio_split_queue_memcg(folio, ds_queue);
 		list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
 		ds_queue->split_queue_len++;
-#ifdef CONFIG_MEMCG
 		if (memcg)
 			set_shrinker_bit(memcg, folio_nid(folio),
-					 deferred_split_shrinker->id);
-#endif
+					 shrinker_id(deferred_split_shrinker));
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 }
 
 static unsigned long deferred_split_count(struct shrinker *shrink,
-- 
2.20.1