Date: Mon, 8 Sep 2025 15:19:17 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Alexander Krabler, "Aneesh Kumar K.V", Axel Rasmussen, Chris Li,
    Christoph Hellwig, David Hildenbrand, Frederick Mayle, Jason Gunthorpe,
    Johannes Weiner, John Hubbard, Keir Fraser, Konstantin Khlebnikov,
    Li Zhe, Matthew Wilcox, Peter Xu, Rik van Riel, Shivank Garg,
    Vlastimil Babka, Wei Xu, Will Deacon, yangge, Yuanchu Xie, Yu Zhao,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 3/6] mm: Revert "mm/gup: clear the LRU flag of a page before adding to LRU batch"
In-Reply-To: <41395944-b0e3-c3ac-d648-8ddd70451d28@google.com>
Message-ID: <05905d7b-ed14-68b1-79d8-bdec30367eba@google.com>
References: <41395944-b0e3-c3ac-d648-8ddd70451d28@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This reverts commit 33dfe9204f29b415bbc0abb1a50642d1ba94f5e9: now that
collect_longterm_unpinnable_folios() is checking ref_count instead of
lru, and mlock/munlock do not participate in the revised LRU flag
clearing, those changes are misleading, and enlarge the window during
which mlock/munlock may miss an mlock_count update.

It is possible (I'd hesitate to claim probable) that the greater
likelihood of missed mlock_count updates would explain the "Realtime
threads delayed due to kcompactd0" observed on 6.12 in the Link below.
If that is the case, this reversion will help; but a complete solution
needs also a further patch, beyond the scope of this series.

Included some 80-column cleanup around folio_batch_add_and_move().

The role of folio_test_clear_lru() (before taking per-memcg lru_lock)
is questionable since 6.13 removed mem_cgroup_move_account() etc; but
perhaps there are still some races which need it - not examined here.
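To make the restored ordering concrete, here is a minimal userspace
sketch (editorial illustration, not kernel code; the fake_* names are
hypothetical) of the "claim by test-and-clear" pattern that
folio_test_clear_lru() provides: whichever caller clears the flag
first owns the folio for the LRU move, and a racing caller backs off
instead of moving it twice.

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* stand-in for struct folio with its PG_lru flag */
	struct fake_folio {
		atomic_bool lru;
	};

	/*
	 * analogous to folio_test_clear_lru(): returns the old flag
	 * value, so only the first of two racing callers sees "true"
	 */
	static bool fake_test_clear_lru(struct fake_folio *f)
	{
		return atomic_exchange(&f->lru, false);
	}

	int main(void)
	{
		struct fake_folio f = { .lru = true };

		if (fake_test_clear_lru(&f))
			printf("first caller claims the folio, may move it\n");

		if (!fake_test_clear_lru(&f))
			printf("racing caller backs off: flag already clear\n");

		return 0;
	}

With this revert, the claim happens in folio_batch_move_lru() at drain
time rather than when the folio is added to the batch, so a folio
parked in a per-CPU batch keeps its LRU flag set for the interim.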
Link: https://lore.kernel.org/linux-mm/DU0PR01MB10385345F7153F334100981888259A@DU0PR01MB10385.eurprd01.prod.exchangelabs.com/
Signed-off-by: Hugh Dickins
Acked-by: David Hildenbrand
Cc:
---
 mm/swap.c | 50 ++++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 3632dd061beb..6ae2d5680574 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -164,6 +164,10 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	for (i = 0; i < folio_batch_count(fbatch); i++) {
 		struct folio *folio = fbatch->folios[i];
 
+		/* block memcg migration while the folio moves between lru */
+		if (move_fn != lru_add && !folio_test_clear_lru(folio))
+			continue;
+
 		folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
 		move_fn(lruvec, folio);
 
@@ -176,14 +180,10 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 }
 
 static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
-		struct folio *folio, move_fn_t move_fn,
-		bool on_lru, bool disable_irq)
+		struct folio *folio, move_fn_t move_fn, bool disable_irq)
 {
 	unsigned long flags;
 
-	if (on_lru && !folio_test_clear_lru(folio))
-		return;
-
 	folio_get(folio);
 
 	if (disable_irq)
@@ -191,8 +191,8 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
 	else
 		local_lock(&cpu_fbatches.lock);
 
-	if (!folio_batch_add(this_cpu_ptr(fbatch), folio) || folio_test_large(folio) ||
-	    lru_cache_disabled())
+	if (!folio_batch_add(this_cpu_ptr(fbatch), folio) ||
+	    folio_test_large(folio) || lru_cache_disabled())
 		folio_batch_move_lru(this_cpu_ptr(fbatch), move_fn);
 
 	if (disable_irq)
@@ -201,13 +201,13 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
 		local_unlock(&cpu_fbatches.lock);
 }
 
-#define folio_batch_add_and_move(folio, op, on_lru)				\
-	__folio_batch_add_and_move(						\
-		&cpu_fbatches.op,						\
-		folio,								\
-		op,								\
-		on_lru,								\
-		offsetof(struct cpu_fbatches, op) >= offsetof(struct cpu_fbatches, lock_irq)	\
+#define folio_batch_add_and_move(folio, op)					\
+	__folio_batch_add_and_move(						\
+		&cpu_fbatches.op,						\
+		folio,								\
+		op,								\
+		offsetof(struct cpu_fbatches, op) >=				\
+			offsetof(struct cpu_fbatches, lock_irq)			\
 	)
 
 static void lru_move_tail(struct lruvec *lruvec, struct folio *folio)
@@ -231,10 +231,10 @@ static void lru_move_tail(struct lruvec *lruvec, struct folio *folio)
 void folio_rotate_reclaimable(struct folio *folio)
 {
 	if (folio_test_locked(folio) || folio_test_dirty(folio) ||
-	    folio_test_unevictable(folio))
+	    folio_test_unevictable(folio) || !folio_test_lru(folio))
 		return;
 
-	folio_batch_add_and_move(folio, lru_move_tail, true);
+	folio_batch_add_and_move(folio, lru_move_tail);
 }
 
 void lru_note_cost_unlock_irq(struct lruvec *lruvec, bool file,
@@ -328,10 +328,11 @@ static void folio_activate_drain(int cpu)
 
 void folio_activate(struct folio *folio)
 {
-	if (folio_test_active(folio) || folio_test_unevictable(folio))
+	if (folio_test_active(folio) || folio_test_unevictable(folio) ||
+	    !folio_test_lru(folio))
 		return;
 
-	folio_batch_add_and_move(folio, lru_activate, true);
+	folio_batch_add_and_move(folio, lru_activate);
 }
 
 #else
@@ -507,7 +508,7 @@ void folio_add_lru(struct folio *folio)
 	    lru_gen_in_fault() && !(current->flags & PF_MEMALLOC))
 		folio_set_active(folio);
 
-	folio_batch_add_and_move(folio, lru_add, false);
+	folio_batch_add_and_move(folio, lru_add);
 }
 EXPORT_SYMBOL(folio_add_lru);
 
@@ -685,13 +686,13 @@ void lru_add_drain_cpu(int cpu)
 void deactivate_file_folio(struct folio *folio)
 {
 	/* Deactivating an unevictable folio will not accelerate reclaim */
-	if (folio_test_unevictable(folio))
+	if (folio_test_unevictable(folio) || !folio_test_lru(folio))
 		return;
 
 	if (lru_gen_enabled() && lru_gen_clear_refs(folio))
 		return;
 
-	folio_batch_add_and_move(folio, lru_deactivate_file, true);
+	folio_batch_add_and_move(folio, lru_deactivate_file);
 }
 
 /*
@@ -704,13 +705,13 @@ void deactivate_file_folio(struct folio *folio)
  */
 void folio_deactivate(struct folio *folio)
 {
-	if (folio_test_unevictable(folio))
+	if (folio_test_unevictable(folio) || !folio_test_lru(folio))
 		return;
 
 	if (lru_gen_enabled() ? lru_gen_clear_refs(folio) : !folio_test_active(folio))
 		return;
 
-	folio_batch_add_and_move(folio, lru_deactivate, true);
+	folio_batch_add_and_move(folio, lru_deactivate);
 }
 
 /**
@@ -723,10 +724,11 @@ void folio_deactivate(struct folio *folio)
 void folio_mark_lazyfree(struct folio *folio)
 {
 	if (!folio_test_anon(folio) || !folio_test_swapbacked(folio) ||
+	    !folio_test_lru(folio) ||
 	    folio_test_swapcache(folio) || folio_test_unevictable(folio))
 		return;
 
-	folio_batch_add_and_move(folio, lru_lazyfree, true);
+	folio_batch_add_and_move(folio, lru_lazyfree);
 }
 
 void lru_add_drain(void)
-- 
2.51.0